metrics

McCabe Cyclomatic Complexity for switch in Java

五迷三道 submitted on 2019-12-05 03:43:31
I am using a switch statement with 13 cases; each case only has a one-line return value. McCabe paints this in red. Is there an easier way to write a big switch statement? It doesn't seem complex to read, but I don't like the default setting turning red. If other people use the same tool on my code and see red stuff they might think I'm stupid :-) Edit: I'm mapping different SQL types to my own, more abstract types, thereby reducing the total number of types.

```java
case Types.TIME: return AbstractDataType.TIME;
case Types.TIMESTAMP: return AbstractDataType.TIME;
case Types.DATE: return
```
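
A common way to bring the complexity metric down is to replace the switch with a data-driven lookup table; in Java that would typically be a static Map or EnumMap from java.sql.Types constants to AbstractDataType values. A minimal sketch of the idea in Python, with purely illustrative type names (this shows the refactoring pattern, not the asker's actual code):

```python
# One dict entry per former switch case; the mapping function
# itself now has a cyclomatic complexity of 1.
SQL_TO_ABSTRACT = {
    "TIME": "TIME",
    "TIMESTAMP": "TIME",
    "DATE": "TIME",
    "INTEGER": "NUMBER",
    "DECIMAL": "NUMBER",
    "VARCHAR": "STRING",
}

def to_abstract_type(sql_type: str) -> str:
    # .get() plays the role of the switch's default branch.
    return SQL_TO_ABSTRACT.get(sql_type, "UNKNOWN")

print(to_abstract_type("TIMESTAMP"))  # TIME
```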

Keras custom RMSLE metric

喜欢而已 submitted on 2019-12-05 02:24:19
How do I implement this metric in Keras? My code below gives the wrong result! Note that I'm undoing a previous log(x + 1) transformation via exp(x) - 1, and negative predictions are clipped to 0:

```python
def rmsle_cust(y_true, y_pred):
    first_log = K.clip(K.exp(y_pred) - 1.0, 0, None)
    second_log = K.clip(K.exp(y_true) - 1.0, 0, None)
    return K.sqrt(K.mean(K.square(K.log(first_log + 1.) - K.log(second_log + 1.)), axis=-1))
```

For comparison, here's the standard numpy implementation:

```python
def rmsle_cust_py(y, y_pred, **kwargs):
    # undo 1 + log
    y = np.exp(y) - 1
    y_pred = np.exp(y_pred) - 1
    y_pred[y_pred < 0] = 0.0
```
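
As transcribed above, the Keras version was missing a closing parenthesis on the return line; with that fixed, the metric is wired in by passing the function object rather than a string name. A minimal sketch with a placeholder model (the model shape is an assumption for illustration):

```python
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense

def rmsle_cust(y_true, y_pred):
    # Undo log(x + 1) via exp(x) - 1, clip negatives to 0, then RMSLE.
    first_log = K.clip(K.exp(y_pred) - 1.0, 0, None)
    second_log = K.clip(K.exp(y_true) - 1.0, 0, None)
    return K.sqrt(K.mean(K.square(K.log(first_log + 1.) - K.log(second_log + 1.)), axis=-1))

# Placeholder single-output regression model; Keras evaluates the
# custom metric batch-wise during fit/evaluate.
model = Sequential([Dense(1, input_shape=(4,))])
model.compile(optimizer='rmsprop', loss='mse', metrics=[rmsle_cust])
```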

Apache Beam Counter/Metrics not available in Flink WebUI

心已入冬 submitted on 2019-12-05 01:29:08
I'm using Flink 1.4.1 and Beam 2.3.0, and would like to know whether it is possible to have metrics available in the Flink WebUI (or anywhere at all), as in the Dataflow WebUI. I've used a counter like:

```java
import org.apache.beam.sdk.metrics.Counter;
import org.apache.beam.sdk.metrics.Metrics;
...
Counter elementsRead = Metrics.counter(getClass(), "elements_read");
...
elementsRead.inc();
```

but I can't find the "elements_read" counts anywhere (Task Metrics or Accumulators) in the Flink WebUI. I thought this would be straightforward after BEAM-773. Once you have selected a job in your dashboard, you will see the

Micronaut: How to get metrics in the Prometheus format?

我与影子孤独终老i submitted on 2019-12-05 01:06:12
Question: How should I configure Micronaut to get /metrics in the Prometheus format? Used: micronaut 1.0.0.M3. Current configuration:

```yaml
micronaut:
  ...
  metrics:
    enabled: true
    export:
      prometheus:
        enabled: true
```

and the result is just the list of metric names: {"names":["jvm.memory.max","executor.pool.size"...]}. I need the metrics in the Prometheus format. Answer 1: At the moment, we solved the problem as follows: we added a new endpoint (alternatively, create a controller with a mapping on /metrics). The new endpoint added a return of scrape()

When using metrics in model.compile in Keras, it reports ValueError: ('Unknown metric function', ':f1score')

回眸只為那壹抹淺笑 submitted on 2019-12-05 00:17:32
Question: I'm trying to run an LSTM, and when I use the code below:

```python
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['accuracy', 'f1score', 'precision', 'recall'])
```

it returns: ValueError: ('Unknown metric function', ':f1score'). I've done my searches and found this URL: https://github.com/fchollet/keras/issues/5400 The "metrics" in the "model.compile" part in that URL is exactly the same as mine, and no errors are returned. Answer 1: I suspect you are using Keras 2.X. As explained in
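
For context: Keras 2.x dropped f1score, precision, and recall from the built-in metrics, so they must be supplied as custom functions. A minimal sketch of a batch-wise F1 metric using the Keras backend (batch-wise values only approximate the global score, so treat this as illustrative):

```python
from keras import backend as K

def f1score(y_true, y_pred):
    # Round probabilities to hard 0/1 predictions.
    y_pred = K.round(K.clip(y_pred, 0, 1))
    tp = K.sum(y_true * y_pred)    # true positives in the batch
    predicted_pos = K.sum(y_pred)  # everything predicted positive
    actual_pos = K.sum(y_true)     # everything actually positive
    precision = tp / (predicted_pos + K.epsilon())
    recall = tp / (actual_pos + K.epsilon())
    return 2 * precision * recall / (precision + recall + K.epsilon())

# Pass the function object instead of the string name:
# model.compile(optimizer='rmsprop', loss='binary_crossentropy',
#               metrics=['accuracy', f1score])
```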

Spark metrics on wordcount example

这一生的挚爱 submitted on 2019-12-04 23:32:37
Question: I read the Metrics section on the Spark website. I want to try it on the wordcount example, but I can't make it work. spark/conf/metrics.properties:

```properties
# Enable CsvSink for all instances
*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
# Polling period for CsvSink
*.sink.csv.period=1
*.sink.csv.unit=seconds
# Polling directory for CsvSink
*.sink.csv.directory=/home/spark/Documents/test/
# Worker instance overlap polling period
worker.sink.csv.period=1
worker.sink.csv.unit=seconds
# Enable jvm
```

Sklearn Cheat Sheet

ぃ、小莉子 submitted on 2019-12-04 23:16:42
## All rights reserved; credit the source when reposting.

Sections: SciKit-Learn: Loading Datasets · SciKit-Learn: Basic Dataset Information · SciKit-Learn: Visualizing Data with matplotlib · SciKit-Learn: Visualizing Data with Principal Component Analysis (PCA) · SciKit-Learn: Preprocessing Data · SciKit-Learn: K-Means Clustering · SciKit-Learn: Support Vector Machines · SciKit-Learn: Cheat Sheet

Scikit-learn is an open-source Python library that implements a range of machine learning, preprocessing, cross-validation, and visualization algorithms behind a unified interface.

A basic example:

```python
from sklearn import neighbors, datasets, preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

iris = datasets.load_iris()
X, y = iris.data[:, :2], iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=33)
scaler = preprocessing.StandardScaler().fit(X_train)
```
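
The excerpt is cut off at the scaler, but the imports (neighbors, accuracy_score) hint at the usual continuation. A minimal self-contained sketch of that continuation, as an assumption rather than the original article's code:

```python
from sklearn import neighbors, datasets, preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

iris = datasets.load_iris()
X, y = iris.data[:, :2], iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=33)

# Scale both splits with statistics learned from the training set only.
scaler = preprocessing.StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)

# Fit a k-nearest-neighbors classifier and score it on the held-out split.
knn = neighbors.KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train_s, y_train)
print(accuracy_score(y_test, knn.predict(X_test_s)))
```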

Sklearn K-Means Clustering

时光怂恿深爱的人放手 submitted on 2019-12-04 23:16:34
## All rights reserved; credit the source when reposting.

Sections: SciKit-Learn: Loading Datasets · SciKit-Learn: Basic Dataset Information · SciKit-Learn: Visualizing Data with matplotlib · SciKit-Learn: Visualizing Data with Principal Component Analysis (PCA) · SciKit-Learn: Preprocessing Data · SciKit-Learn: K-Means Clustering · SciKit-Learn: Support Vector Machines · SciKit-Learn: Cheat Sheet

So far we have examined the dataset in depth and split it into training and test subsets. Next we will train a model with a clustering method, use that model to predict labels for the test subset, and finally evaluate the model's performance.

Clustering is a method of grouping similar, unlabeled data points into the same category. The key difference from classification is that in classification the target categories are known in advance, while in clustering they are not. K-means clustering is a common clustering method: it assigns points to the best cluster based on point-to-point distances, i.e. points lying close together are grouped into one cluster, and every cluster has a center point.

We first create the clustering model and cluster the training subset to obtain the cluster centers. We then use the model to predict labels for the test subset; prediction classifies each test point by its distance to the cluster centers.

Creating the model

Example: create a clustering model.

```python
import numpy as np
from sklearn import datasets

# Load the `digits` dataset
digits = datasets.load_digits()
```
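
A minimal sketch of the full workflow described above, using sklearn.cluster.KMeans with illustrative parameter choices (n_clusters=10 for the ten digits; the split and seed are assumptions, not the article's exact code):

```python
from sklearn import datasets
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

# Load the `digits` dataset and split it, as in the earlier chapters.
digits = datasets.load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=42)

# Fitting on the training subset computes the cluster center points.
kmeans = KMeans(n_clusters=10, random_state=42)
kmeans.fit(X_train)

# predict() assigns each test point to its nearest cluster center.
clusters = kmeans.predict(X_test)
print(kmeans.cluster_centers_.shape)  # (10, 64)
```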

Metrics in detection and recognition problems

戏子无情 submitted on 2019-12-04 17:53:24
I have never been able to keep the exact computation of the various metrics straight, so this article sets out to settle the matter once and for all, covering every evaluation metric I have encountered so far.

TP, TN, FP, FN

First, the two word pairs: true/false and positive/negative. Taking binary classification as an example: positive and negative refer to whether the predicted class is the positive or the negative class, while true and false refer to whether the prediction is right or wrong. Therefore:

| Actual \ Predicted | Positive | Negative |
| --- | --- | --- |
| Positive | TP | FN |
| Negative | FP | TN |

From these counts the individual metrics can be computed.

Accuracy, precision, recall, etc.

Accuracy
\[ ACC = \frac{TP + TN}{TP + TN + FP + FN} \]
The numerator is the number of correctly predicted samples (positives predicted as positive, negatives predicted as negative); the denominator is the total number of samples. Accuracy is the proportion of all samples that are predicted correctly. Its weakness is that it cannot reflect the true quality of a model when the positive and negative classes are severely imbalanced.

Precision
\[ P = \frac{TP}{TP + FP} \]
The numerator is the number of positive samples predicted as positive; the denominator is the number of all samples predicted as positive. Precision is the proportion of samples predicted as positive that really are positive.

Recall
\[ \operatorname{Recall} = \frac{TP}{TP + FN} \]
The numerator is the number of positives predicted as positive
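
A minimal sketch that turns these formulas into code, computed from true and predicted labels (the toy data is illustrative only):

```python
def binary_metrics(y_true, y_pred):
    # Count the four confusion-matrix cells for the positive class (1).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall

print(binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
# (0.6, 0.666..., 0.666...)
```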

SLOC for Java projects

帅比萌擦擦* submitted on 2019-12-04 16:51:04
Question: I need a free tool to count SLOC on a Java project. I only really need the following metrics:

- SLOC
- number of comment lines
- optionally, javadoc metrics
- optionally, statistics sorted by file type (.java, .js, .css, .html, .xml, etc.)

Bonus: 100% Java (I don't like mixing in something like sloccount with cygwin); a NetBeans plugin or, preferably, a Maven plugin. Answer 1: Did you consider using Sonar (which uses its own internal tool since version 1.9, sonar-squid, instead of JavaNCSS, which has some flaws and doesn't