metric

How does ImageMagick's '-subimage-search' operation work?

Anonymous (unverified), submitted 2019-12-03 01:48:02

Question: I have used ImageMagick in my application, comparing images with the compare command and its -subimage-search option. But there is very little documentation about how -subimage-search works. Can anyone provide more information on how it works? For example: does it compare using a color model, or does it use image segmentation? What I know right now is that it searches for the second image within the first. But how is this done? Please explain.

Answer 1: Warning: conducting a subimage search is slow -- extremely
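ImageMagick does not document the internals in detail, but conceptually -subimage-search slides the smaller image across every offset of the larger one and scores each position with the selected comparison metric (RMSE by default), which is why it is so slow. A minimal numpy sketch of that brute-force idea -- an illustration of the concept, not ImageMagick's actual optimized implementation:

```python
import numpy as np

def subimage_search(haystack, needle):
    """Brute-force sub-image search: slide `needle` over `haystack`,
    scoring each offset by root-mean-squared pixel difference.
    Returns (best_row, best_col, best_rmse)."""
    H, W = haystack.shape
    h, w = needle.shape
    best = (0, 0, float("inf"))
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            window = haystack[r:r + h, c:c + w]
            rmse = np.sqrt(np.mean((window - needle) ** 2))
            if rmse < best[2]:
                best = (r, c, rmse)
    return best

# Tiny demo: embed a 2x2 patch at row 1, column 2 of a 5x5 image.
img = np.zeros((5, 5))
patch = np.array([[9.0, 8.0], [7.0, 6.0]])
img[1:3, 2:4] = patch
r, c, err = subimage_search(img, patch)
print(r, c, err)  # -> 1 2 0.0
```

The quadratic number of candidate offsets, each scored over every pixel of the needle, is exactly what makes the real operation so expensive on large images.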

Create RMSLE metric in caret in r

Anonymous (unverified), submitted 2019-12-03 01:33:01

Question: Could someone please help me with the following: I need to switch my xgboost training model, run through the caret package, from the default metric to RMSLE. By default caret and xgboost train and measure with RMSE. Here are the lines of code:

# create custom summary function in caret format
custom_summary = function(data, lev = NULL, model = NULL){
    out = rmsle(data[, "obs"], data[, "pred"])
    names(out) = c("rmsle")
    out
}
# create control object
control = trainControl(method = "cv", number = 2, summaryFunction = custom_summary)
# create grid of tuning parameters
grid
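For reference, the rmsle() the question calls (presumably from an R package such as Metrics) computes sqrt(mean((log(1 + pred) - log(1 + obs))^2)). A standalone Python sketch of the same formula, useful for checking what the caret summary function should return:

```python
import numpy as np

def rmsle(actual, predicted):
    """Root Mean Squared Logarithmic Error:
    sqrt(mean((log(1 + pred) - log(1 + actual))^2)).
    Works on the log scale, so it is less sensitive to large
    absolute values than plain RMSE."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.sqrt(np.mean((np.log1p(predicted) - np.log1p(actual)) ** 2))

print(rmsle([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # -> 0.0
```

np.log1p is used rather than np.log(1 + x) for numerical accuracy near zero.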

f1_score metric in lightgbm

Anonymous (unverified), submitted 2019-12-03 01:23:02

Question: I want to train a lgb model with a custom metric: f1_score with weighted average. I went through the advanced examples of lightgbm and found the implementation of a custom binary error function. I implemented a similar function to return f1_score, as shown below.

def f1_metric(preds, train_data):
    labels = train_data.get_label()
    return 'f1', f1_score(labels, preds, average='weighted'), True

I tried to train the model by passing the feval parameter as f1_metric, as shown below.

evals_results = {}
bst = lgb.train(params, dtrain, valid_sets=
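A common pitfall with this approach: for a binary objective, LightGBM passes feval raw probabilities, not class labels, so they must be binarized before computing F1. A sketch of the fix -- the 0.5 threshold and the from-scratch weighted F1 below are illustrative assumptions, while feval's (name, value, is_higher_better) return shape is LightGBM's documented contract:

```python
import numpy as np

def f1_weighted(labels, preds_binary):
    """Weighted-average F1: F1 per class, averaged with class-support
    weights (mirrors sklearn's f1_score(..., average='weighted'))."""
    scores, weights = [], []
    for cls in np.unique(labels):
        tp = np.sum((preds_binary == cls) & (labels == cls))
        fp = np.sum((preds_binary == cls) & (labels != cls))
        fn = np.sum((preds_binary != cls) & (labels == cls))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
        weights.append(np.sum(labels == cls))
    return float(np.average(scores, weights=weights))

def f1_metric(preds, train_data):
    """Custom eval for lgb.train(feval=...): binarize the raw
    probabilities before scoring (assumed threshold: 0.5)."""
    labels = train_data.get_label()
    preds_binary = (np.asarray(preds) > 0.5).astype(int)
    return 'f1', f1_weighted(labels, preds_binary), True  # True: higher is better
```

With the probabilities binarized, the metric varies sensibly across boosting rounds instead of being computed on continuous scores.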

How to calculate top5 accuracy in keras?

Anonymous (unverified), submitted 2019-12-03 01:12:01

Question: I want to calculate top-5 accuracy on the ImageNet-2012 dataset, but I don't know how to do it in Keras. The fit function can only calculate top-1 accuracy.

Answer 1: If you are just after the top K you could always call TensorFlow directly (you don't say which backend you are using).

from keras import backend as K
import tensorflow as tf
top_values, top_indices = K.get_session().run(tf.nn.top_k(_pred_test, k=5))

If you want an accuracy metric you can add 'top_k_categorical_accuracy' to your model:

model.compile('adam', 'categorical_crossentropy', ['accuracy',
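Independent of the backend, top-K accuracy is just "is the true class among the K largest scores". A small numpy sketch of that definition, handy for sanity-checking what Keras's top_k_categorical_accuracy reports:

```python
import numpy as np

def top_k_accuracy(y_true, y_prob, k=5):
    """Fraction of samples whose true class index is among the k
    highest-probability predictions."""
    # argsort ascending, then take the last k columns = top-k class indices
    top_k = np.argsort(y_prob, axis=1)[:, -k:]
    return float(np.mean([y in row for y, row in zip(y_true, top_k)]))

probs = np.array([[0.1, 0.2, 0.3, 0.4],   # true class 0: not in the top 2
                  [0.1, 0.2, 0.3, 0.4]])  # true class 3: in the top 2
print(top_k_accuracy([0, 3], probs, k=2))  # -> 0.5
```

With k=1 this reduces to ordinary (top-1) accuracy.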

Cluster hangs/shows error while executing simple MPI program in C

Anonymous (unverified), submitted 2019-12-03 01:03:01

Question: I am trying to run a simple MPI program (multiple array addition). It runs perfectly on my PC but simply hangs, or shows the following error, on the cluster. I am using Open MPI and the following command to execute.

Network config of the cluster (master & node1):

MASTER
eth0  Link encap:Ethernet  HWaddr 00:22:19:A4:52:74
      inet addr:10.1.1.1  Bcast:10.1.255.255  Mask:255.255.0.0
      inet6 addr: fe80::222:19ff:fea4:5274/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:16914 errors:0 dropped:0 overruns:0 frame:0
      TX packets:7183 errors:0

'KD tree' with custom distance metric

Anonymous (unverified), submitted 2019-12-03 00:44:02

Question: I want to use KDTree (this is the best option; other KNN algorithms aren't optimal for my project) with a custom distance metric. I checked some answers here for similar questions, and this should work... but doesn't. distance_matrix is symmetric, as it should be by definition:

array([[ 1.,  0.,  5.,  5.,  0.,  3.,  2.],
       [ 0.,  1.,  0.,  0.,  0.,  0.,  0.],
       [ 5.,  0.,  1.,  5.,  0.,  2.,  3.],
       [ 5.,  0.,  5.,  1.,  0.,  4.,  4.],
       [ 0.,  0.,  0.,  0.,  1.,  0.,  0.],
       [ 3.,  0.,  2.,  4.,  0.,  1.,  0.],
       [ 2.,  0.,  3.,  4.,  0.,  0.,  1.]])

I know my metric is not 'formally a metric', but
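One relevant detail: scikit-learn's KDTree supports only a fixed list of built-in metrics, while BallTree additionally accepts a user-supplied callable, which is the usual reason "this should work but doesn't" with KDTree. A dependency-free sketch of the lookup-table metric itself, with points identified by index and a brute-force query so the behavior is easy to verify:

```python
import numpy as np

# Precomputed pairwise "distance" table; point i is represented
# by the one-element vector [i] so a callable metric can look it up.
distance_matrix = np.array([
    [1., 0., 5., 5., 0., 3., 2.],
    [0., 1., 0., 0., 0., 0., 0.],
    [5., 0., 1., 5., 0., 2., 3.],
    [5., 0., 5., 1., 0., 4., 4.],
    [0., 0., 0., 0., 1., 0., 0.],
    [3., 0., 2., 4., 0., 1., 0.],
    [2., 0., 3., 4., 0., 0., 1.],
])

def lookup_metric(a, b):
    """Callable metric over index-vectors: d([i], [j]) = matrix[i, j].
    scikit-learn's BallTree accepts such a callable; KDTree does not."""
    return distance_matrix[int(a[0]), int(b[0])]

def nearest(query_idx, k=2):
    """Brute-force k nearest neighbours of point query_idx
    under the table metric (for checking tree-based results)."""
    return np.argsort(distance_matrix[query_idx])[:k]

print(nearest(0, k=2))  # the two indices closest to point 0
```

Note that for a pure lookup-table metric, a spatial tree cannot prune anything useful anyway, so brute force (or sklearn's metric='precomputed' where supported) is often the more honest choice.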

26 - Advanced Routing: BGP Next-Hop Lab

Anonymous (unverified), submitted 2019-12-03 00:40:02

I. Lab topology:

II. Lab requirements:
1. A route that a router originates locally into BGP has a next hop of 0.0.0.0.
2. When a router advertises a route learned via an IGP into the BGP process, that router still shows the BGP route's next hop as the IGP route's next-hop address; this is called inheritance.
3. When a router passes its locally originated routes to any BGP peer, the next hop is rewritten to the router's update source toward those neighbors.
4. When a router learns a route via EBGP and passes it on to IBGP peers, the next hop is unchanged by default (except next).
5. When a router advertises any BGP route to an EBGP peer, the route's next hop becomes the router's BGP update-source address toward that neighbor.

III. Configuration:

IV. Verification:
1. A route that a router originates locally into BGP has a next hop of 0.0.0.0. Example: R1 has a local route 1.1.1.1, advertised locally under the BGP process; the next hop is 0.0.0.0:

R1#show ip bgp
   Network          Next Hop            Metric LocPrf Weight Path
*> 1.1.1.0/24       0.0.0.0                  0         32768 I

2. When a router advertises a route learned via an IGP into the BGP process, the route's next hop is still the IGP next-hop address (inheritance). Example: R2 advertises the 3.3.3.3 route learned via EIGRP; show ip bgp shows the next hop of 3.3.3.3 is still 23.1

OpenTSDB Usage Notes (3)

Anonymous (unverified), submitted 2019-12-03 00:27:02

Overview: query data from an OpenTSDB database.

URI format:
POST {OpenTSDB URL}/api/query

Request example:

{
    "start": 1504527820,
    "end": 1504557820,
    "queries": [
        {
            "aggregator": "sum",
            "metric": "cpu.system",
            "rate": "true",
            "filters": [
                {
                    "type": "regexp",
                    "tagk": "host",
                    "filter": "web[0-9]+.lax.mysite.com",
                    "groupBy": true
                },
                {
                    "type": "literal_or",
                    "tagk": "dc",
                    "filter": "lax|dal",
                    "groupBy": false
                }
            ]
        }
    ]
}

Parameter description (Table 1, request parameters):
- start (Integer, required): start time in seconds; values at this time are included in the result. Note: it is recommended to use times between 4334400 and 4291718400 seconds, i.e. from 1970/02/20 12:00:00 to 2106/01/01 00:00:00; 0 is also allowed. Other values may produce incorrect query results.
- end (Integer, optional): end time in seconds; defaults to OpenTSDB's current system time. Values at this time are included in the result. Note:
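A minimal Python sketch of building and sending this query body; the server URL is a placeholder for your own OpenTSDB endpoint, and only the JSON construction is exercised here:

```python
import json

# Build the /api/query request body shown in the example above.
query = {
    "start": 1504527820,
    "end": 1504557820,
    "queries": [{
        "aggregator": "sum",
        "metric": "cpu.system",
        "rate": "true",
        "filters": [
            {"type": "regexp", "tagk": "host",
             "filter": "web[0-9]+.lax.mysite.com", "groupBy": True},
            {"type": "literal_or", "tagk": "dc",
             "filter": "lax|dal", "groupBy": False},
        ],
    }],
}
body = json.dumps(query)

# To send it (hypothetical host/port, shown for shape only):
#   import urllib.request
#   req = urllib.request.Request("http://opentsdb:4242/api/query",
#                                data=body.encode(), method="POST",
#                                headers={"Content-Type": "application/json"})
#   resp = urllib.request.urlopen(req)

print(len(json.loads(body)["queries"][0]["filters"]))  # -> 2
```

Python booleans serialize to JSON true/false, so groupBy comes out exactly as the API expects.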

OpenTSDB Usage Notes (2)

Anonymous (unverified), submitted 2019-12-03 00:27:02

API: multiple data points can be written to OpenTSDB in a single request. Each point is processed independently, and a failure on one point does not affect the writing of the others. It is recommended that each request carry no more than 50 data points.

URI formats:
- Write data: POST {OpenTSDB URL}/api/put
- Write data and return a summary: POST {OpenTSDB URL}/api/put?summary
- Write data and return details: POST {OpenTSDB URL}/api/put?details
  Note: if both the summary and details flags appear in the query string, the API responds with the detailed information.
- Write data and wait for it to be flushed to disk: POST {OpenTSDB URL}/api/put?sync
- Write data, wait for the flush, and set a timeout in milliseconds; on timeout, the details flag will return the counts of successful and failed data points: POST {OpenTSDB URL}/api/put?sync&sync_timeout=60000

Request example: single data point

{
    "metric": "sys.cpu.nice",
    "timestamp": 1346846400,
    "value": 18,
    "tags": {
        "host": "web01",
        "dc": "lga"
    }
}

Request example: multiple data points, passed as a JSON array

[
    {
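A minimal Python sketch of the single- and multi-point bodies, plus a helper that honors the at-most-50-points-per-request recommendation; the field values are the examples above, and the second point in the batch is a made-up illustration:

```python
import json

# Single data point, matching the /api/put example above.
point = {
    "metric": "sys.cpu.nice",
    "timestamp": 1346846400,
    "value": 18,
    "tags": {"host": "web01", "dc": "lga"},
}

# Multi-point write: just a JSON array of point objects.
batch = [point,
         {"metric": "sys.cpu.nice", "timestamp": 1346846401,
          "value": 9, "tags": {"host": "web02", "dc": "lga"}}]

def chunk(points, size=50):
    """Split a list of points into request-sized batches,
    following the 50-points-per-request recommendation."""
    return [points[i:i + size] for i in range(0, len(points), size)]

body = json.dumps(batch)
print(len(chunk(batch * 40)))  # 80 points -> 2 requests of <= 50 each
```

Each chunk would then be POSTed to /api/put (optionally with ?details to get per-point success/failure counts).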

Computer Vision Notes 7: Person Re-Identification With Metric Learning Using Privileged Information

Anonymous (unverified), submitted 2019-12-03 00:22:01

A TIP 2018 paper (© IEEE Transactions on Image Processing, Vol. 27, No. 2, February 2018). What attracted me is the metric learning it applies. I have long thought that metric learning is really of a piece with traditional dense prediction and with whatever lies behind the various CNNs. Given a set of things, say a pile of images, how does the machine see the differences between them? A distance measure creates a non-Euclidean space, and in the right ideal space the machine's recognition rate can go through the roof. Very interesting.

I spend more and more time on abstracts now, because I have found that halfway through a paper it pays to go back to the abstract and check it against my own line of thought. It also has predictive power. In short, a good paper's abstract is the crystallization of the authors' thinking and is worth reading several times.

1 Abstract

The field this paper studies is re-identification, for example the way an iPhone unlocks by re-identifying your face. Wear heavy makeup and it may not unlock; a different angle or a different expression also affects unlocking (re-identification). Characteristics: logistic discriminant metric learning; exploit both original and auxiliary data (privileged information); auxiliary information only available in
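The core object in metric learning is a learned distance, classically a Mahalanobis-type metric d_M(x, y) = sqrt((x - y)^T M (x - y)), where a positive semi-definite matrix M reshapes the feature space; this is the sense in which a measure "creates a non-Euclidean space". A toy sketch of the idea (an illustration, not the paper's logistic discriminant method):

```python
import numpy as np

def mahalanobis(x, y, M):
    """Learned metric d_M(x, y) = sqrt((x - y)^T M (x - y)).
    M = I recovers the Euclidean distance; a learned positive
    semi-definite M stretches or shrinks directions of the space,
    ideally pulling same-identity pairs together."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ M @ d))

x, y = [1.0, 0.0], [0.0, 1.0]
print(mahalanobis(x, y, np.eye(2)))            # plain Euclidean distance
print(mahalanobis(x, y, np.diag([4.0, 1.0])))  # first axis weighted more
```

Metric-learning methods differ mainly in the loss used to fit M from pairs or triplets of examples; the privileged-information idea in this paper is about which features are available when fitting it.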