weighted

What's the most concise way to pick a random element by weight in C#?

Submitted by 此生再无相见时 on 2019-11-30 12:12:21
Let's assume a List<Element>, where Element is:

    public class Element { public int Weight { get; set; } }

What I want to achieve is to select an element at random, weighted by its Weight. For example:

    Element_1.Weight = 100;
    Element_2.Weight = 50;
    Element_3.Weight = 200;

So the chance Element_1 gets selected is 100/(100+50+200) = 28.57%, the chance Element_2 gets selected is 50/(100+50+200) = 14.29%, and the chance Element_3 gets selected is 200/(100+50+200) = 57.14%. I know I can write a loop, calculate the total, etc. What I want to learn is the best way to do this with LINQ in one line (or as short as possible), thanks. UPDATE
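
To illustrate the underlying idea (probability proportional to weight over the total), here is a minimal sketch in Python rather than C#; it is not an answer to the LINQ question itself, and the element names simply mirror the example above.

    import random

    # Hypothetical stand-ins for Element_1..Element_3 and their Weight values.
    elements = ["Element_1", "Element_2", "Element_3"]
    weights = [100, 50, 200]

    # random.choices (Python 3.6+) draws each item with probability
    # weight / sum(weights), i.e. 100/350, 50/350 and 200/350 here.
    picked = random.choices(elements, weights=weights, k=1)[0]
    print(picked)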

Plot weighted frequency matrix

Submitted by 痞子三分冷 on 2019-11-29 09:24:18
This question is related to two different questions I have asked previously: 1) Reproduce frequency matrix plot, and 2) Add 95% confidence limits to cumulative plot. I wish to reproduce this plot in R. I have got this far, using the code beneath the graphic:

    # Set the number of bets and number of trials and % lines
    numbet <- 36
    numtri <- 1000
    # Fill a matrix where the rows are the cumulative bets and the columns are the trials
    xcum <- matrix(NA, nrow=numbet, ncol=numtri)
    for (i in 1:numtri) {
        x <- sample(c(0,1), numbet, prob=c(5/6,1/6), replace = TRUE)
        xcum[,i] <- cumsum(x)/(1:numbet)
    }
    # Plot the
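
For readers more comfortable with numpy, here is a rough Python analogue of the simulation above (a sketch only; the variable names mirror the R code, and it does not reproduce the plot itself):

    import numpy as np

    numbet, numtri = 36, 1000          # number of bets and number of trials
    rng = np.random.default_rng()

    # Each column is one trial; rows hold the cumulative win frequency
    # after 1..numbet bets, with win probability 1/6 per bet.
    x = rng.choice([0, 1], size=(numbet, numtri), p=[5/6, 1/6])
    xcum = np.cumsum(x, axis=0) / np.arange(1, numbet + 1)[:, None]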

Weighted random sampling in Elasticsearch

Submitted by 孤人 on 2019-11-28 23:49:07
I need to obtain a random sample from an Elasticsearch index, i.e. to issue a query that retrieves documents from a given index with weighted probability Wj/ΣWi (where Wj is the weight of document j and ΣWi is the sum of the weights of all documents matched by the query). Currently, I have the following query:

    GET products/_search?pretty=true
    {
      "size": 5,
      "query": {
        "function_score": {
          "query": {
            "bool": {
              "must": {
                "term": { "category_id": "5df3ab90-6e93-0133-7197-04383561729e" }
              }
            }
          },
          "functions": [ { "random_score": {} } ]
        }
      },
      "sort": [ { "_score": { "order": "desc" } } ]
    }

It returns 5 items from the selected category,
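
The query above only randomizes scores uniformly; it does not yet weight by Wj. For reference, the target distribution Wj/ΣWi is easy to state in code. The following Python sketch assumes the documents and their weights have already been fetched client-side (the doc_ids and doc_weights values are hypothetical); it is not an Elasticsearch feature.

    import numpy as np

    # Hypothetical documents and weights fetched from the index.
    doc_ids = ["a", "b", "c", "d", "e", "f", "g"]
    doc_weights = np.array([3.0, 1.0, 2.0, 5.0, 1.0, 4.0, 2.0])

    # Normalize weights into the probabilities Wj / sum(Wi) ...
    probs = doc_weights / doc_weights.sum()
    # ... and draw a sample of 5 distinct documents with those probabilities.
    sample = np.random.choice(doc_ids, size=5, replace=False, p=probs)
    print(sample)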

Weighted Pearson's Correlation?

Submitted by 寵の児 on 2019-11-28 08:29:05
I have a 2396x34 double matrix named y, wherein each of the 2396 rows represents a separate situation consisting of 34 consecutive time segments. I also have a numeric[34] named x that represents a single situation of 34 consecutive time segments. Currently I am calculating the correlation between each row in y and x like this:

    crs[,2] <- cor(t(y), x)

What I need now is to replace the cor function in the above statement with a weighted correlation. The weight vector xy.wt is 34 elements long, so that a different weight can be assigned to each of the 34 consecutive time segments. I found the Weighted
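
For reference, the standard definition of a weighted Pearson correlation (weighted covariance divided by the product of weighted standard deviations) is short to write out. Here is a sketch in Python/numpy rather than R, independent of whichever package the excerpt goes on to mention:

    import numpy as np

    def weighted_pearson(x, y, w):
        """Weighted Pearson correlation of x and y under weights w."""
        x, y, w = (np.asarray(a, dtype=float) for a in (x, y, w))
        mx = np.average(x, weights=w)
        my = np.average(y, weights=w)
        cov_xy = np.average((x - mx) * (y - my), weights=w)
        var_x = np.average((x - mx) ** 2, weights=w)
        var_y = np.average((y - my) ** 2, weights=w)
        return cov_xy / np.sqrt(var_x * var_y)

    # Applied row-wise, mirroring cor(t(y), x) with the 34-element weight vector:
    # crs = [weighted_pearson(row, x, xy_wt) for row in y]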

Calculating weighted mean and standard deviation

Submitted by 浪子不回头ぞ on 2019-11-28 06:48:33
I have a time series x_0 ... x_t. I would like to compute the exponentially weighted variance of the data, that is:

    V = SUM{ w_i * (x_i - x_bar)^2, i = 1 to T }

where SUM{w_i} = 1 and x_bar = SUM{w_i * x_i} (ref: http://en.wikipedia.org/wiki/Weighted_mean#Weighted_sample_variance). The goal is basically to weight observations that are further back in time less. This is very simple to implement, but I would like to use as much built-in functionality as possible. Does anyone know what this corresponds to in R? Thanks. R provides a weighted mean. In fact, ?weighted.mean shows this example: ## GPA from Siegel
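
A minimal numpy sketch of the quantity described above, assuming exponentially decaying weights with a decay factor lam (the default 0.94 is just a placeholder); it implements V = SUM{w_i * (x_i - x_bar)^2} with SUM{w_i} = 1 and x_bar = SUM{w_i * x_i} directly, rather than using an R built-in:

    import numpy as np

    def ew_variance(x, lam=0.94):
        """Exponentially weighted variance: newer observations weigh more."""
        x = np.asarray(x, dtype=float)
        t = len(x)
        w = lam ** np.arange(t - 1, -1, -1)  # oldest point gets lam**(t-1)
        w /= w.sum()                          # normalize so that sum(w) == 1
        x_bar = np.sum(w * x)                 # weighted mean
        return np.sum(w * (x - x_bar) ** 2)   # V = sum w_i (x_i - x_bar)^2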

More efficient weighted Gini coefficient in Python

Submitted by 为君一笑 on 2019-11-28 03:49:32
Question: Per https://stackoverflow.com/a/48981834/1840471, this is an implementation of the weighted Gini coefficient in Python:

    import numpy as np

    def gini(x, weights=None):
        if weights is None:
            weights = np.ones_like(x)
        # Calculate mean absolute deviation in two steps, for weights.
        count = np.multiply.outer(weights, weights)
        mad = np.abs(np.subtract.outer(x, x) * count).sum() / count.sum()
        rmad = mad / np.average(x, weights=weights)
        # Gini equals half the relative mean absolute deviation.
        return 0.5 * rmad  # excerpt truncated after "0.5"; "* rmad" follows from the comment above
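
A small usage sketch of the gini() function above, with hypothetical income values and survey-style sample weights:

    import numpy as np

    incomes = np.array([10.0, 20.0, 30.0, 40.0])
    sample_weights = np.array([2.0, 1.0, 1.0, 2.0])  # hypothetical weights
    print(gini(incomes, weights=sample_weights))     # weighted Gini in [0, 1]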

Weighted percentile using numpy

Submitted by 梦想的初衷 on 2019-11-27 19:06:56
Is there a way to use the numpy.percentile function to compute a weighted percentile? Or is anyone aware of an alternative Python function to compute a weighted percentile? Thanks! Unfortunately, numpy doesn't have built-in weighted functions for everything, but you can always put something together:

    def weight_array(ar, weights):
        zipped = zip(ar, weights)
        weighted = []
        for i in zipped:
            for j in range(i[1]):
                weighted.append(i[0])
        return weighted

    np.percentile(weight_array(ar, weights), 25)

Alleo: Completely vectorized numpy solution. Here is the code I'm using. It's not an optimal one (which I'm
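
The repetition-based helper above only works for integer weights and can use a lot of memory for large weights. As a point of comparison (and not the vectorized answer that the excerpt cuts off), here is a sketch of one common vectorized approach: sort the values, use the cumulative weights as plotting positions, and interpolate.

    import numpy as np

    def weighted_percentile(values, weights, q):
        """Approximate the q-th percentile (0-100) of values under weights."""
        values = np.asarray(values, dtype=float)
        weights = np.asarray(weights, dtype=float)
        order = np.argsort(values)
        values, weights = values[order], weights[order]
        # Cumulative weights shifted to interval midpoints, normalized to [0, 1].
        cum = np.cumsum(weights) - 0.5 * weights
        cum /= weights.sum()
        return np.interp(q / 100.0, cum, values)

    # Example: weighted 25th percentile of hypothetical data.
    print(weighted_percentile([1, 2, 3, 4], [1, 1, 1, 5], 25))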

Select a random item from a weighted list

Submitted by 心不动则不痛 on 2019-11-27 06:05:37
Question: I am trying to write a program to select a random name from the US Census last-name list. The list format is:

    Name      Weight  Cumulative  Line
    --------  ------  ----------  ----
    SMITH     1.006   1.006       1
    JOHNSON   0.810   1.816       2
    WILLIAMS  0.699   2.515       3
    JONES     0.621   3.136       4
    BROWN     0.621   3.757       5
    DAVIS     0.480   4.237       6

Assuming I load the data into a structure like:

    class Name
    {
        public string Name { get; set; }
        public decimal Weight { get; set; }
        public decimal Cumulative { get; set; }
    }

What data structure would be best to hold the
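
One common approach (a sketch only, in Python rather than C#, and not necessarily what the accepted answer used): keep the cumulative weights in a sorted list and binary-search a uniform random draw between 0 and the total.

    import bisect
    import random

    # Data mirroring the excerpt's table: names with their weights.
    names = ["SMITH", "JOHNSON", "WILLIAMS", "JONES", "BROWN", "DAVIS"]
    weights = [1.006, 0.810, 0.699, 0.621, 0.621, 0.480]

    # Precompute cumulative weights once; each draw is then O(log n).
    cumulative, total = [], 0.0
    for w in weights:
        total += w
        cumulative.append(total)

    def pick_random_name():
        r = random.uniform(0.0, total)
        return names[bisect.bisect_left(cumulative, r)]

    print(pick_random_name())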