weighted

weighted numpy bincount for 2D IDs array and 1D weights

Submitted by 我与影子孤独终老i on 2020-07-09 08:38:17
Question: I am using numpy_indexed to apply a vectorized numpy bincount, as follows:

import numpy as np
import numpy_indexed as npi
rowidx, colidx = np.indices(index_tri.shape)
(cols, rows), B = npi.count((index_tri.flatten(), rowidx.flatten()))

where index_tri is the following matrix:

index_tri = np.array([[ 0, 0, 0, 7, 1, 3],
                      [ 1, 2, 2, 9, 8, 9],
                      [ 3, 1, 1, 4, 9, 1],
                      [ 5, 6, 6, 10, 10, 10],
                      [ 7, 8, 9, 4, 3, 3],
                      [ 3, 8, 6, 3, 8, 6],
                      [ 4, 3, 3, 7, 8, 9],
                      [10, 10, 10, 5, 6, 6],
                      [ 4, 9, 1, 3, 1, 1],
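The offset trick below is one plain-NumPy way to get a per-row weighted bincount without numpy_indexed. The `weights` vector (one weight per column) is an assumption for illustration, since the question's weights are not shown:

```python
import numpy as np

# Small example mirroring the question's setup (first three rows);
# the per-column weights are hypothetical.
index_tri = np.array([[0, 0, 0, 7, 1, 3],
                      [1, 2, 2, 9, 8, 9],
                      [3, 1, 1, 4, 9, 1]])
weights = np.array([0.5, 1.0, 2.0, 1.0, 0.5, 1.0])

n_rows, n_cols = index_tri.shape
n_bins = index_tri.max() + 1

# Offset each row's IDs into its own block of bins, so one flat
# bincount tallies every row independently.
offsets = np.arange(n_rows)[:, None] * n_bins
flat_ids = (index_tri + offsets).ravel()
flat_w = np.broadcast_to(weights, index_tri.shape).ravel()

counts = np.bincount(flat_ids, weights=flat_w,
                     minlength=n_rows * n_bins).reshape(n_rows, n_bins)
# counts[r, b] is the summed weight of ID b in row r
```

The same idea extends to any per-element weight array with the shape of `index_tri`.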

Modularity calculation for weighted graphs in igraph

Submitted by 不打扰是莪最后的温柔 on 2020-01-13 13:50:11
Question: I used the fastgreedy algorithm in igraph for community detection in a weighted, undirected graph. Afterwards I wanted to look at the modularity, and I got different values from different methods; I am wondering why. Here is a short example that demonstrates the problem:

library(igraph)
d <- matrix(c(1, 0.2, 0.3, 0.9, 0.9,
              0.2, 1, 0.6, 0.4, 0.5,
              0.3, 0.6, 1, 0.1, 0.8,
              0.9, 0.4, 0.1, 1, 0.5,
              0.9, 0.5, 0.8, 0.5, 1), byrow=T, nrow=5)
g <- graph.adjacency(d, weighted=T, mode="lower"
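For reference, weighted modularity can be computed directly from the adjacency matrix. The sketch below implements Newman's formula in NumPy; self-loops (the diagonal in the question's matrix) are dropped here, and differing conventions for such details are one thing that can make different tools disagree:

```python
import numpy as np

def weighted_modularity(A, membership):
    """Newman's modularity Q for a weighted undirected graph given as a
    symmetric adjacency matrix A and a community label per vertex."""
    k = A.sum(axis=1)            # weighted vertex degrees
    two_m = A.sum()              # 2m: total weight, both directions
    same = np.equal.outer(membership, membership)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Matrix from the question, with the self-loop diagonal zeroed out.
d = np.array([[1.0, 0.2, 0.3, 0.9, 0.9],
              [0.2, 1.0, 0.6, 0.4, 0.5],
              [0.3, 0.6, 1.0, 0.1, 0.8],
              [0.9, 0.4, 0.1, 1.0, 0.5],
              [0.9, 0.5, 0.8, 0.5, 1.0]])
A = d.copy()
np.fill_diagonal(A, 0.0)

# Hypothetical partition: vertex 1 alone, the rest together.
Q = weighted_modularity(A, np.array([0, 1, 0, 0, 0]))
```

Putting all vertices in one community gives Q = 0 by construction, which is a handy sanity check when comparing implementations.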

Weighted sampling in Fortran

Submitted by 风流意气都作罢 on 2020-01-11 09:16:31
Question: In a Fortran program I would like to choose a variable at random (specifically, its index) using weights. The weights would be provided in a separate vector (element 1 would contain the weight of variable 1, and so on). I have the following code, which does the job without weights (mind being an integer vector with the index of each variable in the original dataset):

call rrand(xrand)
j = int(nn * xrand) + 1
mvar = mind(j)

Answer 1: Here are two examples. The first one is integer, parameter ::
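A linear cumulative-sum scan is the standard way to draw a weighted index. The Python sketch below shows the logic, which ports line-for-line to a Fortran do-loop: scale a uniform draw by the total weight, then accumulate weights until the target is passed:

```python
import random

def weighted_choice(weights, u=None):
    """Return index i with probability weights[i] / sum(weights).
    u is a uniform draw in [0, 1); if omitted, one is generated."""
    if u is None:
        u = random.random()
    target = u * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if target < acc:
            return i
    return len(weights) - 1  # guard against floating-point round-off
```

In the question's setting, `mvar = mind(weighted_choice(weights) + 1)` would replace the unweighted lookup (the +1 accounting for Fortran's 1-based indexing).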

Weighted random sampling in Elasticsearch

Submitted by 爷，独闯天下 on 2020-01-09 19:30:52
Question: I need to obtain a random sample from an Elasticsearch index, i.e. to issue a query that retrieves documents from a given index with weighted probability Wj/ΣWi (where Wj is the weight of document j and ΣWi is the sum of the weights of all documents matching the query). Currently, I have the following query:

GET products/_search?pretty=true
{"size": 5,
 "query": {
   "function_score": {
     "query": {
       "bool": {
         "must": {
           "term": {"category_id": "5df3ab90-6e93-0133-7197-04383561729e"}
         }
       }
     },
     "functions": [{
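As an illustration (field and index names here are hypothetical, and this is a sketch rather than a verified query), one common pattern combines random_score with field_value_factor under score_mode: multiply. Note that multiplying a uniform random score by the weight biases retrieval toward heavier documents but does not reproduce exact Wj/ΣWi inclusion probabilities; the Efraimidis–Spirakis key u^(1/w), computed in a script_score, is the principled variant:

```python
import json

# Sketch of a weight-biased sampling query body; "weight" is an
# assumed per-document numeric field.
query = {
    "size": 5,
    "query": {
        "function_score": {
            "query": {"match_all": {}},
            "functions": [
                {"random_score": {"seed": 42, "field": "_seq_no"}},
                {"field_value_factor": {"field": "weight", "missing": 1}},
            ],
            "score_mode": "multiply",   # random draw times weight
            "boost_mode": "replace",    # ignore the relevance score
        }
    },
}
body = json.dumps(query)  # send as the _search request body
```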

How to use the R survey package to analyze multiple response questions in a weighted sample?

Submitted by 余生长醉 on 2020-01-03 15:56:22
Question: I'm relatively new to R. I am wondering how to use the survey package (http://r-survey.r-forge.r-project.org/survey/) to analyze a multiple-response question in a weighted sample. The tricky bit is that more than one response can be ticked, so the responses are stored across several columns. Example: I have survey data from 500 respondents who were drawn randomly from across 10 districts. Let's say the main question that was asked was (stored in column H1_AreYouHappy): 'Are you happy?' -
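Standard-error machinery aside, the weighted share endorsing each option of a multiple-response question is just a weighted mean of each 0/1 indicator column. This toy NumPy sketch (with hypothetical data) shows the point estimate that svymean() would report option by option:

```python
import numpy as np

# Hypothetical data: 4 respondents, 3 yes/no response columns of one
# multiple-response question, and a sampling weight per respondent.
responses = np.array([[1, 0, 1],
                      [0, 1, 1],
                      [1, 1, 0],
                      [0, 0, 1]])
weights = np.array([2.0, 1.0, 1.0, 2.0])

# Weighted proportion ticking each option.
shares = (responses * weights[:, None]).sum(axis=0) / weights.sum()
```

Because each option is its own 0/1 column, the shares need not sum to 1; in the survey package one would likewise call svymean() on each indicator column of the svydesign object.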

How to set the weight of part of the positive samples in TensorFlow for binary classification

Submitted by 丶灬走出姿态 on 2020-01-03 05:00:12
Question: I want to set the same weight for a subset of the positive samples. However, as far as I can tell, tf.nn.weighted_cross_entropy_with_logits can only set a single weight for all positive samples. For example, in CTR prediction I want to give the order samples a weight of 10, while the weight of the click samples and the unclicked samples stays 1. Here is my unweighted code:

def my_model(features, labels, mode, params):
    net = tf.feature_column.input_layer(features, params['feature_columns'])
    for units in params['hidden
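A NumPy sketch of the underlying idea (TensorFlow-free for brevity): compute the element-wise binary cross-entropy and multiply by a per-example weight vector, so different positives can carry different weights. In TF the equivalent is tf.nn.sigmoid_cross_entropy_with_logits followed by a multiply with a weight tensor before reducing; the weight values here are illustrative:

```python
import numpy as np

def weighted_bce(logits, labels, sample_weight):
    """Per-example weighted binary cross-entropy. Unlike the single
    pos_weight of tf.nn.weighted_cross_entropy_with_logits, the weight
    here is a full vector, so e.g. 'order' positives can get 10 while
    'click' positives keep 1."""
    # numerically stable: max(x,0) - x*z + log(1 + exp(-|x|))
    loss = (np.maximum(logits, 0) - logits * labels
            + np.log1p(np.exp(-np.abs(logits))))
    return np.mean(sample_weight * loss)

labels = np.array([1.0, 1.0, 0.0])   # order, click, unclicked
logits = np.array([2.0, -1.0, 0.5])
w = np.array([10.0, 1.0, 1.0])       # order positives weighted 10x
loss = weighted_bce(logits, labels, w)
```

With all weights equal to 1 this reduces to the ordinary mean cross-entropy, which is a quick way to check the weighting wiring.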

Insertion of weighted point with info in CGAL regular triangulation

Submitted by 巧了我就是萌 on 2020-01-01 19:37:13
Question: I'm facing a problem that I hope others have faced before, because I can't find a way out! I have a regular triangulation in CGAL into which I wish to insert weighted points with info (std::pair<myweightpoint, myinfo>) one by one, and to get the handle to the vertex (Vertex_handle) once it is inserted. The thing is that there is no such function. Several functions exist for insertion:

Vertex_handle Regular_triangulation::insert(const Weighted_point & p);

That returns a Vertex