grouping

Merge Json array date based

a 夏天 submitted on 2020-05-16 03:29:25
Question: I can't find a solution for this: I want to group a JSON array based on one column (date) and sort it with JavaScript / jQuery. I have been trying to find a solution but I can't figure it out.

[ { "date" : "2010-01-01", "price" : 30 },
  { "date" : "2010-02-01", "price" : 40 },
  { "date" : "2010-03-01", "price" : 50 },
  { "date" : "2010-01-01", "price2" : 45 },
  { "date" : "2010-05-01", "price2" : 40 },
  { "date" : "2010-10-01", "price2" : 50 } ]

I want this : [ { "date" : "2010-01-01", "price" :
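A minimal sketch of one way this could be approached in plain JavaScript (no jQuery required), assuming the goal is to merge all objects that share the same date into a single object and then sort the result by date; the variable names are illustrative:

const data = [
  { date: "2010-01-01", price: 30 },
  { date: "2010-02-01", price: 40 },
  { date: "2010-03-01", price: 50 },
  { date: "2010-01-01", price2: 45 },
  { date: "2010-05-01", price2: 40 },
  { date: "2010-10-01", price2: 50 }
];

// Merge entries that share the same date into one object per date.
const byDate = {};
for (const item of data) {
  byDate[item.date] = Object.assign(byDate[item.date] || {}, item);
}

// Back to an array, sorted by date (ISO-formatted dates sort correctly as strings).
const merged = Object.values(byDate).sort((a, b) => a.date.localeCompare(b.date));
console.log(merged); // first element: { date: "2010-01-01", price: 30, price2: 45 }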

grouping and sum with nested lists

孤者浪人 submitted on 2020-05-13 04:55:43
Question: I have nested lists and I'm trying to group and sum to get the desired result using Java streams and collectors. With the code below I'm not able to loop over multiple SubAccounts; either I have to use a for loop or some other logic. I want to achieve this using the Streams API. Is there any possibility for that?

Map<Long, BigDecimal> assetQuanMap = subAccounts.getAssets.parallelStream().collect(Collectors.groupingBy(Asset::getAssetId, Collectors.reducing(BigDecimal.ZERO, Asset::getQuantity, BigDecimal::add)));
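One possible approach, sketched below, is to flatten the nested lists with flatMap before grouping. The SubAccount and Asset types are hypothetical; their fields and accessor names are assumptions inferred from the snippet above:

import java.math.BigDecimal;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical types; field and accessor names are assumptions based on the question's snippet.
record Asset(Long assetId, BigDecimal quantity) {
    Long getAssetId() { return assetId; }
    BigDecimal getQuantity() { return quantity; }
}

record SubAccount(List<Asset> assets) {
    List<Asset> getAssets() { return assets; }
}

public class GroupAndSum {
    public static void main(String[] args) {
        List<SubAccount> subAccounts = List.of(
                new SubAccount(List.of(new Asset(1L, new BigDecimal("10")),
                                       new Asset(2L, new BigDecimal("5")))),
                new SubAccount(List.of(new Asset(1L, new BigDecimal("7")))));

        // Flatten the assets of every sub-account into one stream,
        // then group by assetId and sum the quantities.
        Map<Long, BigDecimal> quantityByAssetId = subAccounts.stream()
                .flatMap(subAccount -> subAccount.getAssets().stream())
                .collect(Collectors.groupingBy(
                        Asset::getAssetId,
                        Collectors.reducing(BigDecimal.ZERO, Asset::getQuantity, BigDecimal::add)));

        System.out.println(quantityByAssetId); // {1=17, 2=5}
    }
}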

Konvajs: How to change position of group of texts

我是研究僧i submitted on 2020-04-30 06:25:04
Question: I'm using Konvajs. I have a group of texts, and I don't want to allow dragging the group outside of the canvas. I tried to solve that using dragBoundFunc, but it didn't help me. Now I'm just trying to change the group position during dragmove, but neither setPosition nor setAbsolutePosition lets me change the group position.

stage.on('dragmove', (e) => stageOnDragMove(e, layer));

const stageOnDragMove = (e: Konva.KonvaEventObject<any>, layer: Konva.Layer) => { const selectionGroup = layer.findOne('#selection-group');
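A rough sketch of one way to keep a draggable group inside the stage is to clamp its absolute position on dragmove. The stage size, the group contents, and the GROUP_WIDTH / GROUP_HEIGHT constants below are illustrative assumptions, not values from the question:

import Konva from 'konva';

const stage = new Konva.Stage({ container: 'container', width: 400, height: 300 });
const layer = new Konva.Layer();
stage.add(layer);

const group = new Konva.Group({ id: 'selection-group', draggable: true });
group.add(new Konva.Text({ text: 'first line', fontSize: 20 }));
group.add(new Konva.Text({ text: 'second line', fontSize: 20, y: 24 }));
layer.add(group);

// Assumed overall size of the group's contents, used for clamping.
const GROUP_WIDTH = 120;
const GROUP_HEIGHT = 48;

// On every drag move, clamp the group's absolute position to the stage bounds.
group.on('dragmove', () => {
  const pos = group.absolutePosition();
  group.absolutePosition({
    x: Math.min(Math.max(pos.x, 0), stage.width() - GROUP_WIDTH),
    y: Math.min(Math.max(pos.y, 0), stage.height() - GROUP_HEIGHT),
  });
});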

Conditional counting within groups

浪尽此生 submitted on 2020-04-07 05:40:33
Question: I wanted to do conditional counting after groupby; for example, group by the values of column A, and then count within each group how often the value 5 appears in column B. If I were doing this for the entire DataFrame, it's just len(df[df['B']==5]). So I hoped I could do df.groupby('A')[df['B']==5].size(). But I guess boolean indexing doesn't work within GroupBy objects.

Example:

import pandas as pd
df = pd.DataFrame({'A': [0, 4, 0, 4, 4, 6], 'B': [5, 10, 10, 5, 5, 10]})
groups = df.groupby('A'
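A small sketch of one way to count the condition per group: build the boolean mask on the whole frame first, then sum it within each group of A (True counts as 1):

import pandas as pd

df = pd.DataFrame({'A': [0, 4, 0, 4, 4, 6], 'B': [5, 10, 10, 5, 5, 10]})

# Compare B to 5 over the whole frame, then sum the booleans per group of A.
counts = df['B'].eq(5).groupby(df['A']).sum()
print(counts)  # A=0 -> 1, A=4 -> 2, A=6 -> 0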

After binning a column of a dataframe, how to make a new dataframe to count the number of elements in each bin?

不想你离开。 submitted on 2020-03-20 07:29:27
Question: Say I have a dataframe, df:

>>> df
   Age  Score
    19      1
    20      2
    24      3
    19      2
    24      3
    24      1
    24      3
    20      1
    19      1
    20      3
    22      2
    22      1

I want to construct a new dataframe that bins Age and stores the total number of elements in each of the bins in different Score columns:

   Age    Score 1  Score 2  Score 3
   19-21        2        4        3
   22-24        2        2        9

This is my way of doing it, which I feel is highly convoluted (meaning, it shouldn't be this difficult):

import numpy as np
import pandas as pd
data = pd.DataFrame(columns=['Age', 'Score'])
data['Age']
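One concise possibility is to bin Age with pd.cut and count Score values per bin with pd.crosstab. The bin edges below are an assumption based on the 19-21 / 22-24 labels above, and the counts shown come from the 12 rows in the excerpt rather than the numbers in the question's desired output:

import pandas as pd

df = pd.DataFrame({
    'Age':   [19, 20, 24, 19, 24, 24, 24, 20, 19, 20, 22, 22],
    'Score': [ 1,  2,  3,  2,  3,  1,  3,  1,  1,  3,  2,  1],
})

# Bin ages into 19-21 and 22-24, then cross-tabulate the bins against Score.
age_bins = pd.cut(df['Age'], bins=[18, 21, 24], labels=['19-21', '22-24'])
table = pd.crosstab(age_bins, df['Score'])
print(table)
# Score  1  2  3
# Age
# 19-21  3  2  1
# 22-24  2  1  3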

Data.table: Apply function over groups with reference to set value in each group. Pass resulting columns into a function

会有一股神秘感。 submitted on 2020-03-20 06:08:00
Question: I have data in a long format which will be grouped by geographies. Within each group, I want to calculate the difference between one of the variables of interest and all the other variables of interest. I could not figure out how to do this efficiently in a single data.table statement, so I did a workaround, which also introduced some new errors along the way (I fixed those with more workarounds, but help here would also be appreciated!). I then want to pass the resulting columns into a ggplot
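A sketch in R of how the within-group difference against a reference variable might look in a single data.table statement; the column names geo, variable, and value, and the "baseline" reference variable, are all illustrative assumptions rather than names from the question:

library(data.table)

# Hypothetical long-format data: one reference variable ("baseline") plus others, per geography.
dt <- data.table(
  geo      = rep(c("A", "B"), each = 3),
  variable = rep(c("baseline", "x1", "x2"), times = 2),
  value    = c(10, 12, 15, 20, 18, 25)
)

# Within each geography, subtract the reference variable's value from every row.
dt[, diff_from_baseline := value - value[variable == "baseline"], by = geo]

print(dt)
#    geo variable value diff_from_baseline
# 1:   A baseline    10                  0
# 2:   A       x1    12                  2
# 3:   A       x2    15                  5
# 4:   B baseline    20                  0
# 5:   B       x1    18                 -2
# 6:   B       x2    25                  5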

How to group list of tuples?

混江龙づ霸主 submitted on 2020-03-02 19:33:07
Question: Note: I know how I can do this of course in an explicit for loop, but I am looking for a solution that is a bit more readable. If possible, I'd like to solve this by using some of the built-in functionality. The best-case scenario is something like result = [ *groupby logic* ]. Assuming the following list:

import numpy as np
np.random.seed(42)
N = 10
my_tuples = list(zip(np.random.choice(list('ABC'), size=N), np.random.choice(range(100), size=N)))

where my_tuples is [('C', 74), ('A', 74), ('C',
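A short sketch using itertools.groupby, which only groups consecutive items and therefore needs the list sorted by the key first; the tuple values and the target structure (a dict mapping each key to its list of values) are illustrative assumptions, since the desired output is cut off in the excerpt above:

from itertools import groupby
from operator import itemgetter

my_tuples = [('A', 1), ('B', 2), ('A', 3), ('C', 4), ('B', 5)]  # illustrative values

# Sort by the key, then group consecutive runs of the same key.
grouped = {
    key: [value for _, value in items]
    for key, items in groupby(sorted(my_tuples, key=itemgetter(0)), key=itemgetter(0))
}
print(grouped)  # {'A': [1, 3], 'B': [2, 5], 'C': [4]}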