aggregate

SQL - Consolidate Overlapping Data

浪子不回头ぞ submitted on 2020-01-03 17:29:07

Question: I have a simple data set in SQL Server that appears like this:

ROW  Start  End
0    1      2
1    3      5
2    4      6
3    8      9

Graphically, the data would appear like this: [chart omitted from this excerpt]. What I would like to achieve is to collapse the overlapping data so that my query returns:

ROW  Start  End
0    1      2
1    3      6
2    8      9

Is this possible in SQL Server without having to write a complex procedure or statement?

Answer 1: Here's the SQL Fiddle for another alternative. First, all the limits are sorted by order. Then the "duplicate" limits within an
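The answer above is cut off; as a language-agnostic sketch of the collapse being asked for (plain Python, not the SQL approach from the answer), overlapping [start, end] ranges can be merged after sorting:

```python
def merge_intervals(intervals):
    """Collapse overlapping [start, end] ranges; gaps between ranges are preserved."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous range: extend it.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(iv) for iv in merged]

print(merge_intervals([(1, 2), (3, 5), (4, 6), (8, 9)]))
# [(1, 2), (3, 6), (8, 9)]
```

Note that touching-but-not-overlapping ranges such as (1, 2) and (3, 5) stay separate, matching the expected output in the question.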

Error with dplyr group_by

泪湿孤枕 submitted on 2020-01-03 13:29:06

Question: This is my dataset:

N   Pl
10, WO
20, EI
10, WO
20, WO
30, EI

My expected output is:

N   Pl
10, 2
20, 1
30, 1

So, basically, I am counting the number of Pl entries for each value of N. I am trying dplyr. I know this can probably also be done with aggregate(), but I am not sure how to do it that way. In dplyr I am running this statement and getting the following error.

Statement: Diff %>% group_by(N) %>% summarise(pl=count(pl)) — here Diff is my table name.

Error: Error in UseMethod("group_by_") : no applicable method for
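For comparison, the grouped count can be sketched in plain Python (an analogue of the dplyr pipeline, not the original R code; in dplyr itself the idiomatic counting helper inside summarise() is n() rather than count()):

```python
from collections import Counter

# Reconstruction of the Diff table from the question as (N, Pl) rows.
diff = [(10, "WO"), (20, "EI"), (10, "WO"), (20, "WO"), (30, "EI")]

# Count the number of Pl entries for each value of N.
counts = Counter(n for n, _ in diff)
result = sorted(counts.items())
print(result)  # [(10, 2), (20, 2), (30, 1)]
```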

How to aggregate between the two dates in R?

岁酱吖の submitted on 2020-01-03 04:34:05

Question: Below are the two tables.

Table1
Date                 OldPrice  NewPrice
2014-06-12 09:32:56  0         10
2014-06-27 16:13:36  10        12
2014-08-12 22:41:47  12        13

Table2
Date                 Qty
2014-06-15 18:09:23  5
2014-06-19 12:04:29  4
2014-06-22 13:21:34  3
2014-06-29 19:01:22  6
2014-07-01 18:02:33  3
2014-09-29 22:41:47  6

I want to display the result in this manner:

Date                 OldPrice  NewPrice  Qty
2014-06-12 09:32:56  0         10        0
2014-06-27 16:13:36  10        12        12
2014-08-12 22:41:47  12        13        15

I used the command for(i in 1:nrow(Table1)){ startDate =
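The R loop is cut off above. One plausible reading of the expected output is that each quantity is rolled up to the next price change at or after it, with quantities after the last change folded into the last row; that reading reproduces 0, 12, 15 and can be sketched in plain Python (a sketch under that assumption, not the question's original R code):

```python
from bisect import bisect_left

# Reconstruction of the question's tables; ISO-format timestamps compare
# correctly as strings, so no date parsing is needed for this sketch.
price_changes = [
    ("2014-06-12 09:32:56", 0, 10),
    ("2014-06-27 16:13:36", 10, 12),
    ("2014-08-12 22:41:47", 12, 13),
]
quantities = [
    ("2014-06-15 18:09:23", 5), ("2014-06-19 12:04:29", 4),
    ("2014-06-22 13:21:34", 3), ("2014-06-29 19:01:22", 6),
    ("2014-07-01 18:02:33", 3), ("2014-09-29 22:41:47", 6),
]

# Assign each quantity to the first price change at or after its date;
# anything after the last change is clamped onto the last row.
change_dates = [d for d, _, _ in price_changes]
qty_sums = [0] * len(price_changes)
for d, q in quantities:
    i = min(bisect_left(change_dates, d), len(change_dates) - 1)
    qty_sums[i] += q

result = [(d, old, new, s) for (d, old, new), s in zip(price_changes, qty_sums)]
print(result)
```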

LINQ Aggregate algorithm explained

末鹿安然 submitted on 2020-01-03 04:06:43

This may sound lame, but I haven't found a really good explanation of Aggregate. Good means short, descriptive, and comprehensive, with a small, clear example.

Answer 1: Aggregate is mainly used to group or summarize data. According to MSDN, it "applies an accumulator function over a sequence."

Example 1: add all the numbers in an array.

int[] numbers = new int[] { 1, 2, 3, 4, 5 };
int aggregatedValue = numbers.Aggregate((total, nextValue) => total + nextValue);

Important: by default, the initial aggregate value is the first element of the sequence, i.e. the total variable starts out as 1.

Variables:
total: holds the aggregated (running) value returned by the func.
nextValue: the next value in the array sequence, which is added to the aggregated value, i.e. to total.

Example 2: add all the items in the array, but start the accumulator at an initial value of 10.

int[] numbers = new int[] { 1, 2, 3, 4, 5 };
int aggregatedValue = numbers.Aggregate(10, (total, nextValue) => total + nextValue);

Arguments: the first argument is the initial (seed) value, which is used to start adding the subsequent values from the array. The second argument is a func
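A Python analogue of the two C# examples above, using functools.reduce (added for comparison, not part of the original answer):

```python
from functools import reduce

numbers = [1, 2, 3, 4, 5]

# No seed: the first element becomes the initial accumulator,
# like numbers.Aggregate((total, nextValue) => total + nextValue).
total = reduce(lambda acc, nxt: acc + nxt, numbers)

# Seeded with 10, like numbers.Aggregate(10, (total, nextValue) => total + nextValue).
seeded = reduce(lambda acc, nxt: acc + nxt, numbers, 10)

print(total, seeded)  # 15 25
```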

Django aggregate on .extra values

为君一笑 submitted on 2020-01-03 02:25:43

Question: Model, with an abstract base class:

class MapObject(models.Model):
    start_date = models.DateTimeField(default=datetime.strptime('1940-09-01T00:00:00', '%Y-%m-%dT%H:%M:%S'))
    end_date = models.DateTimeField(default=datetime.strptime('1941-07-01T00:00:00', '%Y-%m-%dT%H:%M:%S'))
    description = models.TextField(blank=True)
    location = models.PointField()
    objects = models.GeoManager()
    user = models.ForeignKey(User)
    created = models.DateTimeField(auto_now_add=True)
    last_modified = models

Redshift - Calculate monthly active users

筅森魡賤 submitted on 2020-01-03 01:42:09

Question: I have a table which looks like this:

Date      | User_ID
2017-1-1  | 1
2017-1-1  | 2
2017-1-1  | 4
2017-1-2  | 3
2017-1-2  | 2
...       | ..
2017-2-1  | 1
2017-2-2  | 2
...       | ..

I'd like to calculate the monthly active users over a rolling 30-day period. I know Redshift does not support COUNT(DISTINCT) windowing. What can I do to get the following output?

Date      | MAU
2017-1-1  | 3
2017-1-2  | 4   <- We don't want to count user_id 2 twice.
...       | ..
2017-2
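To make the intended metric concrete, here is a minimal plain-Python sketch of a rolling 30-day distinct-user count over a toy copy of the table (an illustration of the definition, not a Redshift solution):

```python
from datetime import date, timedelta

# Toy events sketching the question's table: (day, user_id).
events = [
    (date(2017, 1, 1), 1), (date(2017, 1, 1), 2), (date(2017, 1, 1), 4),
    (date(2017, 1, 2), 3), (date(2017, 1, 2), 2),
]

def rolling_mau(events, window_days=30):
    """Distinct users seen in the window_days-day window ending on each day."""
    result = []
    for day in sorted({d for d, _ in events}):
        lo = day - timedelta(days=window_days - 1)
        users = {u for d, u in events if lo <= d <= day}
        result.append((day, len(users)))
    return result

print(rolling_mau(events))
```

On 2017-01-02 this yields 4, because user_id 2 is counted once even though it appears on both days.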

KendoUI datasource group and aggregate by multiple fields

点点圈 submitted on 2020-01-02 22:04:15

Question: I'm trying to group a datasource by two fields and get the average or sum of their values. But even if I specify both the group and aggregate properties in the datasource, I can't get it. Here is the code:

var dataSource = new kendo.data.DataSource({
    data: [
        { id: 1, name: "Amazon US", stock: 15, year: 2015 },
        { id: 2, name: "Amazon US", stock: 20, year: 2016 },
        { id: 3, name: "Amazon US", stock: 7, year: 2016 },
        { id: 4, name: "Amazon EU", stock: 30, year: 2015 },
        { id: 5, name: "Amazon EU", stock: 7
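The two-field grouping itself is simple to sketch outside Kendo. In plain Python (the truncated fifth row is omitted since its year is not visible in the excerpt; this is an illustration of the grouping, not the Kendo DataSource API):

```python
from collections import defaultdict

# The question's complete rows, reconstructed as plain dicts.
rows = [
    {"id": 1, "name": "Amazon US", "stock": 15, "year": 2015},
    {"id": 2, "name": "Amazon US", "stock": 20, "year": 2016},
    {"id": 3, "name": "Amazon US", "stock": 7,  "year": 2016},
    {"id": 4, "name": "Amazon EU", "stock": 30, "year": 2015},
]

# Group by the (name, year) pair and sum the stock for each group.
totals = defaultdict(int)
for r in rows:
    totals[(r["name"], r["year"])] += r["stock"]

print(dict(totals))
```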

Why is it not sufficient to group by a primary key?

风格不统一 submitted on 2020-01-02 13:34:46

Question: Suppose I have a query like this:

SELECT
    items.item_id,
    items.name,
    GROUP_CONCAT(graphics.graphic_id) AS graphic_ids
FROM order_items items
LEFT JOIN order_graphics graphics ON graphics.item_id = items.item_id
WHERE -- etc
GROUP BY items.item_id

As I understand it, the proper thing to do is to include every unaggregated column in the GROUP BY, like so:

GROUP BY items.item_id, items.name

This is to prevent records from being lost because MySQL doesn't know how to group them. However, I'm not
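The query shape can be exercised end to end with SQLite via Python's stdlib (SQLite, not MySQL, so it does not demonstrate MySQL's ONLY_FULL_GROUP_BY behavior; the tables and rows here are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE order_items (item_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE order_graphics (graphic_id INTEGER, item_id INTEGER);
    INSERT INTO order_items VALUES (1, 'mug'), (2, 'cap');
    INSERT INTO order_graphics VALUES (10, 1), (11, 1), (12, 2);
""")

# Grouping by both item_id and name; since item_id is the primary key,
# name is functionally dependent on it and each group is unchanged.
rows = con.execute("""
    SELECT items.item_id, items.name,
           GROUP_CONCAT(graphics.graphic_id) AS graphic_ids
    FROM order_items items
    LEFT JOIN order_graphics graphics ON graphics.item_id = items.item_id
    GROUP BY items.item_id, items.name
""").fetchall()
print(rows)
```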