group-by

SQL Show most recent record in GROUP BY?

烈酒焚心 submitted on 2019-12-18 14:15:19
Question: I have a table that looks like this:

    id | SubjectCode | Grade | DateApproved | StudentId
    ---+-------------+-------+--------------+------------
     1 | SUB123      | 1.25  | 1/4/2012     | 2012-12345
     2 | SUB123      | 2.00  | 1/5/2012     | 2012-12345
     3 | SUB123      | 3.00  | 1/5/2012     | 2012-98765

I'm trying to GROUP BY SubjectCode, but I'd like it to display the most recent DateApproved, so the result would look like:

    id | SubjectCode | Grade | DateApproved | StudentId
    ---+-------------+-------+--------------+------------
     2 | SUB123      | 2.00  | 1/5/2012     | 2012-12345
     3 | SUB123      | 3.00  | 1/5/2012     | 2012-98765

I'm a little bit lost on how to do it. EDIT: OK guys, now I'm on my real PC
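The question is cut off above, but the usual pattern for "latest row per group" is to rank rows within each group and keep rank 1. A sketch only, assuming window-function support (SQL Server 2005+, MySQL 8+) and a hypothetical table name Grades:

    -- Rank rows newest-first within each SubjectCode/StudentId pair,
    -- then keep only the newest one.
    SELECT id, SubjectCode, Grade, DateApproved, StudentId
    FROM (
        SELECT g.*,
               ROW_NUMBER() OVER (
                   PARTITION BY SubjectCode, StudentId
                   ORDER BY DateApproved DESC
               ) AS rn
        FROM Grades g
    ) ranked
    WHERE rn = 1;

On engines without window functions, the same result can be had by joining the table to a grouped subquery that selects MAX(DateApproved) per SubjectCode/StudentId.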

TimeGrouper, pandas

这一生的挚爱 submitted on 2019-12-18 12:52:06
Question: I use TimeGrouper from pandas.tseries.resample to sum monthly returns into 6-month totals, as follows:

    6m_return = monthly_return.groupby(TimeGrouper(freq='6M')).aggregate(numpy.sum)

where monthly_return is like:

    2008-07-01    0.003626
    2008-08-01    0.001373
    2008-09-01    0.040192
    2008-10-01    0.027794
    2008-11-01    0.012590
    2008-12-01    0.026394
    2009-01-01    0.008564
    2009-02-01    0.007714
    2009-03-01   -0.019727
    2009-04-01    0.008888
    2009-05-01    0.039801
    2009-06-01    0.010042
    2009-07-01    0.020971
    2009-08-01    0.011926
    2009-09-01    0.024998
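For reference, in current pandas the same aggregation is usually spelled with pd.Grouper, since TimeGrouper has been removed. A minimal sketch, assuming monthly_return is a Series with a DatetimeIndex (the values below are invented for illustration):

    import numpy as np
    import pandas as pd

    # Toy monthly return series with a DatetimeIndex (values are made up).
    idx = pd.date_range('2008-07-01', periods=12, freq='MS')
    monthly_return = pd.Series(np.linspace(0.001, 0.012, 12), index=idx)

    # pd.Grouper is the modern replacement for the deprecated TimeGrouper.
    six_month_return = monthly_return.groupby(pd.Grouper(freq='6M')).sum()
    print(six_month_return)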

How to group by a Calculated Field

女生的网名这么多〃 submitted on 2019-12-18 12:47:22
Question: I need to group by a calculated field in SQL Server 2005/2008. I have the following SQL:

    select dateadd(day, -7, convert(datetime, mwspp.DateDue) + (7 - datepart(weekday, mwspp.DateDue))),
           sum(mwspp.QtyRequired)
    from manufacturingweekshortagepartpurchasing mwspp
    where mwspp.buildScheduleSimID = 10109
      and mwspp.partID = 8366
    group by mwspp.DateDue
    order by mwspp.DateDue

Instead of GROUP BY mwspp.DateDue, I need to group by the result of the calculation. Is it possible? Thanks in advance.

Answer 1:
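The answer itself is truncated above. One common approach (a sketch only, not the accepted answer; the alias names are invented here) is to repeat the expression in the GROUP BY clause, or to push it into a derived table and group by its alias, which keeps the query readable:

    -- Compute the week-ending expression once in a derived table,
    -- then group by the alias in the outer query.
    select weekEnding, sum(QtyRequired) as totalRequired
    from (
        select dateadd(day, -7, convert(datetime, mwspp.DateDue)
                   + (7 - datepart(weekday, mwspp.DateDue))) as weekEnding,
               mwspp.QtyRequired
        from manufacturingweekshortagepartpurchasing mwspp
        where mwspp.buildScheduleSimID = 10109
          and mwspp.partID = 8366
    ) t
    group by weekEnding
    order by weekEnding;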

Is it possible to perform a bitwise group function?

我只是一个虾纸丫 submitted on 2019-12-18 12:21:49
Question: I have a field in a table which contains bitwise flags. Let's say, for the sake of example, there are three flags: 4 => read, 2 => write, 1 => execute, and the table looks like this*:

    user_id | file  | permissions
    --------+-------+-------------
          1 | a.txt | 6    ( <-- 6 = 4 + 2 = read + write)
          1 | b.txt | 4    ( <-- 4 = read)
          2 | a.txt | 4
          2 | c.exe | 1    ( <-- 1 = execute)

I'm interested in finding all users who have a particular flag set (e.g. write) on ANY record. To do this in one query, I
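The question is cut off, but the stated goal ("which users have the write bit set on any row?") can be handled either by testing the bit directly or with a bitwise aggregate. A sketch, with the table name user_files invented for illustration:

    -- Portable form: a user qualifies if any of their rows has the write bit (2) set.
    SELECT user_id
    FROM user_files
    GROUP BY user_id
    HAVING MAX(permissions & 2) > 0;

    -- MySQL also offers a genuine bitwise aggregate:
    -- SELECT user_id FROM user_files GROUP BY user_id HAVING BIT_OR(permissions) & 2;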

MySQL Query with count and group by

不羁岁月 submitted on 2019-12-18 12:07:31
Question: I've got a table with different records for publishers; each record has a date in a column of type timestamp.

    id | id_publisher | date
    ---+--------------+-----------------------
     1 |            1 | 11/2012 03:09:40 p.m.
     2 |            1 | 12/2012 03:09:40 p.m.
     3 |            2 | 01/2013 03:09:40 p.m.
     4 |            3 | 01/2013 03:09:40 p.m.
     5 |            4 | 11/2012 03:09:40 p.m.
     6 |            4 | 02/2013 03:09:40 p.m.
     7 |            4 | 02/2012 03:09:40 p.m.

I need a count of the number of records published by each publisher for each month. For example:

    Month   | id_publisher | num
    --------+--------------+-----
    11/2012 |            1 |   1
    11/2012 |            2 |   0
    11/2012 |            3 |   0
    11
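The example output is truncated, but the core of the request is a count per (month, publisher) pair. A MySQL sketch, with the table name publications invented here:

    -- Non-zero counts per publisher per month.
    SELECT DATE_FORMAT(p.date, '%m/%Y') AS Month,
           p.id_publisher,
           COUNT(*) AS num
    FROM publications p
    GROUP BY DATE_FORMAT(p.date, '%m/%Y'), p.id_publisher
    ORDER BY Month, p.id_publisher;

    -- The sample output also lists 0 for publishers with no records in a month;
    -- producing those rows needs a grid of all months cross-joined with all
    -- publishers, LEFT JOINed against the counts above.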

dplyr group_by and mutate, how to access the data frame?

夙愿已清 submitted on 2019-12-18 11:40:10
Question: When using dplyr's group_by and mutate, if I understand correctly, the data frame is split into different sub-data-frames according to the group_by argument. For example, with the following code:

    set.seed(7)
    df <- data.frame(x = runif(10), let = rep(letters[1:5], each = 2))
    df %>% group_by(let) %>% mutate(mean.by.letter = mean(x))

mean() is applied successively to the column x of the 5 sub-dfs, each corresponding to a letter between a and e. So you can manipulate the columns of the sub-dfs, but can you access
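The question breaks off at the key point (accessing the whole sub-data-frame rather than a single column). As a sketch of one way to do that, assuming dplyr >= 1.0, cur_data() exposes the current group's data frame inside mutate() (newer dplyr versions prefer pick()):

    library(dplyr)

    set.seed(7)
    df <- data.frame(x = runif(10), let = rep(letters[1:5], each = 2))

    df %>%
      group_by(let) %>%
      mutate(
        mean.by.letter = mean(x),
        rows.in.group  = nrow(cur_data())  # the whole sub-data-frame for this group
      )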

Using cumsum in pandas on group()

≡放荡痞女 submitted on 2019-12-18 11:36:44
Question: From a pandas newbie: I have data that looks essentially like this:

    data1 = pd.DataFrame(
        {'Dir': ['E', 'E', 'W', 'W', 'E', 'W', 'W', 'E'],
         'Bool': ['Y', 'N', 'Y', 'N', 'Y', 'N', 'Y', 'N'],
         'Data': [4, 5, 6, 7, 8, 9, 10, 11]},
        index=pd.DatetimeIndex(['12/30/2000', '12/30/2000', '12/30/2000',
                                '1/2/2001', '1/3/2001', '1/3/2001',
                                '12/30/2000', '12/30/2000']))
    data1
    Out[1]:
               Bool  Data Dir
    2000-12-30    Y     4   E
    2000-12-30    N     5   E
    2000-12-30    Y     6   W
    2001-01-02    N     7   W
    2001-01-03    Y     8   E
    2001-01-03    N     9   W
    2000-12-30    Y    10   W
    2000-12-30    N    11   E

And I
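What the asker wants is cut off above, but given the title, a typical grouped cumulative sum looks like this (the grouping columns Dir and Bool are chosen here purely for illustration):

    import pandas as pd

    data1 = pd.DataFrame(
        {'Dir': ['E', 'E', 'W', 'W', 'E', 'W', 'W', 'E'],
         'Bool': ['Y', 'N', 'Y', 'N', 'Y', 'N', 'Y', 'N'],
         'Data': [4, 5, 6, 7, 8, 9, 10, 11]},
        index=pd.DatetimeIndex(['12/30/2000', '12/30/2000', '12/30/2000',
                                '1/2/2001', '1/3/2001', '1/3/2001',
                                '12/30/2000', '12/30/2000']))

    # Running total of Data within each (Dir, Bool) group; row order is preserved.
    data1['CumData'] = data1.groupby(['Dir', 'Bool'])['Data'].cumsum()
    print(data1)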

T-SQL GROUP BY: Best way to include other grouped columns

烂漫一生 submitted on 2019-12-18 11:28:48
Question: I'm a MySQL user who is trying to port some things over to MS SQL Server. I'm joining a couple of tables and aggregating some of the columns via GROUP BY. A simple example would be employees and projects:

    select empID, fname, lname, title, dept, count(projectID)
    from employees E
    left join projects P on E.empID = P.projLeader
    group by empID

...that would work in MySQL, but MS SQL is stricter and requires that everything is either enclosed in an aggregate function or is part of the GROUP BY
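The question breaks off here. Two sketches of the usual workarounds (column and alias names follow the example above; neither is presented as the accepted answer):

    -- Option 1: list every non-aggregated column in the GROUP BY.
    select E.empID, E.fname, E.lname, E.title, E.dept, count(P.projectID) as projects
    from employees E
    left join projects P on E.empID = P.projLeader
    group by E.empID, E.fname, E.lname, E.title, E.dept;

    -- Option 2: aggregate in a derived table keyed on the join column,
    -- then join the descriptive columns back on.
    select E.empID, E.fname, E.lname, E.title, E.dept, c.projects
    from employees E
    left join (
        select projLeader, count(projectID) as projects
        from projects
        group by projLeader
    ) c on c.projLeader = E.empID;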