group-by

Rails 4 where, order, group, count include zeros - PostgreSQL

一个人想着一个人 · Submitted on 2020-01-01 19:25:29

Question: Here is my query:

User.where("created_at >= ? AND created_at <= ?", date1, date2).order('DATE(created_at) DESC').group("DATE(created_at)").count

and I get output like: {Thu, 15 May 2014=>1}. But I want the remaining days to appear with 0, for example: {Thu, 15 May 2014=>1, Fri, 16 May 2014=>0}. What I want is the Users created in a date range, ordered and grouped by created_at, with the number of such Users for each day. When no users exist for a particular day it should return 0, which the …
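The excerpt cuts off before any answer; as a sketch of the usual approach, the missing days are filled in after the grouped count comes back from the database, since SQL GROUP BY alone cannot produce rows for dates that have no users. A minimal Python version of that post-processing step (hypothetical data, not the asker's Rails setup):

```python
from datetime import date, timedelta

def fill_missing_days(counts, start, end):
    """Return {day: count} for every day in [start, end], defaulting to 0."""
    filled = {}
    day = start
    while day <= end:
        filled[day] = counts.get(day, 0)  # days absent from the query result become 0
        day += timedelta(days=1)
    return filled

# The grouped count as it might come back from the database:
db_counts = {date(2014, 5, 15): 1}
filled = fill_missing_days(db_counts, date(2014, 5, 14), date(2014, 5, 16))
```

In Rails the same idea is typically applied by iterating the date range in Ruby and defaulting each missing key to 0.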

count number of items in a row in mysql

烂漫一生 · Submitted on 2020-01-01 18:17:36

Question: I have a list of students that shows whether they were present or absent from a particular class.

CREATE TABLE classlist (`id` int, `studentid` int, `subjectid` int, `presentid` int);
CREATE TABLE student (`id` int, `name` varchar(4));
CREATE TABLE subject (`id` int, `name` varchar(4));
CREATE TABLE classStatus (`id` int, `name` varchar(8));

INSERT INTO classlist (`id`, `studentid`, `subjectid`, `presentid`) VALUES
  (1, 111, 1, 1), (2, 222, 3, 0), (3, 333, 2, 1), (4, 111, 4, 0), (5, 111, 1 …
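The question is cut off before the desired output, but a common reading is counting present versus absent rows per student. A sketch using Python's sqlite3 as a stand-in for MySQL (sample rows taken from the question's INSERT, minus the truncated fifth row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE classlist (id INTEGER, studentid INTEGER, subjectid INTEGER, presentid INTEGER);
INSERT INTO classlist VALUES (1,111,1,1),(2,222,3,0),(3,333,2,1),(4,111,4,0);
""")

# presentid is 1/0, so SUM counts the present rows directly.
rows = conn.execute("""
SELECT studentid,
       SUM(presentid)            AS present,
       COUNT(*) - SUM(presentid) AS absent
FROM classlist
GROUP BY studentid
ORDER BY studentid
""").fetchall()
```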

Efficient GROUP BY a CASE expression in Amazon Redshift/PostgreSQL

一笑奈何 · Submitted on 2020-01-01 13:27:55

Question: In analytics processing there is often a need to collapse "unimportant" groups of data into a single row in the resulting table. One way to do this is to GROUP BY a CASE expression, where the unimportant groups are coalesced into a single row by having the CASE expression return a single value for all of them, e.g. NULL. This question is about efficient ways to perform this grouping in Amazon Redshift, which is based on ParAccel: close to PostgreSQL 8.0 in terms of functionality. As an example, …
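The example is truncated, but the GROUP BY CASE pattern the asker describes can be demonstrated with SQLite standing in for Redshift (hypothetical events table; 'big' is the only group kept, everything else is collapsed into 'other'):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (grp TEXT, val INTEGER);
INSERT INTO events VALUES ('big',10),('big',20),('tiny1',1),('tiny2',2);
""")

# The CASE expression maps every unimportant group onto one bucket value,
# so the GROUP BY yields a single collapsed row for all of them.
rows = conn.execute("""
SELECT CASE WHEN grp = 'big' THEN grp ELSE 'other' END AS bucket,
       SUM(val)
FROM events
GROUP BY CASE WHEN grp = 'big' THEN grp ELSE 'other' END
ORDER BY bucket
""").fetchall()
```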

Assign unique ID per multiple columns of data table

痴心易碎 · Submitted on 2020-01-01 12:16:29

Question: I would like to assign unique IDs to rows of a data table per multiple column values. Let's consider a simple example:

library(data.table)
DT = data.table(a=c(4,2,NA,2,NA), b=c("a","b","c","b","c"), c=1:5)

    a b c
1:  4 a 1
2:  2 b 2
3: NA c 3
4:  2 b 4
5: NA c 5

I'd like to generate IDs based on columns a and b, and expect to get three IDs, where the 2nd and 4th row IDs are identical and the 3rd and 5th rows share an ID as well. I have seen two solutions, but each is slightly incomplete: 1) Solution …
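As a language-neutral sketch of the requested behavior (pure Python rather than data.table, with None playing the role of NA), dense IDs can be assigned in first-appearance order so that equal (a, b) pairs share an ID:

```python
def group_ids(rows):
    """Assign dense IDs in first-appearance order; identical (a, b) pairs share an ID."""
    ids = {}
    out = []
    for a, b in rows:
        key = (a, b)  # None (standing in for NA) is an ordinary key
        if key not in ids:
            ids[key] = len(ids) + 1
        out.append(ids[key])
    return out

rows = [(4, "a"), (2, "b"), (None, "c"), (2, "b"), (None, "c")]
ids = group_ids(rows)
```

This mirrors what data.table's `DT[, ID := .GRP, by = .(a, b)]` reports per by-group, with NA treated as a regular value.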

Django: Filtering datetime field by *only* the year value?

℡╲_俬逩灬. · Submitted on 2020-01-01 08:32:49

Question: I'm trying to spit out a Django page which lists all entries by the year they were created. So, for example:

2010: Note 4, Note 5, Note 6
2009: Note 1, Note 2, Note 3

It's proving more difficult than I would have expected. The model from which the data comes is below:

class Note(models.Model):
    business = models.ForeignKey(Business)
    note = models.TextField()
    created = models.DateTimeField(auto_now_add=True)
    updated = models.DateTimeField(auto_now=True)

    class Meta:
        db_table = 'client_note' …
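The question is cut off before any answer; leaving the ORM aside, the grouping itself is straightforward once the datetimes are in hand. A sketch in plain Python (hypothetical note data shaped like the model's created/note fields):

```python
from collections import defaultdict
from datetime import datetime

def notes_by_year(notes):
    """Group (created, text) pairs into {year: [texts]}, newest year first."""
    grouped = defaultdict(list)
    for created, text in notes:
        grouped[created.year].append(text)
    # Sort descending by year to match the page layout in the question.
    return dict(sorted(grouped.items(), reverse=True))

notes = [
    (datetime(2009, 3, 1), "Note 1"),
    (datetime(2010, 6, 2), "Note 4"),
    (datetime(2009, 7, 9), "Note 2"),
]
by_year = notes_by_year(notes)
```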

.NET LINQ to entities group by date (day)

倾然丶 夕夏残阳落幕 · Submitted on 2020-01-01 08:02:31

Question: I have the same problem posted here: LINQ to Entities group-by failure using .date. However, the answer is not 100% correct: it works in all cases except when different timezones are used. When different timezones are used, it also groups on the timezone. Why? I managed to bypass this by using many entity functions.

int localOffset = Convert.ToInt32(TimeZone.CurrentTimeZone.GetUtcOffset(DateTime.Now).TotalMinutes);
var results = (from perfEntry in db.entry where (....) select new { perfEntry …
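The timezone complaint can be illustrated outside of LINQ: taking the calendar date of a timestamp in its local offset splits moments that share a UTC day, whereas normalizing to UTC first does not. A Python sketch of that normalization (hypothetical timestamps):

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

def count_by_utc_day(timestamps):
    """Group timezone-aware timestamps by their UTC calendar day."""
    return Counter(ts.astimezone(timezone.utc).date() for ts in timestamps)

tz_plus2 = timezone(timedelta(hours=2))
stamps = [
    datetime(2020, 1, 1, 1, 0, tzinfo=tz_plus2),         # 2019-12-31 23:00 UTC
    datetime(2019, 12, 31, 22, 0, tzinfo=timezone.utc),  # same UTC day
]
counts = count_by_utc_day(stamps)
```

Grouping on the raw local dates here would have produced two buckets; after the UTC conversion both timestamps land on the same day.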

Django group by hour

倾然丶 夕夏残阳落幕 · Submitted on 2020-01-01 07:04:23

Question: I have the following model in Django.

class StoreVideoEventSummary(models.Model):
    Customer = models.ForeignKey(GlobalCustomerDirectory, null=True, db_column='CustomerID', blank=True, db_index=True)
    Store = models.ForeignKey(Store, null=True, db_column='StoreID', blank=True, related_name="VideoEventSummary")
    Timestamp = models.DateTimeField(null=True, blank=True, db_index=True)
    PeopleCount = models.IntegerField(null=True, blank=True)

I would like to find out the number of people entering the …
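The question is truncated mid-sentence, but the title's "group by hour" suggests summing PeopleCount bucketed by the hour of Timestamp. A plain-Python sketch of that aggregation (hypothetical event data; a real Django version would use a database-side hour truncation, which the excerpt does not show):

```python
from collections import defaultdict
from datetime import datetime

def people_per_hour(events):
    """Sum people counts per hour of day from (timestamp, people_count) pairs."""
    totals = defaultdict(int)
    for ts, people in events:
        totals[ts.hour] += people
    return dict(totals)

events = [
    (datetime(2020, 1, 1, 9, 15), 3),
    (datetime(2020, 1, 1, 9, 45), 2),
    (datetime(2020, 1, 2, 17, 5), 7),
]
hourly = people_per_hour(events)
```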

Applying different aggregate functions to different columns (now that dict with renaming is deprecated)

心已入冬 · Submitted on 2020-01-01 05:29:07

Question: I had asked this question before: python pandas: applying different aggregate functions to different columns. But the latest changes to pandas (https://github.com/pandas-dev/pandas/pull/15931) mean that what I thought was an elegant and pythonic solution is deprecated, for reasons I genuinely fail to understand. The question was, and still is: when doing a groupby, how can I apply different aggregate functions to different fields (e.g. sum of x, avg of x, min of y, max of z, etc.) and rename the …
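The excerpt stops before any answer, but the replacement pandas introduced for the deprecated dict-with-renaming style is named aggregation (pandas 0.25+): each output column is declared as a (column, function) pair. A small sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({
    "g": ["a", "a", "b"],
    "x": [1, 2, 3],
    "y": [10, 20, 30],
})

# Named aggregation: output_name=(input_column, aggregation_function)
out = df.groupby("g").agg(
    x_sum=("x", "sum"),
    x_avg=("x", "mean"),
    y_min=("y", "min"),
)
```

This covers the asker's case of different functions on different fields, and the renaming happens in the same call.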

query with count subquery, inner join and group

梦想的初衷 · Submitted on 2020-01-01 05:15:12

Question: I'm definitely a noob with SQL; I've been busting my head trying to write a complex query against the following table structure in PostgreSQL:

CREATE TABLE reports (
    reportid character varying(20) NOT NULL,
    userid integer NOT NULL,
    reporttype character varying(40) NOT NULL
);

CREATE TABLE users (
    userid serial NOT NULL,
    username character varying(20) NOT NULL
);

The objective of the query is to fetch the number of report types per user and display it in one column. There are three different types of …
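The question text stops before listing the three report types, so the names below (A, B, C) are placeholders. A sketch of per-user conditional counts using sqlite3 in place of PostgreSQL (in PostgreSQL proper, FILTER (WHERE ...) clauses could replace the CASE expressions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (userid INTEGER, username TEXT);
CREATE TABLE reports (reportid TEXT, userid INTEGER, reporttype TEXT);
INSERT INTO users VALUES (1,'alice'),(2,'bob');
INSERT INTO reports VALUES ('r1',1,'A'),('r2',1,'B'),('r3',1,'A'),('r4',2,'C');
""")

# One conditional SUM per report type, joined back to the user names.
rows = conn.execute("""
SELECT u.username,
       SUM(CASE WHEN r.reporttype = 'A' THEN 1 ELSE 0 END) AS a_count,
       SUM(CASE WHEN r.reporttype = 'B' THEN 1 ELSE 0 END) AS b_count,
       SUM(CASE WHEN r.reporttype = 'C' THEN 1 ELSE 0 END) AS c_count
FROM users u
JOIN reports r ON r.userid = u.userid
GROUP BY u.username
ORDER BY u.username
""").fetchall()
```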

Translate SQL to lambda LINQ with GroupBy and Average

删除回忆录丶 · Submitted on 2020-01-01 04:24:10

Question: I spent a few hours trying to translate simple SQL to lambda LINQ:

SELECT ID, AVG(Score) FROM myTable GROUP BY ID

Any idea?

Answer 1:

from t in myTable
group t by new { t.ID } into g
select new { Average = g.Average(p => p.Score), g.Key.ID }

or, as a lambda:

myTable.GroupBy(t => new { ID = t.ID })
       .Select(g => new { Average = g.Average(p => p.Score), ID = g.Key.ID })

Answer 2: The equivalent in LINQ to Objects would be something like the below:

var results = from row in myTable
              group row by row.Id into rows …
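For comparison with the two LINQ answers, the same GROUP BY ID / AVG(Score) logic can be sketched in a few lines of Python (hypothetical (ID, Score) rows):

```python
from collections import defaultdict

def average_score_by_id(rows):
    """Equivalent of: SELECT ID, AVG(Score) FROM myTable GROUP BY ID."""
    totals = defaultdict(lambda: [0.0, 0])  # id -> [running sum, count]
    for rid, score in rows:
        totals[rid][0] += score
        totals[rid][1] += 1
    return {rid: s / n for rid, (s, n) in totals.items()}

my_table = [(1, 10), (1, 20), (2, 5)]
averages = average_score_by_id(my_table)
```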