aggregate

MySQL group by a certain type and select the latest row?

ⅰ亾dé卋堺 submitted on 2019-12-23 16:43:19
Question: Imagine a table with the columns type, date, and message, containing rows like this (type | date | message):

    1 | 1310572318 | Hello
    1 | 1310572317 | Hi
    2 | 1310572315 | Wassup
    3 | 1310572312 | Yo
    3 | 1310572311 | Hey
    3 | 1310572309 | Eyo
    1 | 1310572305 | Hello
    1 | 1310572303 | Good Day

Is it possible to group them by type and select the latest row (ordered by date), so the result would be:

    1 | 1310572318 | Hello
    2 | 1310572315 | Wassup
    3 | 1310572312 | Yo
    1 | 1310572305 | Hello

I'm pretty sure I…
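The common "latest row per group" pattern joins the table against a per-type max(date) subquery. A minimal sketch, using SQLite from Python to stand in for MySQL (the SQL itself is portable); table and column names mirror the question:

```python
import sqlite3

# Hypothetical table mirroring the question's sample data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (type INT, date INT, message TEXT);
INSERT INTO t VALUES
  (1, 1310572318, 'Hello'), (1, 1310572317, 'Hi'),
  (2, 1310572315, 'Wassup'),
  (3, 1310572312, 'Yo'), (3, 1310572311, 'Hey'), (3, 1310572309, 'Eyo'),
  (1, 1310572305, 'Hello'), (1, 1310572303, 'Good Day');
""")

# Self-join against the per-type max(date) to keep only the latest row per type.
rows = conn.execute("""
  SELECT t.type, t.date, t.message
  FROM t
  JOIN (SELECT type, MAX(date) AS maxd FROM t GROUP BY type) m
    ON t.type = m.type AND t.date = m.maxd
  ORDER BY t.date DESC
""").fetchall()
print(rows)
# [(1, 1310572318, 'Hello'), (2, 1310572315, 'Wassup'), (3, 1310572312, 'Yo')]
```

Note this yields exactly one row per type; the question's sample output keeps consecutive runs of the same type separate, which is a gaps-and-islands problem and needs a different query.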

how to show 0 for week when no record is matching that week in $week mongodb query

吃可爱长大的小学妹 submitted on 2019-12-23 16:26:09
Question: My collection looks like this:

    /* 1 createdAt: 6/13/2018, 5:17:07 PM */
    {
        "_id" : ObjectId("5b21043b18f3bc7c0be3414c"),
        "Number" : 242,
        "State" : "2",
        "City" : "3",
        "Website" : "",
        "Contact_Person_Name" : "Ajithmullassery",
        "CreatedById" : "Admin",
        "UpdatedById" : "Admin",
        "IsActive" : true,
        "UpdatedOn" : ISODate("2018-06-13T17:17:07.313+05:30"),
        "CreatedOn" : ISODate("2018-06-13T17:17:07.313+05:30")
    },
    /* 2 createdAt: 6/13/2018, 6:45:42 PM */
    {
        "_id" : ObjectId(…
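A `$group` on `{$week: "$CreatedOn"}` only emits weeks that actually contain documents; a common workaround is to zero-fill the missing weeks on the client after the aggregation returns. A minimal sketch (the pipeline output below is hypothetical, standing in for the real aggregation result):

```python
# Hypothetical output of a $group-by-$week pipeline: only weeks with data appear.
agg_result = [{"_id": 24, "count": 2}, {"_id": 26, "count": 5}]

def fill_missing_weeks(result, first_week, last_week):
    """Merge the aggregation output into a dense range of weeks, using 0 for gaps."""
    counts = {doc["_id"]: doc["count"] for doc in result}
    return [{"week": w, "count": counts.get(w, 0)}
            for w in range(first_week, last_week + 1)]

filled = fill_missing_weeks(agg_result, 24, 27)
print(filled)
# [{'week': 24, 'count': 2}, {'week': 25, 'count': 0},
#  {'week': 26, 'count': 5}, {'week': 27, 'count': 0}]
```

Newer MongoDB versions can also express this server-side (e.g. with `$densify`), but client-side filling works on any version.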

SQL: aggregate function and group by

喜夏-厌秋 submitted on 2019-12-23 15:36:23
Question: Consider the Oracle emp table. I'd like to get the employees with the top salary where department = 20 and job = 'CLERK'. Also assume that there is no "empno" column and that the primary key involves a number of columns. You can do this with:

    select *
    from scott.emp
    where deptno = 20
      and job = 'CLERK'
      and sal = (select max(sal)
                 from scott.emp
                 where deptno = 20
                   and job = 'CLERK')

This works, but I have to duplicate the test deptno = 20 and job = 'CLERK', which I would like to avoid. Is there a…
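One way to state the filter only once is a window function: rank the rows by salary inside the filtered set, then keep rank 1. A sketch using SQLite from Python (Oracle accepts the same `RANK() OVER (...)` syntax); the sample rows are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (ename TEXT, deptno INT, job TEXT, sal INT);
INSERT INTO emp VALUES
  ('SMITH', 20, 'CLERK', 800), ('ADAMS', 20, 'CLERK', 1100),
  ('MILLER', 10, 'CLERK', 1300), ('FORD', 20, 'ANALYST', 3000);
""")

# RANK() keeps ties, so multiple employees sharing the top salary all survive.
rows = conn.execute("""
  SELECT ename, sal FROM (
    SELECT ename, sal,
           RANK() OVER (ORDER BY sal DESC) AS rnk
    FROM emp
    WHERE deptno = 20 AND job = 'CLERK'   -- the predicate appears exactly once
  ) WHERE rnk = 1
""").fetchall()
print(rows)  # [('ADAMS', 1100)]
```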

Product Aggregate in PostgreSQL

淺唱寂寞╮ submitted on 2019-12-23 12:27:19
Question: I am trying to create an aggregate for product (*) in PostgreSQL. The type of my column is "double precision". So I tried:

    CREATE AGGREGATE nmul(numeric) (
        sfunc = numeric_mul,
        stype = numeric
    );

When I run my query, the result is:

    ERROR: function nmul(double precision) does not exist
    LINE 4: CAST(nmul("cote") AS INT),

Thank you

Answer 1: I found a solution from a very smart guy, who realized you can use logarithms to achieve this (credit goes to him):

    select exp(sum(ln(x))) from generate_series(1…
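The logarithm trick in the answer rests on the identity product(x) = exp(sum(ln(x))), which holds for strictly positive values (ln is undefined at 0 and for negatives; those cases need a CASE expression to handle the sign and short-circuit to 0). A quick check of the identity in Python:

```python
import math

# product(x) == exp(sum(ln(x))) for strictly positive x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
product = math.exp(sum(math.log(x) for x in xs))
print(product)  # ~120.0, up to floating-point rounding
```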

MySQL - Referencing aggregate column in where clause

自古美人都是妖i submitted on 2019-12-23 10:54:25
Question: This seems so simple, but I can't figure it out without subqueries (which slow the query down significantly: it takes almost 10 seconds instead of <1). Let's say I have a table of sent documents, and I want to select the ones that have been updated since they were last sent, plus the ones that have never been sent.

    SELECT d.document_id, max(sd.document_sent_date) as last_sent_date
    FROM documents d
    LEFT JOIN sent_documents sd ON d.document_id = sd.document_id
    WHERE last…
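An aggregate alias cannot appear in WHERE, because WHERE is evaluated before grouping; HAVING is evaluated after, so the alias is usable there. A sketch with SQLite from Python (MySQL likewise accepts aliases in HAVING); the schema and sample rows are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE documents (document_id INT, updated_date INT);
CREATE TABLE sent_documents (document_id INT, document_sent_date INT);
INSERT INTO documents VALUES (1, 100), (2, 200), (3, 300);
INSERT INTO sent_documents VALUES (1, 150), (2, 150);
""")

# HAVING runs after GROUP BY, so it can reference the aggregate's alias.
rows = conn.execute("""
  SELECT d.document_id, MAX(sd.document_sent_date) AS last_sent_date
  FROM documents d
  LEFT JOIN sent_documents sd ON d.document_id = sd.document_id
  GROUP BY d.document_id
  HAVING last_sent_date IS NULL OR last_sent_date < d.updated_date
""").fetchall()
print(rows)
# doc 2 (updated after last send) and doc 3 (never sent) survive the filter
```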

Measure application performance by aggregating SQL audit records

南楼画角 submitted on 2019-12-23 10:25:28
Question: Suppose there is a simple audit table with two columns (in production there are more):

    ID | Date

When a request is processed, we add a record to this table. Requests are processed in batches; a batch can contain any number of items, and we add one record per item. There is at least a 2-second delay between batches (the number is configurable). Performance is measured by how fast we can process requests per unit of time, for example, per second. Consider this…
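A straightforward way to turn such audit records into a per-second rate is to truncate each timestamp to second granularity and count rows per bucket; the peak is then just the max over buckets. A sketch using SQLite from Python with hypothetical data (the inter-batch delay shows up as missing seconds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE audit (id INTEGER PRIMARY KEY, date TEXT);
INSERT INTO audit (date) VALUES
  ('2019-12-23 10:00:00'), ('2019-12-23 10:00:00'),
  ('2019-12-23 10:00:01'),
  ('2019-12-23 10:00:04');
""")

# Group by the timestamp truncated to whole seconds and count requests per bucket.
rows = conn.execute("""
  SELECT strftime('%Y-%m-%d %H:%M:%S', date) AS second, COUNT(*) AS requests
  FROM audit
  GROUP BY second
  ORDER BY second
""").fetchall()
print(rows)
# Seconds with no records (the delay between batches) simply don't appear;
# average throughput should divide by wall-clock span, not by bucket count.
```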

MySQL query - find “new” users per day

女生的网名这么多〃 submitted on 2019-12-23 10:16:59
Question: I have a table of data with the following fields:

    EventID : Int, auto-increment, primary key
    EventType : Int  ' defines what happened
    EventTimeStamp : DateTime  ' when the event happened
    UserID : Int  ' unique

The query needs to tell me how many events occurred with new UserIDs for each day in the whole set. So, for each day: how many events have a UserID that doesn't appear on any prior day? I've tried a lot, and I can get unique users per day, but I can't work out how to get 'NEW' users…
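One way to frame "new users per day": a user is new on the day of their first event, so take min(date) per UserID, then count users per first-seen day. A sketch with SQLite from Python over hypothetical sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (EventID INTEGER PRIMARY KEY,
                     EventTimeStamp TEXT, UserID INT);
INSERT INTO events (EventTimeStamp, UserID) VALUES
  ('2019-12-01', 1), ('2019-12-01', 2),
  ('2019-12-02', 1), ('2019-12-02', 3),
  ('2019-12-03', 2), ('2019-12-03', 4);
""")

# Inner query: each user's first-seen day.  Outer query: users per first day.
rows = conn.execute("""
  SELECT first_day, COUNT(*) AS new_users FROM (
    SELECT UserID, MIN(date(EventTimeStamp)) AS first_day
    FROM events
    GROUP BY UserID
  )
  GROUP BY first_day
  ORDER BY first_day
""").fetchall()
print(rows)
# [('2019-12-01', 2), ('2019-12-02', 1), ('2019-12-03', 1)]
```

Users 1 and 2 both debut on 12-01; their later events don't count again.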

Aggregate for one entity

微笑、不失礼 submitted on 2019-12-23 10:06:02
Question: In Domain-Driven Design, if I want to use a repository I need an aggregate for it, as I understand it. So I have a User with an id, login, email, and password. The User is a domain Entity with a unique id. When I want to add a User to the User repository, should I first build an Aggregate whose only member is the Aggregate Root, my User entity, and nothing more? That looks like a proxy to User in this case, an unneeded layer. Or maybe I missed something here? Maybe User isn't an Entity, even if it looks like…
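A common reading of DDD is that an aggregate may consist of a single entity: the entity itself is the aggregate root, and the repository accepts it directly, with no wrapper class. A minimal sketch in Python (all names hypothetical, in-memory storage standing in for a real persistence layer):

```python
from dataclasses import dataclass

@dataclass
class User:
    # User is both the entity and the aggregate root; no wrapper is needed.
    id: int
    login: str
    email: str
    password_hash: str

class UserRepository:
    """Repository working directly with the User aggregate root."""
    def __init__(self):
        self._storage = {}

    def add(self, user: User) -> None:
        self._storage[user.id] = user

    def get(self, user_id: int) -> User:
        return self._storage[user_id]

repo = UserRepository()
repo.add(User(1, "alice", "alice@example.com", "hashed"))
print(repo.get(1).login)  # alice
```

A separate wrapper only earns its keep once the aggregate has to guard invariants across several entities.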

DDD: Persisting aggregates

狂风中的少年 submitted on 2019-12-23 09:59:29
Question: Let's consider the typical Order and OrderItem example. Assuming that OrderItem is part of the Order Aggregate, it can only be added via Order. So, to add a new OrderItem to an Order, we have to load the entire Aggregate via the Repository, add the new item to the Order object, and persist the entire Aggregate again. This seems to have a lot of overhead. What if our Order has 10 OrderItems? Then, just to add a new OrderItem, not only do we have to read 10 OrderItems, but we must also re…
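The usual justification for that overhead is that the root is the only place where cross-item invariants can be enforced; loading the whole aggregate is the price of being able to check them (ORMs soften the write cost with dirty tracking, so unchanged items are not rewritten). A minimal sketch in Python, with a hypothetical per-order item limit as the invariant:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OrderItem:
    sku: str
    quantity: int

@dataclass
class Order:
    # Order is the aggregate root; OrderItems are mutated only through it,
    # so invariants like the item limit live in exactly one place.
    order_id: int
    items: List[OrderItem] = field(default_factory=list)

    MAX_ITEMS = 50  # hypothetical invariant that requires seeing all items

    def add_item(self, sku: str, quantity: int) -> None:
        if len(self.items) >= self.MAX_ITEMS:
            raise ValueError("order is full")
        self.items.append(OrderItem(sku, quantity))

order = Order(order_id=7)
order.add_item("ABC-1", 2)
order.add_item("XYZ-9", 1)
print(len(order.items))  # 2
```

If no such invariant exists, that is often a sign OrderItem could be its own aggregate with a reference to the Order.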