aggregate-functions

Generate_series in Postgres from start and end date in a table

浪子不回头ぞ submitted on 2019-11-29 10:28:04
I have been trying to generate a series of dates (YYYY-MM-DD HH) from the first until the last date in a timestamp field. I've got the generate_series() I need, but I'm running into an issue when trying to grab the start and end dates from a table. I have the following to give a rough idea:

with date1 as (
    SELECT start_timestamp as first_date
    FROM header_table
    ORDER BY start_timestamp DESC
    LIMIT 1
), date2 as (
    SELECT start_timestamp as first_date
    FROM header_table
    ORDER BY start_timestamp ASC
    LIMIT 1
)
select generate_series(date1.first_date, date2.first_date, '1 hour'::interval)::timestamp
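A minimal sketch of one way to make this run, assuming the goal is one row per hour between the earliest and latest start_timestamp in header_table: a CTE's columns can only be referenced once the CTE appears in the FROM clause, and MIN()/MAX() make the two separate CTEs unnecessary.

-- Hypothetical rewrite; header_table and start_timestamp are taken from the question.
WITH bounds AS (
    SELECT MIN(start_timestamp) AS first_date,
           MAX(start_timestamp) AS last_date
    FROM   header_table
)
SELECT generate_series(first_date, last_date, '1 hour'::interval) AS hour_start
FROM   bounds;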

mysql grouping by week

懵懂的女人 submitted on 2019-11-29 09:25:20
Question: I have a table with the following fields: id, amount_sale, the_date (unix timestamp integer), payment_type (can be Cash, or Account). I am trying to create a query that will group all sales by each week of the year, and then show the sum of amount_sale for each week on my page. Example: week 1 = $26.00, week 2 = $35.00, week 3 = $49.00, etc. I'm using this query but it's not working:

SELECT SUM(`amount_sale`) as total
FROM `sales`
WHERE `payment_type` = 'Account'
GROUP BY WEEK(`the_date`)

Answer 1: If
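A minimal sketch of the usual fix, assuming the_date stores unix-epoch seconds: WEEK() expects a date or datetime value, so convert with FROM_UNIXTIME() first, and select the week number alongside the sum so each group is labelled.

SELECT WEEK(FROM_UNIXTIME(`the_date`)) AS week_no,
       SUM(`amount_sale`)              AS total
FROM   `sales`
WHERE  `payment_type` = 'Account'
GROUP  BY WEEK(FROM_UNIXTIME(`the_date`));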

ORDER BY Alias not working

时间秒杀一切 submitted on 2019-11-29 07:46:38
UPDATED QUESTION: ERROR: column "Fruits" does not exist. Running Postgres 7.4 (yeah, we are upgrading). Why can't I ORDER BY the column alias? It wants tof."TypeOfFruits" in the ORDER BY as well; why?

SELECT (CASE WHEN tof."TypeOfFruits" = 'A' THEN 'Apple'
             WHEN tof."TypeOfFruits" = 'P' THEN 'Pear'
             WHEN tof."TypeOfFruits" = 'G' THEN 'Grapes'
             ELSE 'Other' END) AS "Fruits",
       SUM(CASE WHEN r.order_date BETWEEN DATE_TRUNC('DAY', LOCALTIMESTAMP) AND DATE_TRUNC('DAY', LOCALTIMESTAMP) + INTERVAL '1 DAY' THEN 1 ELSE 0 END) AS daily,
       SUM(CASE WHEN r.order_date BETWEEN DATE_TRUNC('MONTH', LOCALTIMESTAMP) AND
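A minimal, self-contained sketch of one common workaround (the table and data below are hypothetical, since the original FROM clause is cut off above): refer to the select-list item by its ordinal position, or repeat the CASE expression, instead of using the quoted alias in GROUP BY / ORDER BY.

-- Hypothetical data just to make the sketch runnable.
CREATE TEMP TABLE tof ("TypeOfFruits" char(1));
INSERT INTO tof VALUES ('A'), ('P'), ('G'), ('X');

SELECT CASE WHEN tof."TypeOfFruits" = 'A' THEN 'Apple'
            WHEN tof."TypeOfFruits" = 'P' THEN 'Pear'
            WHEN tof."TypeOfFruits" = 'G' THEN 'Grapes'
            ELSE 'Other'
       END      AS "Fruits",
       COUNT(*) AS daily          -- stands in for the original SUM(CASE ...) columns
FROM   tof
GROUP  BY 1                       -- ordinal position instead of the alias
ORDER  BY 1;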

How to use array_agg() for varchar[]

家住魔仙堡 submitted on 2019-11-29 07:39:44
I have a column in our database called min_crew that has varying character arrays such as '{CA, FO, FA}'. I have a query where I'm trying to get aggregates of these arrays without success:

SELECT use.user_sched_id, array_agg(se.sched_entry_id) AS seids, array_agg(se.min_crew)
FROM base.sched_entry se
LEFT JOIN base.user_sched_entry use ON se.sched_entry_id = use.sched_entry_id
WHERE se.sched_entry_id = ANY(ARRAY[623, 625])
GROUP BY user_sched_id;

Both 623 and 625 have the same use.user_sched_id, so the result should be the grouping of the seids and the min_crew, but I just keep getting
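A minimal sketch of one common workaround, assuming the failure is that array_agg() cannot aggregate varchar[] values directly on this Postgres version: define a custom aggregate that concatenates the arrays with array_cat() instead of trying to nest them (on newer releases the argument and state type may need to be anycompatiblearray).

CREATE AGGREGATE array_cat_agg(anyarray) (
    SFUNC = array_cat,
    STYPE = anyarray
);

SELECT use.user_sched_id,
       array_agg(se.sched_entry_id) AS seids,
       array_cat_agg(se.min_crew)   AS min_crews
FROM   base.sched_entry se
LEFT   JOIN base.user_sched_entry use ON se.sched_entry_id = use.sched_entry_id
WHERE  se.sched_entry_id = ANY (ARRAY[623, 625])
GROUP  BY use.user_sched_id;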

SQL select elements where sum of field is less than N

限于喜欢 submitted on 2019-11-29 07:25:28
Given that I've got a table with the following, very simple content:

# select * from messages;
 id | verbosity
----+-----------
  1 |        20
  2 |        20
  3 |        20
  4 |        30
  5 |       100
(5 rows)

I would like to select N messages whose sum of verbosity is lower than Y (for testing purposes let's say it should be 70; the correct result will then be the messages with id 1, 2, 3). It's really important to me that the solution be database independent (it should work at least on Postgres and SQLite). I was trying something like:

SELECT * FROM messages GROUP BY id HAVING SUM(verbosity) < 70;

However it doesn't seem to
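A minimal sketch of a portable approach (runs on both Postgres and SQLite, assuming rows are taken in id order): keep each row whose running total of verbosity stays under the limit, computed with a correlated subquery.

SELECT m.*
FROM   messages m
WHERE  (SELECT SUM(m2.verbosity)
        FROM   messages m2
        WHERE  m2.id <= m.id) < 70
ORDER  BY m.id;

With the sample data the running totals are 20, 40, 60, 90, 190, so only ids 1, 2 and 3 qualify.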

No non-missing arguments warning when using min or max in reshape2

三世轮回 submitted on 2019-11-29 04:50:16
Question: I get the following warning when I use min or max in the dcast function from the reshape2 package. What is it telling me? I can't find anything that explains the warning message, and I'm a bit confused about why I get it when I use max but not when I use mean or other aggregate functions.

Warning message:
In .fun(.value[0], ...) : no non-missing arguments to min; returning Inf

Here's a reproducible example:

data(iris)
library(reshape2)
molten.iris <- melt(iris, id.var="Species")
summary(molten

Spark Scala - How to group dataframe rows and apply complex function to the groups?

馋奶兔 submitted on 2019-11-29 04:06:38
Question: I am trying to solve this super simple problem and I am already sick of it; I hope somebody can help me out with this. I have a dataframe shaped like this:

---------------------------
| Category   | Product_ID |
|------------+------------|
| a          | product 1  |
| a          | product 2  |
| a          | product 3  |
| a          | product 1  |
| a          | product 4  |
| b          | product 5  |
| b          | product 6  |
---------------------------

How do I group these rows by category and apply a complicated function in Scala? Maybe something like

Efficiently Include Column not in Group By of SQL Query

和自甴很熟 submitted on 2019-11-29 04:06:06
Given

Table A
  Id INTEGER
  Name VARCHAR(50)

Table B
  Id INTEGER
  FkId INTEGER  ; Foreign key to Table A

I wish to count the occurrences of each FkId value:

SELECT FkId, COUNT(FkId) FROM B GROUP BY FkId

Now I simply want to also output the Name from Table A. This will not work:

SELECT FkId, COUNT(FkId), a.Name FROM B b INNER JOIN A a ON a.Id=b.FkId GROUP BY FkId

because a.Name is not contained in the GROUP BY clause (it produces an "is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause" error). The point is to move from output like this: FkId Count
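A minimal sketch of one common fix: aggregate B on its own first, then join the counts back to A, so Name never has to appear in a GROUP BY clause (alternatively, adding a.Name to the GROUP BY gives the same result here, because Name is functionally dependent on A.Id).

SELECT a.Id  AS FkId,
       a.Name,
       cnt.n AS FkCount
FROM   A a
INNER  JOIN (SELECT FkId, COUNT(*) AS n
             FROM   B
             GROUP  BY FkId) cnt ON cnt.FkId = a.Id;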

SQL: tuple comparison

与世无争的帅哥 submitted on 2019-11-29 03:38:28
In my current application, I need to be able to do this type of query:

SELECT MIN((colA, colB, colC))
FROM mytable
WHERE (colA, colB, colC) BETWEEN (200, 'B', 'C') AND (1000, 'E', 'F')

and get the answer (333, 'B', 'B'), given this data:

+------+------+------+
| colA | colB | colC |
+------+------+------+
|   99 | A    | A    |
|  200 | A    | Z    |
|  200 | B    | B    |
|  333 | B    | B    |
|  333 | C    | D    |
|  333 | C    | E    |
|  333 | D    | C    |
| 1000 | E    | G    |
| 1000 | F    | A    |
+------+------+------+

What is the most efficient way to accomplish this in real SQL? Please keep in mind that this is a toy example, and that
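A minimal sketch, assuming a database such as PostgreSQL that supports row-value (row constructor) comparisons: filter with explicit tuple comparisons, then take the smallest qualifying tuple by ordering on the three columns.

SELECT colA, colB, colC
FROM   mytable
WHERE  (colA, colB, colC) >= (200, 'B', 'C')
  AND  (colA, colB, colC) <= (1000, 'E', 'F')
ORDER  BY colA, colB, colC
LIMIT  1;

With the sample data this returns (333, 'B', 'B'), since (200, 'B', 'B') sorts below the lower bound (200, 'B', 'C').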

mysql count the sum of all rows

谁都会走 submitted on 2019-11-29 03:35:56
I have a mysql table that has a number of rows, and each row has a field called "value"; the field's value differs from row to row. What I want is to select all the rows and compute the sum of all the "value" fields. Any idea?

Do you mean like this?

SELECT SUM(value) FROM myTable

If you have multiple columns to return, simply add each non-aggregated (i.e., not summed) column to the GROUP BY clause:

SELECT firstName, lastName, SUM(value) FROM myTable GROUP BY firstName, lastName

SELECT SUM(value) as total FROM table; $row['total'];

SELECT SUM(`value`) FROM `your_table`

SELECT SUM(value) FROM YourTable