aggregate-functions

Is there a standard for SQL aggregate function calculation?

Submitted by 感情迁移 on 2019-12-23 10:48:08
Question: Is there a standard on SQL implementations for multiple calls to the same aggregate function in the same query? For example, consider the following query, based on a popular example schema:

    SELECT Customer, SUM(OrderPrice)
    FROM Orders
    GROUP BY Customer
    HAVING SUM(OrderPrice) > 1000

Presumably, it takes computation time to calculate the value of SUM(OrderPrice). Is this cost incurred for each reference to the aggregate function, or is the result stored for a particular query? Or, is there no
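For comparison, here is a minimal runnable sketch of the query above, using SQLite through Python's sqlite3 module (the Orders data is invented). In practice most engines evaluate identical aggregate expressions once per group rather than once per reference, though the standard does not spell this out.

```python
import sqlite3

# Hypothetical Orders table mirroring the question's example schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (Customer TEXT, OrderPrice INTEGER)")
conn.executemany(
    "INSERT INTO Orders VALUES (?, ?)",
    [("Hansen", 1600), ("Nilsen", 700), ("Hansen", 2000),
     ("Jensen", 300), ("Nilsen", 100)],
)

# The same aggregate appears in both SELECT and HAVING; planners
# typically compute SUM(OrderPrice) once per group and reuse it.
rows = conn.execute(
    "SELECT Customer, SUM(OrderPrice) FROM Orders "
    "GROUP BY Customer HAVING SUM(OrderPrice) > 1000"
).fetchall()
print(rows)  # [('Hansen', 3600)]
```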

Postgresql 9.3 - array_agg challenge

Submitted by 左心房为你撑大大i on 2019-12-23 10:41:48
Question: I'm trying to understand the array_agg function in PostgreSQL 9.3. I've put together a fun example for everyone who may be interested in participating. Any fan of American films from the 1980s may be familiar with the "brat pack", who appeared in many hit films together. Using the information about the brat pack films on Wikipedia, I've created tables that, when joined together, can tell us who worked with each other -- if we have the right query!

    /* See: http://en.wikipedia.org/wiki/Brat_Pack
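The question's tables are not shown here, so the sketch below invents a small film/role schema to illustrate the idea. array_agg() is PostgreSQL-specific; SQLite's group_concat() is a rough analog that collapses each group into one delimited string.

```python
import sqlite3

# Invented schema standing in for the question's brat-pack tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE film (film_id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE role (film_id INTEGER, actor TEXT);
INSERT INTO film VALUES (1, 'The Breakfast Club'), (2, 'St. Elmo''s Fire');
INSERT INTO role VALUES
  (1, 'Emilio Estevez'), (1, 'Judd Nelson'),
  (2, 'Emilio Estevez'), (2, 'Rob Lowe');
""")

# One row per film with its cast aggregated into a single string,
# where PostgreSQL would use array_agg(actor).
rows = conn.execute("""
    SELECT f.title, group_concat(r.actor, ', ') AS actors
    FROM film f JOIN role r ON r.film_id = f.film_id
    GROUP BY f.title
    ORDER BY f.title
""").fetchall()
for title, actors in rows:
    print(title, "->", actors)
```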

Need SQL optimization (maybe DISTINCT ON is the reason?)

Submitted by 主宰稳场 on 2019-12-23 09:57:32
Question: Related, preceding question: Select a random entry from a group after grouping by a value (not column)? My current query looks like this:

    WITH points AS (
        SELECT unnest(array_of_points) AS p
    ), gtps AS (
        SELECT DISTINCT ON (points.p) points.p, m.groundtruth
        FROM measurement m, points
        WHERE st_distance(m.groundtruth, points.p) < distance
        ORDER BY points.p, RANDOM()
    )
    SELECT DISTINCT ON (gtps.p, gtps.groundtruth, m.anchor_id)
        m.id, m.anchor_id, gtps.groundtruth, gtps.p
    FROM measurement m, gtps
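DISTINCT ON (col) ... ORDER BY col, RANDOM() is PostgreSQL-specific. A portable way to express "one random row per group" is a ROW_NUMBER() window function, sketched below in SQLite (3.25+) with an invented table; the real query's PostGIS parts are omitted.

```python
import sqlite3

# Invented measurement table; "grp" stands in for the question's
# grouping value (a point, in the original).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurement (id INTEGER, grp TEXT)")
conn.executemany("INSERT INTO measurement VALUES (?, ?)",
                 [(1, "a"), (2, "a"), (3, "b"), (4, "b"), (5, "b")])

# Number the rows of each group in random order, then keep row 1:
# the portable equivalent of DISTINCT ON (grp) ... ORDER BY grp, RANDOM().
rows = conn.execute("""
    SELECT id, grp FROM (
        SELECT id, grp,
               ROW_NUMBER() OVER (PARTITION BY grp ORDER BY RANDOM()) AS rn
        FROM measurement
    ) WHERE rn = 1
""").fetchall()
print(rows)  # one random id per group, e.g. [(2, 'a'), (5, 'b')]
```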

COUNT and GROUP BY on text fields seems slow

Submitted by 一曲冷凌霜 on 2019-12-23 09:32:52
Question: I'm building a MySQL database which contains entries about special substrings of DNA in species of yeast. My table looks like this:

    +--------------+---------+------+-----+---------+-------+
    | Field        | Type    | Null | Key | Default | Extra |
    +--------------+---------+------+-----+---------+-------+
    | species      | text    | YES  | MUL | NULL    |       |
    | region       | text    | YES  | MUL | NULL    |       |
    | gene         | text    | YES  | MUL | NULL    |       |
    | startPos     | int(11) | YES  |     | NULL    |       |
    | repeatLength | int(11) | YES  |     | NULL    |       |
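The usual remedy for slow COUNT/GROUP BY on text columns is an index the grouping can walk; in MySQL a TEXT column needs a prefix length, e.g. ADD INDEX (species(20)). A minimal sketch of the pattern in SQLite, with made-up data:

```python
import sqlite3

# Invented sample of the table above (only the text columns).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE repeats (species TEXT, region TEXT, gene TEXT)")
conn.executemany("INSERT INTO repeats VALUES (?, ?, ?)",
                 [("S. cerevisiae", "chrI", "YAL001C"),
                  ("S. cerevisiae", "chrII", "YBL002W"),
                  ("S. pombe", "chrI", "SPAC1002")])

# An index on the grouped column lets the engine read the groups in
# order instead of sorting the whole table for every query.
conn.execute("CREATE INDEX idx_species ON repeats (species)")
rows = conn.execute(
    "SELECT species, COUNT(*) FROM repeats "
    "GROUP BY species ORDER BY species"
).fetchall()
print(rows)  # [('S. cerevisiae', 2), ('S. pombe', 1)]
```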

Get max average for each distinct record in a SQL Query

Submitted by 梦想与她 on 2019-12-23 09:32:39
Question: I have some tables that contain data about players and the games they have bowled this season in a bowling center during their leagues. This particular query is used to sort the top X averages this year for men and for women. I have all of this down, but I still have a problem in one particular case: when players play in multiple leagues, more than one of their averages can appear in the top X. Obviously, I only want to list the best average for a given player, so if Player A
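The standard shape of the fix is a greatest-per-group step: collapse each player to their single best average, then rank. A sketch with an invented league_avg table (names and numbers are illustrative, not the question's schema):

```python
import sqlite3

# Invented per-league averages; Player A bowls in two leagues.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE league_avg (player TEXT, league TEXT, avg_score REAL)")
conn.executemany("INSERT INTO league_avg VALUES (?, ?, ?)",
                 [("A", "Monday", 201.5), ("A", "Friday", 195.0),
                  ("B", "Monday", 198.2), ("C", "Friday", 188.7)])

# Collapse each player to one row (their best average), then rank,
# so a multi-league player appears at most once in the top X.
top = conn.execute("""
    SELECT player, MAX(avg_score) AS best
    FROM league_avg
    GROUP BY player
    ORDER BY best DESC
    LIMIT 2
""").fetchall()
print(top)  # [('A', 201.5), ('B', 198.2)]
```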

Average of datetime data type

Submitted by 与世无争的帅哥 on 2019-12-23 09:17:51
Question: I am trying to calculate the average of a few rows with a datetime data type (standard datetime format). How can I do that?

Answer 1: Convert the datetime to a float. SQL Server defines that as the number of days since 1900, so it should be fairly portable across its versions. For example:

    declare @t table (dt datetime)
    insert @t
    select '1950-01-01' union all
    select '1960-01-01'

    select cast(avg(cast(dt as float)) as datetime) from @t

The result is 1955-01-01. Example at SE Data.

Answer 2: This is how to get the
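The same idea works outside SQL: map each datetime to a number, average, and map back. Here Unix timestamps stand in for SQL Server's days-since-1900 float, reproducing the answer's 1950/1960 example:

```python
from datetime import datetime, timezone

# Convert each datetime to a number (seconds since the epoch),
# average the numbers, then convert the mean back to a datetime.
dts = [datetime(1950, 1, 1, tzinfo=timezone.utc),
       datetime(1960, 1, 1, tzinfo=timezone.utc)]
mean_ts = sum(dt.timestamp() for dt in dts) / len(dts)
avg_dt = datetime.fromtimestamp(mean_ts, tz=timezone.utc)
print(avg_dt.date())  # 1955-01-01
```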

Use an Aggregate function in Sort Expression

Submitted by ぃ、小莉子 on 2019-12-23 07:56:02
Question: I have a report which uses a dataset returned from a stored procedure. There are two key columns: Name and Value. I am using this dataset for two tablixes. The first is just a straightforward tablix displaying the data. The second groups the data on the Name column. I need to order this data based on the sum of the Value column. However, I get the following error:

    [rsAggregateInDataRowSortExpression] A sort expression for the tablix
    'table1' includes an aggregate function. Aggregate functions

PostgreSQL sum typecasting as a bigint

Submitted by 一世执手 on 2019-12-23 05:22:18
Question: I am doing the SUM() of an integer column and I want to typecast the result to a bigint, to avoid an out-of-range error. However, when I try to use sum(myvalue)::bigint it still gives me an out of range error. Is there anything that I can do to the query to get this to work? Or do I have to change the column type to a bigint?

Answer 1: The result is obviously bigger than what bigint could hold:

    -9223372036854775808 to +9223372036854775807

Postgres returns numeric in such a case. You shouldn't have to do
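The underlying point is that the cast has to happen on the inputs, not on the finished aggregate: in PostgreSQL, sum(myvalue::numeric) widens before summing, whereas sum(myvalue)::bigint only relabels a value already computed in a narrower type. A pure-Python sketch of the difference, simulating a 32-bit accumulator (the details are illustrative, not Postgres internals):

```python
def wrap32(n):
    """Simulate a 32-bit signed accumulator, as in a 4-byte INT column."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

values = [2_000_000_000, 2_000_000_000]

# Casting the finished sum is too late: the narrow accumulator
# has already wrapped around.
acc = 0
for v in values:
    acc = wrap32(acc + v)        # 32-bit arithmetic on every step
late_cast = int(acc)             # "::bigint" applied after the fact

# Widening the inputs first keeps the full value.
early_cast = sum(int(v) for v in values)

print(late_cast, early_cast)     # -294967296 4000000000
```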

Alternatives to array_agg()?

Submitted by 只谈情不闲聊 on 2019-12-23 04:43:50
Question: Is there any alternative to the PostgreSQL array_agg() function so that it doesn't return values in the format '{x,y,z}'? Can I have it return just 'x,y,z'?

Answer 1: In PostgreSQL 9.0 or later use string_agg(val, ','). It returns a string with delimiters of your choosing. array_agg(val) returns an array, no surprise there. The curly braces you see are an integral part of array literals -- the text representation of arrays. In older versions (or any version, really) you can substitute with array
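For older PostgreSQL versions the usual substitute is array_to_string(array_agg(val), ','). The same behavior can be sketched in SQLite, whose group_concat() acts like string_agg(): a plain delimited string, no array braces (the table here is invented):

```python
import sqlite3

# SQLite's group_concat() is the analog of PostgreSQL's string_agg():
# it returns one delimited string per group, without array braces.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val TEXT)")
conn.executemany("INSERT INTO t VALUES (?)", [("x",), ("y",), ("z",)])

(joined,) = conn.execute("SELECT group_concat(val, ',') FROM t").fetchone()
print(joined)  # e.g. x,y,z (group_concat order is not guaranteed)
```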

Implement aggregation in Teradata

Submitted by 爱⌒轻易说出口 on 2019-12-23 04:41:33
Question: I want to aggregate the two fields proct_dt and dw_job_id in ascending order. My scenario will be clearer from the queries and results below.

First query:

    sel * from scratch.COGIPF_RUNREPORT_test1
    where dw_job_id = 10309
    order by proct_dt, dw_job_id

Output:

        dw_job_id  proct_dt             start_ts             end_ts                      time_diff
    1   10,309     2018-03-06 00:00:00  2018-03-06 07:04:18  2018-03-06 07:04:22.457000  0
    2   10,309     2018-03-06 00:00:00  2018-03-06 06:58:50  2018-03-06 06:58:51.029000  0
    3   10,309     2018-03-07 00:00:00  2018-03-07 06:35