aggregate-functions

Create nested JSON from SQL query (Postgres 9.4)

不想你离开。 Submitted on 2019-12-07 03:13:03
Question: I need to get fully structured JSON as the result of a query. I can see that Postgres has some built-in functions that may be useful. As an example, I created a structure as follows:

```sql
-- Table: person
-- DROP TABLE person;
CREATE TABLE person (
  id integer NOT NULL,
  name character varying(30),
  CONSTRAINT person_pk PRIMARY KEY (id)
)
WITH (OIDS=FALSE);
ALTER TABLE person OWNER TO postgres;

-- Table: car
-- DROP TABLE car;
CREATE TABLE car (
  id integer NOT NULL,
  type character varying(30)
  ...
```
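On 9.4 the pieces that usually answer this are json_build_object, json_agg and row_to_json. A minimal sketch, assuming car gets an owner_id column referencing person.id (the excerpt cuts off before any relationship between the tables is shown):

```sql
-- Nest each person's cars as a JSON array; owner_id is an assumed column,
-- since the car table definition above is truncated.
SELECT json_build_object(
         'id',   p.id,
         'name', p.name,
         'cars', COALESCE(
                   json_agg(json_build_object('id', c.id, 'type', c.type))
                     FILTER (WHERE c.id IS NOT NULL),
                   '[]'::json)
       ) AS person_json
FROM person p
LEFT JOIN car c ON c.owner_id = p.id
GROUP BY p.id, p.name;
```

The FILTER clause keeps people without cars from getting an array of [null].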

MySQL - Max() returns wrong result

佐手、 Submitted on 2019-12-06 17:26:27
I tried this query on a MySQL server (5.1.41):

```sql
SELECT max(volume), dateofclose, symbol, volume, close, market
FROM daily
GROUP BY market;
```

I got this result:

```
max(volume)  dateofclose  symbol   volume   close  market
287031500    2010-07-20   AA.P     500      66.41  AMEX
242233000    2010-07-20   AACC     16200    3.98   NASDAQ
1073538000   2010-07-20   A        4361000  27.52  NYSE
2147483647   2010-07-20   AAAE.OB  400      0.01   OTCBB
437462400    2010-07-20   AAB.TO   31400    0.37   TSX
61106320     2010-07-20   AA.V     0        0.24   TSXV
```

As you can see, the maximum volume is VERY different from the 'real' value of the volume column?!? The volume column is defined as int(11)…
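Nothing is wrong with MAX() here: under MySQL's traditional loose GROUP BY handling, the non-aggregated columns (symbol, volume, close, …) are taken from an arbitrary row of each group, so they generally do not come from the row that holds the maximum volume. (The 2147483647 for OTCBB is suspicious on its own, too: it is the int(11) maximum, so that stored value likely overflowed.) A sketch of the usual fix, joining back to the per-group maximum:

```sql
-- Find each market's maximum volume, then pull the full matching row(s).
SELECT d.*
FROM daily d
JOIN (SELECT market, MAX(volume) AS max_volume
      FROM daily
      GROUP BY market) m
  ON m.market = d.market
 AND d.volume = m.max_volume;
```

If two rows in a market tie on volume, both come back.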

How to Calculate Aggregated Product Function in SQL Server

允我心安 Submitted on 2019-12-06 16:11:07
I have a table with the following columns:

```
No.  Name  Serial
1    Tom   2
2    Bob   5
3    Don   3
4    Jim   6
```

I want to add a column whose content is the running product of the Serial column, like this:

```
No.  Name  Serial  Multiply
1    Tom   2       2
2    Bob   5       10
3    Don   3       30
4    Jim   6       180
```

How can I do that?

Oh, this is a pain. Most databases do not support a product aggregation function. You can emulate it with logs and powers. So, something like this might work:

```sql
select t.*,
       (select exp(sum(log(serial)))
        from table t2
        where t2.no <= t.no
       ) as cumeProduct
from table t;
```

Note that log() might be called ln() in some databases. Also, this only works for positive values…
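Since the question targets SQL Server, the correlated subquery can become a window aggregate on 2012 and later. A sketch, assuming the table is named t (the question never names it):

```sql
-- EXP(SUM(LOG(...))) emulates a running PRODUCT over the window;
-- ROUND trims the floating-point drift back to whole numbers.
SELECT [No.], Name, Serial,
       ROUND(EXP(SUM(LOG(Serial))
                 OVER (ORDER BY [No.] ROWS UNBOUNDED PRECEDING)), 0) AS Multiply
FROM t;
```

The same positive-values caveat applies; zeros and negatives need CASE logic around LOG.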

Grouping Events in Postgres

戏子无情 Submitted on 2019-12-06 15:19:18
I've got an events table that is generated by user activity on a site:

```
timestamp | name
7:00 AM   | ...
7:01 AM   | ...
7:02 AM   | ...
7:30 AM   | ...
7:31 AM   | ...
7:32 AM   | ...
8:01 AM   | ...
8:03 AM   | ...
8:05 AM   | ...
8:08 AM   | ...
8:09 AM   | ...
```

I'd like to aggregate over the events to provide a view of when a user is active. I'm defining "active" to mean the period in which events fall within +/- 2 minutes of each other. For the above, that would mean:

```
from    | till
7:00 AM | 7:02 AM
7:30 AM | 7:32 AM
8:01 AM | 8:05 AM
8:08 AM | 8:09 AM
```

What's the best way to write a query that aggregates that way? Is it…
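This is the classic gaps-and-islands pattern. A sketch of the usual window-function approach, assuming the table is named events with a ts timestamp column: mark each row that starts a new activity period (gap from the previous event greater than 2 minutes), turn the markers into group numbers with a running sum, then take MIN/MAX per group:

```sql
WITH marked AS (
  SELECT ts,
         -- 1 when more than 2 minutes passed since the previous event
         CASE WHEN ts - lag(ts) OVER (ORDER BY ts) > interval '2 minutes'
              THEN 1 ELSE 0 END AS new_group
  FROM events
),
grouped AS (
  SELECT ts, SUM(new_group) OVER (ORDER BY ts) AS grp
  FROM marked
)
SELECT MIN(ts) AS "from", MAX(ts) AS till
FROM grouped
GROUP BY grp
ORDER BY "from";
```

("from" is quoted because it is a reserved word.)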

SQLite - Help optimizing aggregate total of previous rows with multiple conditions

瘦欲@ Submitted on 2019-12-06 13:16:55
I'm trying to get conditional SUMs of the Value column for each record in the table, over all of the "previous" records that share the current record's Category field value and Approved field value, divided into negative and positive sums. In my program, users can create document records in any order, so "previous" is defined as:

- If Approved = TRUE, "previous" records have an older ApprovedDate field value than the current record. If the ApprovedDate field values are the same, "previous" records have a lower DocumentNumber field value.
- If Approved = FALSE, "previous" records…
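The excerpt cuts off before the FALSE rule, so here is a sketch of the TRUE case only, with an assumed table name Documents; TOTAL() is used instead of SUM() so rows with no predecessors get 0 rather than NULL:

```sql
-- Positive and negative running totals over "previous" approved documents
-- in the same Category (Documents and Value are assumed names).
SELECT d.*,
       (SELECT TOTAL(CASE WHEN p.Value > 0 THEN p.Value ELSE 0 END)
        FROM Documents p
        WHERE p.Category = d.Category
          AND p.Approved = 1
          AND (p.ApprovedDate < d.ApprovedDate
               OR (p.ApprovedDate = d.ApprovedDate
                   AND p.DocumentNumber < d.DocumentNumber))) AS PositiveSum,
       (SELECT TOTAL(CASE WHEN p.Value < 0 THEN p.Value ELSE 0 END)
        FROM Documents p
        WHERE p.Category = d.Category
          AND p.Approved = 1
          AND (p.ApprovedDate < d.ApprovedDate
               OR (p.ApprovedDate = d.ApprovedDate
                   AND p.DocumentNumber < d.DocumentNumber))) AS NegativeSum
FROM Documents d
WHERE d.Approved = 1;
```

Each row triggers two correlated scans, so an index on (Category, Approved, ApprovedDate, DocumentNumber) is usually what makes this tolerable.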

SQL - Select 'n' greatest elements in group

懵懂的女人 Submitted on 2019-12-06 12:22:56
Question: The SQL MAX aggregate function lets you select the top element in a group. Is there a way to select the top n elements for each group? For instance, if I had a table of users that held their division and rank, and I wanted the top two users per division...

```
userId | division | rank
1      | 1        | 1
2      | 1        | 2
3      | 1        | 3
4      | 2        | 3
```

I would want the query to somehow return users 2, 3 and 4. If it matters, I'm using MySQL.

Answer 1:

```sql
select *
from users as t1
where (select count(*)
       from users as t2
       where t1.division = t2.division
         and t2.rank > t1.rank) < 2;
```
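The subquery counts how many users in the same division outrank each row and keeps only rows with fewer than two. On MySQL 8+ the same result reads more directly with a window function (rank needs backticks there, since it became a reserved word):

```sql
SELECT userId, division, `rank`
FROM (SELECT userId, division, `rank`,
             ROW_NUMBER() OVER (PARTITION BY division
                                ORDER BY `rank` DESC) AS rn
      FROM users) ranked
WHERE rn <= 2;
```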

Performance of UDAF versus Aggregator in Spark

浪子不回头ぞ Submitted on 2019-12-06 12:09:29
I am trying to write some performance-minded code in Spark and am wondering whether I should write an Aggregator or a user-defined aggregate function (UDAF) for my rollup operations on a DataFrame. I have not been able to find any data anywhere on how fast each of these methods is, or on which you should be using for Spark 2.0+.

Source: https://stackoverflow.com/questions/45356452/performance-of-udaf-versus-aggregator-in-spark

SQL-style GROUP BY aggregate functions in jq (COUNT, SUM, etc.)

江枫思渺然 Submitted on 2019-12-06 08:40:15
Similar questions asked here before:

- Count items for a single key: jq count the number of items in json by a specific key
- Calculate the sum of object values: How do I sum the values in an array of maps in jq?

Question: How can we emulate the COUNT aggregate function so that it behaves like its SQL original? Let's extend the question to cover other common SQL aggregates as well:

- COUNT
- SUM / MAX / MIN / AVG
- ARRAY_AGG

The last one is not a standard SQL function - it's from PostgreSQL, but it is quite useful. The input is a stream of valid JSON objects. For demonstration, let's pick a simple…
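The truncated excerpt never shows its sample data, so here is a sketch against assumed input objects of the shape {"k": "a", "v": 1}: group_by does the GROUP BY, and length/add/max/min supply the aggregates:

```jq
# Run with: jq -n -f aggregates.jq data.json
[inputs]                                    # slurp the object stream
| group_by(.k)                              # GROUP BY k
| map({
    key:       .[0].k,
    count:     length,                      # COUNT(*)
    sum:       (map(.v) | add),             # SUM(v)
    max:       (map(.v) | max),             # MAX(v)
    min:       (map(.v) | min),             # MIN(v)
    avg:       ((map(.v) | add) / length),  # AVG(v)
    array_agg: map(.v)                      # ARRAY_AGG(v)
  })
```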

Aggregate function to detect trend in PostgreSQL

百般思念 Submitted on 2019-12-06 07:48:44
Question: I'm using a psql DB to store a data structure like so: datapoint(userId, rank, timestamp), where timestamp is the Unix epoch milliseconds timestamp. In this structure I store the rank of each user each day, so it's like:

```
UserId  Rank  Timestamp
1       1     1435366459
1       2     1435366458
1       3     1435366457
2       8     1435366456
2       6     1435366455
2       7     1435366454
```

So, in the sample data above, userId 1 is improving its rank with each measurement, which means it has a positive trend, while userId 2 is dropping in rank, which…
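PostgreSQL's built-in linear-regression aggregates fit this directly. A sketch with regr_slope: regressing rank against time gives a negative slope when the rank number is falling over time (improving) and a positive one when it is rising (dropping):

```sql
SELECT userId,
       regr_slope(rank, timestamp) AS slope,
       CASE
         WHEN regr_slope(rank, timestamp) < 0 THEN 'improving'
         WHEN regr_slope(rank, timestamp) > 0 THEN 'dropping'
         ELSE 'flat'
       END AS trend
FROM datapoint
GROUP BY userId;
```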

Is there any good way to build a comma-separated list in SQL Server?

纵然是瞬间 Submitted on 2019-12-06 06:10:01
In Firebird, there's an aggregate called List() that turns multiple results into a comma-separated string. This function does not appear to exist in SQL Server. Is there any equivalent to it that doesn't involve a big, long, ugly, slow workaround using FOR XML or building your own as a CLR UDF? (Yes, I know about those methods. I'm looking for something I might not be aware of.)

No, those are the workarounds you will have to use. We have been asking for a GROUP_CONCAT equivalent since 2006: http://connect.microsoft.com/SQLServer/feedback/details/247118/sql-needs-version-of-mysql-group-concat
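That answer has since been overtaken: SQL Server 2017 added STRING_AGG, a direct counterpart to Firebird's List() and MySQL's GROUP_CONCAT. A sketch with assumed table and column names:

```sql
-- SQL Server 2017+; employees, department and name are placeholder names.
SELECT department,
       STRING_AGG(name, ', ') WITHIN GROUP (ORDER BY name) AS members
FROM employees
GROUP BY department;
```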