window-functions

Select row number in Postgres

Submitted by 旧街凉风 on 2019-11-27 10:03:32
Question: How do I select a row number in Postgres? I tried this:

```sql
select row_number() over (ORDER BY cgcode_odc_mapping_id) as rownum,
       cgcode_odc_mapping_id
from access_odc.access_odc_mapping_tb
order by cgcode_odc_mapping_id
```

and got this error:

```
ERROR: syntax error at or near "over"
LINE 1: select row_number() over (ORDER BY cgcode_odc_mapping_id)as
```

I have checked these pages: How to show row numbers in PostgreSQL query? This is my query:

```sql
select row_number() over (ORDER BY cgcode_odc_mapping_id) as rownum
```
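The query's syntax is valid; a "syntax error at or near \"over\"" from PostgreSQL usually means the server predates 8.4, the first release with window functions. As a sanity check, the same query shape can be run against SQLite (3.25+) via Python's `sqlite3` — a sketch with a made-up mapping table, not the asker's schema:

```python
import sqlite3

# In-memory database; window functions require SQLite >= 3.25.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mapping (cgcode_odc_mapping_id INTEGER)")
con.executemany("INSERT INTO mapping VALUES (?)", [(30,), (10,), (20,)])

rows = con.execute(
    """
    SELECT row_number() OVER (ORDER BY cgcode_odc_mapping_id) AS rownum,
           cgcode_odc_mapping_id
    FROM mapping
    ORDER BY cgcode_odc_mapping_id
    """
).fetchall()
print(rows)  # [(1, 10), (2, 20), (3, 30)]
```

On an old PostgreSQL server the portable fallback is a correlated subquery counting rows with a smaller key.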

Using windowing functions in Spark

Submitted by 佐手、 on 2019-11-27 09:34:29
I am trying to use rowNumber on Spark DataFrames. My queries work as expected in the Spark shell, but when I write them out in Eclipse and compile a jar, I get this error:

```
16/03/23 05:52:43 ERROR ApplicationMaster: User class threw exception: org.apache.spark.sql.AnalysisException: Could not resolve window function 'row_number'. Note that, using window functions currently requires a HiveContext;
```

My queries:

```scala
import org.apache.spark.sql
```

Window Functions: last_value(ORDER BY … ASC) same as last_value(ORDER BY … DESC)

Submitted by 谁说我不能喝 on 2019-11-27 09:18:22
Sample data:

```sql
CREATE TABLE test (id integer, session_ID integer, value integer);
INSERT INTO test (id, session_ID, value) VALUES
  (0, 2, 100),
  (1, 2, 120),
  (2, 2, 140),
  (3, 1, 900),
  (4, 1, 800),
  (5, 1, 500);
```

Current query:

```sql
select id,
       last_value(value) over (partition by session_ID order by id) as last_value_window,
       last_value(value) over (partition by session_ID order by id desc) as last_value_window_desc
from test
ORDER BY id
```

I was running into a problem with the last_value() window function: http://sqlfiddle.com/#!15/bcec0/2 In the fiddle I am trying to work with the sort direction within the
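The surprise here is the default window frame: with an ORDER BY but no frame clause, the frame is `RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW`, so it ends at the current row and `last_value()` just returns the current row's value, whichever way you sort. Specifying an explicit full frame changes that. A sketch using the question's own data, run against SQLite (3.25+) via Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test (id INTEGER, session_id INTEGER, value INTEGER)")
con.executemany(
    "INSERT INTO test VALUES (?, ?, ?)",
    [(0, 2, 100), (1, 2, 120), (2, 2, 140), (3, 1, 900), (4, 1, 800), (5, 1, 500)],
)

# Default frame ends at the current row, so last_value == the current value.
default_frame = con.execute(
    """
    SELECT id, last_value(value) OVER (PARTITION BY session_id ORDER BY id)
    FROM test ORDER BY id
    """
).fetchall()

# Explicit full frame: last_value really is the partition's last value.
full_frame = con.execute(
    """
    SELECT id, last_value(value) OVER (
        PARTITION BY session_id ORDER BY id
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
    FROM test ORDER BY id
    """
).fetchall()

print(default_frame)  # [(0, 100), (1, 120), (2, 140), (3, 900), (4, 800), (5, 500)]
print(full_frame)     # [(0, 140), (1, 140), (2, 140), (3, 500), (4, 500), (5, 500)]
```

The same default-frame rule applies in PostgreSQL, so this explains the fiddle's behavior as well.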

T-SQL calculate moving average

Submitted by て烟熏妆下的殇ゞ on 2019-11-27 08:53:46
I am working with SQL Server 2008 R2, trying to calculate a moving average. For each record in my view, I would like to collect the values of the 250 previous records and then calculate the average of that selection. My view columns are as follows:

```
TransactionID | TimeStamp           | Value | MovAvg
----------------------------------------------------
1             | 01.09.2014 10:00:12 | 5     |
2             | 01.09.2014 10:05:34 | 3     |
...
300           | 03.09.2014 09:00:23 | 4     |
```

TransactionID is unique. For each TransactionID, I would like to calculate the average of the Value column over the previous 250 records. So for TransactionID 300
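On SQL Server 2012+ this is a window frame, `ROWS BETWEEN 250 PRECEDING AND CURRENT ROW`; 2008 R2 does not support frame clauses, so there a correlated subquery or `OUTER APPLY` is needed instead. The frame approach can be sketched with SQLite via Python's `sqlite3`, using a 2-row lookback and tiny made-up data for brevity (swap in 250 for the real case; whether the current row counts toward the 250 is ambiguous in the question, so adjust the frame bounds to taste):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tx (transaction_id INTEGER, value REAL)")
con.executemany("INSERT INTO tx VALUES (?, ?)",
                [(1, 5.0), (2, 3.0), (3, 4.0), (4, 8.0)])

# Average over the current row and the 2 preceding rows,
# ordered by the unique transaction_id.
rows = con.execute(
    """
    SELECT transaction_id,
           AVG(value) OVER (ORDER BY transaction_id
                            ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS mov_avg
    FROM tx
    """
).fetchall()
print(rows)  # [(1, 5.0), (2, 4.0), (3, 4.0), (4, 5.0)]
```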

Window functions: PARTITION BY one column after ORDER BY another

Submitted by 佐手、 on 2019-11-27 07:23:08
Question: Disclaimer: the problem shown here is much more general than I first expected. The example below is taken from a solution to another question, but since then I have been using this sample to solve many more problems, mostly related to time series (have a look at the "Linked" section in the right bar). So let me first explain the problem more generally: I am using PostgreSQL, but I am sure this problem also exists in other DBMSs that support window functions (MS SQL Server, Oracle, ...). Window
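The question text is cut off here, but the general pattern the title describes — partitioning by consecutive runs of one column when the rows are ordered by another — is commonly solved with the difference-of-row-numbers ("gaps and islands") trick: the difference between a global row number and a per-value row number is constant within each consecutive run. A sketch with an illustrative table (not from the question), run against SQLite via Python's `sqlite3`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE series (ts INTEGER, state TEXT)")
con.executemany("INSERT INTO series VALUES (?, ?)",
                [(1, "up"), (2, "up"), (3, "down"), (4, "up"), (5, "up")])

# grp is constant within each consecutive run of the same state,
# so (state, grp) can be used in a later PARTITION BY or GROUP BY.
rows = con.execute(
    """
    SELECT ts, state,
           row_number() OVER (ORDER BY ts)
         - row_number() OVER (PARTITION BY state ORDER BY ts) AS grp
    FROM series
    ORDER BY ts
    """
).fetchall()
print(rows)
# [(1, 'up', 0), (2, 'up', 0), (3, 'down', 2), (4, 'up', 1), (5, 'up', 1)]
```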

Window functions and more “local” aggregation

Submitted by ↘锁芯ラ on 2019-11-27 06:52:32
Question: Suppose I have this table:

```
select * from window_test;
 k | v
---+---
 a | 1
 a | 2
 b | 3
 a | 4
```

Ultimately I want to get:

```
 k | min_v | max_v
---+-------+-------
 a | 1     | 2
 b | 3     | 3
 a | 4     | 4
```

But I would be just as happy to get this (since I can easily filter it with distinct):

```
 k | min_v | max_v
---+-------+-------
 a | 1     | 2
 a | 1     | 2
 b | 3     | 3
 a | 4     | 4
```

Is it possible to achieve this with PostgreSQL 9.1+ window functions? I'm trying to understand whether I can get it to use a separate partition for the
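The first desired result can be produced with the difference-of-row-numbers trick. The question's table has no explicit ordering column, so the sketch below adds an `id` to pin down the row order (an assumption; the original relies on insertion order). Run against SQLite (3.25+) via Python's `sqlite3`; the SQL itself works unchanged on PostgreSQL 9.1+:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE window_test (id INTEGER, k TEXT, v INTEGER)")
con.executemany("INSERT INTO window_test VALUES (?, ?, ?)",
                [(1, "a", 1), (2, "a", 2), (3, "b", 3), (4, "a", 4)])

# grp = global row number minus per-k row number: constant within each
# consecutive run of k, so grouping by (k, grp) yields one row per run.
rows = con.execute(
    """
    WITH grouped AS (
        SELECT k, v,
               row_number() OVER (ORDER BY id)
             - row_number() OVER (PARTITION BY k ORDER BY id) AS grp
        FROM window_test
    )
    SELECT k, min(v) AS min_v, max(v) AS max_v
    FROM grouped
    GROUP BY k, grp
    ORDER BY min_v
    """
).fetchall()
print(rows)  # [('a', 1, 2), ('b', 3, 3), ('a', 4, 4)]
```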

OVER clause in Oracle

Submitted by 依然范特西╮ on 2019-11-27 06:28:58
What is the meaning of the OVER clause in Oracle?

The OVER clause specifies the partitioning, ordering, and window "over which" the analytic function operates. For example, this calculates a moving average:

```sql
AVG(amt) OVER (ORDER BY date ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING)
```

```
date   amt   avg_amt
=====  ====  =======
1-Jan  10.0  10.5
2-Jan  11.0  17.0
3-Jan  30.0  17.0
4-Jan  10.0  18.0
5-Jan  14.0  12.0
```

It operates over a moving window (3 rows wide) of the rows, ordered by date. This calculates a running balance:

```sql
SUM(amt) OVER (ORDER BY date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
```

```
date   amt   sum_amt
```
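Both frames from the answer can be verified on the same five amounts. A sketch using SQLite (3.25+) via Python's `sqlite3`, with an integer day column standing in for the date values; the SQL works the same way in Oracle:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ledger (day INTEGER, amt REAL)")
con.executemany("INSERT INTO ledger VALUES (?, ?)",
                [(1, 10.0), (2, 11.0), (3, 30.0), (4, 10.0), (5, 14.0)])

rows = con.execute(
    """
    SELECT day,
           -- moving average over a 3-row window centered on the current row
           AVG(amt) OVER (ORDER BY day
                          ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS avg_amt,
           -- running balance from the first row through the current row
           SUM(amt) OVER (ORDER BY day
                          ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS sum_amt
    FROM ledger
    """
).fetchall()
print(rows)
# [(1, 10.5, 10.0), (2, 17.0, 21.0), (3, 17.0, 51.0), (4, 18.0, 61.0), (5, 12.0, 75.0)]
```

Note how the moving-average column matches the answer's table exactly, including the shorter frames at the edges.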

Spark and SparkSQL: How to imitate window function?

Submitted by 一笑奈何 on 2019-11-27 04:34:52
Question: Given a dataframe df:

```
id | date
---------------
1  | 2015-09-01
2  | 2015-09-01
1  | 2015-09-03
1  | 2015-09-04
2  | 2015-09-04
```

I want to create a running counter or index, grouped by the same id and sorted by date within that group, thus:

```
id | date       | counter
--------------------------
1  | 2015-09-01 | 1
1  | 2015-09-03 | 2
1  | 2015-09-04 | 3
2  | 2015-09-01 | 1
2  | 2015-09-04 | 2
```

This is something I can achieve with a window function, e.g.:

```scala
val w = Window.partitionBy("id").orderBy("date")
val
```
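The logic the window function encapsulates — sort by the partition key and the ordering key, then number rows within each group — can be imitated directly. This plain-Python sketch shows that logic on the question's data; it is not the distributed RDD-based solution the question ultimately needs, but it is what such a solution has to compute per partition:

```python
from itertools import groupby

rows = [(1, "2015-09-01"), (2, "2015-09-01"),
        (1, "2015-09-03"), (1, "2015-09-04"), (2, "2015-09-04")]

# Imitate row_number() OVER (PARTITION BY id ORDER BY date):
# sort by (id, date), then number the rows within each id group.
ordered = sorted(rows, key=lambda r: (r[0], r[1]))
result = []
for _, group in groupby(ordered, key=lambda r: r[0]):
    for counter, (id_, date) in enumerate(group, start=1):
        result.append((id_, date, counter))
print(result)
# [(1, '2015-09-01', 1), (1, '2015-09-03', 2), (1, '2015-09-04', 3),
#  (2, '2015-09-01', 1), (2, '2015-09-04', 2)]
```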

MySQL: using correct syntax for the OVER clause

Submitted by 橙三吉。 on 2019-11-27 04:06:51
Question: What is the correct syntax to get the OVER clause to work in MySQL? I would like to see the total SMSs sent by each user without grouping with a GROUP BY clause.

```sql
SELECT username, count(sentSmsId) OVER (userId)
FROM sentSmsTable, userTable
WHERE userId = sentUserId;
```

Answer 1: There is no OVER clause in MySQL that I know of, but here is a link that might assist you in accomplishing the same results: http://explainextended.com/2009/03/10/analytic-functions-first_value-last_value-lead-lag/ Hope
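That answer predates MySQL 8.0, which does support window functions; the OVER clause also needs `PARTITION BY`, not a bare column name. The corrected query, sketched against SQLite via Python's `sqlite3` for testability (tiny made-up rows modeled on the question's tables; the same SQL works in MySQL 8.0+):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE userTable (userId INTEGER, username TEXT);
    CREATE TABLE sentSmsTable (sentSmsId INTEGER, sentUserId INTEGER);
    INSERT INTO userTable VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO sentSmsTable VALUES (10, 1), (11, 1), (12, 2);
""")

# OVER (PARTITION BY ...) attaches the per-user total to every row,
# unlike GROUP BY, which would collapse the rows.
rows = con.execute(
    """
    SELECT username,
           count(sentSmsId) OVER (PARTITION BY userId) AS total_sms
    FROM sentSmsTable JOIN userTable ON userId = sentUserId
    """
).fetchall()
print(sorted(set(rows)))  # [('alice', 2), ('bob', 1)]
```

On pre-8.0 MySQL, the equivalent is a join against a grouped subquery of per-user counts.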

Referencing current row in FILTER clause of window function

Submitted by 拥有回忆 on 2019-11-27 02:42:21
Question: In PostgreSQL 9.4, window functions gained the option of a FILTER clause to select a subset of the window frame for processing. The documentation mentions it but provides no example. An online search yields some samples, including from 2ndQuadrant, but all that I found were rather trivial examples with constant expressions. What I am looking for is a filter expression that includes the value of the current row. Assume I have a table with a bunch of columns, one of which is of date type: col1 |
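A FILTER condition is evaluated against each row being aggregated, not against the frame's current row, so it cannot directly compare aggregated rows to the current row's values. One portable workaround is a correlated scalar subquery, where the condition can reference the outer (current) row freely. A sketch with a hypothetical events table (SQLite via Python's `sqlite3`; `julianday` is SQLite's date arithmetic — PostgreSQL would subtract dates directly), counting, for each row, earlier events within 2 days of it:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER, d TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?)",
                [(1, "2024-01-01"), (2, "2024-01-02"), (3, "2024-01-05")])

# Correlated subquery: for each row e, count earlier events no more than
# 2 days before e.d. The condition references the current row (e.d),
# which a FILTER clause on a window aggregate cannot do.
rows = con.execute(
    """
    SELECT e.id,
           (SELECT count(*) FROM events p
            WHERE p.d < e.d
              AND julianday(e.d) - julianday(p.d) <= 2) AS recent_prior
    FROM events e
    ORDER BY e.id
    """
).fetchall()
print(rows)  # [(1, 0), (2, 1), (3, 0)]
```

For purely row- or range-based "recent" windows, an explicit frame clause (`ROWS`/`RANGE ... PRECEDING`) is usually the cheaper alternative.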