window-functions

GROUP BY and aggregate sequential numeric values

帅比萌擦擦* submitted on 2019-12-17 03:16:43

Question: Using PostgreSQL 9.0. Let's say I have a table containing the fields company, profession and year. I want to return a result which contains unique companies and professions, but aggregates years (into an array is fine) based on numeric sequence:

Example table:

    +---------+------------+------+
    | company | profession | year |
    +---------+------------+------+
    | Google  | Programmer | 2000 |
    | Google  | Sales      | 2000 |
    | Google  | Sales      | 2001 |
    | Google  | Sales      | 2002 |
    | Google  | Sales      | 2004 |
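The usual trick for collapsing consecutive years is the "gaps and islands" pattern: subtracting ROW_NUMBER() from the year yields a constant within each consecutive run, which can then be grouped on. Below is a minimal sketch using Python's sqlite3 (window functions need SQLite ≥ 3.25); the table mirrors the example above, and in PostgreSQL the same subquery works with array_agg(year) in the outer SELECT.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employment(company TEXT, profession TEXT, year INTEGER);
INSERT INTO employment VALUES
  ('Google', 'Programmer', 2000),
  ('Google', 'Sales',      2000),
  ('Google', 'Sales',      2001),
  ('Google', 'Sales',      2002),
  ('Google', 'Sales',      2004);
""")

# year - ROW_NUMBER() is constant within a consecutive run of years,
# so grouping on it splits each (company, profession) into "islands".
rows = conn.execute("""
    SELECT company, profession, MIN(year) AS start_year, MAX(year) AS end_year
    FROM (
        SELECT company, profession, year,
               year - ROW_NUMBER() OVER (PARTITION BY company, profession
                                         ORDER BY year) AS island
        FROM employment
    )
    GROUP BY company, profession, island
    ORDER BY company, profession, start_year
""").fetchall()

for r in rows:
    print(r)
# → ('Google', 'Programmer', 2000, 2000)
#   ('Google', 'Sales', 2000, 2002)
#   ('Google', 'Sales', 2004, 2004)
```

Note how 2004 ends up in its own island because 2003 is missing from the run.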

Grouping with partition and over in TSql

醉酒当歌 submitted on 2019-12-14 04:02:58

Question: I have a simple table

    CREATE TABLE [dbo].[Tooling](
        [Id] [int] IDENTITY(1,1) NOT NULL,
        [Name] [nvarchar](50) NOT NULL,
        [Status] [int] NOT NULL,
        [DateFinished] [datetime] NULL,
        [Tooling] [nvarchar](50) NULL,
        [Updated] [datetime] NULL,
    ) ON [PRIMARY]

with the following values

    SET IDENTITY_INSERT [dbo].[Tooling] ON
    GO
    INSERT [dbo].[Tooling] ([Id], [Name], [Status], [DateFinished], [Tooling], [Updated]) VALUES (1, N'Large', 0, NULL, NULL, CAST(N'2015-05-05 00:00:00.000' AS DateTime))
    GO
    INSERT [dbo].
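The question text is cut off above, but the usual shape of PARTITION BY / OVER grouping on a table like this is numbering rows per group and keeping one of them, e.g. the most recently updated row per Name. A sketch using Python's sqlite3 standing in for T-SQL (the ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...) syntax is the same in both); the row values are illustrative, not from the original insert script.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Tooling(Id INTEGER PRIMARY KEY, Name TEXT, Status INTEGER, Updated TEXT);
INSERT INTO Tooling VALUES
  (1, 'Large', 0, '2015-05-05'),
  (2, 'Large', 1, '2015-06-01'),
  (3, 'Small', 0, '2015-05-10');
""")

# Number the rows within each Name, newest first, then keep row 1.
rows = conn.execute("""
    SELECT Id, Name, Status, Updated
    FROM (
        SELECT Id, Name, Status, Updated,
               ROW_NUMBER() OVER (PARTITION BY Name ORDER BY Updated DESC) AS rn
        FROM Tooling
    )
    WHERE rn = 1
    ORDER BY Name
""").fetchall()

for r in rows:
    print(r)
# → (2, 'Large', 1, '2015-06-01')
#   (3, 'Small', 0, '2015-05-10')
```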

ROW_NUMBER() shows unexpected values

情到浓时终转凉″ submitted on 2019-12-14 02:36:07

Question: My table has values like (RowCount is generated by the query below):

    ID      Date_trans  Time_trans Price RowCount
    ------- ----------- ---------- ----- --------
    1699093 22-Feb-2011 09:30:00   58.07 1
    1699094 22-Feb-2011 09:30:00   58.08 1
    1699095 22-Feb-2011 09:30:00   58.08 2
    1699096 22-Feb-2011 09:30:00   58.08 3
    1699097 22-Feb-2011 09:30:00   58.13 1
    1699098 22-Feb-2011 09:30:00   58.13 2
    1699099 22-Feb-2011 09:30:00   58.12 1
    1699100 22-Feb-2011 09:30:08   58.13 3
    1699101 22-Feb-2011 09:30:09   57.96 1
    1699102
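The value 3 on row 1699100 is expected behaviour: PARTITION BY Price groups by the price value wherever it occurs, not by contiguous runs, so the third occurrence of 58.13 continues the count started several rows earlier. If the intent is to count within consecutive runs of the same price, the difference of two row numbers identifies each run. A sketch using Python's sqlite3 (the window semantics match SQL Server here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE trades(id INTEGER, price REAL);
INSERT INTO trades VALUES
  (1699093, 58.07), (1699094, 58.08), (1699095, 58.08), (1699096, 58.08),
  (1699097, 58.13), (1699098, 58.13), (1699099, 58.12), (1699100, 58.13);
""")

rows = conn.execute("""
    SELECT id, price,
           ROW_NUMBER() OVER (PARTITION BY price ORDER BY id)      AS by_value,
           ROW_NUMBER() OVER (PARTITION BY price, run ORDER BY id) AS by_run
    FROM (
        -- run is constant within each consecutive stretch of equal prices.
        SELECT id, price,
               ROW_NUMBER() OVER (ORDER BY id)
             - ROW_NUMBER() OVER (PARTITION BY price ORDER BY id) AS run
        FROM trades
    )
    ORDER BY id
""").fetchall()

# For id 1699100: by_value resumes the earlier 58.13 count, by_run starts fresh.
print(rows[-1])  # → (1699100, 58.13, 3, 1)
```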

PostgreSQL: detecting the first/last rows of result set

爷,独闯天下 submitted on 2019-12-13 08:19:21

Question: Is there any way to embed a flag in a select that indicates that it is the first or the last row of a result set? I'm thinking of something to the effect of:

    SELECT is_first_row() AS f, is_last_row() AS l FROM blah;
     f | l
    -----------
     t | f
     f | f
     f | f
     f | f
     f | t

The answer might be in window functions, but I've only just learned about them, and I question their efficiency.

    SELECT first_value(unique_column) OVER () = unique_column,
           last_value(unique_column)  OVER () = unique_column,
           *
    FROM blah;
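Window functions can produce exactly these flags: ROW_NUMBER() marks the first row, and comparing it with COUNT(*) OVER () marks the last. (The last_value variant also works, but in PostgreSQL it needs an explicit ROWS BETWEEN ... AND UNBOUNDED FOLLOWING frame once an ORDER BY is involved, since the default frame stops at the current row.) A sketch using Python's sqlite3, where booleans come back as 0/1:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE blah(unique_column INTEGER);
INSERT INTO blah VALUES (10), (20), (30);
""")

# One shared window: first row has ROW_NUMBER() = 1,
# last row has ROW_NUMBER() = total row count.
rows = conn.execute("""
    SELECT unique_column,
           (ROW_NUMBER() OVER w) = 1                AS is_first,
           (ROW_NUMBER() OVER w) = COUNT(*) OVER () AS is_last
    FROM blah
    WINDOW w AS (ORDER BY unique_column)
    ORDER BY unique_column
""").fetchall()

for r in rows:
    print(r)
# → (10, 1, 0)
#   (20, 0, 0)
#   (30, 0, 1)
```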

SQL: partition over two columns

只愿长相守 submitted on 2019-12-12 22:19:14

Question: I have the following table:

    ---------------------
    | No1 | No2 | Amount
    ---------------------
    |  A  |  B  | 10
    |  C  |  D  | 20
    |  B  |  A  | 30
    |  D  |  C  | 40
    ---------------------

and I want to sum over a partition by both columns (No1, No2), but it should also group rows when the values in the two columns are swapped. For example: AB = BA. This would be my expected result:

    -----------------------------------------
    | No1 | No2 | Sum(Amount) over partition
    -----------------------------------------
    | A   | B   |
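Normalising the pair so that the smaller value always comes first makes (A,B) and (B,A) land in the same partition. In PostgreSQL or SQL Server that is PARTITION BY least(No1, No2), greatest(No1, No2); the sketch below uses Python's sqlite3, where the two-argument scalar min()/max() play the same role:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transfers(No1 TEXT, No2 TEXT, Amount INTEGER);
INSERT INTO transfers VALUES
  ('A', 'B', 10), ('C', 'D', 20), ('B', 'A', 30), ('D', 'C', 40);
""")

# Partition on the order-independent form of the pair: (min, max).
rows = conn.execute("""
    SELECT No1, No2,
           SUM(Amount) OVER (PARTITION BY MIN(No1, No2), MAX(No1, No2)) AS pair_sum
    FROM transfers
    ORDER BY No1
""").fetchall()

for r in rows:
    print(r)
# → ('A', 'B', 40)
#   ('B', 'A', 40)
#   ('C', 'D', 60)
#   ('D', 'C', 60)
```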

PostgreSQL: getting ordinal rank (row index? ) efficiently

拟墨画扇 submitted on 2019-12-12 21:06:43

Question: You have a table like so:

    id  dollars  dollars_rank  points  points_rank
    1   20       1             35      1
    2   18       2             30      3
    3   10       3             33      2

I want a query that updates the table's rank columns (dollars_rank and points_rank) to set the rank for the given ID, which is just the row's index for that ID sorted by the relevant column in descending order. How best to do this in PostgreSQL?

Answer 1: @OMG_Ponies already pointed it out: the window function dense_rank() is what you need, or maybe rank(). The UPDATE could look like this
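The ranking part of the answer can be seen in isolation: one DENSE_RANK() per sort column, both computed in a single pass. A sketch with Python's sqlite3 on the sample data; in PostgreSQL this subquery would typically feed an UPDATE ... FROM to write the ranks back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE scores(id INTEGER, dollars INTEGER, points INTEGER);
INSERT INTO scores VALUES (1, 20, 35), (2, 18, 30), (3, 10, 33);
""")

# Two independent windows: one rank per column, descending.
rows = conn.execute("""
    SELECT id,
           DENSE_RANK() OVER (ORDER BY dollars DESC) AS dollars_rank,
           DENSE_RANK() OVER (ORDER BY points  DESC) AS points_rank
    FROM scores
    ORDER BY id
""").fetchall()

for r in rows:
    print(r)
# → (1, 1, 1)
#   (2, 2, 3)
#   (3, 3, 2)
```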

Window functions filter through current row

不想你离开。 submitted on 2019-12-12 16:08:11

Question: This is a follow-up to this question, where my query was improved to use window functions instead of aggregates inside a LATERAL join. While the query is now much faster, I've found that the results are not correct. I need to perform computations on x-year trailing time frames. For example, price_to_maximum_earnings is computed per row by taking max(earnings) from ten years ago up to the current row, and dividing price by the result. We'll use 1 year for simplicity here. SQL Fiddle for this
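Correctness problems with trailing time frames usually come down to the frame clause: a ROWS frame counts physical rows, while RANGE BETWEEN n PRECEDING AND CURRENT ROW bounds the window by the ORDER BY value itself, which is what "the last year" means. A sketch with Python's sqlite3 (RANGE frames with an offset need SQLite ≥ 3.28); the 1-year trailing max(earnings) follows the simplified example in the question, with illustrative numbers:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fundamentals(year INTEGER, price REAL, earnings REAL);
INSERT INTO fundamentals VALUES
  (2000, 100.0, 10.0), (2001, 100.0, 5.0),
  (2002, 100.0, 20.0), (2003, 100.0, 4.0);
""")

# RANGE bounds the frame by year value: current year and the one before it.
rows = conn.execute("""
    SELECT year,
           price / MAX(earnings) OVER (
               ORDER BY year
               RANGE BETWEEN 1 PRECEDING AND CURRENT ROW
           ) AS price_to_max_earnings
    FROM fundamentals
    ORDER BY year
""").fetchall()

for r in rows:
    print(r)
# → (2000, 10.0)
#   (2001, 10.0)
#   (2002, 5.0)
#   (2003, 5.0)
```

In 2003 the divisor is still 20.0 because 2002's earnings remain inside the 1-year trailing frame.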

PARTITION BY alternative in HSQLDB

只愿长相守 submitted on 2019-12-12 09:59:55

Question: I would like to fire the query suggested in https://stackoverflow.com/a/3800572/2968357 on an HSQLDB database using select *, such as

    WITH tmpTable AS (
        SELECT p.*,
               ROW_NUMBER() OVER (PARTITION BY p.groupColumn ORDER BY p.groupColumn DESC) AS rowCount
        FROM sourceTable p)
    SELECT * FROM tmpTable WHERE tmpTable.rowCount = 1

but I am getting the following error:

    Caused by: org.hsqldb.HsqlException: unexpected token: PARTITION required: )

meaning PARTITION BY is not supported. Is there a work-around for
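When PARTITION BY is unavailable, the row-per-group pick can be rewritten as a join against a grouped copy of the table, which is plain SQL-92 and should run on engines without window support. A sketch with Python's sqlite3; the groupColumn/sortColumn names are illustrative. Note that the original ORDER BY p.groupColumn DESC inside the partition sorts by the partition key itself and so picks an arbitrary row; a real tiebreak column is needed either way.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sourceTable(groupColumn TEXT, sortColumn INTEGER, payload TEXT);
INSERT INTO sourceTable VALUES
  ('a', 1, 'first-a'), ('a', 2, 'later-a'), ('b', 3, 'only-b');
""")

# One row per groupColumn: join back on the per-group maximum of sortColumn.
rows = conn.execute("""
    SELECT p.groupColumn, p.sortColumn, p.payload
    FROM sourceTable p
    JOIN (SELECT groupColumn, MAX(sortColumn) AS m
          FROM sourceTable
          GROUP BY groupColumn) q
      ON p.groupColumn = q.groupColumn AND p.sortColumn = q.m
    ORDER BY p.groupColumn
""").fetchall()

for r in rows:
    print(r)
# → ('a', 2, 'later-a')
#   ('b', 3, 'only-b')
```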

Oracle Lag function with dynamic parameter

喜你入骨 submitted on 2019-12-12 08:06:34

Question: I have a specific problem. I have a table which contains invalid values. I need to replace the invalid values (here 0) with the previous value which is bigger than 0. The difficulty is that it is not appropriate for me to use an UPDATE or an INSERT (a cursor and an update would do it), so my only option is a SELECT statement. When I use the lag(col1, 1) function with a CASE expression, I only get one column with the correct value.

    select col1, col2 realcol2,
           (case when col2 = 0 then lag(col2,1,1) over
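LAG with a fixed offset cannot reach past a run of consecutive zeros, which is why only some rows come out right. In Oracle the textbook fix is LAST_VALUE(... IGNORE NULLS): NULL out the zeros, then take the last non-NULL value over a preceding frame, all inside one SELECT, e.g. LAST_VALUE(NULLIF(col2, 0) IGNORE NULLS) OVER (ORDER BY id). The sketch below shows the same carry-forward idea in Python's sqlite3, which lacks IGNORE NULLS, using a running count of valid rows to form groups; column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings(id INTEGER, col2 INTEGER);
INSERT INTO readings VALUES (1, 5), (2, 0), (3, 0), (4, 7), (5, 0);
""")

# Each valid (> 0) value opens a new group; zeros inherit that group's value.
rows = conn.execute("""
    SELECT id, col2,
           MAX(CASE WHEN col2 > 0 THEN col2 END) OVER (PARTITION BY grp) AS filled
    FROM (
        SELECT id, col2,
               COUNT(CASE WHEN col2 > 0 THEN 1 END)
                   OVER (ORDER BY id) AS grp
        FROM readings
    )
    ORDER BY id
""").fetchall()

for r in rows:
    print(r)
# → (1, 5, 5)
#   (2, 0, 5)
#   (3, 0, 5)
#   (4, 7, 7)
#   (5, 0, 7)
```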

Does hibernate support count(*) over()

て烟熏妆下的殇ゞ submitted on 2019-12-12 07:58:10

Question: I'm trying to avoid having to create one separate query for the count and one for the actual query. What I found is that SessionImpl::createQuery takes a considerable amount of time for a complex query, and by combining the count and the main query I can eliminate one createQuery call. In SQL I can do something like

    select count(*) over(), col_A, col_B
    from TABLE_XX
    where col_C > 1000

Can this be achieved in Hibernate? (I'm trying to avoid native SQL and stick to HQL and detached criteria. Using
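The SQL mechanic itself is straightforward: COUNT(*) OVER () with an empty window attaches the total row count of the filtered result to every row, so a single round trip returns both the data and the count. Whether it is reachable from HQL depends on the Hibernate version; the sketch below only demonstrates the SQL side, using Python's sqlite3 with illustrative table and column names.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_xx(col_a TEXT, col_b TEXT, col_c INTEGER);
INSERT INTO table_xx VALUES
  ('x', 'p', 2000), ('y', 'q', 500), ('z', 'r', 3000);
""")

# The empty OVER () spans the whole filtered result set,
# so every surviving row carries the same total.
rows = conn.execute("""
    SELECT COUNT(*) OVER () AS total, col_a, col_b
    FROM table_xx
    WHERE col_c > 1000
    ORDER BY col_a
""").fetchall()

for r in rows:
    print(r)
# → (2, 'x', 'p')
#   (2, 'z', 'r')
```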