window-functions

How to use ROW_NUMBER() in UPDATE clause? [duplicate]

笑着哭i, submitted on 2020-04-10 06:47:05
Question: This question already has answers here: SQL Update with row_number() (8 answers). Closed 4 years ago.

ROW_NUMBER() can only be used in the SELECT clause in MS SQL Server, but I want to use it in an UPDATE, like the following:

    Update MyTab
    Set MyNo = 123 + ROW_NUMBER() over (Order By ID)
    Where a = b;

Then I get an error: "Windowed functions can only appear in the SELECT or ORDER BY clauses." How can I use ROW_NUMBER() in an UPDATE clause?

Answer 1:

    DECLARE @MyTable TABLE
    (
        ID INT IDENTITY(2,2) PRIMARY KEY,
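The usual fix (the one the linked duplicate shows) is to compute ROW_NUMBER() in a CTE or derived table and update through that. A minimal runnable sketch using Python's sqlite3, with an invented three-row table; SQLite cannot update a CTE the way SQL Server can, so the numbered rows are matched back with a correlated subquery instead:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE MyTab (ID INTEGER PRIMARY KEY, MyNo INTEGER)")
con.executemany("INSERT INTO MyTab (ID, MyNo) VALUES (?, ?)",
                [(10, 0), (20, 0), (30, 0)])

# Number the rows in a derived table, then look up each row's number
# from the UPDATE; on SQL Server you would UPDATE the CTE directly.
con.execute("""
    UPDATE MyTab
    SET MyNo = 123 + (
        SELECT rn
        FROM (SELECT ID, ROW_NUMBER() OVER (ORDER BY ID) AS rn
              FROM MyTab) AS numbered
        WHERE numbered.ID = MyTab.ID)
""")
rows = con.execute("SELECT ID, MyNo FROM MyTab ORDER BY ID").fetchall()
print(rows)  # -> [(10, 124), (20, 125), (30, 126)]
```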

FETCH / ROW_NUMBER(): first n percent from each BRANCH (BRANCH_NO). I am trying to retrieve the top 10 percent of each branch

被刻印的时光 ゝ, submitted on 2020-02-25 06:03:39
Question:

    SELECT e.EMPLOYEE_NO, e.FNAME, e.LNAME, b.BRANCH_NO, o.SUBTOTAL,
           PERCENT_RANK() OVER (PARTITION BY e.EMPLOYEE_NO
                                ORDER BY e.EMPLOYEE_NO ASC) AS percent
    FROM EMPLOYEE e
    INNER JOIN BRANCH b ON e.BRANCH_NO = b.BRANCH_NO
    INNER JOIN ORDERS o ON o.BRANCH_NO = b.BRANCH_NO
    ORDER BY b.BRANCH_NO
    FETCH FIRST 10 PERCENT ROWS ONLY;

I am trying to retrieve the top 10 percent of each branch. PL/SQL

Answer 1: You can use analytic functions as follows:

    Select employee_no, Fname, Lname, employee_total_order,
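One portable pattern is to rank rows inside each branch in an inline view and filter outside it; note the original query partitions by EMPLOYEE_NO, but to take a slice per branch the window should be partitioned by BRANCH_NO. A sketch with Python's sqlite3 and invented order data (two branches with 10 and 20 orders); on Oracle the same inline-view query works, and NTILE(10) = 1 is an alternative top-tenth cut:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (branch_no INTEGER, subtotal REAL)")
rows = [(1, s) for s in range(1, 11)] + [(2, s) for s in range(1, 21)]
con.executemany("INSERT INTO orders VALUES (?, ?)", rows)

# Rank each order within its branch, highest subtotal first, then keep
# the rows whose percentile rank falls in the top 10 percent.
top = con.execute("""
    SELECT branch_no, subtotal
    FROM (SELECT branch_no, subtotal,
                 PERCENT_RANK() OVER (PARTITION BY branch_no
                                      ORDER BY subtotal DESC) AS pr
          FROM orders)
    WHERE pr <= 0.10
    ORDER BY branch_no, subtotal DESC
""").fetchall()
print(top)  # -> [(1, 10.0), (2, 20.0), (2, 19.0)]
```

PERCENT_RANK() is (rank - 1) / (rows in partition - 1), so the threshold keeps slightly different row counts than FETCH FIRST 10 PERCENT would; pick the function that matches the exact cut you need.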

ROW_NUMBER query

你离开我真会死。, submitted on 2020-02-24 07:24:48
Question: I have a table:

    Trip  Stop  Time
    -----------------
    1     A     1:10
    1     B     1:16
    1     B     1:20
    1     B     1:25
    1     C     1:31
    1     B     1:40
    2     A     2:10
    2     B     2:17
    2     C     2:20
    2     B     2:25

I want to add one more column to my query output:

    Trip  Stop  Time  Sequence
    --------------------------
    1     A     1:10  1
    1     B     1:16  2
    1     B     1:20  2
    1     B     1:25  2
    1     C     1:31  3
    1     B     1:40  4
    2     A     2:10  1
    2     B     2:17  2
    2     C     2:20  3
    2     B     2:25  4

The hard part is B: if consecutive rows are both B, I want them to share the same sequence number; otherwise each counts as a new row. I know row_number over (partition
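The consecutive-duplicates requirement is a run-numbering problem: flag each row whose stop differs from the previous row's stop (via LAG), then take a running sum of the flags. A runnable sketch with Python's sqlite3 over the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stops (trip INTEGER, stop TEXT, t TEXT)")
data = [(1, "A", "1:10"), (1, "B", "1:16"), (1, "B", "1:20"),
        (1, "B", "1:25"), (1, "C", "1:31"), (1, "B", "1:40"),
        (2, "A", "2:10"), (2, "B", "2:17"), (2, "C", "2:20"),
        (2, "B", "2:25")]
con.executemany("INSERT INTO stops VALUES (?, ?, ?)", data)

# is_new = 1 whenever the stop changes from the previous row in the
# same trip; the running sum of is_new is the desired sequence number.
result = con.execute("""
    SELECT trip, stop, t,
           SUM(is_new) OVER (PARTITION BY trip ORDER BY t
                             ROWS UNBOUNDED PRECEDING) AS seq
    FROM (SELECT trip, stop, t,
                 CASE WHEN stop = LAG(stop) OVER (PARTITION BY trip
                                                  ORDER BY t)
                      THEN 0 ELSE 1 END AS is_new
          FROM stops)
    ORDER BY trip, t
""").fetchall()
print([r[3] for r in result])  # -> [1, 2, 2, 2, 3, 4, 1, 2, 3, 4]
```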

PySpark / Spark Window Function First/ Last Issue

放肆的年华, submitted on 2020-02-01 04:23:05
Question: From my understanding, the first/last functions in Spark retrieve the first/last row of each partition. I am not able to understand why the LAST function is giving incorrect results. This is my code:

    AgeWindow = Window.partitionBy('Dept').orderBy('Age')
    df1 = df1.withColumn('first(ID)', first('ID').over(AgeWindow))\
             .withColumn('last(ID)', last('ID').over(AgeWindow))
    df1.show()

    +---+----------+---+--------+--------------------------+-------------------------+
    |Age|      Dept| ID|    Name|first(ID)                 |last(ID
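This is the window-frame default at work, not a bug: once a window has an ORDER BY, its default frame runs from the start of the partition only up to the current row, so last() sees the current row as the frame's last row. In PySpark the fix is to widen the frame, e.g. `Window.partitionBy('Dept').orderBy('Age').rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)`. The same semantics can be demonstrated without Spark using Python's sqlite3 and a tiny invented table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (dept TEXT, age INTEGER, id INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                [("A", 30, 1), ("A", 40, 2), ("A", 50, 3)])

# With ORDER BY, the default frame ends at the current row, so
# LAST_VALUE returns the current row's id -- the "incorrect" result.
default_frame = con.execute("""
    SELECT LAST_VALUE(id) OVER (PARTITION BY dept ORDER BY age)
    FROM emp ORDER BY age
""").fetchall()

# Widening the frame to the whole partition gives the expected result.
full_frame = con.execute("""
    SELECT LAST_VALUE(id) OVER (PARTITION BY dept ORDER BY age
           ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
    FROM emp ORDER BY age
""").fetchall()
print(default_frame)  # -> [(1,), (2,), (3,)]
print(full_frame)     # -> [(3,), (3,), (3,)]
```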

Why can't I use dense_rank for SQL 'rank scores'?

爱⌒轻易说出口, submitted on 2020-01-30 08:07:56
Question: I'm using the dense_rank function in SQL to solve the LeetCode 'Rank Scores' problem (https://leetcode.com/problems/rank-scores/description/):

    select Score, dense_rank() over (order by Score) Rank
    from Scores
    order by Score desc

It always gives me the following error:

    Line 2: SyntaxError: near '(order by Score) Rank from Scores order by Score desc'

How can I make this answer correct? Thanks a lot! Also, I realized most people use an answer without the DENSE_RANK function, which is
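Two likely causes, inferred from the error text: LeetCode's MySQL runtime only supports window functions from MySQL 8.0 onward, and in MySQL 8 RANK became a reserved word, so the bare alias Rank needs quoting (backticks in MySQL). The ranking should also be descending so the highest score gets rank 1. A runnable sketch with Python's sqlite3 and invented scores:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Scores (Id INTEGER PRIMARY KEY, Score REAL)")
con.executemany("INSERT INTO Scores (Score) VALUES (?)",
                [(3.50,), (3.65,), (4.00,), (3.85,), (4.00,), (3.65,)])

# Rank descending so the top score gets rank 1; the alias "Rank" is
# quoted because RANK is a reserved word in MySQL 8 and other engines.
ranks = con.execute("""
    SELECT Score, DENSE_RANK() OVER (ORDER BY Score DESC) AS "Rank"
    FROM Scores
    ORDER BY Score DESC
""").fetchall()
print(ranks)
# -> [(4.0, 1), (4.0, 1), (3.85, 2), (3.65, 3), (3.65, 3), (3.5, 4)]
```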

Get percentage based on value in previous row

霸气de小男生, submitted on 2020-01-25 06:59:08
Question: For the given data:

    +---+--------+------+
    |CID| number | rum  |
    +---+--------+------+
    | 1 | 1.0000 | NULL |
    | 3 | 2.0000 | NULL |
    | 5 | 2.0000 | NULL |
    | 6 | 4.0000 | NULL |
    +---+--------+------+

I want to calculate rum as the percentage change between the current and previous number:

    rum = (currNumber - prevNumber) / prevNumber * 100

Expected result:

    +---+--------+------+
    |CID| number | rum  |
    +---+--------+------+
    | 1 | 1.0000 | NULL |
    | 3 | 2.0000 |100.0 |
    | 5 | 2.0000 |  0.0 |
    | 6 | 4.0000 |100.0 |
    +---+--------+------+
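LAG() is the standard tool here: it fetches the previous row's number and yields NULL on the first row, which matches the expected output. A runnable sketch with Python's sqlite3 over the question's data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (cid INTEGER PRIMARY KEY, number REAL)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [(1, 1.0), (3, 2.0), (5, 2.0), (6, 4.0)])

# (current - previous) / previous * 100; NULL propagates through the
# arithmetic, so the first row's rum stays NULL (None in Python).
rum = con.execute("""
    SELECT cid, number,
           (number - LAG(number) OVER (ORDER BY cid))
             / LAG(number) OVER (ORDER BY cid) * 100 AS rum
    FROM t ORDER BY cid
""").fetchall()
print(rum)
# -> [(1, 1.0, None), (3, 2.0, 100.0), (5, 2.0, 0.0), (6, 4.0, 100.0)]
```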

Tag consecutive non zero rows into distinct partitions?

走远了吗., submitted on 2020-01-25 05:38:25
Question: Suppose we have this simple schema and data:

    DROP TABLE #builds

    CREATE TABLE #builds (
        Id INT IDENTITY(1,1) NOT NULL,
        StartTime INT,
        IsPassed BIT
    )

    INSERT INTO #builds (StartTime, IsPassed) VALUES
    (1, 1), (7, 1), (10, 0), (15, 1), (21, 1),
    (26, 0), (34, 0), (44, 0), (51, 1), (60, 1)

    SELECT StartTime, IsPassed, NextStartTime,
           CASE IsPassed WHEN 1 THEN 0
                ELSE NextStartTime - StartTime
           END Duration
    FROM (
        SELECT LEAD(StartTime) OVER (ORDER BY StartTime) NextStartTime,
               StartTime, IsPassed
        FROM
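A common way to tag consecutive runs into distinct partitions is the gaps-and-islands trick: the difference between an overall row number and a per-value row number is constant within each consecutive run. A runnable sketch with Python's sqlite3 over the question's data; note that grp alone can repeat across different IsPassed values, so group by the pair (is_passed, grp) to identify each island:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE builds (start_time INTEGER, is_passed INTEGER)")
con.executemany("INSERT INTO builds VALUES (?, ?)",
                [(1, 1), (7, 1), (10, 0), (15, 1), (21, 1),
                 (26, 0), (34, 0), (44, 0), (51, 1), (60, 1)])

# grp = overall row number minus per-is_passed row number; it stays
# constant across each run of identical is_passed values.
runs = con.execute("""
    SELECT start_time, is_passed,
           ROW_NUMBER() OVER (ORDER BY start_time)
         - ROW_NUMBER() OVER (PARTITION BY is_passed
                              ORDER BY start_time) AS grp
    FROM builds ORDER BY start_time
""").fetchall()
print([r[2] for r in runs])  # -> [0, 0, 2, 1, 1, 4, 4, 4, 4, 4]
```

Grouping by (is_passed, grp) then yields five islands: two passes, one fail, three fails, two passes, matching the runs in the data.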

Redshift SQL: add and reset a counter with date and group considered

空扰寡人, submitted on 2020-01-24 20:50:46
Question: Suppose I have the table below. I'd like a counter that counts the number of times a Customer (there are many) is in Segment A. If the Customer jumps to a different Segment between two quarters, the counter resets when the Customer jumps back to Segment A. I am sure there are many ways to do it, but I just can't figure it out. Please help. Thank you!

    Quarter  Segment  Customer  *Counter*
    Q1 2018  A        A1        1
    Q2 2018  A        A1        2
    Q3 2018  A        A1        3
    Q4 2018  B        A1        1
    Q1 2019  B        A1        2
    Q2 2019  A        A1        1
    Q1 2020  A        A1
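One way to build a resetting counter is gaps-and-islands again: derive a run id per customer from the difference of two row numbers, then number the rows inside each run. A runnable sketch with Python's sqlite3, using a simplified integer qtr column standing in for the 'Q1 2018'-style labels (assumed to sort chronologically):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE seg (qtr INTEGER, segment TEXT, customer TEXT)")
con.executemany("INSERT INTO seg VALUES (?, ?, ?)",
                [(1, "A", "A1"), (2, "A", "A1"), (3, "A", "A1"),
                 (4, "B", "A1"), (5, "B", "A1"), (6, "A", "A1"),
                 (7, "A", "A1")])

# Pass 1: grp is constant within each consecutive (customer, segment)
# run. Pass 2: ROW_NUMBER within each run is the resetting counter.
counted = con.execute("""
    SELECT qtr, segment, customer,
           ROW_NUMBER() OVER (PARTITION BY customer, segment, grp
                              ORDER BY qtr) AS counter
    FROM (SELECT qtr, segment, customer,
                 ROW_NUMBER() OVER (PARTITION BY customer ORDER BY qtr)
               - ROW_NUMBER() OVER (PARTITION BY customer, segment
                                    ORDER BY qtr) AS grp
          FROM seg)
    ORDER BY qtr
""").fetchall()
print([r[3] for r in counted])  # -> [1, 2, 3, 1, 2, 1, 2]
```

On Redshift the same two-pass query works, since it supports the standard ROW_NUMBER window function.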
