amazon-redshift

3 Month Moving Average - Redshift SQL

给你一囗甜甜゛ submitted on 2021-02-19 05:06:34
Question: I am trying to create a 3-month moving average from some data I have, using Redshift SQL or Domo BeastMode (if anyone is familiar with that). The data is at day-to-day granularity but needs to be displayed by month, so the quotes/revenue need to be summarized by month and then a 3MMA calculated, excluding the current month. So, if the quote was in April, I would need the average of Jan, Feb, and Mar. The input data looks like this:

    Quote Date (MM/DD/YYYY)    Revenue
    3/24/2015
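A minimal Redshift SQL sketch of one way to approach this, assuming a table quotes(quote_date, revenue) (placeholder names, not from the post): roll the data up to month level first, then average the three prior months with a window frame that ends before the current row.

    -- Assumed table: quotes(quote_date DATE, revenue DECIMAL(18,2))
    WITH monthly AS (
        SELECT DATE_TRUNC('month', quote_date) AS quote_month,
               SUM(revenue) AS monthly_revenue
        FROM quotes
        GROUP BY 1
    )
    SELECT quote_month,
           monthly_revenue,
           -- average of the 3 previous months; the frame ends at 1 PRECEDING,
           -- so the current month is excluded as the question requires
           AVG(monthly_revenue) OVER (
               ORDER BY quote_month
               ROWS BETWEEN 3 PRECEDING AND 1 PRECEDING
           ) AS moving_avg_3m
    FROM monthly
    ORDER BY quote_month;

For months with fewer than three predecessors, AVG averages whatever rows the frame finds: the first month returns NULL and the next two are partial averages.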

Column does not exist error on CreateTable

久未见 submitted on 2021-02-11 15:08:39
Question: I'm trying to print out DDLs for all the tables in a Redshift database. My code looks something like this:

    from sqlalchemy import MetaData
    from sqlalchemy.exc import ProgrammingError
    from sqlalchemy.schema import CreateTable

    # engine and SCHEMA are assumed to be defined earlier, as in the original post
    cnxn = engine.connect()
    metadata = MetaData(bind=engine, schema=SCHEMA)
    metadata.reflect(bind=engine)
    for table in metadata.sorted_tables:
        try:
            res = CreateTable(table).compile(engine)
            print(res)
        except ProgrammingError as e:
            print(f"Failed to compile DDL for {table}: {e}")
            continue

I've tested this code on other Redshift databases and it seems to work fine. But in this database it fails

Exponential Smoothing in Redshift

て烟熏妆下的殇ゞ submitted on 2021-02-11 14:53:28
Question: I'm trying to write a window function to calculate exponential smoothing in Redshift. I am referencing this post (here).

    SELECT p.*,
           (sum(power((1/0.8666666), seqnum) * price)
                over (order by seqnum rows unbounded preceding)
            + first_value(price) over (order by seqnum rows unbounded preceding)
           ) / power((1/.13333333), seqnum + 1)
    FROM (SELECT date,
                 row_number() over (order by date) - 1 as seqnum,
                 price
          FROM table.prices
         ) p;

The issue is that when the value of the smoothing constant is anything
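A sketch of one way to express the recurrence s_t = a*x_t + (1-a)*s_(t-1), seeded with the first price, as a single window expression, assuming a prices(date, price) table and the constants implied by the post (a = 0.1333333, so b = 1-a = 0.8666666). Multiplying the running sum back down by POWER(b, seqnum) keeps the algebra equivalent, but POWER(1/b, seqnum) still grows exponentially, so this formulation can overflow on long series, which is likely the failure the question is running into.

    WITH seq AS (
        SELECT "date",
               price,
               ROW_NUMBER() OVER (ORDER BY "date") - 1 AS seqnum
        FROM prices  -- assumed table name
    )
    SELECT "date",
           price,
           -- s_t = b^t * (x_0 + a * sum_{i=1..t} (1/b)^i * x_i), with b = 1 - a
           POWER(0.8666666, seqnum) * (
               FIRST_VALUE(price) OVER (ORDER BY seqnum ROWS UNBOUNDED PRECEDING)
               + 0.1333333 * SUM(CASE WHEN seqnum = 0 THEN 0
                                      ELSE POWER(1.0 / 0.8666666, seqnum) * price
                                 END) OVER (ORDER BY seqnum ROWS UNBOUNDED PRECEDING)
           ) AS smoothed_price
    FROM seq
    ORDER BY seqnum;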

Redshift - Adding timezone offset (Varchar) to timestamp column

不羁岁月 submitted on 2021-02-11 13:27:13
Question: As part of an ETL to Redshift, one of the source tables has 2 columns:

    original_timestamp - TIMESTAMP: the local time when the record was inserted, in whichever region
    original_timezone_offset - VARCHAR: the offset to UTC

The data looks something like this:

    original_timestamp            original_timezone_offset
    2011-06-22 11:00:00.000000    -0700
    2014-11-29 17:00:00.000000    -0800
    2014-12-02 22:00:00.000000    +0900
    2011-06-03 09:23:00.000000    -0700
    2011-07-28 03:00:00.000000    -0700
    2011
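Since the offset is a plain varchar like '-0700' rather than a time zone name, one approach (a sketch; src is a placeholder table name) is to parse the sign, hours, and minutes out of the string and shift the timestamp with DATEADD. Subtracting the offset converts local time to UTC.

    SELECT original_timestamp,
           original_timezone_offset,
           -- UTC = local time minus the offset; e.g. 11:00 at -0700 is 18:00 UTC
           DATEADD(
               minute,
               -1 * (CASE WHEN LEFT(original_timezone_offset, 1) = '-' THEN -1 ELSE 1 END)
                   * (SUBSTRING(original_timezone_offset, 2, 2)::INT * 60
                      + SUBSTRING(original_timezone_offset, 4, 2)::INT),
               original_timestamp
           ) AS utc_timestamp
    FROM src;

Working in whole minutes keeps half-hour offsets like +0530 correct, which hour-only arithmetic would silently drop.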

How to get data for the past x weeks for each type?

落爺英雄遲暮 submitted on 2021-02-11 12:54:49
Question: I have the query below, which gives me data with three columns - type, amount, and total - for the previous week, using the week_number column.

    select type,
           case WHEN (type = 'PROC1' AND contractdomicilecode = 'UIT') THEN 450
                WHEN (type = 'PROC1' AND contractdomicilecode = 'KJH') THEN 900
                WHEN (type = 'PROC2' AND contractdomicilecode = 'LOP') THEN 8840
                WHEN (type = 'PROC2' AND contractdomicilecode = 'AWE') THEN 1490
                WHEN (type = 'PROC3' AND contractdomicilecode = 'MNH') THEN 1600
                WHEN (type = 'PROC3' AND
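The question is cut off above, but given the week_number column it mentions, here is a sketch of one way to widen a single-week filter to the past x weeks (x = 4 here; the table name and amount column are guesses, not from the post):

    SELECT type,
           week_number,
           SUM(amount) AS total
    FROM weekly_data  -- assumed table name
    -- keep the 4 weeks before the current one
    WHERE week_number BETWEEN DATE_PART(week, GETDATE()) - 4
                          AND DATE_PART(week, GETDATE()) - 1
    GROUP BY type, week_number
    ORDER BY type, week_number;

Note that this simple week_number arithmetic ignores year boundaries; a date-based filter would be needed if the window can span the start of a year.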

Running difference month over month

╄→гoц情女王★ submitted on 2021-02-11 12:30:15
Question: I have some sample data, and I want to get the month-over-month difference in a 'Lag' column, for rows with id B only.

Answer 1: If there is always just one row per month and id, then just use lag(). You can wrap this in a case expression so it only applies to id 'B'.

    select id, date, data,
           case when id = 'B'
                then data - lag(data) over (partition by id order by date)
           end lag_diff
    from mytable

Source: https://stackoverflow.com/questions/62160736/running-difference-month-over-month

I would like to migrate Oracle DB to Amazon Redshift with AWS SCT

懵懂的女人 submitted on 2021-02-11 12:25:00
Question:

Overview: I am learning to migrate an Amazon RDS for Oracle database to Amazon Redshift by following this tutorial: https://docs.aws.amazon.com/dms/latest/sbs/CHAP_RDSOracle2Redshift.html

Trouble: In step 5, I use AWS SCT to convert the Oracle schema to Amazon Redshift, but I have to change the SCT configuration because the tutorial document is old (written in 2017). As one of those configuration changes, I try to disable the use of AWS Glue: I open Project settings and uncheck the "Use AWS Glue" checkbox. And soon after

UNLOAD Redshift: append

♀尐吖头ヾ submitted on 2021-02-11 07:56:23
Question: I'd like to UNLOAD data from a Redshift table into an already existing S3 folder, similar to what happens in Spark with the write option "append" (i.e., creating new files in the target folder if it already exists). I'm aware of the ALLOWOVERWRITE option, but that deletes the already existing folder. Is this supported in Redshift? If not, what approach is recommended? (It would be a desirable feature anyway, I believe...)

Answer 1: One solution that could solve the issue is to attach
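The answer above is truncated, so this is not the accepted solution; one common workaround (a sketch with assumed bucket, table, and IAM role names) is to give each run its own key prefix inside the existing folder, so earlier files are never touched and neither ALLOWOVERWRITE nor any append support is needed:

    UNLOAD ('SELECT * FROM my_table')                      -- assumed source table
    TO 's3://my-bucket/existing-folder/run_2021_02_11_'    -- unique per-run prefix
    IAM_ROLE 'arn:aws:iam::123456789012:role/my-unload-role'
    GZIP
    PARALLEL ON;

Downstream readers that scan the whole folder (e.g. Spectrum or Athena over s3://my-bucket/existing-folder/) then see the union of all runs, which is effectively append semantics.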