snowflake-cloud-data-platform

How to run cursor in snowflake?

烈酒焚心 submitted on 2021-02-11 15:13:56
Question: I have written the cursor below in SQL and it is working fine, but I am not able to run the same cursor on Snowflake; please help. DECLARE @CurrentMonth NVARCHAR(100) DECLARE @CurrentMonth1 NVARCHAR(100) DECLARE MYDateCURSOR CURSOR DYNAMIC FOR SELECT Collections_COE FROM [CollectionsAgeing_OTCN024_028_029] OPEN MYDateCURSOR FETCH LAST FROM MYDateCURSOR INTO @CurrentMonth CLOSE MYDateCURSOR DEALLOCATE MYDateCURSOR --select value from STRING_SPLIT(@CurrentMonth,'-') ; select @CurrentMonth1=LEFT(
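In Snowflake, a T-SQL cursor like the one above can be approximated with a Snowflake Scripting block. A minimal sketch, not a drop-in port: the table and column names come from the question, but `some_order_col` is a placeholder, because Snowflake cursors only fetch forward and FETCH LAST has no direct equivalent, so the last row is selected with ORDER BY ... DESC LIMIT 1 instead.

```sql
-- Hedged sketch: Snowflake Scripting anonymous block.
-- Snowflake cursors fetch forward only, so the "last" row is taken via
-- ORDER BY ... DESC LIMIT 1 (some_order_col is a placeholder to replace).
DECLARE
  current_month VARCHAR;
  c1 CURSOR FOR
    SELECT Collections_COE
    FROM CollectionsAgeing_OTCN024_028_029
    ORDER BY some_order_col DESC
    LIMIT 1;
BEGIN
  OPEN c1;
  FETCH c1 INTO current_month;
  CLOSE c1;
  RETURN current_month;
END;
```

In classic worksheets the block may additionally need to be wrapped in EXECUTE IMMEDIATE $$ ... $$.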

I can't connect to snowflake using node js connector

别等时光非礼了梦想. submitted on 2021-02-11 14:00:20
Question: I'm trying to connect to a Snowflake database using the snowflake-sdk connector. First I installed snowflake-sdk from the command line: npm install snowflake-sdk After that I followed all the instructions reported here. I created the file index.js containing: var snowflake = require('snowflake-sdk'); var connection = snowflake.createConnection( { account : 'xxxx.east-us-2' username: 'MYUSERNAME' password: 'MYPASSWORD' } ); connection.connect( function(err, conn) { if (err) { console.error(

How to retrieve all the catalog names , schema names and the table names in a database like snowflake or any such database?

拟墨画扇 submitted on 2021-02-11 12:32:28
Question: I need to drop some columns and uppercase the data in Snowflake tables. For this I need to loop through all the catalogs/DBs, their respective schemas, and then the tables. I need this to be in Python: list the catalog names, the schema names, and then the table names, after which I will be executing the SQL query to do the manipulations. How do I proceed with this? 1. List all the catalog names 2. List all the schema names 3. List all the table names I have established a connection using python snowflake
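The three lists can be pulled with Snowflake's metadata commands and INFORMATION_SCHEMA views, issued through the Python connector's cursor like any other SQL. A minimal sketch (my_db is a placeholder database name; the connecting role must have privileges to see the objects):

```sql
-- 1. All database (catalog) names visible to the current role
SHOW DATABASES;

-- 2. All schema names in one database
SELECT schema_name
FROM my_db.INFORMATION_SCHEMA.SCHEMATA;

-- 3. All table names in that database
SELECT table_catalog, table_schema, table_name
FROM my_db.INFORMATION_SCHEMA.TABLES
WHERE table_type = 'BASE TABLE';
```

Running SHOW DATABASES first and substituting each returned name for my_db covers every catalog in the account.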

Snowflake - COPY INTO … ignores DATE_INPUT_FORMAT setting

北城以北 submitted on 2021-02-11 12:32:27
Question: The following statement aims at using a specific format to import DATEs: alter session set DATE_INPUT_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF'; However, it seems to have no effect on the following: copy into schema.table from s3://bucket/file.parquet credentials=(aws_key_id='...' aws_secret_key='...') match_by_column_name=case_insensitive file_format=(type=parquet); Which results in errors like the one below: sqlalchemy.exc.ProgrammingError: (snowflake.connector.errors.ProgrammingError) 100071 (22000):
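DATE_INPUT_FORMAT governs how text is parsed into dates, so it applies to text-based file formats; Parquet columns carry their own types, which is the likely reason the session setting is ignored here. A hedged workaround sketch is to convert explicitly in a COPY transform (all object and column names are placeholders; note that a transform maps columns by position and cannot be combined with MATCH_BY_COLUMN_NAME):

```sql
-- Sketch: parse the value explicitly instead of relying on DATE_INPUT_FORMAT.
COPY INTO my_schema.my_table (event_date)
FROM (
  SELECT TO_TIMESTAMP($1:event_date::VARCHAR, 'YYYY-MM-DD HH24:MI:SS.FF')
  FROM @my_stage/file.parquet
)
FILE_FORMAT = (TYPE = PARQUET);
```

Here $1 is the VARIANT that Snowflake exposes for each Parquet row, and event_date stands in for whichever field actually fails to load.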

how to create table automatically based on any text file in snowflake?

↘锁芯ラ submitted on 2021-02-11 05:55:38
Question: Is there any tool, or are there any ways, to create tables automatically based on any text file? I have 100+ CSV files, and every file has a different number of columns. It would be a lot of work to create the table definitions manually in Snowflake first and then load the data. I am looking for a way to load data without creating a table first. Please let me know if anyone knows how to tackle this. Thanks! Answer 1: Data processing frameworks such as Spark and Pandas have readers that can parse CSV header
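More recent Snowflake releases can also do this natively with INFER_SCHEMA plus CREATE TABLE ... USING TEMPLATE. A hedged sketch, assuming a release where CSV is supported by INFER_SCHEMA (stage, file, and format names are placeholders):

```sql
-- Sketch: derive the column definitions from the file itself.
CREATE FILE FORMAT my_csv_format TYPE = CSV PARSE_HEADER = TRUE;

CREATE TABLE my_table USING TEMPLATE (
  SELECT ARRAY_AGG(OBJECT_CONSTRUCT(*))
  FROM TABLE(
    INFER_SCHEMA(
      LOCATION => '@my_stage/file.csv',
      FILE_FORMAT => 'my_csv_format'
    )
  )
);
```

A COPY INTO with the same file format can then load each file into its generated table.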

How to parse json efficiently in Snowpipe with ON_ERROR=CONTINUE

不羁的心 submitted on 2021-02-11 04:31:35
Question: I'm setting up a Snowpipe to load data from an S3 bucket into a Snowflake schema. S3 contains files in NDJSON format. One file can contain multiple records, and I want to process all of them, even if one record is broken. To do so, I need to add the on_error='continue' option to the pipe creation and use a CSV file format, as stated in the official Snowflake docs here. That way I receive raw strings of JSON that I need to parse to access the data. And since Snowpipes do not support nested selects, the only way to do
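The CSV-format trick described above can be sketched as follows: treat each NDJSON line as a single CSV "column", then call PARSE_JSON inside the pipe's COPY transform. This is a sketch under the assumption that PARSE_JSON is permitted in the transform; all object names are placeholders.

```sql
-- Sketch: one "column" per line, parsed into a VARIANT inside the pipe.
CREATE FILE FORMAT one_col_per_line TYPE = CSV FIELD_DELIMITER = NONE;

CREATE PIPE my_pipe AUTO_INGEST = TRUE AS
  COPY INTO my_table (payload)
  FROM (SELECT PARSE_JSON($1) FROM @my_stage)
  FILE_FORMAT = (FORMAT_NAME = 'one_col_per_line')
  ON_ERROR = 'CONTINUE';
```

With ON_ERROR = 'CONTINUE', a line that fails to load is skipped while the rest of the file is still ingested.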

INSERT repeating values in SQL

生来就可爱ヽ(ⅴ<●) submitted on 2021-02-10 22:22:40
Question: I'm trying to find a simple way to insert some repeating values into two columns in my table, something similar to the rep function in R. For instance, I need to insert two values (chocolate and vanilla, 4 times each) combined with 4 values that each repeat twice, such as:

flavor_type  schedule_type
chocolate    weekly
chocolate    monthly
chocolate    quarterly
chocolate    yearly
vanilla      weekly
vanilla      monthly
vanilla      quarterly
vanilla      yearly

Answer 1: You can use cross join : select * from (values(
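The cross join approach from the answer can be sketched in full; my_table is a placeholder, and the literal values are the ones from the question:

```sql
-- Sketch: every flavor paired with every schedule (2 x 4 = 8 rows).
INSERT INTO my_table (flavor_type, schedule_type)
SELECT f.flavor, s.schedule
FROM (VALUES ('chocolate'), ('vanilla')) AS f(flavor)
CROSS JOIN (VALUES ('weekly'), ('monthly'),
                   ('quarterly'), ('yearly')) AS s(schedule);
```

The CROSS JOIN produces the full Cartesian product, which is exactly the repetition pattern shown in the desired output.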

“Numeric value '' is not recognized” - what column?

断了今生、忘了曾经 submitted on 2021-02-10 15:50:12
Question: I am trying to insert data from a staging table into the master table. The table has nearly 300 columns and is a mix of data types: Varchars, Integers, Decimals, Dates, etc. Snowflake gives the unhelpful error message "Numeric value '' is not recognized". I have gone through and cut out various parts of the query to try and isolate where it is coming from. After several hours of cutting out every column, it is still happening. Does anyone know of a Snowflake diagnostic query (like Redshift
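One way to locate the offending column is to probe the staging data with TRY_TO_NUMBER, which returns NULL instead of raising an error on unconvertible values such as the empty string. A sketch (staging_table and numeric_col are placeholders; repeat or generate one predicate per numeric target column):

```sql
-- Sketch: rows where a value exists but does not convert to a number.
SELECT *
FROM staging_table
WHERE TRY_TO_NUMBER(numeric_col) IS NULL
  AND numeric_col IS NOT NULL;
```

Because '' is not NULL but fails numeric conversion, any rows this returns point at the column producing the "Numeric value '' is not recognized" error.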