snowflake-cloud-data-platform

snowflake python connector - Time to make database connection

Submitted by 半城伤御伤魂 on 2021-02-20 04:21:27
Question: Python code is taking around 2-3 seconds to make the Snowflake database connection. Is this expected behaviour, or are there any parameters that will speed up the connection time? Here is the sample code:

```python
import logging
import time

import snowflake.connector

t1 = time.time()
print("Start time: " + str(t1))
try:
    conn = snowflake.connector.connect(
        user=user,
        password=password,
        account=account,
        warehouse=warehouse,
        # database=DATABASE,
        # schema=SCHEMA
    )
    cur = conn.cursor()
except Exception as e:
    logging.error(e)
```
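To see where the time goes, a small timing helper is enough; this is a plain-Python sketch where `fake_connect` is a stand-in for the real `snowflake.connector.connect` call (which performs network round-trips and authentication, so 1-3 seconds is not unusual):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds) using a monotonic clock."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Stand-in for snowflake.connector.connect; swap in the real call to measure it.
def fake_connect():
    time.sleep(0.05)  # simulate network / auth latency
    return "conn"

conn, elapsed = timed(fake_connect)
print(f"connected in {elapsed:.2f}s")
```

`time.perf_counter()` is preferable to `time.time()` for interval measurement because it is monotonic and has higher resolution.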

Get identity of row inserted in Snowflake Datawarehouse

Submitted by 守給你的承諾、 on 2021-02-19 03:23:26
Question: If I have a table with an auto-incrementing ID column, I'd like to be able to insert a row into that table and get the ID of the row I just created. I know that StackOverflow questions generally need some sort of attempted code or research effort, but I'm not sure where to begin with Snowflake. I've dug through their documentation and found nothing for this. The best I could do so far is try result_scan() and last_query_id(), but these don't give me any relevant information.
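Snowflake has no direct equivalent of SCOPE_IDENTITY. One common workaround is to draw the next value from the sequence backing the column first, then insert it explicitly, so the caller already knows the ID. A sketch that builds the two statements (the table, column, and sequence names here are hypothetical):

```python
def next_id_sql(sequence: str) -> str:
    """SQL that fetches the next value from a sequence before the insert."""
    return f"SELECT {sequence}.NEXTVAL AS id"

def insert_sql(table: str, id_value: int, name: str) -> str:
    """Insert using the pre-fetched id so the caller already knows it."""
    return f"INSERT INTO {table} (id, name) VALUES ({id_value}, '{name}')"

print(next_id_sql("my_table_seq"))
print(insert_sql("my_table", 42, "example"))
```

Run the first statement, read the returned `id`, and pass it to the second; the insert and the known ID then stay consistent even under concurrent writers.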

snowflake pivot attribute values into columns in array of objects

Submitted by 随声附和 on 2021-02-18 18:59:35
Question: EDIT: I gave bad example data. Updated some details and switched out dummy data for sanitized, actual data.

Source system: Freshdesk via Stitch

Table structure:

```sql
create or replace TABLE TICKETS (
    CC_EMAILS VARIANT,
    COMPANY VARIANT,
    COMPANY_ID NUMBER(38,0),
    CREATED_AT TIMESTAMP_TZ(9),
    CUSTOM_FIELDS VARIANT,
    DUE_BY TIMESTAMP_TZ(9),
    FR_DUE_BY TIMESTAMP_TZ(9),
    FR_ESCALATED BOOLEAN,
    FWD_EMAILS VARIANT,
    ID NUMBER(38,0) NOT NULL,
    IS_ESCALATED BOOLEAN,
    PRIORITY FLOAT,
    REPLY_CC_EMAILS VARIANT,
```
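The shape of the transformation, pivoting an array of name/value objects (as a VARIANT column like CUSTOM_FIELDS typically holds) into one flat record, can be sketched in plain Python; the field names below are hypothetical. In Snowflake itself the same result is usually produced with LATERAL FLATTEN plus conditional aggregation:

```python
def pivot_custom_fields(rows):
    """Turn [{'name': ..., 'value': ...}, ...] for one ticket into a flat dict."""
    out = {}
    for field in rows:
        out[field["name"]] = field["value"]
    return out

custom_fields = [
    {"name": "environment", "value": "production"},
    {"name": "severity", "value": "high"},
]
print(pivot_custom_fields(custom_fields))
# {'environment': 'production', 'severity': 'high'}
```

Each attribute name becomes a column key, so one row per ticket comes out with the object values spread across columns.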

How to pivot on dynamic values in Snowflake

Submitted by 僤鯓⒐⒋嵵緔 on 2021-02-16 20:06:51
Question: I want to pivot a table based on a field which can contain "dynamic" values (not always known beforehand). I can make it work by hard-coding the values (which is undesirable):

```sql
SELECT * FROM my_table
    PIVOT(SUM(amount) FOR type_id IN (1,2,3,4,5,20,50,83,141,...));
```

But I can't make it work using a query to provide the values dynamically:

```sql
SELECT * FROM my_table
    PIVOT(SUM(amount) FOR type_id IN (SELECT id FROM types));
-- 090150 (22000): Single-row subquery returns more than one row.
```

SELECT * FROM
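Since PIVOT's IN list does not accept a subquery, the usual workaround is to fetch the distinct values first and assemble the statement client-side. A minimal sketch of that string-building step (table and column names taken from the question; the id list would come from running `SELECT DISTINCT id FROM types`):

```python
def build_pivot_sql(table: str, value_col: str, pivot_col: str, ids) -> str:
    """Assemble a PIVOT statement with an explicit IN list."""
    in_list = ", ".join(str(i) for i in ids)
    return (
        f"SELECT * FROM {table} "
        f"PIVOT(SUM({value_col}) FOR {pivot_col} IN ({in_list}))"
    )

sql = build_pivot_sql("my_table", "amount", "type_id", [1, 2, 3])
print(sql)
# SELECT * FROM my_table PIVOT(SUM(amount) FOR type_id IN (1, 2, 3))
```

The same two-step pattern can also live inside a Snowflake stored procedure, which builds and executes the statement in one server-side call.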

How to avoid sub folders in snowflake copy statement

Submitted by て烟熏妆下的殇ゞ on 2021-02-11 18:15:51
Question: I have a requirement to exclude a certain folder under a prefix and process the data in Snowflake (COPY statement). In the example below I need to process the files under emp/ and exclude the files under abc/.

Input:
s3://bucket1/emp/ contains E1.CSV, E2.CSV, abc/E11.csv
s3://bucket1/emp/abc/ contains E11.csv

Output:
s3://bucket1/emp/ files only: E1.CSV, E2.CSV

Is there any suggestion around PATTERN to handle this?

Answer 1: With the PATTERN keyword you can try to exclude certain files. However, when using the pattern matching with the NOT
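The PATTERN option takes a regular expression matched against the file path. A pattern that accepts files directly under the stage prefix but rejects anything in a subfolder can be verified with plain Python `re` (file names from the question; whether the path is matched relative to the stage location depends on how the stage is defined, so treat that as an assumption):

```python
import re

# Accept names with no '/' at all, i.e. files not inside any subfolder.
pattern = re.compile(r"^[^/]+\.csv$", re.IGNORECASE)

paths = ["E1.CSV", "E2.CSV", "abc/E11.csv"]
kept = [p for p in paths if pattern.match(p)]
print(kept)  # ['E1.CSV', 'E2.CSV']
```

The `[^/]+` segment is what excludes subfolders: any path containing a slash cannot match, so `abc/E11.csv` is filtered out.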

Snowflake External Table failed to cast variant value NULL to DATETIME/TIMESTAMP_NTZ type

Submitted by 有些话、适合烂在心里 on 2021-02-11 15:45:49
Question: I created an external table with a column of type datetime (TIMESTAMP_NTZ); the external stage has a CSV file with a null value in that column. Selecting from the external table gives "Failed to cast variant value "null" to TIMESTAMP_NTZ".

```sql
CREATE OR REPLACE EXTERNAL TABLE ext_table_datetime (
    col1 datetime as (value:c1::datetime)
)
with location = 's3://bucket_name'
file_format = file_format_1
auto_refresh = true;
```

I also have the file format defined as follows, which works for other columns
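A common fix is to add `NULL_IF = ('null', '')` to the file format so the literal string becomes SQL NULL before the cast ever runs. The conversion this performs can be sketched in plain Python; the exact NULL_IF token list is an assumption about what the CSV contains:

```python
from datetime import datetime

# Tokens the file format would map to SQL NULL (assumed, not from the question).
NULL_IF = {"", "null", "NULL"}

def parse_timestamp_ntz(raw: str):
    """Return None for NULL_IF tokens, otherwise parse the timestamp."""
    token = raw.strip()
    if token in NULL_IF:
        return None
    return datetime.strptime(token, "%Y-%m-%d %H:%M:%S")

print(parse_timestamp_ntz("null"))  # None
print(parse_timestamp_ntz("2021-02-11 15:45:49"))
```

Once the token is NULL rather than the string "null", the `value:c1::datetime` cast in the external table definition no longer fails.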

Unable to create snowflake reader account - trial account

Submitted by Deadly on 2021-02-11 15:30:18
Question: Using a trial account, I initially created a reader account and then deleted it. After that I am unable to create another reader account and get the following error. I am not able to contact support either. Any help is appreciated.

Number of managed accounts allowed exceeded the limit. Please contact Snowflake support.

Answer 1: As it says, "you need to submit a case to Snowflake support". Could you tell me why you cannot reach support? Go to the Snowflake community: https://community