snowflake-data-warehouse

How to use the SHOW command in stored procedures - not working

点点圈 submitted on 2020-01-06 05:41:27
Question: Following the blog post https://community.snowflake.com/s/article/How-to-USE-SHOW-COMMANDS-in-Stored-Procedures, the result I get is NaN, which makes sense since the return value is set to FLOAT in that blog post. I have tried setting the return value to VARCHAR, STRING, etc., but I get different results like [object Object], etc.

CREATE OR REPLACE PROCEDURE SHOP(CMD VARCHAR)
returns float not null
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
var stmt = snowflake.createStatement( { sqlText: ${CMD} } )

Is it possible to split a large file of more than 8 GB using Snowflake?

心已入冬 submitted on 2020-01-06 05:39:30
Question: I have a file larger than 8 GB and want to load it into Snowflake. I was going through the Snowflake documentation and found the best practices, which say to keep file sizes between 10 MB and 100 MB for best load performance: https://docs.snowflake.net/manuals/user-guide/data-load-considerations-prepare.html Is it possible to split the file in Snowflake itself? So I would upload the 8 GB file to Azure Blob and then use Snowflake to split the file into multiple files and load them into a table?

Answer 1: No, it's
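Since Snowflake cannot split a file that is already staged, the split has to happen client-side before uploading to Azure Blob (or an internal stage). As a rough sketch, assuming a plain newline-delimited CSV with a single header row (the function name and the 100 MB default are illustrative, not from the thread):

```python
import os

def split_csv(src_path, out_dir, max_bytes=100 * 1024 * 1024):
    """Split a large CSV into chunks of roughly max_bytes each,
    repeating the header line at the top of every chunk."""
    os.makedirs(out_dir, exist_ok=True)
    chunks = []
    with open(src_path, "rb") as src:
        header = src.readline()
        part, out, written = 0, None, 0
        for line in src:
            # Start a new chunk when the current one is full (or first line).
            if out is None or written >= max_bytes:
                if out:
                    out.close()
                part += 1
                path = os.path.join(out_dir, f"part_{part:04d}.csv")
                out = open(path, "wb")
                out.write(header)
                written = len(header)
                chunks.append(path)
            out.write(line)
            written += len(line)
        if out:
            out.close()
    return chunks
```

Each resulting chunk can then be uploaded and loaded with a single COPY INTO, which also lets Snowflake parallelize the load across files.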

Snowflake: query whether an id exists on each of the last 7 days

假如想象 submitted on 2020-01-06 05:18:04
Question: We INSERT new records every day into a table with, say, id and created_on columns. How do I identify whether records with a particular identifier existed every day in the last 7 days?

Answer 1: This can be done with a stored procedure:

CREATE OR REPLACE PROCEDURE TIME_TRAVEL(QUERY TEXT, DAYS FLOAT)
RETURNS VARIANT
LANGUAGE JAVASCRIPT
AS
$$
function run_query(query, offset) {
    try {
        var sqlText = query.replace('"at"', " AT(OFFSET => " + (offset + 0) + ") ");
        return (snowflake.execute({sqlText: sqlText}))
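The stored-procedure answer leans on time travel, but if the rows are never deleted the check reduces to "does this id appear on all 7 distinct days", i.e. a per-id COUNT(DISTINCT created_on::date) = 7 over the window. Purely as an illustration of that logic (function and variable names are made up for this sketch), in Python:

```python
from datetime import date, timedelta

def ids_present_every_day(records, as_of, days=7):
    """records: iterable of (id, created_on) pairs, created_on a date.
    Return the ids that appear on every one of the `days` days ending
    at `as_of` (inclusive)."""
    window = {as_of - timedelta(d) for d in range(days)}
    seen = {}
    for rid, created in records:
        if created in window:
            seen.setdefault(rid, set()).add(created)
    # An id qualifies only if its set of days covers the whole window.
    return {rid for rid, ds in seen.items() if ds == window}
```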

How can I parse an ISO 8601 timestamp with Snowflake SQL?

六眼飞鱼酱① submitted on 2020-01-05 08:28:27
Question: I'm looking for a generic function that allows me to parse ISO 8601 timestamps. I know about to_timestamp_tz, but I couldn't find a way to create a format parameter that will parse all the possible variations of ISO 8601 datetimes:

select '2012-01-01T12:00:00+00:00'::timestamp_tz; // this works
select '2012-01-01T12:00:00+0000'::timestamp_tz;  // Timestamp '2012-01-01T12:00:00+0000' is not recognized, although it is valid ISO 8601 (no colon in the timezone)
select to_timestamp_tz('2012-01-01T12:00

Creating custom connection pool in Spring Boot application

本秂侑毒 submitted on 2020-01-05 07:10:41
Question: I'm writing a Spring Boot application which connects to Snowflake Data Warehouse and executes SQL queries on it. I have written a configuration class for the DataSource used to connect to Snowflake Data Warehouse, as follows:

@Configuration
@EnableAutoConfiguration(exclude={DataSourceAutoConfiguration.class})
public class DBConfig {

    Logger logger = LoggerFactory.getLogger(DBConfig.class);

    @Bean
    JdbcTemplate jdbcTemplate() throws IllegalAccessException, InvocationTargetException,

How can I efficiently transform a two-column range into an expanded table?

若如初见. submitted on 2020-01-03 01:53:49
Question: I'm trying to use geo IP data in Snowflake. This involves several things:

1) A source table with a CIDR IP range, a geoname_id, and its lat/long coordinates.
2) I've used the PARSE_IP function and extracted the range_start and range_end values as simple integer columns in the IPv4 0-4.2bn range. Some ranges consist of a single IP; some may have as many as 16.7 million.

So, the 3.1 million rows in the intermediate table look something like this:

RANGE_START RANGE_END GEONAME_ID LATITUDE LONGITUDE
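For sanity-checking the intermediate table outside Snowflake, the CIDR-to-integer-range conversion that PARSE_IP performs can be reproduced with the standard library (a sketch; the function name is illustrative):

```python
import ipaddress

def cidr_to_range(cidr):
    """Convert an IPv4 CIDR block to (range_start, range_end) as
    integers in the 0..2**32-1 space."""
    net = ipaddress.ip_network(cidr, strict=False)
    return int(net.network_address), int(net.broadcast_address)
```

With start/end held as plain integers, a lookup is a range join (ip BETWEEN range_start AND range_end) rather than a row-per-address expansion, which matters when a single /8 block would otherwise explode into 16.7 million rows.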

How should I load an XML file which has comments and spaces in it? Using XMLGET on the root element, I'm not able to get the child elements

你离开我真会死。 submitted on 2019-12-25 00:34:33
Question: (Submitting on behalf of a Snowflake User) Using:

<clinical_study>
  <!-- This xml conforms to an XML Schema at: https://clinicaltrials.gov/ct2/html/images/info/public.xsd -->
  <required_header>
    <download_date>ClinicalTrials.gov processed this data on September 13, 2019</download_date>
    <link_text>Link to the current ClinicalTrials.gov record.</link_text>
    <url>https://clinicaltrials.gov/show/NCT00010010</url>
  </required_header>
  <id_info>
    <org_study_id>CDR0000068431</org_study_id>
    <secondary_id
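Before debugging XMLGET paths in Snowflake, it can help to confirm that the document is well-formed and the children sit where expected; comments and whitespace are ignored by any conforming XML parser and should not block child access. A quick check in Python against an abbreviated copy of the document from the question:

```python
import xml.etree.ElementTree as ET

# Trimmed from the question's document; the comment is ignored by the parser.
xml_doc = """<clinical_study>
  <!-- This xml conforms to an XML Schema -->
  <required_header>
    <download_date>ClinicalTrials.gov processed this data on September 13, 2019</download_date>
    <url>https://clinicaltrials.gov/show/NCT00010010</url>
  </required_header>
  <id_info>
    <org_study_id>CDR0000068431</org_study_id>
  </id_info>
</clinical_study>"""

root = ET.fromstring(xml_doc)
url = root.find("./required_header/url").text
org_study_id = root.find("./id_info/org_study_id").text
```

If this parses cleanly, the problem is in the XMLGET navigation (e.g. needing to index into "$" for child elements) rather than in the file's comments or whitespace.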

Snowflake sproc vs standalone SQL

允我心安 submitted on 2019-12-24 21:30:41
Question: I am thinking of creating a denormalized table for our BI purposes. While building the business logic from several tables, I noticed that queries perform better when the denormalized table is updated in batches (a sproc with multiple business-logic SQL statements) using merge statements, as below. E.g., the sproc contains multiple SQL statements like:

merge denormalized_data (select businesslogic1)
merge denormalized_data (select businesslogic2)
etc.

Is it better to include the business logic in one huge SQL statement, or to divide it so that each query handles

Connect Snowflake to Azure analysis services to build cube

六月ゝ 毕业季﹏ submitted on 2019-12-24 20:32:59
Question: I need to build a cube in Azure Analysis Services by connecting to a Snowflake DB. It seems Azure Analysis Services does not provide a connector for Snowflake. Can anyone suggest how to overcome this?

Answer 1: First, on your laptop install both the 32-bit and 64-bit ODBC drivers for Snowflake. Then open "ODBC Data Sources (32-bit)" and create a new system DSN called "Snowflake" using the Snowflake ODBC driver. Repeat in the "ODBC Data Sources (64-bit)" app, creating another system DSN named identically

Snowflake - Failed to rewrite multi-row insert (insert into select)

会有一股神秘感。 submitted on 2019-12-24 15:19:38
Question: I got the error below when using SQLAlchemy to insert data into a Snowflake warehouse. Any ideas?

Error:
Failed to rewrite multi-row insert
[SQL: 'INSERT INTO widgets (id, name, type) SELECT %(id)s AS anon_1, %(name)s AS anon_2, widgets.id \nFROM widgets \nWHERE widgets.type = %(card_id)s']
[parameters: ({'id': 2, 'name': 'Lychee', 'card_id': 1}, {'id': 3, 'name': 'testing', 'card_id': 2})]

Code:
from sqlalchemy import *
from snowflake.sqlalchemy import URL

# Helper function for local
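The error message suggests the Snowflake dialect is trying to rewrite an executemany into a single multi-row statement, which only works for a plain INSERT ... VALUES, not an INSERT ... SELECT fed with a list of parameter sets. One common workaround is to compute the rows first and insert them as literal values. A sketch of that pattern (SQLite stands in for Snowflake here purely to demonstrate it; the table mirrors the one in the error message):

```python
from sqlalchemy import Column, Integer, String, create_engine, insert
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Widget(Base):
    __tablename__ = "widgets"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    type = Column(Integer)

# SQLite stands in for Snowflake just to show the pattern.
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# Rows computed up front (e.g. the result of the original SELECT).
rows = [{"id": 2, "name": "Lychee", "type": 1},
        {"id": 3, "name": "testing", "type": 2}]

with engine.begin() as conn:
    # A plain INSERT ... VALUES with a list of parameter dicts executes
    # as one executemany, avoiding the INSERT ... SELECT rewrite.
    conn.execute(insert(Widget), rows)
```

If the SELECT must run server-side, issuing one INSERT ... SELECT per parameter set (no executemany) also sidesteps the rewrite, at the cost of extra round trips.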