azure-sqldw

Azure Databricks to Azure SQL DW: Long text columns

Submitted by ﹥>﹥吖頭↗ on 2021-01-27 08:21:53
Question: I would like to populate an Azure SQL DW from an Azure Databricks notebook environment. I am using the built-in connector with pyspark:

    sdf.write \
        .format("com.databricks.spark.sqldw") \
        .option("forwardSparkAzureStorageCredentials", "true") \
        .option("dbTable", "test_table") \
        .option("url", url) \
        .option("tempDir", temp_dir) \
        .save()

This works fine, but I get an error when I include a string column with sufficiently long content. I get the following error:

    Py4JJavaError: An error …
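A likely fix, assuming the failure comes from the connector mapping Spark string columns to NVARCHAR(256) by default: the connector's maxStrLength option widens that type, up to 4000. A minimal sketch:

    # Sketch: raise the NVARCHAR length the connector generates for
    # string columns; the default is 256 and the maximum is 4000.
    sdf.write \
        .format("com.databricks.spark.sqldw") \
        .option("forwardSparkAzureStorageCredentials", "true") \
        .option("dbTable", "test_table") \
        .option("url", url) \
        .option("tempDir", temp_dir) \
        .option("maxStrLength", "4000") \
        .save()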

Microsoft Azure Data Warehouse: Flat Tables or Star Schema

Submitted by ε祈祈猫儿з on 2020-02-21 07:32:51
Question: I am creating a data warehouse model on top of numerous OLTP tables. I can either (a) use a star schema or (b) a flat table model. Many people think a dimensional star schema is not required, because most data can be reported from a single table. Additionally, the Kimball star schema was created when performance and storage were real constraints, and some claim that with improved technology, data can be presented in a single table. Should I still separate data into dimension/fact tables, or just use the flat …
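For concreteness, the two shapes under discussion look like this in a hypothetical sales example (all table and column names are illustrative):

    -- Flat model: one wide table, reports read it directly.
    SELECT region, SUM(sales_amount) AS total_sales
    FROM flat_sales
    GROUP BY region;

    -- Star schema: a narrow fact table joined to dimensions.
    SELECT d.region, SUM(f.sales_amount) AS total_sales
    FROM FactSales AS f
    JOIN DimStore AS d ON d.store_key = f.store_key
    GROUP BY d.region;

The trade-off the question raises is whether the join overhead and modeling effort of the second form still buy enough consistency and reuse to justify them.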

Cannot view object schema Azure Datawarehouse

Submitted by 拟墨画扇 on 2020-01-06 05:29:10
Question: Attempting to view a view or procedure from SQL Server Management Studio 17.4 against Azure SQL Data Warehouse raises an error. I can, however, delete and create any object that I want. How can I ensure I can view an object's definition? UPDATE: Concerning setting the options in SSMS to SQL Data Warehouse, there is no such option.

Answer 1: Please change this setting under Tools > Options. That should resolve the error. I wish we didn't have to change this, but at least we have a workaround
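Independent of the SSMS scripting option, an object's definition can usually be read straight from the catalog views with T-SQL (a sketch; the object name is hypothetical):

    -- Retrieve the stored definition of a view or procedure.
    SELECT m.definition
    FROM sys.sql_modules AS m
    JOIN sys.objects AS o ON o.object_id = m.object_id
    WHERE o.name = 'MyView';  -- hypothetical object name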

Loading ORC Data into SQL DW using PolyBase

Submitted by 时间秒杀一切 on 2019-12-25 02:57:31
Question: I am trying to load the ORC file format via PolyBase, but I am facing the problems below.

Problem 1: I have a CSV file, and the code below converts it to ORC format, but it selects data from a permanent table. If I remove "as select * from dbo.test", then CREATE EXTERNAL TABLE does not work. The permanent table contains 0 records.

    CREATE EXTERNAL TABLE test_orc
    WITH (
        LOCATION = '/TEST/',
        DATA_SOURCE = SIMPLE,
        FILE_FORMAT = TEST_ORC
    )
    AS SELECT * FROM dbo.test  -- permanent table

Problem 2: If I select the …
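The AS SELECT clause is what makes this a CETAS statement: it exports the result of the SELECT to the external location, so an empty dbo.test produces an empty ORC file set. To read ORC files that already exist under /TEST/, the external table is defined without AS SELECT and with an explicit column list. A sketch, with hypothetical columns that would need to match the actual ORC schema:

    -- Read existing ORC files at /TEST/ instead of exporting to it.
    CREATE EXTERNAL TABLE test_orc_read (
        id   INT,           -- hypothetical column
        name VARCHAR(100)   -- hypothetical column
    )
    WITH (
        LOCATION = '/TEST/',
        DATA_SOURCE = SIMPLE,
        FILE_FORMAT = TEST_ORC
    );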

Calculate Sum From Moving 4 Rows in SQL

Submitted by 若如初见. on 2019-12-24 01:23:44
Question: I have the following data.

    WM_Week  POS_Store_Count  POS_Qty  POS_Sales  POS_Cost
    -------  ---------------  -------  ---------  ---------
    201541   3965             77722    153904.67  102593.04
    201542   3952             77866    154219.66  102783.12
    201543   3951             70690    139967.06   94724.60
    201544   3958             70773    140131.41   95543.55
    201545   3958             76623    151739.31  103441.05
    201546   3956             73236    145016.54   98868.60
    201547   3939             64317    127368.62   86827.95
    201548   3927             60762    120309.32   82028.70

I need to write a SQL query to get the last four weeks of data, and …
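A windowed aggregate is one way to express a moving four-week total: a frame of ROWS BETWEEN 3 PRECEDING AND CURRENT ROW sums each week together with the three before it. A sketch, assuming the data sits in a table named weekly_pos:

    -- Moving 4-week totals, one row per week.
    SELECT WM_Week,
           SUM(POS_Qty)   OVER (ORDER BY WM_Week
                                ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS Qty_4wk,
           SUM(POS_Sales) OVER (ORDER BY WM_Week
                                ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS Sales_4wk
    FROM weekly_pos;  -- hypothetical table name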

Time-based drilldowns in Power BI powered by Azure Data Warehouse

Submitted by 血红的双手。 on 2019-12-22 07:57:29
Question: I have designed a simple Azure Data Warehouse in which I want to track the stock of my products on a periodic basis. Moreover, I want the ability to see that data grouped by month, week, day, and hour, with the ability to drill down from top to bottom. I have defined three dimensions: DimDate, DimTime, and DimProduct. I have also defined a fact table to track product stocks:

    FactStocks
    - DateKey (20160510, 20160511, etc.)
    - TimeKey (0..23)
    - ProductKey (Product1, Product2)
    - StockValue (number, 1..9999)

My …
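A typical drill-down query in that model aggregates the fact table over the date dimension and swaps the grouping column per level. A sketch, assuming a CalendarMonth column on DimDate (the column names are illustrative, and the aggregate choice depends on whether stock is treated as additive over time):

    -- Stock per product per month; replace CalendarMonth with a
    -- week, day, or hour column to drill down a level.
    SELECT d.CalendarMonth,         -- hypothetical DimDate column
           f.ProductKey,
           SUM(f.StockValue) AS TotalStock
    FROM FactStocks AS f
    JOIN DimDate AS d ON d.DateKey = f.DateKey
    GROUP BY d.CalendarMonth, f.ProductKey;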

Insert values statement can contain only constant literal values or variable references in SQL Data Warehouse

Submitted by [亡魂溺海] on 2019-12-20 06:38:40
Question: Consider this table:

    CREATE TABLE t (i int, j int, ...);

I want to insert data into the table from a set of SELECT statements. The simplified version of my query is:

    INSERT INTO t VALUES ((SELECT 1), (SELECT 2), ...);

The real query can be much more complex, and the individual subqueries are independent. Unfortunately, this standard SQL statement (which works on SQL Server) does not work on SQL Data Warehouse. The following error is raised:

    Failed to execute query. Error: Insert values statement can …
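A common workaround is to rewrite the VALUES clause as a SELECT, turning each independent subquery into a single-row derived table and combining them with CROSS JOIN. A sketch for the two-column case:

    -- Each derived table yields one row; CROSS JOIN combines the
    -- single-row results into the one row to insert.
    INSERT INTO t (i, j)
    SELECT a.i, b.j
    FROM (SELECT 1 AS i) AS a
    CROSS JOIN (SELECT 2 AS j) AS b;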

Using Polybase to load data into an existing table in parallel

Submitted by 断了今生、忘了曾经 on 2019-12-19 04:39:40
Question: Using CTAS, we can leverage the parallelism that PolyBase provides to load data into a new table in a highly scalable and performant way. Is there a way to use a similar approach to load data into an existing table? The table might even be empty. If I create an external table and use INSERT INTO ... SELECT * FROM ..., I would assume that this goes through the head node and is therefore not parallel? I know that I could also drop the table and use CTAS to recreate it, but then I have to deal …
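One pattern that keeps CTAS parallelism while ending up under the existing table's name is to CTAS into a staging table and then swap names with RENAME OBJECT. A sketch, with a hypothetical distribution key and table names:

    -- Load in parallel into a new table, then swap it into place.
    CREATE TABLE dbo.target_new
    WITH (DISTRIBUTION = HASH(id))      -- hypothetical distribution key
    AS SELECT * FROM ext.source_table;  -- external (PolyBase) table

    RENAME OBJECT dbo.target TO target_old;
    RENAME OBJECT dbo.target_new TO target;
    DROP TABLE dbo.target_old;

For what it's worth, Microsoft's loading guidance indicates that INSERT INTO ... SELECT from an external table is also executed on the compute nodes, though CTAS remains preferred because it is minimally logged.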