teradata

Teradata Optimizer Equal vs Like in SQL

你离开我真会死。 Submitted on 2019-12-04 06:34:07
Question: I am currently trying to optimize some BOBJ reports where our backend is Teradata. The Teradata optimizer seems very finicky, and I was wondering if anyone has come up with a solution or a workaround to get the optimizer to treat LIKE predicates in a similar regard to equals. My issue is that we allow the user to choose one of two input methods: 1. Enter the number (an exact match), or 2. Enter a number like (a pattern match). Option one performs like a dream, while option two drags our query times from 6 seconds to 2 minutes. In addition…
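A minimal sketch of the two predicate shapes being compared (the table and column names are hypothetical, and the pattern is assumed to be wrapped in wildcards, as prompt-driven BOBJ filters often are). The equality predicate can use an index on the column, while a LIKE with a leading wildcard generally forces a full-table scan:

-- Option 1: exact match; the optimizer can use an index on invoice_number.
SELECT *
FROM   sales.invoice
WHERE  invoice_number = '1002345';

-- Option 2: pattern match; a leading '%' prevents index use, so the
-- optimizer typically falls back to scanning the whole table.
SELECT *
FROM   sales.invoice
WHERE  invoice_number LIKE '%1002345%';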

Huge gap in increment value of identity column

江枫思渺然 Submitted on 2019-12-04 06:22:07
Question: I have created a table with an identity column. When I insert values into that table, the identity column shows huge gaps between the generated values; the identity value jumps from 6 to 10001. This is the output ordered by DepartmentID: [Output screenshot here.] This is the table I have created:

CREATE TABLE STG2.Department
(
    DepartmentID INT GENERATED ALWAYS AS IDENTITY
        (START WITH 1 INCREMENT BY 1 CYCLE),
    Name VARCHAR(100),
    GroupName VARCHAR(100)
)
PRIMARY INDEX (DepartmentID);

This is how I am…
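Gaps like this are expected with Teradata identity columns, which hand out numbers to each vproc in pre-allocated batches rather than strictly sequentially. If a gap-free sequence is only needed for presentation, one common workaround (a sketch using the question's own table) is to derive it at query time:

-- Gap-free numbering computed at read time; DepartmentID keeps whatever
-- values the identity column actually generated.
SELECT ROW_NUMBER() OVER (ORDER BY DepartmentID) AS DepartmentSeq,
       DepartmentID,
       Name,
       GroupName
FROM   STG2.Department;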

Difference between “TOP” and “SAMPLE” in TeraData SQL

一曲冷凌霜 Submitted on 2019-12-04 05:13:14
What is the difference between TOP and SAMPLE in Teradata SQL? Are they the same? From TOP vs SAMPLE: TOP 10 means "the first 10 rows in sorted order". If you don't have an ORDER BY, then by extension it is interpreted as asking for "any 10 rows" in any order. The optimizer is free to select the cheapest plan it can find and stop processing as soon as it has found enough rows to return. If this query is the only thing running on your system, TOP may appear to always give you exactly the same answer, but that behavior is NOT guaranteed. SAMPLE, as you have observed, does extra processing…
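A short sketch contrasting the two (the table name is hypothetical). Only the ORDER BY makes TOP deterministic, while SAMPLE draws a random subset of rows:

-- Deterministic: the 10 most recent orders.
SELECT TOP 10 *
FROM   sales.orders
ORDER  BY order_date DESC;

-- Without ORDER BY: any 10 rows the optimizer reaches first.
SELECT TOP 10 *
FROM   sales.orders;

-- Random subset: 10 rows sampled from anywhere in the table.
SELECT *
FROM   sales.orders
SAMPLE 10;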

Export From Teradata Table to CSV

牧云@^-^@ Submitted on 2019-12-03 16:49:13
Is it possible to transfer the data from a Teradata table into a .csv file directly? The problem is that my table has more than 18 million rows. If yes, please tell me the process. — For a table that size I would suggest using the FastExport utility. It does not natively support a CSV export, but you can mimic the behavior. Teradata SQL Assistant will export to a CSV, but it would not be appropriate to use with a table of that size. BTEQ is another alternative that may be acceptable for a one-time dump of the table. Do you have access to any of these? — Syed Ghazanfer: I use the following code to export…
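For reference, a minimal BTEQ sketch of the usual CSV trick (the logon string, file path, table, and column names are placeholders; numeric columns are cast to VARCHAR so everything can be concatenated with commas, and further .SET commands may be needed to suppress report headings):

.LOGON tdpid/username,password
.EXPORT REPORT FILE = /tmp/export.csv
.SET WIDTH 65531
.SET TITLEDASHES OFF

-- Build each output line as a single comma-separated string.
SELECT TRIM(CAST(col1 AS VARCHAR(20))) || ',' ||
       TRIM(col2) || ',' ||
       TRIM(col3)
FROM   mydb.mytable;

.EXPORT RESET
.LOGOFF
.QUIT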

How to set up .net teradata connection in c#?

走远了吗. Submitted on 2019-12-03 12:52:47
I am trying to connect to Teradata with C#. I am using the sample code from this website:

using System;
using System.Collections.Generic;
using System.Text;
using Teradata.Client.Provider;

namespace Teradata.Client.Provider.HelloWorld
{
    class HelloWorld
    {
        static void Main(string[] args)
        {
            using (TdConnection cn = new TdConnection("Data Source = x;User ID = y;Password = z;"))
            {
                cn.Open();
                TdCommand cmd = cn.CreateCommand();
                cmd.CommandText = "SELECT DATE";
                using (TdDataReader reader = cmd.ExecuteReader())
                {
                    reader.Read();
                    DateTime date = reader.GetDate(0);
                    Console.WriteLine("Teradata Database…

MAX() and MAX() OVER PARTITION BY produces error 3504 in Teradata Query

一曲冷凌霜 Submitted on 2019-12-03 07:04:41
I am trying to produce a results table with the last completed course date for each course code, as well as the last completion date overall for each employee. Below is my query:

SELECT employee_number,
       MAX(course_completion_date) OVER (PARTITION BY course_code) AS max_course_date,
       MAX(course_completion_date) AS max_date
FROM   employee_course_completion
WHERE  course_code IN ('M910303', 'M91301R', 'M91301P')
GROUP  BY employee_number

This query produces the following error:

3504 : Selected non-aggregate values must be part of the associated group

If I remove the MAX() OVER (PARTITION BY...)…
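The 3504 error comes from mixing a windowed MAX with GROUP BY: course_code appears in the OVER clause but not in the GROUP BY list. One common way around it (a sketch, not necessarily the fix the thread settled on) is to express both maxima as window functions and drop the GROUP BY:

SELECT DISTINCT
       employee_number,
       course_code,
       -- Last completion date for this course code.
       MAX(course_completion_date) OVER (PARTITION BY course_code) AS max_course_date,
       -- Last completion date overall for this employee.
       MAX(course_completion_date) OVER (PARTITION BY employee_number) AS max_date
FROM   employee_course_completion
WHERE  course_code IN ('M910303', 'M91301R', 'M91301P');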

SQL SELECT multi-columns INTO multi-variable

☆樱花仙子☆ Submitted on 2019-12-03 05:23:40
Question: I'm converting SQL from Teradata to SQL Server. In Teradata, they have the format

SELECT col1, col2
FROM table1
INTO @variable1, @variable2

In SQL Server, I found

SET @variable1 = ( SELECT col1 FROM table1 );

but that only allows a single column/variable per statement. How can I assign two or more variables using a single SELECT statement?

Answer 1:

SELECT @variable1 = col1,
       @variable2 = col2
FROM   table1

Answer 2:

SELECT @var = col1,
       @var2 = col2
FROM   Table

Here is some interesting information about SET / SELECT…
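A self-contained sketch of the pattern from the answers (table, column, and variable names are placeholders). Note that if the query returns more than one row, the variables end up holding values from the last row processed, so it is worth restricting the result to a single row:

DECLARE @variable1 INT,
        @variable2 VARCHAR(100);

-- Both assignments happen in one pass over the table.
SELECT @variable1 = col1,
       @variable2 = col2
FROM   table1
WHERE  id = 42;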

Issue with querying Teradata in Python/Pyodbc

一个人想着一个人 Submitted on 2019-12-03 02:48:00
I'm trying to query a Teradata database in Python with PyODBC. The connection to the database is established fine; however, when I try to fetch a result, I run into this error: "Invalid literal for Decimal: u''". Help please. I am on RHEL 6, with Python 2.7.3. Here is the code and result:

import pyodbc

sql = "select * from table"
pyodbc.pooling = False
cnx = pyodbc.connect("DRIVER={Teradata};DBCNAME=host;DATABASE=database; AUTHENTICATION=LDAP;UID=user;PWD=password", autocommit=True, ANSI=True)
cursor = cnx.cursor()
rows = cursor.execute(sql).fetchone()

InvalidOperation Traceback (most recent call…