teradata

Teradata: Results with duplicate values converted into comma delimited strings

*爱你&永不变心* submitted on 2021-01-29 22:17:01

Question: I have a typical table where each row represents a customer–product holding. If a customer has multiple products, there will be multiple rows with the same customer ID. I'm trying to roll this up so that each customer is represented by a single row, with all product codes concatenated into a single comma-delimited string. The diagram below illustrates this. After googling, I managed to get it working with the XMLAGG function, but this only worked on a small sample of data, when …
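When XMLAGG works on a sample but fails on the full data set, a common cause is the concatenated intermediate result overflowing the default output size; casting the aggregate to a wider VARCHAR is the usual workaround. A minimal sketch, assuming a hypothetical table cust_product(customer_id, product_code):

```sql
-- Sketch: one row per customer, product codes as a comma-delimited string.
-- cust_product and its columns are hypothetical names.
SELECT customer_id,
       TRIM(TRAILING ',' FROM
            (XMLAGG(TRIM(product_code) || ',' ORDER BY product_code)
             (VARCHAR(10000)))) AS product_list
FROM cust_product
GROUP BY customer_id;
```

The `(VARCHAR(10000))` cast sizes the aggregated string; widen it if customers can hold many products.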

Row size limitation in Teradata

僤鯓⒐⒋嵵緔 submitted on 2021-01-29 10:56:43

Question: So I know that Teradata has a limitation of 64 KB per row. I have a wide table that I need to export to Teradata, and some of its fields (VARCHAR(5000)) contribute to that width. We have seen cases where the row size exceeds this limitation. So, my question is, how can we overcome this situation? We cannot trim the large VARCHARs in our source, as they are necessary to the downstream business users. Splitting up the table is always an option, but are there any other ways in Teradata …
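Besides splitting the table, one commonly cited option is declaring the widest columns as CLOB, since LOB values are stored outside the base row and only a small descriptor counts against the row-size limit. A sketch combining both ideas, with hypothetical table and column names:

```sql
-- Sketch: vertical split of a wide table (all names hypothetical).
-- Narrow "hot" columns stay in the base table:
CREATE TABLE base_narrow (
    id    INTEGER NOT NULL,
    col_a VARCHAR(100)
) PRIMARY INDEX (id);

-- Wide text columns move to a companion table keyed the same way;
-- CLOB values are stored off-row, so they do not hit the row-size cap:
CREATE TABLE base_wide_text (
    id        INTEGER NOT NULL,
    long_note CLOB(2M)
) PRIMARY INDEX (id);
```

Note the trade-off: CLOB columns carry restrictions (e.g., on indexing and some load utilities), so check them against your export tooling first.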

Optimizing huge value list in Teradata without volatile tables

拜拜、爱过 提交于 submitted on 2021-01-28 04:11:30

Question: I have a value list used like `where a.c1 in ( list )`. Shoving the list into a volatile table is the best way out. However, this is being done via Cognos, and IBM's tool isn't smart enough to know what Teradata's volatile table is. I wish it were, so I could use exclusion logic (EXISTS) against the volatile table contents. So, without a volatile table, I have a value list `where a.c1 in ( list )` with around 5K values. Keeping that list in the report is proving expensive. I wondered if it was possible …
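When the client tool cannot manage volatile tables, a pre-created GLOBAL TEMPORARY table is a common alternative: its definition lives permanently in the data dictionary, while its rows are private to each session, so the reporting tool only needs to INSERT and SELECT. A sketch with hypothetical names:

```sql
-- Sketch: a GLOBAL TEMPORARY table as a volatile-table substitute
-- (gtt_value_list and my_table are hypothetical names).
-- Created once by a DBA; the definition persists, the rows do not.
CREATE GLOBAL TEMPORARY TABLE gtt_value_list (
    c1 VARCHAR(50) NOT NULL
) ON COMMIT PRESERVE ROWS
PRIMARY INDEX (c1);

-- At report time the tool loads the 5K values, then joins:
-- INSERT INTO gtt_value_list VALUES (...);
SELECT a.*
FROM my_table a
WHERE EXISTS (SELECT 1 FROM gtt_value_list v WHERE v.c1 = a.c1);
```

This replaces the long IN-list with a join the optimizer can plan properly, and EXISTS/NOT EXISTS exclusion logic works against it as the question wanted.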

How to get lastaltertimestamp from Hive table?

烈酒焚心 submitted on 2021-01-27 13:56:04

Question: Teradata has the concept of lastaltertimestamp, which is the last time an ALTER TABLE command was executed on a table. lastaltertimestamp can be queried. Does Hive have a similar value that can be queried? The timestamp returned by hdfs dfs -ls /my/hive/file does not reflect ALTER TABLE commands, so ALTER TABLE must not modify the files backing the Hive table. describe formatted does not provide a last-alter-timestamp either. Thanks

Answer 1: Hive stores metadata in a database, so files never get …
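One metastore-backed value worth checking is the `transient_lastDdlTime` table property, which Hive updates on DDL operations; it can be read without touching HDFS. A sketch in HiveQL, with a hypothetical table name (behavior varies somewhat across Hive versions, so verify on yours):

```sql
-- Sketch: read the last-DDL timestamp recorded in the Hive metastore.
-- my_db.my_table is a hypothetical name; the value returned is a
-- Unix epoch time in seconds.
SHOW TBLPROPERTIES my_db.my_table ('transient_lastDdlTime');
```

The same property also appears in the "Table Parameters" section of `DESCRIBE FORMATTED` output, though it is easy to overlook there.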

Group by Concat Teradata

耗尽温柔 submitted on 2021-01-27 06:05:32

Question: I have a problem with a table: I would like to concatenate a string field using a GROUP BY. I have this situation:

    USER | TEXT
    A    | 'hello'
    A    | 'by'
    B    | 'hi'
    B    | '9'
    B    | 'city'

I would like to obtain this result:

    USER | TEXT
    A    | 'hello by'
    B    | 'hi 9 city'

Answer 1: You can try using XMLAGG:

    SELECT User,
           TRIM(TRAILING ' ' FROM (XMLAGG(TRIM(text) || ' ' ORDER BY ColumnPosition) (VARCHAR(1000))))
    FROM table
    GROUP BY 1

Source: https://stackoverflow.com/questions/46119178/group-by-concat-teradata

Conversion of Teradata SQL to MySQL SQL

假装没事ソ submitted on 2020-06-29 05:17:36

Question: I want to convert a Teradata query into a MySQL query. The datatype of START_TIME and END_TIME is TIMESTAMP(6).

Teradata query:

    select START_TIME, END_TIME,
           (EXTRACT(DAY FROM (END_TIME - START_TIME DAY(4) TO SECOND)) * 86400)
    from base.xyz

The result looks like:

    START_TIME                 END_TIME                   CALCULATED_FIELD
    9/15/2017 16:22:52.000000  9/19/2017 15:14:02.000000  259,200
    7/26/2014 07:00:04.000000  7/28/2014 12:55:55.000000  172,800
    6/8/2018 16:59:19.000000   6/11/2018 09:56:23.000000  172,800
    10/6/2017 17:52:06.000000  …
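The Teradata expression takes the whole-day component of the interval and multiplies it by 86,400 seconds. MySQL's TIMESTAMPDIFF with the DAY unit also returns complete days between two timestamps, so a likely equivalent (table and column names taken from the question) is:

```sql
-- Sketch: MySQL equivalent of the Teradata day-interval expression.
-- TIMESTAMPDIFF(DAY, start, end) yields complete elapsed days, matching
-- EXTRACT(DAY FROM (END_TIME - START_TIME DAY(4) TO SECOND)).
SELECT START_TIME,
       END_TIME,
       TIMESTAMPDIFF(DAY, START_TIME, END_TIME) * 86400 AS CALCULATED_FIELD
FROM base.xyz;
```

For the first sample row (9/15 16:22:52 to 9/19 15:14:02), 3 complete days have elapsed, giving 3 × 86,400 = 259,200, which matches the Teradata output.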