Redshift: Convert comma-delimited values into rows

Submitted by 依然范特西╮ on 2019-12-17 09:21:14

Question


I am wondering how to convert comma-delimited values into rows in Redshift. I am afraid that my own solution isn't optimal. Please advise. I have a table where one of the columns contains comma-separated values. For example:

I have:

user_id|user_name|user_action
-----------------------------
1      | Shone   | start,stop,cancell...

I would like to see:

user_id|user_name|parsed_action 
------------------------------- 
1      | Shone   | start        
1      | Shone   | stop         
1      | Shone   | cancell      
....

Answer 1:


A slight improvement over the existing answer is to use a second "numbers" table that enumerates all of the possible list lengths and then use a cross join to make the query more compact.

Redshift does not have a straightforward method for creating a numbers table that I am aware of, but we can use a bit of a hack from https://www.periscope.io/blog/generate-series-in-redshift-and-mysql.html to create one using row numbers.

Specifically, if we assume the number of rows in cmd_logs is larger than the maximum number of commas in the user_action column, we can create a numbers table by counting rows. To start, let's assume there are at most 99 commas in the user_action column:

select 
  (row_number() over (order by true))::int as n
into numbers
from cmd_logs
limit 100;

If we want to get fancy, we can compute the number of commas from the cmd_logs table to create a more precise set of rows in numbers:

select
  n::int
into numbers
from
  (select
      row_number() over (order by true) as n
   from cmd_logs) seq
cross join
  (select
      max(regexp_count(user_action, '[,]')) as max_num
   from cmd_logs) counts
where
  n <= max_num + 1;

Once the numbers table exists, we can join against it. Note that split_part returns an empty string when n exceeds the number of elements in the list, which is why the WHERE clause below filters those rows out:

select
  user_id, 
  user_name, 
  split_part(user_action,',',n) as parsed_action 
from
  cmd_logs
cross join
  numbers
where
  split_part(user_action,',',n) is not null
  and split_part(user_action,',',n) != '';



Answer 2:


You can get the expected result with the following query. I'm using UNION ALL to convert the column values into rows.

select user_id, user_name, split_part(user_action,',',1) as parsed_action from cmd_logs
union all
select user_id, user_name, split_part(user_action,',',2) as parsed_action from cmd_logs
union all
select user_id, user_name, split_part(user_action,',',3) as parsed_action from cmd_logs
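
One caveat: this only covers a fixed maximum number of values (three branches here), and split_part returns an empty string when the index exceeds the number of values, so rows with shorter lists produce blank parsed_action values. A minimal sketch that filters the blanks out, wrapping the same query in a subselect:

select user_id, user_name, parsed_action
from (
  select user_id, user_name, split_part(user_action, ',', 1) as parsed_action from cmd_logs
  union all
  select user_id, user_name, split_part(user_action, ',', 2) as parsed_action from cmd_logs
  union all
  select user_id, user_name, split_part(user_action, ',', 3) as parsed_action from cmd_logs
) actions
where parsed_action != '';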



Answer 3:


Another idea is to first transform the CSV string into a JSON array and then extract its elements, along the following lines:

... '["' || replace( user_action, '.', '", "' ) || '"]' AS replaced

... JSON_EXTRACT_ARRAY_ELEMENT_TEXT(replaced, numbers.n - 1) AS parsed_action

Where "numbers" is the table from the first answer. The advantage of this approach is the ability to use built-in JSON functionality.




Answer 4:


Here's my equally-terrible answer.

I have a users table, and then an events table with a column that is just a comma-delimited string of the users at said event, e.g.:

event_id | user_ids
1        | 5,18,25,99,105

In this case, I used LIKE with wildcards to build a new table that represents each event-user edge.

SELECT e.event_id, u.id as user_id
FROM events e
LEFT JOIN users u ON e.user_ids like '%' || u.id || '%'
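
One caveat: a bare wildcard match can produce false positives, since user id 5 would also match 105 or 25 in the list above. Padding both the list and the pattern with the delimiter avoids this; a minimal sketch:

SELECT e.event_id, u.id AS user_id
FROM events e
LEFT JOIN users u
  -- surround both sides with commas so '5' only matches ',5,'
  ON ',' || e.user_ids || ',' LIKE '%,' || u.id || ',%'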

It's not pretty, but I throw it in a WITH clause so that I don't have to run it more than once per query. I'll likely just build an ETL to create that table every night anyway.

Also, this only works if you have a second table with one row per unique possibility. If not, you could use LISTAGG to get a single cell with all your values, export that to a CSV, and re-upload it as a table.

Like I said: a terrible, no-good solution.




Answer 5:


Late to the party, but I got something working (albeit very slowly):

with nums as (
  select n::int as n
  from
    (select
        row_number() over (order by true) as n
     from table_with_enough_rows_to_cover_range) seq
  cross join
    (select
        max(json_array_length(json_column)) as max_num
     from table_with_json_column) counts
  where
    n <= max_num + 1
)
select *, json_extract_array_element_text(json_column, nums.n - 1) as parsed_json
from nums, table_with_json_column
where json_extract_array_element_text(json_column, nums.n - 1) != ''
  and nums.n <= json_array_length(json_column);

Thanks to Bob Baxley's answer for the inspiration.




Answer 6:


A small improvement over the answer above (https://stackoverflow.com/a/31998832/1265306) is to generate the numbers table using the following SQL, adapted from https://discourse.looker.com/t/generating-a-numbers-table-in-mysql-and-redshift/482:

SELECT 
  p0.n 
  + p1.n*2 
  + p2.n * POWER(2,2) 
  + p3.n * POWER(2,3)
  + p4.n * POWER(2,4)
  + p5.n * POWER(2,5)
  + p6.n * POWER(2,6)
  + p7.n * POWER(2,7) 
  as number  
INTO numbers
FROM  
  (SELECT 0 as n UNION SELECT 1) p0,  
  (SELECT 0 as n UNION SELECT 1) p1,  
  (SELECT 0 as n UNION SELECT 1) p2, 
  (SELECT 0 as n UNION SELECT 1) p3,
  (SELECT 0 as n UNION SELECT 1) p4,
  (SELECT 0 as n UNION SELECT 1) p5,
  (SELECT 0 as n UNION SELECT 1) p6,
  (SELECT 0 as n UNION SELECT 1) p7
ORDER BY 1
LIMIT 100

"ORDER BY" is there only in case you want paste it without the INTO clause and see the results




Answer 7:


You can use the COPY command to load your file into Redshift tables:

copy table_name
from 's3://mybucket/myfolder/my.csv'
CREDENTIALS 'aws_access_key_id=my_aws_acc_key;aws_secret_access_key=my_aws_sec_key'
delimiter ','

The delimiter ',' option tells COPY to split each input line on commas.

For more details on COPY command options, see http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html



Source: https://stackoverflow.com/questions/25112389/redshift-convert-comma-delimited-values-into-rows
