kiba-etl

Is there a sample implementation of a Kiba ETL job with CSV files in an S3 bucket as the source and another S3 bucket as the destination?

Question: I have a CSV file in S3 and I want to transform some columns and put the result in another S3 bucket, or sometimes in the same bucket but under a different folder. Can I achieve this using Kiba? If possible, do I need to store the CSV data in a database first, before the transformation and other steps?

Answer 1: Thanks for using Kiba! There is no such implementation sample available today. I'll provide vendor-supported S3 components as part of Kiba Pro in the future. That said, what you have in mind is definitely possible (I've done this for some clients), and there is definitely no need to store the CSV data in a database first.
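
Although no official sample exists, here is a minimal sketch of what such a job could look like, assuming the aws-sdk-s3 gem and AWS credentials/region configured in the environment. The S3CSVSource and S3CSVDestination classes, bucket names, keys and column names are all illustrative placeholders, not part of Kiba or Kiba Pro:

    require 'kiba'
    require 'aws-sdk-s3'
    require 'csv'

    # Illustrative source: streams rows out of a CSV object stored in S3.
    class S3CSVSource
      def initialize(bucket:, key:, client: Aws::S3::Client.new)
        @bucket = bucket
        @key = key
        @client = client
      end

      def each
        body = @client.get_object(bucket: @bucket, key: @key).body.read
        CSV.parse(body, headers: true).each { |row| yield row.to_h }
      end
    end

    # Illustrative destination: buffers rows, then writes one CSV object to S3.
    class S3CSVDestination
      def initialize(bucket:, key:, client: Aws::S3::Client.new)
        @bucket = bucket
        @key = key
        @client = client
        @rows = []
      end

      def write(row)
        @rows << row
      end

      def close
        return if @rows.empty?
        csv = CSV.generate do |out|
          out << @rows.first.keys
          @rows.each { |row| out << row.values }
        end
        @client.put_object(bucket: @bucket, key: @key, body: csv)
      end
    end

    job = Kiba.parse do
      # Bucket names, keys and the transform's column names are placeholders.
      source S3CSVSource, bucket: 'source-bucket', key: 'incoming/input.csv'
      transform { |row| row.merge('full_name' => "#{row['first_name']} #{row['last_name']}") }
      destination S3CSVDestination, bucket: 'destination-bucket', key: 'processed/output.csv'
    end

    Kiba.run(job)

Buffering all rows in memory keeps the destination simple; for large files, a streaming or multipart upload would be a better fit.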

Pass Parameters to Kiba run Method

Question: I'm trying to use something similar to the code used by the Kiba CLI, but programmatically, as follows:

    filename = './path/to/script.rb'
    script_content = IO.read(filename)
    job_definition = Kiba.parse(script_content, filename)
    Kiba.run(job_definition) # <= I want to pass additional parameters here

I'd like to be able to pass parameters via the run call besides the job_definition. It doesn't look like run supports this, but I figured I'd check.

Source: https://stackoverflow.com/questions
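
The question is left unanswered in this excerpt, but one workaround that stays within Kiba's public API (my assumption, not part of the original post) is to build the job with Kiba.parse and a block, so that parameters are captured as closure variables rather than being passed to Kiba.run. A minimal sketch, with CSVSource and CSVDestination as hypothetical helper classes defined only so it runs:

    require 'kiba'
    require 'csv'

    # Hypothetical file-based source, for illustration only.
    class CSVSource
      def initialize(filename:)
        @filename = filename
      end

      def each
        CSV.foreach(@filename, headers: true) { |row| yield row.to_h }
      end
    end

    # Hypothetical file-based destination, for illustration only.
    class CSVDestination
      def initialize(filename:)
        @csv = CSV.open(filename, 'w')
      end

      def write(row)
        @csv << row.keys unless @headers_written
        @headers_written = true
        @csv << row.values
      end

      def close
        @csv.close
      end
    end

    # The block passed to Kiba.parse is a closure, so it can see the method's
    # keyword arguments; this is how "parameters" reach the job definition.
    def build_job(input_file:, output_file:)
      Kiba.parse do
        source CSVSource, filename: input_file
        transform { |row| row } # no-op transform, just for illustration
        destination CSVDestination, filename: output_file
      end
    end

    Kiba.run(build_job(input_file: 'in.csv', output_file: 'out.csv'))

Note that Kiba.parse with a script string (as in the question) evaluates the script in its own context, so caller-side local variables are not visible there; the block form is what makes this closure-based pattern work.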
