AWS Glue to Redshift: Is it possible to replace, update or delete data?

Backend · Unresolved · 6 answers · 1923 views

执念已碎 2020-12-25 12:33

Here are some bullet points in terms of how I have things set up:

  • I have CSV files uploaded to S3 and a Glue crawler set up to create the table and schema.
6 Answers

    情书的邮戳
    2020-12-25 13:20

    This was the solution I got from AWS Glue Support:

    As you may know, although you can create primary keys, Redshift doesn't enforce uniqueness. Therefore, if you rerun Glue jobs, duplicate rows can get inserted. Some of the ways to maintain uniqueness are:

    1. Use a staging table to insert all rows and then perform an upsert/merge [1] into the main table; this has to be done outside of Glue.

    2. Add another column in your Redshift table [2], like an insert timestamp, to allow duplicates but still know which one came first or last, and then delete the duplicates afterwards if you need to.

    3. Load the previously inserted data into a dataframe and then compare it with the data to be inserted, to avoid inserting duplicates [3].
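
    Option 1's staging-table merge [1] boils down to a short SQL transaction run against Redshift (outside the Glue job, e.g. via a SQL client). A minimal sketch, assuming hypothetical table names `main_table`/`staging_table` and a key column `id`:

    ```python
    # Sketch of the Redshift staging-table merge pattern. Table and
    # column names are hypothetical; the generated SQL would be run
    # against Redshift with a client such as psycopg2, not inside Glue.

    def build_merge_sql(target: str, stage: str, key: str) -> str:
        """Delete rows in the target that are being replaced by the
        staging table, then insert the full staging contents, all in
        one transaction so readers never see a partial state."""
        return f"""
    BEGIN;
    DELETE FROM {target}
    USING {stage}
    WHERE {target}.{key} = {stage}.{key};
    INSERT INTO {target}
    SELECT * FROM {stage};
    DROP TABLE {stage};
    COMMIT;
    """.strip()

    print(build_merge_sql("main_table", "staging_table", "id"))
    ```

    The Glue job would load the batch into `staging_table` first; the merge then replaces any existing rows with the same key instead of duplicating them.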
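
    Option 2's cleanup step, keeping only the most recent row per key by its insert timestamp, can be sketched in plain Python (field names `id` and `load_ts` are hypothetical; in practice this would be a DELETE against the Redshift table):

    ```python
    # Sketch of option 2: every inserted row carries a timestamp, so
    # after a rerun the duplicates can be told apart and only the most
    # recent row per key is kept. Field names are hypothetical.

    def keep_latest(rows, key="id", ts="load_ts"):
        """Return one row per key: the one with the greatest timestamp."""
        latest = {}
        for row in rows:
            k = row[key]
            if k not in latest or row[ts] > latest[k][ts]:
                latest[k] = row
        return sorted(latest.values(), key=lambda r: r[key])

    rows = [
        {"id": 1, "load_ts": "2020-12-25T12:00", "val": "old"},
        {"id": 1, "load_ts": "2020-12-25T13:00", "val": "new"},  # rerun duplicate
        {"id": 2, "load_ts": "2020-12-25T12:00", "val": "only"},
    ]
    print(keep_latest(rows))
    ```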
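
    Option 3 is a left anti-join: drop incoming rows whose key already exists before writing. A pandas sketch of the idea (the Glue job itself would use Spark DataFrames, but the join logic is the same; column names are hypothetical):

    ```python
    import pandas as pd

    # Sketch of option 3: compare the incoming batch against what was
    # previously inserted and keep only genuinely new rows.

    existing = pd.DataFrame({"id": [1, 2], "val": ["a", "b"]})   # already in Redshift
    incoming = pd.DataFrame({"id": [2, 3], "val": ["b", "c"]})   # new CSV batch

    # Left merge on the key with an indicator column, then keep only
    # rows that appear in the incoming batch alone ("left_only").
    merged = incoming.merge(existing[["id"]], on="id", how="left", indicator=True)
    to_insert = merged[merged["_merge"] == "left_only"].drop(columns="_merge")
    print(to_insert)
    ```

    Only `to_insert` would then be written to Redshift, so reruns over the same CSVs don't add duplicate rows.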

    [1] - http://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-upsert.html and http://www.silota.com/blog/amazon-redshift-upsert-support-staging-table-replace-rows/

    [2] - https://github.com/databricks/spark-redshift/issues/238

    [3] - https://docs.databricks.com/spark/latest/faq/join-two-dataframes-duplicated-column.html
