This is the solution I got from AWS Glue Support:
As you may know, although you can create primary keys, Redshift doesn't enforce uniqueness. Therefore, if you rerun Glue jobs, duplicate rows can be inserted. Some ways to maintain uniqueness are:
Use a staging table to insert all rows, then perform an upsert/merge [1] into the main table; this has to be done outside of Glue.
Add another column to your Redshift table [2], such as an insert timestamp, to allow duplicates but know which row came first or last, and then delete the duplicates afterwards if you need to.
Load the previously inserted data into a DataFrame and compare it against the data to be inserted, to avoid inserting duplicates [3].
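The staging-table upsert in option 1 can be sketched as follows. This uses SQLite as a local stand-in for Redshift (which, like SQLite here, does not enforce the primary key); the table and column names are hypothetical. On Redshift you would run the same DELETE-then-INSERT statements through psycopg2 or a JDBC client after the Glue load finishes, and use TRUNCATE instead of DROP if you reuse the staging table:

```python
import sqlite3

# Hypothetical schema: "target" is the main table, "staging" receives each batch.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE target (id INTEGER, value TEXT)")
cur.execute("CREATE TABLE staging (id INTEGER, value TEXT)")

# Existing data, plus a rerun batch that repeats a key (id=1).
cur.execute("INSERT INTO target VALUES (1, 'old'), (2, 'keep')")
cur.execute("INSERT INTO staging VALUES (1, 'new'), (3, 'fresh')")

# Merge: delete target rows that are about to be replaced, then insert the
# staged batch; committing once keeps the swap atomic for readers.
cur.execute("DELETE FROM target WHERE id IN (SELECT id FROM staging)")
cur.execute("INSERT INTO target SELECT * FROM staging")
cur.execute("DROP TABLE staging")
conn.commit()

rows = sorted(cur.execute("SELECT id, value FROM target").fetchall())
print(rows)  # [(1, 'new'), (2, 'keep'), (3, 'fresh')]
```

The rerun batch replaces the stale row for id=1 instead of duplicating it, which is the behavior Redshift's missing uniqueness enforcement would otherwise cost you.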
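Option 3, comparing against previously inserted data, amounts to an anti-join on the key column. In a real Glue job this would be a PySpark left-anti join (`incoming_df.join(existing_df, on="id", how="left_anti")`) between the incoming DataFrame and the rows read back from Redshift; plain Python stands in here, and the `id` key column is a hypothetical example:

```python
# Rows already in Redshift (read back into a DataFrame in a real job).
existing_rows = [{"id": 1, "value": "old"}, {"id": 2, "value": "keep"}]
# Incoming batch from a rerun, containing one duplicate key (id=1).
incoming_rows = [{"id": 1, "value": "dup"}, {"id": 3, "value": "fresh"}]

existing_keys = {row["id"] for row in existing_rows}

# Keep only rows whose key is not already present -- the anti-join.
to_insert = [row for row in incoming_rows if row["id"] not in existing_keys]
print(to_insert)  # [{'id': 3, 'value': 'fresh'}]
```

Note this approach skips duplicate keys entirely rather than refreshing them; if reruns can carry corrected values, the staging-table upsert is the safer choice.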
[1] - http://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-upsert.html and http://www.silota.com/blog/amazon-redshift-upsert-support-staging-table-replace-rows/
[2] - https://github.com/databricks/spark-redshift/issues/238
[3] - https://docs.databricks.com/spark/latest/faq/join-two-dataframes-duplicated-column.html