Here are some bullet points on how I have things set up:

- In my testing (with the same scenario), the bookmark functionality is not working: duplicate data gets inserted when the job is run multiple times.
- I resolved this by removing the files from the S3 location daily (via a Lambda function) and by implementing staging and target tables, so rows are inserted or updated based on the matching key columns.
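The staging-to-target upsert described above can be sketched as follows. This is a minimal, self-contained Python illustration of the semantics (the `upsert` function and dict-based rows are my own invention for clarity; in practice this would be a SQL `MERGE`/upsert or a DataFrame join inside the Glue job):

```python
def upsert(target, staging, key_columns):
    """Merge staging rows into target: rows whose key columns match an
    existing target row replace it (update); all other rows are appended
    (insert). Running this repeatedly with the same staging data is
    idempotent, so reprocessed files no longer create duplicates."""
    # Index existing target rows by their key-column tuple.
    index = {tuple(row[k] for k in key_columns): i
             for i, row in enumerate(target)}
    for row in staging:
        key = tuple(row[k] for k in key_columns)
        if key in index:
            target[index[key]] = row        # matching key -> update in place
        else:
            index[key] = len(target)        # new key -> insert
            target.append(row)
    return target
```

Because the merge is keyed, re-running the job against already-processed files overwrites the same rows instead of inserting them again, which is what makes this a workable fallback when bookmarks misbehave.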