I would like to copy all the DynamoDB tables to another AWS account without using S3 to save the data. I saw solutions that copy a table with Data Pipeline, but all of them use S3 to save the data.
S3 is definitely not a bottleneck here. I would argue that for 99% of use cases you should do it with Data Pipeline + S3, which is the best practice recommended by AWS. I have provided a more detailed answer on this here: https://stackoverflow.com/a/57465721/126382
The real question is whether you can organize the other systems and clients that read/write the data live so that the migration causes no downtime. If downtime is your biggest concern, then you want to engineer a custom solution that ensures all writes go to the DDB tables in both accounts, then switch the clients that read data over to the destination DDB table before you finally switch the clients that write data. A couple of other flavors of this migration plan are also possible.
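To make the dual-write phase concrete, here is a minimal sketch of a wrapper that mirrors every write to both accounts. All names here (`DualWriteTable`, `FakeTable`) are hypothetical, and the in-memory stand-in exists only so the sketch runs without AWS credentials; in practice you would pass two boto3 `Table` resources, each created with credentials for its own account.

```python
class DualWriteTable:
    """Sketch: forward each write to the source table, then mirror it
    to the destination table in the other account."""

    def __init__(self, primary, secondary):
        self.primary = primary      # source-account table (authoritative)
        self.secondary = secondary  # destination-account table (mirror)

    def put_item(self, item):
        # The source account stays authoritative: fail loudly if it rejects the write.
        self.primary.put_item(Item=item)
        try:
            # Mirror to the destination account; a failure here should be
            # logged or queued for retry rather than surfaced to the client.
            self.secondary.put_item(Item=item)
        except Exception as exc:
            print(f"destination write failed, needs retry: {exc}")


# In-memory stand-in so the sketch runs without AWS; a real setup would use
# boto3.resource("dynamodb").Table("...") objects for each account instead.
class FakeTable:
    def __init__(self):
        self.items = {}

    def put_item(self, Item):
        self.items[Item["pk"]] = Item


source, destination = FakeTable(), FakeTable()
table = DualWriteTable(source, destination)
table.put_item({"pk": "user#1", "name": "alice"})
```

Once reads have been switched to the destination table and both copies are confirmed in sync, the dual-write wrapper can be dropped and writers pointed directly at the destination.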