I would like to copy all the DynamoDB tables to another AWS account without using S3 to save the data. I saw solutions to copy a table with Data Pipeline, but all of them use S3 to save the data.
Reading from and writing to S3 is not going to be your bottleneck.
While scanning from DynamoDB is very fast, writing the items to the destination table is the slow part: you can only push about 1,000 write capacity units per second into any single partition. So I wouldn't worry about the intermediate S3 storage.
However, Data Pipeline is not the most efficient way of copying one table to another, either.
If you need speedy transfers, your best bet is to implement your own solution: provision the destination table for the transfer throughput you want (but be careful about undesired partition splits), then run a parallel scan with multiple threads, each of which also writes its items to the destination table.
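To make that concrete, here is a minimal sketch of the parallel scan-and-write approach using the AWS SDK for Java v1. The table names (`source-table`, `dest-table`), credential profile names (`source`, `dest`), and the segment count are all placeholders for your own setup; error handling, retries, and throughput throttling are omitted.

```java
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;
import com.amazonaws.services.dynamodbv2.model.ScanRequest;
import com.amazonaws.services.dynamodbv2.model.ScanResult;

import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TableCopier {

    public static void main(String[] args) throws InterruptedException {
        // One client per account; the "source" and "dest" credential
        // profiles are placeholders for however you authenticate.
        AmazonDynamoDB source = AmazonDynamoDBClientBuilder.standard()
                .withCredentials(new ProfileCredentialsProvider("source"))
                .build();
        AmazonDynamoDB dest = AmazonDynamoDBClientBuilder.standard()
                .withCredentials(new ProfileCredentialsProvider("dest"))
                .build();

        final int totalSegments = 8; // tune to the throughput you provisioned
        ExecutorService pool = Executors.newFixedThreadPool(totalSegments);

        for (int i = 0; i < totalSegments; i++) {
            final int segment = i;
            pool.submit(() -> {
                Map<String, AttributeValue> lastKey = null;
                do {
                    // Each thread scans its own segment of the source table.
                    ScanRequest scan = new ScanRequest()
                            .withTableName("source-table")
                            .withTotalSegments(totalSegments)
                            .withSegment(segment)
                            .withExclusiveStartKey(lastKey);
                    ScanResult result = source.scan(scan);

                    // Write each item straight to the destination table,
                    // with no intermediate storage.
                    for (Map<String, AttributeValue> item : result.getItems()) {
                        dest.putItem(new PutItemRequest("dest-table", item));
                    }
                    lastKey = result.getLastEvaluatedKey();
                } while (lastKey != null); // page until the segment is exhausted
            });
        }
        pool.shutdown();
        pool.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS);
    }
}
```

For higher throughput you could replace the per-item `PutItem` calls with `BatchWriteItem` (up to 25 items per request), but then you also have to handle unprocessed items and retries, so the sketch keeps the simpler call.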
There is an open-source Java implementation in the AWS Labs repository that you can use as a starting point:
https://github.com/awslabs/dynamodb-cross-region-library