I need to load N (around 50) tables from a Source DB to a Target one. Each table is different from the others (so different metadata); I thought I could use a parent pkg to
There are many factors which have an impact on what scenario to choose.
But in general:
- For small tables with relatively few rows, you can put multiple sources/destinations in a single data flow.
- If you have complex ETL for a source/destination, then it is better to put it in a separate data flow task for clarity.
- If you need to define the sequence of execution, you have to use multiple data flow tasks, as you cannot control the order of execution of multiple sources/destinations within a single data flow task.
- Whenever you need a different transaction isolation level or behavior, you have to put the loads into separate data flows.
- Whenever you are unsure about the impact of the ETL on the source system, put the loads into separate data flows, as this will make it easier to optimize the execution order later.
- If you have large tables, put them into separate data flow tasks, as this will allow you to optimize buffer sizes per table and tune the ETL process for each of them (see the sketch after this list).
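To make the per-table tuning concrete, here is a minimal sketch of the same ideas outside SSIS: Python with pyodbc, where a per-table fetch size plays the role of the data flow buffer size and each table gets its own connections. The connection strings, table names, and chunk sizes are all hypothetical, and the target tables are assumed to already exist:

```python
import pyodbc

# Hypothetical connection strings -- adjust for your environment.
SOURCE_DSN = "DSN=SourceDB"
TARGET_DSN = "DSN=TargetDB"

# Per-table settings: bigger tables get bigger fetch chunks, which plays
# the same role as tuning buffer sizes per data flow task.
TABLES = [
    {"name": "dbo.SmallLookup", "chunk_rows": 1_000},    # hypothetical table
    {"name": "dbo.BigFactTable", "chunk_rows": 50_000},  # hypothetical table
]

def copy_table(name: str, chunk_rows: int) -> None:
    """Copy one table from source to target in fixed-size chunks.

    Assumes the target table already exists with the same schema.
    """
    with pyodbc.connect(SOURCE_DSN) as src, pyodbc.connect(TARGET_DSN) as dst:
        # Each table gets its own connections, so the isolation level
        # can differ per load (cf. the isolation-level point above).
        src.execute("SET TRANSACTION ISOLATION LEVEL READ COMMITTED")

        src_cur = src.cursor()
        dst_cur = dst.cursor()
        dst_cur.fast_executemany = True  # batch the inserts on the target side

        src_cur.execute(f"SELECT * FROM {name}")
        columns = [c[0] for c in src_cur.description]
        insert_sql = (
            f"INSERT INTO {name} ({', '.join(columns)}) "
            f"VALUES ({', '.join('?' for _ in columns)})"
        )

        while True:
            rows = src_cur.fetchmany(chunk_rows)
            if not rows:
                break
            dst_cur.executemany(insert_sql, [tuple(r) for r in rows])
        dst.commit()

# Running the copies one after another makes the order explicit,
# just like chaining data flow tasks with precedence constraints.
for t in TABLES:
    copy_table(t["name"], t["chunk_rows"])
```

The point is not the Python itself: each table having its own connection, its own chunk size, and its own position in the loop is exactly what separate data flow tasks give you in SSIS.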
So, from the above: if you have relatively small tables and straight source-to-destination mappings, there is no problem with having multiple sources/destinations in a single data flow.
In other cases it is better, or necessary, to put them into separate data flows, as this will allow you to optimize the ETL process from all three points of view:
- Load impact on the source systems
- Load impact on the destination systems
- Utilization of the machine on which the ETL process is running (CPU consumption, memory consumption, and overall throughput)
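On the last point, the main knob is how many of the separate loads you let run at the same time. A rough sketch of throttling parallelism, reusing the hypothetical copy_table() helper and TABLES list from the earlier sketch:

```python
from concurrent.futures import ThreadPoolExecutor

# MAX_PARALLEL is a hypothetical knob: raise it until the source, the
# target, or the ETL machine itself becomes the bottleneck, then back off.
MAX_PARALLEL = 4

with ThreadPoolExecutor(max_workers=MAX_PARALLEL) as pool:
    futures = [pool.submit(copy_table, t["name"], t["chunk_rows"]) for t in TABLES]
    for f in futures:
        f.result()  # re-raise any load failure instead of silently ignoring it
```

In SSIS the equivalent is arranging the data flow tasks in parallel branches and capping concurrency with the package-level MaxConcurrentExecutables property.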