When we use Hadoop, it generally means we are processing huge volumes of data. The end goal of that processing is to generate content/reports out of it.
So it broadly consists of 2 prime activities:
1) Loading / data processing
2) Generating content and using it for reporting, etc.
Loading / data processing -> Pig is helpful here.
It acts as an ETL layer (we can perform ETL operations using Pig scripts), as sketched below.
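For instance, a minimal Pig Latin script for an extract-transform-load pass might look like the following sketch (the input path, field names, and output path are hypothetical, only to illustrate the flow):

    -- Extract: load raw tab-separated data from HDFS (hypothetical path/schema)
    raw = LOAD '/data/web_logs' USING PigStorage('\t')
              AS (user_id:chararray, url:chararray, bytes:long);

    -- Transform: keep valid rows and aggregate bytes per user
    valid   = FILTER raw BY user_id IS NOT NULL;
    grouped = GROUP valid BY user_id;
    usage   = FOREACH grouped GENERATE group AS user_id,
                  SUM(valid.bytes) AS total_bytes;

    -- Load: store the processed result back into HDFS for downstream use
    STORE usage INTO '/data/processed/user_usage' USING PigStorage('\t');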
Once the data is processed, we can use Hive to generate reports from the processed result.
Hive: It's built on top of HDFS for data-warehouse-style processing.
We can easily generate ad hoc reports using Hive over the processed content produced by Pig, as in the example below.
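For example, a Hive external table can be defined over the directory written by the Pig sketch above, and ad hoc reports then become plain SQL queries (the table and column names here are hypothetical and simply mirror that sketch):

    -- Point an external table at the tab-separated output written by Pig
    CREATE EXTERNAL TABLE user_usage (
        user_id     STRING,
        total_bytes BIGINT
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    LOCATION '/data/processed/user_usage';

    -- Ad hoc report: top 10 users by total bytes consumed
    SELECT user_id, total_bytes
    FROM user_usage
    ORDER BY total_bytes DESC
    LIMIT 10;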