I commonly work with text files of ~20 GB in size, and I very often need to count the number of lines in a given file.
The way I do it now is just cat fn | wc -l, and it takes very long.
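(As a minimal local baseline, wc -l can also read the file directly, which skips the extra cat process and the pipe:

# assuming fn is a regular file on a local filesystem
wc -l fn

The bottleneck is still reading ~20 GB from disk, which is what motivates a distributed approach below.)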
If your data resides on HDFS, perhaps the fastest approach is to use Hadoop Streaming. Apache Pig's COUNT UDF operates on a bag and therefore uses a single reducer to compute the number of rows. With a simple Hadoop Streaming job, you can instead set the number of reducers manually:
$HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/hadoop-streaming.jar \
    -Dmapred.reduce.tasks=100 \
    -input <input_path> -output <output_path> \
    -mapper /bin/cat \
    -reducer "wc -l"
Note that I manually set the number of reducers to 100 here, but you can tune this parameter. Once the map-reduce job finishes, each reducer writes its result to a separate file; the final row count is the sum of the numbers returned by all reducers, which you can compute as follows:
$HADOOP_HOME/bin/hadoop fs -cat <output_path>/* | paste -sd+ | bc
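To make the summing step concrete: hadoop fs -cat concatenates the per-reducer part files (each holding a single line count), paste -sd+ joins those counts into one arithmetic expression, and bc evaluates it. A minimal sketch of the last two stages, using made-up counts in place of the reducer output:

# paste -s joins all input lines into one line, -d+ uses '+' as the delimiter,
# so this builds the expression 3+5+2 and bc prints 10
printf '3\n5\n2\n' | paste -sd+ | bc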