You don't need to write a MapReduce job to bulk load data into HBase. There are several ways to do it:
1) Use HBase tools like importtsv and completebulkload: http://hbase.apache.org/book/arch.bulk.load.html (an example invocation is sketched after this list).
2) Use Pig to bulk load data. Example:

A = LOAD '/hbasetest.txt' USING PigStorage(',') AS
    (strdata:chararray, intdata:long);
STORE A INTO 'hbase://mydata'
    USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
        'mycf:intdata');

HBaseStorage uses the first field of the relation (strdata here) as the HBase row key; the remaining fields are mapped, in order, to the columns you list (mycf:intdata).
3) Do it programmatically using the HBase API. I have a small project called hbaseloader that loads files into an HBase table (the table has just one column family, which holds the content of the file). Take a look at it; you only need to define the structure of your table and modify the code to read a CSV file and parse it. A minimal sketch of this approach is shown after this list.
4) Do it programmatically using a MapReduce job like in the example you mentioned (the mapper side of such a job is sketched at the end of this answer).
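
For option 1, an importtsv run looks roughly like the commands below. This is a sketch, not copied from your setup: the separator, column mapping, staging directory, table name (mydata), and input path all mirror the Pig example above and need to be adapted to your data.

# Parse the CSV and write HFiles to a staging directory
# (-Dimporttsv.bulk.output) instead of issuing Puts directly to the table
hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
  -Dimporttsv.separator=, \
  -Dimporttsv.columns=HBASE_ROW_KEY,mycf:intdata \
  -Dimporttsv.bulk.output=/tmp/hfiles \
  mydata /hbasetest.txt

# completebulkload step: move the generated HFiles into the table's regions
hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/hfiles mydata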
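
For option 3, this is a minimal sketch of what an API-based loader boils down to, written against the classic (pre-1.0) HBase client API. The table name, file path, and column/family names are assumptions carried over from the Pig example, not the actual hbaseloader code:

import java.io.BufferedReader;
import java.io.FileReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class CsvLoader {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mydata");   // assumed table name
        BufferedReader reader = new BufferedReader(new FileReader("/hbasetest.txt"));
        String line;
        while ((line = reader.readLine()) != null) {
            String[] fields = line.split(",");
            // First CSV field becomes the row key, second goes into mycf:intdata
            Put put = new Put(Bytes.toBytes(fields[0]));
            put.add(Bytes.toBytes("mycf"), Bytes.toBytes("intdata"),
                    Bytes.toBytes(Long.parseLong(fields[1])));
            table.put(put);
        }
        reader.close();
        table.close();
    }
}

Note that this issues regular Puts rather than a true bulk load, which is fine for modest amounts of data; for very large imports the HFile-based routes (options 1 and 4) are much faster.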
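
For option 4, the mapper side of such a job typically looks like the sketch below (again assuming the CSV layout from the Pig example, not the exact code from the example you mentioned). The driver would configure the job with HFileOutputFormat.configureIncrementalLoad(...) and finish with the completebulkload step shown under option 1:

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Emits one Put per CSV line; HFileOutputFormat turns them into HFiles
public class CsvMapper extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split(",");
        Put put = new Put(Bytes.toBytes(fields[0]));   // first field = row key
        put.add(Bytes.toBytes("mycf"), Bytes.toBytes("intdata"),
                Bytes.toBytes(Long.parseLong(fields[1])));
        context.write(new ImmutableBytesWritable(put.getRow()), put);
    }
}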