How to read files with an offset from Hadoop using Java

一个人想着一个人 · Submitted on 2019-12-04 05:17:10

I think seek() is the best option for reading a file starting at a given offset. It did not cause any problems for me, and the volume of data I was reading was in the 2-3 GB range. I have not encountered any issues so far, though we did use file splitting to handle larger data sets. Below is code you can use for reading; test it against your own files.
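The seek-then-read pattern itself can be illustrated with plain Java I/O before bringing Hadoop into the picture. This is a minimal sketch using `java.io.RandomAccessFile`; the class name, file name, and sample content are made up for illustration. Hadoop's `FSDataInputStream` exposes an analogous `seek(long)` method, so the same idea carries over to HDFS streams:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class OffsetReadDemo {

    // Read `len` bytes starting at byte `offset` from `file`.
    static String readAt(Path file, long offset, int len) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            raf.seek(offset);            // jump straight to the byte offset
            byte[] buf = new byte[len];
            raf.readFully(buf);          // read exactly `len` bytes or fail
            return new String(buf, StandardCharsets.US_ASCII);
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical sample file, created just for this demo.
        Path file = Files.createTempFile("offset-demo", ".txt");
        Files.write(file, "0123456789ABCDEF".getBytes(StandardCharsets.US_ASCII));
        System.out.println(readAt(file, 10, 6)); // prints "ABCDEF"
        Files.deleteIfExists(file);
    }
}
```

The key point is that nothing before the offset is read: the operating system (or, for HDFS, the DataNode) positions the stream directly, which is why this scales to multi-gigabyte files.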

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.json.JSONObject;

public class HDFSClientTesting {

    public static void main(String[] args) {
        try {
            Configuration conf = new Configuration();
            // Add the site configuration before creating the FileSystem,
            // so that FileSystem.get() sees it.
            conf.addResource(new Path("core-site.xml"));
            FileSystem fs = FileSystem.get(conf);

            String fileName = "/dir/00000027";
            long byteOffset = 3185041;

            SequenceFile.Reader rdr =
                new SequenceFile.Reader(fs, new Path(fileName), conf);
            Text key = new Text();
            Text value = new Text();

            // Note: seek() expects a record boundary; for an arbitrary
            // offset, rdr.sync(byteOffset) advances to the next sync point.
            rdr.seek(byteOffset);
            rdr.next(key, value);
            rdr.close();

            // The record value is JSON; pull out its "body" field.
            JSONObject jso = new JSONObject(value.toString());
            String content = jso.getString("body");
            System.out.println("\n\n\n" + content + "\n\n\n");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}