How to process/extract .pst using Hadoop MapReduce

Submitted by 痴心易碎 on 2019-12-10 11:30:25

Question


I am using MAPI tools (a Microsoft library for .NET) and then the Apache Tika libraries to process and extract the PST files from the Exchange server, which is not scalable.

How can I process/extract PST files the MapReduce way? Is there any tool or library available in Java that I can use in my MR jobs? Any help would be greatly appreciated.

The JPST library internally uses: PstFile pstFile = new PstFile(java.io.File)

The problem is that the Hadoop APIs don't give us anything close to java.io.File.

The following option is always there, but it is not efficient:

  // Pull the PST out of HDFS into a local temp file so JPST can open it
  File tempFile = File.createTempFile("myfile", ".tmp");
  fs.moveToLocalFile(new Path(<HDFS pst path>), new Path(tempFile.getAbsolutePath()));
  PstFile pstFile = new PstFile(tempFile);

Answer 1:


Take a look at Behemoth (http://digitalpebble.blogspot.com/2011/05/processing-enron-dataset-using-behemoth.html). It combines Tika and Hadoop.

I've also written my own Hadoop + Tika jobs. The pattern is (a rough sketch follows the list):

  1. Wrap all the PST files into sequence files or Avro files.
  2. Write a map-only job that reads the PST files from the Avro/sequence files and writes them to the local disk.
  3. Run Tika across the files.
  4. Write the output of Tika back into a sequence file.
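
A minimal sketch of steps 1–3 could look like the code below. The class names, paths, and the choice of a SequenceFile keyed by file name with the raw PST bytes as the value are illustrative assumptions, not Behemoth's actual API; the extraction goes through Tika's facade.

  import java.io.File;
  import java.io.IOException;
  import java.nio.file.Files;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.BytesWritable;
  import org.apache.hadoop.io.SequenceFile;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.tika.Tika;

  public class PstMrSketch {

      // Step 1: pack the .pst files from a local directory into one SequenceFile on HDFS,
      // keyed by file name, with the raw bytes as the value.
      public static void packPstFiles(Configuration conf, File localDir, Path seqFile) throws IOException {
          try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                  SequenceFile.Writer.file(seqFile),
                  SequenceFile.Writer.keyClass(Text.class),
                  SequenceFile.Writer.valueClass(BytesWritable.class))) {
              for (File pst : localDir.listFiles((dir, name) -> name.endsWith(".pst"))) {
                  writer.append(new Text(pst.getName()),
                                new BytesWritable(Files.readAllBytes(pst.toPath())));
              }
          }
      }

      // Steps 2-3: a map-only job (zero reducers) that spills each PST record to the
      // task's local disk, because the PST parsers need a real file, then runs Tika
      // over it and emits (file name, extracted text).
      public static class PstExtractMapper extends Mapper<Text, BytesWritable, Text, Text> {

          private final Tika tika = new Tika();

          @Override
          protected void map(Text fileName, BytesWritable pstBytes, Context context)
                  throws IOException, InterruptedException {
              File local = File.createTempFile("pst-", ".pst");
              try {
                  Files.write(local.toPath(), pstBytes.copyBytes());
                  // Tika picks the parser for the local file and returns plain text
                  context.write(fileName, new Text(tika.parseToString(local)));
              } catch (Exception e) {
                  // Count and skip unparseable archives rather than failing the whole job
                  context.getCounter("pst", "parse_failures").increment(1);
              } finally {
                  local.delete();
              }
          }
      }
  }

Note that this keeps each whole PST in memory as a single record, so it only suits archives that fit comfortably in a mapper's heap; the job would read the packed file with SequenceFileInputFormat and run with the number of reducers set to zero.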

Hope that helps.




Answer 2:


It's not possible to process a PST file directly in a mapper. After long analysis and debugging, it turned out that the API is not exposed properly and that it needs the local file system to store the extracted PST contents; it can't write directly to HDFS, and that's the bottleneck. On top of that, all of those APIs (the libraries that extract and process PST files) are not free.

What we can do is extract the PST files outside HDFS and then process the extracted content in MR jobs.
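
If you go that route, a minimal sketch could look like the following, assuming Tika is used for the local extraction; the paths and class name are illustrative, not from the thread:

  import java.io.File;
  import java.nio.charset.StandardCharsets;
  import java.nio.file.Files;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.tika.Tika;

  // Parse the PST locally on an edge node, write the plain text next to it,
  // then push only the text into HDFS so the MR jobs never touch PST files.
  public class LocalPstToHdfs {
      public static void main(String[] args) throws Exception {
          File pst = new File("/local/archives/mailbox.pst");   // PST outside HDFS
          File txt = new File("/local/archives/mailbox.txt");
          Files.write(txt.toPath(), new Tika().parseToString(pst).getBytes(StandardCharsets.UTF_8));

          // Upload only the extracted text; downstream MR jobs process plain text
          FileSystem fs = FileSystem.get(new Configuration());
          fs.copyFromLocalFile(new Path(txt.getAbsolutePath()), new Path("/data/mail-text/"));
      }
  }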



Source: https://stackoverflow.com/questions/10415213/how-to-process-extract-pst-using-hadoop-map-reduce
