Custom Binary Input - Hadoop

Submitted by 若如初见 on 2019-12-13 05:49:12

Question


I am developing a demo application in Hadoop, and my input consists of .mrc image files. I want to load them into Hadoop and do some image processing on them.

These are binary files that contain a large header with metadata, followed by the data of a set of images. The information on how to read the images is also contained in the header (e.g. number_of_images, number_of_pixels_x, number_of_pixels_y, bytes_per_pixel). So after the header bytes, the first [number_of_pixels_x * number_of_pixels_y * bytes_per_pixel] bytes are the first image, then the second, and so on.
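For illustration, here is how the image offsets follow from those header fields. The header layout below is hypothetical (a real MRC header is larger and has a different field order), so it only demonstrates the arithmetic, not the actual format:

```python
import struct

# Hypothetical fixed-size header for illustration only: four little-endian
# int32 fields. A real .mrc header is 1024 bytes with many more fields.
HEADER_FMT = "<iiii"  # number_of_images, pixels_x, pixels_y, bytes_per_pixel
HEADER_SIZE = struct.calcsize(HEADER_FMT)

def image_offsets(header_bytes):
    """Return (image_size, list of byte offsets), one offset per image."""
    n_images, nx, ny, bpp = struct.unpack(HEADER_FMT, header_bytes[:HEADER_SIZE])
    image_size = nx * ny * bpp
    offsets = [HEADER_SIZE + i * image_size for i in range(n_images)]
    return image_size, offsets

# Fake header: 3 images of 4x2 pixels, 2 bytes per pixel.
header = struct.pack(HEADER_FMT, 3, 4, 2, 2)
size, offs = image_offsets(header)
print(size)  # 16
print(offs)  # [16, 32, 48]
```

Each record boundary is therefore computable from the header alone, which is what makes both of the approaches below feasible.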

What is a good input format for these kinds of files? I have thought of two possible solutions:

  1. Convert them to sequence files by placing the metadata in the sequence file header and storing <key, value> pairs for each image. In this case, can I access the metadata from all mappers?
  2. Write a custom InputFormat and RecordReader and create splits for each image, while placing the metadata in the distributed cache.
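Under either option, the core step is the same: slice each image out of the file using the sizes computed from the header. A minimal Python sketch of that record-splitting step, under the assumption that the whole file fits in memory (`split_images` is a hypothetical helper name; the actual SequenceFile writing or RecordReader wiring is omitted):

```python
def split_images(blob, header_size, image_size, n_images):
    """Yield (index, image_bytes) pairs from a raw .mrc blob.

    These are exactly the records you would emit as key/value pairs when
    converting to a SequenceFile, or return one-per-split from a custom
    RecordReader.
    """
    for i in range(n_images):
        start = header_size + i * image_size
        yield i, blob[start:start + image_size]

# Example: a 16-byte fake header followed by two 16-byte "images".
blob = b"H" * 16 + b"A" * 16 + b"B" * 16
for idx, img in split_images(blob, header_size=16, image_size=16, n_images=2):
    print(idx, len(img))
```

In a real conversion job you would stream the file rather than read it whole, but the offset logic is unchanged.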

I am new to Hadoop, so I may be missing something. Which approach do you think is better? Is there any other way that I am missing?


Answer 1:


Without knowing your file format in detail, the first option seems the better one. Using sequence files, you can leverage a lot of SequenceFile-related tools to get better performance. However, there are two things that concern me with this approach:

  1. How will you get your .mrc files into a .seq format?
  2. You mentioned that the header is large; this may reduce the performance of SequenceFiles.

But even with those concerns, I think that representing your data as SequenceFiles is the best option.



Source: https://stackoverflow.com/questions/9465373/custom-binary-input-hadoop
