Using S3 as fs.default.name or HDFS?


Question


I'm setting up a Hadoop cluster on EC2 and I'm wondering how to set up the DFS. All my data is currently in S3, and all map/reduce applications use S3 file paths to access it. I've been looking at how Amazon's EMR is set up, and it appears that a namenode and datanodes are set up for each job flow. Now I'm wondering if I really need to do it that way, or if I could just use s3(n) as the DFS? If so, are there any drawbacks?

Thanks!


Answer 1:


In order to use S3 instead of HDFS, fs.default.name in core-site.xml needs to point to your bucket:

<property>
        <name>fs.default.name</name>
        <value>s3n://your-bucket-name</value>
</property>

It's recommended that you use S3N and NOT the plain S3 implementation, because data written through S3N remains readable by any other application, and by yourself :)

Also, in the same core-site.xml file you need to specify the following properties:

  • fs.s3n.awsAccessKeyId
  • fs.s3n.awsSecretAccessKey

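For example, a minimal sketch of how those credential properties might look in core-site.xml (the placeholder values are mine; substitute your own AWS keys):

<!-- Placeholder credentials: replace with your own AWS access key and secret key -->
<property>
        <name>fs.s3n.awsAccessKeyId</name>
        <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
        <name>fs.s3n.awsSecretAccessKey</name>
        <value>YOUR_SECRET_ACCESS_KEY</value>
</property>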




Answer 2:


Any intermediate data of your job goes to HDFS, so yes, you still need a namenode and datanodes.




Answer 3:


https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/core-default.xml

fs.default.name is deprecated; fs.defaultFS is the preferred property.
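For illustration, a minimal sketch of the same bucket configured under the newer property name (assuming the s3n URI from the earlier answers):

<!-- Modern equivalent of fs.default.name -->
<property>
        <name>fs.defaultFS</name>
        <value>s3n://your-bucket-name</value>
</property>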




Answer 4:


I was able to get the s3 integration working using

<property>
        <name>fs.default.name</name>
        <value>s3n://your-bucket-name</value>
</property> 

in core-site.xml and could list the files with the hdfs dfs -ls command. However, you should also still have namenode and separate datanode configurations, because I was still not sure how the data gets partitioned across the datanodes.
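As a rough illustration (assuming the bucket name used in the configuration above), the listing can be done against the configured default filesystem or against the bucket explicitly:

# Lists the bucket root once fs.default.name points at s3n://your-bucket-name
hadoop fs -ls /
hadoop fs -ls s3n://your-bucket-name/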

Should we have local storage for the namenode and datanodes?



Source: https://stackoverflow.com/questions/6271222/using-s3-as-fs-default-name-or-hdfs
