Hadoop 2.2 YARN Distributed Cluster Setup Guide

Submitted by 人盡茶涼 on 2020-02-27 01:09:19

Prerequisites: JDK 1.6; passwordless SSH between all nodes

OS: CentOS 6.3

Cluster layout: NameNode and ResourceManager on one server, plus three data nodes

Setup user: yarn

Hadoop 2.2 download: http://www.apache.org/dyn/closer.cgi/hadoop/common/

Step 1: Upload the Hadoop 2.2 tarball and unpack it to /export/yarn/hadoop-2.2.0

  • Outer startup scripts live in the sbin directory
  • Inner scripts that they call live in the bin directory
  • Native .so libraries are under lib/native
  • Configuration helper scripts are placed in libexec
  • Configuration files are in the etc directory, which replaces the conf directory of earlier versions
  • All jars are under the share/hadoop directory

Step 2: Configure environment variables

  Add the following to ~/.bashrc:

export JAVA_HOME=/export/servers/jdk1.6.0_25/
export HADOOP_DEV_HOME=/export/yarn/hadoop-2.2.0
export HADOOP_MAPRED_HOME=${HADOOP_DEV_HOME}
export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
export YARN_HOME=${HADOOP_DEV_HOME}
export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export PATH=$PATH:$HADOOP_DEV_HOME/bin:$JAVA_HOME/bin:$HADOOP_DEV_HOME/sbin

  When done, run source ~/.bashrc to apply the changes.
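The exports above can be applied with a small idempotent helper, so re-running the setup does not duplicate them in the profile. A minimal sketch; the marker comment is an invention of this example:

```shell
#!/usr/bin/env bash
# Append the Hadoop 2.2 environment exports to a profile file exactly once,
# guarded by a marker line. The target file is a parameter so it can be tested.
append_hadoop_env() {
  local bashrc="$1"
  local marker="# hadoop-2.2 environment"
  # Already present? Do nothing.
  grep -qF "$marker" "$bashrc" 2>/dev/null && return 0
  cat >> "$bashrc" <<'EOF'
# hadoop-2.2 environment
export JAVA_HOME=/export/servers/jdk1.6.0_25/
export HADOOP_DEV_HOME=/export/yarn/hadoop-2.2.0
export HADOOP_MAPRED_HOME=${HADOOP_DEV_HOME}
export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
export YARN_HOME=${HADOOP_DEV_HOME}
export HADOOP_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export HDFS_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_DEV_HOME}/etc/hadoop
export PATH=$PATH:$HADOOP_DEV_HOME/bin:$JAVA_HOME/bin:$HADOOP_DEV_HOME/sbin
EOF
}
```

Usage: `append_hadoop_env ~/.bashrc && source ~/.bashrc`. Running it twice leaves only one copy of the block.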

Step 3: Configure core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml

  •   core-site.xml
<configuration>
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master1:9101</value>
  <description>Default filesystem URI</description>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/export/yarn/hadoop-log/</value>
  <description>Base temporary directory</description>
</property>
<property>
  <name>io.compression.codecs</name>
  <value>com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
  <description>Compression codecs; LZO is configured here</description>
</property>
<property>
  <name>io.compression.codec.lzo.class</name>
  <value>com.hadoop.compression.lzo.LzoCodec</value>
  <description>Codec class for LZO</description>
</property>
<property>
  <name>io.native.lib.available</name>
  <value>true</value>
  <description>Whether to use the native Hadoop libraries</description>
</property>
</configuration>
  • hdfs-site.xml
<configuration>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/export/yarn/hadoop-log/nd</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/export/yarn/hadoop-log/dd</value>
</property>
<property>
  <name>dfs.namenode.http-address</name>
  <value>0.0.0.0:60176</value>
  <description>NameNode HTTP address</description>
</property>              
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:60116</value>
</property>
<property>
  <name>dfs.datanode.ipc.address</name>
  <value>0.0.0.0:60126</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:60176</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>0.0.0.0:60196</value>
  <description>SecondaryNameNode HTTP address (dfs.secondary.http.address is the deprecated 1.x name)</description>
</property>
</configuration>
  • mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>        
  • yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>master1:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master1:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>master1:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master1:8031</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
<property>
  <name>yarn.scheduler.fair.allocation.file</name>
  <value>fair-scheduler.xml</value>
</property>
</configuration>

  Note: this selects the Hadoop 2.2 FairScheduler.
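Since yarn.scheduler.fair.allocation.file points at fair-scheduler.xml, that file must exist in $HADOOP_CONF_DIR. A minimal allocation file might look like the sketch below; the queue name and resource figures are illustrative, not from the original setup:

```xml
<?xml version="1.0"?>
<allocations>
  <!-- One queue with a guaranteed minimum share; jobs land here by default -->
  <queue name="default">
    <minResources>1024 mb,1 vcores</minResources>
    <weight>1.0</weight>
  </queue>
</allocations>
```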

Step 4: Configure slaves

       List the three data nodes in etc/hadoop/slaves, one hostname per line.
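Assuming the data nodes are named slave1 through slave3 (hypothetical hostnames, not from the original setup), etc/hadoop/slaves would read:

```text
slave1
slave2
slave3
```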

Step 5: Sync the configured Hadoop 2.2 directory to every data node
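One way to do the sync is an rsync loop over the slave hostnames. A sketch, assuming passwordless SSH is already set up and the same /export/yarn path exists on every node; it only prints the commands so they can be reviewed before running:

```shell
#!/usr/bin/env bash
# Build the rsync command for one host (printed, not executed).
sync_cmd() {
  local host="$1"
  echo "rsync -az --delete /export/yarn/hadoop-2.2.0/ ${host}:/export/yarn/hadoop-2.2.0/"
}

# Print the sync command for each data node; pipe to `bash` to actually run.
distribute() {
  local host
  for host in "$@"; do
    sync_cmd "$host"
  done
}
```

Usage: `distribute slave1 slave2 slave3 | bash` once the printed commands look right.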

Step 6: Format the NameNode

       Run: hdfs namenode -format

              or the older form: hadoop namenode -format

Step 7: Start HDFS and YARN

       Start HDFS: start-dfs.sh

       Start YARN: start-yarn.sh

       Alternatively, start-all.sh starts both (deprecated in 2.x, but still works).
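After starting, `jps` should list the expected daemons on each node. A sketch of a checker; it reads jps output from stdin so it can be tested, and the master/slave daemon lists are assumptions based on the layout above (NameNode, SecondaryNameNode, and ResourceManager colocated on master1):

```shell
#!/usr/bin/env bash
# Verify the expected Hadoop daemons appear in jps output.
# Usage on a live node:  jps | check_daemons master
check_daemons() {
  local role="$1" expected jps_out d
  case "$role" in
    master) expected="NameNode SecondaryNameNode ResourceManager" ;;
    slave)  expected="DataNode NodeManager" ;;
    *) echo "unknown role: $role" >&2; return 2 ;;
  esac
  jps_out="$(cat)"
  for d in $expected; do
    if ! echo "$jps_out" | grep -qw "$d"; then
      echo "missing: $d"
      return 1
    fi
  done
  echo "all $role daemons running"
}
```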

Step 8: Test

       HDFS test:

              Upload a file to HDFS: hdfs dfs -put abc /input

              List the HDFS root: hdfs dfs -ls /

       YARN test:

              Run the WordCount example job:

     hadoop jar /export/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /input /out
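What the job computes can be sanity-checked locally with plain shell: split the input into words, sort, and count, which yields the same word/count pairs WordCount writes to /out. A sketch for small files:

```shell
#!/usr/bin/env bash
# Local equivalent of WordCount's logic: one "word<TAB>count" line per word.
wordcount_local() {
  tr -s '[:space:]' '\n' |   # one word per line
    sort |                   # group identical words
    uniq -c |                # count each group
    awk '{print $2"\t"$1}'   # reorder to word<TAB>count
}
```

Compare `wordcount_local < abc` against `hdfs dfs -cat /out/part-r-00000` for a quick end-to-end check.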

Questions and discussion are welcome in the Hadoop QQ group: 147681830

 
