Hadoop version: 1.0.1

start-all.sh  # master start switch
1. Start DFS daemons
   1.1 Start namenode: calls hadoop-daemon.sh (line 135) --> calls hadoop, starts the service, sets the Java heap size
   1.2 Start datanode: calls hadoop-daemons.sh --> calls slaves.sh --(ssh)--> slave servers, starts the service
   1.3 Start secondarynamenode: calls hadoop-daemons.sh --> calls slaves.sh --(ssh)--> master server, starts the service
2. Start MapReduce daemons
   2.1 Start jobtracker: calls hadoop-daemon.sh --> calls hadoop, starts the service, sets the Java heap size
   2.2 Start tasktracker: calls hadoop-daemons.sh --> calls slaves.sh --(ssh)--> slave servers, starts the service

The services started, and where each runs, are basically the following:
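The dispatch order above can be sketched as a small shell script. This is an illustrative simulation, not the real start-all.sh: the function names are assumptions, and each step only prints what the real scripts would do instead of starting actual daemons.

```shell
#!/bin/sh
# Sketch of the start-all.sh dispatch order (simulation only).
# hadoop_daemon  ~ hadoop-daemon.sh  : start a service on the local machine
# hadoop_daemons ~ hadoop-daemons.sh : start a service on remote machines via slaves.sh + ssh

hadoop_daemon() {
  echo "local: start $1"
}

hadoop_daemons() {
  echo "remote(ssh): start $1 on $2"
}

# 1. DFS daemons
hadoop_daemon  namenode                    # on the master itself
hadoop_daemons datanode          slaves    # on every slave
hadoop_daemons secondarynamenode master    # back on the master, still via ssh

# 2. MapReduce daemons
hadoop_daemon  jobtracker
hadoop_daemons tasktracker       slaves
```

Note that the secondarynamenode goes through the remote (ssh) path even though it runs on the master, which is why the master needs ssh access to itself.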
1. namenode          / master server
2. datanode          / slave servers
3. secondarynamenode / master server
4. jobtracker        / master server
5. tasktracker       / slave servers

The scripts involved:

hadoop-config.sh   # common variable-definition script
  1. Defines HADOOP_CONF_DIR, HADOOP_PREFIX, etc.
  2. Determines which servers hadoop-daemons.sh calls over ssh; this matters mainly for starting the secondarynamenode, so when deploying, the master must be able to reach not only the slaves over ssh but also itself
  3. Defines per-service parameters such as namenode, datanode, jobtracker, etc.
hadoop-daemon.sh   # starts a Hadoop service on the local machine
hadoop-daemons.sh  # starts Hadoop services on remote machines
hadoop             # common launcher that actually starts a service; sets the JVM heap for the service, defaulting to 1000 MB: JAVA_HEAP_MAX=-Xmx1000m

The above is the basic flow and rationale of Hadoop's startup scripts.
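The heap-size default mentioned above can be sketched as follows. This is a simplified paraphrase of the logic in the Hadoop 1.x `hadoop` launcher, not a verbatim excerpt; the HADOOP_HEAPSIZE override (set in hadoop-env.sh, in MB) is the usual way to change it.

```shell
#!/bin/sh
# Simplified sketch of heap selection in the `hadoop` launcher.
JAVA_HEAP_MAX=-Xmx1000m                      # default: 1000 MB

# hadoop-env.sh may export HADOOP_HEAPSIZE (a number of MB) to override it
if [ -n "$HADOOP_HEAPSIZE" ]; then
  JAVA_HEAP_MAX="-Xmx${HADOOP_HEAPSIZE}m"
fi

echo "$JAVA_HEAP_MAX"                        # passed to the JVM when the daemon starts
```

So with HADOOP_HEAPSIZE unset the daemon gets -Xmx1000m, and with HADOOP_HEAPSIZE=2048 it gets -Xmx2048m.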
Source: oschina
Link: https://my.oschina.net/u/96940/blog/410087