1. Hadoop memory tuning
Memory settings in mapred-site.xml:
<property>
<name>mapreduce.map.memory.mb</name>
<value>1536</value>
</property>
<property>
<name>mapreduce.map.java.opts</name>
<value>-Xmx1024M</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>3072</value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>-Xmx2560M</value>
</property>
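The convention behind the values above is that the JVM heap (`-Xmx` in `mapreduce.*.java.opts`) must be smaller than the container size (`mapreduce.*.memory.mb`) so the JVM's non-heap overhead still fits. A quick sanity check, with the numbers hard-coded from the snippet above:

```python
# Check that each JVM heap (-Xmx) leaves headroom inside its container
# (values taken from the mapred-site.xml snippet above).
pairs = {
    "map": (1536, 1024),     # mapreduce.map.memory.mb, -Xmx in MB
    "reduce": (3072, 2560),  # mapreduce.reduce.memory.mb, -Xmx in MB
}
for kind, (container_mb, xmx_mb) in pairs.items():
    ratio = xmx_mb / container_mb
    print(f"{kind}: -Xmx is {ratio:.0%} of the container")
    assert xmx_mb < container_mb, "heap must be smaller than the container"
# -> map: -Xmx is 67% of the container
# -> reduce: -Xmx is 83% of the container
```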
Memory settings in yarn-site.xml:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
<description>Memory available to containers on each node, in MB</description>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1024</value>
<description>Minimum memory a single container can request; default 1024 MB</description>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>2048</value>
<description>Maximum memory a single container can request; default 8192 MB</description>
</property>
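Note that the two files must agree: YARN rejects container requests larger than `yarn.scheduler.maximum-allocation-mb`, and a node can never run a container bigger than `yarn.nodemanager.resource.memory-mb`. With the example values above, the reduce request (3072 MB) exceeds both limits (2048 MB), so either raise the YARN limits or lower `mapreduce.reduce.memory.mb`. A sketch of the cross-check, values copied from the snippets in this section:

```python
# Cross-check the MapReduce container requests against the YARN limits
# configured above (all values copied from this section's snippets).
node_mb = 2048   # yarn.nodemanager.resource.memory-mb
min_mb = 1024    # yarn.scheduler.minimum-allocation-mb
max_mb = 2048    # yarn.scheduler.maximum-allocation-mb
requests = {"map": 1536, "reduce": 3072}  # mapreduce.*.memory.mb

for kind, mb in requests.items():
    ok = min_mb <= mb <= max_mb and mb <= node_mb
    print(f"{kind}: {mb} MB ->", "ok" if ok else "exceeds YARN limits")
```

With these numbers it flags the reduce request and passes the map request.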
Hadoop daemon heap settings in hadoop-env.sh:
export HADOOP_HEAPSIZE_MAX=2048
export HADOOP_HEAPSIZE_MIN=2048
2. HBase parameter tuning
HBase heap setting in hbase-env.sh:
export HBASE_HEAPSIZE=8G
3. Data export and import
Export HBase data (for a table inside a namespace, HBase normally uses the ns:table form, e.g. NS1:GROUPCHAT):
hbase org.apache.hadoop.hbase.mapreduce.Export NS1.GROUPCHAT /do1/GROUPCHAT
hdfs dfs -get /do1/GROUPCHAT /opt/GROUPCHAT
Optionally remove the HDFS copy once the local backup exists:
hdfs dfs -rm -r /do1/GROUPCHAT
Import HBase data (Import writes into an existing table, so create the table first if it was dropped):
hdfs dfs -put /opt/GROUPCHAT /do1/GROUPCHAT
hdfs dfs -ls /do1/GROUPCHAT
hbase org.apache.hadoop.hbase.mapreduce.Import NS1.GROUPCHAT /do1/GROUPCHAT
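The export/import round trip above can be wrapped in a small helper that builds the exact command lines, which makes the sequence easy to reuse for other tables. The table and path names below are the example values from this section; to actually execute the commands, pass each list to `subprocess.run`:

```python
# Sketch of the export -> local backup -> import round trip above,
# expressed as command builders (example table/paths from this section).
def export_cmds(table, hdfs_dir, local_dir):
    """Commands to dump an HBase table to HDFS, then copy it locally."""
    return [
        ["hbase", "org.apache.hadoop.hbase.mapreduce.Export", table, hdfs_dir],
        ["hdfs", "dfs", "-get", hdfs_dir, local_dir],
    ]

def import_cmds(table, hdfs_dir, local_dir):
    """Commands to push the local copy back to HDFS and re-import it."""
    return [
        ["hdfs", "dfs", "-put", local_dir, hdfs_dir],
        ["hdfs", "dfs", "-ls", hdfs_dir],
        ["hbase", "org.apache.hadoop.hbase.mapreduce.Import", table, hdfs_dir],
    ]

for cmd in export_cmds("NS1.GROUPCHAT", "/do1/GROUPCHAT", "/opt/GROUPCHAT"):
    print(" ".join(cmd))
for cmd in import_cmds("NS1.GROUPCHAT", "/do1/GROUPCHAT", "/opt/GROUPCHAT"):
    print(" ".join(cmd))
```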