Hadoop: copying data between clusters
When copying data between Hadoop clusters with distcp, the hdfs:// URIs must point to each cluster's active NameNode: you cannot address the copy to a DataNode, and a standby NameNode rejects client requests. So once the data has been staged on an HDFS path, you have to check which NameNode is currently active before running the copy. The steps are roughly:

1. Dump the table to an HDFS directory:

    beeline -u jdbc:hive2://158.222.14.103:10000/ln -e "insert overwrite directory '/tmp/export/loan_table' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' select * from ln.loan_table"

2. Check the NameNode state (if nn1 reports standby, check nn2 and use that host instead):

    hdfs haadmin -getServiceState nn1

3. Copy the directory to the target cluster (-update copies only missing or changed files; -skipcrccheck skips the source/target CRC comparison, which can fail between clusters with different checksum settings):

    hadoop distcp -update -skipcrccheck hdfs://158.222.14.100:8020/tmp/export/loan_table hdfs://158.220.177.106:8020/tmp/export/loan_table

After this the data lands in /tmp/export/loan_table on the target cluster.

4. On the target cluster, create a text-format table over that location. The CREATE TABLE needs a column list matching ln.loan_table's schema and the same '\001' field delimiter (a fuller sketch follows below):

    create table ln.loan_table (...) row format delimited fields terminated by '\001' stored as textfile location '/tmp/export/loan_table'
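Steps 2 and 3 can be folded into a small script. A minimal sketch, assuming the HA service IDs are nn1 and nn2 and that they map to the hosts below (the nn2 address here is a placeholder; take the real mapping from your hdfs-site.xml):

    #!/usr/bin/env bash
    # Find the active NameNode on the source cluster, then run distcp against it.
    declare -A NN_HOSTS=( [nn1]="158.222.14.100" [nn2]="158.222.14.101" )

    ACTIVE=""
    for id in nn1 nn2; do
      # "hdfs haadmin -getServiceState <id>" prints "active" or "standby"
      state=$(hdfs haadmin -getServiceState "$id" 2>/dev/null)
      if [ "$state" = "active" ]; then
        ACTIVE="${NN_HOSTS[$id]}"
        break
      fi
    done

    if [ -z "$ACTIVE" ]; then
      echo "no active NameNode found" >&2
      exit 1
    fi

    # -update copies only missing/changed files; -skipcrccheck skips the
    # source/target CRC comparison, which otherwise fails across clusters
    # with different block sizes or checksum algorithms.
    hadoop distcp -update -skipcrccheck \
      "hdfs://${ACTIVE}:8020/tmp/export/loan_table" \
      hdfs://158.220.177.106:8020/tmp/export/loan_table

The same check applies on the target side if it is also an HA cluster, since the destination URI must likewise point at an active NameNode (or at the target cluster's nameservice ID, which avoids the manual check entirely).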
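For step 4, a sketch of the target-side DDL, run through beeline the same way as the export. The column list (id, amount, dt) is hypothetical and must be replaced with the real schema of ln.loan_table; the target HiveServer2 address is also an assumption:

    # Hypothetical columns -- substitute the actual schema of ln.loan_table.
    # The '\001' delimiter matches the one used in the export step.
    beeline -u jdbc:hive2://158.220.177.106:10000/ln -e "
    CREATE EXTERNAL TABLE ln.loan_table (
      id     BIGINT,
      amount DOUBLE,
      dt     STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001'
    STORED AS TEXTFILE
    LOCATION '/tmp/export/loan_table'"

EXTERNAL is used here so that dropping the table later does not delete the copied files; drop the keyword if you want Hive to own the data.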