CDH 5.12.1 Cluster Installation and Configuration

Submitted by ぐ巨炮叔叔 on 2020-01-16 08:46:49

CDH 5.12.1 & Kerberos Installation and Configuration

Environment:

OS: CentOS 7
JDK version: 1.8.0_144
Required packages and versions: since the operating system is CentOS 7, download the following files:
Download URL: http://archive.cloudera.com/cm5/cm/5/
cloudera-manager-centos7-cm5.12.1_x86_64.tar.gz
Download URL: http://archive.cloudera.com/cdh5/parcels/5.12.1/
CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel
CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel.sha1
manifest.json
IP address Hostname Role Deployed software
192.168.1.25 node5 Master jdk, cloudera-manager, MySQL, krb5kdc, kadmin
192.168.1.21 node1 node jdk, cloudera-manager
192.168.1.22 node2 node jdk, cloudera-manager
192.168.1.23 node3 node jdk, cloudera-manager
192.168.1.24 node4 node jdk, cloudera-manager
192.168.1.26 node6 node jdk, cloudera-manager
192.168.1.27 node7 node jdk, cloudera-manager
192.168.1.28 node8 node jdk, cloudera-manager
192.168.1.29 node9 node jdk, cloudera-manager

Preparation

1. Configure the hosts file (run on Master)

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.25 master
192.168.1.21 node1
192.168.1.22 node2
192.168.1.23 node3
192.168.1.24 node4
192.168.1.26 node6
192.168.1.27 node7
192.168.1.28 node8
192.168.1.29 node9

2. Copy the hosts file to all servers (run on Master)

for a in {1..4} ; do scp /etc/hosts node$a:/etc/hosts ; done
for a in {6..9} ; do scp /etc/hosts node$a:/etc/hosts ; done
scp /etc/hosts master:/etc/hosts
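The loops are split into {1..4} and {6..9} because 192.168.1.25 (node5) is the master. A single hostname list makes that explicit and avoids repeating the split loops; this is an illustrative helper, not part of the original procedure (echo is used instead of scp so it can be dry-run safely):

```shell
# Illustrative: one list of the eight agent hostnames.
# node5 is the master, so it is deliberately absent from the list.
NODES="node1 node2 node3 node4 node6 node7 node8 node9"
for n in $NODES; do
  echo "scp /etc/hosts $n:/etc/hosts"   # replace echo with the real command
done
```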

3. Set up passwordless SSH from the master to all nodes (run on Master)

# First run ssh-keygen -t rsa -P '' on every node
## Append each node's public key (id_rsa.pub) to the master's own authorized_keys file
for a in {1..4}; do ssh root@node$a cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys; done
for a in {6..9}; do ssh root@node$a cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys; done
ssh root@master cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
## Copy the combined authorized_keys file to every node via scp: /root/.ssh/authorized_keys
for a in {1..4}; do scp /root/.ssh/authorized_keys root@node$a:/root/.ssh/authorized_keys ; done
for a in {6..9}; do scp /root/.ssh/authorized_keys root@node$a:/root/.ssh/authorized_keys ; done

4. Install and configure the Java environment

# Copy the Java installer to every node (run on Master)
for a in {1..4}; do scp /root/jdk-8u144-linux-x64.rpm root@node$a:/root/jdk-8u144-linux-x64.rpm ; done
for a in {6..9}; do scp /root/jdk-8u144-linux-x64.rpm root@node$a:/root/jdk-8u144-linux-x64.rpm ; done
# Install Java (run on Master)
for a in {1..4}; do ssh root@node$a rpm -ivh /root/jdk-8u144-linux-x64.rpm; done
for a in {6..9}; do ssh root@node$a rpm -ivh /root/jdk-8u144-linux-x64.rpm; done
rpm -ivh /root/jdk-8u144-linux-x64.rpm
# Configure the environment variable (all nodes)
echo "JAVA_HOME=/usr/java/latest/" >> /etc/environment

5. Set limits on max open files, processes, and locked memory

cat << EOF >> /etc/security/limits.conf
*    soft    nofile    32728
*    hard    nofile    1029345
*    soft    nproc    65535
*    hard    nproc    unlimited
*    soft    memlock    unlimited
*    hard    memlock    unlimited
EOF
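Appending these limits is done with a shell heredoc (`cat << EOF >> file`); the pattern can be sanity-checked against a scratch file before touching the real limits.conf (the path below is illustrative):

```shell
# Demonstrate the `cat << EOF >> file` append pattern on a scratch file.
TARGET=/tmp/limits.conf.demo    # illustrative; the real target is /etc/security/limits.conf
: > "$TARGET"                   # start from an empty file so the demo is repeatable
cat << 'EOF' >> "$TARGET"
*    soft    nofile    32728
*    hard    nofile    1029345
EOF
wc -l < "$TARGET"               # both limit lines were appended
```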

6. Disable the firewall and SELinux
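The original gives no commands for this step. A typical CentOS 7 sequence is sketched below (assuming firewalld and SELinux in their default state; run on every node as root; `|| true` keeps the step non-fatal on hosts where a component is absent):

```shell
# Stop the firewall now and keep it off across reboots.
systemctl stop firewalld 2>/dev/null || true
systemctl disable firewalld 2>/dev/null || true

# Put SELinux in permissive mode for the current boot...
setenforce 0 2>/dev/null || true
# ...and disable it permanently (takes effect after the reboot in step 10).
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config 2>/dev/null || true
```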

7. Configure time synchronization

# On master and node[1-9]
yum -y install ntp
# Back up the config file on all nodes (run on master)
for a in {1..4}; do ssh root@node$a cp /etc/ntp.conf /etc/ntp.conf.backup; done
for a in {6..9}; do ssh root@node$a cp /etc/ntp.conf /etc/ntp.conf.backup; done
cp /etc/ntp.conf /etc/ntp.conf.backup
# Add the following to /etc/ntp.conf
# The most active pool servers for China: http://www.pool.ntp.org/zone/cn
server 0.cn.pool.ntp.org
server 0.asia.pool.ntp.org
server 3.asia.pool.ntp.org

# Allow the upstream time servers to adjust the local clock
restrict 0.cn.pool.ntp.org nomodify notrap noquery
restrict 0.asia.pool.ntp.org nomodify notrap noquery
restrict 3.asia.pool.ntp.org nomodify notrap noquery

# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available.
server  127.127.1.0     # local clock
fudge   127.127.1.0 stratum 10

# Start the service
systemctl restart ntpd && systemctl enable ntpd
# Check which ports ntpd is listening on
netstat -tlunp | grep ntp
# Show the NTP peers and each server's sync status
ntpq -p
# On the client nodes node[1-9], append the master as the preferred NTP server to /etc/ntp.conf:
echo "server 192.168.1.25 prefer" >> /etc/ntp.conf && systemctl restart ntpd && systemctl enable ntpd && ntpdate -u 192.168.1.25

8. Set swappiness and disable transparent hugepages

# Set swappiness (run on all nodes)
echo "vm.swappiness = 0" >> /etc/sysctl.conf
# Disable transparent hugepage compaction
# Run the two echo commands below now to disable it, and add them to an init script such as /etc/rc.local so they are reapplied on reboot. Affected hosts: node[1-9]
cat << EOF >> /etc/rc.local
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
EOF
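The sysctl.conf entry above only takes effect at boot (or via `sysctl -p`); to apply it to the running kernel and verify, this sketch can be used (the write needs root; the read-back via procfs works unprivileged):

```shell
# Apply the persisted swappiness setting to the running kernel (needs root).
sysctl -w vm.swappiness=0 2>/dev/null || true
# Read back the live value; works unprivileged via procfs.
cat /proc/sys/vm/swappiness
```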

9. Install and configure the database (Master node)

# Download the MySQL repo package
wget http://repo.mysql.com/mysql-community-release-el7-5.noarch.rpm
# Install the repo
rpm -ivh mysql-community-release-el7-5.noarch.rpm
# Verify the repos are enabled
yum repolist enabled | grep "mysql.*-community.*"
mysql-connectors-community/x86_64       MySQL Connectors Community           141
mysql-tools-community/x86_64            MySQL Tools Community                105
mysql56-community/x86_64                MySQL 5.6 Community Server           513
# Install the database server
yum -y install mysql-community-server
# Enable and start the service
systemctl enable mysqld && systemctl start mysqld
# Initialize the database
mysql_secure_installation # root password set here: 123456
# Grant remote access to root
grant all privileges on *.* to root@'%' identified by '123456';
flush privileges;
# Create the databases the CM services need:
-- hive metastore database

create database hive DEFAULT CHARSET utf8 COLLATE utf8_general_ci ;

-- cluster monitoring (amon) database

create database amon DEFAULT CHARSET utf8 COLLATE utf8_general_ci ;

-- hue database

create database hue DEFAULT CHARSET utf8 COLLATE utf8_general_ci;

-- oozie database

create database oozie DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
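The four statements differ only in the database name. A small loop emits them all; pipe its output into `mysql -uroot -p` to execute (illustrative helper, not part of the original procedure):

```shell
# Emit one CREATE DATABASE statement per CM service database.
for db in hive amon hue oozie; do
  echo "create database $db DEFAULT CHARSET utf8 COLLATE utf8_general_ci;"
done
```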

10. Reboot all nodes; base configuration is complete (run on master)

for a in {1..4}; do ssh root@node$a reboot; done
for a in {6..9}; do ssh root@node$a reboot; done
reboot

Deploying CDH 5.12.1

1. Install dependencies

# All nodes
yum -y install chkconfig bind-utils psmisc libxslt zlib sqlite cyrus-sasl-plain cyrus-sasl-gssapi fuse portmap fuse-libs redhat-lsb

2. Configure CDH

# Create the directory on all nodes (run on master)

for a in {1..4}; do ssh root@node$a mkdir /opt/cloudera-manager; done
for a in {6..9}; do ssh root@node$a mkdir /opt/cloudera-manager; done
mkdir /opt/cloudera-manager
# Upload cloudera-manager-centos7-cm5.12.1_x86_64.tar.gz to the master node, then copy it to every agent node (run on master)
for a in {1..4}; do scp /opt/cloudera-manager-centos7-cm5.12.1_x86_64.tar.gz root@node$a:/opt/ ; done
for a in {6..9}; do scp /opt/cloudera-manager-centos7-cm5.12.1_x86_64.tar.gz root@node$a:/opt/ ; done
# Extract the archive on all nodes (run on master)
for a in {1..4}; do ssh root@node$a tar xvzf /opt/cloudera-manager*.tar.gz -C /opt/cloudera-manager; done
for a in {6..9}; do ssh root@node$a tar xvzf /opt/cloudera-manager*.tar.gz -C /opt/cloudera-manager; done
tar xvzf /opt/cloudera-manager*.tar.gz -C /opt/cloudera-manager
# Create the cloudera-scm user (run on all nodes)
useradd --system --home=/opt/cloudera-manager/cm-5.12.1/run/cloudera-scm-server/ --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm
# Edit the agent config file
vi /opt/cloudera-manager/cm-5.12.1/etc/cloudera-scm-agent/config.ini
server_host=master
# Copy it to all nodes
for a in {1..4}; do scp /opt/cloudera-manager/cm-5.12.1/etc/cloudera-scm-agent/config.ini root@node$a:/opt/cloudera-manager/cm-5.12.1/etc/cloudera-scm-agent/config.ini; done
for a in {6..9}; do scp /opt/cloudera-manager/cm-5.12.1/etc/cloudera-scm-agent/config.ini root@node$a:/opt/cloudera-manager/cm-5.12.1/etc/cloudera-scm-agent/config.ini; done
# Copy the MySQL JDBC driver jar to all nodes
for a in {1..4}; do scp /opt/cloudera-manager/cm-5.12.1/share/cmf/lib/mysql-connector-java-5.1.38.jar root@node$a:/opt/cloudera-manager/cm-5.12.1/share/cmf/lib/mysql-connector-java-5.1.38.jar; done
for a in {6..9}; do scp /opt/cloudera-manager/cm-5.12.1/share/cmf/lib/mysql-connector-java-5.1.38.jar root@node$a:/opt/cloudera-manager/cm-5.12.1/share/cmf/lib/mysql-connector-java-5.1.38.jar; done
# Initialize the CM database (run on master)
cd /opt/cloudera-manager/cm-5.12.1/share/cmf/schema/
./scm_prepare_database.sh mysql cm -h master -uroot -p123456 --scm-host master scm scm scm
# Generic usage:
# scm_prepare_database.sh <dbType> <dbName> -h <hostName> -u<username> -p<password> --scm-host <hostName> scm scm scm
# i.e.: database type, database name, server, username, password; --scm-host names the node running Cloudera Manager Server
# Create the parcel directory (master node)
# Upload the following 3 files to /opt/cloudera/parcel-repo:
CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel
CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel.sha1 (rename it to CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel.sha)
manifest.json
mkdir -p /opt/cloudera/parcel-repo # 上传安装文件到此处
chown cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo && cd /opt/cloudera/parcel-repo
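Cloudera Manager validates the parcel against the hash in the .sha file, which is why the .sha1 file is renamed rather than deleted. The expected format is a single bare SHA-1 hash; this is demonstrated on a stand-in file (for the real file, substitute CDH-5.12.1-1.cdh5.12.1.p0.3-el7.parcel):

```shell
# Stand-in demo of the .sha file format CM expects: the file contains only
# the parcel's SHA-1 hash. Replace /tmp/demo.parcel with the real parcel.
PARCEL=/tmp/demo.parcel
echo "parcel payload" > "$PARCEL"
sha1sum "$PARCEL" | awk '{print $1}' > "${PARCEL}.sha"   # hash only, no filename
cat "${PARCEL}.sha"
```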
# Create /opt/cloudera/parcels on the agent nodes (run on master):
for a in {1..4}; do ssh root@node$a mkdir -p /opt/cloudera/parcels; done
for a in {1..4}; do ssh root@node$a chown cloudera-scm:cloudera-scm /opt/cloudera/parcels; done
for a in {6..9}; do ssh root@node$a mkdir -p /opt/cloudera/parcels; done
for a in {6..9}; do ssh root@node$a chown cloudera-scm:cloudera-scm /opt/cloudera/parcels; done
# Start the CM Server & Agent services
# On the master node
/opt/cloudera-manager/cm-5.12.1/etc/init.d/cloudera-scm-server start
# On the agent nodes
for a in {1..4}; do ssh root@node$a /opt/cloudera-manager/cm-5.12.1/etc/init.d/cloudera-scm-agent start; done
for a in {6..9}; do ssh root@node$a /opt/cloudera-manager/cm-5.12.1/etc/init.d/cloudera-scm-agent start; done

3. Install CDH through the web UI, selecting services as needed

Access URL: http://[ip]:7180; default credentials: admin/admin

Troubleshooting

1. Hive error

# If creating the Hive Metastore database tables from the CDH wizard fails, copy the MySQL JDBC jar into the Hive lib directory on the server the cluster assigned (or you chose manually), then retry:
scp /opt/cloudera-manager/cm-5.12.1/share/cmf/lib/mysql-connector-java-5.1.38.jar root@node6:/opt/cloudera/parcels/CDH-5.12.1-1.cdh5.12.1.p0.3/lib/hive/lib/

2. During the web install, assign host roles manually to avoid piling too many services onto one server

Enabling Kerberos Authentication on CDH

Installing and configuring the KDC service

  • Install the KDC packages
 # yum install krb5-server krb5-libs krb5-auth-dialog krb5-workstation -y
  • Configure the KDC service
vim /etc/krb5.conf

---
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_kdc = false
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 default_realm = GEMS.COM
 default_tgs_enctypes = rc4-hmac
 default_tkt_enctypes = rc4-hmac
 permitted_enctypes = rc4-hmac
 udp_preference_limit = 1
 kdc_timeout = 3000

# default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 GEMS.COM = {
  kdc = master
  admin_server = master
 }

[domain_realm]
 .master = GEMS.COM
 master = GEMS.COM
  • Edit /var/kerberos/krb5kdc/kadm5.acl
vi /var/kerberos/krb5kdc/kadm5.acl

*/admin@GEMS.COM        *
  • Edit /var/kerberos/krb5kdc/kdc.conf
vim /var/kerberos/krb5kdc/kdc.conf
----
[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 GEMS.COM = {
  #master_key_type = aes256-cts
  max_renewable_life = 7d
  max_life = 1d
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
  default_principal_flags = +renewable, +forwardable
 }
  • Create the Kerberos database
# The password is admin (choose your own)
# kdb5_util create -r GEMS.COM -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'GEMS.COM',
master key name 'K/M@GEMS.COM'
You will be prompted for the database Master Password. 
It is important that you NOT FORGET this password.
Enter KDC database master key: 
Re-enter KDC database master key to verify:

The master key password entered here: GEMS.COM
  • Create the Kerberos admin principal
# The password is admin (choose your own)
# kadmin.local
Authenticating as principal root/admin@GEMS.COM with password.
kadmin.local:  addprinc admin/admin@GEMS.COM 
WARNING: no policy specified for admin/admin@GEMS.COM; defaulting to no policy
Enter password for principal "admin/admin@GEMS.COM":  [enter password]
Re-enter password for principal "admin/admin@GEMS.COM":  [enter password]
Principal "admin/admin@GEMS.COM" created.

kadmin.local: exit 
  • Start the krb5 services
systemctl start krb5kdc 
systemctl start kadmin 

systemctl enable krb5kdc 
systemctl enable kadmin
  • Test the Kerberos admin principal
kinit admin/admin@GEMS.COM
---> enter the password: admin

# klist 

Install the Kerberos client on every cluster node (including CM)

  • Install dependency packages
# On all nodes:
yum -y install krb5-libs krb5-workstation
for a in {1..4}; do ssh root@node$a yum -y install krb5-libs krb5-workstation; done
for a in {6..9}; do ssh root@node$a yum -y install krb5-libs krb5-workstation; done
# Extra component on the CM (kdc-server) node:
yum -y install openldap-clients
  • Sync krb5.conf to all nodes
for a in {1..4}; do scp /etc/krb5.conf root@node$a:/etc/krb5.conf; done
for a in {6..9}; do scp /etc/krb5.conf root@node$a:/etc/krb5.conf; done

Enabling Kerberos for the CDH cluster

  • Install the JDK unlimited-strength JCE policy files (jce_policy-8.zip)
# Run on the master node (the author was unsure whether this step is required)
# unzip jce_policy-8.zip

# cd UnlimitedJCEPolicyJDK8/
# cp -p *.jar /usr/java/jdk1.8.0_144/jre/lib/security/
for a in {1..4}; do scp /root/UnlimitedJCEPolicyJDK8/*.jar root@node$a:/usr/java/jdk1.8.0_144/jre/lib/security/; done
for a in {6..9}; do scp /root/UnlimitedJCEPolicyJDK8/*.jar root@node$a:/usr/java/jdk1.8.0_144/jre/lib/security/; done
  • Add a Cloudera Manager admin principal to the KDC
# The password is admin (choose your own)
kadmin.local
Authenticating as principal admin/admin@GEMS.COM with password.
kadmin.local:  addprinc cloudera-scm/admin@GEMS.COM
WARNING: no policy specified for cloudera-scm/admin@GEMS.COM; defaulting to no policy
Enter password for principal "cloudera-scm/admin@GEMS.COM": [enter password]
Re-enter password for principal "cloudera-scm/admin@GEMS.COM": [enter password]
Principal "cloudera-scm/admin@GEMS.COM" created.

The password used: Cloudera-scm
  • Enable Kerberos for Impala
# Enabling Kerberos from the web UI generated an impala.keytab entry for each node; the per-host keytabs need to be merged into one
# Merge them on the master node
kadmin.local -q "xst  -k impala-unmerge.keytab  impala/node1@GEMS.COM"
kadmin.local -q "xst  -k impala-unmerge.keytab  impala/node2@GEMS.COM"
kadmin.local -q "xst  -k impala-unmerge.keytab  impala/node3@GEMS.COM"
kadmin.local -q "xst  -k impala-unmerge.keytab  impala/node4@GEMS.COM"
kadmin.local -q "xst  -k impala-unmerge.keytab  impala/node6@GEMS.COM"
kadmin.local -q "xst  -k impala-unmerge.keytab  impala/node7@GEMS.COM"
kadmin.local -q "xst  -k impala-unmerge.keytab  impala/node8@GEMS.COM"
kadmin.local -q "xst  -k impala-unmerge.keytab  impala/node9@GEMS.COM"
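The eight xst calls all append to the same impala-unmerge.keytab, so its entries already sit in one file, yet the distribution step below ships a file named impala.keytab without showing how it is produced. A sketch of the usual merge step (ktutil ships with krb5-workstation and reads subcommands from stdin; guarded so it is a no-op where the krb5 tools are absent):

```shell
# Read the extracted entries and write them back out as impala.keytab.
if command -v ktutil >/dev/null 2>&1; then
  ktutil << 'EOF' || true
rkt impala-unmerge.keytab
wkt impala.keytab
quit
EOF
fi
```

Since repeated `xst -k` calls append to an existing keytab, `cp impala-unmerge.keytab impala.keytab` should give the same result here; either way, verify the contents with `klist -kt impala.keytab`.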

# Distribute the merged keytab from the master node
for a in {1..4}; do scp impala.keytab root@node$a:/etc/impala/conf/; done
for a in {6..9}; do scp impala.keytab root@node$a:/etc/impala/conf/; done
for a in {1..4}; do ssh root@node$a chmod 400 /etc/impala/conf/impala.keytab; done
for a in {6..9}; do ssh root@node$a chmod 400 /etc/impala/conf/impala.keytab; done
for a in {1..4}; do ssh root@node$a chown impala:impala /etc/impala/conf/impala.keytab; done
for a in {6..9}; do ssh root@node$a chown impala:impala /etc/impala/conf/impala.keytab; done
# Run on whichever host needs to connect (change impala/node1@GEMS.COM to match that host)
kinit -k -t /etc/impala/conf/impala.keytab impala/node1@GEMS.COM