ambari

Ambari service installation problems

筅森魡賤 submitted on 2020-02-12 11:45:47
If running the ambari-server setup command never prompts for the database configuration, you will hit errors later when you start the service. In that case, simply run setup again and the database prompts will appear. A normal setup should look like this:
[root@hadoop1 hadoop]# ambari-server setup
Using python /usr/bin/python
Setup ambari-server
Checking SELinux...
SELinux status is 'enabled'
SELinux mode is 'permissive'
WARNING: SELinux is set to 'permissive' mode and temporarily disabled.
OK to continue [y/n] (y)? y
Customize user account for ambari-server daemon [y/n] (n)? n
Adjusting ambari-server permissions and ownership...
Checking firewall status...
Checking JDK...
[1] Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
[2
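If you prefer not to rely on the interactive prompts, the database settings can also be supplied on the command line. A minimal sketch, assuming a MySQL instance on hadoop1 with an ambari database and user already created (all names here are illustrative; check ambari-server setup --help on your version for the exact flag names):
yum install -y mysql-connector-java
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
ambari-server setup --database=mysql --databasehost=hadoop1 --databaseport=3306 \
    --databasename=ambari --databaseusername=ambari --databasepassword=ambari --silent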

Preparing a CentOS 7 host for installing Ambari

冷暖自知 submitted on 2020-01-25 10:48:58
Install Java and Maven (installation steps omitted). Environment variable configuration:
export JAVA_HOME=/home/ambari/tools/jdk1.8.0_221
export CLASSPATH=.:$JAVA_HOME/jre/lib:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
export MAVEN_HOME=/home/ambari/tools/apache-maven-3.6.3
export PATH=$PATH:$HOME/.local/bin:$HOME/bin:$JAVA_HOME/bin:$MAVEN_HOME/bin
Change the hostname and set up the host mappings. Change the hostname:
vi /etc/hostname
kylin60
Configure the cluster host mappings:
vi /etc/hosts
192.168.4.60 kylin60
192.168.4.61 kylin61
192.168.4.62 kylin62
Disable the firewall:
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo systemctl status firewalld
Disable SELinux:
vi /etc/sysconfig/selinux
SELINUX=disabled
Then reboot the machine.
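Once those changes are in place, a quick sanity check (a minimal sketch; the hostnames and paths are the ones used in this excerpt) confirms everything took effect before installing Ambari:
source /etc/profile                       # or log in again, depending on where the exports were added
java -version && mvn -version             # both should resolve via the new JAVA_HOME / MAVEN_HOME
hostnamectl                               # hostname should now read kylin60
ping -c 1 kylin61 && ping -c 1 kylin62    # /etc/hosts mappings resolve
systemctl is-active firewalld             # expect "inactive"
getenforce                                # expect "Disabled" after the reboot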

URI to access a file in HDFS

佐手、 submitted on 2020-01-23 08:24:45
Question: I have set up a cluster using Ambari that includes 3 nodes. Now I want to access a file in HDFS from my client application. I can find all the node URIs under Data Nodes in Ambari. What is the URI + port I need to use to access a file? I used the default installation process.
Answer 1: The default port is 8020. You can access "hdfs" paths in 3 different ways. Simply use "/" as the root path, for example:
E:\HadoopTests\target>hadoop fs -ls /
Found 6 items
drwxrwxrwt - hadoop hdfs 0 2015-08-17
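Putting that answer together: the URI points at the NameNode (not the DataNodes listed in Ambari), and the default NameNode RPC port on an Ambari/HDP install is 8020. A minimal sketch, assuming a NameNode host named namenode.example.com and an illustrative file path:
# bare path: resolved against fs.defaultFS from core-site.xml
hadoop fs -ls /user/myuser/data.csv
# fully qualified URI: what a client application outside the cluster would use
hadoop fs -ls hdfs://namenode.example.com:8020/user/myuser/data.csv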

Setting up an Ambari cluster

大城市里の小女人 submitted on 2020-01-21 23:57:02
[root@hadoop001 ~]# visudo
[root@hadoop001 ssh]# useradd hadoop
Passwordless SSH:
[hadoop@hadoop001 ~]$ ssh-keygen
[hadoop@hadoop001 ~]$ cd .ssh
[hadoop@hadoop001 .ssh]$ pwd
/home/hadoop/.ssh
[hadoop@hadoop001 .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@hadoop001 .ssh]$ chmod 700 ~/.ssh
[hadoop@hadoop001 .ssh]$ chmod 600 ~/.ssh/authorized_keys
[hadoop@hadoop001 .ssh]$ ssh hadoop001
The authenticity of host 'hadoop001 (172.31.36.137)' can't be established.
ECDSA key fingerprint is SHA256:AAM1VixV4qWn6aVj1liWEOFzmsYKTYxqOFKokwPIPwI.
ECDSA key fingerprint is MD5:2d:1b:1d:d2:c2:32:34:ea:fe:ba:52:37:c4:a3
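This excerpt only sets up passwordless login from hadoop001 back to itself; for an Ambari install the server host also needs passwordless SSH to every agent host. A minimal sketch, assuming two additional (hypothetical) nodes hadoop002 and hadoop003 that already have the hadoop user:
# run as the hadoop user on hadoop001
for host in hadoop002 hadoop003; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@$host   # appends the public key to the remote authorized_keys
done
# verify: each login should now succeed without a password prompt
ssh hadoop@hadoop002 hostname
ssh hadoop@hadoop003 hostname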

Enabling HTTPS (SSL) access for Ambari and importing a truststore

99封情书 submitted on 2020-01-20 04:15:31
1. Create the certificate directory
root@hadoop01[/etc/ambari-server]#mkdir /etc/ambari-server/certs
root@hadoop01[/etc/ambari-server]#cd /etc/ambari-server/certs/
root@hadoop01[/etc/ambari-server/certs]#export AMBARI_SERVER_HOSTNAME=hadoop01
2. Generate the certificate
root@hadoop01[/etc/ambari-server/certs]#openssl genrsa -passout pass:hadoop -out $AMBARI_SERVER_HOSTNAME.key 2048
Generating RSA private key, 2048 bit long modulus
......................................+++
........................+++
e is 65537 (0x10001)
root@hadoop01[/etc/ambari-server/certs]# openssl req -new -key $AMBARI_SERVER_HOSTNAME.key -out $AMBARI_SERVER_HOSTNAME
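The excerpt cuts off mid-command; the usual continuation of this kind of setup (a sketch of the general pattern, not the original post's exact steps; file names and validity period are illustrative) is to finish the certificate signing request, self-sign it, and then point Ambari at the resulting key and certificate:
openssl req -new -key $AMBARI_SERVER_HOSTNAME.key -out $AMBARI_SERVER_HOSTNAME.csr -subj "/CN=$AMBARI_SERVER_HOSTNAME"
openssl x509 -req -days 365 -in $AMBARI_SERVER_HOSTNAME.csr -signkey $AMBARI_SERVER_HOSTNAME.key -out $AMBARI_SERVER_HOSTNAME.crt
ambari-server setup-security    # choose the option that enables HTTPS and supply the .crt and .key paths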

A powerful tool for big data platforms: Ambari + HDP

北慕城南 submitted on 2020-01-20 03:25:22
What is Ambari
Ambari is a top-level project of the Apache Software Foundation. The Apache Ambari project provides software for provisioning, managing, and monitoring Apache Hadoop clusters, simplifying Hadoop administration. Ambari offers an intuitive, easy-to-use web UI for Hadoop management. "Hadoop" here is meant in the broad sense, referring to the whole Hadoop ecosystem (for example Hive, HBase, Sqoop, ZooKeeper, and so on), not just Hadoop itself. In one sentence, Ambari is a tool that makes Hadoop and the related big data software easier to use.
Ambari components
Ambari itself is a piece of software with a distributed architecture, consisting mainly of two parts: the Ambari Server and the Ambari Agents. In short, the user tells the Ambari Agents, via the Ambari Server, which software to install; each Agent periodically reports the status of every software module on its machine back to the Ambari Server, and these statuses are ultimately shown in the Ambari GUI (graphical user interface), making it easy for the user to see the overall state of the cluster and carry out maintenance.
What is HDP
The Hortonworks Data Platform (HDP) is an open-source framework for distributed storage and processing of large, multi-source data sets.
Installation steps
1. Cluster plan
Hostname    IP address        Role
hadoop101   192.168.10.101    Yum repository
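The Server/Agent split described above maps directly onto the agent configuration: each agent only needs to know which host the Ambari Server is on, and it heartbeats its status back there. A minimal sketch, assuming the Ambari Server runs on hadoop101 as in the plan above (the port values are the usual defaults; verify them for your Ambari version):
# on every agent host: /etc/ambari-agent/conf/ambari-agent.ini
# [server]
# hostname=hadoop101        <- the Ambari Server host the agent registers with
# url_port=8440
# secured_url_port=8441
ambari-agent restart        # restart so the agent re-registers with the server
ambari-agent status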

Installing Flink 1.9 on Ambari HDP 2.6.5

若如初见. submitted on 2020-01-20 01:28:56
Installing Flink 1.9 on Ambari HDP 2.6.5
To download the Flink service folder, run the following commands:
VERSION=`hdp-select status hadoop-client | sed 's/hadoop-client - \([0-9]\.[0-9]\).*/\1/'`
sudo git clone https://github.com/abajwa-hw/ambari-flink-service.git /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/FLINK
Restart Ambari:
ambari-server restart
The service defaults to Flink 1.8; to switch to 1.9, edit configuration/flink-ambari-config.xml:
<property>
  <name>flink_download_url</name>
  <value>http://X.X.151.15/Package/flink-1.9.0-bin-scala_2.11.tgz</value>
  <description>Snapshot download location.
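A small check worth doing before the restart (not from the original post; the URL below is simply the one configured above): make sure the service folder landed under the right stack version and that the Flink tarball URL is reachable from the cluster hosts, since the agents download it during the install:
ls /var/lib/ambari-server/resources/stacks/HDP/$VERSION/services/FLINK
curl -I http://X.X.151.15/Package/flink-1.9.0-bin-scala_2.11.tgz
ambari-server restart    # then add the FLINK service through the Ambari "Add Service" wizard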

Kafka, new storage

断了今生、忘了曾经 submitted on 2020-01-16 06:46:10
Question: I'm trying to add new storage for Kafka; here is what I have already done:
Add, prepare, and mount the storage under the Linux OS.
Add the new storage in the Kafka broker config: log.dirs: /data0/kafka-logs,/data1/kafka-logs
Restart the Kafka brokers.
New directories under /data1/kafka-logs have been created, but the size is:
du -csh /data1/kafka-logs/
156K /data1/kafka-logs/
And the size isn't growing; only the old /data0 is used. What am I missing? What should I do to solve this problem? The storage is almost full, and
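Some context the truncated question leads into: Kafka only places newly created partitions on a freshly added log.dirs entry, so existing partitions (and their growth) stay on /data0 until they are moved. Since Kafka 1.1 (KIP-113), kafka-reassign-partitions.sh can move replicas between log directories on the same broker. A hedged sketch (topic name, broker id, and flags are illustrative; older brokers use --zookeeper instead of --bootstrap-server):
# move.json
{ "version": 1,
  "partitions": [
    { "topic": "my-topic", "partition": 0, "replicas": [1],
      "log_dirs": ["/data1/kafka-logs"] } ] }
# kafka-reassign-partitions.sh --bootstrap-server broker1:9092 \
#     --reassignment-json-file move.json --execute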

[Original] Installing the Hadoop cluster monitoring tool Ambari

耗尽温柔 submitted on 2020-01-16 05:41:27
Apache Ambari is an open-source project for monitoring, managing, and handling the lifecycle of Hadoop. It is also the project chosen to provide the management components for the Hortonworks Data Platform. Ambari provides services for Hadoop MapReduce, HDFS, HBase, Pig, Hive, HCatalog, and ZooKeeper. I recently planned to install Ambari and searched the web for a long time without finding a reasonably systematic installation guide, so I installed it by following the official site. Below is the correct and fairly complete installation procedure I recommend; I hope it helps.
1. Preparation
(1) System: mine is CentOS 6.2, x86_64, and this cluster uses two nodes. Management node: 192.168.10.121; client node: 192.168.10.122.
(2) The system should preferably have internet access, which makes the later steps easier; otherwise you need to configure a yum repository, which is more troublesome.
(3) Configure passwordless login from ambari-server (the management node) to the client node.
(4) Synchronize time across the cluster.
(5) SELinux and iptables are both disabled.
(6) Ambari version: 1.2.0.
2. Installation steps
A. Prepare the cluster environment
############ Configure passwordless login #################
[root@ccloud121 ~]# ssh-keygen -t dsa
[root@ccloud121 ~]# cat /root/.ssh/id_dsa
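The preparation list mentions time sync and disabling SELinux/iptables without showing the commands; on CentOS 6 that typically looks like the following (a sketch, not taken from the original post; pool.ntp.org is just an illustrative time server):
service iptables stop && chkconfig iptables off
setenforce 0                                                    # takes effect immediately, until reboot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # persists across reboots
yum install -y ntp
ntpdate pool.ntp.org
service ntpd start && chkconfig ntpd on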

Dynamically create the version number within Ambari's metainfo.xml file using Maven build processes

限于喜欢 submitted on 2020-01-16 04:00:18
Question: I don't want to hardcode my service version into metainfo.xml. Can I avoid that?
<service>
  <name>DUMMY_APP</name>
  <displayName>My Dummy APP</displayName>
  <comment>This is a distributed app.</comment>
  <version>0.1</version>   <!-- this is what I don't want to hardcode; can I do it? -->
  <components>
  ...
  </components>
</service>
I am using Maven as my build tool.
Answer 1: This can be done by using Maven's resource filtering. Three steps are required: Define a Maven property that will hold the version number
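A sketch of how that approach typically comes together (the property name, directory layout, and version value below are illustrative, not from the truncated answer): declare the version as a Maven property, turn on filtering for the resource directory that holds metainfo.xml, and reference the property from the XML.
<!-- pom.xml -->
<properties>
  <service.version>0.2.0</service.version>
</properties>
<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>   <!-- contains metainfo.xml -->
      <filtering>true</filtering>                 <!-- substitutes ${...} placeholders at build time -->
    </resource>
  </resources>
</build>
<!-- src/main/resources/metainfo.xml -->
<service>
  <name>DUMMY_APP</name>
  <version>${service.version}</version>
</service>
Running mvn process-resources (or any later phase) writes the filtered copy under target/classes with the version substituted.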