Cloudera

0007 - How to Migrate the Cloudera Manager Node

Submitted by 我与影子孤独终老i on 2019-12-11 11:14:58
1. Overview

This document describes how to migrate Cloudera Manager to a new CM node in a Kerberos-enabled environment. From this document you will learn:

1. How to migrate the Cloudera Manager node
2. How to migrate the MySQL metadata database
3. How to migrate the Kerberos MIT KDC

The document consists of the following steps:

1. Prepare the new Cloudera Manager node
2. Migrate the MariaDB database (optional)
3. Migrate the Kerberos MIT KDC (optional)
4. Migrate the data from the original CM node to the new node
5. Verify the cluster services after migration

This document focuses on Cloudera Manager node migration and is based on the following assumptions:

1. The CDH environment has been set up and is running normally
2. The old Cloudera Manager node hosts the Cloudera Manager Server service (i.e. cloudera-scm-server) and the Cloudera Management Service (Alert Publisher/Event Server/Host Monitor/Reports Manager/Service Monitor)
3. MIT Kerberos has been configured on the cluster and is working normally
4. The cluster runs the Hadoop services HBase/Hive/HDFS/Hue/Kafka/Oozie/Spark/Spark2/Yarn
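The MariaDB migration step above typically boils down to a dump-and-restore plus a Cloudera Manager restart. A minimal sketch, assuming hypothetical hostnames (new-cm.example.com) and that root database credentials are available; this is an illustration, not the article's exact procedure:

```shell
# Stop Cloudera Manager Server so no writes hit the metadata DB during the dump
systemctl stop cloudera-scm-server

# Dump all databases (cm, hive, oozie, hue, ...) from the old node
mysqldump -u root -p --all-databases --single-transaction > /tmp/cm_all_dbs.sql

# Copy the dump to the new node and restore it there
scp /tmp/cm_all_dbs.sql root@new-cm.example.com:/tmp/
ssh root@new-cm.example.com "mysql -u root -p < /tmp/cm_all_dbs.sql"
```

Before starting cloudera-scm-server on the new node, /etc/cloudera-scm-server/db.properties must point at the restored database.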

How to Change the Cloudera Manager Time Zone

Submitted by 巧了我就是萌 on 2019-12-11 11:05:08
Reprinted from: https://mp.weixin.qq.com/s?__biz=MzI4OTY3MTUyNg==&mid=2247492761&idx=2&sn=7419ad1720753a8229fb00d15aaf0335&chksm=ec293490db5ebd8617e86c814b6cfd391da1f9d93a00e097ef42730026cdb7b13f23323e5a56&scene=21#wechat_redirect

Purpose: When using a CDH cluster, we sometimes need to change Cloudera Manager's time zone to the local time zone so that the monitoring charts are easier to read. In this article Fayson explains how to change the Cloudera Manager time zone.

Test environment:

1. CDH 6.0
2. Red Hat 7.4
3. Operations performed as the root user

Configuration:

1. Fayson's test CDH 6.0 cluster has 4 machines, each with its time zone set to UTC, as shown below: # bash ssh_do_all.sh node.list timedatectl
2. Check via the Cloudera Manager home page: "Home" -> "Support" -> "About"

Changing the Cloudera Manager time zone: 1. First, confirm from the command line that the OS includes the time zone you want to switch to
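The OS-level check and change described above can be sketched with timedatectl (assuming a systemd-based OS such as the Red Hat 7.4 used here; Asia/Shanghai is just an example target zone):

```shell
# Confirm the OS knows the target time zone
timedatectl list-timezones | grep -i shanghai

# Switch the OS time zone (repeat on every node, e.g. via ssh_do_all.sh)
timedatectl set-timezone Asia/Shanghai

# Restart Cloudera Manager Server so it picks up the new zone
systemctl restart cloudera-scm-server
```

The Cloudera Management Service roles generally need a restart as well before the charts render in the new zone.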

Tableau: Error while using Impala to connect to Cloudera Hadoop

Submitted by 冷暖自知 on 2019-12-11 09:28:29
Question: I am working on using Tableau to connect to Cloudera Hadoop. I provide the server and port details and connect using "Impala". I am able to successfully connect, select the default schema and choose the required table(s). After this, when I drag and drop either a dimension or a measure to Rows/Columns on the 'grid', I get the error below: [Cloudera][Hardy] (22) Error from ThriftHiveClient: Query returned non-zero code: 10025, cause: FAILED: SemanticException [Error 10025]: Line 1:7 Expression not
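For context, Hive's SemanticException [Error 10025] is "Expression not in GROUP BY key": a non-aggregated column appears in the SELECT list of an aggregate query. A minimal illustration of what triggers it, with hypothetical table and column names (not taken from the question):

```shell
# Fails with Error 10025: 'amount' is selected but neither grouped nor aggregated
hive -e "SELECT category, amount, SUM(amount) FROM sales GROUP BY category"

# Works: every selected column is either grouped or aggregated
hive -e "SELECT category, SUM(amount) FROM sales GROUP BY category"
```

The [Hardy] prefix in the message is the Hive ODBC driver's tag, which suggests the generated query is being parsed by Hive semantics rather than Impala's.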

Camel-Kafka security protocol SASL_PLAINTEXT not supported

Submitted by 两盒软妹~` on 2019-12-11 07:29:15
Question: I need to route ActiveMQ messages to Kafka (Cloudera) via Camel, using Kerberos for authentication. ActiveMQ v5.15.4, Camel 2.21.1, Kafka Clients 1.1.0, Server Version: Apache/2.4.6 (CentOS). The Kafka security documentation states that it only supports SASL_PLAINTEXT and SASL_SSL for Kerberos. On the other hand, when I try to use SASL_PLAINTEXT as the security protocol in Camel, I get an error while ActiveMQ is starting; as a result, ActiveMQ will not start. I took the latest Camel code from:
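For reference, a Kerberos-enabled Kafka client pairs security.protocol=SASL_PLAINTEXT with the GSSAPI SASL mechanism. A sketch of the plain Kafka client properties involved (the service name "kafka" and the file path are assumptions; on the Camel side the equivalent endpoint options are securityProtocol and saslKerberosServiceName):

```shell
# Write a minimal Kerberos client config for a Kafka 1.1.0 client
cat > /tmp/kafka-client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
EOF

cat /tmp/kafka-client.properties
```

A JAAS configuration (keytab and principal) must also be supplied to the JVM for GSSAPI to work.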

Cloudera CDH 5.7.2 / HBase: How to Set hfile.format.version?

Submitted by 假如想象 on 2019-12-11 06:21:09
Question: With CDH 5.7.2-1.cdh5.7.2.po.18, I am trying to use Cloudera Manager to configure HBase to use visibility labels and authorizations, as described in the Cloudera Community post below: Cloudera Manager Hbase Visibility Labels. Using Cloudera Manager, I have successfully updated the values of the following properties: hbase.coprocessor.region.classes: set to org.apache.hadoop.hbase.security.visibility.VisibilityController; hbase.coprocessor.master.classes: set to org.apache.hadoop.hbase.security
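Visibility labels require HFile format version 3. Since hfile.format.version has no dedicated field in Cloudera Manager, it typically goes into the "HBase Service Advanced Configuration Snippet (Safety Valve) for hbase-site.xml". A sketch of the XML fragment, written to a local file here purely for illustration:

```shell
# The property block to paste into the hbase-site.xml safety valve
cat > /tmp/hbase-site-snippet.xml <<'EOF'
<property>
  <name>hfile.format.version</name>
  <value>3</value>
</property>
EOF

cat /tmp/hbase-site-snippet.xml
```

A restart of the HBase service is needed after saving the snippet for the new format version to take effect.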

Hive “Creating Hive Metastore Database Tables” command fails on installation 'Path A' using Cloudera Manager

Submitted by 为君一笑 on 2019-12-11 06:06:17
Question: I am installing Cloudera Manager on an EC2 instance. I only added a single other EC2 instance to the cluster. The installation succeeded, but when the manager initiates the cluster services (step 9 of 21) I get the following error: [2013-07-12 18:44:35,906]ERROR 63227[main] com.cloudera.enterprise.dbutil.SqlRunner.open(SqlRunner.java:111) - Error connecting to db with user 'hive' and jdbcUrl 'jdbc:postgresql://ip-xx-xxx-xx-x.ec2.internal:7432/hive' I manually opened port 7432 on the ec2
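When Cloudera Manager's embedded PostgreSQL (port 7432) is involved, it helps to verify from the failing host both that the port is reachable and that the 'hive' user can actually log in. A diagnostic sketch, keeping the masked hostname from the log; the password placeholder stands for whatever CM generated for the hive database:

```shell
# Is the port reachable at all from this host (security group / firewall check)?
nc -zv ip-xx-xxx-xx-x.ec2.internal 7432

# Can the 'hive' user authenticate against the 'hive' database?
PGPASSWORD='<generated password>' psql -h ip-xx-xxx-xx-x.ec2.internal -p 7432 -U hive -d hive -c '\conninfo'
```

If nc succeeds but psql fails, the problem is credentials or pg_hba.conf rather than the EC2 security group.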

How to compare two columns with different data type groups

Submitted by 两盒软妹~` on 2019-12-11 05:39:23
Question: This is an extension of a question I posed yesterday: How to handle potential data loss when performing comparisons across data types in different groups. In Hive, is it possible to perform comparisons between two columns that are in different data type groups inline within the SELECT clause? I need to first determine what the incoming metadata is for each column and then provide logic that picks which CAST to use. CASE WHEN Column1 <=> Column2 THEN 0 -- Error occurs here if data types are in
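One way to make such a comparison legal regardless of the incoming types is to normalize both sides with an explicit CAST before the null-safe comparison. A sketch with hypothetical column and table names, runnable from the shell:

```shell
# Cast both sides to a common type so <=> compares within one type group
hive -e "
SELECT CASE
         WHEN CAST(Column1 AS STRING) <=> CAST(Column2 AS STRING) THEN 0
         ELSE 1
       END AS diff_flag
FROM my_table"
```

Note that casting to STRING sidesteps the type-group error but changes comparison semantics for numerics (e.g. '1.0' is not equal to '1'), so a numeric target type may be the better choice when both columns are numeric-like.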

Load a text file into Apache Kudu table?

Submitted by China☆狼群 on 2019-12-11 04:39:59
Question: How do you load a text file into an Apache Kudu table? Does the source file need to be in HDFS first? If it doesn't share the same HDFS space as other Hadoop ecosystem programs (i.e. Hive, Impala), is there an Apache Kudu equivalent of: hdfs dfs -put /path/to/file that I should run before I try to load the file? Answer 1: The file does not need to be in HDFS first. It can be taken from an edge node/local machine. Kudu is similar to HBase: it is a real-time store that supports key-indexed record lookup and mutation, but can't
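Even though, as the answer notes, the file need not start in HDFS, one common route via Impala is to stage it as an external text table and then INSERT into the Kudu table. A sketch with hypothetical file, table, and column names:

```shell
# Stage the file in HDFS and expose it as an external text table
hdfs dfs -mkdir -p /staging/mydata
hdfs dfs -put /path/to/file.csv /staging/mydata/

impala-shell -q "CREATE EXTERNAL TABLE staging_mydata (id INT, name STRING)
                 ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
                 LOCATION '/staging/mydata'"

# Copy the staged rows into the (already existing) Kudu-backed table
impala-shell -q "INSERT INTO my_kudu_table SELECT id, name FROM staging_mydata"
```

Tools such as Spark or the Kudu client API can instead read the file straight from an edge node, skipping the HDFS staging step entirely.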

Access tables from Impala through Python

Submitted by 不羁的心 on 2019-12-11 04:05:17
Question: I need to access tables in Impala from Python on the same Cloudera server. I have tried the code below to establish the connection:

def query_impala(sql):
    cursor = query_impala_cursor(sql)
    result = cursor.fetchall()
    field_names = [f[0] for f in cursor.description]
    return result, field_names

def query_impala_cursor(sql, params=None):
    conn = connect(host='xx.xx.xx.xx', port=21050, database='am_playbook', user='xxxxxxxx', password='xxxxxxxx')
    cursor = conn.cursor()
    cursor.execute(sql

Sqoop import failed, UnsupportedClassVersionError

Submitted by 人走茶凉 on 2019-12-11 02:21:06
Question: I was trying to import a table from MySQL into HDFS using Sqoop. The command line used is: sqoop import --connect jdbc:mysql://192.168.10.452/qw_key_test --username qw -P --split-by qw_id -m 10 --target-dir /user/perf/qwperf/sqoops --verbose --table qw_perf_store_key The mappers fail with an unsupported-version error, as shown below: 2013-05-22 17:46:24,165 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead 2013-05-22 17:46
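UnsupportedClassVersionError means a class was compiled with a newer JDK than the JVM that runs it, here the mapper JVM. A sketch of the usual diagnosis (the paths and JDK version are assumptions; the point is to compare the client JVM with the one the task JVMs use, and align them):

```shell
# Which Java runs on the client where sqoop is invoked?
java -version

# Which Java do the Hadoop daemons and task JVMs use?
grep JAVA_HOME /etc/hadoop/conf/hadoop-env.sh

# Point the session at a JDK at least as new as the one Sqoop was built with
export JAVA_HOME=/usr/java/jdk1.7.0
export PATH=$JAVA_HOME/bin:$PATH
```

The fix is to make every node's task JVM at least as new as the JDK that compiled the Sqoop-generated classes, then rerun the import.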