cloudera-manager

No package oracle-j2sdk1.7 available?

醉酒当歌 submitted on 2019-12-04 05:48:05
I am running the following command for the Cloudera installation: ./cloudera-manager-installer.bin. After accepting the Oracle license I get the error "Installation failed. For logs, go to 2.install-oracle-j2sdk1.7.log". The following is the content of the log file:

    Loaded plugins: fastestmirror, priorities, refresh-packagekit, security
    Loading mirror speeds from cached hostfile
     * base: mirrors.syringanetworks.net
     * extras: mirror.sanctuaryhost.com
     * updates: centos.corenetworks.net
    Setting up Install Process
    No package oracle-j2sdk1.7 available.
    Error: Nothing to do

Has anyone seen this type of error? Any suggestions?
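For context, the oracle-j2sdk1.7 package ships in Cloudera's own yum repository, not in the CentOS base/extras/updates mirrors the log shows, so yum cannot resolve it until that repo is added. A minimal sketch, assuming CentOS 6 and Cloudera Manager 5 (verify the repo URL for your OS and CM version):

    # Add the Cloudera Manager repo so yum can see oracle-j2sdk1.7
    sudo wget https://archive.cloudera.com/cm5/redhat/6/x86_64/cm/cloudera-manager.repo \
         -O /etc/yum.repos.d/cloudera-manager.repo
    # Refresh the metadata and retry the package the installer failed on
    sudo yum clean all
    sudo yum install oracle-j2sdk1.7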

Cloudera Manager Failed to authenticate: Exhausted available authentication methods

狂风中的少年 submitted on 2019-12-03 13:07:22
I am currently trying to learn how to install and configure Cloudera before using it, so I installed Cloudera Manager on Ubuntu 14.04 in VirtualBox. I would like to try it on a pseudo single node (only my computer: no cluster). I managed to finish the installation, then specified the hosts for the CDH cluster installation: localhost 127.0.0.1. My problem is on the "Provide SSH login credentials" step: "Root access to your hosts is required to install the Cloudera packages. This installer will connect to your hosts via SSH and log in either directly as root or as another user with password-less…"
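For what it's worth, "Exhausted available authentication methods" typically means sshd on the target host rejected every method the wizard tried; on Ubuntu 14.04, root password login over SSH is commonly disabled. A minimal sketch of two workarounds, assuming a single-node install on localhost (Option A weakens security and should be reverted afterwards):

    # Make sure an SSH server is installed and running at all
    sudo apt-get install openssh-server
    # Option A: allow root password login by setting, in /etc/ssh/sshd_config:
    #     PermitRootLogin yes
    sudo service ssh restart
    # Option B: use key-based auth instead of a password
    ssh-keygen -t rsa           # accept the defaults
    ssh-copy-id root@localhost  # then point the wizard at ~/.ssh/id_rsa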

beeline not able to connect to hiveserver2

落爺英雄遲暮 submitted on 2019-12-03 06:08:48
I have a CDH 5.3 instance. I start hive-server2 by first starting the hive-metastore and then the hive-server from the command line. After this I use beeline to connect to my hive-server2, but apparently it is not able to:

    Could not open connection to jdbc:hive2://localhost:10000:
    java.net.ConnectException: Connection refused (state=08S01,code=0)

Another issue: I tried to see if hive-server2 was listening on port 10000. I ran "sudo netstat -tulpn | grep :10000", but no application came up. I also added the following property to hive-site.xml, but to no avail. Why doesn't it…
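"Connection refused" together with an empty netstat both point to HiveServer2 not actually listening, rather than a beeline problem. A minimal sketch of the checks, assuming CDH's packaged init scripts and the default Thrift port 10000:

    # Start the services via the CDH init scripts
    sudo service hive-metastore start
    sudo service hive-server2 start
    # Confirm something is now listening on the Thrift port
    sudo netstat -tulpn | grep :10000
    # Only then connect with beeline
    beeline -u jdbc:hive2://localhost:10000 -n hive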

Namenode HA (UnknownHostException: nameservice1)

人走茶凉 submitted on 2019-12-01 04:15:56
We enabled Namenode High Availability through Cloudera Manager, using Cloudera Manager >> HDFS >> Action >> Enable High Availability >> selected the standby Namenode and Journal Nodes, then the nameservice name nameservice1. Once the whole process completed, we deployed the client configuration. We tested from a client machine by listing HDFS directories (hadoop fs -ls /), then manually failed over to the standby namenode and listed the HDFS directories again (hadoop fs -ls /). This test worked perfectly. But when I ran a hadoop sleep job using the following command, it failed: $ hadoop jar /opt/cloudera/parcels/CDH-4.6.0-1.cdh4.6.0.p0.26/lib/hadoop-0…
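As a debugging aid: UnknownHostException: nameservice1 means the failing process resolved the logical nameservice as if it were a hostname, i.e. it ran with a client configuration that predates HA. A minimal sketch of what to check on the submitting machine, using the standard HDFS HA property names:

    # The filesystem URI should name the nameservice, not a single namenode
    hdfs getconf -confKey fs.defaultFS          # expect hdfs://nameservice1
    # The HA mapping must be present in the same client config
    hdfs getconf -confKey dfs.nameservices      # expect nameservice1
    hdfs getconf -confKey dfs.ha.namenodes.nameservice1
    # If these are missing, redeploy the client configuration from Cloudera
    # Manager and restart services (e.g. MapReduce) that cached the old config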

Spark: check your cluster UI to ensure that workers are registered

喜欢而已 submitted on 2019-12-01 03:05:32
I have a simple program in Spark:

    /* SimpleApp.scala */
    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._
    import org.apache.spark.SparkConf

    object SimpleApp {
      def main(args: Array[String]) {
        val conf = new SparkConf()
          .setMaster("spark://10.250.7.117:7077")
          .setAppName("Simple Application")
          .set("spark.cores.max", "2")
        val sc = new SparkContext(conf)
        val ratingsFile = sc.textFile("hdfs://hostname:8020/user/hdfs/mydata/movieLens/ds_small/ratings.csv")
        // first get the first 10 records
        println("Getting the first 10 records: ")
        ratingsFile.take(10)
        // get the number of records…
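This error is usually the standalone scheduler reporting that no worker could satisfy the requested cores/memory, so the application never gets executors. A minimal sketch of what to verify, assuming the standalone master above (the jar name is only an example):

    # 1. Open the master UI and check that workers are ALIVE with free
    #    cores/memory: http://10.250.7.117:8080
    # 2. Request no more than the cluster actually has when submitting:
    spark-submit --master spark://10.250.7.117:7077 \
        --total-executor-cores 2 \
        --executor-memory 512m \
        --class SimpleApp simple-app.jar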

Spark executor logs on YARN

风流意气都作罢 submitted on 2019-11-30 22:22:35
Question: I'm launching a distributed Spark application in YARN client mode, on a Cloudera cluster. After some time I see some errors in Cloudera Manager: some executors get disconnected, and this happens systematically. I would like to debug the issue, but the internal exception is not reported by YARN:

    Exception from container-launch with container ID:
    container_1417503665765_0193_01_000003 and exit code: 1
    ExitCodeException exitCode=1:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
        at org…
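One way in: in yarn-client mode the real stack trace usually sits in the executor container logs rather than in what YARN reports back to the driver. With log aggregation enabled, the logs can be pulled by application ID, which is embedded in the container ID shown above:

    # container_1417503665765_0193_01_000003 -> application_1417503665765_0193
    yarn logs -applicationId application_1417503665765_0193
    # Without log aggregation, check the directory set by
    # yarn.nodemanager.log-dirs on the node that ran the container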

Spark: how to run a Spark file from spark-shell

谁说胖子不能爱 submitted on 2019-11-27 06:16:09
I am using CDH 5.2. I am able to use spark-shell to run commands. How can I run a file (file.spark) that contains Spark commands? Is there any way to run/compile Scala programs in CDH 5.2 without sbt? Thanks in advance.

To load an external file from spark-shell, simply do :load PATH_TO_FILE. This will evaluate everything in your file. I don't have a solution for your SBT question though, sorry :-)

Ziyao Li: On the command line, you can use spark-shell -i file.scala to run code written in file.scala.

javadba: You can use either sbt or maven to compile Spark programs. Simply add the spark as…
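A minimal sketch combining both answers (the file path is only an example):

    # Non-interactive: run the script, then stay in the shell
    spark-shell -i /path/to/file.scala

    # Interactive: from inside a running spark-shell session
    scala> :load /path/to/file.scala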
