yarn

java.io.IOException: Not a valid BCFile

北城以北 submitted on 2019-12-12 02:44:49
Question: When I run "yarn logs -applicationId application_1438080928000_6932", this exception appears:

Exception in thread "main" java.io.IOException: Not a valid BCFile.
    at org.apache.hadoop.io.file.tfile.BCFile$Magic.readAndVerify(BCFile.java:927)
    at org.apache.hadoop.io.file.tfile.BCFile$Reader.<init>(BCFile.java:628)
    at org.apache.hadoop.io.file.tfile.TFile$Reader.<init>(TFile.java:804)
    at org.apache.hadoop.yarn.logaggregation.AggregatedLogFormat$LogReader.<init>(AggregatedLogFormat.java:358)
    at …
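A frequent cause of this error is an empty or truncated aggregated log file for one of the containers. A minimal check, assuming the Hadoop 2.x defaults for log aggregation (yarn.nodemanager.remote-app-log-dir=/tmp/logs with suffix "logs"; adjust both for your cluster):

    # List the per-node aggregated log files; a zero-byte file makes
    # AggregatedLogFormat$LogReader fail with "Not a valid BCFile".
    hdfs dfs -ls /tmp/logs/$USER/logs/application_1438080928000_6932
    # Any 0-byte entry is a candidate to move aside before re-running
    # "yarn logs" (destructive: confirm the file really is empty first).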

EACCES permission errors with node-sass

℡╲_俬逩灬. submitted on 2019-12-12 02:34:49
EACCES: permission denied, mkdir '…/node-sass/.node-gyp'

I found plenty of suggested fixes online, which boil down to the following:

1. Change npm's default directory:
   1) mkdir ~/.npm-global
   2) npm config set prefix '~/.npm-global'
   3) export PATH=~/.npm-global/bin:$PATH
   4) source ~/.profile
   5) npm install -g jshint
2. Add --unsafe-perm to the npm command:
   1) npm install node-sass --unsafe-perm

In the end, the simplest, bluntest method that actually worked was to use yarn instead of npm.

P.S.: Installing yarn on Ubuntu may run into issues such as the cmdtest conflict; search for yarn's official installation instructions to resolve them.

Source: CSDN. Author: gupard. Link: https://blog.csdn.net/gupard/article/details/103496615
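For the "use yarn" route, these are roughly the official Debian/Ubuntu install steps the P.S. alludes to (from the yarn docs of that era; the cmdtest package ships an unrelated "yarn" binary, hence the removal):

    sudo apt remove cmdtest        # drop the conflicting "yarn" command
    curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
    echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
    sudo apt update && sudo apt install yarn
    yarn add node-sass             # in place of "npm install node-sass"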

Must-have properties for core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml

半腔热情 submitted on 2019-12-12 02:14:43
Question: Can anyone please let me know the must-have properties for core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml, without which Hadoop cannot start?

Answer 1: The settings below are for Hadoop 2.x.x, for standalone and pseudo-distributed setups.

core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.name…
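Once the four files are in place, a quick smoke test of a pseudo-distributed setup looks like this (standard Hadoop 2.x commands, assuming $HADOOP_HOME/bin and $HADOOP_HOME/sbin are on PATH):

    hdfs namenode -format    # one-time, before the very first start
    start-dfs.sh             # NameNode + DataNode (+ SecondaryNameNode)
    start-yarn.sh            # ResourceManager + NodeManager
    jps                      # all of the above daemons should now be listed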

HistoryServer not able to read logs after enabling Kerberos

旧城冷巷雨未停 submitted on 2019-12-12 00:14:26
Question: I enabled Kerberos on the cluster and it is working fine. But due to some issue, the mapred user is not able to read and display logs on the JobHistory server. I checked the JobHistory server logs and they show an access error:

org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=READ_EXECUTE, inode="/user/history/done_intermediate/prakul":prakul:hadoop:drwxrwx---

As we can see, the directory is accessible to the hadoop group, and mapred is in the hadoop group; even then…
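One thing worth ruling out here (my suggestion, not from the truncated thread): HDFS evaluates group membership on the NameNode, so the mapred-to-hadoop mapping must hold there, not merely on the client. A quick comparison:

    hdfs groups mapred    # the groups the NameNode resolves for mapred
    id mapred             # run this on the NameNode host to compare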

Using all resources in Apache Spark with YARN

随声附和 submitted on 2019-12-11 19:33:47
Question: I am using Apache Spark with the YARN client. I have 4 worker PCs with 8 vcpus each and 30 GB of RAM in my Spark cluster. I set my executor memory to 2G and the number of instances to 33. My job takes 10 hours to run and all machines are about 80% idle. I don't understand the correlation between executor memory and executor instances. Should I have one instance per vcpu? Should I set the executor memory to (memory of machine) / (executors per machine)?

Answer 1: I believe that you have to use the…
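The answer is cut off above, but a common sizing sketch for this hardware (illustrative numbers, not the original answer) reserves a core and a little memory per node for the OS and Hadoop daemons, then runs a few larger executors instead of 33 small ones:

    # 4 nodes x (8 vcpus, 30 GB): leave ~1 vcpu and ~2 GB per node,
    # then run e.g. 2 executors per node with 3 cores and ~12 GB each.
    spark-submit \
      --master yarn \
      --num-executors 8 \
      --executor-cores 3 \
      --executor-memory 12G \
      your_app.py    # hypothetical application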

Spark job on YARN fails with: /bin/bash: /opt/soft/jdk/jdk1.8.0_66/bin/java: No such file or directory

雨燕双飞 submitted on 2019-12-11 17:58:59
The error log is as follows:

/bin/bash: /opt/soft/jdk/jdk1.8.0_66/bin/java: No such file or directory

Clearly the script never exports the Java path, i.e. export JAVA_HOME=/opt/soft/jdk/jdk1.8.0_66. Go to the server's home directory, run ls -all to find the hidden file .bashrc, edit it with vim and append export JAVA_HOME=/opt/soft/jdk/jdk1.8.0_66, then run source .bashrc. From then on every script on the server picks up the JAVA_HOME path, which is simple and convenient.

Source: CSDN. Author: 攻城狮Kevin. Link: https://blog.csdn.net/wx1528159409/article/details/103488227
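The steps described above, written out as commands (paths taken from the error message):

    cd ~
    ls -all                                                      # reveals the hidden .bashrc
    echo 'export JAVA_HOME=/opt/soft/jdk/jdk1.8.0_66' >> .bashrc
    source .bashrc                                               # reload for the current shell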

How to recover the HDP

心不动则不痛 submitted on 2019-12-11 17:58:54
Question: I have this command line to display the YARN policy. The result is:

{
  "id": 131,
  "guid": "4d9c3257-0998-42ea-8506-f773a368430d",
  "isEnabled": true,
  "version": 2,
  "service": "Namecluster_yarn",
  …
  "policyItems": [
    {
      "accesses": [ { "type": "submit-app", "isAllowed": true } ],
      "users": [],
      "groups": [ "Application_Team_1" ],
      "conditions": [],
      "delegateAdmin": false
    }
  ],
  "denyPolicyItems": [],
  "allowExceptions": [],
  "denyExceptions": [],
  "dataMaskPolicyItems": [],
  "rowFilterPolicyItems": []
}
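The command itself is missing from the excerpt, but output of this shape is what the Apache Ranger REST API returns when fetching a policy by id. A hedged guess at the kind of call involved (host, port and credentials are placeholders):

    curl -u admin:password \
      "http://ranger-host:6080/service/public/v2/api/policy/131"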

Error about sbt and YARN when using Spark

六眼飞鱼酱① submitted on 2019-12-11 17:37:41
Question: Hi, when I run this command:

sbt

I see this output:

beyhan@beyhan:~/sparksample$ sbt
Starting sbt: invoke with -help for other options
[info] Set current project to Spark Sample (in build file:/home/beyhan/sparksample/)

Then when I run:

compile

I get this error:

[error] {file:/home/beyhan/sparksample/}default-f390c8/*:update: sbt.ResolveException: unresolved dependency: org.apache.hadoop#hadoop-yarn-common;1.0.4: not found
[error] unresolved dependency: …
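For context: hadoop-yarn-common never existed at version 1.0.4 (YARN first shipped with Hadoop 2.x), so a build that asks for Hadoop 1.0.4 together with the YARN modules cannot resolve. To see what Ivy actually fetched for this build (run from ~/sparksample, sbt assumed on PATH):

    sbt clean update                        # re-resolve all dependencies
    ls ~/.ivy2/cache/org.apache.hadoop      # which hadoop artifacts were fetched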

Fixing Yarn's slow "Build Fresh Packages" step caused by the node-sass dependency

a 夏天 submitted on 2019-12-11 17:17:09
Solution: In the project directory, create a .yarnrc file and add the following lines:

registry "https://registry.npm.taobao.org"
sass_binary_site "https://npm.taobao.org/mirrors/node-sass/"

Source: oschina. Link: https://my.oschina.net/jamesview/blog/3141824
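If you would rather not edit the file by hand, yarn can write the same keys (note that yarn config set writes the user-level ~/.yarnrc rather than the project-local file):

    yarn config set registry https://registry.npm.taobao.org
    yarn config set sass_binary_site https://npm.taobao.org/mirrors/node-sass/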

How to check the Spark config for an application in the Ambari UI, posted with Livy

五迷三道 submitted on 2019-12-11 17:00:23
Question: I am posting jobs to a Spark cluster using the Livy APIs. I want to increase the spark.network.timeout value and I am passing the same value (600s) via the conf field in the Livy POST call. How can I verify that it is correctly honoured and applied to the posted jobs?

Source: https://stackoverflow.com/questions/55690915/how-to-check-spark-config-for-an-application-in-ambari-ui-posted-with-livy
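A sketch of one way to verify (host, port, file and class names are placeholders; the /batches payload shape follows the Livy REST docs): pass the conf in the POST body, then open the application's Spark UI via the YARN ResourceManager link in Ambari and check the Environment tab, which lists the effective spark.* properties.

    curl -s -X POST http://livy-host:8998/batches \
      -H 'Content-Type: application/json' \
      -d '{"file": "hdfs:///jobs/app.jar",
           "className": "com.example.App",
           "conf": {"spark.network.timeout": "600s"}}'
    # Then: YARN RM UI -> ApplicationMaster / Spark UI -> Environment tab;
    # spark.network.timeout should read 600s if the conf was honoured.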