yarn

Next.js, a React server-side rendering framework

Submitted by 六月ゝ 毕业季﹏ on 2020-01-02 11:21:58
1. Install the scaffold globally:
   npm install -g create-next-app
2. Create the project (enter the target folder and run the following commands):
   npx create-next-app next-create
   cd next-create
   yarn install
   yarn dev
   Once installation and startup succeed, the start page appears.
3. Write a React page:
   export default () => {
     return (
       <div>线索管理</div>
     )
   }
   Once this is done, the page can be requested directly.
4. Committing code from the Xcode project with Git reports "LF will be replaced by CRLF". The files being committed were created on Windows, where the line ending is CRLF, while Linux uses LF, so the warning appears when running git add . The fix is:
   git config --global core.autocrlf false
   then run the git commit again.
5. Change the startup port:
   "scripts": {
     "dev": "next dev",
     "build": "next build",
     "start": "next start -p 3008"
   }
6. Deploy and start (see the deployment sketch below):
   yarn install
   yarn build
   yarn start > log.info &
   curl http:/
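A minimal deployment sketch that ties steps 5 and 6 together; it assumes the -p 3008 port configured above and adds nohup so the server survives the shell exiting, and the curl URL is a placeholder for a local smoke test, not a value from the original post:

```sh
yarn install
yarn build
# "start" maps to "next start -p 3008" in the package.json scripts above
nohup yarn start > log.info 2>&1 &
# placeholder smoke test against the assumed local port
curl http://localhost:3008
```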

Spark Creates Fewer Partitions Than the minPartitions Argument on wholeTextFiles

Submitted by |▌冷眼眸甩不掉的悲伤 on 2020-01-02 10:00:59
Question: I have a folder which has 14 files in it. I run spark-submit with 10 executors on a cluster whose resource manager is YARN. I create my first RDD like this:
   JavaPairRDD<String,String> files = sc.wholeTextFiles(folderPath.toString(), 10);
However, files.getNumPartitions() gives me 7 or 8, randomly. I do not use coalesce/repartition anywhere, and I finish my DAG with 7-8 partitions. As I understand it, we pass this argument as the "minimum" number of partitions, so why does Spark divide my RDD to
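The second argument to wholeTextFiles is documented as a suggested minimum rather than a guarantee, since it only feeds the split sizing of the underlying combine-file input format. A minimal Java sketch (the folder path is a placeholder) showing how an exact partition count can still be forced with an explicit repartition, at the cost of a shuffle:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class WholeTextFilesPartitions {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("wholeTextFiles-partitions");
    JavaSparkContext sc = new JavaSparkContext(conf);

    String folderPath = args[0]; // placeholder: the folder holding the 14 files

    // The second argument is only a suggested minimum, so fewer partitions can come out.
    JavaPairRDD<String, String> files = sc.wholeTextFiles(folderPath, 10);
    System.out.println("partitions from wholeTextFiles: " + files.getNumPartitions());

    // Forcing an exact count requires an explicit repartition (extra shuffle).
    JavaPairRDD<String, String> tenPartitions = files.repartition(10);
    System.out.println("partitions after repartition: " + tenPartitions.getNumPartitions());

    sc.stop();
  }
}
```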

Spark 1.3.0: Running Pi example on YARN fails

Submitted by 懵懂的女人 on 2020-01-02 08:34:12
Question: I have Hadoop 2.6.0.2.2.0.0-2041 with Hive 0.14.0.2.2.0.0-2041. After building Spark with the command:
   mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver -DskipTests package
I try to run the Pi example on YARN with the following commands:
   export HADOOP_CONF_DIR=/etc/hadoop/conf
   /var/home2/test/spark/bin/spark-submit \
     --class org.apache.spark.examples.SparkPi \
     --master yarn-cluster \
     --executor-memory 3G \
     --num-executors 50 \
     hdfs:///user/test/jars/spark-examples-1.3.0-hadoop2
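For orientation, SparkPi is just a Monte Carlo estimate of pi. A rough Java sketch of the same computation (not the bundled Scala example) can be packaged and submitted in place of the examples jar to check whether a failure is specific to the self-built assembly:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

public class JavaPiSketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("JavaPiSketch");
    JavaSparkContext sc = new JavaSparkContext(conf);

    int slices = 10;
    int n = 100000 * slices;
    List<Integer> samples = new ArrayList<Integer>(n);
    for (int i = 0; i < n; i++) {
      samples.add(i);
    }

    // Count how many random points in the square land inside the unit circle.
    long inside = sc.parallelize(samples, slices).filter(new Function<Integer, Boolean>() {
      @Override
      public Boolean call(Integer ignored) {
        double x = Math.random() * 2 - 1;
        double y = Math.random() * 2 - 1;
        return x * x + y * y <= 1;
      }
    }).count();

    System.out.println("Pi is roughly " + 4.0 * inside / n);
    sc.stop();
  }
}
```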

Trouble writing a temp file on a datanode with Hadoop

Submitted by 邮差的信 on 2020-01-02 07:43:06
Question: I would like to create a file during my program. However, I don't want this file to be written to HDFS but to the datanode filesystem where the map operation is executed. I tried the following approach:
   public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
       // do some hadoop stuff, like counting words
       String path = "newFile.txt";
       try {
           File f = new File(path);
           f.createNewFile();
       } catch (IOException e) {
           System.out.println("Message easy to look up
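A hedged alternative to the bare relative path above: a relative java.io path resolves inside the YARN container's working directory, which is wiped when the task ends, so writing under the node's temp directory makes it clearer that the file lands on the local disk of whichever machine runs the map task. This is a sketch of that idea, not necessarily what the original program needs:

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class LocalTempFileMapper extends Mapper<Object, Text, Text, Text> {

  @Override
  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    // Creates the file under java.io.tmpdir on the node executing this task,
    // not on HDFS. Each task attempt runs on its own node, so the file exists
    // only on that machine, and nothing cleans it up automatically.
    File local = File.createTempFile("mapper-scratch-", ".txt");
    FileWriter writer = new FileWriter(local);
    try {
      writer.write(value.toString());
    } finally {
      writer.close();
    }
    // Emit the local path so it can be found later for debugging.
    context.write(new Text(local.getAbsolutePath()), value);
  }
}
```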

Can't run a MapReduce job on Hadoop 2.4.0

Submitted by 你说的曾经没有我的故事 on 2020-01-02 02:20:20
Question: I am new to Hadoop and here is my problem. I have configured Hadoop 2.4.0 with jdk1.7.60 on a cluster of 3 machines. I am able to execute all the Hadoop commands. Now I have modified the wordcount example and created a jar file. I have already executed this jar file on Hadoop 1.2.1 and got the result, but now on Hadoop 2.4.0 I am not getting any result. Command used for execution:
   $ hadoop jar WordCount.jar WordCount /data/webdocs.dat /output
I am getting the following message from the setup: 14/06
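As a baseline to compare the modified jar against, this is the WordCount from the stock Hadoop 2.x MapReduce tutorial; running it with job.waitForCompletion(true) prints map/reduce progress, which helps show whether the job is actually being scheduled by YARN or is stuck waiting for resources (input and output paths are whatever is passed on the command line):

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```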

Spark executor memory cut to 1/2

Submitted by 不想你离开。 on 2020-01-01 18:21:19
Question: I am doing a spark-submit like this:
   spark-submit --class com.mine.myclass --master yarn-cluster --num-executors 3 --executor-memory 4G spark-examples_2.10-1.0.jar
In the web UI I can see that there are indeed 3 executor nodes, but each has 2G of memory. When I set --executor-memory 2G, the UI shows 1G per node. How did it figure to reduce my setting by 1/2?

Answer 1: The executor page of the Web UI is showing the amount of storage memory, which is equal to 54% of the Java heap by default (spark.storage
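A quick worked check under the Spark 1.x defaults the answer refers to: storage memory is roughly spark.storage.memoryFraction (0.6) times its safety fraction (0.9) of the heap, about 0.54 of it. So a 4G executor shows 4 GB × 0.54 ≈ 2.2 GB and a 2G executor shows 2 GB × 0.54 ≈ 1.1 GB of storage memory in the UI, which matches the roughly halved figures observed; the executors still receive the full requested heap.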

Client cannot authenticate via:[TOKEN, KERBEROS]

Submitted by 痞子三分冷 on 2020-01-01 06:13:06
Question: I'm using YarnClient to programmatically start a job. The cluster I'm running on has been Kerberized. Normal MapReduce jobs submitted via "yarn jar examples.jar wordcount..." work. The job I'm trying to submit programmatically does not. I get this error:
   14/09/04 21:14:29 ERROR client.ClientService: Error happened during application submit: Application application_1409863263326_0002 failed 2 times due to AM Container for appattempt_1409863263326_0002_000002 exited with exitCode: -1000
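A common approach for programmatic submission on a Kerberized cluster is to log in from a keytab before creating the YarnClient; a minimal sketch, assuming a placeholder principal and keytab path (neither comes from the original question):

```java
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class KerberizedYarnClientSketch {
  public static void main(String[] args) throws Exception {
    // Picks up core-site.xml/yarn-site.xml from the classpath, including
    // hadoop.security.authentication=kerberos.
    YarnConfiguration conf = new YarnConfiguration();
    UserGroupInformation.setConfiguration(conf);

    // Placeholder principal and keytab path; replace with real values.
    UserGroupInformation.loginUserFromKeytab(
        "user@EXAMPLE.COM", "/etc/security/keytabs/user.keytab");

    YarnClient yarnClient = YarnClient.createYarnClient();
    yarnClient.init(conf);
    yarnClient.start();
    // ... build and submit the ApplicationSubmissionContext here ...
    yarnClient.stop();
  }
}
```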

Pitfalls encountered while using npm

Submitted by 有些话、适合烂在心里 on 2019-12-31 21:06:56
Project background: the front end is built with Vue and scaffolded with Vue-cli; it is a mobile web page that is embedded in and accessed from a native app. The project uses some common APIs for interacting with the native app, and to make them easier to maintain and reuse across scenarios they were extracted into an npm dependency package, added to package.json by configuring its repository address as the dependency source. Since the extracted app-mobile-api library lives in the company's code repository and is only reachable from the company intranet, npm install cannot fetch it from the public internet the way it fetches vue or vuex.

Notes from the pitfalls: I first downloaded every package except app-mobile-api, then switched to the company intranet and ran npm install; the process hung, and after a while it timed out. So I downloaded the app-mobile-api package (as a .zip) directly, unzipped it and dropped it into node_modules, which then produced the following error: ~This dependency was not found~. Yet the app-mobile-api package really was in node_modules, and code completion showed the dependency taking effect, but the build still failed. I searched around a lot and found no workable solution. Then I happened to see a comparison of npm and yarn: npm downloads packages sequentially, which blocks easily, and it has no cache; yarn downloads asynchronously, so dependencies can be fetched in parallel, one stuck at the front does not block the ones behind it, and it also has an offline cache
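One workaround worth noting for a package that cannot be fetched from the public registry is to vendor a local copy and reference it with a file: specifier, which both npm and yarn understand; the path and the version numbers below are placeholders, not the project's real configuration:

```json
{
  "dependencies": {
    "vue": "^2.5.2",
    "vuex": "^3.0.1",
    "app-mobile-api": "file:./vendor/app-mobile-api"
  }
}
```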

hadoop - Spark on YARN cluster setup

Submitted by 邮差的信 on 2019-12-31 16:40:55
I. Environment preparation
1. Machines: 3 virtual machines
   Machine / role:
   l-qta3.sp.beta.cn0: NameNode, ResourceManager, Spark master
   l-querydiff1.sp.beta.cn0: DataNode, NodeManager, Worker
   l-bgautotest2.sp.beta.cn0: DataNode, NodeManager, Worker
2. JDK version:
   [xx@l-qta3.sp.beta.cn0 ~]$ java -version
   java version "1.7.0_45"
   Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
   Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
3. Preparation
   1) Passwordless SSH login: the machines in the cluster need passwordless access to each other (a minimal sketch follows after this list). Reference: http://www.cnblogs.com/lijingchn/p/5580263.html
   2) Download the hadoop 2.6.5 binary. URL: http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz
4. Extract hadoop-2.6
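A minimal sketch of the passwordless SSH step for the three hosts listed above, run as whatever user starts the Hadoop and Spark daemons (key type and paths are the usual defaults; the linked reference covers the details):

```sh
# Generate a key pair once on the NameNode host (empty passphrase).
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

# Push the public key to every node in the cluster, including this one.
for host in l-qta3.sp.beta.cn0 l-querydiff1.sp.beta.cn0 l-bgautotest2.sp.beta.cn0; do
  ssh-copy-id "$host"
done
```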