mesos

Hadoop 2.5.0 on Mesos 0.21.0 with library 0.0.8 executor error

笑着哭i, submitted 2019-12-12 01:54:17
Question: stderr logs the following while running a MapReduce job:

root@dbpc42:/tmp/mesos/slaves/20141201-225046-698725789-5050-19765-S24/frameworks/20141201-225046-698725789-5050-19765-0016/executors/executor_Task_Tracker_2/runs/latest# ls
hadoop-2.5.0-cdh5.2.0  hadoop-2.5.0-cdh5.2.0.tgz  stderr  stdout

Contents of stderr:
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1202 19:41:40.323521 7223 fetcher.cpp:76] Fetching URI 'hdfs://dbpc41:9000/hadoop-2.5.0-cdh5.2.0.tgz'
I1202 19…

Docker on Mesos: Volume is placed on which node?

扶醉桌前, submitted 2019-12-11 20:24:55
Question: I will be setting up a Mesos cluster to run single-use Docker jobs, e.g. long RapidMiner computations. Of course I want to get the result of the computation, so I think I should use Docker volumes for that. Now, when I send a Docker job to the cluster, specifying the volume in, for example, a JSON job file for Marathon or Chronos, where does the result of my computation land? I am guessing that it is put into the respective directory on the slave node, but do I really have to go into the Mesos…
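In Marathon's app definition format, a Docker volume maps a directory on the agent (hostPath) into the container (containerPath), so the result lands on whichever slave node ran the task. A minimal sketch of such a job file — the app id, image name, and paths are placeholders, not from the question:

```python
import json

# Hypothetical Marathon app definition. The volume below maps the agent's
# /data/results directory into the container at /output, so anything the
# job writes to /output lands under /data/results on the agent that ran it.
app = {
    "id": "rapidminer-job",
    "cpus": 2.0,
    "mem": 4096,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "example/rapidminer:latest"},
        "volumes": [
            {
                "containerPath": "/output",
                "hostPath": "/data/results",
                "mode": "RW",
            }
        ],
    },
}

print(json.dumps(app, indent=2))
```

One way to avoid hunting the result down per agent is to make hostPath a shared mount (e.g. NFS) that is present on every slave, so the output is reachable from one place regardless of where the task was scheduled.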

Auto scaling resources for foxx/arangodb on mesos

时光怂恿深爱的人放手, submitted 2019-12-11 18:12:48
Question: Is it possible to autoscale Foxx and ArangoDB independently of each other, in lieu of trying to strike a balance, and still autoscale the right amount of RAM/storage/CPU? Simply whether it's a good idea to try to autoscale the deployment would be answer enough.
Answer 1: You are not very specific about what you mean by "scaling ArangoDB". In general, you can add more DB server nodes (primaries) independently of the number of coordinator nodes, if that is what you're asking. Foxx is executed…

How to set up Spark cluster on Windows machines?

不打扰是莪最后的温柔, submitted 2019-12-11 08:48:02
Question: I am trying to set up a Spark cluster on Windows machines. The way to go here is using standalone mode, right? What are the concrete disadvantages of not using Mesos or YARN? And how much pain would it be to use either of those? Does anyone have experience here?
Answer 1: FYI, I got an answer in the user group: https://groups.google.com/forum/#!topic/spark-users/SyBJhQXBqIs Standalone mode is indeed the way to go. Mesos does not work under Windows, and YARN probably doesn't either.
Answer 2: …

Mesos + Jenkins Framework registers but Jenkins Slaves Offline

佐手、, submitted 2019-12-11 04:38:33
Question: I've been struggling for days trying to get this working. I have a working Jenkins master running on Marathon with the Mesos plugin, and one Mesos master with a cluster of 6 slaves. When I run my jobs using the Mesos cloud, I can see the framework registered, with outstanding offers, but my Jenkins slaves never come online.

Connect agent to Jenkins one of these ways:
Launch agent from browser
Run from agent command line: java -jar slave.jar -jnlpUrl http://x.x.x.x:xxxx/computer/mesos-jenkins…
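A frequent cause of "framework registered but agents never come online" is that the Jenkins URL configured in the Mesos cloud (the `x.x.x.x:xxxx` above) is not reachable from the slave nodes, so the launched JNLP agents can never call back. A small, purely illustrative probe to run on a slave — the URL below is a placeholder:

```python
import socket
from urllib.parse import urlparse

def jnlp_reachable(jenkins_url: str, timeout: float = 3.0) -> bool:
    """Check whether the host/port of a Jenkins JNLP URL accepts TCP
    connections from this machine. If this returns False on a Mesos
    slave, agents launched there cannot register with the master."""
    parsed = urlparse(jenkins_url)
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        with socket.create_connection((parsed.hostname, port), timeout=timeout):
            return True
    except OSError:
        return False
```

This only rules out basic network/firewall problems; the Mesos plugin's own slave logs (stderr in the executor sandbox) are the next place to look.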

Spark job fails because it can't find the hadoop core-site.xml

三世轮回, submitted 2019-12-11 04:25:42
Question: I'm trying to run a Spark job and I get this error when I try to start the driver:

16/05/17 14:21:42 ERROR SparkContext: Error initializing SparkContext.
java.io.FileNotFoundException: Added file file:/var/lib/mesos/slave/slaves/0c080f97-9ef5-48a6-9e11-cf556dfab9e3-S1/frameworks/5c37bb33-20a8-4c64-8371-416312d810da-0002/executors/driver-20160517142123-0183/runs/802614c4-636c-4873-9379-b0046c44363d/core-site.xml does not exist.
at org.apache.spark.SparkContext.addFile(SparkContext.scala…
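Spark locates `core-site.xml` through `HADOOP_CONF_DIR`, and the error above surfaces only after the driver has already started inside the executor sandbox. A small pre-flight check — an assumption about the usual cause, not the asker's confirmed fix — can fail fast with a clearer message:

```python
import os

def find_core_site(conf_dir=None):
    """Locate core-site.xml the way Spark's Hadoop integration expects:
    via the HADOOP_CONF_DIR environment variable. Raises a descriptive
    error instead of the opaque FileNotFoundException the driver throws
    mid-startup when the file was never shipped to the sandbox."""
    conf_dir = conf_dir or os.environ.get("HADOOP_CONF_DIR")
    if not conf_dir:
        raise RuntimeError("HADOOP_CONF_DIR is not set")
    path = os.path.join(conf_dir, "core-site.xml")
    if not os.path.isfile(path):
        raise RuntimeError(f"core-site.xml not found in {conf_dir}")
    return path
```

On Mesos the environment of the driver is that of the executor sandbox, not your shell, so the variable must be set where the driver actually runs (e.g. via the submit environment), not just on the machine you submit from.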

Is there a way to get the max memory used by a task in mesos?

本小妞迷上赌, submitted 2019-12-11 04:06:41
Question: Context: I implemented a scheduler in Scala based on the Mesos Scheduler interface. All tasks are perfectly orchestrated. Expectations: Now I would like to monitor the maximum memory consumed by completed tasks. I expect to perform this monitoring inside my implementation of the Scheduler.statusUpdate() method, for every task in the TASK_FINISHED state. Question: In this method, a SchedulerDriver and a Protos.TaskStatus are provided. So, is there a way to retrieve the max memory used…
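A `Protos.TaskStatus` carries no memory figure, so the maximum has to be sampled while the task is running — Mesos agents expose per-executor usage (including `mem_rss_bytes`) on their `/monitor/statistics` endpoint. A sketch of folding those samples into a running max, in Python for illustration (the asker's scheduler is Scala); the agent hostname and executor id are placeholders:

```python
import json
from urllib.request import urlopen

def update_max_rss(stats_json, max_rss, executor_id):
    """Fold one /monitor/statistics snapshot (a list of per-executor
    entries) into a running maximum of resident set size for one
    executor. Call this on every polling tick while the task runs."""
    for entry in stats_json:
        if entry.get("executor_id") == executor_id:
            rss = entry["statistics"].get("mem_rss_bytes", 0)
            max_rss = max(max_rss, rss)
    return max_rss

def sample_agent(agent_host, port=5051):
    """Fetch one statistics snapshot from a Mesos agent."""
    with urlopen(f"http://{agent_host}:{port}/monitor/statistics") as resp:
        return json.load(resp)
```

The scheduler can then report the accumulated maximum when `statusUpdate()` delivers TASK_FINISHED; by that point the executor's entry has usually disappeared from the endpoint, which is why polling during the run is necessary.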

Chronos does not run job

梦想的初衷, submitted 2019-12-11 03:19:20
Question: I have set up a Mesos cluster including Marathon and Chronos, using a Docker image for each service. The Docker images I am using are:

ZooKeeper: jplock/zookeeper:3.4.5
Mesos Master: redjack/mesos-master:0.21.0
Mesos Slave: redjack/mesos-slave:0.21.0
Marathon: mesosphere/marathon:v0.8.2-RC3
Chronos: tomaskral/chronos:2.3.0-mesos0.21.0

ZooKeeper is running on port 2181, the Mesos master on 5050, the Mesos slave on 5051, Marathon on 8088, and Chronos on 8080. What I want to do is run a Docker container…
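For reference, a Chronos job that runs a Docker container is submitted as JSON (via HTTP POST to the Chronos API on port 8080 in this setup). A minimal sketch — the job name, image, schedule, and command are placeholders, not taken from the question:

```python
import json

# Hypothetical Chronos job definition. "schedule" uses ISO 8601 repeating
# interval notation: repeat indefinitely (R), starting at the given
# instant, every 24 hours (PT24H). The container block tells the Mesos
# slave's Docker containerizer which image to launch.
job = {
    "name": "nightly-docker-job",
    "schedule": "R/2015-06-01T00:00:00Z/PT24H",
    "command": "echo hello from the container",
    "cpus": 0.5,
    "mem": 256,
    "container": {
        "type": "DOCKER",
        "image": "library/ubuntu:14.04",
    },
}

print(json.dumps(job, indent=2))
```

If a job like this never runs, the usual suspects are the slaves not being started with Docker in `--containerizers`, or no offer matching the job's cpus/mem request.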

Where to find more explicit errors given container error status codes?

荒凉一梦, submitted 2019-12-10 23:59:59
Question: I am running tasks through a Mesos stack that uses Docker containers. Sometimes some tasks fail. Here are some of the related TaskStatus messages and reasons:

message: Container exited with status 1 - reason: REASON_COMMAND_EXECUTOR_FAILED
message: Container exited with status 42 - reason: REASON_COMMAND_EXECUTOR_FAILED
message: Container exited with status 137 - reason: REASON_COMMAND_EXECUTOR_FAILED

Is there a table of correspondence that links container error status codes…
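There is a general convention rather than a Mesos-specific table: statuses above 128 follow the POSIX shell rule "128 + signal number", so 137 means killed by SIGKILL (commonly the kernel's OOM killer or `docker kill`), while values up to 127 are whatever exit code the application itself returned (42 here is application-specific). A small decoder illustrating the rule:

```python
import signal

def explain_exit_status(status: int) -> str:
    """Decode a container exit status the way a POSIX shell reports it:
    values above 128 mean the process was killed by signal (status - 128);
    anything lower is the application's own exit code."""
    if status > 128:
        sig = status - 128
        try:
            name = signal.Signals(sig).name
        except ValueError:
            name = f"signal {sig}"
        return f"killed by {name}"
    return f"application exit code {status}"
```

So of the three statuses above, only 137 tells you something generic (a SIGKILL, often memory pressure); 1 and 42 have to be interpreted against the documentation or source of the program running inside the container.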

Spark mesos cluster mode is slower than local mode

℡╲_俬逩灬., submitted 2019-12-10 10:56:32
Question: I submit the same jar using both local mode and Mesos cluster mode, and found that for some identical stages, local mode takes only a few milliseconds to finish while cluster mode takes seconds! Here is one example, stage 659:

local mode: 659 Streaming job from [output operation 1, batch time 17:45:50] map at KafkaHelper.scala:35 +details 2016/03/22 17:46:31 11 ms
mesos cluster mode: 659 Streaming job from [output operation 1, batch time 18:01:20] map at KafkaHelper.scala:35…