mesosphere

How to pre-package external libraries when using Spark on a Mesos cluster

Submitted by ∥☆過路亽.° on 2019-12-17 19:38:51
Question: According to the Spark on Mesos docs, one needs to set spark.executor.uri to point at a Spark distribution:

    val conf = new SparkConf()
      .setMaster("mesos://HOST:5050")
      .setAppName("My app")
      .set("spark.executor.uri", "<path to spark-1.4.1.tar.gz uploaded above>")

The docs also note that one can build a custom version of the Spark distribution. My question now is whether it is possible/desirable to pre-package external libraries such as spark-streaming-kafka, elasticsearch-spark, spark-csv …
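A common alternative to baking such libraries into a custom distribution is to let Spark resolve them at submit time. The sketch below is a minimal example of that approach; the HDFS path, application jar, and package coordinates are illustrative placeholders, not values taken from the question:

    # Hypothetical submit command: pulls the extra libraries from Maven Central
    # at launch instead of pre-packaging them into the distribution referenced
    # by spark.executor.uri. Coordinates below are examples only.
    spark-submit \
      --master mesos://HOST:5050 \
      --conf spark.executor.uri=hdfs:///dist/spark-1.4.1.tar.gz \
      --packages org.apache.spark:spark-streaming-kafka_2.10:1.4.1,com.databricks:spark-csv_2.10:1.4.0 \
      my-app.jar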

Docker mesosphere/chronos container fails immediately after launch

Submitted by 允我心安 on 2019-12-13 18:25:31
Question: I am trying to launch Chronos in Docker using the mesosphere/chronos image. Running the following command from the command line does not work:

    docker run -p 8081:8081 -t mesosphere/chronos:latest /usr/bin/chronos \
      --master zk://<master-hostname>:2181/mesos \
      --zk_hosts <master-hostname>:2181 \
      --http_port 8081

(I am trying with a single ZooKeeper node and a single Mesos master node.) It prints the following messages within a few seconds, and no Chronos container keeps running. /usr/bin/chronos: …
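When a container exits right after launch, the recorded exit code and the process output are usually the quickest clues. A minimal debugging sketch using only the standard Docker CLI (the container ID is a placeholder):

    # List all containers, including ones that already exited,
    # to find the ID of the failed Chronos container.
    docker ps -a

    # Show whatever /usr/bin/chronos printed before it exited.
    docker logs <container-id>

    # Show the exit code Docker recorded for that container.
    docker inspect --format '{{.State.ExitCode}}' <container-id>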

Ephemeral tasks on Marathon

Submitted by 断了今生、忘了曾经 on 2019-12-12 21:44:00
Question: Let me say beforehand that I'm new to the Mesosphere stack. I am migrating an existing Rails application deployment to Mesos and have been successful so far, but now I'm in the middle of running migrations and seeds (through Rake tasks) and I don't see a clean way to do it, since those tasks are ephemeral and don't quite match Marathon's model. How should I proceed?

Answer 1: You could also use Chronos to run a task "Now" that is expected to complete at some point. Marathon is …
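One way to express such a one-off task, following the answer's suggestion, is to submit it to Chronos as a job scheduled to run exactly once. A rough sketch against the Chronos REST API; the job name, command, owner, and Chronos address are placeholders, and the exact schema fields can differ between Chronos versions:

    # Submit a run-once job to Chronos; "R1" in the ISO 8601 schedule means "repeat once".
    curl -X POST http://<chronos-host>:8080/scheduler/iso8601 \
      -H 'Content-Type: application/json' \
      -d '{
            "name": "rails-db-migrate",
            "command": "cd /app && bundle exec rake db:migrate db:seed",
            "schedule": "R1//PT1M",
            "epsilon": "PT30M",
            "owner": "ops@example.com"
          }'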

Spark to MongoDB via Mesos

Submitted by 北城以北 on 2019-12-12 04:59:13
Question: I am trying to connect Apache Spark to MongoDB using Mesos. Here is my architecture:

    - MongoDB: a MongoDB cluster of 2 shards, 1 config server, and 1 query server.
    - Mesos: 1 Mesos master, 4 Mesos slaves.

I have installed Spark on just one node. There is not much information available on this, so I just wanted to pose a few questions. As far as I understand, I can connect Spark to MongoDB via Mesos; in other words, I end up using MongoDB as the storage layer. Do I really need Hadoop? Is …
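For reference, reading MongoDB from Spark on Mesos only needs the Mesos master URL, a Spark distribution the executors can fetch, and a MongoDB connector jar; Hadoop is not required for that path. A rough sketch under those assumptions; the file-server URL, connector jar, and mongos address are placeholders, and spark.mongodb.input.uri is the option name used by the official MongoDB Spark connector (other connectors use different settings):

    # Hypothetical spark-shell invocation against the Mesos master; the connector
    # jar is shipped to executors with --jars, so no Hadoop cluster is involved.
    spark-shell \
      --master mesos://<mesos-master>:5050 \
      --conf spark.executor.uri=http://<file-server>/spark-1.4.1.tar.gz \
      --jars /opt/jars/<mongodb-spark-connector>.jar \
      --conf spark.mongodb.input.uri=mongodb://<query-server>:27017/mydb.mycollection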

finding active framework current resource usage in mesos

Submitted by 落爺英雄遲暮 on 2019-12-12 02:59:40
Question: Which HTTP endpoint will help me find the current resource utilization of all active frameworks? We want this information because we want to scale the Mesos cluster dynamically, and our algorithm needs to know what resources each active framework is using.

Answer 1: I think focusing on the frameworks is not really what you want to do. What you're after is probably the Mesos slave utilization, which can be requested by calling http://{mesos-master}:5050/master/state-summary. In the …
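A quick way to turn that endpoint into per-slave utilization numbers is to post-process the JSON it returns. A minimal sketch with curl and jq, assuming the state-summary response exposes resources and used_resources for each slave (field names may vary slightly between Mesos versions):

    # Print hostname plus used/total CPUs for every slave known to the master.
    curl -s http://<mesos-master>:5050/master/state-summary \
      | jq -r '.slaves[] | "\(.hostname) cpus: \(.used_resources.cpus)/\(.resources.cpus)"'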

Chronos does not run job

Submitted by 梦想的初衷 on 2019-12-11 03:19:20
Question: I have set up a Mesos cluster, including Marathon and Chronos, using a Docker image for each service. The Docker images I am using are as follows:

    ZooKeeper: jplock/zookeeper:3.4.5
    Mesos Master: redjack/mesos-master:0.21.0
    Mesos Slave: redjack/mesos-slave:0.21.0
    Marathon: mesosphere/marathon:v0.8.2-RC3
    Chronos: tomaskral/chronos:2.3.0-mesos0.21.0

ZooKeeper is running on port 2181, the Mesos master on 5050, the Mesos slave on 5051, Marathon on 8088, and Chronos on 8080. What I want to do is run a Docker container …
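For context, a Docker container is normally launched from Chronos by declaring it inside the job definition rather than on the command line. A rough sketch of such a submission; the image, command, and Chronos address are placeholders, and the container block follows the Chronos 2.x job schema:

    # Submit a Chronos job that runs its command inside a Docker container.
    curl -X POST http://<chronos-host>:8080/scheduler/iso8601 \
      -H 'Content-Type: application/json' \
      -d '{
            "name": "dockerized-job",
            "command": "echo hello from the container",
            "schedule": "R1//PT1M",
            "container": {
              "type": "DOCKER",
              "image": "<some-image>:latest"
            }
          }'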

Spark shell connect to Mesos hangs: No credentials provided. Attempting to register without authentication

Submitted by 情到浓时终转凉″ on 2019-12-10 01:50:23
Question: I installed Mesos in an OpenStack environment using these instructions from Mesosphere: https://open.mesosphere.com/getting-started/datacenter/install/. I ran the verification test as described and it was successful. The UIs for both Mesos and Marathon are working as expected. When I run the Spark shell from my laptop, I cannot connect; the shell hangs with the output below. I don't see anything in the Mesos master or slave logs that would indicate an error, so I am not sure what to investigate next.
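When the driver runs on a laptop outside the cluster network, registration often stalls because the Mesos master cannot reach the driver back; that is an assumption about this setup, not something stated in the question. A sketch of the driver-side environment a spark-shell launch against Mesos typically needs, with all addresses and paths as placeholders:

    # Point Spark at the native Mesos library and make the driver reachable
    # from the cluster; every value below is a placeholder.
    export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
    export LIBPROCESS_IP=<routable-ip-of-this-machine>

    spark-shell \
      --master mesos://<mesos-master>:5050 \
      --conf spark.driver.host=<routable-ip-of-this-machine> \
      --conf spark.executor.uri=http://<file-server>/spark-1.4.1.tar.gz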

How to read mesos task stdout/stderr from Mesos framework Scheduler class?

Submitted by 空扰寡人 on 2019-12-08 08:17:37
Question: I am developing a Mesos framework and it is working perfectly fine; my only issue is that I am unable to read a task's stdout or stderr from inside the Scheduler class. I am providing a code sample below. I would like to read the stdout and stderr of a finished task, preferably in the statusUpdate function, but anywhere would be useful. How can I reach that information? I tried getting executorInfo or executorId from the TaskInfo or TaskStatus objects without any luck. If someone can provide a code sample …
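The task's stdout and stderr live in the executor sandbox on the agent, and the agent exposes those files over HTTP. A rough sketch of fetching them from outside the task; the hostnames and the sandbox directory are placeholders that would have to be resolved from the agent's state endpoint, and on older agents the files endpoint is /files/read.json rather than /files/read:

    # Discover the executor's sandbox directory from the agent's state...
    curl -s http://<agent-host>:5051/state.json

    # ...then read the task's stdout through the agent's files API.
    curl -s "http://<agent-host>:5051/files/read?path=<sandbox-directory>/stdout&offset=0"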

Consul deregister 'failing' services

Submitted by ∥☆過路亽.° on 2019-12-08 02:23:40
Question: I have Consul v0.5.2 running and services running in Mesos. Services keep moving from one server to another. Is there a way to deregister services in Consul that are in a 'failing' state? I am able to get the list of services in the critical state with this curl:

    curl http://localhost:8500/v1/health/state/critical

The issue we are seeing is that, over time, the Consul UI accumulates stale data, making the whole UI unusable.

Answer 1: Consul by default does not deregister unhealthy services …
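The critical-state listing above can be fed back into the agent API to clean entries up by hand. A minimal sketch, assuming the ServiceID field of each critical check is what needs removing and that the command runs against the agent that owns the registration; the jq filter and the loop are illustrative:

    # Deregister every service whose health check is currently critical.
    # Newer Consul versions require PUT here; very old ones also accept GET.
    for id in $(curl -s http://localhost:8500/v1/health/state/critical | jq -r '.[].ServiceID'); do
      curl -s -X PUT "http://localhost:8500/v1/agent/service/deregister/$id"
    done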

How to use volumes-from in marathon

Submitted by 天大地大妈咪最大 on 2019-12-06 21:53:43
Question: I have been working with Mesos + Marathon + Docker for quite a while, but I got stuck at one point. At the moment I am trying to deal with persistent containers, and I tried to play around with the "volumes-from" parameter, but I can't make it work because I have no clue how to figure out the name of the data container to put as a key in the JSON. I tried it with the example from here:

    {
      "id": "privileged-job",
      "container": {
        "docker": {
          "image": "mesosphere/inky",
          "privileged": true,
          "parameters": [
            { "key": …
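Because Mesos names the task containers itself, the data container referenced by volumes-from usually has to be started outside Marathon under a name you choose. A sketch under that assumption; the container name, volume path, Marathon address, and resource numbers are placeholders, and the parameters block simply passes --volumes-from through to docker run:

    # Start a named data container outside of Marathon...
    docker run -d --name app-data -v /data busybox true

    # ...then reference that name from the Marathon app definition.
    curl -X POST http://<marathon-host>:8080/v2/apps \
      -H 'Content-Type: application/json' \
      -d '{
            "id": "privileged-job",
            "container": {
              "type": "DOCKER",
              "docker": {
                "image": "mesosphere/inky",
                "privileged": true,
                "parameters": [
                  { "key": "volumes-from", "value": "app-data" }
                ]
              }
            },
            "cpus": 0.25,
            "mem": 64,
            "instances": 1
          }'

Note that this only helps when the task lands on the same agent that holds the data container, which is one reason this approach tends to be fragile on a multi-node cluster.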