Mesos

Isn't Chronos a centralized scheduler?

走远了吗 · submitted on 2019-12-24 20:18:09
Question: Why is Chronos called a distributed and fault-tolerant scheduler? As I understand it, there is only one scheduler instance running that manages job schedules. Per the Chronos docs, the Chronos scheduler main loop is quite simple. The pattern is as follows: Chronos reads all job state from the state store (ZooKeeper); jobs are registered within the scheduler and loaded into the job graph for tracking dependencies; jobs are separated into a list of those which should be run at the…
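The loop described in the excerpt can be sketched as follows. This is an illustrative reimplementation, not Chronos's actual code; the job fields (`name`, `due_at`, `depends_on`) are invented for the example.

```python
def schedule_pass(jobs, now):
    """One pass of a Chronos-style scheduler loop (illustrative only).

    `jobs` stands in for the state read from ZooKeeper: a list of dicts
    with hypothetical fields `name`, `due_at`, and `depends_on`.
    Returns names of jobs to launch now: jobs that are due and whose
    dependencies do not themselves have a pending run this pass.
    """
    due = [j for j in jobs if j["due_at"] <= now]
    pending = {j["name"] for j in due}
    return [j["name"] for j in due
            if not any(dep in pending for dep in j["depends_on"])]

jobs = [
    {"name": "extract", "due_at": 100, "depends_on": []},
    {"name": "load", "due_at": 100, "depends_on": ["extract"]},
    {"name": "report", "due_at": 200, "depends_on": ["load"]},
]
print(schedule_pass(jobs, now=150))  # → ['extract']
```

As for the "distributed" claim: Chronos is typically run as several instances that elect a leader through ZooKeeper; only the leader drives this loop, and because all job state lives in ZooKeeper, a standby instance can take over on failure. The loop itself being single-threaded on one leader does not contradict that.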

WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect

自闭症网瘾萝莉.ら · submitted on 2019-12-24 07:30:03
Question: I have two nodes, each with Docker, Mesos, Marathon, and ZooKeeper installed. This is my docker-compose file on the master node: version: '3.7' services: zookeeper: image: ubuntu_mesos_home_marzieh command: /home/zookeeper-3.4.8/bin/zkServer.sh restart environment: ZOOKEEPER_SERVER_ID: 1 ZOOKEEPER_CLIENT_PORT: 2190 ZOOKEEPER_TICK_TIME: 2000 ZOOKEEPER_INIT_LIMIT: 10 ZOOKEEPER_SYNC_LIMIT: 5 ZOOKEEPER_SERVERS: 150.20.11.133:2888:3888;150.20.11.136:2888:3888 network_mode:…
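That warning generally means the ZooKeeper client could not reach any server before a session was established, so the first thing to verify is that the server container is actually up and reachable. One likely culprit in the compose file above: `zkServer.sh restart` forks and exits, which stops the container. A hedged sketch of a fragment for node 1, reusing the values from the question (node 2 would use `ZOOKEEPER_SERVER_ID: 2`; whether the image honors these `ZOOKEEPER_*` variables depends on its entrypoint, which is not shown in the question):

```yaml
services:
  zookeeper:
    image: ubuntu_mesos_home_marzieh
    # start-foreground keeps the process in the foreground so the
    # container stays alive; plain restart/start would fork and exit.
    command: /home/zookeeper-3.4.8/bin/zkServer.sh start-foreground
    # host networking avoids NAT on the ensemble ports 2888/3888
    network_mode: host
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2190
      ZOOKEEPER_SERVERS: 150.20.11.133:2888:3888;150.20.11.136:2888:3888
```

Clients would then connect to port 2190 (the non-default client port configured here), not 2181.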

HDFS resiliency to machine restarts in DC/OS

假装没事ソ · submitted on 2019-12-24 05:59:33
Question: I have installed HDFS from Universe on my DC/OS cluster of 10 CoreOS machines (3 master nodes, 7 agent nodes). My HA HDFS config has 2 name nodes, 3 journal nodes, and 5 data nodes. Now, my question is: shouldn't HDFS be resilient to machine restarts? If I restart a machine where a data node is installed, the data node gets rebuilt as a mirror of the others (only after restarting the HDFS service from the DC/OS UI). In the case of a restart where a journal node or a name node is, the nodes…

How to know the container name with the Marathon REST API

帅比萌擦擦* · submitted on 2019-12-23 19:16:01
Question: I'm using Apache Mesos + Marathon + ZooKeeper to deploy my Rails app. I need to share data between the Rails app and another container. I found some reference here on doing it with Marathon, as follows: marathon/docs/native-docker.html { "id": "privileged-job", "container": { "docker": { "image": "mesosphere/inky", "privileged": true, "parameters": [ { "key": "hostname", "value": "a.corp.org" }, { "key": "volumes-from", "value": "another-container" }, { "key": "lxc-conf", "value": "..." } ] }, "type":…
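For reference, the fragment quoted from the Marathon docs, reassembled as well-formed JSON (the original inline quote is missing a comma after the image name; the names here are the docs' own placeholders, not real apps):

```json
{
  "id": "privileged-job",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "mesosphere/inky",
      "privileged": true,
      "parameters": [
        { "key": "hostname", "value": "a.corp.org" },
        { "key": "volumes-from", "value": "another-container" },
        { "key": "lxc-conf", "value": "..." }
      ]
    }
  }
}
```

The catch, and presumably the point of the question: `volumes-from` needs the Docker container's actual name, and containers launched by Marathon get generated names (visible in `docker ps` as `mesos-…`), not the Marathon app id.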

How to set up Cassandra Docker cluster in Marathon with BRIDGE network?

爷,独闯天下 · submitted on 2019-12-23 17:51:52
Question: I have a production DC/OS (v1.8.4) cluster and I am trying to set up a Cassandra cluster inside it. I use Marathon (v1.3.0) to deploy the Cassandra nodes, with the official Docker image of Cassandra, specifically version 2.2.3. First case: deploying Cassandra using HOST mode networking, where everything is OK. In this case, I first deploy a node that I call cassasndra-seed, and it attaches to a physical host with IP 10.32.0.6. From the stdout log of Marathon for this service I can see that "Node /10.32…
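On the BRIDGE-mode case the title asks about: in bridge networking the container only sees its private address, while peers must reach it on the agent's IP and mapped ports, so Cassandra has to be told what to advertise. A hedged sketch of the relevant Marathon fields (the host ports and seed IP below are placeholders; `CASSANDRA_BROADCAST_ADDRESS` is an environment variable supported by the official image):

```json
{
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "cassandra:2.2.3",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 9042, "hostPort": 9042 },
        { "containerPort": 7000, "hostPort": 7000 }
      ]
    }
  },
  "env": {
    "CASSANDRA_BROADCAST_ADDRESS": "10.32.0.6"
  }
}
```

Note that the broadcast address carries no port, so the inter-node port 7000 has to be mapped 1:1 (`hostPort` equal to `containerPort`) on every node; randomly assigned host ports will not work for gossip.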

Handle database connection inside spark streaming

ⅰ亾dé卋堺 · submitted on 2019-12-23 16:24:20
Question: I am not sure I correctly understand how Spark handles database connections, or how to reliably run a large number of database update operations inside Spark without potentially breaking the Spark job. This is a code snippet I have been using (for easy illustration): val driver = new MongoDriver val hostList: List[String] = conf.getString("mongo.hosts").split(",").toList val connection = driver.connection(hostList) val mongodb = connection(conf.getString("mongo.db")) val dailyInventoryCol =…
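The snippet builds the driver and connection on the driver program, but connection objects are generally not serializable into executor closures. The usual pattern is to open the connection inside the per-partition function, write all rows, and close it there. A minimal sketch of that pattern; the `Connection` class here is a dummy stand-in for a real database client, not the Mongo driver from the question:

```python
class Connection:
    """Stand-in for a real database client (e.g. a Mongo driver)."""
    def __init__(self, host):
        self.host = host
        self.writes = 0
        self.closed = False

    def insert(self, doc):
        assert not self.closed, "write after close"
        self.writes += 1

    def close(self):
        self.closed = True

def write_partition(rows, host="db-host:27017"):
    # In Spark this function would be passed to rdd.foreachPartition:
    # the connection is created on the executor, once per partition,
    # rather than built on the driver and captured in the closure.
    conn = Connection(host)
    try:
        for row in rows:
            conn.insert(row)
    finally:
        conn.close()
    return conn  # returned only so this sketch can be inspected

conn = write_partition([{"sku": 1}, {"sku": 2}])
print(conn.writes, conn.closed)  # → 2 True
```

One connection per partition keeps the connection count bounded by parallelism instead of by row count, and the `finally` ensures the connection is released even if a write fails mid-partition.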

When running make check on Mesos one of the tests fails, what now?

南楼画角 · submitted on 2019-12-23 13:08:42
Question: After running make check when building Mesos, I found that one of the tests is failing. How can I find out more about the reason behind that failure? Answer 1: Note that make check needs to be run before the following can be used, as make check builds the needed binaries. The following assumes that your current directory (pwd) is the build folder within the extracted/cloned Mesos project directory structure. Let's assume that a test named Foo.Bar failed for you. Now go ahead and run that…
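A sketch of the commands the answer is heading toward. Mesos tests are GoogleTest binaries, so the standard gtest flags apply; the `./src/mesos-tests` path assumes an autotools build directory laid out as the answer describes:

```shell
# Re-run just the failing test, with verbose output:
./src/mesos-tests --gtest_filter="Foo.Bar" --verbose

# Alternatively, let make drive it; the check target honors GTEST_FILTER:
make check GTEST_FILTER="Foo.Bar"

# For a flaky test, repeat it and stop on the first failure:
./src/mesos-tests --gtest_filter="Foo.Bar" --gtest_repeat=100 \
    --gtest_break_on_failure
```

`--gtest_filter` accepts wildcards (e.g. `"Foo.*"`) if you want every test in the suite rather than a single case.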

Can't connect to Cassandra container via HAProxy

对着背影说爱祢 · submitted on 2019-12-23 02:43:04
Question: I am trying to connect an external app to Cassandra, which is running dockerized on a Mesos cluster. These are the apps I have running on Mesos: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 137760ce852a cassandra:latest "/docker-entrypoint.s" 15 minutes ago Up 15 minutes 7000-7001/tcp, 7199/tcp, 9160/tcp, 0.0.0.0:31634->9042/tcp mesos-1b65f33a-3d36-4bf4-8a77-32077d8d234a-S1.0db174cc-2e0c-4790-9cd7-1f142d08c6e2 fec5fc93ccfd cassandra:latest "/docker-entrypoint.s" 22 minutes ago Up…
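For this kind of setup, a minimal HAProxy fragment might look like the following; the backend address is a placeholder for the agent running the first container, and 31634 is the mapped native-protocol port from the `docker ps` output above. Cassandra's native protocol is plain TCP, so the listener must use `mode tcp`, not the default HTTP mode:

```
listen cassandra
    bind *:9042
    mode tcp
    server node1 <agent-s1-ip>:31634 check
```

A common follow-up problem with proxying Cassandra: drivers discover peer addresses from the cluster itself and may then connect to those addresses directly, bypassing the proxy, so the advertised (broadcast) addresses must also be reachable from the client.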

Spark Mesos Dispatcher

守給你的承諾、 · submitted on 2019-12-23 02:01:45
Question: My team is deploying a new big-data architecture on Amazon's cloud. We have Mesos up and running Spark jobs. We submit Spark jobs (i.e., jars) from a bastion host inside the same cluster. In this setup, however, the bastion host is the driver program, and this is called client mode (if I understood correctly). We would like to try cluster mode, but we don't understand where to start the dispatcher process. The documentation says to start it in the cluster, but I'm confused since our…
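For orientation, the dispatcher (MesosClusterDispatcher) is a standalone daemon shipped in Spark's `sbin/`; you start it on any node that can reach the Mesos master, then point spark-submit at the dispatcher instead of at Mesos. A sketch, with placeholder hostnames, class name, and jar URL:

```shell
# On a node inside the cluster: start the dispatcher, pointing it at
# the Mesos master (ZooKeeper URL here is a placeholder). It listens
# on port 7077 by default.
./sbin/start-mesos-dispatcher.sh \
    --master mesos://zk://zk-host:2181/mesos

# From the bastion: submit in cluster mode, targeting the dispatcher.
# In cluster mode the jar must be at a URL the cluster can fetch,
# not a path local to the bastion.
./bin/spark-submit \
    --deploy-mode cluster \
    --master mesos://dispatcher-host:7077 \
    --class com.example.Main \
    http://repo.example.com/app.jar
```

With this in place the driver runs inside the cluster (launched by the dispatcher as a Mesos task), and the bastion only performs the submission.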