mesos

New to Mesos/Marathon. How to deploy a new self-defined Docker container?

狂风中的少年 submitted on 2019-12-23 01:40:07
Question: I am new to Mesos and Marathon. I have a setup where one Docker container is self-defined and the other is a MySQL server instance. The two are linked and pass information between them. How do I deploy this on Mesos? I am currently using a single-node master and slave setup. Answer 1: To link your Docker containers, use Mesos-DNS. I'm using Playa Mesos in the following to explain the setup. Setting up Mesos-DNS on Playa is straightforward: use the mesosphere/mesos-dns image and deploy it on Marathon using the …
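A Marathon app definition for the mesosphere/mesos-dns image might look like the sketch below. The resource sizes, the config-file host path, and the binary path inside the image are illustrative assumptions; check the image's documentation for the exact entrypoint:

```json
{
  "id": "mesos-dns",
  "cpus": 0.5,
  "mem": 128,
  "instances": 1,
  "cmd": "/mesos-dns -config=/config.json",
  "container": {
    "type": "DOCKER",
    "docker": { "image": "mesosphere/mesos-dns", "network": "HOST" },
    "volumes": [
      { "containerPath": "/config.json", "hostPath": "/etc/mesos-dns/config.json", "mode": "RO" }
    ]
  }
}
```

Host networking keeps the DNS port reachable from the other containers, which can then resolve each other as `<app>.<framework>.mesos` names instead of being hand-linked.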

How to run a one-off task with Apache Mesos/Marathon?

心不动则不痛 submitted on 2019-12-21 09:19:34
Question: I'm trying to run a one-off task with Marathon. I'm able to get the task container running, but after the task command completes, Marathon runs another task, and so on. How can I prevent Marathon from running more than one task/command? Or, if this is not possible with Marathon, how can I achieve the desired behaviour? Answer 1: As a hack, you can kill the Marathon task at the end, as suggested here: https://github.com/mesosphere/marathon/issues/344#issuecomment-86697361 As rukletsov already …
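The "kill it at the end" hack can be wired into the app definition itself: run the job, then have the command delete its own app through Marathon's REST API. A sketch, where the Marathon host and the job script are placeholders you would substitute:

```json
{
  "id": "one-off-job",
  "cpus": 0.1,
  "mem": 64,
  "instances": 1,
  "cmd": "./run-my-job.sh && curl -X DELETE http://marathon.example.com:8080/v2/apps/one-off-job"
}
```

Since Marathon is built around long-running services, a scheduler designed for finite jobs (such as Chronos in the Mesos ecosystem) is usually a cleaner fit than this self-deletion trick.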

DCOS cluster resource allocation is NP-hard

时光总嘲笑我的痴心妄想 submitted on 2019-12-20 04:57:14
Question: The DCOS documentation states that "Deciding where to run processes to best utilize cluster resources is hard, NP-hard in-fact." I don't deny that that sounds right, but is there a proof somewhere? Answer 1: Best utilization of resources is a variation of the bin packing problem: objects of different volumes must be packed into a finite number of bins or containers, each of volume V, in a way that minimizes the number of bins used. In computational complexity theory …
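The reduction is direct: tasks are the items, an agent's resources are a bin's capacity, and minimizing the number of agents used is exactly bin packing, so an optimal placement is NP-hard. In practice schedulers fall back on greedy heuristics. A minimal one-dimensional sketch of first-fit decreasing in Python (real schedulers pack multiple dimensions: CPU, memory, disk, ports):

```python
def first_fit_decreasing(items, capacity):
    """Greedy bin-packing heuristic: place each item, largest first,
    into the first bin that still has room; open a new bin when none fits.
    Not optimal, but provably within a small constant factor of optimal."""
    bins = []
    for size in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:  # no existing bin had room
            bins.append([size])
    return bins

packed = first_fit_decreasing([5, 4, 3, 2, 2], capacity=8)
print(len(packed))  # -> 2, e.g. [[5, 3], [4, 2, 2]]
```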

Setup Mesos-DNS dockerized on a mesos cluster

烂漫一生 submitted on 2019-12-19 10:16:41
Question: I'm facing some trouble trying to run mesos-dns dockerized on a Mesos cluster. I've set up 2 virtual machines running Ubuntu Trusty on a Windows 8.1 host. My VMs are called docker-vm and docker-sl-vm; the first runs mesos-master and the second runs mesos-slave. The VMs have 2 network cards: one running NAT for accessing the internet through the host, and the other a host-only adapter for internal communication. The IPs for the VMs are: 192.168.56.101 for docker-vm, 192.168.56.102 for …
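With the master on the host-only address given in the question, a mesos-dns config.json along these lines would point the resolver at the cluster. The refresh/TTL values and the upstream resolver are illustrative defaults, not requirements:

```json
{
  "zk": "zk://192.168.56.101:2181/mesos",
  "masters": ["192.168.56.101:5050"],
  "refreshSeconds": 60,
  "ttl": 60,
  "domain": "mesos",
  "port": 53,
  "resolvers": ["8.8.8.8"],
  "timeout": 5
}
```

The agents then need 192.168.56.x (wherever mesos-dns runs) listed first in their resolv.conf so `.mesos` names resolve before falling through to the upstream resolver.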

Transport Endpoint Not Connected - Mesos Slave / Master

匆匆过客 submitted on 2019-12-18 18:47:31
Question: I'm trying to connect a Mesos slave to its master. Whenever the slave tries to connect to the master, I get the following message: I0806 16:39:59.090845 935 hierarchical.hpp:528] Added slave 20150806-163941-1027506442-5050-921-S3 (debian) with cpus(*):1; mem(*):1938; disk(*):3777; ports(*):[31000-32000] (allocated: ) E0806 16:39:59.091384 940 socket.hpp:107] Shutdown failed on fd=25: Transport endpoint is not connected [107] I0806 16:39:59.091508 940 master.cpp:3395] Registered slave 20150806 …
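A frequent cause of this error is the agent registering with an address (often a loopback or NAT interface) that the master cannot connect back to, so the master's reply socket fails. Pinning the bind addresses explicitly on both daemons is a common fix; the addresses below are examples only, substitute the interfaces the two hosts can actually reach each other on:

```
# On the master host (example address)
mesos-master --ip=192.168.0.10 --work_dir=/var/lib/mesos

# On the agent host (example addresses)
mesos-slave --master=192.168.0.10:5050 --ip=192.168.0.11 --work_dir=/var/lib/mesos
```

If the daemons must bind to one interface but be reached on another, the `LIBPROCESS_IP` environment variable controls which address they advertise.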

Can Mesos 'master' and 'slave' nodes be deployed on the same machines?

≯℡__Kan透↙ submitted on 2019-12-18 10:26:12
Question: Can Apache Mesos 'master' nodes be co-located on the same machine as Mesos 'slave' nodes? Similarly (for high-availability (HA) deployments), can the Apache ZooKeeper nodes used in Mesos 'master' election be deployed on the same machines as Mesos 'slave' nodes? Mesos recommends 3 'masters' for HA deployments, and ZooKeeper recommends 5 nodes for its quorum election system. It would be nice to have these services running alongside Mesos 'slave' processes instead of committing 8 …
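Nothing in Mesos prevents co-location: the master and agent are separate daemons that listen on different default ports (5050 and 5051), so they coexist on one host as long as their work directories do not collide. A sketch of both running on the same machine, with paths and the ZooKeeper/quorum values as placeholder assumptions for a 3-master setup:

```
# Master and agent daemons on one host (illustrative flags)
mesos-master --zk=zk://localhost:2181/mesos --quorum=2 \
             --work_dir=/var/lib/mesos/master --port=5050

mesos-slave  --master=zk://localhost:2181/mesos \
             --work_dir=/var/lib/mesos/agent --port=5051
```

The operational caveat is resource contention: a busy agent can starve the co-located master or ZooKeeper of CPU and I/O, which is why larger production clusters usually keep them on dedicated machines.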

Can Apache Spark run without Hadoop?

♀尐吖头ヾ submitted on 2019-12-17 21:39:50
Question: Are there any dependencies between Spark and Hadoop? If not, are there any features I'll miss when I run Spark without Hadoop? Answer 1: Spark can run without Hadoop, but some of its functionality relies on Hadoop's code (e.g. handling of Parquet files). We're running Spark on Mesos and S3, which was a little tricky to set up but works really well once done (you can read a summary of what was needed to set it up properly here). (Edit) Note: since version 2.3.0 Spark has also added native support for …
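The "Mesos and S3" combination needs no Hadoop cluster, only the Hadoop client-side filesystem code that Spark bundles, plus S3 credentials. A hedged spark-defaults.conf sketch; the ZooKeeper host, distribution URL, and keys are placeholders:

```
spark.master                    mesos://zk://zk-host:2181/mesos
spark.executor.uri              https://example.com/spark-1.6.0-bin-hadoop2.6.tgz
spark.hadoop.fs.s3a.access.key  YOUR_ACCESS_KEY
spark.hadoop.fs.s3a.secret.key  YOUR_SECRET_KEY
```

With this in place, jobs can read and write `s3a://bucket/path` URIs directly, and the only "Hadoop" involved is the filesystem client library on the classpath.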

Jupyter Notebook Python, Scala, R, Spark, Mesos

怎甘沉沦 submitted on 2019-12-17 19:51:39
Running Jupyter/Spark/Mesos services in Docker. Source [English]: https://github.com/jupyter/docker-stacks/tree/master/all-spark-notebook Spark on Docker: built on the Jupyter Notebook Python, Scala, R, Spark, Mesos stack, it provides a web interface for remote model authoring and task editing, using the well-known IPython Notebook format, clean and friendly. Integrated software: Jupyter Notebook 4.2.x; Conda Python 3.x and Python 2.7.x environments; Conda R 3.2.x environment; Scala 2.10.x; pyspark, pandas, matplotlib, scipy, seaborn, and scikit-learn pre-installed in the Python environment; ggplot2 and rcurl pre-installed in the R environment; Spark 1.6.0, running in local mode or connecting to a cluster of Spark workers; a Mesos client 0.22 binary that can communicate with a Mesos master; unprivileged user jovyan (uid …
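The image is typically started with a single docker run command, exposing the notebook server's port; the flags below follow the usual docker-stacks conventions, so verify against the linked README:

```
docker run -d -p 8888:8888 jupyter/all-spark-notebook
```

The notebook UI is then reachable at http://localhost:8888, and Spark runs in local mode inside the container unless you point it at a Mesos master.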

How to pre-package external libraries when using Spark on a Mesos cluster

∥☆過路亽.° submitted on 2019-12-17 19:38:51
Question: According to the Spark on Mesos docs, one needs to set spark.executor.uri pointing to a Spark distribution: val conf = new SparkConf() .setMaster("mesos://HOST:5050") .setAppName("My app") .set("spark.executor.uri", "<path to spark-1.4.1.tar.gz uploaded above>") The docs also note that one can build a custom version of the Spark distribution. My question now is whether it is possible/desirable to pre-package external libraries such as spark-streaming-kafka, elasticsearch-spark, spark-csv …
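An alternative to baking the libraries into a custom distribution is to resolve them per job from Maven at submit time. A sketch, with the master address and artifact coordinates as illustrative examples to adjust for your Scala and Spark versions:

```
spark-submit --master mesos://HOST:5050 \
  --packages org.apache.spark:spark-streaming-kafka_2.10:1.4.1 \
  my-app.jar
```

Pre-packaging into the spark.executor.uri tarball avoids the per-job download, at the cost of rebuilding and re-uploading the distribution whenever a dependency changes.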