mesosphere

Mesos cannot deploy container from private Docker registry

Deadly submitted on 2019-12-06 15:24:21
I have a private Docker registry that is accessible at https://docker.somedomain.com (over standard port 443, not 5000). My infrastructure includes a Mesosphere setup with the Docker containerizer enabled. I am trying to deploy a specific container to a Mesos slave via Marathon; however, this always fails, with Mesos failing the task almost immediately and no data in the stderr and stdout of that sandbox. I tried deploying an image from the standard Docker Registry and it appears to work fine. I'm having trouble figuring out what is wrong. My private Docker registry does not require
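With the Docker containerizer, a common cause of an immediate, silent task failure is that the Mesos slave has no credentials for the private registry, so `docker pull` fails before anything reaches the sandbox logs. Marathon can fetch a `.dockercfg` file into the task sandbox via the app's `uris` field so the pull can authenticate. A minimal sketch of what such a file contains; the username, password, and email below are placeholders, not values from the question:

```python
import base64
import json

# Build a minimal legacy .dockercfg payload for a private registry.
# Docker expects the "auth" field to be base64("username:password").
registry = "https://docker.somedomain.com"
username, password = "deployer", "s3cret"  # placeholder credentials

auth = base64.b64encode(f"{username}:{password}".encode()).decode()
dockercfg = {registry: {"auth": auth, "email": "ops@example.com"}}

# Mesos places this file in the task sandbox before `docker pull` runs.
print(json.dumps(dockercfg, indent=2))
```

The resulting file is typically hosted at an HTTPS or S3 URL and referenced from the Marathon app definition, e.g. `"uris": ["https://docker.somedomain.com/.dockercfg"]` (a placeholder location; some Mesos versions expect it packaged as a `docker.tar.gz` instead).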

Spark shell connect to Mesos hangs: No credentials provided. Attempting to register without authentication

╄→гoц情女王★ submitted on 2019-12-05 01:06:37
I installed Mesos in an OpenStack environment using these instructions from Mesosphere: https://open.mesosphere.com/getting-started/datacenter/install/ . I ran the verification test as described and it was successful. The UI for both Mesos and Marathon is working as expected. When I run the Spark shell from my laptop I cannot connect. The shell hangs with the output below. I don't see anything in the Mesos master or slave logs that would indicate an error, so I am not sure what to investigate next. Any help would be appreciated. TOMWATER-M-60SN:bin tomwater$ ./spark-shell --master mesos://zk://10
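The "No credentials provided. Attempting to register without authentication" line is informational: it only means the driver is registering with Mesos without framework authentication, which is normal unless the master was started with authentication required. If the master does require credentials, Spark can supply them through its Mesos properties; and a hang at registration is more often connectivity, since the master and executors must be able to reach the driver back over the network. A hedged spark-defaults.conf sketch, with all values placeholders:

```
# spark-defaults.conf (fragment) -- only needed if the Mesos master
# requires framework authentication.
spark.mesos.principal   spark-framework
spark.mesos.secret      changeme

# A hang at registration is frequently a reachability problem: the
# cluster must be able to connect back to the driver at this address.
spark.driver.host       10.0.0.5
spark.driver.port       37001
```

Running the shell from a laptop outside the OpenStack network is a classic trigger for this symptom, because the advertised driver address is not routable from the slaves.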

marathon-lb health check failing on all spray.io containers

馋奶兔 submitted on 2019-12-04 19:17:23
I'm running DC/OS 1.7 with marathon-lb. spray.io 1.3.3 is returning 400 to all marathon-lb/HAProxy health check calls: request has a relative URI and is missing a Host header, so marathon-lb never routes any requests to the service. The health check in the marathon json is: "healthChecks": [ { "path": "/health", "protocol": "HTTP", "portIndex": 0, "gracePeriodSeconds": 10, "intervalSeconds": 2, "timeoutSeconds": 10, "maxConsecutiveFailures": 10, "ignoreHttp1xx": false } ], and the logging by spray.io in the docker container is: [WARN] [08/19/2016 23:53:42.534] [asp-service-akka.actor.default
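Spray enforces the HTTP/1.1 rule that every request must carry a Host header, while HAProxy's default httpchk probe sends a bare request without one. One workaround is to override the backend health check options through a marathon-lb app label; a hedged sketch (the label index, path, and host value are placeholders, and the exact option syntax should be checked against your marathon-lb version):

```
"labels": {
  "HAPROXY_GROUP": "external",
  "HAPROXY_0_BACKEND_HTTP_HEALTHCHECK_OPTIONS":
    "  option httpchk GET /health HTTP/1.1\\r\\nHost:\\ myapp.example.com\n"
}
```

The `\r\nHost:\ ...` idiom is standard HAProxy configuration syntax for appending headers to the httpchk request line; the backslashes are doubled here because the label value is JSON.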

Accessing HDFS HA from spark job (UnknownHostException error)

烈酒焚心 submitted on 2019-12-04 12:17:31
Question: I have an Apache Mesos 0.22.1 cluster (3 masters & 5 slaves), running Cloudera HDFS (2.5.0-cdh5.3.1) in an HA configuration and the Spark 1.5.1 framework. When I try to spark-submit the compiled HdfsTest.scala example app (from the Spark 1.5.1 sources), it fails with a java.lang.IllegalArgumentException: java.net.UnknownHostException: hdfs error in the executor logs. This error is only observed when I pass the HDFS HA path as an argument, hdfs://hdfs/<file> ; when I pass hdfs://namenode1.hdfs.mesos:50071/tesfile -
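UnknownHostException: hdfs usually means the executors are resolving the logical HA nameservice "hdfs" as if it were a real hostname, because they lack the HDFS client configuration. The fix is to make the same hdfs-site.xml available to the executors (e.g. via HADOOP_CONF_DIR in the executor environment or by shipping it with spark.files). A hedged sketch of the relevant properties; namenode2 and the namenode list are assumptions for illustration, only namenode1 appears in the question:

```xml
<!-- hdfs-site.xml (fragment): make the logical nameservice "hdfs" resolvable -->
<property><name>dfs.nameservices</name><value>hdfs</value></property>
<property><name>dfs.ha.namenodes.hdfs</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.hdfs.nn1</name>
  <value>namenode1.hdfs.mesos:50071</value></property>
<property><name>dfs.namenode.rpc-address.hdfs.nn2</name>
  <value>namenode2.hdfs.mesos:50071</value></property>
<property><name>dfs.client.failover.proxy.provider.hdfs</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
```

With these properties on the client side, `hdfs://hdfs/<file>` is resolved through the failover proxy provider instead of DNS, which is why the direct `namenode1...` URI works while the logical one fails.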

Does Apache Mesos recognize GPU cores?

佐手、 submitted on 2019-12-04 07:58:08
In slide 25 of this talk by Twitter's Head of Open Source office, the presenter says that Mesos allows one to track and manage even GPU (I assume he meant GPGPU) resources. But I can't find any information on this anywhere else. Can someone please help? Besides Mesos, are there other cluster managers that support GPGPU? Mesos does not yet provide direct support for (GP)GPUs, but does support custom resource types. If you specify --resources="gpu(*):8" when starting the mesos-slave, then this will become part of the resource offer to frameworks, which can launch tasks that claim to use these
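The --resources flag uses Mesos's scalar resource string format: name(role):value entries separated by semicolons. As an illustration of that format (this is not Mesos code), a small parser:

```python
import re

def parse_resources(spec: str) -> dict:
    """Parse a Mesos-style scalar resource string such as
    "gpu(*):8;cpus(*):4" into {"gpu": 8.0, "cpus": 4.0}.
    Only scalar resources are handled in this sketch."""
    resources = {}
    for entry in spec.split(";"):
        m = re.fullmatch(r"\s*(\w+)\(([^)]*)\):([\d.]+)\s*", entry)
        if m:
            name, _role, value = m.groups()
            resources[name] = float(value)
    return resources

# The custom "gpu" resource from the answer, offered alongside CPUs.
print(parse_resources("gpu(*):8;cpus(*):4"))
```

Because "gpu" declared this way is just an opaque scalar, Mesos only does the bookkeeping (subtracting what tasks claim); actual GPU isolation is left to the framework and executor.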

How should a .dockercfg file be hosted in a Mesosphere-on-AWS setup so that only Mesosphere can use it?

百般思念 submitted on 2019-12-04 07:00:20
We have set up a test cluster with Mesosphere on AWS, in a private VPC. We have some Docker images which are public, which are easy enough to deploy. However, most of our services are private images, hosted on the Docker Hub private plan, and require authentication to access. Mesosphere is capable of private registry authentication, but it achieves this in a not-exactly-ideal way: an HTTPS URI to a .dockercfg file needs to be specified in all Mesos/Marathon task definitions. As the title suggests, the question is basically: how should the .dockercfg file be hosted within AWS so that access may
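One common pattern is to keep the .dockercfg in a private S3 bucket that is reachable only from inside the VPC (restricted by bucket policy or by an IAM role on the slave instances) and reference it from each app definition. A hedged Marathon fragment; the app id, image, and bucket names are placeholders:

```
{
  "id": "/myservice",
  "container": {
    "type": "DOCKER",
    "docker": { "image": "myorg/myservice:latest" }
  },
  "uris": ["https://s3.amazonaws.com/my-private-bucket/.dockercfg"]
}
```

The Mesos fetcher downloads everything in "uris" into the task sandbox before launch, so the slaves themselves need network access to the bucket, but the credentials never have to live on a publicly reachable URL.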

Is HDFS necessary for Spark workloads?

我的未来我决定 submitted on 2019-11-30 22:29:05
HDFS is not necessary but recommendations appear in some places. To help evaluate the effort spent in getting HDFS running: what are the benefits of using HDFS for Spark workloads? Spark is a distributed processing engine and HDFS is a distributed storage system. If HDFS is not an option, then Spark has to use some other alternative, such as Apache Cassandra or Amazon S3. Have a look at this comparison: S3 – non-urgent batch jobs. S3 fits very specific use cases, when data locality isn't critical. Cassandra – perfect for streaming data analysis and overkill for batch jobs. HDFS – Great
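If S3 stands in for HDFS, Spark itself needs no storage daemons, only the s3a connector configured. A hedged spark-defaults.conf sketch; the keys and bucket are placeholders, and the hadoop-aws dependency version must match the Hadoop build Spark was compiled against:

```
# spark-defaults.conf (fragment): read/write s3a:// paths instead of hdfs://
spark.hadoop.fs.s3a.access.key   AKIAPLACEHOLDER
spark.hadoop.fs.s3a.secret.key   PLACEHOLDER
# jobs then address data directly, e.g. spark.read.text("s3a://my-bucket/input.txt")
```

The trade-off named above applies here: s3a gives durability and elasticity but no data locality, so shuffle-heavy or scan-heavy jobs may run slower than against a co-located HDFS.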

Can Mesos 'master' and 'slave' nodes be deployed on the same machines?

旧时模样 submitted on 2019-11-29 21:05:09
Can Apache Mesos 'master' nodes be co-located on the same machine as Mesos 'slave' nodes? Similarly (for high-availability (HA) deploys), can the Apache ZooKeeper nodes used in Mesos 'master' election be deployed on the same machines as Mesos 'slave' nodes? Mesos recommends 3 'masters' be used for HA deploys, and ZooKeeper recommends 5 nodes be used for its quorum election system. It would be nice to have these services running alongside Mesos 'slave' processes instead of committing 8 machines to effectively 'non-productive' tasks. If such a setup is feasible, what are the pros/cons of such a
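Nothing in Mesos prevents colocating the two processes; they listen on different default ports (5050 for the master, 5051 for the slave) and only need distinct work directories. A hedged sketch of starting both on one host; the ZooKeeper addresses and paths are placeholders:

```
# On each of three hosts: a master (quorum 2 of 3) and a slave side by side.
mesos-master --zk=zk://zk1:2181,zk2:2181,zk3:2181/mesos \
             --quorum=2 --work_dir=/var/lib/mesos/master &

mesos-slave  --master=zk://zk1:2181,zk2:2181,zk3:2181/mesos \
             --work_dir=/var/lib/mesos/slave &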