mesos

Spark shell connecting to Mesos hangs: No credentials provided. Attempting to register without authentication

情到浓时终转凉″ submitted on 2019-12-10 01:50:23
Question: I installed Mesos in an OpenStack environment using these instructions from Mesosphere: https://open.mesosphere.com/getting-started/datacenter/install/. I ran the verification test as described and it was successful. The UIs for both Mesos and Marathon are working as expected. But when I run the Spark shell from my laptop, I cannot connect: the shell hangs with the output below. I don't see anything in the Mesos master or slave logs that would indicate an error, so I am not sure what to investigate next.
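The "No credentials provided. Attempting to register without authentication" line is normally informational rather than fatal; a hang at this point often means the master cannot reach back to the driver on the laptop. As a first troubleshooting step, a minimal sketch (with placeholder addresses) that confirms outbound reachability of the master's port 5050 from the machine running the shell:

```python
# Sketch: check that the Mesos master's port is reachable from this machine.
# The host below is a placeholder; substitute your master's address.
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# can_reach("mesos-master-ip", 5050)
```

If outbound connectivity is fine, a commonly reported culprit is the driver advertising an address the cluster cannot route back to (e.g. the `LIBPROCESS_IP` environment variable on the driver side), which is worth checking next.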

Next-generation cloud computing platform Apache Mesos: building your own PaaS (application deployment + load balancing + service discovery)

白昼怎懂夜的黑 submitted on 2019-12-09 23:17:25
Continuing from the previous post, "Next-generation cloud computing platform Apache Mesos: deploying applications with Marathon". A simple PaaS (Platform as a Service) should be able to deploy applications, adjust the number of instances, restart applications, and suspend applications (all provided by Marathon), plus load balancing and service discovery. This post mainly demonstrates load balancing and service discovery.

1 Deploying a Docker application to Marathon
1.1 Publishing a Docker image to the Marathon platform
1.1.1 Write Docker.json

{
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "192.168.1.103:5000/tomcat",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  },
  "id": "tomcat",
  "instances": 3,
  "cpus": 0.5,
  "mem": 512,
  "uris": [],
  "cmd": "/opt/tomcat/bin/deploy-and-run.sh"
}

1.1.2 Deploy via the Marathon API

curl -X POST -H "Content-Type: application/json" http://192.168.1.110
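For reference, the same deployment can be sketched with Python's standard library instead of curl. The /v2/apps route is Marathon's documented app-creation endpoint; the curl command above is truncated, so the exact URL and the port 8080 are assumptions here:

```python
# Sketch: build the Docker.json app definition shown above and prepare the
# POST that the truncated curl command issues. Host/port are assumptions.
import json
import urllib.request

def tomcat_app(registry: str = "192.168.1.103:5000") -> dict:
    """The app definition from Docker.json as a Python dict."""
    return {
        "id": "tomcat",
        "instances": 3,
        "cpus": 0.5,
        "mem": 512,
        "uris": [],
        "cmd": "/opt/tomcat/bin/deploy-and-run.sh",
        "container": {
            "type": "DOCKER",
            "docker": {
                "image": f"{registry}/tomcat",
                "network": "BRIDGE",
                "portMappings": [
                    {"containerPort": 8080, "hostPort": 0, "protocol": "tcp"}
                ],
            },
        },
    }

def deploy(marathon: str, app: dict) -> urllib.request.Request:
    """Prepare the POST to Marathon's app-creation endpoint."""
    return urllib.request.Request(
        f"http://{marathon}/v2/apps",
        data=json.dumps(app).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# urllib.request.urlopen(deploy("192.168.1.110:8080", tomcat_app()))  # send it
```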

Mesos 1.1.1 Release Notes

蓝咒 submitted on 2019-12-09 17:43:00
Release Notes - Mesos - Version 1.1.1 (WIP)

This is a bug fix release.

Release Notes - Mesos - Version 1.1.0

This release contains the following new features:

[MESOS-2449] - Experimental support for launching a group of tasks via a new LAUNCH_GROUP Offer operation. Mesos will guarantee that either all tasks or none of the tasks in the group are delivered to the executor. Executors receive the task group via a new LAUNCH_GROUP event.

[MESOS-2533] - Experimental support for HTTP and HTTPS health checks. Executors may now use the updated HealthCheck protobuf to implement HTTP(S) health checks.

How to test whether mesos-dns is working (the mesos-dns service)

亡梦爱人 submitted on 2019-12-09 16:23:34
1) Start mesos-dns in Marathon.
2) Start an nginx Docker container, which listens on port 80 by default.

nginx's docker.json:

[root@centos7 mywork]# cat docker_nginx.json
{
  "id": "nginx",
  "cpus": 0.2,
  "mem": 20.0,
  "instances": 2,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "servicePort": 0, "protocol": "tcp" }
      ]
    }
  }
}

You can build the Docker image yourself or use the official one. Note that the network mode is bridge. Now deploy it with Marathon:

curl -X POST http://marathon_host:8080/v2/apps -d @docker_nginx.json -H "Content-type: application/json"

Here marathon_host is the IP address of the Marathon host. Once the container has started successfully, you can inspect its launch parameters:

[root@centos7 mywork]#
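By default, mesos-dns exposes each Marathon app under the name <app-id>.marathon.<domain>, so the app above should become resolvable as nginx.marathon.mesos (assuming the default "mesos" domain). A minimal sketch of building that name:

```python
# Sketch: construct the DNS name mesos-dns generates for a Marathon app,
# assuming the default framework name ("marathon") and domain ("mesos").
def mesos_dns_name(app_id: str, framework: str = "marathon",
                   domain: str = "mesos") -> str:
    """Map a Marathon app id (with or without leading slash) to its DNS name."""
    return f"{app_id.strip('/')}.{framework}.{domain}"

# With the host's resolver pointed at the mesos-dns server:
# socket.gethostbyname(mesos_dns_name("nginx"))  # resolves via mesos-dns
```

Querying that name (e.g. with dig against the mesos-dns server) should return the IPs of the agents running the two nginx instances, which is a quick way to verify mesos-dns is working.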

Can't connect to Cassandra container via haproxy

不羁岁月 submitted on 2019-12-08 19:43:33
I am trying to connect an external app to Cassandra, which is running dockerized on a Mesos cluster. These are the apps I have running on Mesos:

CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS                                                        NAMES
137760ce852a   cassandra:latest   "/docker-entrypoint.s"   15 minutes ago   Up 15 minutes   7000-7001/tcp, 7199/tcp, 9160/tcp, 0.0.0.0:31634->9042/tcp   mesos-1b65f33a-3d36-4bf4-8a77-32077d8d234a-S1.0db174cc-2e0c-4790-9cd7-1f142d08c6e2
fec5fc93ccfd   cassandra:latest   "/docker-entrypoint.s"   22 minutes ago   Up 22 minutes   7000-7001/tcp, 7199/tcp, 9160/tcp, 0.0.0.0:31551->9042/tcp   mesos-1b65f33a-3d36-4bf4-8a77
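Note that clients (and any haproxy backend) must target the host port Docker randomly mapped to 9042 on each agent — 31634 and 31551 in the listing above — not 9042 itself. A small sketch of pulling that port out of the docker-ps PORTS column:

```python
# Sketch: extract the host port mapped to a given container port from a
# docker-ps PORTS string such as "0.0.0.0:31634->9042/tcp".
import re
from typing import Optional

def host_port_for(ports: str, container_port: int) -> Optional[int]:
    """Return the host port mapped to container_port, or None if unmapped."""
    m = re.search(rf":(\d+)->{container_port}/tcp", ports)
    return int(m.group(1)) if m else None
```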

How to define an HTTP health check in a consul container for a service on the same host?

眉间皱痕 submitted on 2019-12-08 13:22:36
We are using a consul agent on a host that also runs a service (RabbitMQ). To verify that the service is ready we have defined a curl-based health check. However, we are using Registrator to inject this check via an environment variable: SERVICE_CHECK_SCRIPT=curl hostname:15672/.... The problem is, we've also told the consul agent that its hostname is the same as the host's. (We must have this feature, since we want to see the correct hostname registered with the consul cluster.) When the consul agent runs the health check, it looks for the URL on its own container... this obviously fails... does anybody
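One workaround often used here is to register the check against an explicit host IP rather than a hostname that resolves differently inside the agent's container. A sketch of such a check in Consul's agent check-registration format (the address, check ID, and RabbitMQ management path are placeholders, not taken from the question):

```python
# Sketch: a Consul HTTP check definition that targets an explicit host IP,
# so it resolves the same from inside or outside the consul container.
# The IP and the RabbitMQ management-API path are illustrative placeholders.
def rabbitmq_check(host_ip: str) -> dict:
    """Build a Consul agent HTTP check for RabbitMQ's management port."""
    return {
        "ID": "rabbitmq-mgmt",
        "Name": "RabbitMQ management API",
        "HTTP": f"http://{host_ip}:15672/api/overview",
        "Interval": "10s",
        "Timeout": "3s",
    }
```

Registering it would be a PUT of this JSON to the consul agent's /v1/agent/check/register endpoint.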

A Minimal History of Docker and Kubernetes

北城以北 submitted on 2019-12-08 08:30:37
https://www.cnblogs.com/chenqionghe/p/11454248.html

2013: the Docker project is open-sourced

In 2013, alongside AWS and OpenStack, the open-source PaaS projects, with Cloud Foundry as their flagship, became a breath of fresh air in cloud computing: PaaS offered an "application hosting" capability. Virtual machines and cloud computing were already common technologies at the time; the mainstream practice was to rent a batch of AWS or OpenStack virtual machines and then deploy applications onto them by hand or with scripts. The core component of a PaaS project such as Cloud Foundry was a packaging and distribution mechanism: it invoked the operating system's cgroups and namespace facilities to create an isolated "sandbox" environment for each application, then ran the application's processes inside that sandbox, achieving multi-user, batched, isolated execution. That "sandbox" is what we call a container.

That year, the Docker company, then still named dotCloud, was also part of the PaaS wave. But next to heavyweights like Heroku, Pivotal, and Red Hat, dotCloud looked insignificant, and its flagship product was out of step with the mainstream Cloud Foundry community. On the verge of going under, dotCloud decided to open-source its own container project, Docker. Containers were nothing new and were not invented by Docker; in Cloud Foundry, the hottest PaaS project of the day, the container was merely the lowest-level, least-noticed component.

How to read mesos task stdout/stderr from Mesos framework Scheduler class?

空扰寡人 submitted on 2019-12-08 08:17:37
Question: I am developing a Mesos framework; it is working perfectly fine, my only issue is that I am unable to read a task's stdout or stderr from inside the Scheduler class. I am providing a code sample below. I would like to read the stdout and stderr of a finished task, preferably in the statusUpdate function, but anywhere would be useful. How can I reach that info? I tried getting executorInfo or executorId from the TaskInfo or TaskStatus objects without any luck. If someone can provide a code sample
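The scheduler API itself never delivers task output, but a finished task's stdout and stderr remain in the agent's sandbox, which is readable over the agent's /files/read HTTP endpoint. A sketch of building that request (the agent address and sandbox path below are placeholders; in practice they can be discovered from the master's /state endpoint using the slave ID and executor ID carried in TaskStatus):

```python
# Sketch: build the URL for reading a task's stdout from the Mesos agent's
# /files/read endpoint. Agent address and sandbox path are placeholders that
# would normally be looked up via the master's /state endpoint.
from urllib.parse import urlencode

def sandbox_read_url(agent: str, sandbox: str, which: str = "stdout",
                     offset: int = 0) -> str:
    """URL that returns a JSON chunk of the sandbox file at the given offset."""
    query = urlencode({"path": f"{sandbox}/{which}", "offset": offset})
    return f"http://{agent}/files/read?{query}"

# urllib.request.urlopen(sandbox_read_url("agent-host:5051",
#     "/var/lib/mesos/slaves/<agent-id>/frameworks/<fw-id>/executors/<exec-id>/runs/latest"))
```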

How to auto-launch a new task instance when a mesos-slave is stopped?

假装没事ソ submitted on 2019-12-08 06:20:43
Question: Version info and command-line args:

mesos-master & mesos-slave version 1.1.0
marathon version 1.4.3
docker server version 1.28

mesos-master's command-line args:

--zk=zk://ip1:2181,ip2:2181,ip3:2181/mesos \
--port=5050 \
--log_dir=/var/log/mesos \
--hostname=ip1 \
--quorum=2 \
--work_dir=/var/lib/mesosmaster

mesos-slave's command-line args:

--master=zk://ip1:2181,ip2:2181,ip3:2181/mesos \
--log_dir=/var/log/mesos \
--containerizers=docker,mesos \
--executor_registration_timeout=10mins \
--hostname
