mesos

Apache Chronos Architecture Explanation

孤者浪人 submitted on 2019-12-02 07:40:40
I was trying to see what makes Chronos better than cron, but I am not able to fully understand its job scheduling and execution architecture. Specifically, these are the questions about the Chronos architecture that are not clear to me. In one piece of Chronos documentation I read that since cron has a single point of failure (SPoF), cron is bad and Chronos is better. How does Chronos avoid an SPoF? Where are job schedules saved in Chronos? Does it maintain some sort of DB for that? How are scheduled jobs triggered, and who sends an event to Chronos to trigger a job? Are dependent jobs triggered by Chronos, and if yes, how does Chronos even
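For concreteness, a dependent job in Chronos is just a job definition whose "parents" field lists other job names. A hedged sketch of such a definition, as it might be POSTed to Chronos's /scheduler/dependency REST endpoint (the job names and command here are invented examples; the field names follow the Chronos docs):

```json
{
  "name": "daily-report",
  "command": "/usr/local/bin/build_report.sh",
  "parents": ["nightly-etl"],
  "owner": "ops@example.com"
}
```

Time-scheduled jobs are posted to /scheduler/iso8601 instead, with an ISO 8601 repeating-interval "schedule" field (e.g. "R/2019-12-02T04:00:00Z/PT24H") in place of "parents".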

Resolving the Mesos Leading Master

别来无恙 submitted on 2019-12-01 19:42:34
Question: We're using Mesos to run jobs on a cluster. We're using haproxy to point, e.g., mesos.seanmcl.com to a Mesos Master. If that Master happens not to be the leader, the UI will redirect the browser, after a delay, to the leader so you can see the running jobs. For various reasons (UI speed, avoiding ports blocked by a firewall), I'd really like to discover the host with the leader programmatically. I cannot figure out how to do this. I grepped around in the ZooKeeper files for Mesos, but only
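One programmatic route, assuming the master's HTTP endpoint is reachable: any master's /master/state response carries a "leader" field of the form "master@IP:PORT", so the leader can be resolved with a short stdlib-only script. The field format is an assumption based on the Mesos HTTP endpoint docs, and mesos.seanmcl.com is simply the host name from the question:

```python
import json
import re
from urllib.request import urlopen  # stdlib only


def parse_leader(state):
    """Extract (host, port) from the 'leader' field of a /master/state JSON body.

    The field is assumed to look like 'master@10.0.0.1:5050'.
    """
    m = re.match(r"master@(.+):(\d+)$", state["leader"])
    if not m:
        raise ValueError("unexpected leader format: %r" % state["leader"])
    return m.group(1), int(m.group(2))


def find_leader(any_master_url):
    """Ask any (possibly non-leading) master for the cluster state
    and return the leading master's (host, port)."""
    with urlopen(any_master_url.rstrip("/") + "/master/state") as resp:
        return parse_leader(json.load(resp))
```

Usage would be e.g. `find_leader("http://mesos.seanmcl.com:5050")`; since every master answers /master/state, it does not matter which one haproxy routes to.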

深入剖析Kubernetes (An In-depth Analysis of Kubernetes)

非 Y 不嫁゛ submitted on 2019-12-01 19:31:49
Without question, Kubernetes has become the de facto standard in the container field. Beyond the years of maneuvering by technology giants such as Google and Microsoft, large Chinese companies like BAT, Didi, Ant, and Toutiao have also made containers and Kubernetes a strategic priority, and countless small and medium enterprises are on the road to containerization. A long but fascinating story. The packaging-and-release phase: before Docker there was the Cloud Foundry PaaS project. With cf push, a user's executable and startup scripts were packed into an archive and uploaded to Cloud Foundry's storage; Cloud Foundry's scheduler would then pick a virtual machine capable of running the application and tell the agent on that machine to download the archive and start it. Because applications from different users had to be started inside the same VM, Cloud Foundry created an isolated environment, called a sandbox, for each customer's application, and started the application processes inside that sandbox. What PaaS mainly provided was a capability called "application hosting". Virtual machine technology matured ==> customers stopped maintaining physical machines themselves and bought VM services instead, paying per use ==> applications had to be deployed to the cloud ==> at deploy time, the cloud VM and the local environment differed. Two lines of thinking emerged: make the cloud VM resemble the local environment as closely as possible; or have the code run in an agreed-upon environment, whether local or in the cloud ==> the essence of the Docker image. As the author of 《尽在双11》 put it, "the most important trait of docker is docker

Setup Mesos-DNS dockerized on a mesos cluster

倖福魔咒の submitted on 2019-12-01 09:43:15
I'm having trouble trying to run mesos-dns dockerized on a Mesos cluster. I've set up 2 virtual machines with Ubuntu Trusty on a Windows 8.1 host. My VMs are called docker-vm and docker-sl-vm; the first one runs mesos-master and the 2nd one runs mesos-slave. The VMs have 2 network cards: one running NAT for accessing the internet through the host, and the other a host-only adapter for internal communication. The IPs for the VMs are: 192.168.56.101 for docker-vm and 192.168.56.102 for docker-sl-vm. The Mesos cluster is running okay. I am trying to follow this tutorial . So, I am running
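For reference, a minimal mesos-dns config.json for this two-VM layout might look like the following. The field names follow the mesos-dns docs; the ZooKeeper path and the 8.8.8.8 fallback resolver are assumptions, while the master address comes from the question:

```json
{
  "zk": "zk://192.168.56.101:2181/mesos",
  "masters": ["192.168.56.101:5050"],
  "refreshSeconds": 60,
  "ttl": 60,
  "domain": "mesos",
  "port": 53,
  "resolvers": ["8.8.8.8"],
  "listener": "0.0.0.0",
  "dnson": true,
  "httpon": true,
  "httpport": 8123
}
```

With a setup like this, the slave VM's resolver configuration would point at whichever host runs the mesos-dns container, so that names like leader.mesos resolve cluster-wide.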

Is HDFS necessary for Spark workloads?

我的未来我决定 submitted on 2019-11-30 22:29:05
HDFS is not necessary, but recommendations appear in some places. To help evaluate the effort spent in getting HDFS running: what are the benefits of using HDFS for Spark workloads? Spark is a distributed processing engine and HDFS is a distributed storage system. If HDFS is not an option, then Spark has to use some other alternative, such as Apache Cassandra or Amazon S3. Have a look at this comparison. S3 – non-urgent batch jobs. S3 fits very specific use cases, where data locality isn't critical. Cassandra – perfect for streaming data analysis and overkill for batch jobs. HDFS – great
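If S3 is chosen instead of HDFS, Spark reads s3a:// paths through the hadoop-aws connector; a minimal sketch of the relevant spark-defaults.conf entries (the package version and the credential values below are placeholders, not real settings):

```properties
# spark-defaults.conf: s3a connector settings (placeholder values)
spark.jars.packages             org.apache.hadoop:hadoop-aws:2.7.3
spark.hadoop.fs.s3a.access.key  AKIA_EXAMPLE_KEY
spark.hadoop.fs.s3a.secret.key  example-secret
```

With these in place, paths like s3a://my-bucket/data can be used wherever an hdfs:// path would otherwise appear, at the cost of losing data locality.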

Spark配置参数详解

邮差的信 submitted on 2019-11-30 22:19:35
Below is a summary of some Spark configuration parameters; for the official documentation, please refer to Spark Configuration. Spark provides three places to configure the system. Spark properties: control most application parameters, and can be set with a SparkConf object or via Java system properties. Environment variables: can be set through the conf/spark-env.sh script on each node, e.g. IP address, port, and so on. Logging configuration: can be set through log4j.properties. Spark properties. Spark properties control most application settings and are configured separately for each application. These properties can be configured directly on a SparkConf and then passed to the SparkContext. SparkConf lets you configure some common properties (such as the master URL and the application name) as well as arbitrary key-value pairs through the set() method. For example, we can create an application with two threads as follows:

    val conf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("CountingSheep")
      .set("spark.executor.memory", "1g")
    val sc = new SparkContext(conf)

Dynamically loading Spark properties. In some cases you may want to avoid hard-coding certain configuration in the SparkConf. For example
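The dynamic-loading alternative that the last paragraph introduces can be sketched as a conf/spark-defaults.conf fragment carrying the same settings as the SparkConf example above, so the application code no longer hard-codes them (the values simply mirror that example):

```properties
# conf/spark-defaults.conf: same settings as the SparkConf example, loaded at submit time
spark.master           local[2]
spark.app.name         CountingSheep
spark.executor.memory  1g
```

spark-submit reads this file by default; settings passed explicitly on the command line (e.g. --conf spark.executor.memory=2g) take precedence over the file, and values set in code on SparkConf take precedence over both.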

Transport Endpoint Not Connected - Mesos Slave / Master

℡╲_俬逩灬. submitted on 2019-11-30 17:13:00
I'm trying to connect a Mesos slave to its master. Whenever the slave tries to connect to the master, I get the following messages:

    I0806 16:39:59.090845 935 hierarchical.hpp:528] Added slave 20150806-163941-1027506442-5050-921-S3 (debian) with cpus(*):1; mem(*):1938; disk(*):3777; ports(*):[31000-32000] (allocated: )
    E0806 16:39:59.091384 940 socket.hpp:107] Shutdown failed on fd=25: Transport endpoint is not connected [107]
    I0806 16:39:59.091508 940 master.cpp:3395] Registered slave 20150806-163941-1027506442-5050-921-S3 at slave(1)@127.0.1.1:5051 (debian) with cpus(*):1; mem(*):1938; disk(*)

Can Mesos 'master' and 'slave' nodes be deployed on the same machines?

旧时模样 submitted on 2019-11-29 21:05:09
Can Apache Mesos 'master' nodes be co-located on the same machines as Mesos 'slave' nodes? Similarly (for high-availability (HA) deployments), can the Apache ZooKeeper nodes used in Mesos 'master' election be deployed on the same machines as Mesos 'slave' nodes? Mesos recommends 3 'masters' for HA deployments, and ZooKeeper recommends 5 nodes for its quorum election system. It would be nice to have these services running alongside the Mesos 'slave' processes instead of committing 8 machines to effectively 'non-productive' tasks. If such a setup is feasible, what are the pros/cons of such a
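As a sketch of what co-location could look like, the ZooKeeper side of a 3-machine layout is just a standard zoo.cfg ensemble listing the same hosts that run the Mesos masters (the hostnames node1..node3 are invented; the ports are ZooKeeper's defaults):

```properties
# zoo.cfg: 3-node ensemble sharing the Mesos master machines
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
```

Under this layout, each of node1..node3 could also run mesos-master and mesos-slave processes, at the cost of the daemons competing for the same CPU, memory, and disk I/O.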

Installing and configuring the Google Pinyin input method on Ubuntu 12.04

心不动则不痛 submitted on 2019-11-29 20:56:36
Preface. It has been a long time since I last wrote an article; with more free time recently, I will certainly pick up the pace of blog updates. Getting to the point: after installing an English-language Linux system (as a developer, I prefer an English locale, since all kinds of messages, especially error messages, are easy to look up on Google), we can install a Chinese input method, both for convenience in everyday use and for typing Chinese into search engines. There are many tutorials online, but some are overly convoluted. This article covers how to set up a Chinese input method on Ubuntu 12.04 installed via Ubuntu's Windows Installer. Setting up Language Support: click System Settings -> Language Support, choose Install/Remove Languages... in the window that appears, select Chinese (simplified) in the new pop-up window, then click Apply Changes and wait for the installation to finish. Setting up IBus: it is worth explaining here that many tutorials never tell ordinary users what IBus actually is. IBus stands for Intelligent Input Bus; it is an input-method framework that allows entering non-English characters. The language-support step above in fact already installed the IBus framework; here we can install some IBus libraries so that certain Chinese input methods work better. Run the following command in a terminal: sudo apt-get install ibus-clutter ibus