coreos

How to specify root volume size of core-os ec2 instance using boto3?

三世轮回 submitted on 2019-12-03 16:10:40
I cannot figure out from the documentation and source code how to define the size of the root device. You can specify N additional block devices using the BlockDeviceMappings section, where you can declare their sizes, but there seems to be no way to set the size of the root volume, so it always creates an instance with a root volume of 8 GB, the default value. Ran into this issue myself today; probably too late for the original poster, but in case anyone else stumbles across this question later, I did the following: import boto3 ec2 = boto3.resource('ec2', region_name='eu-west-1', aws_access_key_id='my-key',
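The answer's snippet is cut off above, but the underlying trick is that BlockDeviceMappings *can* override the root volume if the entry's DeviceName matches the AMI's own root device name. A minimal sketch, assuming the root device is /dev/xvda (this varies by AMI, so check yours; the instance type and region here are placeholders):

```python
def root_volume_mapping(size_gib, device_name="/dev/xvda", volume_type="gp2"):
    """Build a BlockDeviceMappings entry that overrides the ROOT device.

    Using the AMI's root device name makes this mapping resize the root
    volume instead of attaching a second disk.
    """
    return [{
        "DeviceName": device_name,
        "Ebs": {
            "VolumeSize": size_gib,          # size in GiB
            "VolumeType": volume_type,
            "DeleteOnTermination": True,
        },
    }]


def launch(ami_id, size_gib):
    # Requires boto3 and AWS credentials; shown only as a sketch.
    import boto3
    ec2 = boto3.resource("ec2", region_name="eu-west-1")
    return ec2.create_instances(
        ImageId=ami_id,
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        BlockDeviceMappings=root_volume_mapping(size_gib),
    )
```

The mapping builder is separated out so it can be inspected without AWS credentials.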

Vagrant “Authentication failure” during up, but “vagrant ssh” can get in just fine

为君一笑 submitted on 2019-12-03 15:58:12
I'm stumped. I'm trying to run a Vagrant/VirtualBox/CoreOS cluster on Windows 8.1 to develop the cluster for running in the cloud. I've tried this on four machines (all Windows 8.1 with the latest updates, all with the latest VirtualBox, Vagrant, and Git, and all with the same Vagrant config). I'm checking the Vagrant config out of a repo on all four systems, so I'm confident the configs are the same on each. I get two successes and two failures. Two machines succeed like this: Bringing machine 'core-01' up with 'virtualbox' provider... ==> core-01: Checking if box 'coreos-stable' is up to date... (snip)

Kubernetes External Load Balancer Service on DigitalOcean

狂风中的少年 submitted on 2019-12-03 13:08:02
Question: I'm building a container cluster using CoreOS and Kubernetes on DigitalOcean, and I've seen that in order to expose a Pod to the world you have to create a Service with Type: LoadBalancer. I think this is the optimal solution, so that you don't need to add an external load balancer outside Kubernetes such as nginx or haproxy. I was wondering if it is possible to create this using DO's Floating IP. Answer 1: The LoadBalancer type of service is implemented by adding code to the kubernetes master specific

systemd: “Environment” directive to set PATH

你。 submitted on 2019-12-03 10:05:31
What is the right way to set the PATH variable in a systemd unit file? After seeing a few examples, I tried the format below, but the variable doesn't seem to expand. Environment="PATH=/local/bin:$PATH" I am trying this on CoreOS with the version of systemd below. systemd 225 -PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT -GNUTLS -ACL +XZ -LZ4 +SECCOMP +BLKID -ELFUTILS +KMOD -IDN You can't use environment variables in Environment= directives; the whole Environment= line will be ignored. If you use EnvironmentFile=, the specified file will likewise be loaded without substitution.
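Since Environment= performs no variable substitution, the practical workaround is to spell out the full search path literally. A minimal hypothetical unit (service name, binary path, and directory list are placeholders to adapt):

```ini
# /etc/systemd/system/myapp.service (hypothetical example)
[Unit]
Description=Example service with a custom PATH

[Service]
# $PATH would NOT be expanded here, so list every directory explicitly.
Environment="PATH=/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
ExecStart=/local/bin/myapp

[Install]
WantedBy=multi-user.target
```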

Redis sentinel docker image / Dockerfile

可紊 submitted on 2019-12-03 09:48:18
Question: I'm looking to deploy high-availability Redis on a CoreOS cluster, and I need a Redis Sentinel docker image (i.e. Dockerfile) that works. I've gathered enough information/expertise to create one (I think)... but my limited knowledge/experience with advanced networking is the only thing keeping me from building and sharing it. Can someone who is an expert here help me develop a Redis Sentinel Dockerfile (none exist right now)? The Redis/Docker community would really benefit from this. Here
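A hedged sketch of what such an image could look like, since the redis image ships the redis-sentinel binary; the base tag, config path, and the monitored master's name/address are all placeholders, and sentinel announce-ip is the directive that usually resolves the Docker NAT problem the asker alludes to:

```dockerfile
# Hypothetical Redis Sentinel image -- a sketch, not a tested build.
FROM redis:3.2

# sentinel.conf must sit next to this Dockerfile; a minimal one looks like:
#   sentinel monitor mymaster 10.0.0.1 6379 2
#   sentinel announce-ip <host-reachable-ip>   # so peers behind NAT can connect
COPY sentinel.conf /etc/redis/sentinel.conf

EXPOSE 26379
CMD ["redis-sentinel", "/etc/redis/sentinel.conf"]
```

Running with --net=host (or setting announce-ip/announce-port) is generally what makes Sentinel's peer discovery work across Docker hosts.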

docker: having trouble running npm install after creating a new user

巧了我就是萌 submitted on 2019-12-03 07:03:36
So I have another follow-up question regarding installing a Node.js-based framework under Docker on CoreOS, per this post. Because npm is finicky about installing from package.json as root, I've had to create a non-root sudo user in order to install the package. This is what my Dockerfile currently looks like inside our repo, building from an Ubuntu image:

```dockerfile
# Install dependencies and nodejs
RUN apt-get update
RUN apt-get install -y python-software-properties python g++ make
RUN add-apt-repository ppa:chris-lea/node.js
RUN apt-get update
RUN apt-get install -y nodejs

# Install git
RUN
```
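The non-root part the question describes is usually done with a useradd/USER pair. A sketch of how those steps might continue (the user name and paths are hypothetical, not from the original Dockerfile):

```dockerfile
# Sketch: run npm install as a non-root user (names are placeholders).
RUN useradd --create-home app
COPY package.json /home/app/src/package.json
RUN chown -R app:app /home/app/src

USER app
WORKDIR /home/app/src
RUN npm install

# Switch back to root if later build steps need elevated privileges.
USER root
```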

Kubernetes External Load Balancer Service on DigitalOcean

∥☆過路亽.° submitted on 2019-12-03 04:18:29
I'm building a container cluster using CoreOS and Kubernetes on DigitalOcean, and I've seen that in order to expose a Pod to the world you have to create a Service with Type: LoadBalancer. I think this is the optimal solution, so that you don't need to add an external load balancer outside Kubernetes such as nginx or haproxy. I was wondering if it is possible to create this using DO's Floating IP. Robert Bailey: The LoadBalancer type of service is implemented by adding code to the kubernetes master specific to each cloud provider. There isn't a cloud provider for Digital Ocean (supported cloud
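Without a cloud-provider integration, the usual fallback is a NodePort Service, with the Floating IP pointed at a node's public IP. A hypothetical manifest (all names, labels, and ports are placeholders):

```yaml
# Sketch: expose a pod on every node's port 30080, then aim the
# DigitalOcean Floating IP at any node to reach it.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80          # service port inside the cluster
      targetPort: 8080  # container port
      nodePort: 30080   # opened on every node (30000-32767 range by default)
```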

How to auto restart a Docker container after a reboot in CoreOS?

假如想象 submitted on 2019-12-03 02:37:02
Question: Assuming the Docker daemon is restarted automatically by whatever init.d- or systemd-like process when the OS is restarted, what is the preferred way to restart one or more Docker containers? For example, I might have a number of web servers behind a reverse proxy, or a database server. Answer 1: CoreOS uses systemd to manage long-running services: https://coreos.com/os/docs/latest/getting-started-with-systemd.html Answer 2: If you start the daemon with docker -d -r, it will restart all containers that
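For the systemd route in Answer 1, a container is typically wrapped in a unit like the following sketch (unit name, image, and ports are placeholders); note that newer Docker versions also offer a per-container policy, docker run --restart=always, which supersedes the daemon-wide -r flag:

```ini
# /etc/systemd/system/web.service (hypothetical example)
[Unit]
Description=Web server container
Requires=docker.service
After=docker.service

[Service]
Restart=always
# Remove any stale container left over from the previous boot; the
# leading "-" tells systemd to ignore a failure here.
ExecStartPre=-/usr/bin/docker rm -f web
ExecStart=/usr/bin/docker run --name web -p 80:80 nginx
ExecStop=/usr/bin/docker stop web

[Install]
WantedBy=multi-user.target
```

After systemctl enable web.service, the container comes back on every reboot.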

Docker's rival has arrived: CoreOS releases new container engine Rocket

眉间皱痕 submitted on 2019-12-03 01:56:04
Docker became wildly popular as soon as it launched, raising funding and winning the backing of giants like Google. CoreOS had until now been busy providing technical support services around Docker and seemed ready to ride on Docker's success, but it turns out to have other plans: according to gigaom.com, yesterday CoreOS released a prototype of a competing container engine, Rocket, on GitHub, aiming to steal some of Docker's thunder. Rocket is a container engine that, like Docker, helps developers package applications and their dependencies into portable containers, simplifying deployment work such as environment setup. CoreOS CEO Alex Polvi explained in the official blog post that Rocket differs from Docker in that it omits the "friendly features" Docker offers enterprise users, such as cloud-service acceleration tools and clustering systems; put another way, what Rocket aims to be is a purer industry standard. Alex Polvi argues that because Docker appears to have shifted from its original goal of being an "industry-standard container" to building a container-centric enterprise service platform, CoreOS decided to launch its own standardization effort. CoreOS calls its containers App Containers, comprising the app container image, the runtime, and a container-discovery protocol. Among these, the App Container Image is fairly similar to Docker's Image

how do I clean up my docker host machine

拟墨画扇 submitted on 2019-12-03 01:34:21
Question: As I create/debug a docker image/container, docker seems to be leaving all sorts of artifacts on my system. (At one point there was a 48-image limit.) But the last time I looked there were 20-25 images; docker images. So the overarching questions are: how does one properly clean up? As I was manually deleting images, more started to arrive. Huh? How much disk space should I really allocate to the host? Will running daemons really restart after the next reboot? And the meta question... what
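The artifacts that pile up are usually exited containers plus the dangling (untagged) intermediate images left by repeated rebuilds. A hedged sketch of a manual cleanup session (review each list before deleting; on Docker 1.13+ the single command docker system prune bundles these steps):

```shell
# Remove all stopped containers.
docker rm $(docker ps -aq --filter status=exited)

# Remove dangling (untagged) images left behind by rebuilds.
docker rmi $(docker images -q --filter dangling=true)
```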