
kubernetes + coreos cluster - replacing certificates

荒凉一梦 submitted on 2019-12-05 12:09:34
I have a CoreOS Kubernetes cluster, which I started by following this article: kubernetes coreos cluster on AWS. TL;DR: `kube-aws init`, `kube-aws render`, `kube-aws up`. Everything worked well and I had a Kubernetes CoreOS cluster on AWS. The article carries this warning: PRODUCTION NOTE: the TLS keys and certificates generated by kube-aws should not be used to deploy a production Kubernetes cluster. Each component certificate is only valid for 90 days, while the CA is valid for 365 days. If deploying a production Kubernetes cluster, consider establishing PKI independently of this tool.
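The warning's advice to establish PKI independently can start with something as small as a longer-lived CA. A minimal openssl sketch, where the file names, the CN, and the 10-year validity are illustrative choices rather than anything kube-aws prescribes:

```shell
# Generate a CA key and a self-signed CA certificate valid for ~10 years
# (the CA that kube-aws generates is only valid for 365 days).
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem \
  -days 3650 -subj "/CN=kube-ca" -out ca.pem
```

Component certificates would then be signed against this CA instead of the kube-aws-generated one.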

docker is using the v1 registry api when it should use v2

不想你离开。 submitted on 2019-12-05 10:15:30
I'm trying to use a self-hosted docker registry v2. I should be able to push a docker image, which does work locally on the host server (CoreOS) running the registry v2 container. However, on a separate machine (also CoreOS, same version), when I try to push to the registry, it tries to push to v1, giving this error: Error response from daemon: v1 ping attempt failed with error: Get https://172.22.22.11:5000/v1/_ping: dial tcp 172.22.22.11:5000: i/o timeout. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry 172.22.22.11:5000` to
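On CoreOS, the `--insecure-registry` flag from that error message is usually added through a systemd drop-in rather than a daemon config file. A sketch following the standard CoreOS layout (the registry address is the one from the question; the drop-in file name is arbitrary):

```ini
# /etc/systemd/system/docker.service.d/50-insecure-registry.conf
[Service]
Environment='DOCKER_OPTS=--insecure-registry="172.22.22.11:5000"'
```

After writing the drop-in, run `sudo systemctl daemon-reload && sudo systemctl restart docker`. Note that the error itself is an i/o timeout, so it is also worth confirming that port 5000 is reachable from the second machine: the daemon only reports the v1 ping failure after the v2 endpoint check has already failed.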

How to copy /var/lib/docker with overlayfs directory structure with data *as-is* without increasing the storage space

妖精的绣舞 submitted on 2019-12-05 09:30:43
I have a docker installation with several images and about 150 GB of data in /var/lib/docker. This setup uses overlayfs as its storage driver. There are several directories for each layer under /var/lib/docker/overlay holding the actual data. The partition size is 160 GB. My requirement is to copy the docker directory from /var/lib/docker to a new disk of 1 TB, so that I can point docker at this new partition and continue to use my old images. Now the problem is, when I use an rsync or a cp command with -a to copy /var/lib/docker to the new partition, instead of a total of 150G actual

What is the correct way to install addons with Kubernetes 1.1?

允我心安 submitted on 2019-12-05 08:00:32
What is the correct way to install addons with Kubernetes 1.1? The docs aren't as clear as I'd like on this subject; they seem to imply that one should copy addons' yaml files to /etc/kubernetes/addons on master nodes, but I have tried this and nothing happens. Additionally, for added confusion, the docs imply that addons are bundled with Kubernetes: So the only persistent way to make changes in add-ons is to update the manifests on the master server. But still, users are discouraged to do it on their own - they should rather wait for a new release of Kubernetes that will also contain new
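For 1.1 the docs' suggestion amounts to staging manifests on the master. A sketch of that approach; the directory comes from the question, and the manifest content is a hypothetical placeholder, not a real addon:

```shell
# Stage an addon manifest into the directory that the kube-addons
# machinery on the master watches (per the question: /etc/kubernetes/addons).
ADDONS_DIR="${ADDONS_DIR:-/etc/kubernetes/addons}"
mkdir -p "$ADDONS_DIR"
cat > "$ADDONS_DIR/example-addon.yaml" <<'EOF'
# hypothetical placeholder; real addons ship their own manifests
apiVersion: v1
kind: ReplicationController
metadata:
  name: example-addon
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
EOF
```

One caveat that may explain "nothing happens": the staged manifests are only picked up if the distribution's kube-addons service is actually running on the master. Failing that, applying the same manifests by hand with `kubectl create -f <file> --namespace=kube-system` installs them directly.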

How to execute host's Docker command from container?

♀尐吖头ヾ submitted on 2019-12-05 01:34:41
Question: I want to write a Docker container management script in Python. However, since I use CoreOS, Python is not included as a standard command. So I am thinking of using the Python Docker container (https://registry.hub.docker.com/_/python/) to execute my script. However, in that case the script will be executed inside the container, which doesn't have access to the host's Docker CLI. Is there a way to use Python (or other programming languages not packaged in CoreOS) to manage the host environment without
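The common workaround is to bind-mount the host's Docker socket into the container, so anything inside that speaks the Docker API controls the host's daemon. A sketch using the image from the question; the script name is hypothetical, and this requires a running Docker daemon, so it is illustrative only:

```shell
# Run the official python image with access to the host's Docker daemon.
# /var/run/docker.sock is the daemon's API endpoint; a Docker API client
# inside the container (e.g. the docker-py library) will then manage the
# *host's* containers, not a nested environment.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD":/scripts \
  python:3 python /scripts/manage_containers.py
```

Be aware that access to the Docker socket is effectively root access on the host, so only do this with scripts you trust.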

Vagrant “Authentication failure” during up, but “vagrant ssh” can get in just fine

随声附和 submitted on 2019-12-05 00:53:11
Question: I'm stumped. I'm trying to run a Vagrant/VirtualBox/CoreOS cluster on Windows 8.1 to develop the cluster for running in the cloud. I've tried this on four machines (all Windows 8.1 with the latest updates, and all with the latest VirtualBox, Vagrant, and Git) with the same config for Vagrant. I'm checking the Vagrant config out of a repo on all four systems, so I'm confident the configs are the same on each. I get two successes and two failures. Two machines succeed like this: Bringing machine 'core-01' up
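One frequent cause of exactly this success/failure split (a hedged guess, since the question is truncated) is Vagrant 1.7+ replacing the box's well-known insecure SSH key with a freshly generated one on first boot, which can fail on some Windows hosts while succeeding on others. Disabling the replacement is the usual first test:

```ruby
# Vagrantfile fragment: keep using the box's well-known insecure key
# instead of letting Vagrant 1.7+ swap in a generated one at first boot.
Vagrant.configure("2") do |config|
  config.ssh.insert_key = false
end
```

If the failing machines then come up cleanly, the key-replacement step (or line-ending mangling of the private key by Git on Windows) is the likely culprit.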

How to specify the root volume size of a CoreOS EC2 instance using boto3?

让人想犯罪 __ submitted on 2019-12-04 23:59:50
Question: I cannot figure out from the documentation and source code how to define the size of the root device. You can specify N additional block devices using the BlockDeviceMappings section, where you can declare their sizes. But there is no way to set the size of the root volume, so it always creates an instance with a root volume of 8 GB, the default value. Answer 1: Ran into this issue myself today; probably too late for the original poster, but in case anyone else stumbles across this question later I did the
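The trick the (truncated) answer is heading toward: the root volume can be resized by naming the root device explicitly inside BlockDeviceMappings. For HVM CoreOS/Amazon Linux images the root device is typically `/dev/xvda`, but that is an assumption here; verify it against the AMI's `RootDeviceName` from `describe_images`. A sketch:

```python
def root_volume_mapping(size_gb, device_name="/dev/xvda", volume_type="gp2"):
    """Build a BlockDeviceMappings entry that overrides the root volume size.

    device_name must match the AMI's actual root device; check
    describe_images()["Images"][0]["RootDeviceName"] if unsure.
    """
    return [{
        "DeviceName": device_name,
        "Ebs": {
            "VolumeSize": size_gb,          # GiB; replaces the 8 GB default
            "VolumeType": volume_type,
            "DeleteOnTermination": True,
        },
    }]


def launch_instance(ami_id, size_gb, **kwargs):
    """Illustrative only: requires boto3 and configured AWS credentials."""
    import boto3  # imported here so root_volume_mapping stays usable without boto3
    ec2 = boto3.client("ec2")
    return ec2.run_instances(
        ImageId=ami_id,
        MinCount=1,
        MaxCount=1,
        BlockDeviceMappings=root_volume_mapping(size_gb),
        **kwargs,
    )
```

Because the mapping names the root device itself rather than an additional device, EC2 applies the size override to the root EBS volume instead of attaching a new one.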

etcd-operator: A Complete Quick-Start Tutorial

夙愿已清 submitted on 2019-12-04 23:28:43
An Operator is a class of cloud-native extension service built on Kubernetes custom resource definitions (CRDs) and controllers: the CRD defines the custom resource objects that each operator creates and manages, while the Controller contains the operational logic for managing those objects.

For an ordinary user, deploying a highly available etcd cluster on Kubernetes requires not only understanding the relevant configuration but also etcd-specific expertise for tedious tasks such as maintaining quorum, reconfiguring cluster membership, creating backups, and handling disaster recovery.

With an operator, we can instead use a simple, readable YAML file (by analogy with a Deployment) to declaratively configure, create, and manage an etcd cluster. Below we walk through the architecture of etcd-operator and the features it provides.

Goals:
- Understand etcd-operator's architecture and its CRD resource objects
- Deploy etcd-operator
- Create an etcd cluster with etcd-operator
- Back up and restore an etcd cluster with etcd-operator

Architecture: etcd-operator is built on the Kubernetes API extension mechanism. It provides a Controller similar in spirit to the one behind Deployments, except that this Controller is dedicated to managing the etcd service.
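As a concrete example of the declarative style described above, an EtcdCluster custom resource looks roughly like this (the apiVersion and kind follow the etcd-operator project; the name, size, and version values are illustrative):

```yaml
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example-etcd-cluster
spec:
  size: 3            # three-member cluster; the operator maintains quorum
  version: "3.2.13"  # etcd version to run
```

Applying this manifest with `kubectl create -f` is all the user does; the operator's Controller handles member creation, failure recovery, and version management behind the scenes.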

CoreOS

北城以北 submitted on 2019-12-04 18:04:44
CoreOS is a lightweight operating system based on the Linux kernel, built for computer-cluster infrastructure and focused on automation, easy deployment, security, reliability, and scalability. As an operating system, CoreOS provides the minimal environment needed to deploy applications inside containers, along with a set of built-in tools for service discovery and configuration sharing. Docker serves as the container engine; CoreOS handles container management.

Source: https://www.cnblogs.com/hshy/p/11876689.html

CoreOS Vagrant VirtualBox SSH password

◇◆丶佛笑我妖孽 submitted on 2019-12-04 18:02:17
I'm trying to SSH into CoreOS on VirtualBox using PuTTY. I know the username appears in the output when I do `vagrant up`, but I don't know what the password is. I've also tried overriding it with the `config.ssh.password` setting in the Vagrantfile, but when I do `vagrant up` again it comes up with an Authentication failure warning and retries endlessly. How do we use PuTTY to log into this box instance? By default there is no password set for the core user, only key-based authentication. If you'd like to set a password, this can be done via cloud-config. Place the cloud-config file in a user-data file within
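The cloud-config mentioned above looks roughly like this; the hash shown is a placeholder, not a working one, so generate a real hash first (for example with `openssl passwd -1`):

```yaml
#cloud-config

users:
  - name: core
    passwd: "$1$PLACEHOLDER$REPLACEWITHREALHASH"  # placeholder, not a valid hash
```

With this in the box's user-data, the core user gets a password for console and PuTTY logins while key-based authentication keeps working as before.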