coreos

kubernetes pod can't connect (through service) to self, only to other pod-containers

混江龙づ霸主 submitted on 2019-11-30 14:07:46
Question: I have a Kubernetes single-node setup (see https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html ). I have a service and a replication controller creating pods. Those pods need to connect to the other pods in the same service (note: this is ultimately so that I can get mongo running with replica sets (non-localhost), but this simple example demonstrates the problem mongo has). When I connect from any node to the service, the connection is distributed (as expected) to one of
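
A common cause of this symptom is hairpin NAT: when a pod connects to its own service VIP, the connection can be load-balanced back to the same pod, and the bridge silently drops it unless hairpin mode is enabled. A minimal check-and-fix sketch, assuming a systemd-managed kubelet and the default cbr0 bridge (the unit path and bridge name are assumptions, not from the question):

$ # each bridge port should report 1 if hairpin mode is enabled
$ cat /sys/class/net/cbr0/brif/*/hairpin_mode
$ # let the kubelet configure hairpin mode on pod veths, then restart it
$ grep hairpin-mode /etc/systemd/system/kubelet.service
--hairpin-mode=hairpin-veth
$ sudo systemctl daemon-reload && sudo systemctl restart kubelet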

Using etcd as primary store/database?

别等时光非礼了梦想. submitted on 2019-11-30 08:00:11
Can etcd be used as a reliable database replacement? Since it is distributed and stores key/value pairs persistently, it would be a great alternative NoSQL database. In addition, it has a great API. Can someone explain why this is not a thing?
etcd
etcd is a highly available key-value store that Kubernetes uses for persistent storage of all of its objects, such as deployment, pod, and service information. etcd has strict access control: it can be accessed only through the API on the master node, and nodes in the cluster other than the master have no access to the etcd store.
nosql database
There are currently
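
As a quick illustration of the key-value API the question praises, a minimal etcdctl session (v3 API; the key and value are made up for the example):

$ ETCDCTL_API=3 etcdctl put /config/feature-x enabled
OK
$ ETCDCTL_API=3 etcdctl get /config/feature-x
/config/feature-x
enabled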

k8s master cannot see the worker nodes

久未见 submitted on 2019-11-30 05:55:42
k8s master cannot see the worker nodes
1. Problem: the master node is installed, and the worker reports that it joined the master successfully, but kubectl get nodes on the master does not list it, and the master itself flips between Ready and NotReady. After the worker is reset with kubeadm reset, the master node returns to normal.
2. Fix: set the hosts file and the hostnames (a verification sketch follows this entry).
2.1 Set hosts:
$ cat /etc/hosts
192.168.25.131 master01
192.168.25.132 node01
192.168.25.133 node02
2.2 Set the hostname (run the matching one on each machine):
$ hostnamectl set-hostname master01 | node01 | node02
2.3 Possible follow-up problem: if /etc/hosts is changed after the master is already installed, coredns can get stuck:
# check the pods; coredns stays in ContainerCreating
$ kubectl get pod --all-namespaces
corednsxxx ContainerCreating
corednsxxx ContainerCreating
Applying the fix from another blog:
$ rm -rf /var/lib/cni/flannel/* && rm
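
A hedged verification sketch for after the hosts/hostname fix (the join token and hash are placeholders; the API server address follows the hosts example above; whether a rejoin is needed at all is my inference):

$ # on the worker: reset, then rejoin with the corrected hostname
$ sudo kubeadm reset
$ sudo kubeadm join 192.168.25.131:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
$ # on the master: the worker should now appear and settle into Ready
$ kubectl get nodes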

Can I clean /var/lib/docker/tmp?

时光总嘲笑我的痴心妄想 submitted on 2019-11-30 02:51:16
Question: My server runs CoreOS. There are many files in /var/lib/docker/tmp with names like "GetV2ImageBlob998303926"; together the GetV2ImageBlobxxxxxxxx files take up 640MB. Can I remove all files in /var/lib/docker/tmp ?
Answer 1: This is reported in issues/14506 and addressed in PR 14389, now PR 15414 ("Ensure images downloaded by pullTagV2 are always cleaned up"). Previously, if only some of the downloads succeeded, we would not close and delete the file handles. This does change the behavior of the
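
If the files are leftovers from interrupted pulls, deleting them is safe; a cautious sketch (stopping the daemon first is my assumption, to avoid racing a pull that is still in flight):

$ sudo systemctl stop docker
$ # remove only the leaked temporary download files
$ sudo rm -f /var/lib/docker/tmp/GetV2ImageBlob*
$ sudo systemctl start docker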

Should I use forever/pm2 within a (Docker) container?

南笙酒味 submitted on 2019-11-29 20:46:34
I am refactoring a couple of node.js services. All of them used to be started with forever on virtual servers; if a process crashed, it was simply relaunched. Now, moving to containerised and stateless application structures, I think the process should exit and the container should be restarted on a failure. Is that correct? Are there benefits or disadvantages? My take is: do not use an in-container process supervisor (forever, pm2) and instead use the docker restart policy via --restart=always (or one of the other flavors of that option). This is more in line with the overall docker philosophy, and
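
A minimal sketch of the restart-policy approach (the image tag, container name, and entry point are illustrative):

$ # let Docker supervise the process instead of forever/pm2
$ docker run -d --restart=always --name my-service node:18 node server.js
$ # or cap retries so a crash loop eventually surfaces as a stopped container
$ docker run -d --restart=on-failure:5 --name my-service node:18 node server.js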

kubernetes service IPs not reachable

隐身守侯 submitted on 2019-11-29 07:30:43
So I've got a Kubernetes cluster up and running using the Kubernetes on CoreOS Manual Installation Guide.
$ kubectl get no
NAME STATUS AGE
coreos-master-1 Ready,SchedulingDisabled 1h
coreos-worker-1 Ready 54m
$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default curl-2421989462-h0dr7 1/1 Running 1 53m 10.2.26.4 coreos-worker-1
kube-system busybox 1/1
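
When service VIPs don't respond, a useful first check is whether kube-proxy actually wrote the service NAT rules on the node; a hedged sketch (KUBE-SERVICES is the chain kube-proxy's iptables mode uses; the service IP shown is hypothetical):

$ # on the worker: confirm the service chains exist and reference the cluster IPs
$ sudo iptables -t nat -L KUBE-SERVICES -n | head
$ # then try a service IP directly from the node, with a short timeout
$ curl -m 5 http://10.3.0.100/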

How do I check if my local docker image is outdated, without pushing from somewhere else?

孤者浪人 submitted on 2019-11-29 06:14:47
Question: I'm running a react app in a docker container on a CoreOS server. Let's say it's been pulled from dockerhub, from https://hub.docker.com/r/myimages/myapp . Now I want to check periodically whether the dockerhub image for the app container has been updated, to see if the image I'm running locally has fallen behind. What would be the most efficient way to check if a local docker image is outdated compared to the remote image? All solutions I've found so far are bash scripts or external services that push on
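
One hedged approach that needs nothing external is to compare repo digests before and after a pull (the image name follows the question's example; docker pull only downloads layers when the remote digest differs):

$ # digest of the image as currently stored locally
$ docker inspect --format '{{index .RepoDigests 0}}' myimages/myapp:latest
$ docker pull myimages/myapp:latest
$ # if the digest printed now differs, the local image was outdated
$ docker inspect --format '{{index .RepoDigests 0}}' myimages/myapp:latest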

Allow scheduling of pods on Kubernetes master?

点点圈 submitted on 2019-11-29 03:43:26
I set up Kubernetes on CoreOS on bare metal using the generic install scripts. It's running the current stable release, 1298.6.0, with Kubernetes version 1.5.4. We'd like to have a highly available master setup, but we don't have enough hardware at this time to dedicate three servers to serving only as Kubernetes masters, so I would like to allow user pods to be scheduled on the Kubernetes master. I set --register-schedulable=true in /etc/systemd/system/kubelet.service but it still showed up as SchedulingDisabled. I tried to add settings for including the node as a worker,
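
On that version, the SchedulingDisabled status reflects spec.unschedulable on the node object, so it can also be cleared from kubectl; a hedged sketch using the node naming style from the guide (the node name is illustrative):

$ # mark the master schedulable again (clears spec.unschedulable)
$ kubectl uncordon coreos-master-1
$ # or patch the field directly
$ kubectl patch node coreos-master-1 -p '{"spec":{"unschedulable":false}}'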

webserver Etcd Cluster / CoreOS etcd / macOS etcd

99封情书 submitted on 2019-11-28 19:27:21
https://coreos.com/etcd/
https://coreos.com/etcd/docs/latest/
Starting etcd on a single machine: https://kaixiansheng.iteye.com/blog/2401500
etcd ships as just two executables (true up to 3.0.15). Download: https://github.com/coreos/etcd/releases
Note: etcd can be installed as a cluster, but here we only want a single-machine test, so starting one node is enough.
1. Extract the tarball and put the binaries on the system path:
tar zxvf etcd-v3.0.15-linux-amd64.tar.gz
cp etcd /usr/bin/
cp etcdctl /usr/bin/
2. Create a service description file in systemd's service directory:
cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
[Service]
Type=simple
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
[Install]
WantedBy=multi-user
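
Once the unit file is in place, a hedged smoke test; creating the working directory first is my assumption, since the unit points at /var/lib/etcd/:

$ sudo mkdir -p /var/lib/etcd
$ sudo systemctl daemon-reload && sudo systemctl start etcd
$ # v2 etcdctl (the default for 3.0.x): health check plus a quick write/read
$ etcdctl cluster-health
$ etcdctl set /test/key hello
$ etcdctl get /test/key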