coreos

Responses from kubernetes containers getting lost

Submitted by 无人久伴 on 2019-12-10 23:41:55
Question: I have installed Kubernetes on OpenStack. The setup has one master and one node, both on CoreOS. I have a pod hosting a SIP application on UDP port 5060, and I have created a NodePort service on 5060. The spec: "spec": { "ports": [ { "port": 5061, "protocol": "UDP", "targetPort": 5060, "nodeport": 5060, "name": "sipu" } ], "selector": { "app": "opensips" }, "type": "NodePort" } IPs: public IP of the node: 192.168.144.29; private IP of the node: 10.0.1.215; IP of the container: 10.244.2.4; docker0 …
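The field name in the quoted spec is likely part of the problem: the Service API expects nodePort (camelCase), and by default the apiserver only allocates node ports from the 30000-32767 range, so 5060 would be rejected unless --service-node-port-range was widened. A minimal sketch of the corrected Service, reusing the names from the question (the Service name and the in-range port 30060 are arbitrary choices):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: opensips        # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: opensips
  ports:
  - name: sipu
    protocol: UDP
    port: 5061          # service (ClusterIP) port
    targetPort: 5060    # container port
    nodePort: 30060     # must lie in service-node-port-range (default 30000-32767)
```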

Create docker base image for a linux iso image

Submitted by 心已入冬 on 2019-12-10 11:15:34
Question: How can I make a Docker base image from a CoreOS ISO image? I tried tar -cf on the ISO image to produce a tar file, but it failed. Is docker import just for .tar archive files? Thanks.
Answer 1: It is unusual to go from a full OS image (even a small OS) to a Docker image. CoreOS is really intended to run Docker, rather than to be the payload of a Docker image. Which base image do you want to use, and why? You might not need any if you pack your app with its dependencies (and run it on a …
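docker import does indeed take a tarball, but of a root filesystem, not an ISO: the ISO has to be loop-mounted (and, for CoreOS, its squashfs payload extracted) before its tree can be tarred. A hedged sketch of the tar-then-import step, with a scratch directory standing in for the mounted ISO since no real ISO or Docker daemon is assumed here:

```shell
# A real ISO would first be loop-mounted, e.g.:
#   sudo mount -o loop coreos.iso /mnt/iso
# Here a scratch directory stands in for the extracted root filesystem.
mkdir -p rootfs/etc
echo 'demo' > rootfs/etc/os-release

# docker import expects a tarball of a filesystem root, not an ISO image:
tar -C rootfs -cf rootfs.tar .
tar -tf rootfs.tar | grep -q 'etc/os-release' && echo 'tarball ok'

# With a Docker daemon available (not invoked here):
#   docker import rootfs.tar my-base-image:latest
```

Even so, importing CoreOS this way is of limited use, since CoreOS is built to host containers rather than to run as one.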

Iptables remove specific rules by comment

Submitted by 我与影子孤独终老i on 2019-12-10 11:02:01
Question: I need to delete several rules that share the same comment. For example, I have rules with the comment "test it", so I can list them like this: sudo iptables -t nat -L | grep 'test it' But how can I delete all PREROUTING rules with the comment 'test it'? UPD: As @hek2mgl said, I can do something like this: sudo bash -c "iptables-save > iptables.backup" sed -i '/PREROUTING.*--comment.* "test it"/d' iptables.backup sudo iptables-restore < iptables.backup sudo rm iptables.backup But between the save and restore …
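One way to avoid the window between save and restore is to let iptables print the matching rules in command form: iptables -t nat -S PREROUTING emits one -A line per rule, and rewriting -A to -D yields ready-to-run delete commands. The sketch below exercises only the text transformation on a canned listing, so it runs without root or iptables; on a live host you would replace the cat with sudo iptables -t nat -S PREROUTING and execute each resulting line via sudo iptables -t nat.

```shell
# Canned listing standing in for: sudo iptables -t nat -S PREROUTING
cat > rules.txt <<'EOF'
-A PREROUTING -p udp --dport 5060 -m comment --comment "test it" -j DNAT --to-destination 10.0.0.2
-A PREROUTING -p tcp --dport 80 -m comment --comment "keep me" -j DNAT --to-destination 10.0.0.3
-A PREROUTING -p udp --dport 5061 -m comment --comment "test it" -j DNAT --to-destination 10.0.0.4
EOF

# Keep only rules carrying the target comment, and turn each append (-A)
# into the corresponding delete (-D).
grep -- '--comment "test it"' rules.txt | sed 's/^-A /-D /' > delete.txt
cat delete.txt
```

Deleting by exact rule specification this way removes only the commented rules and leaves everything else in the chain untouched.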

docker: having trouble running npm install after creating a new user

Submitted by 情到浓时终转凉″ on 2019-12-09 05:42:37
Question: So I have another follow-up question about installing a Node.js-based framework under Docker on CoreOS, per this post. Because npm is finicky about installing from package.json as root, I've had to create a non-root sudo user in order to install the package. This is what my Dockerfile currently looks like inside our repo, building from an Ubuntu image: # Install dependencies and nodejs RUN apt-get update RUN apt-get install -y python-software-properties python g++ make RUN add …
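For reference, the usual shape of such a Dockerfile (a sketch, not the poster's exact file; the user name and paths are made up) creates the non-root user, hands it ownership of the source tree, and switches to it with USER before npm install runs, so npm's root-related quirks never trigger:

```dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nodejs npm

# Create an unprivileged user and give it the app directory
RUN useradd -m app
WORKDIR /home/app/src
COPY package.json .
RUN chown -R app:app /home/app/src

# Everything from here on runs as the non-root user
USER app
RUN npm install
```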

NFS volume mount results in exit code 32 in Kubernetes?

Submitted by 主宰稳场 on 2019-12-09 02:05:24
Question: I'm trying to mount an external NFS share in a Replication Controller. When I create the replication controller, the pod stays pending. Getting the details on the pod, I see these events: Events: FirstSeen LastSeen Count From SubobjectPath Reason Message Thu, 05 Nov 2015 11:28:33 -0700 Thu, 05 Nov 2015 11:28:33 -0700 1 {scheduler } scheduled Successfully assigned web-worker-hthjq to jolt-server-5 Thu, 05 Nov 2015 11:28:43 -0700 Thu, 05 Nov 2015 11:28:43 -0700 1 {kubelet jolt-server-5} …
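Exit code 32 is mount(8)'s generic mount-failure status; with NFS volumes it most often means the node is missing the NFS client bits (mount.nfs from nfs-common/nfs-utils, plus the rpc helper services) rather than anything being wrong in the manifest itself. For comparison, a minimal sketch of the NFS volume wiring in a Replication Controller (server address, export path, and image are hypothetical):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-worker
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: web-worker
    spec:
      containers:
      - name: web-worker
        image: nginx          # placeholder image
        volumeMounts:
        - name: shared-data
          mountPath: /data
      volumes:
      - name: shared-data
        nfs:
          server: 10.0.1.50   # hypothetical NFS server
          path: /exports/data
```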

Structured logging to journald from within a docker container

Submitted by 旧时模样 on 2019-12-08 07:52:37
Question: What is the best way to write structured logs to journald from within a Docker container? For example, I have an application that writes using sd_journal_send. Rather than change the app, I have tried passing through -v /var/log/systemd/journal:/var/log/systemd/journal It works on my Ubuntu 16.04 desktop, but not on the CoreOS instances where the app runs (which use the Ubuntu 16.04 base image). I don't quite understand why. Perhaps there is a better way to send to the journal? What …
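One likely explanation: sd_journal_send does not write to the journal files at all; it sends datagrams to the AF_UNIX socket /run/systemd/journal/socket, so bind-mounting a /var/log path gives the container nothing to talk to. Bind-mounting the socket path (or all of /run/systemd/journal) into the container is the usual fix. The sketch below only assembles and prints the docker run command, since actually running it needs Docker on a systemd host; the image name is made up:

```shell
# The journal *socket* lives under /run, not /var/log, so mount that instead.
JOURNAL_SOCKET=/run/systemd/journal/socket
CMD="docker run -v ${JOURNAL_SOCKET}:${JOURNAL_SOCKET} my-app-image"
echo "$CMD"
```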

How do I enable snmp on CoreOS

Submitted by 坚强是说给别人听的谎言 on 2019-12-07 19:46:48
Question: I cannot seem to find any useful info on the topic. Furthermore, what is the best way to monitor CoreOS (we use Observium)?
Answer 1: If standard Linux SNMP metrics are most of what you need, you just want to deploy a container that runs an SNMP daemon. For that purpose you will probably need to expose it to the host's network namespace (--net=host, if you are using Docker), and then you definitely need to bind-mount /proc (with -v /proc:/hostproc passed to docker run). The only unusual thing you would need …
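On CoreOS, the idiomatic way to keep such an SNMP container running is a systemd unit. A sketch (the image name example/snmpd is hypothetical; the --net=host and /proc bind-mount flags come from the answer above):

```
# /etc/systemd/system/snmpd.service
[Unit]
Description=SNMP daemon in a container
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f snmpd
ExecStart=/usr/bin/docker run --name snmpd --net=host -v /proc:/hostproc example/snmpd
ExecStop=/usr/bin/docker stop snmpd
Restart=always

[Install]
WantedBy=multi-user.target
```

After dropping the file in place, systemctl daemon-reload followed by systemctl start snmpd.service brings the daemon up, and enabling the unit keeps it across reboots.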

Can kubectl remember me?

Submitted by 大兔子大兔子 on 2019-12-07 15:36:02
Question: I have implemented basic authentication on my Kubernetes API server, and now I am trying to configure my ~/.kube/config file so that I can simply run kubectl get pods: kubectl config set-cluster digitalocean \ --server=https://SERVER:6443 \ --insecure-skip-tls-verify=true \ --api-version="v1" kubectl config set-context digitalocean --cluster=digitalocean --user=admin kubectl config set-credentials admin --password="PASSWORD" kubectl config use-context digitalocean But now, it asks for …
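The usual cause of the prompt is that set-credentials was given a password but no username, leaving kubectl with an incomplete basic-auth pair to ask about interactively. Adding --username makes it stick: kubectl config set-credentials admin --username=admin --password="PASSWORD". The resulting user entry in ~/.kube/config then looks roughly like this (PASSWORD is the placeholder from the question):

```yaml
users:
- name: admin
  user:
    username: admin
    password: PASSWORD   # placeholder value
```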

Kubernetes: how to enable API Server Bearer Token Auth?

Submitted by 自古美人都是妖i on 2019-12-07 12:11:05
Question: I've been trying to enable token auth for HTTP REST API server access from a remote client. I installed my CoreOS/K8s cluster controller using this script: https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/controller-install.sh My cluster works fine. This is a TLS installation, so I need to configure any kubectl clients with the client certs to access the cluster. I then tried to enable token auth by running: echo `dd if=/dev/urandom bs=128 count=1 2>/dev/null | …
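For context, static token auth on an apiserver of that era is driven by a CSV file passed via --token-auth-file, with one token,user,uid[,"group1,group2"] record per line; after the apiserver restarts, clients authenticate by sending the token in an Authorization: Bearer header. A sketch with a made-up token:

```
# /etc/kubernetes/tokens.csv -- format: token,user,uid,"group1,group2"
0123456789abcdef,admin,1,"system:masters"

# kube-apiserver flag:
#   --token-auth-file=/etc/kubernetes/tokens.csv

# client request:
#   curl -k -H "Authorization: Bearer 0123456789abcdef" https://SERVER/api/v1/nodes
```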

docker is using the v1 registry api when it should use v2

Submitted by ぐ巨炮叔叔 on 2019-12-07 05:36:00
Question: I'm trying to use a self-hosted Docker registry v2. I should be able to push a Docker image, and this does work locally on the host server (CoreOS) running the registry v2 container. However, on a separate machine (also CoreOS, same version), when I try to push to the registry, it tries to push to v1, giving this error: Error response from daemon: v1 ping attempt failed with error: Get https://172.22.22.11:5000/v1/_ping: dial tcp 172.22.22.11:5000: i/o timeout. If this private registry supports …
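The v1 mention in that error is usually a red herring: the daemon only falls back to a v1 ping after the v2 ping has already failed, and here the underlying failure is the i/o timeout, i.e. the second machine cannot reach 172.22.22.11:5000 at all (firewall or routing), or TLS verification fails against the registry's certificate. If the registry is served over plain HTTP, it additionally has to be whitelisted on every client daemon, e.g. in /etc/docker/daemon.json (on older CoreOS releases the equivalent was an --insecure-registry flag in the docker daemon's options):

```json
{
  "insecure-registries": ["172.22.22.11:5000"]
}
```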