weave

2 minutes for ZMQ pub/sub to connect in Kubernetes

Submitted by 試著忘記壹切 on 2021-02-20 04:49:49
Question: I have a Kubernetes 1.18 cluster using Weave as my CNI. I have a ZMQ-based pub/sub app, and I often (not always) see it take 2 minutes before the subscriber can receive messages from the publisher. This seems to be some sort of socket timeout unique to my Kubernetes environment. Here is my trivial ZMQ app example: #!/bin/env python2 import zmq, sys, time, argparse, logging, datetime, threading from zmq.utils.monitor import recv_monitor_message FORMAT = '%(asctime)-15s %(message)s' logging
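A delay of almost exactly two minutes often points at TCP connection establishment silently retrying in the background (for example, when the subscriber connects before the publisher's pod IP is reachable in the overlay). One common mitigation, sketched here with pyzmq against a local endpoint (the address and option values are illustrative, not taken from the original post), is to shorten ZMQ's connect and reconnect intervals so a failed attempt is abandoned and retried quickly rather than waiting on kernel-level TCP timeouts:

```python
import time
import zmq

ctx = zmq.Context.instance()

# Publisher side: bind on a known port.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5556")

# Subscriber side: fail fast on a dead connection and retry quickly.
sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.SUBSCRIBE, b"")         # subscribe to everything
sub.setsockopt(zmq.CONNECT_TIMEOUT, 1000)  # abort a connect attempt after 1 s
sub.setsockopt(zmq.RECONNECT_IVL, 100)     # then retry every 100 ms
sub.connect("tcp://127.0.0.1:5556")

time.sleep(0.5)  # PUB/SUB "slow joiner": give the subscription time to land
pub.send(b"hello")
print(sub.recv())
```

Whether this removes the two-minute stall in a given cluster depends on where the packets are actually being dropped, but it makes the retry behavior visible and tunable on the application side.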

Kubernetes Software Networking: Weave Installation and Configuration

Submitted by 可紊 on 2020-04-06 22:02:02
Kubernetes Software Networking: Weave Installation and Configuration. Source: https://www.weave.works/docs/net/latest/kubernetes/kube-addon/ The following topics are discussed: Installation, Upgrading Kubernetes to version 1.6, Upgrading the Daemon Sets, CPU and Memory Requirements, Pod Eviction, Features, Pod Network, Network Policy, Troubleshooting, Troubleshooting Blocked Connections, Things to watch out for, Changing Configuration Options. Installation: Weave Net can be installed onto your CNI-enabled Kubernetes cluster with a single command: $ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')

A Brief Look at k8s CNI Plugins

Submitted by 泪湿孤枕 on 2020-02-20 23:48:05
The Kubernetes network model imposes certain requirements on specific network functions, but leaves flexibility in how they are implemented, and a number of networking solutions have emerged to suit particular environments and requirements. CNI (Container Network Interface) is a standard design and library that makes it easier to configure container networking whenever a container is created or destroyed. The most popular CNI plugins today are Flannel, Calico, Weave, and Canal (technically a combination of multiple plugins). These plugins satisfy Kubernetes' networking requirements while giving cluster administrators the specific network features they need. Background: container networking is the mechanism by which a container connects to other containers, hosts, and external networks such as the Internet. The container runtime offers several network modes, each with different behavior. For example, Docker can configure the following networks for a container by default: none, which adds the container to a container-specific network stack with no outside connectivity; host, which adds the container to the host's network stack with no isolation; default bridge, the default mode, in which containers can reach each other by IP address; and custom bridge, a user-defined bridge offering more flexibility, isolation, and other conveniences. Docker also lets users configure more advanced networking (including multi-host overlay networks) through additional drivers and plugins. CNI ( https://github.com/containernetworking
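The interface itself is driven by JSON configuration files that the runtime reads from /etc/cni/net.d/. A minimal sketch for the reference bridge plugin, to make the idea concrete (the network name, bridge name, and subnet below are illustrative, not from the original post):

```json
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

The runtime invokes the binary named in `type` with ADD/DEL commands, and the `ipam` section delegates address assignment to a second plugin; Flannel, Calico, and Weave all plug into this same contract.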

Kubernetes container creation gets stuck at ContainerCreating with flannel

Submitted by 雨燕双飞 on 2020-02-16 09:53:12
Question: Context: I installed Docker following these instructions on my Ubuntu 18.04 LTS (Server), and later Kubernetes via kubeadm. After initializing ( kubeadm init --pod-network-cidr=10.10.10.10/24 ) and joining a second node (a two-node cluster to start), I cannot get coredns, or the later-applied Web UI (Dashboard), to actually reach status Running. As the pod network I tried both Flannel ( kubectl apply -f https://raw.githubusercontent.com/coreos/flannel
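As an aside, a CIDR like 10.10.10.10/24 has host bits set: the actual network is 10.10.10.0/24, and the stock flannel manifest assumes 10.244.0.0/16 unless its ConfigMap is edited to match whatever was passed to kubeadm. The normalization is easy to check with Python's standard-library ipaddress module (a quick sketch, not part of the original question):

```python
import ipaddress

# ip_network is strict by default: a CIDR with host bits set is rejected.
cidr = "10.10.10.10/24"
try:
    ipaddress.ip_network(cidr)
except ValueError as err:
    print("rejected:", err)

# strict=False normalizes it to the enclosing network instead.
net = ipaddress.ip_network(cidr, strict=False)
print("normalized to:", net)                      # 10.10.10.0/24
print("usable pod addresses:", net.num_addresses - 2)
```

A /24 pod network also leaves only 254 usable addresses for the whole cluster, which is tight once system pods are counted.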

A Summary of Docker Network Configuration Methods

Submitted by 扶醉桌前 on 2020-01-30 07:09:20
When Docker starts, it creates a virtual network interface named docker0 on the host, by default using 172.17.42.1/16; that 16-bit netmask provides 65534 IP addresses for containers. docker0 is simply a virtual Ethernet bridge that automatically forwards packets between the interfaces bound to it, allowing containers to communicate with the host and with each other. The question is how to let Docker containers on different hosts communicate. Configuring Docker networking effectively is still fairly complex work, so many open-source projects have emerged to solve it, such as flannel, Kubernetes, weave, and pipework. 1. flannel. Built by the CoreOS team, flannel is an etcd-based overlay network that provides each host with an independent subnet. Rudder (flannel's original name) simplifies the network configuration of Docker containers in a cluster, avoids subnet conflicts between containers on multiple hosts, and greatly reduces port-mapping work. The code is at https://github.com/coreos/flannel , and it works as follows: An overlay network is first configured with an IP range and the size of the subnet for each host. For example, one could configure the overlay to use 10.100
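The per-host subnet allocation described above can be sketched with Python's standard-library ipaddress module (the 10.100.0.0/16 overlay range and /24 host subnets are the example values from the flannel description; the carving itself is illustrative, not flannel's actual etcd-based lease code):

```python
import ipaddress

# The overlay is configured with one large range; each host leases its own /24.
overlay = ipaddress.ip_network("10.100.0.0/16")
host_subnets = list(overlay.subnets(new_prefix=24))

print(len(host_subnets))   # 256 hosts can each take a distinct /24
print(host_subnets[0])     # 10.100.0.0/24
print(host_subnets[1])     # 10.100.1.0/24

# Containers on a host draw addresses only from that host's subnet,
# so there are no cross-host conflicts and no port mapping is needed.
first_container_ip = next(host_subnets[0].hosts())
print(first_container_ip)  # 10.100.0.1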

How to fix weave-net CrashLoopBackOff for the second node?

Submitted by 梦想的初衷 on 2020-01-10 14:17:32
Question: I have two VM nodes. Each can see the other either by hostname (through /etc/hosts) or by IP address. One has been provisioned with kubeadm as a master, the other as a worker node. Following the instructions (http://kubernetes.io/docs/getting-started-guides/kubeadm/) I added weave-net. The list of pods looks like the following: vagrant@vm-master:~$ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system etcd-vm-master 1/1 Running 0 3m kube-system kube

Monitoring Containers with Weave Scope

Submitted by 前提是你 on 2020-01-06 17:59:30
1. Introduction. Weave Scope is a monitoring tool for Docker and k8s that is powerful yet simple to configure, so I set it up as follows. It automatically generates a map of the relationships between containers, which makes those relationships easier to understand and makes it convenient to monitor containerized and microservice applications. Weave Scope can monitor multiple container hosts with very little resource overhead, and it lets you drop into containers and hosts directly from the web UI, which makes management convenient. The downside, in my view, is that there is no password authentication, so access is insecure. 2. Installation. Test environment: server1 190.168.3.250 and server2 190.168.3.251, both with Docker installed. Download: curl -L git.io/scope -o /usr/local/bin/scope ; chmod +x /usr/local/bin/scope. On server1 (the server side): scope launch. On server2 (the client side): scope launch 190.168.3.251 190.168.3.250 (the first address is the local machine, the second the server). 3. Access and use. Browse to http://190.168.3.250:4040/ . If a weave network exists, you can also view the weave network relationships, and containers can be monitored in a table view. Source: 51CTO. Author: 一百个小排. Link: https://blog.51cto.com/anfishr/2460919

Docker Weave and WeaveDNS issues

Submitted by 本秂侑毒 on 2020-01-01 11:32:27
Question: I'm having an issue setting up weaveDNS on a small weave network running on my local machine. For now, the problem manifests in the fact that when I run 'weave status' I do not see a DNS section in the output (as is suggested in the Troubleshooting section of http://docs.weave.works/weave/latest_release/weavedns.html). I'm running 4 containers. weave ps output is: c1d106ed5717 c2:ce:53:49:98:f6 10.0.1.12/24 8f01765b2ba6 ba:2e:c3:4b:8f:8f 10.0.1.30/24 0d824d914383 ae

How to set up Docker Swarm with weave across 3 different hosts?

Submitted by 余生颓废 on 2019-12-25 01:55:06
Question: I'm trying to figure out the best solution for my (maybe simple) problem. I have one docker-compose file with some services: Rest-Api (Java), Mongo, Redis. The REST API needs to be scalable: Java-1, Java-2, Java-3, etc. What you see below are 3 different hosts. What's the best way to script everything once all my hosts are up? I want to be able to do something like docker-compose up -d and spawn my services on 3 different hosts. I know docker swarm can do something. I've also read
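With Docker in swarm mode, spreading a scalable service across the three hosts can be expressed in the compose file itself via a deploy section and launched with docker stack deploy. A minimal sketch (the service names and image tags are illustrative, not from the original question):

```yaml
version: "3.8"
services:
  rest-api:
    image: my-registry/rest-api:latest   # illustrative image name
    deploy:
      replicas: 3                        # swarm spreads the tasks across nodes
  mongo:
    image: mongo:4
  redis:
    image: redis:5
```

After `docker swarm init` on one host and `docker swarm join` on the other two, `docker stack deploy -c docker-compose.yml mystack` schedules the replicas across the cluster; an overlay network such as weave (or swarm's built-in overlay driver) carries the cross-host traffic.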

How to get kube-dns working in a Vagrant cluster using kubeadm and Weave

Submitted by 前提是你 on 2019-12-18 04:04:28
Question: I deployed a few VMs using Vagrant to test kubernetes: master: 4 CPUs, 4GB RAM; node-1: 4 CPUs, 8GB RAM. Base image: Centos/7. Networking: Bridged. Host OS: Centos 7.2. I deployed Kubernetes using kubeadm by following the kubeadm getting-started guide. After adding the node to the cluster and installing Weave Net, I'm unfortunately not able to get kube-dns up and running, as it stays in a ContainerCreating state: [vagrant@master ~]$ kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS