docker-swarm

Docker 1.12 Swarm Mode - Load balance tasks of the same service on a single node

Question: On Docker 1.12 Swarm Mode, if I have more than one task of the same service running on a single node and publishing the same port, is it possible to do any kind of load balancing between the tasks? Or, what is the purpose of having more instances of a service than the number of nodes? E.g.:

    docker swarm init
    docker service create --name web --replicas=2 --publish=80:80 nginx

Now if I open the browser and access http://localhost/ (refreshing the page many times), all connections seem to be handled by the…
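
For reference, the ingress routing mesh does balance across co-located tasks; a minimal way to observe this, assuming a later engine release where docker service logs is available ("web" as created above):

    # Send a handful of requests through the published port, then check
    # which replica served each one in the aggregated service logs.
    for i in $(seq 1 10); do curl -s -o /dev/null http://localhost/; done
    docker service logs web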

Log client's “real” IP address in Docker Swarm 1.12 when accessing a service

I have an nginx container running as a service in Docker Swarm inside a user-created overlay network. Both were created with:

    docker network create --driver overlay proxy
    docker service create --name proxy --network proxy -p 80:80 nginx

When accessing the nginx site through a browser, the remote address in the nginx access log is a 10.255.x.x address, which I presume to be the Swarm load balancer. The question is how to know/log the address of the end client accessing the site rather than the load balancer address.

Answer: Good catch! Most people analyzing the nginx access.log and client IP…
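
A commonly cited workaround is host-mode publishing, which bypasses the ingress routing mesh so nginx sees the client's real source address. A sketch, assuming an engine release newer than 1.12 that supports the long --publish syntax:

    # Global mode keeps one task per node, since host-mode ports cannot
    # be shared by multiple tasks on the same node.
    docker service create --name proxy --network proxy \
      --mode global \
      --publish mode=host,target=80,published=80 \
      nginx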

Docker swarm with a custom network

I'm trying to work out how to use swarm mode in Docker properly. First I tried running containers on my 2 workers and the manager machine without specifying a custom network (so I'm using the default ingress overlay network). However, if I use the ingress network, for some reason I cannot resolve tasks.myservice. So I tried configuring a custom network like this:

    docker network create -d overlay elasticnet

Now, when I bash into one of the containers, I can successfully resolve tasks.myservice, but I can no longer externally access the port I've defined with --publish at service creation…
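
The two things are not mutually exclusive: a service attached to a custom overlay network can still publish ports through the ingress network. A sketch of the combined setup ("myimage" and port 9200 are placeholders):

    # tasks.myservice resolves on the custom overlay network, while the
    # published port stays reachable externally via the routing mesh.
    docker service create --name myservice \
      --network elasticnet \
      --publish 9200:9200 \
      myimage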

Docker service exposed publicly even though made to expose ports to localhost only

I have created a service and exposed it to run only on localhost on one of my docker swarm nodes, but I can still access the service publicly. I have deleted and redeployed the docker stack, but the issue remains. Here is the docker-compose.yml I used to deploy the service in the stack:

    version: "3"
    networks:
      api-net:
        ipam:
          config:
            - subnet: 10.0.10.0/24
    services:
      health-api:
        image: myprivateregistry:5000/healthapi:qa
        ports:
          - "127.0.0.1:9010:9010"
        networks:
          - api-net
        depends_on:
          - config-server
        deploy:
          mode: replicated
          replicas: 1
          placement:
            constraints:
              - node.role == manager

I haven't…
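
This matches known swarm-mode behavior: ports published through the ingress routing mesh are bound on all interfaces, and the 127.0.0.1 host prefix is silently ignored. A small diagnostic sketch (the <stack> prefix is a placeholder for the stack name):

    # Show the ports the service actually publishes, then what the host
    # is listening on; expect 0.0.0.0 (or *) rather than 127.0.0.1.
    docker service inspect --format '{{json .Endpoint.Ports}}' <stack>_health-api
    ss -tlnp | grep 9010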

How can I use a docker swarm mode manager behind a floating IP

Some providers, such as Scaleway, will give your server an IP that is not attached to a local interface on the box.

    # docker swarm init --advertise-addr <my-external-ip>:2377 --listen-addr 0.0.0.0:2377
    Error response from daemon: must specify a listening address because the address to advertise is not recognized as a system address

while

    # docker swarm init --advertise-addr eth0:2377

will advertise a private IP address. How is docker swarm supposed to be set up in such an environment?

Answer: There is an issue with native swarm mode when it comes to binding to a non-system IP address, as docker 1.12.5…
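
For reference, the shape of the command that later engine releases accept once the issue above is fixed (a sketch; both addresses are placeholders):

    # Advertise the floating IP to other swarm members while listening
    # on an address that actually exists on a local interface.
    docker swarm init \
      --advertise-addr <my-external-ip>:2377 \
      --listen-addr <private-ip>:2377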

Can we deploy a container to a specific node in a docker swarm?

Question: I have a docker swarm cluster; it contains 1 master and 3 nodes. When we deploy a container through the swarm master, e.g. with the command below, Swarm will automatically pick a node and deploy my container:

    docker -H tcp://<master_ip>:5001 run -dt --name swarm-test busybox /bin/sh

Is there a way to hand-pick a node? E.g. I want to deploy a container on node 1.

Answer 1: Take a look at the Swarm filter docs. You can set various constraints on which node Swarm should pick for any given container. For your case, try…
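
A sketch of the classic Swarm constraint filter the answer points to, pinning the container to a node by name ("node1" is a placeholder hostname):

    # The constraint is passed as an environment variable and evaluated
    # by the classic Swarm scheduler, not by the container itself.
    docker -H tcp://<master_ip>:5001 run -dt --name swarm-test \
      -e constraint:node==node1 busybox /bin/sh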

How to share volumes across multiple hosts in docker engine swarm mode?

Question: Can we share a common/single named volume across multiple hosts in docker engine swarm mode? What's the easiest way to do it?

Answer 1: If you have an NFS server set up, you can use an NFS folder as a volume from docker compose like this:

    volumes:
      grafana:
        driver: local
        driver_opts:
          type: nfs
          o: addr=192.168.xxx.xx,rw
          device: ":/PathOnServer"

Answer 2: Out of the box, Docker does not support this by itself. You must use additional components, either a docker plugin which would provide you with a new…
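
The same volume can also be declared from the plain docker CLI; a sketch using the local driver's NFS options (same server and export as in Answer 1):

    # Each host that runs a task mounts the NFS export itself, so the
    # volume's contents are shared across nodes.
    docker volume create --driver local \
      --opt type=nfs \
      --opt o=addr=192.168.xxx.xx,rw \
      --opt device=:/PathOnServer \
      grafana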

Connecting two docker containers

Question: I have two existing docker containers, web and db. I want to link these two containers so that they can communicate with each other. If I go with the --link command, it will link web to a new image and not to the db.

Answer 1: Using --link was the only way of connecting containers before the advent of docker networks. Networks provide a "cleaner" solution to the problem of inter-container communication and at the same time solve two of the major limits of links: a restart of a linked container breaks the…
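
A sketch of the network-based approach the answer describes, attaching both existing containers to a user-defined network ("mynet" is a placeholder name):

    # docker network connect works on already-running containers; once
    # both are attached, each can reach the other by container name.
    docker network create mynet
    docker network connect mynet web
    docker network connect mynet db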

docker swarm - how to balance already running containers in a swarm cluster?

Question: I have a docker swarm cluster with 2 nodes on AWS. I stopped both instances and then started the swarm manager first and the worker afterwards. Before stopping the instances, I had a service running with 4 replicas distributed between the manager and the worker. When I started the swarm manager node first, all replica containers started on the manager itself and did not move to the worker at all. Please tell me how to rebalance them. Isn't the swarm manager responsible for doing this when the worker starts?

Answer 1: Swarm currently (18.03) does not…
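
The usual manual rebalance is to force a rolling update, which reschedules the service's tasks across whatever nodes are available at that point; a sketch ("web" is a placeholder service name):

    # --force redeploys every task even though the spec is unchanged,
    # letting the scheduler spread them over both nodes again.
    docker service update --force web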

How to set up Hadoop in Docker Swarm?

Question: I would like to be able to start a Hadoop cluster in Docker, distributing the Hadoop nodes to the different physical nodes, using swarm. I have found the sequenceiq image that lets me run Hadoop in a docker container, but this doesn't allow me to use multiple nodes. I have also looked at the Cloudbreak project, but it seems to need an OpenStack installation, which seems a bit like overkill, because it seems to me that swarm alone should be enough to do what we need. Also, I found this Stackoverflow…
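
One possible shape for this (a sketch only; the service layout is an assumption, not a tested Hadoop deployment): an overlay network plus scheduling constraints to spread the Hadoop roles across the physical hosts.

    # Pin the namenode to a specific host and run one datanode per
    # node; "host1" is a placeholder hostname.
    docker network create -d overlay hadoop-net
    docker service create --name namenode --network hadoop-net \
      --constraint node.hostname==host1 sequenceiq/hadoop-docker
    docker service create --name datanode --network hadoop-net \
      --mode global sequenceiq/hadoop-docker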