docker-swarm

How to bind a published port to a specific eth[x] interface in Docker swarm mode

Submitted by 会有一股神秘感。 on 2019-12-04 18:48:20
Question: I'm trying to deploy my container to a Docker swarm cluster (Docker Engine 1.12.1). The features of swarm mode are really exciting, such as clustering and multi-host networking. However, I've found something that can't be achieved in swarm mode so far (Docker 1.12.x) which works well when using docker run to start a container. My host has eth0 for the intranet and eth1 for the Internet. I would like to publish the service created by docker service create only on the intranet interface. But the …
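The excerpt is cut off before any answer, but one hedged sketch of the direction later Docker releases took: host-mode publishing (added after 1.12) bypasses the routing mesh so the port is opened by the local engine, and a firewall rule can then keep it off the Internet-facing interface. The service name, image and port below are placeholders, not from the original question.

```sh
# Sketch only: host-mode publish plus a firewall rule; requires a Docker
# release newer than the 1.12 engine discussed above.
docker service create \
  --name intranet-web \
  --publish mode=host,target=80,published=8080 \
  nginx:alpine

# Assumption: eth1 is the Internet-facing interface; drop the published port
# there so only eth0 (intranet) traffic reaches it.
iptables -A INPUT -i eth1 -p tcp --dport 8080 -j DROP
```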

Should swarm load balancing perform health checks on its nodes?

Submitted by 家住魔仙堡 on 2019-12-04 16:39:56
The Load Balancing section in the swarm docs doesn't make it clear whether the internal load balancer also performs health checks, and whether it removes nodes that are no longer running the service (because the container was killed or the node was rebooted). In the following case I have a service with replicas 3, one instance running on each of the three nodes.

Manager:
    [root@centosvm ~]# docker ps
    CONTAINER ID   IMAGE                                    COMMAND                  CREATED         STATUS         PORTS   NAMES
    a593d485050a   ddewaele/springboot.crud.sample:latest   "sh -c 'java $JAVA_OP"   7 minutes ago   Up 7 minutes           springbootcrudsample.1.5syc6j4c8i3bnerdqq4e1yelm

Node1:
    [root@node1 ~]# …
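The excerpt stops before the answer, so as a hedged illustration only: swarm's VIP load balancer routes to tasks in the Running state, and when a container health check is defined an unhealthy task is taken out of rotation and rescheduled. The health endpoint and intervals below are assumptions, not something stated in the original post.

```sh
# Sketch: give the service a health check so swarm can detect a dead instance,
# stop routing traffic to it, and restart the task.
docker service create \
  --name springbootcrudsample \
  --replicas 3 \
  --publish 8080:8080 \
  --health-cmd "curl -f http://localhost:8080/health || exit 1" \
  --health-interval 10s \
  --health-retries 3 \
  ddewaele/springboot.crud.sample:latest
```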

Docker-swarm >> Cannot connect to the docker engine endpoint

Submitted by 懵懂的女人 on 2019-12-04 13:33:34
Question: Docker version 1.9.1, Swarm version 1.0.1. Why, after connecting 3 VMs (bridged network) to the swarm, does "docker info" show every node with status Pending? One of the three hosts is the manager, and all output below is from that host. I don't know where to look. Running swarm --debug manage token://XXXXX outputs:

    INFO[0000] Listening for HTTP  addr=127.0.0.1:2375 proto=tcp
    DEBU[0000] Failed to validate pending node: Cannot connect to the docker engine endpoint  Addr=10.32.1.38:2375
    DEBU[0000] Failed to validate pending node: …
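A hedged sketch of the usual cause with classic (pre-swarm-mode) Swarm: the agent registers the node at its IP on port 2375, but the Docker daemon on that node is not listening on TCP, so the manager cannot validate it. The address below reuses the one from the log; everything else is an assumption.

```sh
# On each node, make the engine listen on TCP as well as the local socket
# ("docker daemon" is the 1.9-era command; an unauthenticated 2375 is only
# reasonable on a trusted/bridged lab network).
docker daemon -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# Quick check from the manager that the endpoint is actually reachable:
docker -H tcp://10.32.1.38:2375 info
```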

Host environment variables with docker stack deploy

Submitted by 两盒软妹~` on 2019-12-04 12:13:48
Question: I was wondering if there is a way to use environment variables taken from the host where the container is deployed, instead of those taken from where the docker stack deploy command is executed. For example, imagine the following docker-compose.yml launched on a three-node Docker Swarm cluster:

    version: '3.2'
    services:
      kafka:
        image: wurstmeister/kafka
        ports:
          - target: 9094
            published: 9094
            protocol: tcp
            mode: host
        deploy:
          mode: global
        environment:
          KAFKA_JMX_OPTS: "-Djava.rmi.server.hostname=$ …
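A possible direction, offered only as a sketch rather than the thread's answer: docker service create can expand Go-template placeholders per task, on the node that runs it, so the node's hostname can be injected without relying on host-side environment variables. The values below mirror the truncated compose file but are otherwise assumptions.

```sh
# Sketch: {{.Node.Hostname}} is resolved on whichever node runs each task.
docker service create \
  --name kafka \
  --mode global \
  --publish mode=host,target=9094,published=9094 \
  --env KAFKA_JMX_OPTS="-Djava.rmi.server.hostname={{.Node.Hostname}}" \
  wurstmeister/kafka
```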

Docker: Swarm worker nodes not finding locally built image

Submitted by 微笑、不失礼 on 2019-12-04 11:07:51
Question: Maybe I missed something, but I built a local Docker image. I have a 3-node swarm up and running, two workers and one manager, and I use labels as a constraint. When I launch a service on one of the workers via the constraint, it works perfectly if the image is public. That is, if I run:

    docker service create --name redis --network my-network --constraint node.labels.myconstraint==true redis:3.0.7-alpine

then the redis service is scheduled on one of the worker nodes and is fully functional. Likewise, if …
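The excerpt stops before the failing case, but the usual remedy is worth sketching: worker nodes pull images themselves, so a locally built image has to be pushed to a registry they can reach. The registry port, image name and service name below are placeholders.

```sh
# Run a throwaway registry as a swarm service, push the local build to it,
# then reference the registry-qualified name when creating the service.
docker service create --name registry --publish 5000:5000 registry:2

docker build -t 127.0.0.1:5000/myimage:latest .
docker push 127.0.0.1:5000/myimage:latest

docker service create --name myservice \
  --network my-network \
  --constraint node.labels.myconstraint==true \
  127.0.0.1:5000/myimage:latest
```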

Docker swarm multiple managers and workers Vs

Submitted by ♀尐吖头ヾ on 2019-12-04 10:21:04
I have a 3-node Docker swarm cluster. We might want to have 2 managers. I know that at any one time there is only one leader. Since it is a 3-node cluster, I am trying to find some literature to understand the pros and cons of multiple managers. I need this because, in my 3-node cluster, if I have 2 managers and 1 worker, what is the downside compared with simply making all 3 nodes managers? Any thoughts would be helpful. A Docker swarm with two managers is not recommended. Why? Docker swarm implements Raft consensus: Raft tolerates up to (N-1)/2 failures and requires a majority (quorum) of (N/2)+1 …
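To make the quorum arithmetic concrete (an illustration, not part of the quoted answer; the node names are placeholders):

```sh
# Raft quorum = floor(N/2) + 1; failures tolerated = floor((N-1)/2)
#   1 manager  -> quorum 1, tolerates 0 manager failures
#   2 managers -> quorum 2, tolerates 0 failures (no safer than 1, more to break)
#   3 managers -> quorum 2, tolerates 1 failure
# So on a 3-node cluster the usual choice is to promote all three nodes:
docker node promote node2 node3
```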

Any reasons to not use Docker Swarm (instead of Docker-Compose) on a single node?

Submitted by 戏子无情 on 2019-12-04 09:27:33
Question: There's Docker Swarm (now built into Docker) and Docker Compose. People seem to use Docker Compose when running containers on a single node only. However, Docker Compose doesn't support any of the deploy config values (see https://docs.docker.com/compose/compose-file/#deploy), which include mem_limit and cpus, and those seem nice/important to be able to set. So perhaps I should use Docker Swarm even though I'm deploying on a single node only? Also, the installation instructions …
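For illustration only (stack, service and file names are placeholders): a single-node swarm is just docker swarm init followed by docker stack deploy, at which point the deploy resource limits the question alludes to take effect.

```sh
# Sketch: turn the single machine into a one-node swarm, then deploy a stack
# whose deploy.resources limits (the swarm-mode counterpart of mem_limit/cpus)
# are honoured.
docker swarm init

cat > docker-stack.yml <<'EOF'
version: '3.2'
services:
  web:
    image: nginx:alpine
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 256M
EOF

docker stack deploy -c docker-stack.yml demo
```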

Can Docker containers run in Windows IoT Core?

Submitted by ぐ巨炮叔叔 on 2019-12-04 09:19:28
Question: Is there a way to run a Docker container on Windows IoT Core? I have seen it can be used on Azure, Windows Server and desktop Windows 10, but there is no evidence about Windows IoT Core, and I am not sure whether any of the existing docker-engine builds is compatible with IoT Core or whether it is simply not possible. Answer 1: Azure IoT Edge v2 launched in Public Preview last Friday with out-of-the-box support for native Windows containers! There is even a how-to for deploying on Windows IoT …

Pros and Cons of running all Docker Swarm nodes as Managers?

Submitted by 我怕爱的太早我们不能终老 on 2019-12-04 08:37:09
Question: I am considering building out a Docker Swarm cluster. To keep things both simple and relatively fault-tolerant, I thought about simply running 3 nodes as managers. What are the trade-offs of not using any dedicated worker nodes? Is there anything I should be aware of that might not be obvious? I found a GitHub issue that asks a similar question, but the answer is a bit ambiguous to me. It mentions that performance may be worse. It also mentions that it will take longer …
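Not from the linked issue, but a small hedged sketch of what tends to matter when every node is a manager: keeping an eye on manager reachability and leaving resource headroom so application tasks cannot starve the manager process. The image name and values are placeholders.

```sh
# The MANAGER STATUS column shows which managers are Leader / Reachable / Unreachable.
docker node ls

# Reserve and cap resources on services so workloads sharing a manager node
# leave headroom for the Raft and scheduling work the manager performs.
docker service create --name app \
  --reserve-memory 256M \
  --limit-memory 512M \
  --limit-cpu 0.5 \
  myimage:latest
```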

How to directly mount an NFS share/volume in a container using docker compose v3

Submitted by 泪湿孤枕 on 2019-12-04 07:33:54
Question: I have a v3 compose file in which 3 services share/use the same volume. When using swarm mode we need to create extra containers and volumes to manage our services across the cluster. I am planning to use an NFS server so that a single NFS share gets mounted directly on all the hosts within the cluster. I have found the two ways below of doing it, but they need extra steps to be performed on the Docker host: mount the NFS share using "fstab" or the "mount" command on the host and then use …
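The excerpt ends before the second option, but the commonly cited compose-v3 approach that avoids host-side mounts is worth a hedged sketch: declare a named volume that the local driver mounts over NFS on whichever node runs the task. The server address, export path and image are placeholders.

```sh
# Sketch: the volume is defined in the stack file itself, so no fstab entry
# or manual mount is needed on the hosts.
cat > docker-stack.yml <<'EOF'
version: '3.2'
services:
  app:
    image: nginx:alpine
    volumes:
      - nfs-data:/usr/share/nginx/html
volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.10,rw
      device: ":/exports/data"
EOF

docker stack deploy -c docker-stack.yml demo
```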