docker-swarm

Docker services stop communicating after some time

半城伤御伤魂 submitted on 2019-12-03 12:12:28
I have six containers running in Docker Swarm: Kafka+Zookeeper, MongoDB, A, B, C, and Interface. Interface is the main access point from the public network; it is the only container that publishes a port, 5683. The Interface container connects to A, B, and C during startup. I am using a docker-compose file plus docker stack deploy, and each service has a name which Interface uses as the host to reach it. Everything starts successfully and works fine. After some time (20 mins, 1 h, ...), I am no longer able to make requests to Interface. Interface receives my requests, but the application has lost its connection to service A, B, C, or all of them. If I
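For reference, a minimal compose sketch of the topology described, with service-name-based discovery on an overlay network (the image names and the backend network are assumptions; only the service layout and port 5683 come from the question):

    version: '3'
    services:
      interface:
        image: myorg/interface    # assumed image name
        ports:
          - "5683:5683"           # the only published port
        networks: [backend]
      a:
        image: myorg/service-a    # assumed; reachable from interface as host "a"
        networks: [backend]
      b:
        image: myorg/service-b    # assumed
        networks: [backend]
      c:
        image: myorg/service-c    # assumed
        networks: [backend]
    networks:
      backend:
        driver: overlay

With a layout like this, Interface resolves A, B, and C through swarm's built-in DNS, so a connection that dies after roughly 15-20 minutes of idling often points at long-lived idle TCP connections being dropped by swarm's IPVS load balancer rather than at the compose file itself.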

docker swarm init could not choose an IP address error

删除回忆录丶 submitted on 2019-12-03 10:03:36
I am experimenting with Docker Swarm using Docker Desktop for Mac. I tried this:

    docker-machine create -d virtualbox node-1
    docker-machine create -d virtualbox node-2
    docker-machine create -d virtualbox node-3
    eval $(docker-machine env node-1)
    docker swarm init \
      --secret my-secret \
      --auto-accept worker \
      --listen-addr $(docker-machine ip node-1):2377

The last command (docker swarm init) returns this error:

    Error response from daemon: could not choose an IP address to advertise since this system has multiple addresses

I have no idea what's going on. Does anyone have an idea how to debug this? Update 2017
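The error means the daemon sees more than one network interface and refuses to guess which address other nodes should use. The usual fix is to pass --advertise-addr explicitly; a sketch, assuming the VirtualBox IP is the one the other nodes should reach:

    docker swarm init \
      --advertise-addr $(docker-machine ip node-1) \
      --listen-addr $(docker-machine ip node-1):2377

Note that --secret and --auto-accept existed only in the 1.12 release candidates and were dropped before the final release (replaced by join tokens), so current Docker versions will reject those flags as well.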

Docker: Swarm worker nodes not finding locally built image

拜拜、爱过 submitted on 2019-12-03 08:05:18
Maybe I missed something, but I built a local Docker image. I have a 3-node swarm up and running: two workers and one manager. I use labels as a constraint. When I launch a service onto one of the workers via the constraint, it works perfectly if that image is public. That is, if I do:

    docker service create --name redis --network my-network \
      --constraint node.labels.myconstraint==true redis:3.0.7-alpine

then the redis service is sent to one of the worker nodes and is fully functional. Likewise, if I run my locally built image WITHOUT the constraint, since my manager is also a worker, it gets
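Worker nodes pull images themselves, so an image that exists only in the manager's local cache is invisible to them. The standard fix is to push the image to a registry every node can reach; a sketch using a registry run inside the swarm (myimage is a placeholder name):

    docker service create --name registry --publish 5000:5000 registry:2
    docker tag myimage localhost:5000/myimage
    docker push localhost:5000/myimage
    docker service create --name myservice --network my-network \
      --constraint node.labels.myconstraint==true localhost:5000/myimage

localhost:5000 works from every node here because the routing mesh publishes the registry's port on all of them.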

How to specify an iterator in the volume path when using docker-compose to scale up service?

牧云@^-^@ submitted on 2019-12-03 06:17:47
Background: I'm using docker-compose to place a Tomcat service into a Docker Swarm cluster, but I'm struggling with how to handle the logging directory, given that I want to scale the service up yet keep each replica's logging directory unique. Consider this (obviously made-up) docker-compose file, which simply starts Tomcat and mounts a logging filesystem in which to capture the logs:

    version: '2'
    services:
      tomcat:
        image: "tomcat:latest"
        hostname: tomcat-example
        command: /start.sh
        volumes:
          - "/data/container/tomcat/logs:/opt/tomcat/logs,z"

Versions: docker 1.11 docker
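Compose v2 has no per-replica iterator in volume paths, but if the service is deployed in swarm mode instead, recent Docker versions accept Go-template placeholders such as {{.Task.Slot}} in the --mount source, which gives every replica its own volume. A sketch (the volume naming is an assumption; the template syntax is Docker's):

    docker service create --name tomcat --replicas 3 \
      --mount type=volume,source='tomcat-logs-{{.Task.Slot}}',destination=/opt/tomcat/logs \
      tomcat:latest

Replica 1 then writes to volume tomcat-logs-1, replica 2 to tomcat-logs-2, and so on.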

How is Docker Swarm different than Kubernetes?

江枫思渺然 submitted on 2019-12-03 04:14:31
Question: I find Docker Swarm and Kubernetes quite similar, and then there is Docker, which is a company, while the above two are container clustering tools. So what exactly are all these tools, and what are the differences between them? Answer 1: There are lots of articles out there that explain the differences. In a nutshell: both are trying to solve the same problem, container orchestration over a large number of hosts. Essentially these problems can be broken down like so: scheduling containers across multiple hosts

docker swarm - how to balance already running containers in a swarm cluster?

大憨熊 submitted on 2019-12-03 03:36:54
I have a Docker Swarm cluster with 2 nodes on AWS. I stopped both instances and then started the swarm manager first, followed by the worker. Before stopping the instances, I had a service running with 4 replicas distributed between manager and worker. When I started the swarm manager node first, all replica containers started on the manager itself and never moved to the worker at all. How do I rebalance them? Isn't the swarm manager responsible for doing this when the worker starts? Swarm currently (18.03) does not move or replace containers when new nodes are started, if services are in the default "replicated mode".
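A common workaround is to force a rolling update, which reschedules the service's tasks and lets the scheduler spread them across the nodes that are now available (my-service is a placeholder name):

    docker service update --force my-service

Scaling the service down and back up achieves a similar redistribution:

    docker service scale my-service=2
    docker service scale my-service=4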

Any reasons to not use Docker Swarm (instead of Docker-Compose) on a single node?

孤街醉人 submitted on 2019-12-03 03:15:12
There's Docker Swarm (now built into Docker) and Docker-Compose. People seem to use Docker-Compose when running containers on a single node only. However, Docker-Compose doesn't support any of the deploy config values, see https://docs.docker.com/compose/compose-file/#deploy , which include mem_limit and cpus, which seem nice/important to be able to set. So maybe I should use Docker Swarm, even though I'm deploying on a single node only? Then the installation instructions would also be simpler for other people to follow (they won't need to install Docker-Compose). But maybe there
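A single-node swarm is a supported setup, and it takes only two commands to go from a compose file to a running stack that honors the deploy keys. A sketch, assuming a compose v3 file with resource limits:

    docker swarm init
    docker stack deploy -c docker-compose.yml mystack

where docker-compose.yml might look like:

    version: '3'
    services:
      web:
        image: nginx:alpine
        deploy:
          resources:
            limits:
              cpus: '0.50'
              memory: 256M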

How to set up Hadoop in Docker Swarm?

限于喜欢 submitted on 2019-12-03 03:11:59
I would like to be able to start a Hadoop cluster in Docker, distributing the Hadoop nodes across different physical nodes using Swarm. I have found the sequenceiq image that lets me run Hadoop in a Docker container, but it doesn't allow me to use multiple nodes. I have also looked at the Cloudbreak project, but it seems to need an OpenStack installation, which seems like overkill: Swarm alone should be enough to do what we need. I also found this Stack Overflow question+answer, which relies on Weave; that needs sudo rights, which our admin won't give to
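Swarm mode's built-in overlay networking removes the need for Weave or OpenStack for the networking part. A rough sketch of the shape such a deployment could take (my-hadoop-image and the node hostname are placeholders, not a tested Hadoop setup):

    docker network create -d overlay hadoop-net
    docker service create --name namenode --network hadoop-net \
      --constraint node.hostname==host1 my-hadoop-image namenode
    docker service create --name datanode --network hadoop-net \
      --replicas 3 my-hadoop-image datanode

Each datanode can then reach the namenode by the DNS name namenode on the overlay network.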

Docker Data Volume Container - Can I share across swarm

淺唱寂寞╮ submitted on 2019-12-02 19:06:32
I know how to create and mount a data volume container to multiple other containers using --volumes-from, but I do have a few questions regarding its usage and limitations. Situation: I am looking to use a data volume container to store user-uploaded images for my web application. This data volume container will be used/mounted by many other containers running the web frontend. Questions: Can data volume containers be used/mounted in containers residing on other hosts within a Docker swarm? How is the performance? Is it recommended to structure things this way? Is there a better way to
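--volumes-from only works between containers on the same host, so it does not span a swarm. The usual cross-host approach is a named volume backed by shared storage; a sketch using the local driver's NFS options (the server address and export path are assumptions, and the volume create must be run on every node, since the local driver is node-scoped):

    docker volume create --driver local \
      --opt type=nfs \
      --opt o=addr=10.0.0.10,rw \
      --opt device=:/exports/uploads \
      uploads

    docker service create --name web \
      --mount type=volume,source=uploads,destination=/var/www/uploads \
      nginx:alpine

Any node that schedules a web task then mounts the same NFS export, so all frontend replicas see the same uploaded images.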