docker-swarm

How to directly mount NFS share/volume in container using docker compose v3

Submitted by 做~自己de王妃 on 2019-12-02 14:18:53
I have a v3 compose file in which three services share the same volume. In swarm mode we need to create extra containers and volumes to manage our services across the cluster, so I am planning to use an NFS server, mounting a single NFS share directly on all the hosts in the cluster. I have found the two approaches below, but both need extra steps on each Docker host:
- Mount the NFS share with "fstab" or the "mount" command on the host, then use it as a host volume for the Docker services.
- Use the Netshare plugin - https://github.com/ContainX/docker
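A third option is to let Docker's built-in local volume driver perform the NFS mount itself, so no fstab entry or plugin is needed on the hosts. A minimal sketch; the server address 192.168.1.100 and export path /exports/data are placeholders for your environment:

```shell
# Create a volume whose local driver mounts the NFS export on demand.
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.100 \
  --opt device=:/exports/data \
  nfs-data

# Services can declare the same mount inline; every node that runs a
# task mounts the share itself, with nothing pre-installed on the host.
docker service create \
  --name web \
  --mount type=volume,source=nfs-data,target=/data,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.100,volume-opt=device=:/exports/data \
  nginx
```

Compose file format 3.2+ accepts the same `type: nfs` options under a top-level `volumes:` entry via `driver_opts`, which keeps the whole setup inside the stack file.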

How to assign different port to container replicas in docker swarm

Submitted by 非 Y 不嫁゛ on 2019-12-02 09:22:19
Question: We are deploying a Storm supervisor in a Docker container, in Docker swarm mode with 3 replicas. We now want to access the supervisor logs through a browser. We have exposed port 8080, on which we can reach the Storm UI; this works fine. Storm also exposes its log files on port 8000. As we have only one nimbus and 3 supervisors, accessing the nimbus logs through port 8000 was pretty easy. The problem is with the supervisors, which are deployed as a docker swarm service. And in docker swarm service
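A common workaround is host-mode publishing: with `mode=host`, each task binds the port on the node it lands on rather than going through the routing mesh, so every supervisor's logviewer is reachable at its own node's IP on port 8000. A sketch (the image name is a placeholder; at most one task per node can bind the port):

```shell
# Each replica publishes 8000 directly on the node it runs on, so
# http://<node-ip>:8000 reaches that node's supervisor logs.
docker service create \
  --name storm-supervisor \
  --replicas 3 \
  --publish mode=host,target=8000,published=8000 \
  storm
```

The trade-off is losing the routing mesh for that port: you must know (or load-balance across) the node IPs yourself, which is exactly what per-node log access needs here.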

how to get secrets from broken docker swarm

Submitted by 青春壹個敷衍的年華 on 2019-12-02 04:28:51
Question: My swarm server is broken (Linux system error); sadly it was the only node. I read https://docs.docker.com/v17.09/engine/swarm/admin_guide/#back-up-the-swarm so I tried to back up /var/lib/docker/swarm and restore it on a newly set up Docker server, as below. The new Docker daemon works fine, but the swarm features do not: $ docker service ls Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again. I think I need to force re-init the swarm manager: docker swarm init -
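The documented recovery path for a single-manager swarm is to restore the backup into place and re-initialise with `--force-new-cluster`. A sketch, assuming the default data root and a backup directory at the placeholder path /backup/swarm:

```shell
systemctl stop docker                      # swarm state must not change mid-copy
rm -rf /var/lib/docker/swarm               # discard the broken state
cp -a /backup/swarm /var/lib/docker/swarm  # restore the backed-up raft store
systemctl start docker
docker swarm init --force-new-cluster      # rebuild a one-manager cluster in place
docker secret ls                           # secrets stored in the raft log reappear
```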

run docker exec from swarm manager

Submitted by 眉间皱痕 on 2019-12-01 20:08:14
Question: I have two worker nodes, worker1 and worker2, and one swarm manager. I run all the services on the worker nodes only. From the manager I need to run docker exec to access some of the containers created on the worker nodes, but I keep getting an error that the service is not recognized. I know I can run docker exec on either worker node and it works fine, but I don't want to have to find out which node the service is running on and then ssh to that node to run the docker exec command. Is there
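There is no built-in cluster-wide docker exec, but the lookup-and-ssh dance can be scripted from the manager. A hypothetical helper, assuming node hostnames reported by the swarm are ssh-resolvable from the manager and that container names contain the service name:

```shell
SERVICE=myservice   # placeholder service name

# Ask the manager which node runs a live task of the service...
NODE=$(docker service ps "$SERVICE" \
        --filter 'desired-state=running' \
        --format '{{.Node}}' | head -n1)

# ...then hop there and exec into the first matching container.
ssh "$NODE" "docker exec -it \$(docker ps -q -f name=$SERVICE | head -n1) sh"
```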

What is the difference between running a separate service discovery server and integrating it into the cluster machines in Docker Swarm

Submitted by 风流意气都作罢 on 2019-12-01 12:31:32
I am having trouble understanding the need for a separate service-discovery server when we could register a slave node with the master node at slave start-up through whatever protocol. Hosting another service seems redundant to me. VonC: Docker Swarm is there to create a cluster of hosts running Docker and to schedule containers across the cluster. It does not include service discovery, which is provided by a backend service such as etcd, consul or zookeeper. The first problem: service registration and discovery is an infrastructure concern, not an application concern. The second
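For the standalone (pre-1.12) Swarm this answer is about, registration really is delegated to the external backend: the manager and every agent point at the same discovery URL, so either side can come and go independently. A rough sketch with a Consul backend at a placeholder address:

```shell
# On the manager: schedule against whatever nodes the backend reports.
docker run -d -p 4000:4000 swarm manage -H :4000 consul://192.168.1.10:8500

# On each slave: advertise this engine to the same backend at start-up.
docker run -d swarm join --advertise=192.168.1.20:2375 consul://192.168.1.10:8500
```

Because the backend is the shared source of truth, a restarted manager rediscovers all slaves without each of them having to re-register with it directly.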

Use docker-compose with docker swarm

Submitted by 你离开我真会死。 on 2019-12-01 12:23:54
I'm using Docker 1.12.1 and have a simple docker-compose script:

version: '2'
services:
  jenkins-slave:
    build: ./slave
    image: jenkins-slave:1.0
    restart: always
    ports:
      - "22"
    environment:
      - "constraint:NODE==master1"
  jenkins-master:
    image: jenkins:2.7.1
    container_name: jenkins-master
    restart: always
    ports:
      - "8080:8080"
      - "50000"
    environment:
      - "constraint:NODE==node1"

I run this script with docker-compose -p jenkins up -d. This creates my two containers, but only on my master (the host from which I execute the command). I would expect one to be created on the master and one on the node. I also tried
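With Docker 1.12's swarm mode, the old standalone-Swarm "constraint:NODE==" environment variables are ignored; placement is expressed per service instead, and docker-compose alone only talks to the local engine. A rough swarm-mode equivalent of the compose file above, as plain service commands:

```shell
# Pin each service to a node by hostname via a placement constraint.
docker service create --name jenkins-slave \
  --constraint 'node.hostname == master1' \
  jenkins-slave:1.0

docker service create --name jenkins-master \
  --publish 8080:8080 \
  --constraint 'node.hostname == node1' \
  jenkins:2.7.1
```

The same constraints can live in a v3 compose file under `deploy.placement.constraints` and be applied with `docker stack deploy`.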

Can Docker 1.12 in “swarm mode” provide “a single, virtual Docker host”?

Submitted by 岁酱吖の on 2019-12-01 06:02:09
One of the nifty features of the original "Docker Swarm" was that it "turns a pool of Docker hosts into a single, virtual Docker host", allowing tools (such as the docker CLI and docker-compose) to be agnostic about whether they were operating against a single Docker Engine instance or a Swarm cluster. Docker 1.12 brings an integrated "swarm mode", which is an exciting new take on Docker orchestration. But have we lost the "cluster as virtual Docker host" feature in the process? Using docker run against a swarm-mode master only ever seems to start containers on the master node itself.
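In swarm mode, cluster-wide scheduling goes through the service API rather than plain docker run, so the rough equivalent of "run this container somewhere on the cluster" is:

```shell
docker service create --name web nginx  # the scheduler picks a node
docker service ps web                   # shows which node the task landed on
```

Plain `docker run` against a manager, by contrast, is an ordinary local engine call, which is consistent with the behaviour described above.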

Docker swarm run tasks only in workers

Submitted by 若如初见. on 2019-12-01 04:14:20
Say that we are working in swarm mode and we have three nodes: manager1, worker1 and worker2. Is it possible to create a service and specify that its tasks only run on the workers (worker1 and worker2) and not on the manager (manager1)? I am running the following command to create the service: docker-machine ssh manager1 "docker service create --network dognet --name dog-db redis" and when I ps the service with docker-machine ssh manager1 "docker service ps dog-db" I get:

ID                          NAME      IMAGE  NODE      DESIRED STATE  CURRENT STATE  ERROR
3kvfpbhl6fj0qwtglc5k7sbkw   dog-db.1  redis  manager1  Running        Preparing 4
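This can be done with a placement constraint on the node role, or by draining the manager so the scheduler skips it entirely. A sketch against the same service:

```shell
# Constrain tasks to worker nodes only.
docker service create \
  --network dognet \
  --name dog-db \
  --constraint 'node.role == worker' \
  redis

# Alternative: keep manager1 out of scheduling for every service.
docker node update --availability drain manager1
```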
