docker-swarm

Docker GELF driver env option

筅森魡賤 submitted on 2019-12-11 00:47:03

Question: I'm having an issue getting the --log-opt env=env1,env2 option to work with Docker 1.12 swarm mode and Graylog. All of my logs are being sent fine and the tag is coming through. However, I see nothing coming in at all from the env setting. I also tried using --log-opt labels=dev but had the same issue. The logs are being shipped to Graylog, but the values don't appear in any of the log fields that come through. Any ideas on what I'm doing wrong here? docker service create --log-driver=gelf --log
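A detail worth checking: the GELF driver only attaches an `_env1`-style field when the variable is actually set on the container itself. A minimal sketch of a service that should satisfy that condition (the variable names and the Graylog address are placeholders taken from the question, not verified values):

```shell
# Sketch: for --log-opt env=... to produce a field, the named variable
# must exist in the container's environment, e.g. via --env.
docker service create \
  --name web \
  --env env1=production \
  --log-driver gelf \
  --log-opt gelf-address=udp://graylog.example.com:12201 \
  --log-opt env=env1,env2 \
  nginx:alpine
```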

Communication between containers in docker swarm

霸气de小男生 submitted on 2019-12-10 20:46:46

Question: I would like to communicate between master and worker nodes via a WebSocket connection in Docker swarm mode. The master node should be reachable from the worker node, but the connection fails. I would also like to connect to the master node from my host machine via HTTP; that connection fails as well. Here is my docker-compose.yml:

version: '3'
services:
  master:
    image: master
    build:
      context: .
      dockerfile: ./docker/master/Dockerfile
    env_file:
      - ./config.env
    command: ['node', './src/master/']
    ports
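For cross-node communication in swarm mode, services generally need to share an overlay network and address each other by service name; published ports are what make a service reachable from the host. A hedged sketch of the relevant pieces (service names, port 3000, and the registry image names are illustrative, not taken from the truncated file):

```yaml
# Sketch: services on the same overlay network resolve each other by
# service name, e.g. ws://master:3000 from inside "worker". Note that
# `docker stack deploy` ignores "build"; images must be prebuilt and pushed.
version: '3'
services:
  master:
    image: registry.example.com/master
    ports:
      - "3000:3000"   # published via the ingress mesh, reachable from the host
    networks:
      - appnet
  worker:
    image: registry.example.com/worker
    networks:
      - appnet
networks:
  appnet:
    driver: overlay
```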

In Docker Swarm mode is there any point in replicating a service more than the number of hosts available?

爷,独闯天下 submitted on 2019-12-10 19:27:00

Question: I have been looking into the new Docker Swarm mode that will be available in Docker 1.12. In this Docker Swarm Mode Walkthrough video, they create a simple Nginx service composed of a single Nginx container. In the video they have 4 nodes in the Swarm cluster. During the scaling demonstration they increase the replication factor to 10, thus creating 10 copies of the Nginx container across all 4 machines in the cluster. I get that the video is just a demonstration, but in the real

Is it possible to create container in multiple host using a single docker compose file?

依然范特西╮ submitted on 2019-12-10 16:57:26

Question: I have to create containers on multiple hosts, and I have a Dockerfile for each container. I found that docker-compose can be used to run multiple containers from a single YAML file. I have to run containerA on HostA, containerB on HostB, and so on. Is it possible to achieve this using docker-compose? Or what is the best way to create containers on different hosts using the Dockerfiles?

Answer 1: No, docker-compose alone won't achieve this. Managing containers across multiple hosts is generally the job
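Once the hosts are joined into a swarm, a single compose file deployed with `docker stack deploy -c docker-compose.yml mystack` can pin each service to a specific node using placement constraints. A sketch under the assumption that the node hostnames are literally HostA/HostB and the images are prebuilt (all names here are placeholders):

```yaml
# Sketch: one stack file, one service per host, pinned by hostname.
version: '3'
services:
  containerA:
    image: imageA
    deploy:
      placement:
        constraints:
          - node.hostname == HostA
  containerB:
    image: imageB
    deploy:
      placement:
        constraints:
          - node.hostname == HostB
```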

tcp_keepalive_time in docker container

空扰寡人 submitted on 2019-12-10 15:43:20

Question: I have a docker host that has the net.ipv4.tcp_keepalive_time kernel parameter set to 600, but when a container runs, it uses a different value:

$ sysctl net.ipv4.tcp_keepalive_time
net.ipv4.tcp_keepalive_time = 600
$ docker run --rm ubuntu:latest sysctl net.ipv4.tcp_keepalive_time
net.ipv4.tcp_keepalive_time = 7200

Why is this, and how can I change this value without having to pass the --sysctl option? The reason I cannot pass --sysctl in my case is that this host is a docker swarm container and
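The discrepancy comes from namespacing: net.ipv4.tcp_keepalive_time is scoped to the network namespace, so a new container gets the kernel's built-in default (7200) rather than the host's setting. Compose can set it per service; a hedged sketch (note that swarm services honoring `sysctls` requires a fairly recent engine, around Docker 19.03 with compose format 3.8, which is an assumption to verify against your version):

```yaml
# Sketch: set the keepalive value inside the container's own namespace.
version: '3.8'
services:
  app:
    image: ubuntu:latest
    command: sysctl net.ipv4.tcp_keepalive_time
    sysctls:
      net.ipv4.tcp_keepalive_time: 600
```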

Getting invalid mount config for type “bind”: bind source path does not exist in docker

孤人 submitted on 2019-12-10 13:33:28

Question: I am trying to deploy the following docker-compose file into a docker swarm cluster.

version: '3.2'
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - 8080:8080
    volumes:
      - ./data_jenkins:/var/jenkins_home
    deploy:
      mode: replicated
      replicas: 1

I do have data_jenkins in the same location as the docker-compose file and am passing that path as a volume. So why is it throwing "source path does not exist"? What exactly is the problem? Also, if the directory does not exist, shouldn't -v have created it?
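With `docker stack deploy`, a bind-mount source like ./data_jenkins must already exist on whichever node the task is scheduled to; unlike plain `docker run -v`, swarm will not create it for you. One way around the per-node dependency is a named volume, sketched here (the volume name is illustrative):

```yaml
# Sketch: replace the bind mount with a named volume that the engine
# creates on the node where the task lands.
version: '3.2'
services:
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - 8080:8080
    volumes:
      - jenkins_home:/var/jenkins_home
    deploy:
      mode: replicated
      replicas: 1
volumes:
  jenkins_home:
```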

How does Docker Swarm load balance?

血红的双手。 submitted on 2019-12-10 13:32:52

Question: I have a cluster of 10 Swarm nodes started via the docker swarm join command. If I want to scale a service to 15 replicas via docker service create --replicas 15, how does Docker Swarm know where to start the containers? Is it round-robin, or does it take compute resources (how much CPU/memory is being used) into consideration?

Answer 1: When you create a service or scale it in Swarm mode, the scheduler on the elected leader (one of the managers) will choose a node to run the service on. There are 3 strategies
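Regardless of the default strategy, you can influence placement explicitly with resource reservations and constraints; the scheduler will only pick nodes that satisfy them. A sketch (service and image names are illustrative):

```shell
# Sketch: reserve memory per task and restrict scheduling to worker nodes.
docker service create \
  --name web \
  --replicas 15 \
  --reserve-memory 256m \
  --constraint 'node.role == worker' \
  nginx:alpine
```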

Symfony 4 app works with Docker Compose but breaks with Docker Swarm (no login, profiler broken)

孤人 submitted on 2019-12-10 10:28:59

Question: I'm using Docker Compose locally with:

- app container: Nginx & PHP-FPM with a Symfony 4 app
- PostgreSQL container
- Redis container

It works great locally, but when deployed to the development Docker Swarm cluster, I can't log in to the Symfony app. The Swarm stack is the same as local, except for PostgreSQL, which is installed on its own server (not a Docker container). Using the profiler, I nearly always get the following error:

Token not found
Token "2df1bb" was not found in the database.

When I
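One plausible cause in a swarm deployment (an assumption, since the question is truncated): file-based sessions and profiler data are written inside a container's own filesystem, so a request served by a different or restarted replica no longer finds them. A common remedy is moving sessions into the Redis service already in the stack; a hedged Symfony 4 sketch (the Redis client service wiring is illustrative):

```yaml
# config/packages/framework.yaml -- sketch, store sessions in Redis
framework:
  session:
    handler_id: Symfony\Component\HttpFoundation\Session\Storage\Handler\RedisSessionHandler

# config/services.yaml -- sketch; '@app.redis' is a hypothetical service
# wrapping a \Redis client connected to the "redis" container
services:
  Symfony\Component\HttpFoundation\Session\Storage\Handler\RedisSessionHandler:
    arguments:
      - '@app.redis'
```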

How to setup multi-host networking with docker swarm on multiple remote machines

99封情书 submitted on 2019-12-10 10:28:26

Question: Before asking this question I read quite a few articles and Stack Overflow questions, but I couldn't get the right answer for my setup (perhaps it has already been answered). Here is the architecture I have been struggling to get to work. I have three physical machines, and I would like to set up a Docker swarm with multi-host networking so that I can run docker-compose. For example:

Machine 1 (Docker Swarm Manager, contains Consul) (192.168.5.11)
Machine 2 (Docker Swarm Node) (192.168.5.12)
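A point worth noting about this architecture: with swarm mode (Docker 1.12+), an external key-value store such as Consul is no longer required for multi-host overlay networking; the managers carry that state themselves. A sketch of the minimal setup using the IPs from the question (the join token is a placeholder printed by the init step):

```shell
# Sketch -- on Machine 1 (manager):
docker swarm init --advertise-addr 192.168.5.11
docker network create --driver overlay my-net

# On Machines 2 and 3 (workers), using the token from "swarm init":
docker swarm join --token <worker-token> 192.168.5.11:2377
```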

Is network security / encryption provided by default in docker swarm mode?

半城伤御伤魂 submitted on 2019-12-10 09:57:24

Question: In this document it says:

Overlay networking for Docker Engine swarm mode comes secure out of the box. You can also encrypt data exchanged between containers on different nodes on the overlay network. To enable encryption, when you create an overlay network pass the --opt encrypted flag:

$ docker network create --opt encrypted --driver overlay my-multi-host-network

So if all the containers are running on my-multi-host-network, is all the traffic between the containers encrypted