docker-swarm

Can't connect to postgres db - docker swarm

若如初见. Submitted on 2019-12-13 03:53:26
Question: I have difficulties connecting to my PostgreSQL database via IntelliJ. I am using this docker-compose file:

    version: '3'
    services:
      db:
        image: postgres
        environment:
          POSTGRES_DB: postgres
          POSTGRES_USER: postgres_user
          POSTGRES_PASSWORD: postgres_password
          PG_DATA: /var/lib/postgresql/data/pgdatai
        expose:
          - "5432"
        ports:
          - "5432"
        volumes:
          - /var/lib/postresql/db/
        deploy:
          placement:
            constraints:
              - node.hostname == vmAPT1

I deploy it with the command: docker stack deploy --compose-file docker-compose.yml
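With `ports: - "5432"` (no host part), swarm mode publishes the container port on a random host port via the routing mesh, which is easy to miss when pointing IntelliJ at port 5432. A minimal sketch of the relevant fragment with a fixed published port - the service name and credentials are taken from the question, the rest is illustrative:

```yaml
services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: postgres_user
      POSTGRES_PASSWORD: postgres_password
    ports:
      - "5432:5432"   # published:target - fixed host port instead of a random one
```

After redeploying, `docker service ls` shows the published port, and an external client can connect to any swarm node's IP on 5432 through the routing mesh.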

Multicast with Docker Swarm and overlay network

北城余情 Submitted on 2019-12-13 01:40:22
Question: I am testing an application that uses multicast for discovery. I created a Swarm cluster and an overlay network (docker network create -d overlay swarm-net) so that the containers share the same LAN across the several Swarm agent hosts. Discovery did not seem to be working, so I installed tshark. tshark shows the IP address of the node it runs on and the multicast address of the packets being sent, but it does not show any incoming multicast packets. Note that, as I don't know a better way to do so,

Docker swarm: How to manually set node names?

自古美人都是妖i Submitted on 2019-12-12 15:23:31
Question: Some background on my environment: I have Docker Swarm running on 3 Ubuntu 14.04 Vagrant boxes. The Swarm master is running on one machine (together with Consul) and the other two machines run Swarm workers joined to the master. I set up the environment following the documentation page https://docs.docker.com/swarm/install-manual/. It is working correctly, so that any docker -H :4000 <some_docker_command> run from my master machine works fine. Service discovery is active, as I am

Docker 1.12: Multiple replicas, single database

冷暖自知 Submitted on 2019-12-12 12:40:54
Question: With the introduction of the new 'swarm mode' in Docker 1.12, we have been trying to migrate our application to containers and make use of swarm mode's orchestration and clustering. Our application requires some initial database scripts to be run for it to start. We do not package the database inside our dockerized application, so that it can follow a stateless microservice architecture and multiple containers eventually talk to a single (at the moment) database instance. While
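One common pattern for this setup is to keep the database as a single-replica service and let only the stateless services scale; the official postgres image, for instance, executes *.sql and *.sh files from /docker-entrypoint-initdb.d on first start, which covers one-time init scripts. A sketch under those assumptions (image names and the hostname are illustrative, not from the question):

```yaml
version: '3'
services:
  db:
    image: postgres
    volumes:
      - ./init-scripts:/docker-entrypoint-initdb.d  # run once on first start
    deploy:
      replicas: 1                        # single database instance
      placement:
        constraints:
          - node.hostname == db-node     # pin the stateful service to one node
  app:
    image: my-app:latest                 # illustrative application image
    deploy:
      replicas: 3                        # stateless service scales freely
```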

Jenkins service in Docker swarm stays at 0/1 replicas

浪子不回头ぞ Submitted on 2019-12-12 09:48:54
Question: I'm trying to run a fault-tolerant Jenkins in a Docker swarm using the following command:

    docker service create --replicas 1 --name jenkins -p 8080:8080 -p 50000:50000 --mount src=/home/ubuntu/jenkins_home,dst=/var/jenkins_home jenkins:alpine

But when checking the service status and the running containers, I see that the replica count stays at 0.

    ubuntu@ip-172-30-3-81:~$ docker service create --replicas 1 --name jenkins -p 8080:8080 -p 50000:50000 --mount src=/home/ubuntu/jenkins_home,dst=/var/jenkins_home
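With `--mount` the type defaults to volume, and a frequent cause of 0/1 replicas here is that a bind mount was intended: a bind source must already exist (and be writable by the jenkins user, uid 1000) on whichever node the task lands on. A sketch of one workaround, assuming a single-node placement is acceptable - the constraint hostname is a placeholder, not from the question:

```shell
# Pin the service to the node that holds the data and make the bind explicit:
docker service create --replicas 1 --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  --mount type=bind,src=/home/ubuntu/jenkins_home,dst=/var/jenkins_home \
  --constraint 'node.hostname==<node-with-data>' \
  jenkins:alpine

# Ask the scheduler why a task is not starting:
docker service ps --no-trunc jenkins
```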

docker service replicas remain 0/1

六月ゝ 毕业季﹏ Submitted on 2019-12-12 09:29:04
Question: I am trying out Docker swarm with 1.12 on my Mac. I started 3 VirtualBox VMs and created a swarm cluster of 3, all fine.

    docker@redis1:~$ docker node ls
    ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
    2h1m8equ5w5beetbq3go56ebl    redis3    Ready   Active
    8xubu8g7pzjvo34qdtqxeqjlj    redis2    Ready   Active        Reachable
    cbi0lyekxmp0o09j5hx48u7vm *  redis1    Ready   Active        Leader

However, when I create a service, I see no errors, yet the replica count always displays 0/1:

    docker@redis1:~$ docker service create --replicas 1 --name
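Seeing no errors on create is normal: `docker service create` returns as soon as the service is accepted, and failures surface per task afterwards. A few diagnostic commands, assuming the service ended up named "redis" (the name is a guess from the truncated excerpt):

```shell
docker service ps --no-trunc redis      # per-task state and full error messages
docker service inspect --pretty redis   # desired vs. actual state
docker node ls                          # confirm every node is Ready/Active
```

A common culprit on multi-VM setups is that the image only exists on the node where it was built, so tasks scheduled on the other nodes fail to pull it.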

Docker services stops communicating after some time

北战南征 Submitted on 2019-12-12 07:47:24
Question: I have 6 containers running together in a Docker swarm: Kafka+Zookeeper, MongoDB, A, B, C, and Interface. Interface is the main access point from the public network - only this container publishes a port, 5683. The Interface container connects to A, B, and C during startup. I am using a docker-compose file + docker stack deploy; each service has a name, which the Interface uses as a host name. Everything starts successfully and works fine. After some time (20 min, 1 h, ...) I am no longer able to make requests to the Interface.

Use docker-compose with docker swarm

安稳与你 Submitted on 2019-12-12 07:27:13
Question: I'm using Docker 1.12.1 and I have a simple docker-compose script:

    version: '2'
    services:
      jenkins-slave:
        build: ./slave
        image: jenkins-slave:1.0
        restart: always
        ports:
          - "22"
        environment:
          - "constraint:NODE==master1"
      jenkins-master:
        image: jenkins:2.7.1
        container_name: jenkins-master
        restart: always
        ports:
          - "8080:8080"
          - "50000"
        environment:
          - "constraint:NODE==node1"

I run this script with docker-compose -p jenkins up -d. This creates my 2 containers, but only on my master (from where I execute
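The `constraint:NODE==...` environment entries belong to the classic standalone Swarm scheduler; `docker-compose up` talks to a single engine, and a swarm-mode cluster ignores them. In swarm mode the equivalent is a version 3 file with `deploy.placement`, deployed via `docker stack deploy`. A minimal sketch (note that `build` and `container_name` are ignored by stack deploy, so the image must be pre-built and available to every node):

```yaml
version: '3'
services:
  jenkins-master:
    image: jenkins:2.7.1
    ports:
      - "8080:8080"
    deploy:
      placement:
        constraints:
          - node.hostname == node1   # schedule this service on node1
```

Deployed with e.g. `docker stack deploy -c docker-compose.yml jenkins`.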

How to run my Python script on Docker?

放肆的年华 Submitted on 2019-12-12 07:12:23
Question: I am trying to run my Python script on Docker. I tried different ways to do it but was not able to get it running. My Python script is given below:

    import os
    print('hello')

I have already installed Docker on my Mac, but I want to know how I can build an image and push it to Docker, and after that pull it and run my script on Docker itself.
Answer 1: Alright, first create a specific project directory for your Docker image. For example: mkdir /home/pi/Desktop/teasr/capturing Copy your dockerfile
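The truncated answer above starts from a project directory and a Dockerfile. A minimal sketch of such a Dockerfile, assuming the two-line script is saved as script.py next to it (the file and tag names are illustrative):

```dockerfile
FROM python:3               # official Python base image
WORKDIR /app
COPY script.py .            # the script from the question
CMD ["python", "script.py"]
```

Built and run with `docker build -t hello-script .` followed by `docker run --rm hello-script`, which executes the script inside a container. Pushing to a registry is then `docker tag` plus `docker push` against your Docker Hub repository.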

docker stack deploy from compose file - all services one node?

∥☆過路亽.° Submitted on 2019-12-12 04:57:30
Question: I have a docker-compose file with 10 services (containers), configured with one instance of each service. When I execute the stack deploy, all 10 services go to one node (the manager). I understand that adding a second instance of a service will distribute it, but I want my 10 unique services distributed.
Answer 1: If you are using a private registry, it is important to share the login credentials with the worker nodes by using docker stack deploy --with-registry-auth
Answer 2: Don't have the reputation to
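The swarm scheduler does spread tasks by default, but workers that cannot pull the image (for example, from a private registry without shared credentials) leave the manager running everything. Placement constraints in the compose file make the intent explicit; a sketch, assuming the goal is to keep application tasks off the manager (service name and image are illustrative):

```yaml
version: '3'
services:
  web:                              # repeat the deploy block for each service
    image: nginx
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == worker     # never schedule on the manager
```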