docker-swarm

Windows Container swarm: published port not accessible

Posted by ⅰ亾dé卋堺 on 2019-12-24 10:09:03
Question: I'm using Windows containers and trying to create a Docker swarm. I created three virtual machines with Hyper-V, each running Windows Server 2016. The machines' IPs are:

```
windocker211 192.168.1.211
windocker212 192.168.1.212
windocker219 192.168.1.219
```

The swarm nodes are:

```
PS C:\ConsoleZ> docker node ls
ID                          HOSTNAME       STATUS  AVAILABILITY  MANAGER STATUS
4c0g0o0uognheugw4do1a1h7y   windocker212   Ready   Active
bbxot0c8zijq7xw4lm86svgwp * windocker219   Ready   Active        Leader
wftwpiqpqpbqfdvgenn787psj   windocker211   …
```
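Not part of the original excerpt, but a likely cause worth noting: Windows Server 2016 does not support the swarm ingress routing mesh, so a port published the default way is not reachable from outside the node. A commonly cited workaround is host-mode publishing; the service name and image below are hypothetical:

```shell
# Publish directly on each node's own network stack instead of the ingress mesh.
docker service create --name web \
  --publish mode=host,target=80,published=8080 \
  myorg/my-iis-site
```

The service then answers on each node's own address, e.g. `http://192.168.1.219:8080/`, rather than through a routed virtual address.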

Copying data from and to Docker containers

Posted by 谁说我不能喝 on 2019-12-24 08:57:07
Question: I have two Docker containers running on my system, and I want to copy data from one container to the other directly from my host. I know that to copy data between a container and the host we use:

```
docker cp <source path> <container id>:<path in container>
```

Now I am trying to copy data directly from one container to another. Is there any way to do that? I tried:

```
docker cp <container-1>:/usr/local/nginx/vishnu/vishtest.txt <container-2>:/home/smadmin/vishnusource/
```

but the …
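The excerpt is cut off, but the direct copy fails because `docker cp` accepts at most one container-side path per invocation. Two hedged workarounds, reusing the paths from the question:

```shell
# Option 1: stage the file on the host, then copy it into the second container.
docker cp container-1:/usr/local/nginx/vishnu/vishtest.txt /tmp/vishtest.txt
docker cp /tmp/vishtest.txt container-2:/home/smadmin/vishnusource/

# Option 2: stream it between the containers without touching the host filesystem.
docker exec container-1 cat /usr/local/nginx/vishnu/vishtest.txt |
  docker exec -i container-2 sh -c 'cat > /home/smadmin/vishnusource/vishtest.txt'
```

Option 2 assumes both containers have a shell and `cat` available (true for most Linux images).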

Troubleshooting onyx-kafka not writing to a topic; how to run Kafka in Docker swarm; error setting runtime volume size (/dev/shm)?

Posted by 泪湿孤枕 on 2019-12-24 07:33:10
Question: I'm trying to (i) troubleshoot a simple onyx-kafka job that is not writing to a topic. More details are given here, and you can try it out in this sample project. I think the reason is that there's only one Kafka node. So I tried (ii) launching Kafka (`confluentinc/cp-kafka:3.3.1`, with ZooKeeper `confluentinc/cp-zookeeper:3.3.1`) under Docker (17.09.0-ce, build afdb6d4) in swarm mode. But then I get this error:

```
Warning: space is running low in /dev/shm (shm) threshold=167,772,160 usable=58,716,160
```

A …
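The excerpt cuts off at the /dev/shm warning. Not from the original thread, but as background: `docker service create` has no `--shm-size` flag, so one way to enlarge /dev/shm for a swarm service is a tmpfs mount. The size and service name below are illustrative:

```shell
# Give the Kafka service a 1 GiB /dev/shm via a tmpfs mount (size in bytes).
docker service create \
  --name kafka \
  --mount type=tmpfs,destination=/dev/shm,tmpfs-size=1073741824 \
  confluentinc/cp-kafka:3.3.1
```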

How to make a non-hardcoded URL path in a Docker image to call a backend service?

Posted by 喜夏-厌秋 on 2019-12-24 06:40:50
Question: I'm new to Docker. Let me describe my scenario: I made two Docker images for a web application. One image is the front-end web layer for presentation, and the other is the back-end layer supplying a REST service, so I need to run two containers for these two images; the front-end calls services in the back-end. Right now I have to write the back-end's URL into the front-end's code and build the image... I don't think this is the right way for a microservice, because if my laptop's IP changes or others want to use my image …
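The question is cut off, but the usual answer is to avoid IPs entirely: on the network that Compose creates, Docker's embedded DNS resolves each service name, so the front-end can call `http://backend:8080` instead of a hard-coded address. A minimal sketch with hypothetical image names:

```shell
# Write a compose file in which the two services find each other by name.
cat > docker-compose.yml <<'EOF'
version: '3'
services:
  frontend:
    image: myorg/frontend
    ports:
      - "80:80"
  backend:
    image: myorg/backend
EOF
# Then: docker-compose up -d   (or docker stack deploy for swarm mode)
```

Inside the `frontend` container, the hostname `backend` resolves to the back-end container on any host, so the built image never embeds an IP.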

How to remove an image across all nodes in a Docker swarm?

Posted by ▼魔方 西西 on 2019-12-24 06:21:24
Question: On the local host I can remove an image using either `docker image rm` or `docker rmi`. What if my current host is a manager node in a Docker swarm and I wish to cascade this operation throughout the swarm? When I first created the Docker service, the image was pulled down on each node in the swarm. Removing the service did not remove the image, and all nodes retain a copy of it. It feels natural that if there's a way to "push" an image out to all the nodes, then there should be an equally …
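The question is cut off, but as background: Docker has no built-in swarm-wide image removal. A common workaround, assuming SSH access to every node and that each swarm hostname resolves from the manager, is to loop over the node list:

```shell
# Remove my-image:tag (a hypothetical name) on every node in the swarm.
for node in $(docker node ls --format '{{.Hostname}}'); do
  ssh "$node" docker image rm my-image:tag
done
```

Nodes garbage-collect nothing on their own, so each must be visited; tools like Ansible can replace the SSH loop at larger scale.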

How to log container in docker swarm mode

Posted by 旧街凉风 on 2019-12-23 07:03:46
Question: Is there a way to see the logs of containers created with `docker service create` in Docker swarm mode?

Answer 1: That feature was finally implemented in Docker 17.03. You can get the logs of a service running on different or multiple nodes with this command:

```
docker service logs -f {NAME_OF_THE_SERVICE}
```

You can get the name of the service with:

```
docker service ls
```

Note that this is an experimental feature (not production ready), and in order to use it you must enable experimental mode: …
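The answer is cut off at enabling experimental mode. A hedged sketch of one way to do that on a systemd-based Linux host (if you already have a `/etc/docker/daemon.json`, merge the key into it rather than overwriting the file as this command does):

```shell
# Turn on the daemon's experimental features and restart it.
echo '{ "experimental": true }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
# Verify: this should print "true".
docker version --format '{{.Server.Experimental}}'
```

In Docker 17.06 and later, `docker service logs` left experimental status and this step is no longer needed.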

Set up of Hyperledger fabric on 2 different PCs

Posted by 心已入冬 on 2019-12-22 13:02:58
Question: I need to run Hyperledger Fabric instances on four different machines: PC-1 should contain the CA and peers of ORG-1 in containers, PC-2 should contain the CA and peers of ORG-2, PC-3 should contain the orderer (solo), and PC-4 should run the Node API. Is my approach missing something? If not, how can I achieve this?

Answer 1: I would recommend that you look at the Ansible driver in the Hyperledger Cello project to manage deployment across multiple hosts/VMs. In short, you need to establish network visibility across the set of …
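The answer is cut off at "network visibility". Independent of Cello, a hedged sketch of one way to get cross-host container networking with Docker's own tooling, using an attachable overlay network (addresses and the network name are placeholders):

```shell
# On PC-1 (the manager):
docker swarm init --advertise-addr <PC-1-IP>
# On PC-2, PC-3 and PC-4, run the "docker swarm join ..." command printed above.
# Then create an attachable overlay network the Fabric containers can share:
docker network create --driver overlay --attachable fabric-net
```

Containers started with `--network fabric-net` on any of the four machines can then reach each other by container name, which Fabric's peer and orderer addresses rely on.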

How to change the service name generated by Docker stack in docker-compose

Posted by 给你一囗甜甜゛ on 2019-12-22 04:38:15
Question: When deploying a stack from this compose file using `docker stack deploy -c docker-compose.yml myapp`:

```
service-name:
  image: service-image
  namelike-property: my-custom-service-name  # here I would like to know the property
```

the generated service name will be `myapp_service-name`. I would want it to be named and referenced by `my-custom-service-name`.

Answer 1: For communication between services you can use the service name as defined in the compose file (in your case the service name is service-name) if both …
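The answer is cut off, but as background: `docker stack deploy` always prefixes service names with the stack name, and no compose property removes that prefix. What you can do is attach an extra DNS name through a network alias; a sketch under that assumption, keeping the question's names:

```shell
# Other services on the same network can reach this one as my-custom-service-name.
cat > docker-compose.yml <<'EOF'
version: '3.3'
services:
  service-name:
    image: service-image
    networks:
      appnet:
        aliases:
          - my-custom-service-name
networks:
  appnet:
    driver: overlay
EOF
# Then: docker stack deploy -c docker-compose.yml myapp
```

The service is still listed as `myapp_service-name` in `docker service ls`; the alias only affects DNS resolution inside `appnet`.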

Redis cluster with docker swarm using docker compose

Posted by 时光怂恿深爱的人放手 on 2019-12-21 13:09:25
Question: I'm just learning Docker and all of its goodness, like swarm and compose. My intention is to create a Redis cluster in Docker swarm. Here is my compose file:

```
version: '3'
services:
  redis:
    image: redis:alpine
    command: ["redis-server","--appendonly yes","--cluster-enabled yes","--cluster-node-timeout 60000","--cluster-require-full-coverage no"]
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
    ports:
      - 6379:6379
      - 16379:16379
networks:
  host:
    external: true
```

If I add the network: - host …
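An aside on the `command` array above (my note, not from the original question): in exec form, each array element is passed as one argument, so `"--appendonly yes"` reaches redis-server as a single malformed token and the directive fails to parse. Splitting flags from values avoids that; a hypothetical corrected fragment, written out just to show the shape:

```shell
cat > redis-command-fix.yml <<'EOF'
command:
  - redis-server
  - --appendonly
  - "yes"
  - --cluster-enabled
  - "yes"
  - --cluster-node-timeout
  - "60000"
  - --cluster-require-full-coverage
  - "no"
EOF
```

Note that `yes` and `no` are quoted so YAML does not coerce them into booleans.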

How to specify an iterator in the volume path when using docker-compose to scale up service?

Posted by 亡梦爱人 on 2019-12-20 19:45:14
Question: Background: I'm using docker-compose to place a Tomcat service into a Docker swarm cluster, but I'm presently struggling with how to approach the logging directory, given that I want to scale the service up yet keep each replica's logging directory unique. Consider the (obviously) made-up docker-compose file, which simply starts Tomcat and mounts a logging filesystem in which to capture the logs:

```
version: '2'
services:
  tomcat:
    image: "tomcat:latest"
    hostname: tomcat-example
    command: …
```
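The compose file is cut off, but one commonly cited approach for per-replica paths: Docker supports Go-template placeholders such as `{{.Task.Slot}}` in a service mount's source, so each replica gets its own directory. A sketch with hypothetical paths (for a bind mount, the per-slot directories must already exist on each node):

```shell
# Replica 1 logs to /var/log/tomcat-1, replica 2 to /var/log/tomcat-2, and so on.
docker service create \
  --name tomcat \
  --replicas 3 \
  --mount 'type=bind,source=/var/log/tomcat-{{.Task.Slot}},target=/usr/local/tomcat/logs' \
  tomcat:latest
```

Using `type=volume` instead of `type=bind` sidesteps the pre-created-directory requirement, at the cost of the logs living in Docker-managed volumes.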