docker-swarm

Hyperledger Fabric - Error while Instantiating chaincode (error trying to connect to local peer: context deadline exceeded)

Submitted by 被刻印的时光 ゝ on 2019-12-10 00:32:42
Question: I am using Hyperledger Fabric v1.3.0 and trying to deploy on a swarm network across multiple hosts. I am facing an issue when trying to instantiate the chaincode: I keep getting "Error trying to connect to local peer: context deadline exceeded". Following some discussions, I added these two environment variables in the peer YAML:

    CORE_PEER_ADDRESSAUTODETECT=true
    CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052

But I still get the same error.
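
For context, a minimal compose-style sketch of where those two variables go. The service name, image tag, and the extra socket settings are illustrative, not taken from the question; the usual cause of this timeout is that the chaincode container, launched by the peer through the Docker socket, cannot reach back to the peer's advertised address:

    # Hypothetical peer service excerpt for a swarm deployment
    services:
      peer0.org1.example.com:
        image: hyperledger/fabric-peer:1.3.0
        environment:
          - CORE_PEER_ADDRESSAUTODETECT=true               # advertise an address the chaincode container can reach
          - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052  # accept chaincode connections on all interfaces
          - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
        volumes:
          - /var/run/docker.sock:/host/var/run/docker.sock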

Docker swarm with a custom network

Submitted by 耗尽温柔 on 2019-12-09 16:53:13
Question: I'm trying to work out how to properly use swarm mode in Docker. First I tried running containers on my two workers and one manager machine without specifying a custom network (so I'm using the default ingress overlay network). However, if I use the ingress network, for some reason I cannot resolve tasks.myservice. So I tried configuring a custom network like this:

    docker network create -d overlay elasticnet

So now, when I bash into one of the containers, I can successfully resolve tasks.myservice […]
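
For reference, a sketch of the working setup (the service name and image are illustrative; the question only names the elasticnet network). Swarm's tasks.<service> DNS entries are served on user-defined overlay networks, not on ingress:

    # Create a user-defined overlay network (--attachable also lets standalone containers join):
    docker network create -d overlay --attachable elasticnet
    # Attach the service to it:
    docker service create --name myservice --network elasticnet --replicas 3 nginx
    # From inside any container on elasticnet, this returns one A record per task:
    nslookup tasks.myservice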

Log client's “real” IP address in Docker Swarm 1.12 when accessing a service

Submitted by 自古美人都是妖i on 2019-12-09 16:48:36
Question: I have an nginx container running as a service in Docker Swarm inside a user-created overlay network. Both were created with:

    docker network create --driver overlay proxy
    docker service create --name proxy --network proxy -p 80:80 nginx

When accessing the nginx site through a browser, the remote address in the nginx access log is logged as a 10.255.x.x address, which I presume is the Swarm load balancer address. The question is how to know/log the address of the end client accessing the site and not the […]
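
One commonly cited workaround, sketched here as an assumption rather than a confirmed answer (and note it requires an engine newer than the 1.12 in the title): publish the port in host mode so traffic bypasses the ingress routing mesh, letting nginx see the real source IP; running the service in global mode keeps a listener on every node:

    docker service create --name proxy --network proxy \
      --publish published=80,target=80,mode=host \
      --mode global nginx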

docker stack deploy results in “No such image error”

Submitted by 妖精的绣舞 on 2019-12-09 14:50:17
Question: I am using Docker swarm and would like to deploy a service with docker-compose. My service uses a custom image called myuser/myrepo:mytag that I successfully pushed to a private repository on Docker Hub. My docker-compose file looks like this:

    version: "3.3"
    services:
      myservice:
        image: myuser/myrepo:mytag
        ports:
          - "8080:8080"

Before executing, I successfully pulled the image with docker pull myuser/myrepo:mytag. When I run docker stack deploy -c docker-compose.yml myapp I always receive the error […]
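
A likely fix worth noting here, as an assumption the truncated post doesn't confirm: docker stack deploy does not forward the local registry credentials to the other nodes unless told to, so workers cannot pull a private image even though the deploying node can:

    # Log in on the node you deploy from, then forward credentials to the swarm agents:
    docker login
    docker stack deploy --with-registry-auth -c docker-compose.yml myapp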

docker swarm mode multiple services same port

Submitted by 邮差的信 on 2019-12-08 14:40:23
Question: Suppose you have two services in your topology:

- API
- Web Interface

Both are supposed to run on port 80. In Docker swarm, when you create a service, if you want to access it from outside the cluster you need to expose and map its port to the nodes (external ports). But if you map port 80 to, let's say, the API service, then you can't map the same port for the Web Interface service, since it is already mapped. How can this be solved? As far as I can see this use case is not supported. Even though […]
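
The usual pattern for this, sketched with hypothetical image names: publish a single reverse proxy on port 80 and leave both services unpublished on a shared overlay network, routing by Host header or path:

    docker network create -d overlay front
    docker service create --name api   --network front myorg/api    # no -p: reachable only inside the cluster
    docker service create --name webui --network front myorg/webui  # no -p: reachable only inside the cluster
    docker service create --name edge  --network front -p 80:80 nginx
    # nginx config (not shown) proxies api.example.com -> api and www.example.com -> webui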

Docker swarm - add new worker - re scale the service

Submitted by 自作多情 on 2019-12-08 11:31:52
Question: I have created a Docker manager, created a service, and scaled it to 5 instances on the same server. I then added two workers. Now, how do I redistribute the 5 instances of the application across the 3 nodes? Is there any option to do this without starting over from scratch? docker service scale id=5 does it, but is it the right way? I don't want to restart the already existing instances; it restarts the ones on node 1. I also tried docker service update servicename. I removed one node from the cluster with docker swarm leave. I updated […]
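
For what it's worth, a sketch of the common approach: Swarm does not rebalance running tasks automatically when nodes join, and forcing an update redistributes them, though it does restart tasks as they are rescheduled:

    # Force the scheduler to spread the service's tasks across all available nodes:
    docker service update --force servicename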

Docker for Windows Swarm IIS Service with Win10 Insider running but unreachable

Submitted by 狂风中的少年 on 2019-12-08 10:32:06
Question: I'm currently experimenting with Swarm services in Docker for Windows. The new Win10 Insider build supports overlay networking for Windows containers, and I was pleased to see my IIS service actually starting. The only issue I came across is that I cannot reach the service in the browser, despite trying multiple things such as different ports and networks. The command issued is as follows:

    docker service create --name webfarm -p 80:80 microsoft/iis

I have also tried to use the --network […]
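
One thing often suggested for Windows containers of that era, sketched here on the assumption that the ingress routing mesh is the problem (Windows builds before the 1709 update did not support it): bypass the mesh with DNS round-robin endpoint mode and host-mode publishing:

    docker service create --name webfarm --endpoint-mode dnsrr \
      --publish published=80,target=80,mode=host microsoft/iis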

Docker 1.12.1: after swarm init, workers unable to join swarm

Submitted by ╄→гoц情女王★ on 2019-12-08 03:15:16
Question: I am seeing the same problem as described here and here. I have tried everything that worked in those two cases, to no avail; I still see the same behavior. Can someone offer alternatives I might try? My setup: I am running 3 CentOS 7.2 boxes, with Network Time Protocol (ntpd) running on all machines, and all have been yum updated. Here is some detailed info:

    Linux version 3.10.0-327.28.2.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC))

Docker version: […]
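
Since the excerpt cuts off, only a generic pointer fits here: the most common cause of join failures on CentOS 7 is a firewall blocking the swarm ports. A sketch of opening the standard ports with firewalld, offered as a troubleshooting step rather than a confirmed diagnosis:

    # Swarm needs 2377/tcp (cluster management), 7946/tcp+udp (node gossip), 4789/udp (overlay VXLAN):
    firewall-cmd --permanent --add-port=2377/tcp
    firewall-cmd --permanent --add-port=7946/tcp
    firewall-cmd --permanent --add-port=7946/udp
    firewall-cmd --permanent --add-port=4789/udp
    firewall-cmd --reload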

Prometheus dns service discovery in docker swarm relabel instance

Submitted by 僤鯓⒐⒋嵵緔 on 2019-12-07 23:55:58
Question: My question is an addition to Prometheus dns service discovery in docker swarm. I define the Prometheus scrape targets as follows:

    - job_name: 'node-exporter'
      dns_sd_configs:
        - names:
            - 'tasks.nodeexporter'
          type: 'A'
          port: 9100

This works fine but results in Prometheus using the IP of the Docker container as the instance label. I tried to relabel the instance label as follows:

    relabel_configs:
      - source_labels: [__meta_dns_name]
        target_label: instance

But doing so results in all instances of node […]
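
Presumably the problem is that every target shares the same DNS name (tasks.nodeexporter), so the rewritten instance labels all collide. A sketch of one way to keep them unique, relying on the default replace action: concatenate the DNS name with the per-target scrape address:

    relabel_configs:
      - source_labels: [__meta_dns_name, __address__]
        separator: '@'
        target_label: instance   # e.g. tasks.nodeexporter@10.0.1.7:9100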

Kubernetes pod distribution

Submitted by 点点圈 on 2019-12-07 22:41:19
Question: I've worked quite a lot with Docker in the past years, but I'm a newbie when it comes to Kubernetes. I'm starting today and I am struggling with the usefulness of the Pod concept in comparison with the way I used to do things with Docker swarm. Let's say that I have a cluster with 7 powerful machines and I have the following stack:

- I want three Cassandra replicas, each running on a dedicated machine (3/7)
- I want two Kafka replicas, each running on a dedicated machine (5/7)
- I want a MyProducer […]
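
For the "one replica per dedicated machine" requirement, the Kubernetes counterpart of swarm's spread placement is pod anti-affinity. A minimal sketch for the Cassandra tier; all names and the image tag are illustrative, not from the question:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: cassandra
    spec:
      serviceName: cassandra
      replicas: 3
      selector:
        matchLabels: { app: cassandra }
      template:
        metadata:
          labels: { app: cassandra }
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                - labelSelector:
                    matchLabels: { app: cassandra }
                  topologyKey: kubernetes.io/hostname   # at most one replica per node
          containers:
            - name: cassandra
              image: cassandra:3.11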