docker-compose

Docker Compose: Volume declared as external, but could not be found

眉间皱痕 submitted on 2019-12-06 11:36:06
Question: Running the external volume sample yml from the docker compose v3 docs gives me the following error:

ERROR: Volume data declared as external, but could not be found. Please create the volume manually using `docker volume create --name=data` and try again.

This is the yml code:

version: '2'
services:
  db:
    image: postgres
    volumes:
      - data:/var/lib/postgresql/data
volumes:
  data:
    external: true

I'm running it on Windows 10. I also tried setting the version to '3' but got the same error.

Answer 1: As the error
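The error message itself points at the fix: a volume marked external: true is never created by Compose, so it has to exist before docker-compose up runs. A minimal sketch of that workflow, reusing the data volume name from the question (the --name= form is the one quoted in the error; on newer Docker versions the name can simply be passed as a positional argument):

docker volume create --name=data
docker-compose up -d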

Using nvidia-docker-compose to launch a container, but it exits soon

余生长醉 submitted on 2019-12-06 11:26:03
Question: My docker-compose.yml file:

version: '2'
services:
  zl:
    image: zl/caffe-torch-gpu:12.27
    ports:
      - "8801:8888"
      - "6001:6008"
    devices:
      - /dev/nvidia0
    volumes:
      - ~/dl-data:/root/dl-data

After nvidia-docker-compose up -d the container launched, but exited soon. But when I launch a container the nvidia-docker way, it works well:

nvidia-docker run -itd -p 6008:6006 -p 8808:8888 -v `pwd`:/root/dl-data --name zl_test

Answer 1: You don't have to use nvidia-docker-compose. By configuring the nvidia-docker
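The answer excerpt suggests dropping the nvidia-docker-compose wrapper and configuring the NVIDIA runtime directly. A hedged sketch of what that can look like, assuming nvidia-docker 2 is installed and registered as a Docker runtime (compose file format 2.3 or later is needed for the runtime key; the image name and ports are taken from the question):

version: '2.3'
services:
  zl:
    image: zl/caffe-torch-gpu:12.27
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all   # expose all GPUs to the container
    ports:
      - "8801:8888"
      - "6001:6008"
    volumes:
      - ~/dl-data:/root/dl-data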

Docker Containerization Technology (Part 2)

瘦欲@ submitted on 2019-12-06 11:16:44
Docker Containerization Technology (Part 2)

1. Basic Dockerfile instructions

1.1. FROM - start from a base image
FROM centos  # build on a base image (centos)
FROM scratch  # depend on no base image at all
FROM tomcat:9.0.22-jdk8-openjdk
Prefer official base images whenever possible.

1.2. LABEL & MAINTAINER - descriptive information
MAINTAINER xxx.com
LABEL version = "1.0"
LABEL description = "what xxx does"

1.3. WORKDIR - set the working directory
WORKDIR /usr/local
WORKDIR /usr/local/newdir  # created automatically
Prefer absolute paths.

1.4. ADD & COPY - copy files
ADD hello /  # copy to the root path
ADD test.tar.gz /  # add to the root directory and unpack
Besides copying, ADD can also fetch remote files when given a URL, similar to wget.

1.5. ENV - set environment constants
ENV JAVA_HOME /usr/local/openjdk8
RUN ${JAVA_HOME}/bin/java -jar test.jar
Prefer environment constants; they make the program easier to maintain.

2. Dockerfile execution instructions: RUN & CMD & ENTRYPOINT
RUN
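Pulled together, the instructions above can form a small, buildable Dockerfile. The sketch below only rearranges the examples already listed in this section; hello.txt stands in for any file that would have to exist in the build context:

# demo image assembled from the instructions above
FROM tomcat:9.0.22-jdk8-openjdk
LABEL version="1.0"
LABEL description="demo image"
# WORKDIR creates the directory automatically if it does not exist
WORKDIR /usr/local/newdir
# hypothetical file copied from the build context into the working directory
COPY hello.txt .
# set an environment constant and reuse it in a later instruction
ENV APP_HOME /usr/local/newdir
RUN ls ${APP_HOME}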

Micro Services With Docker Compose: Same Container, Multiple Projects

故事扮演 submitted on 2019-12-06 11:16:09
Along with a few others, I am having issues using a microservices architecture for my applications and employing docker-compose the way I want to. Summary: I have X microservice projects (let's call these project A, project B and project C). Each microservice depends on the same containers (let's call these dependency D and dependency E). The Problem: Ideally, projects A, B and C would ALL have both dependencies (D & E) in their docker-compose.yml files; however, this becomes an issue as docker compose sees these as duplicate containers when, in reality, I would like to reuse them. Here is an
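One commonly suggested pattern (not necessarily the answer this excerpt was about to give) is to run the shared dependencies D and E once, in their own Compose project, and let projects A, B and C join an externally created network instead of declaring D and E again. A sketch under those assumptions; the service, image and network names (dep-d, dep-e, shared-net) are placeholders:

Create the network once:

docker network create shared-net

shared/docker-compose.yml (started once, provides D and E):

version: '2'
services:
  dep-d:
    image: dependency-d:latest
  dep-e:
    image: dependency-e:latest
networks:
  default:
    external:
      name: shared-net

project-a/docker-compose.yml (and likewise for B and C):

version: '2'
services:
  project-a:
    build: .
networks:
  default:
    external:
      name: shared-net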

Docker (Compose) client connects to Kafka too early

三世轮回 submitted on 2019-12-06 11:12:16
Question: I am trying to run Kafka with Docker and Docker Compose. This is the docker-compose.yml:

version: "2"
services:
  zookeeper:
    image: "wurstmeister/zookeeper"
    ports:
      - "2181:2181"
  kafka:
    build:
      context: "./services/kafka"
      dockerfile: "Dockerfile"
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: "0.0.0.0"
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
  users:
    build:
      context: "./services/users"
      dockerfile
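A common way to handle this race is either retry logic in the users service or making Compose wait until Kafka is actually accepting connections. A hedged sketch of the latter, using the healthcheck / depends_on condition combination from compose file format 2.1 (the condition form is not available in the version 3 format, and the probe command assumes nc exists inside the Kafka image):

version: "2.1"
services:
  kafka:
    build:
      context: "./services/kafka"
    healthcheck:
      test: ["CMD-SHELL", "nc -z localhost 9092"]
      interval: 5s
      timeout: 5s
      retries: 10
  users:
    build:
      context: "./services/users"
    depends_on:
      kafka:
        condition: service_healthy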

Linking nginx and php-fpm containers together for fast interaction in docker prod

梦想与她 submitted on 2019-12-06 10:47:09
For my Symfony project I am using a classic nginx/mysql/php-fpm image combination. For both (fpm and nginx) I use a local mount into the docker container to make the source code available. That is very slow, but bearable with the nfs extension for docker. In the prod environment I prepare the images so that the local source code is copied into the php-fpm image, and a named volume is afterwards created in the docker compose file for the nginx container. That way I can connect php-fpm and nginx and make them use the same php files. That's basically working. My problem is that it is still slow to get a site
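A sketch of the production layout described above, with the source code baked into the php-fpm image and exposed to nginx through a named volume; the image names and paths are placeholders, not the asker's actual configuration:

version: '3'
services:
  php-fpm:
    image: myapp-fpm:latest          # hypothetical image with the code at /var/www/html
    volumes:
      - app-code:/var/www/html       # publishes the baked-in code into the named volume
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - app-code:/var/www/html:ro    # nginx serves the same files read-only
    depends_on:
      - php-fpm
volumes:
  app-code:

One caveat worth knowing: a named volume is only seeded from the image's content when the volume is first created, so redeploys need the volume removed (or another sync step) to pick up new code.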

docker-compose push image to aws ecr

只愿长相守 submitted on 2019-12-06 10:31:24
Question: Is it possible to have docker-compose both build an image and push it to a remote repo? Right now I do docker-compose build, then I do docker-compose config --services, loop through the names, reconstruct the image name and the tag, then do docker push blah. It seems like there must be a way to just ask it to push as well.

Answer 1: Check out this snippet from the docs at https://docs.docker.com/compose/compose-file/#build: If you specify image as well as build, then Compose names the built image with
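Building on the docs passage quoted in the answer: when a service declares both build and image, Compose tags the built image with the remote name, and docker-compose push can then upload everything without the manual loop. A sketch where the ECR registry URL and repository are placeholders:

docker-compose.yml:

version: '3'
services:
  blah:
    build: .
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/blah:latest

Then, after authenticating to ECR (for example with the aws ecr get-login helper):

docker-compose build
docker-compose push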

SpringCloud Alibaba Microservices in Practice, Part 1 - Basic Environment Preparation

会有一股神秘感。 submitted on 2019-12-06 10:27:12
Spring Cloud Alibaba is very popular right now, and I have long wanted to write a series of blog posts on building a microservice architecture step by step with Spring Cloud Alibaba. I have finally made up my mind, and today starts the first article of the series - basic environment preparation.

The series is mainly built around three microservices: the user service AccountService, the order service OrderService, and the product service ProductService. The components used are: Nacos as registry and configuration center, Sentinel for rate limiting, Seata for distributed transactions, SpringCloud Gateway as the gateway, Spring Cloud Oauth2 for authentication and authorization, plus docker and docker-compose.

Since quite a few components are involved, deployment would be tedious, and, most importantly, I have no spare resource server, so during development I will gradually deploy some of the components with docker-compose. This article uses docker-compose to deploy Nacos, Sentinel and Mysql as the base environment for the rest of the series.

If you are not very familiar with docker or docker-compose, you can look back at my two earlier articles; after reading them I believe you will get up to speed quickly: "Docker基础与实战，看这一篇就够了" and "Docker-Compose基础与实战，看这一篇就够了".

Containerizing mysql: Since Nacos depends on Mysql for resource storage, when writing the complete docker
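As a small taste of the environment this series prepares, a docker-compose sketch for the MySQL instance that Nacos needs; the image tag, credentials, database name and port mapping are illustrative assumptions, not the author's actual configuration:

version: '3'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: root        # placeholder password
      MYSQL_DATABASE: nacos_config     # assumed schema name for Nacos configuration data
    ports:
      - "3306:3306"
    volumes:
      - mysql-data:/var/lib/mysql      # persist data outside the container
volumes:
  mysql-data: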

Gunicorn graceful stopping with docker-compose

拈花ヽ惹草 submitted on 2019-12-06 10:06:28
Question: I find that when I use docker-compose to shut down my gunicorn (19.7.1) python application, it always takes 10s to shut down. This is the default maximum time docker-compose waits before forcefully killing the process (adjusted with the -t / --timeout parameter). I assume this means that gunicorn isn't being gracefully shut down. I can reproduce this with:

docker-compose.yml:

version: "3"
services:
  test:
    build: ./
    ports:
      - 8000:8000

Dockerfile:

FROM python
RUN pip install gunicorn
COPY test
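A frequent cause of the 10-second force kill is that docker's SIGTERM never reaches the gunicorn master, typically because CMD is written in shell form and PID 1 is a shell. A hedged sketch of a Dockerfile that keeps gunicorn as PID 1; the application file and module name (app.py, app:app) are placeholders, and since the original Dockerfile is truncated above this is not necessarily what the asker had:

FROM python
RUN pip install gunicorn
WORKDIR /app
# hypothetical application file defining a WSGI callable named "app"
COPY app.py .
# exec (JSON) form: gunicorn itself becomes PID 1 and receives SIGTERM directly,
# so docker-compose stop can trigger a graceful shutdown instead of the 10s timeout
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]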

Setting up Hyperledger Fabric on 2 different PCs

折月煮酒 submitted on 2019-12-06 10:03:29
I need to run Hyperledger Fabric instances on 4 different machines: PC-1 should contain the CA and peers of ORG-1 in containers, PC-2 should contain the CA and peers of ORG-2, PC-3 should contain the orderer (solo), and PC-4 should run the Node API. Is my approach missing something? If not, how can I achieve this?

I would recommend that you look at the Ansible driver in the Hyperledger Cello project to manage deployment across multiple hosts/VMs. In short, you need to establish network visibility across the set of host/VM nodes such that the peer knows about the orderer to which it will connect and so that gossip can
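Whatever tool does the orchestration (the Cello/Ansible driver mentioned above, a swarm overlay network, or plain Compose on each machine), the practical requirement is that containers on PC-1 and PC-2 can resolve and reach the orderer on PC-3. With plain docker-compose on each host, one low-tech approach is to publish the service ports and add extra_hosts entries pointing at the other machines; all hostnames and IP addresses below are placeholders:

On PC-1 (ORG-1 peer), for example:

version: '2'
services:
  peer0.org1.example.com:
    image: hyperledger/fabric-peer
    ports:
      - "7051:7051"
    extra_hosts:
      - "orderer.example.com:192.168.1.13"      # IP of PC-3
      - "peer0.org2.example.com:192.168.1.12"   # IP of PC-2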