docker-compose

How to connect from docker-compose to Host PostgreSQL?

Submitted by 蓝咒 on 2019-12-12 09:17:53
Question: I have a server with PostgreSQL installed. All my services run in containers (docker-compose). I want to use the host PostgreSQL from the containers, but I get this error: Unable to obtain Jdbc connection from DataSource (jdbc:postgresql://localhost:5432/shop-bd) for user 'shop-bd-user': Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
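A common way to reach a service running on the host is to give the container a hostname that resolves to the host gateway and point the JDBC URL at it instead of localhost. A minimal sketch, assuming an application service named app and a JDBC_DATABASE_URL variable (both hypothetical); extra_hosts with host-gateway needs Docker 20.10+, on older engines the bridge IP (often 172.17.0.1) is used instead, and PostgreSQL must listen on that interface and allow it in pg_hba.conf:

    services:
      app:
        image: my-app:latest                      # hypothetical application image
        extra_hosts:
          - "host.docker.internal:host-gateway"   # make the host reachable from inside the container
        environment:
          # hypothetical variable name; the point is the hostname in the JDBC URL
          JDBC_DATABASE_URL: jdbc:postgresql://host.docker.internal:5432/shop-bd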

Set symfony cache directory in parameters

Submitted by 给你一囗甜甜゛ on 2019-12-12 09:01:32
Question: I'm building a Docker environment for a Symfony application. There is a container per application with an attached data-only container for the web root, linked to the application server. As part of the security hardening for the infrastructure these data containers are set to read-only, to prevent any remote code exploits. Each application also has a sidecar container that logs can be written to. Symfony currently writes the cache to the default cache_dir location of ${web
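One option that keeps the application volume read-only without moving the cache_dir in parameters is to mount a tmpfs (or a separate writable volume) at the cache path. A minimal compose sketch, assuming a service named app and a Symfony 3 style var/cache layout under /var/www/html (both assumptions):

    services:
      app:
        image: my-symfony-app:latest   # hypothetical image
        read_only: true                # root filesystem and code volume stay read-only
        tmpfs:
          - /var/www/html/var/cache    # writable, in-memory cache directory
          - /var/www/html/var/logs     # optional, if logs are not already routed to the sidecar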

symfony docker permission problems for cache files

Submitted by 笑着哭i on 2019-12-12 08:55:25
Question: I have a Symfony setup for Docker with docker-compose which works well, except that when I run cache:clear from the console, the web server can't access the generated files. I can circumvent the permission problem by uncommenting umask(0000); in the console script and web/app_dev.php, but I would like to run Symfony as recommended. What I do is spin up the containers with docker-compose up, then enter the container, which contains Apache, PHP and the code via a data volume: docker exec -i -t apache_1 /bin/bash
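The usual cause is that the console command runs as root inside the container while Apache/PHP runs as www-data, so the generated cache files end up unreadable or unwritable for the web server. One way to avoid the umask hack is to execute console commands as the web server user. A sketch under assumptions (the container name is taken from the question; the console path depends on the Symfony version):

    # run the cache clear as www-data instead of root
    docker exec -u www-data -it apache_1 php bin/console cache:clear
    # on older Symfony versions the script is app/console instead of bin/console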

Serving multiple tensorflow models using docker

Submitted by 倾然丶 夕夏残阳落幕 on 2019-12-12 08:48:46
Question: Having seen this GitHub issue and this Stack Overflow post, I had hoped this would simply work. It seems as though passing in the environment variable MODEL_CONFIG_FILE has no effect. I am running this through docker-compose, but I get the same issue using docker run. The error: I tensorflow_serving/model_servers/server.cc:82] Building single TensorFlow model file config: model_name: model model_base_path: /models/model I tensorflow_serving/model_servers/server_core.cc:461] Adding/updating
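A common fix is to pass the configuration file to the server as a command-line flag rather than an environment variable, since the tensorflow/serving entrypoint forwards extra container arguments to tensorflow_model_server but (at least in image versions from this period) does not read MODEL_CONFIG_FILE. A sketch, assuming the models and a models.config file live in ./models on the host:

    services:
      serving:
        image: tensorflow/serving:latest
        ports:
          - "8500:8500"   # gRPC
          - "8501:8501"   # REST
        volumes:
          - ./models:/models
        command:
          - --model_config_file=/models/models.config   # forwarded to tensorflow_model_server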

how to add --auth for mongodb image when using docker-compose?

Submitted by 。_饼干妹妹 on 2019-12-12 08:34:28
Question: I'm using docker-compose to run my project, which is built with Node, MongoDB and nginx. I have built the project using docker build and then I use docker-compose up -d nginx to start it, but I haven't found the configuration to run the mongodb image with '--auth', so how do I add '--auth' when compose starts mongodb? Here is my docker-compose.yml:

    version: "2"
    services:
      mongodb:
        image: mongo:latest
        expose:
          - "27017"
        volumes:
          - "/home/open/mymongo:/data/db"
      nginx:
        build: /home/open/mynginx/
        ports:
          - "8080:8080"
          -
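With the official mongo image, extra mongod flags can be passed through the service's command, so --auth can be enabled directly in the compose file (on newer image versions, setting MONGO_INITDB_ROOT_USERNAME/MONGO_INITDB_ROOT_PASSWORD also creates a root user and enables auth). A sketch based on the file above; only the mongodb service is shown:

    services:
      mongodb:
        image: mongo:latest
        command: ["mongod", "--auth"]   # the image entrypoint passes these flags to mongod
        expose:
          - "27017"
        volumes:
          - "/home/open/mymongo:/data/db"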

Node.js connect to MySQL Docker container ECONNREFUSED

Submitted by 大城市里の小女人 on 2019-12-12 08:09:33
Question: Before you flag this question as a duplicate, please note that I did read other answers, but they didn't solve my problem. I have a Docker Compose file consisting of two services:

    version: "3"
    services:
      mysql:
        image: mysql:5.7
        environment:
          MYSQL_HOST: localhost
          MYSQL_DATABASE: mydb
          MYSQL_USER: mysql
          MYSQL_PASSWORD: 1234
          MYSQL_ROOT_PASSWORD: root
        ports:
          - "3307:3306"
        expose:
          - 3307
        volumes:
          - /var/lib/mysql
          - ./mysql/migrations:/docker-entrypoint-initdb.d
        restart: unless-stopped
      web:
        build:
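Inside the compose network, the web service should connect to the mysql service by its service name on the container port 3306; the 3307 side of the "3307:3306" mapping only exists on the host, so localhost:3307 from inside the web container is refused. The MySQL container also needs a few seconds to initialise, so the app should retry the connection. A sketch of the web side (the environment variable names are assumptions):

    services:
      web:
        build: .
        environment:
          MYSQL_HOST: mysql    # the service name resolves on the compose network
          MYSQL_PORT: "3306"   # container port, not the 3307 host mapping
        depends_on:
          - mysql              # orders startup only; connection retries are still needed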

Can I have a writable Docker volume mounted under a read-only volume?

Submitted by 倖福魔咒の on 2019-12-12 07:49:25
Question: I'm trying to mount a writable Docker volume as a child of a read-only volume, but I get this error: ERROR: for wordpress rpc error: code = 2 desc = "oci runtime error: could not synchronise with container process: mkdir /mnt/sda1/var/lib/docker/aufs/mnt/.../var/www/html/wp-content/uploads: read-only file system" I'm working with a WordPress image, and the two volumes I want to mount are: /var/www/html/wp-content: contains my development code. Read-only, since I don't want any unexpected
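One approach that is often reported to work is to declare both mounts explicitly and make sure the uploads directory already exists inside the read-only source, so Docker does not have to mkdir the mount point on a read-only filesystem. A sketch with placeholder host paths:

    services:
      wordpress:
        image: wordpress:latest
        volumes:
          - ./wp-content:/var/www/html/wp-content:ro        # read-only parent; must already contain uploads/
          - ./uploads:/var/www/html/wp-content/uploads:rw   # writable child mounted on top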

docker-compose not printing stdout in Python app

Submitted by 核能气质少年 on 2019-12-12 07:40:01
Question: When using a print() statement in a Python app running inside a Docker container that's managed by Docker Compose, only sys.stderr output is logged. Vanilla print() statements aren't seen, so this: print("Hello? Anyone there?") ... never shows up in the regular logs. (You can see other logs explicitly printed by other libs in my app, but none of my own calls.) How can I avoid my print() calls being ignored? Answer 1: By default, Python buffers output to sys.stdout. There are a few options: 1.
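One such option that needs no code change is to disable stdout buffering for the whole container by setting PYTHONUNBUFFERED in the compose file (running the interpreter with python -u, or using print(..., flush=True) on Python 3, has the same effect). A minimal sketch with an assumed service name:

    services:
      app:
        build: .
        environment:
          PYTHONUNBUFFERED: "1"   # flush stdout immediately so print() appears in docker-compose logs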

Docker Compose + Rails: best practice to migrate?

Submitted by *爱你&永不变心* on 2019-12-12 07:29:57
Question: I just followed this article on Running a Rails Development Environment in Docker. Good article, works great. After setting everything up, I decided to go on and set up a production environment. GOAL: I want to run rake db:create && rake db:migrate every time my Docker image is run. PROBLEM: If I move the database creation and migration steps... docker-compose run app rake db:create docker-compose run app rake db:migrate ...into the Dockerfile... RUN rake db:create && rake db:migrate ...that
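Because RUN executes at image build time, when the database container is not running, the usual pattern is to perform the creation and migration in the container's startup command or entrypoint instead of the Dockerfile, so they happen on every docker-compose up. A sketch of the compose side (service names and the server command are assumptions):

    services:
      app:
        build: .
        command: bash -c "rake db:create db:migrate && rails server -b 0.0.0.0"   # runs on every container start
        depends_on:
          - db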

Use docker-compose with docker swarm

Submitted by 安稳与你 on 2019-12-12 07:27:13
Question: I'm using Docker 1.12.1. I have a simple docker-compose file:

    version: '2'
    services:
      jenkins-slave:
        build: ./slave
        image: jenkins-slave:1.0
        restart: always
        ports:
          - "22"
        environment:
          - "constraint:NODE==master1"
      jenkins-master:
        image: jenkins:2.7.1
        container_name: jenkins-master
        restart: always
        ports:
          - "8080:8080"
          - "50000"
        environment:
          - "constraint:NODE==node1"

I run this with docker-compose -p jenkins up -d. This creates my two containers, but only on my master (from where I execute
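The classic "constraint:NODE==" environment entries are only honoured by the old standalone Swarm scheduler, not by docker-compose up against a single 1.12 engine, which is why both containers land on the node where the command is run. With swarm mode, placement is expressed under deploy.placement.constraints in a version 3 file and deployed with docker stack deploy (Docker 1.13+). A sketch of the equivalent placement, with the node hostnames assumed to match the question:

    version: "3"
    services:
      jenkins-slave:
        image: jenkins-slave:1.0
        deploy:
          placement:
            constraints:
              - node.hostname == master1   # replaces "constraint:NODE==master1"
      jenkins-master:
        image: jenkins:2.7.1
        deploy:
          placement:
            constraints:
              - node.hostname == node1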