Running multiple projects using docker which each runs with docker-compose

Backend · unresolved · 3 answers · 908 views
轮回少年 · 2020-12-24 13:32

We are using a microservices approach to build our product, with several projects that each run via docker-compose. The problem is that in the development environment, i…

3 answers
  • 2020-12-24 14:13

    You can do this by combining services from multiple files using the extends feature of docker-compose. Put your projects in some well-defined location, and refer to them using relative paths:

    ../
    ├── foo/
    │   └── docker-compose.yml
    └── bar/
        └── docker-compose.yml
    

    foo/docker-compose.yml:

    base:
        build: .
    
    foo:
        extends:
            service: base
        links:
            - db
    
    db:
        image: postgres:9
    

    If you wanted to test this project by itself, you would do something like:

    sudo docker-compose up -d foo
    

    Creating foo_foo_1

    bar/docker-compose.yml:

    foo:
        extends:
            file: ../foo/docker-compose.yml
            service: base
        links:
            - db
    
    bar:
        build: .
        extends:
            service: base
        links:
            - db
            - foo
    
    db:
        image: postgres:9
    

    Now you can test both services together with:

    sudo docker-compose up -d bar
    

    Creating bar_foo_1
    Creating bar_bar_1

  • 2020-12-24 14:17

This is our approach, for anyone else having the same problem:

    Each of our projects now has a docker-compose file that can be run standalone. We have another project called 'development-kit' which clones the needed projects and stores them in a directory. We can run our projects with a command similar to:

    python controller.py --run projectA projectB
    

It brings each project up with the docker-compose up command. Then, once all projects are up and running, it adds each project's main container IP to the other projects' /etc/hosts files, using commands like these:

    # Python 2: the `commands` module was removed in Python 3 (use subprocess there)
    import commands

    # get the container IDs of projectA and projectB
    CIDA = commands.getoutput("docker-compose ps -q %s" % projectA)
    CIDB = commands.getoutput("docker-compose ps -q %s" % projectB)
    # get the IP of the projectA container
    IPA = commands.getoutput("docker inspect --format '{{ .NetworkSettings.IPAddress }}' %s" % CIDA)
    

Now, to send requests from projectB to projectA, we only need to map projectA's IP to the hostname "projectA" in projectB's settings.
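    The /etc/hosts step above can be sketched in Python 3, with `subprocess` in place of the Python 2 `commands` module. Note this is illustrative, not the actual controller.py: the `hosts_line` helper and the "projectA" service name are assumptions, and `container_ip` needs a running Docker daemon.

```python
import subprocess

def hosts_line(ip, name):
    """Format an /etc/hosts entry mapping a project name to a container IP."""
    return "%s\t%s\n" % (ip, name)

def container_ip(service):
    """Look up the IP of a running docker-compose service via the docker CLI."""
    cid = subprocess.check_output(
        ["docker-compose", "ps", "-q", service]).decode().strip()
    return subprocess.check_output(
        ["docker", "inspect", "--format",
         "{{ .NetworkSettings.IPAddress }}", cid]).decode().strip()

# Usage sketch (requires a running daemon), e.g. to make projectA
# reachable by name from the host:
#   with open("/etc/hosts", "a") as hosts:
#       hosts.write(hosts_line(container_ip("projectA"), "projectA"))
```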

  • 2020-12-24 14:30

I'm not 100% sure about your question, so this will be a broad answer.

    1) Everything can be in the same compose file if it's running on the same machine or server cluster.

    #proxy
    haproxy:
      image: haproxy:latest
      ports:
        - 80:80


    #setup 1
    ubuntu_1:
      image: ubuntu
      links:
        - db_1:mysql
      ports:
        - 80

    db_1:
      image: ubuntu
      environment:
        MYSQL_ROOT_PASSWORD: 123


    #setup 2
    ubuntu_2:
      image: ubuntu
      links:
        - db_2:mysql
      ports:
        - 80

    db_2:
      image: ubuntu
      environment:
        MYSQL_ROOT_PASSWORD: 123

It's also possible to combine several yml files, like:
    $ docker-compose -f [File A].yml -f [File B].yml up -d
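    As a sketch of how that merge behaves (file and service names here are illustrative): a service entry in the later file overrides or extends the matching entry in the earlier file.

```yaml
# base.yml
web:
  image: ubuntu
  ports:
    - 80
---
# dev.yml -- merged on top of base.yml with:
#   docker-compose -f base.yml -f dev.yml up -d
# The resulting "web" service keeps the image and ports from base.yml
# and gains the DEBUG environment variable from dev.yml.
web:
  environment:
    DEBUG: "1"
```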

2) Every container in the build can be controlled separately with compose:
    $ docker-compose stop/start/build ubuntu_1

3) Running $ docker-compose build will only rebuild where changes have been made.

    Here is more information that could be useful https://docs.docker.com/compose/extends/#extending-services

If none of the above answers your question, please post an example of your build.
