Install node_modules inside Docker container and synchronize them with host

Frontend · Open · 10 answers · 1374 views
旧巷少年郎 2020-12-12 11:33

I have a problem with installing node_modules inside the Docker container and synchronizing them with the host. My Docker version is 18.03.1-ce, build

10 answers
  • 2020-12-12 11:53

    Having run into this issue and finding the accepted answer pretty slow at copying all node_modules to the host on every container run, I managed to solve it by installing the dependencies in the container, mirroring the host volume, and skipping the install when a node_modules folder is already present:

    Dockerfile:

    FROM node:12-alpine
    
    WORKDIR /usr/src/app
    
    CMD [ -d "node_modules" ] && npm run start || npm ci && npm run start
    

    docker-compose.yml:

    version: '3.8'
    
    services:
      service-1:
        build: ./
        volumes:
          - ./:/usr/src/app
    

    When you need to reinstall the dependencies, just delete node_modules.
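    One subtlety: `&&` and `||` bind with equal precedence, left to right, so the CMD above parses as `(([ -d node_modules ] && npm run start) || npm ci) && npm run start`. In practice `npm run start` blocks and never exits cleanly, so this rarely matters, but the chaining can be demonstrated with echoes standing in for the npm commands (a sketch, not part of the original answer):

```shell
# simulate() mimics the CMD's operator chain; echo stands in for the
# real npm commands (hypothetical stand-ins).
simulate() {
  # $1 is "present" when a node_modules directory exists
  [ "$1" = "present" ] && echo "npm run start" || echo "npm ci" && echo "npm run start"
}
with_modules=$(simulate present)
without_modules=$(simulate absent)
echo "$without_modules"
```

    Note that when the directory exists and the start command exits successfully, the trailing `&& npm run start` fires a second time; `[ -d node_modules ] || npm ci && npm run start` expresses the same intent with unambiguous grouping.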

  • 2020-12-12 12:03

    There are three things going on here:

    1. When you run docker build or docker-compose build, your Dockerfile builds a new image containing a /usr/src/app/node_modules directory and a Node installation, but nothing else. In particular, your application isn't in the built image.
    2. When you docker-compose up, the volumes: ['./app/frontend:/usr/src/app'] directive hides whatever was in /usr/src/app and mounts host system content on top of it.
    3. Then the volumes: ['frontend-node-modules:/usr/src/app/node_modules'] directive mounts the named volume on top of the node_modules tree, hiding the corresponding host system directory.

    If you were to launch another container and attach the named volume to it, I expect you'd see the node_modules tree there. For what you're describing you just don't want the named volume: delete the second line from the volumes: block and the volumes: section at the end of the docker-compose.yml file.
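    With the named volume removed, the relevant part of the compose file would look something like this (a sketch based on the paths in the question):

```yaml
version: '3'
services:
  frontend:
    build: ./app/frontend
    volumes:
      # only the bind mount; no named volume layered over node_modules
      - ./app/frontend:/usr/src/app
```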

  • 2020-12-12 12:04

    I know that this was resolved, but what about:

    Dockerfile:

    FROM node
    
    # Create app directory
    WORKDIR /usr/src/app
    
    # Your other build steps
    
    EXPOSE 3000
    

    docker-compose.yml:

    version: '3.2'
    services:
        api:
            build: ./path/to/folder/with/a/dockerfile
            volumes:
                - "./volumes/app:/usr/src/app"
            command: "npm start"
    

    volumes/app/package.json

    {
        ... ,
        "scripts": {
            "start": "npm install && node server.js"
        },
        "dependencies": {
            ....
        }
     }
    

    After the first run, node_modules will be present in your volume, but its contents are generated inside the container, so there are no cross-platform problems.

  • 2020-12-12 12:05

    I wouldn't suggest overlapping volumes; although I haven't seen any official docs ban it, I've had some issues with it in the past. How I do it is:

    1. Get rid of the external volume, as you aren't planning to use it the way it's meant to be used: respawning the container, after stopping and removing it, with data that was created specifically inside the container.

    The above might be achieved by shortening your compose file a bit:

    frontend:
      build: ./app/frontend
      volumes:
        - ./app/frontend:/usr/src/app
      ports:
        - 3000:3000
      environment:
        NODE_ENV: ${ENV}
      command: npm start
    
    2. Avoid overlapping volume data with Dockerfile instructions when not necessary.

    That means you might need two Dockerfiles - one for local development and one for deploying a fat image with all the application dist files layered inside.

    That said, consider a development Dockerfile:

    FROM node:10
    RUN mkdir -p /usr/src/app
    WORKDIR /usr/src/app
    # package.json has to be in the image before npm install can run
    COPY package*.json ./
    RUN npm install
    

    The above makes the container create a full node_modules installation and map it to your host location, while the command specified in docker-compose starts your application.

  • 2020-12-12 12:06

    No one has mentioned a solution that actually uses Docker's entrypoint feature.

    Here is my working solution:

    Dockerfile (multistage build, so it is both production and local dev ready):

    FROM node:10.15.3 as production
    WORKDIR /app
    
    COPY package*.json ./
    RUN npm install && npm install --only=dev
    
    COPY . .
    
    RUN npm run build
    
    EXPOSE 3000
    
    CMD ["npm", "start"]
    
    
    FROM production as dev
    
    COPY docker/dev-entrypoint.sh /usr/local/bin/
    
    ENTRYPOINT ["dev-entrypoint.sh"]
    CMD ["npm", "run", "watch"]
    

    docker/dev-entrypoint.sh:

    #!/bin/sh
    set -e
    
    npm install && npm install --only=dev ## Note this line, rest is copy+paste from original entrypoint
    
    if [ "${1#-}" != "${1}" ] || [ -z "$(command -v "${1}")" ]; then
      set -- node "$@"
    fi
    
    exec "$@"
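    The `[ "${1#-}" != "${1}" ]` test above is a portable way to ask whether the first argument starts with a dash (i.e. looks like a flag rather than a command): `${1#-}` strips one leading `-` if present, so the two sides differ only for flag-like arguments. A standalone sketch of the idiom (`classify` is a hypothetical name):

```shell
# Prints "flag" when the argument begins with a dash, "command" otherwise.
classify() {
  if [ "${1#-}" != "$1" ]; then
    echo flag
  else
    echo command
  fi
}
a=$(classify --inspect)
b=$(classify npm)
echo "$a $b"
```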
    

    docker-compose.yml:

    version: "3.7"
    
    services:
        web:
            build:
                target: dev
                context: .
            volumes:
                - .:/app:delegated
            ports:
                - "3000:3000"
            restart: always
            environment:
                NODE_ENV: dev
    

    With this approach you achieve all 3 points you required, and IMHO it's a much cleaner way: no need to move files around.

  • 2020-12-12 12:09

    I'm not sure I understand why you want your source code to live both inside the container and on the host, bind-mounted to each other, during development. Usually you want your source code to live inside the container for deployments, not for development, since during development the code is already available on your host and bind-mounted in.

    Your docker-compose.yml

    frontend:
      volumes:
        - ./app/frontend:/usr/src/app
    

    Your Dockerfile

    FROM node:10
    
    RUN mkdir -p /usr/src/app
    WORKDIR /usr/src/app
    

    Of course you must run npm install the first time and every time package.json changes, but you run it inside the container, so there is no cross-platform issue: docker-compose exec frontend npm install

    Finally, start your server: docker-compose exec frontend npm start

    And then later, usually in a CI pipeline targeting a deployment, you build your final image with the whole source code copied and node_modules reinstalled; at that point you no longer need the bind mount and "synchronization", so your setup could look like:

    docker-compose.yml

    frontend:
      build:
        context: ./app/frontend
        target: dev
      volumes:
        - ./app/frontend:/usr/src/app
    

    Dockerfile

    FROM node:10 as dev
    
    RUN mkdir -p /usr/src/app
    WORKDIR /usr/src/app
    
    FROM dev as build
    
    COPY package.json package-lock.json ./
    RUN npm install
    
    COPY . ./
    
    CMD ["npm", "start"]
    

    And you target the build stage of your Dockerfile later, either manually or during a pipeline, to build your deployment-ready image.
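    Manually, that build could be invoked with the standard `--target` flag (the image tag here is hypothetical):

```
docker build --target build -t frontend:deploy ./app/frontend
```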

    I know it's not the exact answer to your questions, since you have to run npm install and nothing lives inside the container during development, but it solves your node_modules issue. I feel like your questions mix development and deployment considerations, so you may have been thinking about this problem the wrong way.
