Docker and symlinks

别那么骄傲 2020-12-23 16:28

I've got a repo set up like this:

/config
   config.json
/worker-a
   Dockerfile
   /code
/worker-b
   Dockerfile
4 Answers
  • 2020-12-23 16:46

    I also ran into this problem and would like to share another method that hasn't been mentioned in the other answers. Instead of using npm link in my Dockerfile, I used yalc.

    1. Install yalc in your container, e.g. RUN npm i -g yalc.
    2. Build your library in Docker, and run yalc publish (add the --private flag if your shared lib is private). This will 'publish' your library locally.
    3. Run yalc add my-lib in each package that would normally use npm link, before running npm install. This creates a local .yalc folder in your Docker container, adds a symlink to it in node_modules that works inside Docker, and rewrites your package.json to refer to that folder, so you can safely run install.
    4. Optionally, if you do a two stage build, make sure that you also copy the .yalc folder to your final image.

    Below is an example Dockerfile, assuming you have a monorepo with three packages: models, gui and server, where the models package must be shared and named my-models.

    # You can access the container using:
    #   docker run -it my-name sh
    # To start it stand-alone:
    #   docker run -it -p 8888:3000 my-name
    
    FROM node:alpine AS builder
    # Install yalc globally (the apk add... line is only needed if your installation requires it)
    RUN apk add --no-cache --virtual .gyp python make g++ && \
      npm i -g yalc
    RUN mkdir /packages && \
      mkdir /packages/models && \
      mkdir /packages/gui && \
      mkdir /packages/server
    COPY ./packages/models /packages/models
    WORKDIR /packages/models
    RUN npm install && \
      npm run build && \
      yalc publish --private
    COPY ./packages/gui /packages/gui
    WORKDIR /packages/gui
    RUN yalc add my-models && \
      npm install && \
      npm run build
    COPY ./packages/server /packages/server
    WORKDIR /packages/server
    RUN yalc add my-models && \
      npm install && \
      npm run build
    
    FROM node:alpine
    RUN mkdir -p /app
    COPY --from=builder /packages/server/package.json /app/package.json
    COPY --from=builder /packages/server/dist /app/dist
    # Make sure you copy the yalc registry too.
    COPY --from=builder /packages/server/.yalc /app/.yalc
    COPY --from=builder /packages/server/node_modules /app/node_modules
    COPY --from=builder /packages/gui/dist /app/dist/public
    WORKDIR /app
    EXPOSE 3000
    CMD ["node", "./dist/index.js"]
    

    Hope that helps...

  • 2020-12-23 16:51

    The docker build CLI command sends the specified directory (typically .) as the "build context" to the Docker Engine (daemon). Instead of specifying the build context as /worker-a, specify the build context as the root directory, and use the -f argument to specify the path to the Dockerfile in one of the child directories.

    docker build -f worker-a/Dockerfile .
    docker build -f worker-b/Dockerfile .
    

    You'll have to rework your Dockerfiles slightly, to point them to config/config.json (COPY paths are resolved relative to the build context, so ../ paths aren't allowed), but that is pretty trivial to fix.
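    For example, worker-a/Dockerfile built this way could copy the shared file directly. A minimal sketch, assuming a Node base image and destinations that aren't in the original question:

    ```dockerfile
    # Built from the repo root with: docker build -f worker-a/Dockerfile .
    FROM node:alpine
    WORKDIR /app
    # Paths are relative to the build context (the repo root), not the Dockerfile
    COPY config/config.json ./config.json
    COPY worker-a/code ./
    CMD ["node", "index.js"]
    ```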

    Also check out this question/answer, which I think addresses the exact same problem that you're experiencing.

    How to include files outside of Docker's build context?

    Hope this helps! Cheers

  • 2020-12-23 16:58

    Docker doesn't support symlinking files outside the build context.

    Here are some different methods for using a shared file in a container:

    Share a base image

    Create a Dockerfile for the base worker-config image that includes the shared config/files.

    COPY config.json /config.json
    

    Build and tag the image as worker-config

    docker build -t worker-config:latest .
    

    Source the base worker-config image for all your worker Dockerfiles

    FROM worker-config:latest
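
    Putting those pieces together, a worker's Dockerfile inherits the shared file from the base image. The COPY and CMD lines below are illustrative, not from the question:

    ```dockerfile
    FROM worker-config:latest
    # /config.json is already present, inherited from worker-config
    COPY code /app
    WORKDIR /app
    CMD ["node", "index.js"]
    ```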
    

    Build script

    Use a script to push the common config to each of your worker containers.

    ./build worker-n

    #!/bin/sh
    set -uex
    rundir=$(readlink -f "${0%/*}")
    container="$1"
    shift
    cd "$rundir/$container"
    cp ../config/config.json ./config-docker.json
    docker build "$@" .
    

    Build from URL

    Pull the config from a common URL for all worker-n builds.

    ADD http://somehost/config.json /
    

    Increase the scope of the image build context

    Include the symlink target files in the build context by building from a parent directory that includes both the shared files and specific container files.

    cd ..
    docker build -f worker-a/Dockerfile .
    

    All the source paths you reference in a Dockerfile must also change to match the new build context:

    COPY workerathing /app
    

    becomes

    COPY worker-a/workerathing /app
    

    Note that with this method every worker shares one large build context, which can slow down builds, especially when the context has to be sent to a remote Docker build server.
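    A .dockerignore file at the context root can keep the shared context small. The entries below are illustrative for a Node repo like this one:

    ```
    # .dockerignore at the repo root (illustrative entries)
    .git
    **/node_modules
    **/*.log
    ```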

    Mount a config directory from a named volume

    Named volumes only work as directories, so you can't mount a single file the way you can when bind-mounting a file from the host into the container.

    docker volume create --name=worker-cfg-vol
    docker run -v worker-cfg-vol:/config worker-config cp config.json /config
    
    docker run -v worker-cfg-vol:/config worker-a
    

    Mount config directory from data container

    Again, directories only, since it's basically the same as above. However, this will automatically copy files from the image's directory into the newly created shared volume.

    docker create --name wcc -v /config worker-config /bin/true
    docker run --volumes-from wcc worker-a
    

    Mount config file from host

    docker run -v /app/config/config.json:/config.json worker-a
    
  • 2020-12-23 17:08

    An alternative solution is to upgrade all your soft links into hard links.
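    This works because a hard link is a regular directory entry for the same inode, so docker build sees a real file inside the context. Two caveats: hard links only work for files (not directories), and both paths must be on the same filesystem. A self-contained sketch with illustrative paths, not the asker's actual layout:

    ```shell
    # Demo: replace a symlink with a hard link (paths are illustrative).
    mkdir -p demo/config demo/worker-a && cd demo
    echo '{"db":"example"}' > config/config.json
    ln -s ../config/config.json worker-a/config.json   # the old symlink
    rm worker-a/config.json                            # drop the symlink
    ln config/config.json worker-a/config.json         # hard link instead
    # Both names now point at the same inode, so the build context
    # contains the real file content.
    stat -c %h config/config.json   # prints 2 (two entries share the inode)
    ```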
