I am using the new Elastic File System provided by Amazon on my single-container Elastic Beanstalk deployment. I can't figure out why the mounted EFS cannot be mapped into the container.
AWS has instructions to automatically create and mount an EFS on Elastic Beanstalk. They can be found here.
These instructions link to two config files that are to be customized and placed in the .ebextensions folder of your deployment package.
The file storage-efs-mountfilesystem.config needs to be further modified to work with Docker containers. Add the following command:
02_restart:
  command: "service docker restart"
And for multi-container environments, the Elastic Container Service has to be restarted as well (it was killed when Docker was restarted above):
03_start_eb:
  command: |
    start ecs
    start eb-docker-events
    sleep 120
  test: sh -c "[ -f /etc/init/ecs.conf ]"
So the complete commands section of storage-efs-mountfilesystem.config is:
commands:
  01_mount:
    command: "/tmp/mount-efs.sh"
  02_restart:
    command: "service docker restart"
  03_start_eb:
    command: |
      start ecs
      start eb-docker-events
      sleep 120
    test: sh -c "[ -f /etc/init/ecs.conf ]"
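Once these commands have run and the daemon has been restarted, containers started afterwards should see the EFS contents under the mount point. As a quick sanity check on the instance, here is a minimal sketch; it assumes /efs is the mount point used by your mount-efs.sh and uses the busybox image purely as an example, so adjust both to your configuration:

# The host should show the NFS mount:
mount | grep /efs

# A freshly started container with the host directory bind-mounted should
# see the same NFS mount rather than an empty directory:
sudo docker run --rm -v /efs:/efs busybox sh -c 'mount | grep /efs && ls -la /efs'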
The reason this does not work "out-of-the-box" is that the Docker daemon is started by the EC2 instance before the commands in .ebextensions are run. The startup order is:

1. The Docker daemon is started.
2. ECS and eb-docker-events are started (on multi-container platforms).
3. The commands in .ebextensions are run, which is when the EFS gets mounted.
4. The application containers are started.
At step 1, the filesystem view that the Docker daemon provides to its containers is fixed. Changes made to the host filesystem during step 3 are therefore not reflected in the containers' view, which is why the daemon has to be restarted after the filesystem is mounted.
One strange effect is that a container sees the mount point directory as it was before the filesystem was mounted on the host, while the host sees the mounted filesystem. A file written by such a container therefore ends up in the host directory underneath the mount point, not on the mounted filesystem. Unmounting the filesystem on the EC2 host exposes the files the container wrote into the mount directory.
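You can observe this on an instance where the Docker restart is not yet in place. This is only a sketch: the container ID and the /efs path are placeholders, so substitute the mount point your mount-efs.sh actually uses:

# On the host, the EFS is mounted over /efs:
mount | grep /efs

# A container started before the mount still sees the plain host directory,
# so a file written there lands on the instance's root volume:
sudo docker exec <container-id> sh -c 'echo hello > /efs/test.txt'

# The file is not visible on the host while the EFS is mounted...
ls /efs
# ...but unmounting the EFS reveals it:
sudo umount /efs
ls /efs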