Using AWS EFS with Docker

Asked by 不思量自难忘° on 2021-01-02 03:09

I am using the new Elastic File System provided by Amazon on my single-container EB deploy. I can't figure out why the mounted EFS cannot be mapped into the container.

3 Answers
  •  梦毁少年i
    2021-01-02 03:16

    EFS with AWS Beanstalk - Multicontainer Docker will work, but a number of things will stop working because you have to restart Docker after you mount the EFS.

    The instance commands

    Searching around you might find that you need to do "docker restart" after mounting EFS. It's not that simple. You will run into trouble when autoscaling happens and/or when deploying a new version of the app.

    Below is the configuration I use for mounting an EFS volume on the Docker instance, where the following steps are needed:

    1. Stop the ECS manager. Takes 60 seconds.
    2. Stop the Docker service.
    3. Kill any remaining Docker processes.
    4. Remove previous network bindings. See the issue https://github.com/docker/docker/issues/7856#issuecomment-239100381
    5. Mount the EFS volume.
    6. Start the Docker service.
    7. Start the ECS service.
    8. Wait 120 seconds so ECS reaches the correct start/* state; otherwise e.g. the 00enact script will fail. Note this delay is mandatory and it is really hard to find any documentation on it.

    Here is my script:

    .ebextensions/commands.config:

    commands:
      01stopdocker:
        command: "sudo stop ecs  > /dev/null 2>&1 || /bin/true && sudo service docker stop"
      02killallnetworkbindings:
        command: 'sudo killall docker  > /dev/null 2>&1 || /bin/true'
      03removenetworkinterface:
        command: "rm -f /var/lib/docker/network/files/local-kv.db"
        test: test -f /var/lib/docker/network/files/local-kv.db
      # Mount the EFS created in .ebextensions/media.config
      04mount:
        command: "/tmp/mount-efs.sh"
      # On new instances, a delay needs to be added because of the 00task enact script. It tests for start/ but ECS can be in various start states...
      # Basically, "start ecs" takes some time to run, and it runs async - so we sleep for some time.
      # This lets the ECS manager take its time to boot before the enact scripts and post-deploy scripts run.
      09restart:
        command: "service docker start && sudo start ecs && sleep 120s"
    

    The mount script and environment variables

    .ebextensions/mount-config.config

    # mount-config.config
    # Copy this file to the .ebextensions folder in the root of your app source folder
    option_settings:
      aws:elasticbeanstalk:application:environment:
        EFS_REGION: '`{"Ref": "AWS::Region"}`'
        # Replace with the required mount directory
        EFS_MOUNT_DIR: '/efs_volume'
        # Use in conjunction with efs_volume.config or replace with EFS volume ID of an existing EFS volume
        EFS_VOLUME_ID: '`{"Ref" : "FileSystem"}`'
    
    packages:
      yum:
        nfs-utils: []
    files:
      "/tmp/mount-efs.sh":
          mode: "000755"
          content : |
            #!/bin/bash
    
            EFS_REGION=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_REGION')
            EFS_MOUNT_DIR=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_MOUNT_DIR')
            EFS_VOLUME_ID=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_VOLUME_ID')
    
            echo "Mounting EFS filesystem ${EFS_DNS_NAME} to directory ${EFS_MOUNT_DIR} ..."
    
            echo 'Stopping NFS ID Mapper...'
            service rpcidmapd status &> /dev/null
            if [ $? -ne 0 ] ; then
                echo 'rpc.idmapd is already stopped!'
            else
                service rpcidmapd stop
                if [ $? -ne 0 ] ; then
                    echo 'ERROR: Failed to stop NFS ID Mapper!'
                    exit 1
                fi
            fi
    
            echo 'Checking if EFS mount directory exists...'
            if [ ! -d ${EFS_MOUNT_DIR} ]; then
                echo "Creating directory ${EFS_MOUNT_DIR} ..."
                mkdir -p ${EFS_MOUNT_DIR}
                if [ $? -ne 0 ]; then
                    echo 'ERROR: Directory creation failed!'
                    exit 1
                fi
                chmod 777 ${EFS_MOUNT_DIR}
                if [ $? -ne 0 ]; then
                    echo 'ERROR: Permission update failed!'
                    exit 1
                fi
            else
                echo "Directory ${EFS_MOUNT_DIR} already exists!"
            fi
    
            mountpoint -q ${EFS_MOUNT_DIR}
            if [ $? -ne 0 ]; then
                AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
                echo "mount -t nfs4 -o nfsvers=4.1 ${AZ}.${EFS_VOLUME_ID}.efs.${EFS_REGION}.amazonaws.com:/ ${EFS_MOUNT_DIR}"
                mount -t nfs4 -o nfsvers=4.1 ${AZ}.${EFS_VOLUME_ID}.efs.${EFS_REGION}.amazonaws.com:/ ${EFS_MOUNT_DIR}
                if [ $? -ne 0 ] ; then
                    echo 'ERROR: Mount command failed!'
                    exit 1
                fi
            else
                echo "Directory ${EFS_MOUNT_DIR} is already a valid mountpoint!"
            fi
    
            echo 'EFS mount complete.'
    

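    Once the EFS volume is mounted on the host at the EFS_MOUNT_DIR path, the host directory still has to be mapped into the container. Below is a minimal sketch assuming the single-container Docker platform (Dockerrun.aws.json version 1); the container path /var/app/efs is only an illustrative choice, not something EFS requires.

    Dockerrun.aws.json:

    {
      "AWSEBDockerrunVersion": "1",
      "Volumes": [
        {
          "HostDirectory": "/efs_volume",
          "ContainerDirectory": "/var/app/efs"
        }
      ]
    }

    On the Multicontainer Docker platform the same idea goes into Dockerrun.aws.json version 2 instead, as a task-definition volume plus a mountPoints entry on the container.
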
    The resource and configuration

    You will have to change the option_settings below. To find the VPC and subnet IDs you must define under option_settings, open the AWS web console -> VPC and look up the default VPC ID and the three default subnet IDs. If your Beanstalk environment uses a custom VPC, use those values instead.
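
    If you prefer the command line, a quick way to look these up is sketched below (assuming the AWS CLI is configured for the same account and region as your environment):

    # Default VPC ID
    aws ec2 describe-vpcs --query 'Vpcs[?IsDefault].VpcId' --output text

    # Subnets in that VPC, with their availability zones
    aws ec2 describe-subnets --filters Name=vpc-id,Values=vpc-xxxxxxxx \
        --query 'Subnets[].[SubnetId,AvailabilityZone]' --output table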

    .ebextensions/efs-volume.config:

    # efs-volume.config
    # Copy this file to the .ebextensions folder in the root of your app source folder
    option_settings:
      aws:elasticbeanstalk:customoption: 
        EFSVolumeName: "EB-EFS-Volume"
        VPCId: "vpc-xxxxxxxx"
        SubnetUSWest2a: "subnet-xxxxxxxx"
        SubnetUSWest2b: "subnet-xxxxxxxx"
        SubnetUSWest2c: "subnet-xxxxxxxx"
    
    Resources:
      FileSystem:
        Type: AWS::EFS::FileSystem
        Properties:
          FileSystemTags:
          - Key: Name
            Value:
              Fn::GetOptionSetting: {OptionName: EFSVolumeName, DefaultValue: "EB_EFS_Volume"}
      MountTargetSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: Security group for mount target
          SecurityGroupIngress:
          - FromPort: '2049'
            IpProtocol: tcp
            SourceSecurityGroupId:
              Fn::GetAtt: [AWSEBSecurityGroup, GroupId]
            ToPort: '2049'
          VpcId:
            Fn::GetOptionSetting: {OptionName: VPCId}
      MountTargetUSWest2a:
        Type: AWS::EFS::MountTarget
        Properties:
          FileSystemId: {Ref: FileSystem}
          SecurityGroups:
          - {Ref: MountTargetSecurityGroup}
          SubnetId:
            Fn::GetOptionSetting: {OptionName: SubnetUSWest2a}
      MountTargetUSWest2b:
        Type: AWS::EFS::MountTarget
        Properties:
          FileSystemId: {Ref: FileSystem}
          SecurityGroups:
          - {Ref: MountTargetSecurityGroup}
          SubnetId:
            Fn::GetOptionSetting: {OptionName: SubnetUSWest2b}
      MountTargetUSWest2c:
        Type: AWS::EFS::MountTarget
        Properties:
          FileSystemId: {Ref: FileSystem}
          SecurityGroups:
          - {Ref: MountTargetSecurityGroup}
          SubnetId:
            Fn::GetOptionSetting: {OptionName: SubnetUSWest2c}
    
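    Once the environment is up, you can verify that the file system, its mount targets, and the mount on the instance actually exist. These checks are optional troubleshooting steps, not part of the deployment itself; replace fs-xxxxxxxx with your file system ID:

    # From your workstation: list EFS file systems and their mount targets
    aws efs describe-file-systems --query 'FileSystems[].FileSystemId' --output text
    aws efs describe-mount-targets --file-system-id fs-xxxxxxxx

    # On the EC2 instance (eb ssh): confirm the directory is really an NFS mount
    mountpoint /efs_volume
    df -hT /efs_volume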

    Resources:

    • For the mounting issue: https://forums.aws.amazon.com/message.jspa?messageID=730288#730555
    • For the network port mapping issue: https://github.com/docker/docker/issues/7856
    • For the EFS mount script: https://forums.aws.amazon.com/thread.jspa?messageID=733181
