Locked myself out of SSH with UFW in EC2 AWS

Backend · Unresolved · 6 answers · 899 views
孤城傲影 2020-12-04 14:38

I have an EC2 instance running Ubuntu. I ran sudo ufw enable and then allowed only the MongoDB port:

sudo ufw allow 27017

When my SSH session dropped, I could no longer reconnect: I had never allowed port 22, so UFW now blocks all incoming SSH traffic. How can I get back into the instance?
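
For reference, the lockout happens because sudo ufw enable applies a default deny-incoming policy, so every port that is not explicitly allowed (including 22 for SSH) is dropped. A minimal sketch of the ordering that avoids this, for next time:

    # Allow SSH first, then the application port, then enable the firewall
    sudo ufw allow OpenSSH      # or: sudo ufw allow 22/tcp
    sudo ufw allow 27017/tcp    # MongoDB
    sudo ufw enable
    sudo ufw status verbose     # confirm both 22 and 27017 are allowed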

6 Answers
  • 2020-12-04 15:03

    Other approaches didn't work for me. My EC2 instance is based on a Bitnami image, and attaching the volume to another instance didn't work because of Marketplace locks.

    So instead, stop the problem instance and paste the script below under Instance Settings > View/Change User Data.

    This approach does not require detaching the volume, so it is more straightforward than the others.

    
    Content-Type: multipart/mixed; boundary="//"
    MIME-Version: 1.0
    --//
    Content-Type: text/cloud-config; charset="us-ascii"
    MIME-Version: 1.0
    Content-Transfer-Encoding: 7bit
    Content-Disposition: attachment; filename="cloud-config.txt"
    #cloud-config
    cloud_final_modules:
    - [scripts-user, always]
    --//
    Content-Type: text/x-shellscript; charset="us-ascii"
    MIME-Version: 1.0
    Content-Transfer-Encoding: 7bit
    Content-Disposition: attachment; filename="userdata.txt"
    #!/bin/bash
    ufw disable
    iptables -L
    iptables -F
    --//
    

    You must stop the instance before pasting this; afterwards, start the instance and you should be able to SSH in.
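
    The console route is the simplest, but the same change can be scripted with the AWS CLI. A sketch, assuming a placeholder instance ID and that userdata.txt holds the multipart payload above (the userData attribute expects base64-encoded content, though some CLI versions encode blob arguments for you):

        aws ec2 stop-instances --instance-ids i-0123456789abcdef0
        aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
        # encode the payload; skip this step if your CLI version encodes blobs itself
        base64 -w0 userdata.txt > userdata.b64
        aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
            --attribute userData --value file://userdata.b64
        aws ec2 start-instances --instance-ids i-0123456789abcdef0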

  • 2020-12-04 15:03

    I know this is an old question, but I fixed mine by adding commands to View/Change User Data using bootcmd.

    I first stopped my instance

    Then I added this to the User Data:

    #cloud-config
    bootcmd:
     - cloud-init-per always fix_broken_ufw_1 sh -xc "/usr/sbin/service ufw stop >> /var/tmp/svc_$INSTANCE_ID 2>&1 || true"
     - cloud-init-per always fix_broken_ufw_2 sh -xc "/usr/sbin/ufw disable >> /var/tmp/ufw_$INSTANCE_ID 2>&1 || true"
    

    Note: my instance is Ubuntu. Unlike a plain user-data script, which runs only on first boot unless scripts-user is forced to always (as in the other answers), bootcmd entries wrapped in cloud-init-per always run early on every boot, so the firewall is disabled before you try to reconnect.

  • 2020-12-04 15:06
    • Launch another EC2 instance. The best way to accomplish this is to use EC2's "Launch More Like This" feature, which ensures that the OS type, security group, and other attributes are the same, saving a bit of setup time.
    • Stop the problem instance
    • Detach volume from problem instance
    • Attach volume to new instance

    Note: Newer Linux kernels may rename your devices to /dev/xvdf through /dev/xvdp internally, even when the device name entered is /dev/sdf through /dev/sdp.

    • Mount the volume
    cd ~
    mkdir lnx1
    sudo mount /dev/xvdf ./lnx1
    
    • Disable UFW
    cd lnx1/etc/ufw
    sudo vim ufw.conf
    

    Now find ENABLED=yes and change it to ENABLED=no.
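
    If you prefer a non-interactive edit, a sed one-liner should do the same thing (the path assumes the volume is mounted at ~/lnx1 as above):

        sudo sed -i 's/^ENABLED=yes/ENABLED=no/' ~/lnx1/etc/ufw/ufw.conf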

    • Detach volume

    Be sure to unmount the volume first:

    cd ~
    sudo umount ./lnx1/
    
    • Reattach the volume to /dev/sda1 on our problem instance
    • Boot problem instance
    • Reassign elastic IP address if necessary
    • Delete the temporary instance and its associated volume

    And you're good to go.
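
    For reference, the volume shuffle itself can also be scripted with the AWS CLI; a sketch, where every ID is a placeholder to substitute with your own:

        # move the root volume from the problem instance to the recovery instance
        aws ec2 stop-instances --instance-ids i-PROBLEM
        aws ec2 wait instance-stopped --instance-ids i-PROBLEM
        aws ec2 detach-volume --volume-id vol-PROBLEMROOT
        aws ec2 attach-volume --volume-id vol-PROBLEMROOT \
            --instance-id i-RECOVERY --device /dev/sdf
        # ...mount, fix ufw.conf, and unmount as above, then reverse the move
        aws ec2 detach-volume --volume-id vol-PROBLEMROOT
        aws ec2 attach-volume --volume-id vol-PROBLEMROOT \
            --instance-id i-PROBLEM --device /dev/sda1
        aws ec2 start-instances --instance-ids i-PROBLEM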

  • 2020-12-04 15:21

    I had the same problem and found that these steps work:

    1- Stop your instance

    2- Go to Instance Settings -> View/Change User Data

    3- Paste this

    Content-Type: multipart/mixed; boundary="//"
    MIME-Version: 1.0
    --//
    Content-Type: text/cloud-config; charset="us-ascii"
    MIME-Version: 1.0
    Content-Transfer-Encoding: 7bit
    Content-Disposition: attachment; filename="cloud-config.txt"
    #cloud-config
    cloud_final_modules:
    - [scripts-user, always]
    --//
    Content-Type: text/x-shellscript; charset="us-ascii"
    MIME-Version: 1.0
    Content-Transfer-Encoding: 7bit
    Content-Disposition: attachment; filename="userdata.txt"
    #!/bin/bash
    ufw disable
    iptables -L
    iptables -F
    --//
    

    4- Start your instance

    Hope it works for you

  • 2020-12-04 15:21

    Here's a slightly extended version of the user-data script: besides disabling UFW and flushing iptables, it also restores sane permissions on the home directory and .ssh paths and comments out any active entries in /etc/hosts.deny and /etc/hosts.allow, both of which can also lock you out of SSH:

    Content-Type: multipart/mixed; boundary="//"
    MIME-Version: 1.0
    
    --//
    Content-Type: text/cloud-config; charset="us-ascii"
    MIME-Version: 1.0
    Content-Transfer-Encoding: 7bit
    Content-Disposition: attachment; filename="cloud-config.txt"
    
    #cloud-config
    cloud_final_modules:
    - [scripts-user, always]
    
    --//
    Content-Type: text/x-shellscript; charset="us-ascii"
    MIME-Version: 1.0
    Content-Transfer-Encoding: 7bit
    Content-Disposition: attachment; filename="userdata.txt"
    
    #!/bin/bash
    set -x
    USERNAME="ubuntu"
    ls -Al
    ls -Al /home
    ls -Al /home/${USERNAME}
    ls -Al /home/${USERNAME}/.ssh
    sudo cat /home/${USERNAME}/.ssh/authorized_keys
    ls -Al /etc/ssh
    ls -ld /etc/ssh
    
    sudo grep -vE '^$|^#' /etc/hosts.*
    sudo sed -i -e 's/^\([^#].*\)/# \1/g' /etc/hosts.deny
    sudo sed -i -e 's/^\([^#].*\)/# \1/g' /etc/hosts.allow
    sudo grep -vE '^$|^#' /etc/hosts.*
    sed '/^$\|^#/d' /etc/ssh/sshd_config
    
    chown -v root:root /home
    chmod -v 755 /home
    chown -v ${USERNAME}:${USERNAME} /home/${USERNAME} -R
    chmod -v 700 /home/${USERNAME}
    chmod -v 700 /home/${USERNAME}/.ssh
    chmod -v 600 /home/${USERNAME}/.ssh/authorized_keys
    
    sudo tail /var/log/auth.log
    sudo ufw status numbered
    sudo ufw disable
    sudo iptables -F
    sudo service iptables stop
    sudo service sshd restart
    sudo service sshd status -l
    --//
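
    Because the script runs with set -x, its full trace is captured; on Ubuntu cloud images the user-data output lands in cloud-init's log, which you can review once you're back in:

        sudo less /var/log/cloud-init-output.log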
    
  • 2020-12-04 15:29

    # Update

    The easiest way is to update the instance's user data:

    • Stop your instance
    • In the console, select your instance, go to Actions -> Instance Settings -> View/Change User Data, and paste this:

    Content-Type: multipart/mixed; boundary="//"
    MIME-Version: 1.0
    --//
    Content-Type: text/cloud-config; charset="us-ascii"
    MIME-Version: 1.0
    Content-Transfer-Encoding: 7bit
    Content-Disposition: attachment; filename="cloud-config.txt"
    #cloud-config
    cloud_final_modules:
    - [scripts-user, always]
    --//
    Content-Type: text/x-shellscript; charset="us-ascii"
    MIME-Version: 1.0
    Content-Transfer-Encoding: 7bit
    Content-Disposition: attachment; filename="userdata.txt"
    #!/bin/bash
    ufw disable
    iptables -L
    iptables -F
    --//
    
    • Once added, start the instance and SSH should work. The user data disables UFW if it is enabled and also flushes any iptables rules that were blocking SSH access; you can verify both with the commands below.
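
    After the instance boots with this user data, a quick sanity check that the firewall really is out of the way (ufw should report inactive, and the flushed iptables chains should be empty with ACCEPT policies):

        sudo ufw status        # expect: Status: inactive
        sudo iptables -L -n    # expect: empty INPUT/FORWARD/OUTPUT chains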

    Source here

    # Old Answer

    • Launch a new instance (recovery instance).

    • Stop the original instance (DO NOT TERMINATE)

    • Detach the volume (problem volume) from the original instance

    • Attach it to the recovery instance as /dev/sdf.

    • Log in to the recovery instance via SSH/PuTTY.

    • Run sudo lsblk to display the attached volumes and confirm the name of the problem volume. It usually begins with /dev/xvdf; mine was /dev/xvdf1.

    • Mount problem volume.

        $ sudo mount /dev/xvdf1 /mnt
        $ cd /mnt/etc/ufw
      
    • Open ufw configuration file

        $ sudo vim ufw.conf
      
    • Press i to edit the file.

    • Change ENABLED=yes to ENABLED=no

    • Press Esc, then type :wq and press Enter to save the file and quit.

    • Display the contents of the ufw config file using the command below and confirm that ENABLED=yes has been changed to ENABLED=no:

        $ sudo cat ufw.conf 
      
    • Unmount volume

        $ cd ~
        $ sudo umount /mnt
      
    • Detach problem volume from recovery instance and re-attach it to the original instance as /dev/sda1.

    • Start the original instance and you should be able to log back in.

    Source: here
