Running EC2 instance suddenly refuses SSH connection


With the help of @abhi.gupta200297, we were able to resolve it.

The issue was an error in /etc/fstab: sshd is supposed to start only after the fstab mounts succeed, but they didn't, so sshd never started and the instance refused the connection. The solution was to create a temporary instance, attach and mount the root EBS volume from the original instance, and comment out the offending entries in its fstab. After that, the instance let me connect again. For the future, I stopped using fstab for these volumes altogether: I put the mount commands for the EBS volumes into an /etc/init.d/ebs-init-mount script, ran update-rc.d ebs-init-mount defaults to register it, and I'm no longer getting locked out of SSH.
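
For reference, here is a minimal sketch of the kind of init script I mean; the device names (/dev/xvdf, /dev/xvdg) and mount points (/data, /backup) are placeholders, so adjust them to your own volumes:

    #!/bin/sh
    ### BEGIN INIT INFO
    # Provides:          ebs-init-mount
    # Required-Start:    $local_fs
    # Required-Stop:
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: Mount EBS volumes at boot without relying on /etc/fstab
    ### END INIT INFO

    # Mount each EBS volume; a failed mount here just prints an error
    # instead of blocking the boot sequence (and sshd) the way a bad fstab entry can.
    mount /dev/xvdf /data || true
    mount /dev/xvdg /backup || true

Save it as /etc/init.d/ebs-init-mount, make it executable with chmod +x, and register it with update-rc.d ebs-init-mount defaults.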

UPDATE 4/23/2015

The Amazon team created a video tutorial on a similar issue that shows how to debug it using this method: https://www.youtube.com/watch?v=_P29ZHu_feU


Looks like sshd might have stopped for some reason. Is the instance EBS-backed? If that's the case, try stopping it and starting it back up. That should solve the problem.
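
If you prefer the command line, the stop/start cycle can also be done with the AWS CLI; a rough equivalent would look something like this (the instance ID is a placeholder):

    # A full stop followed by a start (not a reboot) typically moves an
    # EBS-backed instance onto different underlying hardware.
    aws ec2 stop-instances --instance-ids i-0123456789abcdef0
    aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
    aws ec2 start-instances --instance-ids i-0123456789abcdef0

Keep in mind that the public IP address changes after a stop/start unless the instance has an Elastic IP attached.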

Also, are you able to SSH from the AWS web console? It has a Java plugin to SSH into the instance.


For those of you who came across this post because you are unable to SSH into your EC2 instance after a reboot, this is cross-posted to a similar question on Server Fault:

From the AWS Developer Forum post on this topic:

Try stopping the broken instance, detaching the EBS volume, and attaching it as a secondary volume to another instance. Once you've mounted the broken volume somewhere on the other instance, check the /etc/ssh/sshd_config file (near the bottom). I had a few RHEL instances where Yum scrogged the sshd_config, inserting duplicate lines at the bottom that caused sshd to fail on startup because of syntax errors.

Once you've fixed it, just unmount the volume, detach it, reattach it to your original instance, and fire it back up again.

Let's break this down, step by step:

  1. Stop the broken instance and detach the EBS (root) volume by going into the EC2 Management Console, clicking "Elastic Block Store" > "Volumes", then right-clicking the volume associated with the instance you stopped.
  2. Start a new instance in the same region and with the same OS as the broken instance, then attach the original EBS root volume as a secondary volume to your new instance. The commands in step 4 below assume you mount the volume to a folder called "data".
  3. Once you've mounted the broken volume somewhere on the other instance,
  4. check the "/etc/ssh/sshd_config" file for the duplicate entries by issuing these commands (a condensed shell version of steps 2-5 follows this list):
    • cd /etc/ssh
    • sudo nano sshd_config
    • ctrl-v repeatedly to page down to the bottom of the file
    • ctrl-k to cut each of the lines at the bottom mentioning "PermitRootLogin without-password" and "UseDNS no"
    • ctrl-x, then Y, to save and exit the edited file
  5. @Telegard points out (in his comment) that we've only fixed the symptom. We can fix the cause by commenting out the 3 related lines in the "/etc/rc.local" file. So:
    • cd /etc
    • sudo nano rc.local
    • look for the "PermitRootLogin..." lines and comment them out or delete them
    • ctrl-x, then Y, to save and exit the edited file
  6. Once you've fixed it, just unmount the volume,
  7. detach it by going into the EC2 Management Console, clicking "Elastic Block Store" > "Volumes", then right-clicking the volume you just repaired,
  8. reattach it to your original (broken) instance and
  9. fire it back up again.
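
Here is a condensed version of steps 2-5 as shell commands run on the rescue instance. It assumes the broken root volume shows up as /dev/xvdf1 and gets mounted at /data; actual device names vary by AMI and virtualization type, so check lsblk first:

    # Identify the attached volume and its partitions
    lsblk

    # Mount the broken instance's root filesystem
    sudo mkdir -p /data
    sudo mount /dev/xvdf1 /data

    # Look for the duplicated lines at the bottom of sshd_config, then edit them out
    grep -nE "PermitRootLogin|UseDNS" /data/etc/ssh/sshd_config
    sudo nano /data/etc/ssh/sshd_config

    # Step 5: the lines that keep re-appending these entries usually live in rc.local
    grep -n "sshd_config" /data/etc/rc.local
    sudo nano /data/etc/rc.local

    # Optional sanity check of the edited config (host-key paths resolve against the rescue instance)
    sudo sshd -t -f /data/etc/ssh/sshd_config

    # Unmount before detaching the volume in the console
    sudo umount /data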