This question was asked before the documentation for save and restore was available.
For now I would consider this question deprecated and advise people to rely on the official documentation instead.
Using tf.train.MonitoredTrainingSession() helped me to resume my training when my machine restarted.
Things to keep in mind:
Make sure you are saving your checkpoints. In tf.train.Saver() you can specify max_to_keep to control how many checkpoints are retained.
Specify the checkpoint directory via tf.train.MonitoredTrainingSession(checkpoint_dir='dir_path', save_checkpoint_secs=...).
Based on the save_checkpoint_secs argument, the session keeps saving and updating checkpoints at that interval.
As long as checkpoints are saved regularly, MonitoredTrainingSession looks for the latest checkpoint in checkpoint_dir on startup and resumes training from there (see the sketch below).
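Here is a minimal sketch of that setup, assuming the TensorFlow 1.x API and a toy variable, loss, and train op; the path '/tmp/my_model', the 600-second interval, and the StopAtStepHook limit are placeholder values you would replace with your own:

```python
import tensorflow as tf  # TensorFlow 1.x API

# Toy graph just for illustration: one variable, a loss, and a train op.
global_step = tf.train.get_or_create_global_step()
w = tf.Variable(5.0, name='w')
loss = tf.square(w - 2.0)
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
    loss, global_step=global_step)

# Scaffold with a Saver that keeps only the 5 most recent checkpoints.
scaffold = tf.train.Scaffold(saver=tf.train.Saver(max_to_keep=5))

# checkpoint_dir and save_checkpoint_secs are the arguments mentioned above.
# If '/tmp/my_model' already contains a checkpoint, training resumes from it.
with tf.train.MonitoredTrainingSession(
        checkpoint_dir='/tmp/my_model',
        save_checkpoint_secs=600,   # write a checkpoint every 10 minutes
        scaffold=scaffold,
        hooks=[tf.train.StopAtStepHook(last_step=1000)]) as sess:
    while not sess.should_stop():
        sess.run(train_op)
```

With checkpoint_dir and save_checkpoint_secs set, the session saves checkpoints automatically, so no manual Saver.save() calls are needed; restarting the script with the same checkpoint_dir picks up from the latest checkpoint.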