ml-engine vague error: “grpc epoll fd: 3”

Asked by 好久不见 · Submitted on 2019-12-10 10:15:12

Question


I'm trying to train with `gcloud ml-engine jobs submit training`, and the job gets stuck with the following line in the logs: "grpc epoll fd: 3"

My config.yaml:

trainingInput:
  scaleTier: CUSTOM
  masterType: standard_gpu
  workerType: standard_gpu
  parameterServerType: large_model
  workerCount: 1
  parameterServerCount: 1

Any hints about what "grpc epoll fd: 3" means and how to fix it? My input function feeds a 16 GB TFRecord from gs://, but with batch size 4 and shuffle buffer_size 4. Each input sample is a single-channel 99 × 161 px image, shape (15939,), so not huge.

Thanks


Answer 1:


Maybe this is a bug in the Estimator implementation, I'm not sure. The workaround for now is to use tf.estimator.train_and_evaluate, as suggested by @guoqing-xu.

Working sample

import tensorflow as tf

# gen_input, model_fn, model_dir, and FLAGS are defined elsewhere in the
# training script.
train_input_fn = gen_input(FLAGS.train_input)
eval_input_fn = gen_input(FLAGS.eval_input)

model_params = {
    'learning_rate': FLAGS.learning_rate,
}

estimator = tf.estimator.Estimator(
    model_dir=model_dir, model_fn=model_fn, params=model_params)
train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=1000)
# steps=None evaluates over the full eval set; throttle_secs limits how
# often evaluation can run.
eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn, steps=None, start_delay_secs=30, throttle_secs=30)

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
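For reference, here is a minimal sketch of what a `gen_input` helper like the one above might look like, built on tf.data. The feature names (`'image'`, `'label'`), dtypes, and the flattened shape (15939,) are assumptions based on the question, since the original helper isn't shown:

```python
import tensorflow as tf

def gen_input(filenames, batch_size=4, shuffle_buffer=4):
    """Return an Estimator-style input_fn reading from TFRecord files.

    Assumes each serialized tf.train.Example holds a flattened
    99 x 161 single-channel image under 'image' and an int64 'label';
    the real feature names and dtypes may differ.
    """
    feature_spec = {
        'image': tf.io.FixedLenFeature([15939], tf.float32),
        'label': tf.io.FixedLenFeature([], tf.int64),
    }

    def _parse(record):
        parsed = tf.io.parse_single_example(record, feature_spec)
        return {'image': parsed['image']}, parsed['label']

    def input_fn():
        dataset = tf.data.TFRecordDataset(filenames)
        dataset = dataset.map(_parse)
        dataset = dataset.shuffle(shuffle_buffer)
        dataset = dataset.repeat()  # loop forever; TrainSpec's max_steps stops training
        dataset = dataset.batch(batch_size)
        return dataset

    return input_fn
```

Note that `input_fn` returns the dataset itself rather than tensors, which train_and_evaluate accepts directly.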


Source: https://stackoverflow.com/questions/47875972/ml-engine-vague-error-grpc-epoll-fd-3
