How to prevent tensorflow from allocating the totality of a GPU memory?

南旧 2020-11-22 04:26

I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each.

For small to moderately sized models, the 12 GB of the Titan X is usually enough for 2-3 people to run training concurrently on the same GPU. The problem with TensorFlow is that, by default, it allocates the full amount of available GPU memory when it is launched; even for a small two-layer neural network, all 12 GB of GPU memory gets used up. Is there a way to make TensorFlow only allocate, say, 4 GB of GPU memory, if one knows that this is enough for a given model?

16 Answers
  •  萌比男神i
    2020-11-22 04:56

    All the answers above assume execution with a sess.run() call, which is becoming the exception rather than the rule in recent versions of TensorFlow.
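
    For context, the classic graph-mode pattern those answers rely on looks like this (a minimal sketch using the standard TF 1.x session API; the 0.333 fraction is just an example value):

    import tensorflow as tf

    # Cap this process at roughly one third of each visible GPU's memory
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
    # ... build a graph, then call sess.run(...) as usual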

    When using the tf.Estimator framework (TensorFlow 1.4 and above), the way to pass the fraction along to the implicitly created MonitoredTrainingSession is:

    # Reserve roughly one third of each visible GPU's memory for this process
    opts = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
    conf = tf.ConfigProto(gpu_options=opts)
    # The Estimator forwards session_config to its MonitoredTrainingSession
    trainingConfig = tf.estimator.RunConfig(session_config=conf, ...)
    tf.estimator.Estimator(model_fn=...,
                           config=trainingConfig)
    
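    If you would rather not commit to a fixed fraction, the same GPUOptions also supports allow_growth, which starts small and grows allocations on demand (a sketch of the same RunConfig wiring):

    # Alternative: let TensorFlow grow GPU memory usage on demand
    opts = tf.GPUOptions(allow_growth=True)
    conf = tf.ConfigProto(gpu_options=opts)
    trainingConfig = tf.estimator.RunConfig(session_config=conf)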

    Similarly, in Eager mode (TensorFlow 1.5 and above):

    import tensorflow.contrib.eager as tfe  # TF 1.5-1.6; TF 1.7+ also has tf.enable_eager_execution

    # Must run at program startup, before any other TensorFlow call
    opts = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
    conf = tf.ConfigProto(gpu_options=opts)
    tfe.enable_eager_execution(config=conf)
    

    Edit: 11-04-2018 As an example, if you use tf.contrib.gan.gan_train, then you can do something similar to below:

    tf.contrib.gan.gan_train(........, config=conf)
    
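    For completeness, on TensorFlow 2.x, where eager execution is the default and ConfigProto is gone, the equivalent cap is set per physical device (a sketch using the tf.config.experimental API; the 4096 MB limit is an example value):

    import tensorflow as tf

    gpus = tf.config.experimental.list_physical_devices('GPU')
    if gpus:
        # Cap the first GPU at 4 GB; must run before the GPU is initialized
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=4096)])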
