Changing tf.Variable value in Estimator SessionRunHook

给你一囗甜甜゛ submitted on 2021-01-28 22:02:58

Question


I have a tf.Estimator whose model_fn contains a tf.Variable initialized to 1.0. I would like to change the variable value at every epoch based on the accuracy on the dev set. I implemented a SessionRunHook to achieve this, but when I try to change the value I receive the following error:

raise RuntimeError("Graph is finalized and cannot be modified.")

This is the code for the Hook:

    class DynamicWeightingHook(tf.train.SessionRunHook):
        def __init__(self, epoch_size, gamma_value):
            self.gamma = gamma_value
            self.epoch_size = epoch_size
            self.steps = 0

        def before_run(self, run_context):
            self.steps += 1

        def after_run(self, run_context, run_values):
            if self.steps % self.epoch_size == 0:  # end of epoch
                with tf.variable_scope("lambda_scope", reuse=True):
                    lambda_tensor = tf.get_variable("lambda_value")
                    tf.assign(lambda_tensor, self.gamma)
                    self.gamma += 0.1

I understand the Graph is finalized when I run the hook, but I would like to know if there's any other way to change a variable value in the model_fn graph with the Estimator API during training.


Answer 1:


The way your hook is set up right now, you are essentially trying to create new variables/ops after each session run. Instead, define the tf.assign op beforehand and pass it to the hook so that the hook can run it when needed, or create the assign op in the hook's __init__ (before the graph is finalized). You can access the session inside after_run via the run_context argument. So something like:

class DynamicWeightingHook(tf.train.SessionRunHook):
    def __init__(self, epoch_size, gamma_value, lambda_tensor):
        self.gamma = gamma_value
        self.epoch_size = epoch_size
        self.steps = 0
        self.update_op = tf.assign(lambda_tensor, self.gamma)

    def before_run(self, run_context):
        self.steps += 1

    def after_run(self, run_context, run_values):
        if self.steps % self.epoch_size == 0:  # end of epoch
            run_context.session.run(self.update_op)
            self.gamma += 0.1

There are some caveats here. For one, I'm not sure whether you can do tf.assign with a plain Python number like this, i.e. whether the op will pick up the new value once gamma is changed. If it doesn't, you could try this:

class DynamicWeightingHook(tf.train.SessionRunHook):
    def __init__(self, epoch_size, gamma_value, lambda_tensor):
        self.gamma = gamma_value
        self.epoch_size = epoch_size
        self.steps = 0
        self.gamma_placeholder = tf.placeholder(tf.float32, [])
        self.update_op = tf.assign(lambda_tensor, self.gamma_placeholder)

    def before_run(self, run_context):
        self.steps += 1

    def after_run(self, run_context, run_values):
        if self.steps % self.epoch_size == 0:  # end of epoch
            run_context.session.run(self.update_op, feed_dict={self.gamma_placeholder: self.gamma})
            self.gamma += 0.1

Here, we use an additional placeholder to be able to pass the "current" gamma to the assign op at all times.
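The timing is the crucial part: the assign op and its placeholder must be created while the graph is still mutable, because once the Estimator finalizes the graph the hook can only run ops that already exist. A minimal sketch of that lifecycle, stripped down to a raw graph and session (written with tf.compat.v1 so it also runs under TF 2.x; the variable name lambda_value mirrors the question, the rest is illustrative):

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

graph = tf1.Graph()
with graph.as_default():
    # Build the variable, placeholder, and assign op while the graph is mutable.
    lambda_var = tf1.get_variable("lambda_value", initializer=1.0)
    gamma_ph = tf1.placeholder(tf.float32, [])
    update_op = tf1.assign(lambda_var, gamma_ph)  # created BEFORE finalize
    init = tf1.global_variables_initializer()

graph.finalize()  # the Estimator does this before training starts

with tf1.Session(graph=graph) as sess:
    sess.run(init)
    # Running the pre-built op is fine: no new ops are added to the graph.
    sess.run(update_op, feed_dict={gamma_ph: 1.1})
    new_value = sess.run(lambda_var)
```

Trying to call tf1.assign after graph.finalize() would instead raise the "Graph is finalized and cannot be modified" error from the question.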

Second, since the hook needs access to the variable, you would need to define the hook inside the model function. You can then pass it to the training process via the training_hooks argument of tf.estimator.EstimatorSpec.



Source: https://stackoverflow.com/questions/51784864/changing-tf-variable-value-in-estimator-sessionrunhook
