Does K.function method of Keras with Tensorflow backend work with network layers?

Asked by 囚心锁ツ on 2021-02-02 04:09

I recently started using Keras to build neural networks. I built a simple CNN to classify the MNIST dataset. Before training the model I used K.set_image_dim_ordering('t

2 Answers
  •  感动是毒 · 2021-02-02 04:59

    I think you can also use K.function to get gradients.

    self.action_gradients = K.gradients(Q_values, actions)
    self.get_action_gradients = K.function(
        inputs=[*self.model.input, K.learning_phase()],
        outputs=self.action_gradients)
    

    which basically runs the graph to obtain the Q-values so that the gradient of the Q-value with respect to the action vector can be computed, as needed in DDPG. Source code here (lines 64 to 70): https://github.com/nyck33/autonomous_quadcopter/blob/master/criticSolution.py#L65
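    Here is a minimal, self-contained sketch of that pattern (my own toy example, not the project's code): a small critic-style model with state and action inputs, and a K.function that returns dQ/da. It assumes the TF 1.x graph-mode backend (in TF 2.x you would first call tf.compat.v1.disable_eager_execution()), and all layer sizes are arbitrary.

    import numpy as np
    from keras import layers, models, backend as K

    states = layers.Input(shape=(8,), name='states')
    actions = layers.Input(shape=(2,), name='actions')
    net = layers.Concatenate()([states, actions])
    net = layers.Dense(32, activation='relu')(net)
    Q_values = layers.Dense(1, name='q_values')(net)

    model = models.Model(inputs=[states, actions], outputs=Q_values)

    # dQ/da: gradient of the Q-value output w.r.t. the action input tensor
    action_gradients = K.gradients(Q_values, actions)

    # Graph-running function: feed states, actions and the learning phase,
    # get back the action gradients.
    get_action_gradients = K.function(
        inputs=[*model.input, K.learning_phase()],
        outputs=action_gradients)

    # Usage: 0 = test phase (no dropout / batch-norm updates)
    s = np.random.rand(5, 8).astype('float32')
    a = np.random.rand(5, 2).astype('float32')
    grads = get_action_gradients([s, a, 0])[0]   # shape (5, 2)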

    In light of the accepted answer and this usage (originally from project 5, the autonomous quadcopter, in the Udacity Deep Learning nanodegree), a question remains in my mind: can K.function() be used fairly flexibly to run the graph and to designate as its outputs, for example, the outputs of a particular layer, gradients, or even the weights themselves? A sketch of that idea follows below.
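    The following is a quick sketch of that flexibility (again my own example, under the same TF 1.x graph-mode assumption): the outputs of K.function can be any tensors in the graph, e.g. an intermediate layer's activations, its kernel, or gradients with respect to the trainable weights.

    import numpy as np
    from keras import layers, models, backend as K

    inp = layers.Input(shape=(4,))
    hidden = layers.Dense(3, activation='relu', name='hidden')(inp)
    out = layers.Dense(1)(hidden)
    model = models.Model(inp, out)

    hidden_layer = model.get_layer('hidden')
    weight_grads = K.gradients(model.output, model.trainable_weights)

    # One function that returns the hidden activations, the hidden kernel,
    # and the gradients of the output w.r.t. every trainable weight.
    inspect = K.function(
        inputs=[model.input, K.learning_phase()],
        outputs=[hidden_layer.output, hidden_layer.kernel] + weight_grads)

    x = np.random.rand(2, 4).astype('float32')
    results = inspect([x, 0])
    hidden_activations, hidden_kernel = results[0], results[1]
    grads_per_weight = results[2:]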

    Lines 64 to 67 here: https://github.com/nyck33/autonomous_quadcopter/blob/master/actorSolution.py

    It is being used as a custom training function for the actor network in DDPG:

    # caller
    self.actor_local.train_fn([states, action_gradients, 1])

    # called (definition)
    self.train_fn = K.function(
        inputs=[self.model.input, action_gradients, K.learning_phase()],
        outputs=[],
        updates=updates_op)
    

    outputs is given an empty list because we merely want to train the actor network using the action_gradients supplied by the critic network; the parameter updates themselves are applied through updates_op.
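    For completeness, here is a hedged sketch of how such a train_fn can be assembled end to end (paraphrasing the actor setup rather than copying the project's exact code; the loss, layer sizes and learning rate are placeholders, and it assumes a standalone Keras 2.x / TF 1.x setup where optimizer.get_updates(params=..., loss=...) and the lr argument are available). Because the optimizer's update ops are attached via updates, calling the function performs one training step even though outputs is empty.

    import numpy as np
    from keras import layers, models, optimizers, backend as K

    states = layers.Input(shape=(8,), name='states')
    net = layers.Dense(32, activation='relu')(states)
    actions = layers.Dense(2, activation='tanh', name='actions')(net)
    model = models.Model(inputs=states, outputs=actions)

    # Placeholder for the action gradients produced by the critic
    action_gradients = layers.Input(shape=(2,), name='action_gradients')

    # DDPG-style actor loss: follow the critic's gradient, i.e. minimise -dQ/da * a
    loss = K.mean(-action_gradients * actions)

    optimizer = optimizers.Adam(lr=1e-4)
    updates_op = optimizer.get_updates(params=model.trainable_weights, loss=loss)

    train_fn = K.function(
        inputs=[model.input, action_gradients, K.learning_phase()],
        outputs=[],
        updates=updates_op)

    # One training step: 1 = training phase
    s = np.random.rand(5, 8).astype('float32')
    g = np.random.rand(5, 2).astype('float32')
    train_fn([s, g, 1])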
