How to accumulate gradients in tensorflow?

小鲜肉 2021-01-04 08:24

I have a question similar to this one.

Because I have limited resources and I work with a deep model (VGG-16) - used to train a triplet network - I want to accumulate the gradients over several batches and then apply them in a single update.

2 Answers
  •  甜味超标
    2021-01-04 09:14

    TensorFlow 2.0 Compatible Answer: In line with Pop's answer mentioned above and the explanation provided on the TensorFlow website, below is the code for accumulating gradients in TensorFlow 2.0. The accumulator variables are created once, outside the training loop; each batch's gradients are added to them; and the accumulated gradients are applied (and the accumulators reset) every few batches:

    import tensorflow as tf

    # Create the accumulator variables once, outside the training loop,
    # so they persist across batches instead of being re-created each step.
    tvs = mnist_model.trainable_variables
    accum_vars = [tf.Variable(tf.zeros_like(tv), trainable=False) for tv in tvs]

    def train(epochs, accum_steps=4):
      for epoch in range(epochs):
        for batch, (images, labels) in enumerate(dataset):
          with tf.GradientTape() as tape:
            logits = mnist_model(images, training=True)
            loss_value = loss_object(labels, logits)

          loss_history.append(loss_value.numpy().mean())
          grads = tape.gradient(loss_value, tvs)

          # Add this batch's gradients to the accumulators.
          for accum_var, grad in zip(accum_vars, grads):
            accum_var.assign_add(grad)

          # Every accum_steps batches, apply the accumulated gradients
          # and reset the accumulators to zero.
          if (batch + 1) % accum_steps == 0:
            optimizer.apply_gradients(zip(accum_vars, tvs))
            for accum_var in accum_vars:
              accum_var.assign(tf.zeros_like(accum_var))

        print('Epoch {} finished'.format(epoch))

    # call the above function
    train(epochs=3)
    

    The complete code can be found in this GitHub Gist.
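
    For a larger model such as VGG-16, it can also help to trace the per-batch work with tf.function so it runs as a graph rather than eagerly. Below is a minimal sketch under the same assumptions as the snippet above (mnist_model, loss_object, optimizer, and the accum_vars list already exist); ACCUM_STEPS is a hypothetical accumulation window, and here the accumulated gradients are averaged before being applied so the update behaves like one large batch:

    ACCUM_STEPS = 4  # hypothetical: apply accumulated gradients every 4 batches

    @tf.function
    def accumulate_step(images, labels):
      # One forward/backward pass, traced into a graph on the first call.
      with tf.GradientTape() as tape:
        logits = mnist_model(images, training=True)
        loss_value = loss_object(labels, logits)
      grads = tape.gradient(loss_value, mnist_model.trainable_variables)
      for accum_var, grad in zip(accum_vars, grads):
        accum_var.assign_add(grad)
      return loss_value

    @tf.function
    def apply_step():
      # Average the accumulated gradients, apply them, then reset to zero.
      for accum_var in accum_vars:
        accum_var.assign(accum_var / ACCUM_STEPS)
      optimizer.apply_gradients(zip(accum_vars, mnist_model.trainable_variables))
      for accum_var in accum_vars:
        accum_var.assign(tf.zeros_like(accum_var))

    The training loop would then call accumulate_step(images, labels) on every batch and apply_step() once every ACCUM_STEPS batches; keeping the assign_add/assign operations inside tf.function avoids the Python-level overhead of the eager loop.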
