Second derivative in Keras

伪装坚强ぢ · 2020-12-09 23:15

For a custom loss for an NN I use the function . u, given a pair (t, x), both points in an interval, is the output of my NN. The problem is I'm stuck at how

2 Answers
  •  生来不讨喜
    2020-12-09 23:54

    The solution posted by Peter Szoldan is an excellent one, but it seems the way keras.layers.Input() takes arguments has changed in the latest version with the TF2 backend. The following simple fix works, though:

    import tensorflow as tf
    from tensorflow import keras
    from tensorflow.keras import backend as K
    import numpy as np
    
    class CustomModel(tf.keras.Model):
    
        def __init__(self):
            super(CustomModel, self).__init__()
            # First layer: elementwise log(x + 2), wrapped in a Lambda layer
            self.input_layer = keras.layers.Lambda(lambda x: K.log(x + 2))
    
        def findGrad(self, func, argm):
            # Differentiate `func` with respect to `argm` inside a Lambda layer
            return keras.layers.Lambda(lambda x: K.gradients(x[0], x[1]))([func, argm])
    
        def call(self, inputs):
            log_layer = self.input_layer(inputs)
            gradient_layer = self.findGrad(log_layer, inputs)      # first derivative
            hessian_layer = self.findGrad(gradient_layer, inputs)  # second derivative
            return hessian_layer
    
    
    custom_model = CustomModel()
    x = np.array([[0.],
                  [1.],
                  [2.]])
    custom_model.predict(x)
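    
    For reference, predict here returns the second derivative of log(x + 2), which is -1/(x + 2)^2, so the result can be checked against the closed form (a quick sketch reusing the x above):
    
    # Closed-form second derivative of log(x + 2) for comparison:
    # d^2/dx^2 log(x + 2) = -1 / (x + 2)^2
    expected = -1.0 / (x + 2) ** 2    # approx. [[-0.25], [-0.1111], [-0.0625]]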
    
    
    • Data flow through the layers: input -> Lambda layer applying log(x + 2) -> Lambda layer applying the gradient -> one more Lambda layer applying the gradient -> output.
    • Note that this solution is for a general custom model; if you are using the functional API, it should be similar.
    • If you are using the tf backend, then using tf.hessians instead of applying K.gradients twice will work as well (see the sketch after this list for an eager-mode tf.GradientTape variant).
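    
    A minimal sketch of that eager-mode variant, using nested tf.GradientTape on the same log(x + 2) function (an alternative technique, not part of the original answer; tf.hessians itself only works in graph mode, since it builds on tf.gradients):
    
    import tensorflow as tf
    
    x = tf.constant([[0.], [1.], [2.]])
    with tf.GradientTape() as outer_tape:
        outer_tape.watch(x)                      # x is a constant, so watch it explicitly
        with tf.GradientTape() as inner_tape:
            inner_tape.watch(x)
            y = tf.math.log(x + 2)               # same function as the Lambda input layer
        dy_dx = inner_tape.gradient(y, x)        # first derivative: 1 / (x + 2)
    d2y_dx2 = outer_tape.gradient(dy_dx, x)      # second derivative: -1 / (x + 2)**2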
