For a custom loss for a NN I use the function . u, given a pair (t, x), both points in an interval, is the output of my NN. Problem is I'm stuck at how
The solution posted by Peter Szoldan is an excellent one, but it seems the way keras.layers.Input() takes in arguments has changed in the latest version with the tf2 backend. The following simple fix will work, though:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Lambda
import numpy as np

class CustomModel(tf.keras.Model):
    def __init__(self):
        super(CustomModel, self).__init__()
        # Toy "network": log(x + 2), whose derivatives are easy to check
        self.input_layer = Lambda(lambda x: K.log(x + 2))

    def findGrad(self, func, argm):
        # K.gradients returns a list of gradients of func w.r.t. argm
        return Lambda(lambda x: K.gradients(x[0], x[1]))([func, argm])

    def call(self, inputs):
        log_layer = self.input_layer(inputs)
        gradient_layer = self.findGrad(log_layer, inputs)      # first derivative
        hessian_layer = self.findGrad(gradient_layer, inputs)  # second derivative
        return hessian_layer

custom_model = CustomModel()
x = np.array([[0.],
              [1.],
              [2.]])
custom_model.predict(x)
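As a sanity check, the same first and second derivatives can be computed with tf.GradientTape, which is the idiomatic TF2 mechanism for differentiation (this is a minimal sketch using the same toy function log(x + 2), not part of the original answer):

```python
import tensorflow as tf
import numpy as np

# Nested tapes: the inner tape gives the first derivative,
# the outer tape differentiates it again for the second derivative.
x = tf.constant([[0.], [1.], [2.]])
with tf.GradientTape() as outer:
    outer.watch(x)
    with tf.GradientTape() as inner:
        inner.watch(x)
        y = tf.math.log(x + 2.0)
    grad = inner.gradient(y, x)   # d/dx log(x+2) = 1/(x+2)
hess = outer.gradient(grad, x)    # d^2/dx^2 log(x+2) = -1/(x+2)^2

print(hess.numpy())  # should match -1/(x+2)^2 at x = 0, 1, 2
```

If the Lambda/K.gradients model above is correct, its predict output should agree with this analytical result.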