Keras Regression to approximate function (goal: loss < 1e-7)


Question


I'm working on a neural network that approximates a function f(X)=y, where X is a vector [x0, .., xn] and y is in [-inf, +inf]. The approximation needs an accuracy (sum of errors) around 1e-8. In fact, I need my neural network to overfit.

X is composed of random points in the interval [-500, 500]. Before feeding these points into the input layer I normalized them to [0, 1].

I use Keras as follows:

# imports added for completeness; `init=` is the old Keras 1.x
# spelling of `kernel_initializer=`
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation

dimension = 10  # example

self.model = Sequential()
self.model.add(Dense(128, input_shape=(dimension,), kernel_initializer='uniform', activation='relu'))
self.model.add(Dropout(.2))
self.model.add(Activation("linear"))  # identity activation; a no-op after the relu Dense
self.model.add(Dense(64, kernel_initializer='uniform', activation='relu'))
self.model.add(Activation("linear"))  # no-op
self.model.add(Dense(64, kernel_initializer='uniform', activation='relu'))
self.model.add(Dense(1))

from sklearn import preprocessing

X_scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))
y_scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))

X_scaled = X_scaler.fit_transform(train_dataset)
y_scaled = y_scaler.fit_transform(train_labels.reshape(-1, 1))  # MinMaxScaler expects 2-D input

self.model.compile(loss='mse', optimizer='adam')
self.model.fit(X_scaled, y_scaled, epochs=10000, batch_size=10, verbose=1)
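Since the targets are scaled before training, the model's predictions live in [0, 1] space; mapping them back to original units goes through the same scaler's `inverse_transform`. A minimal sketch with scikit-learn and toy labels (the values here are illustrative, not from the question):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

y = np.array([-1000.0, 0.0, 500.0, 2000.0]).reshape(-1, 1)  # toy raw labels
y_scaler = MinMaxScaler(feature_range=(0, 1))
y_scaled = y_scaler.fit_transform(y)  # each label mapped into [0, 1]

# a "prediction" in scaled space, mapped back to original units:
# 0.5 of the [-1000, 2000] range is 500.0
y_pred_scaled = np.array([[0.5]])
y_pred = y_scaler.inverse_transform(y_pred_scaled)
print(y_pred)
```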

I tried different network shapes: first [n] -> [2] -> [1] with ReLU activations, then [n] -> [128] -> [64] -> [1]. I tried the SGD optimizer, slowly increasing the learning rate from 1e-9 to 0.1. I also tried without normalizing the data, but in that case the loss is very high.
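The learning-rate sweep described above can be sketched as a log-spaced grid (each value would then be passed to the optimizer, e.g. `SGD(learning_rate=lr)` in current Keras; the grid spacing is an assumption, not the asker's exact schedule):

```python
import numpy as np

# nine log-spaced learning rates from 1e-9 up to 1e-1
lrs = np.logspace(-9, -1, num=9)
print(lrs)
# each lr would be tried in turn, e.g.:
# model.compile(loss='mse', optimizer=SGD(learning_rate=lr))
```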

My best loss (MSE) is 0.037 with the current setup, but I'm far from my goal (1e-8).

First, I would like to know if I did something wrong. Am I on the right track? If not, how can I reach my goal?

Thank you very much.


Try #2

I tried this new configuration:

model = Sequential()
model.add(Dense(128, input_shape=(10,), kernel_initializer='uniform', activation='relu'))
model.add(Dropout(.2))
model.add(Dense(64, kernel_initializer='uniform', activation='relu'))
model.add(Dense(64, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))

On a sample of 50 elements, with batch_size=10 and 100000 epochs, I get a loss around 1e-4.


Try #3

model = Sequential()
model.add(Dense(128, input_shape=(10,), activation='tanh'))
model.add(Dense(64, activation='tanh'))
model.add(Dense(1, activation='sigmoid'))

batch_size=1000, epochs=100000

Result: loss around 1e-7.
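For reference, a scaled-space MSE of 1e-7 still corresponds to a noticeable error in original units. A rough back-of-the-envelope sketch, assuming the raw targets span a range of about 1000 (a hypothetical figure; the question's y is unbounded):

```python
import math

mse_scaled = 1e-7
rmse_scaled = math.sqrt(mse_scaled)    # per-point RMSE in [0, 1] units, ~3.16e-4
y_range = 1000.0                       # hypothetical span of the raw targets
rmse_original = rmse_scaled * y_range  # same error mapped back to original units
print(rmse_scaled, rmse_original)
```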

Source: https://stackoverflow.com/questions/48665201/keras-regression-to-approximate-function-goal-loss-1e-7
