Neural network sine approximation

Backend · Open · 3 answers · 1261 views
Asked by 粉色の甜心 on 2020-12-07 05:07

After spending days failing to get a neural network working for Q-learning, I decided to go back to basics and try a simple function approximation to check that everything was working.

3 Answers
  •  感情败类
    2020-12-07 05:42

    I managed to get a good approximation by changing the architecture and the training as in the following code. It's a bit of overkill, but at least I know where the problem was coming from.

    from keras.models import Sequential
    from keras.layers import Dense
    import matplotlib.pyplot as plt
    import numpy
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.ensemble import ExtraTreesRegressor  # only for the commented-out baseline below
    
    # Two wide sigmoid hidden layers; the sigmoid output matches the
    # MinMaxScaler-transformed targets, which lie in [0, 1].
    regressor = Sequential()
    regressor.add(Dense(units=500, activation='sigmoid', kernel_initializer='uniform', input_dim=1))
    regressor.add(Dense(units=500, activation='sigmoid', kernel_initializer='uniform'))
    regressor.add(Dense(units=1, activation='sigmoid'))
    regressor.compile(loss='mean_squared_error', optimizer='adam')
    #regressor = ExtraTreesRegressor()  # sklearn baseline I compared against
    
    # Training data: 5000 points sampled uniformly from [-10, 10], sorted for plotting.
    N = 5000
    X = numpy.sort(numpy.random.uniform(-10, 10, N)).reshape(-1, 1)
    Y = numpy.sin(X)
    
    # Scale both inputs and targets into [0, 1].
    X_scaler = MinMaxScaler()
    Y_scaler = MinMaxScaler()
    X = X_scaler.fit_transform(X)
    Y = Y_scaler.fit_transform(Y)
    
    regressor.fit(X, Y, epochs=50, verbose=1, batch_size=2)
    #regressor.fit(X, Y.ravel())  # fit call for the ExtraTreesRegressor baseline
    
    # Evaluation grid: 100 evenly spaced points over the same interval.
    # Use transform (not fit_transform) so the grid is scaled consistently
    # with the training data.
    x = X_scaler.transform(numpy.linspace(-10, 10, 100).reshape(-1, 1))
    y = regressor.predict(x)
    
    plt.figure()
    plt.plot(X_scaler.inverse_transform(x), Y_scaler.inverse_transform(y))
    plt.plot(X_scaler.inverse_transform(X), Y_scaler.inverse_transform(Y))
    plt.show()
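
    As a quick numerical check (my own addition, not part of the answer above), the grid predictions can be scored against the true sine with scikit-learn, which is already a dependency here:

    from sklearn.metrics import mean_squared_error
    
    # Compare inverse-scaled predictions against the true sine on the grid.
    grid = X_scaler.inverse_transform(x)
    mse = mean_squared_error(numpy.sin(grid), Y_scaler.inverse_transform(y))
    print('MSE on the evaluation grid:', mse)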
    

    However, I'm still baffled that papers report approximating the Q-function of the mountain car problem with only two hidden layers of five neurons each, training for just a few minutes, and getting good results. I will try changing the batch size in my original problem to see what results I can get, but I'm not very optimistic. A small-network sketch is below for comparison.
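
    For reference, here is my own sketch rather than anything from those papers: a far smaller network can also fit sine on [-10, 10] if the activations suit the target. tanh outputs lie in [-1, 1], which matches sin directly, so no scaling is needed. The layer sizes and epoch count below are assumptions and may need tuning.

    from keras.models import Sequential
    from keras.layers import Dense
    import numpy
    
    # Small tanh network: two hidden layers of 16 units, linear output.
    small = Sequential()
    small.add(Dense(units=16, activation='tanh', input_dim=1))
    small.add(Dense(units=16, activation='tanh'))
    small.add(Dense(units=1, activation='linear'))
    small.compile(loss='mean_squared_error', optimizer='adam')
    
    # Same kind of training data as above, but unscaled: sin already maps into [-1, 1].
    X = numpy.random.uniform(-10, 10, size=(5000, 1))
    Y = numpy.sin(X)
    small.fit(X, Y, epochs=200, batch_size=32, verbose=0)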
