keras model not able to generalise

Submitted by 时光毁灭记忆、已成空白 on 2020-04-07 08:30:53

Question


Can you help me find what is wrong with my Keras model? It has been overfitting since the second epoch. The following is the code:

import random
import pandas as pd
import tensorflow as tf
import numpy
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras import backend as K
import glob, os
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import Normalizer


# Callback that saves the model architecture, weights and full model every 50 epochs.
class CustomSaver(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        if (epoch % 50) == 0:
            model_json = self.model.to_json()
            with open("model_{}.json".format(epoch), "w") as json_file:
                json_file.write(model_json)
            self.model.save_weights("model_weights_{}.h5".format(epoch))
            self.model.save("model_{}.h5".format(epoch))
            print("Saved model to disk")


model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(units=806, activation='relu', input_shape=(100,), activity_regularizer=tf.keras.regularizers.l1(0.01)))  # 50
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Dense(units=806, activation='relu', activity_regularizer=tf.keras.regularizers.l1(0.01)))  # 50
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Dense(units=806, activation='relu', activity_regularizer=tf.keras.regularizers.l1(0.01)))  # 50
model.add(tf.keras.layers.BatchNormalization())
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Dense(units=14879, activation='softmax'))


optm = tf.keras.optimizers.Adam(learning_rate=0.0001, beta_1=0.9, beta_2=0.999, amsgrad=False)
model.compile(optimizer=optm, loss='categorical_crossentropy', metrics=['accuracy', tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
saver = CustomSaver()


encoder = LabelEncoder()
ds = pd.read_csv("all_labels.csv")
y = ds.iloc[:,0].values
encoder.fit(y)

dataset_val = pd.read_csv('validation_dataset.csv')
X_val = dataset_val.iloc[:, 1:101].values
y_val = dataset_val.iloc[:, 0].values
order = list(range(0, len(y_val)))
random.shuffle(order)
X_val = X_val[order, :]
y_val = y_val[order]

encoded_Y = encoder.transform(y_val)
y_val = tf.keras.utils.to_categorical(encoded_Y, 14879)
X_val = X_val.astype('float32')


chunksize = 401999



co = 1
for dataset in pd.read_csv("training_dataset.csv", chunksize=chunksize):
  if co < 38:
    epoc = 100  # 10
  else:
    epoc = 1000  # 1000
  print(co)
  X = dataset.iloc[:, 1:101].values
  y = dataset.iloc[:, 0].values
  order = list(range(0, len(y)))
  random.shuffle(order)
  X = X[order, :]
  y = y[order]

  encoded_Y = encoder.transform(y)
  y = tf.keras.utils.to_categorical(encoded_Y, 14879)
  X = X.astype('float32')

  model.fit(X, y, validation_data=(X_val, y_val), callbacks=[saver], batch_size=10000, epochs=epoc, verbose=1)  # epochs=20
  co += 1

I looped over the training dataset using chunks because, with the huge number of labels (401999, 14879), to_categorical returns an out-of-memory error.
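If the memory error comes only from materializing that one-hot matrix, one possible workaround (a minimal sketch, not part of the original post) is to keep the integer-encoded labels and train with sparse_categorical_crossentropy, which removes the need for to_categorical altogether. Here y_val_int is assumed to be encoder.transform(y_val) kept as integers, and the batch size of 128 and epochs=1 are only illustrative values:

# Sketch: train on integer class ids instead of 14879-wide one-hot rows.
# The Precision/Recall metrics from the original compile call expect one-hot targets, so only accuracy is kept here.
model.compile(optimizer=optm, loss='sparse_categorical_crossentropy', metrics=['accuracy'])

for dataset in pd.read_csv("training_dataset.csv", chunksize=chunksize):
    X = dataset.iloc[:, 1:101].values.astype('float32')
    y = encoder.transform(dataset.iloc[:, 0].values)      # shape (n,), no one-hot expansion
    model.fit(X, y, validation_data=(X_val, y_val_int), callbacks=[saver],
              batch_size=128, epochs=1, verbose=1)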

The file which contains all labels is: all_labels.csv (https://drive.google.com/file/d/1UZvBTT9ZTM40fA5qJ8gdhmj-k6-SkpwS/view?usp=sharing).
The file which contains the training dataset is: training_dataset.csv (https://drive.google.com/file/d/1LwRBytg44_x62lfLkx9iKTbEhA5IsJM1/view?usp=sharing).
The file which contains the validation dataset is: validation_dataset.csv (https://drive.google.com/open?id=1LZI2f-VGU3werjPIHUmdw0X_Q9nBAgXN).

The shape of the training dataset before being passed to the chunk loop is:
X.shape = (14878999, 100)
Y.shape = (14878999,)


Answer 1:


Your problem comes from your data:

  • You are trying to output 14,879 classes from an input of shape (batch_size, 100); it is essentially impossible for your network to learn anything from that data.
  • As @Nopileos said, a batch size of 10,000 is way too huge; I don't think you have hundreds of millions of inputs, so consider using a more reasonable batch size (see the sketch below)!

Add your input/label shapes and what they correspond to if you want us to help you build some intuition!
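For reference, a fit call with a more conventional batch size might look like the sketch below (128 is purely an illustrative value, not something prescribed by this answer):

model.fit(X, y,
          validation_data=(X_val, y_val),
          callbacks=[saver],
          batch_size=128,   # illustrative; far smaller than the original 10000
          epochs=epoc,
          verbose=1)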




Answer 2:


If you are running out of memory, decrease your chunk size. Lower it to 10 and see if that works. A larger chunk size means your computer has to hold more information in RAM at one time.
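Concretely, that means passing a much smaller chunksize to pd.read_csv and raising it again only once training runs without memory errors. A minimal sketch, reusing the question's preprocessing and starting from the value suggested above (the other numbers are illustrative):

for dataset in pd.read_csv("training_dataset.csv", chunksize=10):   # tiny chunk, just to confirm memory is the bottleneck
    X = dataset.iloc[:, 1:101].values.astype('float32')
    y = tf.keras.utils.to_categorical(encoder.transform(dataset.iloc[:, 0].values), 14879)
    model.fit(X, y, validation_data=(X_val, y_val), callbacks=[saver], batch_size=10, epochs=1, verbose=1)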



Source: https://stackoverflow.com/questions/60390228/keras-model-not-able-to-generalise
