Top-k categorical accuracy for TimeDistributed LSTM results

Asked by 魔方 西西 on 2020-01-05 07:22:06

Question


I'm trying to evaluate the results of an LSTM using top_k_categorical_accuracy.

For each one-hot encoded token, I try to predict the next token. To do this, I take the output at each step in the sequence using the TimeDistributed layer wrapper and pass it to a Dense layer that re-encodes the result into the same one-hot encoding.

While the built-in accuracy metric metrics=['accuracy'] works without a hitch, using top_k_categorical_accuracy fails with the error message:

ValueError: Shape must be rank 2 but is rank 3 for 'metrics/my_acc/in_top_k/InTopKV2' (op: 'InTopKV2') with input shapes: [?,?,501], [?,?], [].

What do I need to change in order to make this metric work?

My code is as follows:

import numpy as np
import glob

import keras
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed,Lambda, Dropout, Activation
from keras.metrics import top_k_categorical_accuracy




train_val_split=0.2 # portion to be placed in validation


train_control_number=0
val_control_number=0


def my_acc(y_true, y_pred):
    return top_k_categorical_accuracy(y_true, y_pred, k=5)


def basic_LSTM(features_num):
    model = Sequential()
    model.add(LSTM(40, return_sequences=True, input_shape=(None, features_num)))
    model.add(LSTM(40, return_sequences=True))
    model.add(LSTM(40, return_sequences=True))

    model.add(TimeDistributed(Dense(features_num)))
    model.add(Activation('linear')) 

    print(model.summary())
    model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=[my_acc])   
    return (model)


def main ():
    input_files=glob.glob('*npy')
    data_list,dim=loader(input_files)
    train_list,val_list=data_spliter(data_list)
    model=basic_LSTM(dim)
    model.fit_generator(train_generator(train_list), steps_per_epoch=len(train_list), epochs=10, verbose=1,validation_data=val_generator(val_list),validation_steps=len(val_list))




def train_generator(data_list):
    while True:
        global train_control_number
        train_control_number=cycle_throught(len(data_list),train_control_number)    
        this=data_list[train_control_number]
        x_train = this [:,:-1,:] # all but the last 1
        y_train = this [:,1:,:] # all but the first 1

        yield (x_train, y_train)




def val_generator(data_list):
    while True:
        global val_control_number
        val_control_number=cycle_throught(len(data_list),val_control_number)    
        this=data_list[val_control_number]
        x_train = this [:,:-1,:] # all but the last 1
        y_train = this [:,1:,:] # all but the first 1

        yield (x_train, y_train)



def cycle_throught (total,current):
    current+=1
    if (current==total):
        current=0
    return (current)


def loader(input_files):

    data_list=[]

    for input_file in input_files:
        a=np.load (input_file)
        incoming_shape=list(a.shape)
        requested_shape=[1]+incoming_shape
        a=a.reshape(requested_shape)
        data_list.append(a)


    return (data_list,incoming_shape[-1])


def data_spliter(input_list):
    val_num=int(len(input_list)*train_val_split)
    validation=input_list[:val_num]
    train=input_list[val_num:]

    return (train,validation)



main()

Many thanks.


Answer 1:


The underlying in_top_k op only accepts rank-2 predictions, but the TimeDistributed output is rank 3 (batch, time, classes). You can reshape the tensors to 2D inside a custom metric so they match the required shape, keeping the last (class) axis untouched:

import keras.backend as K  # or tf.keras.backend as K
from keras.metrics import top_k_categorical_accuracy

# features_num is the size of the one-hot (class) axis.
def top_k_3d(true, pred):
    # Collapse the batch and time axes so the metric sees rank-2 tensors.
    true = K.reshape(true, (-1, features_num))
    pred = K.reshape(pred, (-1, features_num))
    return top_k_categorical_accuracy(true, pred, k=5)
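A minimal sketch of how this could be wired into the question's code: wrapping the metric in a factory so features_num does not have to be a global, then passing it where my_acc was used in basic_LSTM. The names make_top_k_metric and top_k_3d are illustrative, not from the original answer.

import keras.backend as K
from keras.metrics import top_k_categorical_accuracy

def make_top_k_metric(features_num, k=5):
    # Returns a metric that flattens (batch, time, classes) to
    # (batch*time, classes) before calling the built-in top-k accuracy.
    def top_k_3d(true, pred):
        true = K.reshape(true, (-1, features_num))
        pred = K.reshape(pred, (-1, features_num))
        return top_k_categorical_accuracy(true, pred, k=k)
    return top_k_3d

# Inside basic_LSTM, compile with the closure instead of my_acc:
# model.compile(loss='categorical_crossentropy', optimizer='adam',
#               metrics=[make_top_k_metric(features_num)])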


Source: https://stackoverflow.com/questions/58100322/top-k-categorical-accuracy-for-time-distributed-lstm-results
