How to plot a learning curve for a Keras experiment?

Deadly, submitted on 2019-12-04 02:20:44

To get accuracy values, you need to request that they are computed during fit, because accuracy is not an objective function but a (common) metric. Computing accuracy does not always make sense, so it is not enabled by default in Keras. However, it is a built-in metric and easy to add.

To add the metric, pass metrics=['accuracy'] to model.compile.

In your example:

history = model.fit(X_train, y_train, batch_size=512,
                    nb_epoch=5, validation_split=0.05)

You can then access the validation accuracy as history.history['val_acc'] (on newer Keras versions the key is 'val_accuracy').
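Putting it together, a minimal sketch of the whole workflow, including plotting the learning curve with matplotlib, could look like the following. It assumes model, X_train and y_train already exist, uses a placeholder optimizer/loss, and keeps the older nb_epoch / val_acc names from the question:

import matplotlib.pyplot as plt

# Request accuracy so it is recorded in the history
# (optimizer and loss here are placeholders for whatever your model needs)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

history = model.fit(X_train, y_train, batch_size=512,
                    nb_epoch=5, validation_split=0.05)

# Learning curve: training vs. validation accuracy per epoch
# (use 'accuracy' / 'val_accuracy' as keys on newer Keras versions)
plt.plot(history.history['acc'], label='train')
plt.plot(history.history['val_acc'], label='validation')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()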

Why do you find the average accuracy more important than the final accuracy? Depending on your initial values, your average might be quite misleading. It's easy to come up with different curves that have the same average but different interpretations.

I'd just plot the complete history of train_acc and val_acc to decide whether the RNN is performing well within the given setup. Also, don't forget to use a sample size N > 1: random initialization can have a big impact on RNNs, so run at least N=10 different initializations for each setup to make sure that a performance difference is actually caused by your set size and not by better/worse initializations.
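A minimal sketch of that averaging over several random initializations, assuming a hypothetical build_model() helper that re-creates and re-compiles the network for each run:

import numpy as np

N = 10
val_curves = []
for run in range(N):
    m = build_model()  # hypothetical: rebuilds and compiles a fresh model
    h = m.fit(X_train, y_train, batch_size=512,
              nb_epoch=5, validation_split=0.05, verbose=0)
    val_curves.append(h.history['val_acc'])

# Mean validation accuracy per epoch across the N runs
mean_val_acc = np.mean(np.array(val_curves), axis=0)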

The history object is created when you fit() the model. See keras/engine/training.py for details.

You can access the history using the history attribute on the model: model.history.

After fitting the model, you can simply average over the recorded values. Note that model.history is a History callback object whose per-metric lists live in its history dict:

import numpy as np

np.mean(model.history.history['val_acc'])

Note that the keys follow the pattern val_<name>, where <name> is the metric (and, for multi-output models, the output) you specify.
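If you are unsure which keys were recorded for your outputs, you can simply list them:

print(model.history.history.keys())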
