I am implementing a neural network and I would like to assess its performance with cross validation. Here is my current code:
    from keras import backend as K

    def recall_m(y_true, y_pred):
        true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
        recall = true_positives / (possible_positives + K.epsilon())
        return recall
cross_val_score is not the appropriate tool here; you should take manual control of your CV procedure. Here is how, combining aspects from my answer in the SO thread you have linked, as well as from Cross-validation metrics in scikit-learn for each data split, and using accuracy just as an example metric:
    from keras.wrappers.scikit_learn import KerasClassifier
    from sklearn.model_selection import KFold
    from sklearn.metrics import accuracy_score
    import numpy as np

    n_splits = 10
    kf = KFold(n_splits=n_splits, shuffle=True)

    cv_acc = []
    # prepare a single-digit copy of your 1-hot encoded true labels:
    y_single = np.argmax(y, axis=1)

    for train_index, val_index in kf.split(x):
        # fit & predict
        model = KerasClassifier(build_fn=build_model, batch_size=10, epochs=ep)
        model.fit(x[train_index], y[train_index])
        pred = model.predict_classes(x[val_index])  # predicts single-digit classes

        # get fold accuracy & append
        fold_acc = accuracy_score(y_single[val_index], pred)
        cv_acc.append(fold_acc)

    acc = np.mean(cv_acc)
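Note that build_model and ep in the loop are assumed to come from your own code; KerasClassifier only needs a function that returns a compiled model. Purely for illustration, a minimal placeholder (layer sizes and hyperparameters here are arbitrary assumptions) could look like:

    from keras.models import Sequential
    from keras.layers import Dense

    ep = 100  # epoch count - an assumption, use your own value

    def build_model():
        # minimal illustrative architecture - replace with your own
        model = Sequential()
        model.add(Dense(64, activation='relu', input_dim=x.shape[1]))
        model.add(Dense(y.shape[1], activation='softmax'))
        model.compile(optimizer='adam', loss='categorical_crossentropy',
                      metrics=['accuracy'])
        return model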
At completion of the loop, you will have the accuracies of each fold in the list cv_acc, and taking the mean will give you the average value.
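If you also want a feel for the spread across folds, the standard deviation is a common companion figure, e.g.:

    print('CV accuracy: %.3f +/- %.3f' % (np.mean(cv_acc), np.std(cv_acc)))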
This way, you don't need the custom definitions you use for precision, recall, and f1; you can just use the respective ones from scikit-learn. You can add as many different metrics as you want in the loop (something you cannot do with cross_val_score), as long as you import them appropriately from scikit-learn, as done here with accuracy_score.
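For example, here is a sketch of the same loop extended with precision, recall, and f1 from scikit-learn; average='macro' is just one possible choice for multiclass averaging, so pick whichever suits your problem:

    from sklearn.metrics import precision_score, recall_score, f1_score

    cv_acc, cv_prec, cv_rec, cv_f1 = [], [], [], []

    for train_index, val_index in kf.split(x):
        model = KerasClassifier(build_fn=build_model, batch_size=10, epochs=ep)
        model.fit(x[train_index], y[train_index])
        pred = model.predict_classes(x[val_index])

        y_val = y_single[val_index]
        cv_acc.append(accuracy_score(y_val, pred))
        # average='macro' treats all classes equally; adjust as needed
        cv_prec.append(precision_score(y_val, pred, average='macro'))
        cv_rec.append(recall_score(y_val, pred, average='macro'))
        cv_f1.append(f1_score(y_val, pred, average='macro'))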