cross-validation

Scoring metrics from Keras scikit-learn wrapper in cross validation with one-hot encoded labels

Submitted by 余生长醉 on 2020-08-20 12:10:31

Question: I am implementing a neural network and I would like to assess its performance with cross validation. Here is my current code:

```python
def recall_m(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall

def precision_m(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred
```
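Since the excerpt above cuts off mid-definition, here is a NumPy sketch of what these batch-level metrics compute; `recall_np` and `precision_np` are hypothetical stand-ins that mirror the `K.clip`/`K.round` logic, with `1e-7` standing in for `K.epsilon()`:

```python
import numpy as np

EPS = 1e-7  # stands in for K.epsilon()

def recall_np(y_true, y_pred):
    # Round probabilities to {0, 1}, then count true positives over the batch
    y_pred_bin = np.round(np.clip(y_pred, 0, 1))
    true_positives = np.sum(np.round(np.clip(y_true * y_pred_bin, 0, 1)))
    possible_positives = np.sum(np.round(np.clip(y_true, 0, 1)))
    return true_positives / (possible_positives + EPS)

def precision_np(y_true, y_pred):
    y_pred_bin = np.round(np.clip(y_pred, 0, 1))
    true_positives = np.sum(np.round(np.clip(y_true * y_pred_bin, 0, 1)))
    predicted_positives = np.sum(y_pred_bin)
    return true_positives / (predicted_positives + EPS)

# One-hot labels for 3 samples over 2 classes; predictions as probabilities
y_true = np.array([[1, 0], [0, 1], [1, 0]])
y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.4, 0.6]])
print(recall_np(y_true, y_pred))     # 2 of the 3 true positives recovered
print(precision_np(y_true, y_pred))
```

Because the one-hot label matrix is summed across all classes at once, these are micro-averaged counts over the batch, which is what the Keras versions report per batch during training.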

Setting hidden layers and neurons in neuralnet and caret (R)

Submitted by ≯℡__Kan透↙ on 2020-07-23 06:32:09

Question: I would like to cross-validate a neural network using the packages neuralnet and caret. The data df can be copied from this post. When running the neuralnet() function, there is an argument called hidden where you can set the hidden layers and the number of neurons in each. Say I want 2 hidden layers with 3 and 2 neurons respectively; that would be written as hidden = c(3, 2). However, since I want to cross-validate the model, I decided to use the caret package. But when using the function train(), I

Get individual models and customized score in GridSearchCV and RandomizedSearchCV [duplicate]

Submitted by 邮差的信 on 2020-07-20 04:33:46

Question: This question already has an answer here: Retrieving specific classifiers and data from GridSearchCV (1 answer). Closed 3 days ago. GridSearchCV and RandomizedSearchCV have best_estimator_, which: returns only the best estimator/model; finds the best estimator via one of the simple scoring methods (accuracy, recall, precision, etc.); and evaluates based on the training set only. I would like to go beyond those limitations with my own definition of scoring methods, and to evaluate further on the test set rather than
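A minimal scikit-learn sketch of both asks, using a synthetic dataset and LogisticRegression purely as placeholders: a custom scorer built with `make_scorer`, inspection of every fitted configuration via `cv_results_` (not just the best one), and evaluation of the refit best estimator on a held-out test set:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, make_scorer
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Custom scoring: any metric wrapped in make_scorer drives model selection
custom_f1 = make_scorer(f1_score)
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={'C': [0.1, 1.0, 10.0]},
    scoring=custom_f1,
    cv=3,
)
search.fit(X_train, y_train)

# cv_results_ exposes every configuration tried, not only best_estimator_
for params, score in zip(search.cv_results_['params'],
                         search.cv_results_['mean_test_score']):
    print(params, round(score, 3))

# Evaluate the refit best model on data the search never saw
print('test F1:', f1_score(y_test, search.best_estimator_.predict(X_test)))
```

If the individual fold-level fitted models are needed rather than the refit winner, `cross_validate(..., return_estimator=True)` is the usual route.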

Python - LightGBM with GridSearchCV, is running forever

Submitted by 人走茶凉 on 2020-07-17 11:15:42

Question: Recently, I have been running multiple experiments to compare Python XGBoost and LightGBM. LightGBM is a newer algorithm that is said to beat XGBoost in both speed and accuracy. This is the LightGBM GitHub. This is the LightGBM Python API documentation, where you will find the Python functions you can call. It can be called directly through the LightGBM model API and also through the LightGBM scikit-learn wrapper. This is the XGBoost Python API I use. As you can see, it has a very similar data

module 'sklearn' has no attribute 'cross_validation'

Submitted by 十年热恋 on 2020-07-17 07:12:35

Question: I am trying to split my dataset into training and testing sets, but I am getting this error:

```python
X_train, X_test, Y_train, Y_test = sklearn.cross_validation.train_test_split(X, df1['ENTRIESn_hourly'])
```

```
AttributeError                            Traceback (most recent call last)
<ipython-input-53-5445dab94861> in <module>()
----> 1 X_train,X_test,Y_train,Y_test = sklearn.cross_validation.train_test_split(X,df1['ENTRIESn_hourly'])

AttributeError: module 'sklearn' has no attribute 'cross_validation'
```

How can I handle this?

Answer 1:
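The `sklearn.cross_validation` module was deprecated in 0.18 and removed in 0.20; its contents now live in `sklearn.model_selection`. A minimal fix, with a small synthetic `X` and target standing in for the asker's `df1['ENTRIESn_hourly']`:

```python
import numpy as np
from sklearn.model_selection import train_test_split  # replaces sklearn.cross_validation

# Synthetic stand-ins for the asker's feature matrix and target column
X = np.arange(20).reshape(10, 2)
y = np.arange(10)

X_train, X_test, Y_train, Y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
print(X_train.shape, X_test.shape)  # (7, 2) (3, 2)
```

Only the import location changes; the call signature of `train_test_split` is the same as before.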

Correct way to do cross validation in a pipeline with imbalanced data

Submitted by 泪湿孤枕 on 2020-06-27 17:20:20

Question: For the given imbalanced data, I have created separate pipelines for standardization and one-hot encoding:

```python
numeric_transformer = Pipeline(steps=[('scaler', StandardScaler())])
categorical_transformer = Pipeline(steps=[('ohe', OneHotCategoricalEncoder())])
```

After that, a column transformer keeps the above pipelines in one:

```python
from sklearn.compose import ColumnTransformer

preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, numeric_features),
        ('cat', categorical
```
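Whatever weighting or resampling is chosen for the imbalance, the key rule is that all of it must sit inside one pipeline, so each CV fold refits the scaler and encoder on its own training split and nothing leaks from the validation fold. A scikit-learn-only sketch on synthetic imbalanced data; `OneHotEncoder` replaces the question's feature-engine `OneHotCategoricalEncoder`, and `class_weight='balanced'` is one stand-in for resampling since the excerpt cuts off before the model step:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Synthetic 90/10 imbalanced data: 4 numeric columns plus 1 categorical
X_num, y = make_classification(n_samples=400, n_features=4,
                               weights=[0.9, 0.1], random_state=0)
X = pd.DataFrame(X_num, columns=['n1', 'n2', 'n3', 'n4'])
X['cat'] = np.random.default_rng(0).choice(['a', 'b', 'c'], size=400)

numeric_features = ['n1', 'n2', 'n3', 'n4']
categorical_features = ['cat']

preprocessor = ColumnTransformer(transformers=[
    ('num', Pipeline(steps=[('scaler', StandardScaler())]), numeric_features),
    ('cat', Pipeline(steps=[('ohe', OneHotEncoder(handle_unknown='ignore'))]),
     categorical_features),
])

# Preprocessing and model in ONE estimator: cross_val_score refits the
# scaler/encoder per fold, so validation folds never influence the fit
clf = Pipeline(steps=[
    ('preprocessor', preprocessor),
    ('model', LogisticRegression(class_weight='balanced', max_iter=1000)),
])

scores = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5), scoring='f1')
print(scores.mean())
```

If actual resampling (e.g. SMOTE) is wanted instead of class weights, the same structure applies but with imblearn's `Pipeline`, which knows to resample only the training portion of each fold.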