neural-network

How to save best model in Keras based on AUC metric?

走远了吗. Posted on 2021-02-18 17:59:28
Question: I would like to save the best model in Keras based on AUC, and I have this code:

    def MyMetric(yTrue, yPred):
        auc = tf.metrics.auc(yTrue, yPred)
        return auc

    best_model = [ModelCheckpoint(filepath='best_model.h5', monitor='MyMetric', save_best_only=True)]

    train_history = model.fit([train_x], [train_y], batch_size=batch_size, epochs=epochs,
                              validation_split=0.05, callbacks=best_model, verbose=2)

So my model runs, but I get this warning:

    RuntimeWarning: Can save best model only with MyMetric
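
A common fix (a minimal sketch, assuming tf.keras / TF 2.x; the metric name 'auc' and mode='max' are not from the question) is to register AUC as a compile-time metric and have ModelCheckpoint monitor its validation value:

    import tensorflow as tf
    from tensorflow.keras.callbacks import ModelCheckpoint

    # Register AUC so Keras computes 'val_auc' on the validation split.
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=[tf.keras.metrics.AUC(name='auc')])

    # Monitor validation AUC; mode='max' because a larger AUC is better.
    best_model = [ModelCheckpoint(filepath='best_model.h5',
                                  monitor='val_auc',
                                  mode='max',
                                  save_best_only=True)]

    train_history = model.fit(train_x, train_y,
                              batch_size=batch_size, epochs=epochs,
                              validation_split=0.05,
                              callbacks=best_model, verbose=2)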

How to build a Neural Network with a sentence embedding concatenated to a pre-trained CNN

醉酒当歌 Posted on 2021-02-18 08:48:40
Question: I want to build a neural network that takes the feature map from the last layer of a CNN (VGG or ResNet, for example), concatenates an additional vector (for example, a 1x768 BERT vector), and re-trains the last layer on a classification problem. So the architecture should be like in the linked image, but I want to concatenate an additional vector to each feature vector (I have a sentence describing each frame). I have 5 possible labels and 100 frames in the input. Can someone help me as to how to
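
One way to wire this up (a minimal sketch with the Keras functional API; the 224x224 input size, average pooling, and dense width are assumptions, not from the question):

    import tensorflow as tf
    from tensorflow.keras import layers, Model
    from tensorflow.keras.applications import VGG16

    # Frozen CNN backbone that produces one feature vector per frame.
    backbone = VGG16(include_top=False, weights='imagenet', pooling='avg',
                     input_shape=(224, 224, 3))
    backbone.trainable = False

    image_in = layers.Input(shape=(224, 224, 3), name='frame')
    text_in = layers.Input(shape=(768,), name='bert_vector')  # precomputed sentence embedding

    cnn_features = backbone(image_in)                       # (batch, 512) for VGG16 with avg pooling
    merged = layers.Concatenate()([cnn_features, text_in])  # (batch, 512 + 768)

    x = layers.Dense(256, activation='relu')(merged)
    output = layers.Dense(5, activation='softmax')(x)       # 5 possible labels

    model = Model(inputs=[image_in, text_in], outputs=output)
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])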

Prediction is depending on the batch size in Keras

岁酱吖の Posted on 2021-02-18 05:13:51
Question: I am trying to use Keras for binary classification of an image. My CNN model is well trained on the training data (~90% training accuracy and ~93% validation accuracy). But during training, if I set the batch size to 15000 I get Figure I as the output, and if I set the batch size to 50000 I get Figure II as the output. Can someone please tell me what is wrong? The prediction should not depend on the batch size, right? Code I am using for prediction:

    y = model.predict_classes(patches, batch_size=50000
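
Batch size should only affect speed and memory, not the predicted values, so a quick sanity check (a minimal sketch, assuming `model` and `patches` from the question) is to compare the raw probabilities at two batch sizes:

    import numpy as np

    # The raw probabilities should be (numerically) identical regardless of batch size.
    p_small = model.predict(patches, batch_size=15000)
    p_large = model.predict(patches, batch_size=50000)
    print(np.allclose(p_small, p_large, atol=1e-6))   # expected: True

    # Hard labels derived from the probabilities (binary case, 0.5 threshold).
    y = (p_large > 0.5).astype('int32')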

Split autoencoder on encoder and decoder keras

只愿长相守 Posted on 2021-02-16 14:48:10
Question: I am trying to create an autoencoder in order to:

1. Train the model
2. Split the encoder and decoder
3. Visualise the compressed data (encoder)
4. Use arbitrary compressed data to get the output (decoder)

    from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D
    from keras.models import Model
    from keras import backend as K
    from keras.datasets import mnist
    import numpy as np

    (x_train, _), (x_test, _) = mnist.load_data()
    x_train = x_train.astype('float32') / 255.
    x_train = x_train[:100, :, :]
    x_test = x
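
One common pattern for the split (a minimal sketch with a tiny dense autoencoder; the layer sizes and names are assumptions, not the question's convolutional architecture) is to build the encoder and decoder as separate Model objects that reuse the trained layers:

    from keras.layers import Input, Dense
    from keras.models import Model

    inp = Input(shape=(784,))
    encoded = Dense(32, activation='relu', name='bottleneck')(inp)
    decoded = Dense(784, activation='sigmoid', name='reconstruction')(encoded)

    autoencoder = Model(inp, decoded)
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    # autoencoder.fit(x_train_flat, x_train_flat, epochs=10, batch_size=128)

    # Encoder: the trained layers up to the bottleneck.
    encoder = Model(inp, encoded)

    # Decoder: a new latent input fed through the trained output layer.
    latent_in = Input(shape=(32,))
    decoder = Model(latent_in, autoencoder.get_layer('reconstruction')(latent_in))

    # codes = encoder.predict(x_test_flat)        # visualise compressed data
    # reconstructions = decoder.predict(codes)    # decode arbitrary compressed data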

Batch size for Stochastic gradient descent is length of training data and not 1?

让人想犯罪 __ Posted on 2021-02-15 07:10:25
Question: I am trying to plot the different learning outcomes when using batch gradient descent, stochastic gradient descent, and mini-batch stochastic gradient descent. Everywhere I look, I read that batch_size=1 is the same as plain SGD and batch_size=len(train_data) is the same as batch gradient descent. I know that stochastic gradient descent uses only one single data sample for every update, and batch gradient descent uses the entire training data set to compute the
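
In Keras the three regimes differ only in the batch_size passed to fit, since each batch produces one gradient update (a minimal sketch; `model`, `train_x`, and `train_y` are assumed):

    # One update per single sample -> stochastic gradient descent.
    model.fit(train_x, train_y, batch_size=1, epochs=10)

    # One update per full pass over the data -> batch gradient descent.
    model.fit(train_x, train_y, batch_size=len(train_x), epochs=10)

    # Anything in between -> mini-batch gradient descent.
    model.fit(train_x, train_y, batch_size=32, epochs=10)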

ValueError: Input 0 is incompatible with layer conv2d_5: expected ndim=4, found ndim=2

烂漫一生 Posted on 2021-02-11 15:19:09
Question: I am trying to build a CNN and would like to probe the layer dimensions using output_shape, but it gives me the following error:

    ValueError: Input 0 is incompatible with layer conv2d_5: expected ndim=4, found ndim=2

Below is the code I am trying to execute:

    from keras.layers import Activation
    model = Sequential()
    model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,28,28)))
    print(model.output_shape)

Answer 1: You can check if by default the number of channels is specified at
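
The truncated answer points toward the channel convention: with the default channels_last data format the channel axis goes last, so the input shape for 28x28 grayscale images is (28, 28, 1) rather than (1, 28, 28). A minimal sketch of the layer written that way (whether this resolves the ndim error depends on the rest of the model, which is not shown):

    from keras.models import Sequential
    from keras.layers import Conv2D

    model = Sequential()
    # channels_last (the Keras default): (height, width, channels)
    model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
    print(model.output_shape)   # (None, 26, 26, 32)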

How to implement CAM without visualize_cam in this code?

妖精的绣舞 Posted on 2021-02-11 14:01:01
Question: I want to make a class activation map (CAM), so I have written this code:

    from keras.datasets import mnist
    from keras.layers import Conv2D, Dense, GlobalAveragePooling2D
    from keras.models import Model, Input
    from keras.utils import to_categorical

    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train_resized = x_train.reshape((60000, 28, 28, 1))
    x_test_resized = x_test.reshape((10000, 28, 28, 1))
    y_train_hot_encoded = to_categorical(y_train)
    y_test_hot_encoded = to_categorical(y_test)
    inputs =
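
Without visualize_cam, a CAM for a Conv -> GlobalAveragePooling2D -> Dense model can be computed by hand from the last conv layer's feature maps and the Dense layer's weights (a minimal sketch; the layer names 'last_conv' and 'output' are assumptions about how the model above is named):

    import numpy as np
    from keras.models import Model

    # Model that returns both the last conv feature maps and the class scores.
    cam_model = Model(inputs=model.input,
                      outputs=[model.get_layer('last_conv').output, model.output])

    feature_maps, preds = cam_model.predict(x_test_resized[:1])   # one image
    class_idx = np.argmax(preds[0])

    # Dense weights connecting the pooled features to the predicted class: shape (n_channels,).
    class_weights = model.get_layer('output').get_weights()[0][:, class_idx]

    # Weighted sum of the feature maps gives the class activation map.
    cam = np.tensordot(feature_maps[0], class_weights, axes=([-1], [0]))
    cam = np.maximum(cam, 0)
    cam = cam / (cam.max() + 1e-8)   # normalise to [0, 1] for display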

Patch-based image training and combining patch probabilities from an image

被刻印的时光 ゝ Posted on 2021-02-11 13:00:15
Question: First, I have implemented a simple VGG16 network for image classification:

    model = keras.applications.vgg16.VGG16(include_top=False, weights=None,
                                           input_shape=(32,32,3), pooling='max',
                                           classes=10)

Its input shape is 32 x 32. Now I am trying to implement a patch-based neural network. The main idea is: from the input image, extract 4 image patches (like in the linked image), train on the extracted patch images (resized to 32 x 32, the input shape of our model), and finally combine their
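
One common way to combine the per-patch predictions (a minimal sketch; the quadrant split, the trained classifier `patch_model` with a softmax head, and probability averaging are assumptions):

    import numpy as np
    import tensorflow as tf

    def predict_image(image, patch_model, patch_size=32):
        """Split an image into 4 quadrants, classify each, and average the probabilities."""
        h, w = image.shape[:2]
        patches = [image[y:y + h // 2, x:x + w // 2]
                   for y in (0, h // 2) for x in (0, w // 2)]
        # Resize every patch to the model's 32x32 input.
        patches = np.stack([tf.image.resize(p, (patch_size, patch_size)).numpy()
                            for p in patches])
        probs = patch_model.predict(patches)   # shape (4, num_classes)
        return probs.mean(axis=0)              # combined probability per class

    # combined = predict_image(img, patch_model)
    # predicted_class = combined.argmax()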

Convolutional Neural Network intuition - difference in outcome between a large kernel size and a high number of features

…衆ロ難τιáo~ Posted on 2021-02-11 12:46:12
Question: I wanted to understand the architectural intuition behind the difference between:

    tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1))

and

    tf.keras.layers.Conv2D(32, (7,7), activation='relu', input_shape=(28, 28, 1))

Assuming:

- As kernel size increases, more complex feature-pattern matching can be performed in the convolution step.
- As feature size increases, a larger variance of smaller features can define a particular layer.

How and when (if possible kindly give scenarios) do
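
A concrete way to see the trade-off is to compare the output shapes and parameter counts of the two layers (a minimal sketch): the 7x7 layer gives each filter a larger receptive field at a higher cost per filter, while the 3x3 layer keeps more distinct feature channels cheaply.

    import tensorflow as tf

    small_kernel = tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1))
    ])
    large_kernel = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, (7, 7), activation='relu', input_shape=(28, 28, 1))
    ])

    # 64 filters of 3x3x1 + 64 biases = 640 parameters, output shape (26, 26, 64)
    # 32 filters of 7x7x1 + 32 biases = 1600 parameters, output shape (22, 22, 32)
    small_kernel.summary()
    large_kernel.summary()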