conv-neural-network

Combining two loss functions in Keras in a Sequential model with ndarray output

痞子三分冷 submitted on 2019-12-13 03:18:07
Question: I am training a CNN model in Keras (object detection in images and LiDAR, Kaggle Lyft competition). As output I have a 34-channel grid, so the output dimension is LENGTH x WIDTH x 34. The first 10 channels are for the different categories of objects (ideally a one-hot vector) and the remaining 24 channels are the coordinates of the bounding box in 3D. For the first 10 channels I want to use keras.losses.categorical_crossentropy, and for the remaining 24 keras.losses.mean_squared_error. Also, since the numbers of objects…
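One common way to handle this (not shown in the truncated question) is to slice the 34-channel output inside a single custom loss and weight the two terms. The following is a minimal sketch, assuming the first 10 channels already pass through a per-pixel softmax and that bbox_weight is a made-up weighting hyperparameter, not something from the question.

import keras.backend as K
from keras.losses import categorical_crossentropy, mean_squared_error

def combined_loss(y_true, y_pred, bbox_weight=1.0):
    # channels 0..9: class scores, channels 10..33: 3D bounding-box coordinates
    cls_true, cls_pred = y_true[..., :10], y_pred[..., :10]
    box_true, box_pred = y_true[..., 10:], y_pred[..., 10:]
    cls_loss = categorical_crossentropy(cls_true, cls_pred)  # per-pixel classification loss
    box_loss = mean_squared_error(box_true, box_pred)        # per-pixel regression loss
    return K.mean(cls_loss) + bbox_weight * K.mean(box_loss)

The function can then be passed directly as loss=combined_loss in model.compile on the question's model.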

Deconvolutions/Transpose_Convolutions with tensorflow

女生的网名这么多〃 submitted on 2019-12-13 02:57:13
Question: I am attempting to use tf.nn.conv3d_transpose; however, I am getting an error indicating that my filter and output shapes are not compatible. I have a tensor of size [1,16,16,4,192] and am attempting to use a filter of [1,1,1,192,192]. I believe that the output shape would be [1,16,16,4,192]. I am using "same" padding and a stride of 1. Eventually, I want to have an output shape of [1,32,32,7,"does not matter"], but I am attempting to get a simple case to work first. Since these tensors are…
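For reference, a minimal sketch of the shapes tf.nn.conv3d_transpose expects (TF 1.x API, as in the question) is shown below; the filter layout is [depth, height, width, output_channels, in_channels], and the output_shape must agree with the strides. This illustrates the API contract only, not a fix for the specific (truncated) error.

import tensorflow as tf

x = tf.placeholder(tf.float32, [1, 16, 16, 4, 192])        # NDHWC input from the question
filt = tf.Variable(tf.random_normal([1, 1, 1, 192, 192]))  # [d, h, w, out_channels, in_channels]
y = tf.nn.conv3d_transpose(x, filt,
                           output_shape=[1, 16, 16, 4, 192],  # must match the stride arithmetic
                           strides=[1, 1, 1, 1, 1],
                           padding='SAME')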

High training error at the beginning of training a Convolutional neural network

旧巷老猫 submitted on 2019-12-13 01:09:27
Question: I'm working on training a CNN, and during the training process, especially at the beginning of training, I get an extremely high training error. After that, this error starts to go down slowly. After approximately 500 epochs the training error comes close to zero (e.g. 0.006604). Then I took the final model and measured its accuracy against the test data, and got about 89.50%. Does that seem normal? I mean, getting a high training error rate at…

Tensorflow: building graph with batch sizes varying in dimension 1?

匆匆过客 submitted on 2019-12-12 23:42:32
Question: I'm trying to build a CNN model in TensorFlow where all the inputs within a batch have equal shape, but between batches the inputs vary in dimension 1 (i.e. minibatch sizes are the same but minibatch shapes are not). To make this clearer, I have data (Nx23x1) with various values of N, which I sort in ascending order first. In each batch (50 samples) I zero-pad every sample so that each N_i equals the max N within its minibatch. Now I have defined a TensorFlow placeholder for the batch input: input = tf…
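The placeholder line in the question is cut off; one plausible way to write it (a sketch using the TF 1.x API, with the variable dimension left as None so each padded minibatch can have its own length) is:

import tensorflow as tf

# 50 samples per batch and the 23x1 trailing dims come from the question's Nx23x1 description;
# the None dimension lets N differ from batch to batch.
input_ph = tf.placeholder(tf.float32, shape=[50, None, 23, 1])
conv = tf.layers.conv2d(input_ph, filters=16, kernel_size=3, padding='same')  # works with an unknown spatial dim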

Tensorflow model saving and loading

瘦欲@ submitted on 2019-12-12 21:34:07
Question: How can I save a TensorFlow model together with the model graph, like we do in Keras? Instead of defining the whole graph again in the prediction file, can we save the whole model (weights and graph) and import it later? In Keras: checkpoint = ModelCheckpoint('RightLane-{epoch:03d}.h5', monitor='val_loss', verbose=0, save_best_only=False, mode='auto') will give one h5 file that we can use for prediction: model = load_model("RightLane-030.h5"). How can I do the same in native TensorFlow? Answer 1: Method 1: Freeze graph and…
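Besides the freeze-graph method the answer starts to describe, a minimal sketch of the checkpoint route in TF 1.x is below: tf.train.Saver writes both the weights and a .meta file containing the graph, so the prediction script can rebuild the graph with import_meta_graph instead of redefining it. The variable, tensor names, and paths here are made up for illustration.

import tensorflow as tf

w = tf.Variable(tf.random_normal([10, 2]), name='w')  # stand-in for the model's variables
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training ...
    saver.save(sess, './right_lane', global_step=30)   # writes .data, .index and .meta files

# Later, in a fresh prediction script, rebuild the graph and restore the weights:
tf.reset_default_graph()
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('./right_lane-30.meta')
    new_saver.restore(sess, tf.train.latest_checkpoint('.'))
    w_restored = tf.get_default_graph().get_tensor_by_name('w:0')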

Estimate required resources to serve Keras model

余生颓废 submitted on 2019-12-12 18:30:40
Question: I have a Keras model (.hdf5) that I would like to deploy in the cloud for prediction. I now wish to estimate how many resources I need for this (CPU, GPU, RAM, ...). Does anyone have a suggestion for functions / rules of thumb that could help with this? I was unable to find anything useful. Thanks in advance! Answer 1: I think the most realistic estimate comes from running the model and seeing how many resources it takes. top or htop will show you the CPU and RAM load, but in the case of GPU memory it…
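As a rough lower bound on memory, complementary to the run-and-measure approach in the answer, you can count the model's parameters and multiply by the bytes per weight. This sketch assumes float32 weights and a hypothetical file path, and ignores activations, batch size, and framework overhead.

from keras.models import load_model

model = load_model('model.hdf5')                       # hypothetical path
n_params = model.count_params()
weight_mem_mb = n_params * 4 / (1024 ** 2)             # float32 = 4 bytes per parameter
print('%d parameters, ~%.1f MB just for the weights' % (n_params, weight_mem_mb))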

Tensorflow - Saving a model

China☆狼群 submitted on 2019-12-12 11:21:35
Question: I have the following code and am getting an error when trying to save the model. What could I be doing wrong, and how can I solve this issue? import tensorflow as tf data, labels = cifar_tools.read_data('C:\\Users\\abc\\Desktop\\Testing') x = tf.placeholder(tf.float32, [None, 150 * 150]) y = tf.placeholder(tf.float32, [None, 2]) w1 = tf.Variable(tf.random_normal([5, 5, 1, 64])) b1 = tf.Variable(tf.random_normal([64])) w2 = tf.Variable(tf.random_normal([5, 5, 64, 64])) b2 = tf.Variable(tf.random…

Training a fully convolutional neural network with inputs of variable size takes an unreasonably long time in Keras/TensorFlow

怎甘沉沦 submitted on 2019-12-12 10:31:28
Question: I am trying to implement an FCNN for image classification that can accept inputs of variable size. The model is built in Keras with the TensorFlow backend. Consider the following toy example: model = Sequential() # width and height are None because we want to process images of variable size # nb_channels is either 1 (grayscale) or 3 (rgb) model.add(Convolution2D(32, 3, 3, input_shape=(nb_channels, None, None), border_mode='same')) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,…
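The toy model in the question is cut off; a minimal self-contained version of the same idea might look like the sketch below. It uses Keras 2 syntax (Conv2D, channels-last) rather than the older Convolution2D call in the snippet, and nb_channels / nb_classes are made-up values; a global pooling layer collapses the variable spatial dimensions so the classifier works for any input size.

from keras.models import Sequential
from keras.layers import Conv2D, Activation, MaxPooling2D, GlobalAveragePooling2D, Dense

nb_channels, nb_classes = 3, 10   # illustrative values only

model = Sequential()
# height and width left as None so the network accepts images of any size
model.add(Conv2D(32, (3, 3), padding='same', input_shape=(None, None, nb_channels)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
# global pooling turns the variable-size feature map into a fixed-length vector
model.add(GlobalAveragePooling2D())
model.add(Dense(nb_classes, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy')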

Convolutional neural networks vs downsampling?

点点圈 submitted on 2019-12-12 10:25:36
Question: After reading up on the subject I don't fully understand: is the 'convolution' in neural networks comparable to a simple downsampling or 'sharpening' function? Can you break this term down into a simple, understandable image/analogy? Edit, rephrased after the first answer: can pooling be understood as downsampling of weight matrices? Answer 1: Convolutional neural networks are a family of models which have been shown empirically to work great when it comes to image recognition. From this point of view, a CNN is…

Pytorch: Getting the correct dimensions for final layer

泪湿孤枕 submitted on 2019-12-12 10:02:59
Question: PyTorch newbie here! I am trying to fine-tune a VGG16 model to predict 3 different classes. Part of my work involves converting FC layers to CONV layers. However, the values of my predictions don't fall between 0 and 2 (the 3 classes). Can someone point me to a good resource on how to compute the correct dimensions for the final layer? Here are the original FC layers of VGG16: (classifier): Sequential( (0): Linear(in_features=25088, out_features=4096, bias=True) (1): ReLU(inplace) (2): Dropout…
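The key to the dimensions is that 25088 = 512 x 7 x 7, i.e. the flattened shape of VGG16's last feature map for 224x224 inputs, so the first FC layer becomes a 7x7 convolution over 512 channels. Below is a sketch of the usual replacement; the layer names, the 3-class head, and the argmax at the end are assumptions for illustration, not taken from the question.

import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg16(pretrained=True)

conv_classifier = nn.Sequential(
    nn.Conv2d(512, 4096, kernel_size=7),   # replaces Linear(25088, 4096): 25088 = 512 * 7 * 7
    nn.ReLU(inplace=True),
    nn.Dropout(),
    nn.Conv2d(4096, 4096, kernel_size=1),  # replaces Linear(4096, 4096)
    nn.ReLU(inplace=True),
    nn.Dropout(),
    nn.Conv2d(4096, 3, kernel_size=1),     # 3 classes; outputs raw scores, not class indices
)

x = torch.randn(1, 3, 224, 224)
features = vgg.features(x)            # -> (1, 512, 7, 7)
scores = conv_classifier(features)    # -> (1, 3, 1, 1)
pred = scores.argmax(dim=1)           # class index in 0..2

The pretrained FC weights can be carried over by reshaping, e.g. copying classifier[0].weight.view(4096, 512, 7, 7) into the first conv layer; note that the network outputs scores per class, so the class label comes from an argmax over the channel dimension rather than from the raw values.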