conv-neural-network

ValueError: Tensor Tensor(…) is not an element of this graph, when using a global Keras model variable

Submitted by 二次信任 on 2019-12-05 18:45:59
I'm running a web server using Flask, and the error comes up when I try to use vgg16, which is the global variable holding Keras' pre-trained VGG16 model. I have no idea why this error arises, or whether it has anything to do with the TensorFlow backend. Here is my code:

    vgg16 = VGG16(weights='imagenet', include_top=True)

    def getVGG16Prediction(img_path):
        global vgg16
        img = image.load_img(img_path, target_size=(224, 224))
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0)
        x = preprocess_input(x)
        pred = vgg16.predict(x)
        return x, sort(decode_predictions(pred, top=3)[0])

    @app.route("
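The usual cause: Flask serves each request on a worker thread, and the default TensorFlow graph on that thread is not the one the model was built in. A common workaround is to capture the model's graph right after loading it and re-enter it inside the handler. A minimal sketch, assuming the TF 1.x/Keras setup of the question (the `graph` variable is my addition, not from the original code):

    import numpy as np
    import tensorflow as tf
    from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions
    from keras.preprocessing import image

    vgg16 = VGG16(weights='imagenet', include_top=True)
    graph = tf.get_default_graph()  # remember the graph the model lives in

    def getVGG16Prediction(img_path):
        global vgg16, graph
        img = image.load_img(img_path, target_size=(224, 224))
        x = np.expand_dims(image.img_to_array(img), axis=0)
        x = preprocess_input(x)
        with graph.as_default():  # re-enter that graph on the request thread
            pred = vgg16.predict(x)
        return x, decode_predictions(pred, top=3)[0]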

Does the dropout layer need to be defined in deploy.prototxt in caffe?

Submitted by 筅森魡賤 on 2019-12-05 16:58:36
In the AlexNet implementation in Caffe, I saw the following layer in the deploy.prototxt file:

    layer {
      name: "drop7"
      type: "Dropout"
      bottom: "fc7"
      top: "fc7"
      dropout_param {
        dropout_ratio: 0.5
      }
    }

Now the key idea of dropout is to randomly drop units (along with their connections) from the neural network during training. Does this mean that I can simply delete this layer from deploy.prototxt, since this file is meant to be used during testing only?

Yes. Dropout is not required during testing. Even if you include a dropout layer, nothing special happens during testing. See the source code of
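To see why deleting the layer is safe: Caffe's dropout zeroes units and rescales the survivors during training, and is a pass-through at test time, so in deploy.prototxt it is an identity. A minimal NumPy illustration of that train/test asymmetry (my own sketch of inverted dropout, not the Caffe source):

    import numpy as np

    def dropout(x, ratio=0.5, train=True):
        """Inverted dropout: active in training only; identity at test time."""
        if not train:
            return x  # nothing special happens during testing
        mask = (np.random.rand(*x.shape) >= ratio) / (1.0 - ratio)
        return x * mask

    x = np.ones((2, 4))
    print(dropout(x, train=True))   # some units zeroed, survivors scaled by 2
    print(dropout(x, train=False))  # unchanged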

Convolutional Neural Networks: How many pixels will be covered by each of the filters?

Submitted by 半城伤御伤魂 on 2019-12-05 16:40:41
How can I calculate the area (in the original image) covered by each of the filters in my network? E.g., let's say the size of the image is WxW pixels, and I am using the following network:

    layer 1 : conv : 5x5
    layer 2 : pool : 3x3
    layer 3 : conv : 5x5
    .....
    layer N : conv : 5x5

I want to calculate how much area in the original image will be covered by each filter; e.g., the filter in layer 1 will cover 5x5 pixels in the original image. A similar problem would be: how many pixels will be covered by each activation? Which is essentially the same as: how large an input image has to be in order to
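This is the receptive-field computation: each layer with kernel size k and stride s grows the receptive field by (k - 1) times the product of all earlier strides. A small sketch of that rule (mine, assuming stride-1 convolutions and a stride-2 pool, since the question doesn't give strides):

    def receptive_field(layers):
        """Receptive field of one unit in the last layer on the input image.
        Each layer is a (kernel_size, stride) pair; dilation is assumed 1."""
        r, j = 1, 1  # current receptive field and cumulative stride ("jump")
        for k, s in layers:
            r += (k - 1) * j  # each layer widens the field by (k-1) jumps
            j *= s            # strides compound multiplicatively
        return r

    # conv 5x5 (stride 1) -> pool 3x3 (stride 2) -> conv 5x5 (stride 1)
    print(receptive_field([(5, 1), (3, 2), (5, 1)]))  # -> 15 pixels

So the layer-1 filter covers 5x5 pixels, while a layer-3 unit in this example already sees 15x15 pixels of the original image.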

How to accumulate gradients in tensorflow?

Submitted by 好久不见. on 2019-12-05 15:53:19
Question: I have a question similar to this one. Because I have limited resources and I work with a deep model (VGG-16), used to train a triplet network, I want to accumulate gradients over 128 batches of one training example each, and then propagate the error and update the weights. It's not clear to me how to do this. I work with TensorFlow, but any implementation/pseudocode is welcome.

Answer 1: Let's walk through the code proposed in one of the answers you linked to:

    ## Optimizer definition - nothing
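For reference, here is a self-contained graph-mode sketch of the standard accumulation pattern (my own; the toy model, optimizer choice, and names like accum_ops are illustrative, not from the linked answer):

    import tensorflow as tf  # TF 1.x graph-mode API

    # Toy model so the sketch runs; swap in your triplet loss.
    x = tf.placeholder(tf.float32, [None, 4])
    w = tf.Variable(tf.ones([4, 1]))
    loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))

    opt = tf.train.AdamOptimizer(1e-4)
    tvars = tf.trainable_variables()

    # One non-trainable accumulator per trainable variable.
    accum = [tf.Variable(tf.zeros_like(v.initialized_value()), trainable=False)
             for v in tvars]

    grads = tf.gradients(loss, tvars)
    zero_ops = [a.assign(tf.zeros_like(a)) for a in accum]
    accum_ops = [a.assign_add(g) for a, g in zip(accum, grads)]

    N = 128  # single-example batches to accumulate before updating
    apply_op = opt.apply_gradients([(a / N, v) for a, v in zip(accum, tvars)])

In the training loop you run zero_ops once, then accum_ops on each of the 128 examples, then apply_op once to update the weights with the averaged gradient.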

Expected tensorflow model size from learned variables

Submitted by 流过昼夜 on 2019-12-05 11:56:57
When training convolutional neural networks for image classification tasks, we generally want our algorithm to learn the filters (and biases) that transform a given image to its correct label. I have a few models I'm trying to compare in terms of model size, number of operations, accuracy, etc. However, the size of the model output by TensorFlow, concretely the model.ckpt.data file that stores the values of all the variables in the graph, is not the one I expected; in fact, it seems to be three times bigger. To go straight to the problem, I'm going to base my question on this Jupyter notebook.
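The factor of three is usually the optimizer: Adam keeps two slot variables (first- and second-moment estimates) per trainable variable, and the checkpoint saves them alongside the weights. A hedged way to check, assuming float32 variables and a TF 1.x checkpoint (the path is a placeholder):

    import numpy as np
    import tensorflow as tf

    # Expected size from the trainable variables alone (float32 = 4 bytes).
    param_bytes = sum(np.prod(v.get_shape().as_list()) * 4
                      for v in tf.trainable_variables())

    # List what the checkpoint actually stores: with Adam you will also see
    # a '<var>/Adam' and a '<var>/Adam_1' entry per variable, tripling the total.
    reader = tf.train.NewCheckpointReader('model.ckpt')
    for name, shape in reader.get_variable_to_shape_map().items():
        print(name, shape)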

Random cropping data augmentation convolutional neural networks

Submitted by 被刻印的时光 ゝ on 2019-12-05 08:10:16
I am training a convolutional neural network but have a relatively small dataset, so I am implementing techniques to augment it. This is the first time I am working on a core computer vision problem, so I am relatively new to it. For augmentation, I read about many techniques, and one that is mentioned a lot in the papers is random cropping. I'm trying to implement it and have searched a lot about this technique, but couldn't find a proper explanation. So I had a few queries: How is random cropping actually helping in data augmentation? Is there any library (e.g. OpenCV, PIL, scikit-image, scipy
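On the first query: random cropping helps because each epoch sees a slightly different, translated sub-window of every image, so the network cannot rely on objects sitting at fixed positions and effectively trains on many more samples. A minimal NumPy sketch (the crop sizes are my example, not from the question):

    import numpy as np

    def random_crop(img, crop_h, crop_w):
        """Cut a random (crop_h, crop_w) window out of an HxWxC image."""
        h, w = img.shape[:2]
        top = np.random.randint(0, h - crop_h + 1)
        left = np.random.randint(0, w - crop_w + 1)
        return img[top:top + crop_h, left:left + crop_w]

    # The classic AlexNet-style 224x224 crop from a 256x256 training image:
    img = np.zeros((256, 256, 3), dtype=np.uint8)
    patch = random_crop(img, 224, 224)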

3D coordinates as the output of a Neural Network

Submitted by 爱⌒轻易说出口 on 2019-12-05 07:48:33
Question: Neural networks are mostly used to classify, so the activation of a neuron in the output layer indicates the class of whatever you are classifying. Is it possible (and correct) to design a NN to output 3D coordinates? That is, three output neurons, each with values in a range such as [-1000.0, 1000.0].

Answer 1: Yes. You can use a neural network to perform linear regression, and more complicated types of regression, where the output layer has multiple nodes that can be interpreted as a 3-D
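A minimal Keras sketch of such a regression head (the layer sizes and the 10-feature input are my assumptions). The essentials are a linear output activation and a regression loss such as mean squared error; with targets spanning [-1000, 1000] it usually helps to rescale them to roughly unit range first:

    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential([
        Dense(64, activation='relu', input_shape=(10,)),  # 10 input features (assumed)
        Dense(64, activation='relu'),
        Dense(3, activation='linear'),  # x, y, z coordinates
    ])
    # MSE treats the three coordinates as one joint regression target.
    model.compile(optimizer='adam', loss='mse')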

How to set proper arguments to build keras Convolution2D NN model [Text Classification]?

Submitted by 喜欢而已 on 2019-12-05 07:42:04
Question: I am trying to use a 2D CNN to do text classification on Chinese articles, and I'm having trouble setting the arguments of Keras' Convolution2D. I know the basic flow of Convolution2D for images, but I'm stuck using my own dataset with Keras.

Input data: my data is 9800 Chinese articles; the max sentence length is 6810, with a word2vec size of 200. So the input shape is `(9800, 1, 6810, 200)`.

Code for building the model:

    MAX_FEATURES = 6810
    # I just randomly pick one filter, seems this is the problem?
    nb_filter = 128
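For what it's worth, a hedged sketch in the old Keras 1 API the question uses (the 3-word kernel height, pooling size, and class count are my assumptions). A common choice for text is a kernel that spans the full embedding width, so each filter reads whole word vectors:

    from keras.models import Sequential
    from keras.layers import Convolution2D, MaxPooling2D, Flatten, Dense

    nb_filter = 128
    n_classes = 10  # assumed number of article categories

    model = Sequential()
    # Height-3 kernel (a 3-word window) spanning all 200 word2vec dimensions.
    model.add(Convolution2D(nb_filter, 3, 200, activation='relu',
                            input_shape=(1, 6810, 200), dim_ordering='th'))
    # Max-pool away the remaining length dimension (6810 - 3 + 1 = 6808).
    model.add(MaxPooling2D(pool_size=(6808, 1), dim_ordering='th'))
    model.add(Flatten())
    model.add(Dense(n_classes, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy')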

Convolutional2D Siamese Network in Keras

Submitted by 扶醉桌前 on 2019-12-05 07:12:29
I'm trying to use Keras' Siamese layer in conjunction with a shared Convolution2D layer. I don't need the input to pass through any other layers before the Siamese layer, but the Siamese layer requires that input layers be specified. I can't figure out how to create input layers that match the input of the conv layer. The only concrete example of the Siamese layer being used that I could find is in the tests, where Dense layers (with vector inputs) are used as inputs. Basically, I want an input layer that lets me specify the image dimensions as input, so they can be passed on to the shared conv
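The Siamese layer was removed from later Keras releases; in the functional API the same weight sharing falls out of calling a single layer instance on both inputs, and the Input layer is where the image dimensions go. A hedged sketch (the image size and the L1 distance head are my choices, in Keras 2 syntax):

    from keras.models import Model
    from keras.layers import Input, Conv2D, Flatten, Lambda
    import keras.backend as K

    shared_conv = Conv2D(32, (3, 3), activation='relu')  # one instance = shared weights

    left = Input(shape=(64, 64, 1))   # assumed image dimensions
    right = Input(shape=(64, 64, 1))

    def encode(t):
        return Flatten()(shared_conv(t))

    # L1 distance between the two embeddings.
    dist = Lambda(lambda t: K.abs(t[0] - t[1]))([encode(left), encode(right)])
    model = Model(inputs=[left, right], outputs=dist)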

Multiple pretrained networks in Caffe

Submitted by 会有一股神秘感。 on 2019-12-05 05:43:45
Is there a simple way (e.g. without modifying Caffe code) to load weights from multiple pretrained networks into one network? The network contains some layers with the same dimensions and names as in both pretrained networks. I am trying to achieve this using NVIDIA DIGITS and Caffe.

EDIT: I thought it wouldn't be possible to do this directly from DIGITS, as confirmed by the answers. Can anyone suggest a simple way to modify the DIGITS code to be able to select multiple pretrained networks? I checked the code a bit, and thought the training script would be a good place to start, but I don't have in-depth
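Outside DIGITS, plain pycaffe can already merge weights: Net.copy_from() matches layers by name and skips everything else, so calling it once per .caffemodel fills the combined net piecewise. A short sketch (the file names are placeholders):

    import caffe

    # Build the combined network, then pull weights from each pretrained
    # model in turn; only layers whose names match are overwritten.
    net = caffe.Net('combined_deploy.prototxt', caffe.TEST)
    net.copy_from('pretrained_a.caffemodel')
    net.copy_from('pretrained_b.caffemodel')
    net.save('merged.caffemodel')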