neural-network

Receptive fields for a feed-forward network

旧城冷巷雨未停 submitted on 2019-12-13 03:11:54
Question: I am pretty new to artificial intelligence and neural networks. I have implemented a feed-forward neural network in PyTorch for classification on the MNIST data set. Now I want to visualize the receptive fields of (a subset of) the hidden neurons. But I am having some trouble understanding the concept of receptive fields, and when I google it, all the results are about CNNs. So can anyone help me with how I could do this in PyTorch and how to interpret the results?

Answer 1: I have previously
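The answer above is cut off, but for a fully connected network the usual reading is that a hidden neuron's receptive field is simply its vector of incoming weights, which on MNIST can be reshaped to a 28x28 image and plotted. A minimal PyTorch sketch of that idea (the architecture and layer sizes here are assumptions; in practice you would load your own trained model):

    import matplotlib.pyplot as plt
    import torch.nn as nn

    # hypothetical one-hidden-layer MNIST classifier; in practice, load
    # the trained model whose neurons you want to inspect
    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

    # first-layer weights: one 784-dimensional row per hidden neuron
    weights = model[0].weight.detach()

    # each row, reshaped to 28x28, is that neuron's receptive field:
    # bright and dark pixels show which inputs excite or inhibit it
    fig, axes = plt.subplots(4, 4, figsize=(6, 6))
    for i, ax in enumerate(axes.flat):
        ax.imshow(weights[i].reshape(28, 28), cmap="gray")
        ax.axis("off")
    plt.show()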

SURF feature input to neural network in MATLAB

℡╲_俬逩灬. submitted on 2019-12-13 02:46:57
Question: Can I use the SURF features obtained by the MATLAB command (detectSURFFeature) as input to a neural network, in order to train the network to classify/detect objects in an image? If yes, how can I cope with the multidimensional data produced by the descriptor? I am using an image set of the same resolution and almost identical orientation. I am using only MATLAB.

Answer 1: One way to do this is to use the bag-of-features approach. You discretize the space of the SURF descriptors, and then you compute a
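The answer is truncated, but the standard continuation of bag-of-features is: cluster the descriptors (e.g. with k-means) and then histogram each image's descriptors over the cluster centres, giving one fixed-length vector per image that a neural network can take as input. A rough sketch of the idea (shown in Python with scikit-learn purely to illustrate the concept; MATLAB's bagOfFeatures object does the equivalent internally):

    import numpy as np
    from sklearn.cluster import KMeans

    def build_vocabulary(descs_per_image, k=200):
        # stack the descriptors of all training images and cluster them;
        # the k cluster centres are the "visual words"
        all_descs = np.vstack(descs_per_image)
        return KMeans(n_clusters=k, n_init=10).fit(all_descs)

    def bag_of_features(kmeans, descs):
        # assign each descriptor of one image to its nearest visual word,
        # then histogram the assignments -> one fixed-length NN input vector
        words = kmeans.predict(descs)
        hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
        return hist / hist.sum()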

Fast pixel-matching algorithm

不羁岁月 submitted on 2019-12-13 02:23:12
Question: I am stuck on a pixel-matching algorithm for finding symbols in an image. I have images of two symbols that I intend to find in a high-resolution image. Instead of a pixel-by-pixel matching algorithm, is there a fast algorithm that gives the same result as pixel matching? The result should be: (number of pixels matched) divided by (total pixels). My problem is that I wish to find certain symbols in a 1-bit image. The symbols appear with exact similarity
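One common way to get exactly the pixel-match score without a per-offset pixel loop is FFT-based cross-correlation: for 0/1 images, the number of agreeing pixels at every offset can be computed with two convolutions. A hedged sketch (function and variable names are made up for illustration):

    import numpy as np
    from scipy.signal import fftconvolve

    def match_fraction(image, template):
        # fraction of pixels that agree with the template at every valid
        # offset, for 0/1 (1-bit) images
        img = image.astype(float)
        tpl = template.astype(float)
        flip = tpl[::-1, ::-1]  # correlation = convolution with a flipped kernel
        ones = fftconvolve(img, flip, mode="valid")                        # 1s matching 1s
        zeros = fftconvolve(1 - img, (1 - tpl)[::-1, ::-1], mode="valid")  # 0s matching 0s
        return (ones + zeros) / tpl.size

    # offsets where at least 99% of the pixels match:
    # candidates = np.argwhere(match_fraction(big_image, symbol) > 0.99)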

TensorFlow - Loss increases to NaN

落花浮王杯 submitted on 2019-12-13 01:53:40
Question: I am going through Udacity's Deep Learning course. The interesting thing I am observing is that for the same dataset my 1-layer neural network works perfectly fine, but when I add more layers my loss increases to NaN. I am using the following blog post as a reference: http://www.ritchieng.com/machine-learning/deep-learning/tensorflow/regularization/

Here is my code:

    batch_size = 128
    beta = 1e-3

    # Network Parameters
    n_hidden_1 = 1024  # 1st layer
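The question is cut off before the rest of the code, but the two usual culprits when a loss diverges to NaN as layers are added are unscaled weight initialization and a hand-rolled log(softmax(...)). A sketch of both standard fixes in the same TF 1.x style (shapes and names are assumptions, not the asker's actual code):

    import numpy as np
    import tensorflow as tf  # written against the TF 1.x API used in the question

    n_input, n_classes, n_hidden_1 = 784, 10, 1024
    x = tf.placeholder(tf.float32, [None, n_input])
    labels = tf.placeholder(tf.float32, [None, n_classes])

    # fix 1: scale initial weights by sqrt(2 / fan_in); with unscaled
    # initialization, activations grow with each extra layer until the
    # loss overflows
    w1 = tf.Variable(tf.truncated_normal([n_input, n_hidden_1],
                                         stddev=np.sqrt(2.0 / n_input)))
    b1 = tf.Variable(tf.zeros([n_hidden_1]))
    w2 = tf.Variable(tf.truncated_normal([n_hidden_1, n_classes],
                                         stddev=np.sqrt(2.0 / n_hidden_1)))
    b2 = tf.Variable(tf.zeros([n_classes]))

    hidden = tf.nn.relu(tf.matmul(x, w1) + b1)
    logits = tf.matmul(hidden, w2) + b2

    # fix 2: use the fused, numerically stable op rather than computing
    # log(softmax(...)) by hand
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))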

Training a CNN with pre-trained word embeddings is very slow (TensorFlow)

梦想与她 submitted on 2019-12-13 01:51:05
Question: I'm using TensorFlow (0.6) to train a CNN on text data. I'm using a method similar to the second option specified in this SO thread (with the exception that the embeddings are trainable). My dataset is pretty small and the vocabulary is around 12,000 words. When I train using random word embeddings everything works nicely. However, when I switch to the pre-trained embeddings from the word2vec site, the vocabulary grows to over 3,000,000 words and training iterations become over 100 times
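A common remedy is to avoid carrying all 3,000,000 pre-trained vectors into the graph: build a reduced embedding matrix covering only the ~12,000 words that actually occur in the dataset, and initialize the trainable embedding variable from it. A sketch of that pruning step (using gensim to read the word2vec binary; the library and file name are assumptions):

    import numpy as np
    from gensim.models import KeyedVectors  # assumption: gensim is installed

    def build_embedding_matrix(vocab, path="GoogleNews-vectors-negative300.bin"):
        # load the 3M-word word2vec model once, then keep only the rows
        # for the words that actually occur in the dataset
        w2v = KeyedVectors.load_word2vec_format(path, binary=True)
        matrix = np.random.uniform(-0.25, 0.25,
                                   (len(vocab), w2v.vector_size)).astype(np.float32)
        for i, word in enumerate(vocab):
            if word in w2v:  # words missing from word2vec keep random vectors
                matrix[i] = w2v[word]
        return matrix  # initialize the trainable embedding variable with this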

Extracting Features from VGG

放肆的年华 submitted on 2019-12-13 01:15:09
Question: I want to extract features from images in the MS COCO dataset using a fine-tuned VGG-19 network. However, it takes about 6~7 seconds per image, roughly 2 hours per 1k images (even longer for other fine-tuned models). There are 120k images in the MS COCO dataset, so it will take at least 10 days. Is there any way I can speed up the feature extraction process?

Answer 1: Well, this is not just a command. First you must check whether your GPU is powerful enough to wrestle with deep CNNs. Knowing your GPU
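The answer is cut off, but the two changes that usually give the biggest speedup are running the network on a GPU and feeding images in batches rather than one at a time, which amortizes the per-call overhead. A sketch of batched extraction (shown with torchvision's VGG-19 for concreteness; the question's framework is not stated, and the same idea applies in Caffe or TensorFlow):

    import torch
    from torchvision import models

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    vgg = models.vgg19(pretrained=True).to(device).eval()

    @torch.no_grad()  # no gradients needed for feature extraction
    def extract_features(batch):
        # batch: (N, 3, 224, 224) tensor, already resized and normalized;
        # one forward pass over N images instead of N single-image passes
        return vgg.features(batch.to(device)).flatten(1).cpu()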

High training error at the beginning of training a convolutional neural network

旧巷老猫 submitted on 2019-12-13 01:09:27
Question: I'm working on training a CNN, and during the training process, especially at the beginning, I get an extremely high training error. After that, the error starts to go down slowly. After approximately 500 epochs the training error comes close to zero (e.g. 0.006604). Then I took the final model and measured its accuracy against the testing data, and got about 89.50%. Does that seem normal? I mean, getting a high training error rate at

Encog/Neuroph: saving a neural network

╄→гoц情女王★ submitted on 2019-12-12 21:30:48
Question: I'm new to the neural network field (to tell the truth, I just started a few days back). I want to use a neural network in my OCR application to recognize handwritten text. What I want to know is: is it possible to train the network after the initial training? In other words, I am going to train a few characters in the beginning, but I want to add more characters to the network later without affecting the previously trained data. (Suppose I've created the neural network with adequate out
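Whether this works out of the box depends on the library, but the general trick for adding output classes later is to create a larger output layer, copy the already-trained weights into it, and then continue training. A framework-neutral sketch of that weight transfer (shown in PyTorch rather than Encog/Neuroph; all sizes are assumptions):

    import torch
    import torch.nn as nn

    old_head = nn.Linear(128, 26)  # output layer trained on 26 characters (sizes assumed)
    new_head = nn.Linear(128, 36)  # same hidden size, room for 10 new characters

    with torch.no_grad():
        new_head.weight[:26] = old_head.weight  # keep what was already learned
        new_head.bias[:26] = old_head.bias

Note that further training only on the new characters will still shift the shared hidden-layer weights (catastrophic forgetting), so samples of the old characters are usually mixed back into the later training set.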

Keras fitting ignoring NaN values

白昼怎懂夜的黑 submitted on 2019-12-12 19:16:10
Question: I am training a neural network to do regression (1 input and 1 output). Let x and y be the usual input and output datasets, respectively. My problem is that the y dataset (not the x) has some values set to NaN, so the fitting goes to NaN. I wonder if there is an option to ignore the NaN values in the fitting, in a similar way to the NumPy functions such as np.nanmean that calculate the mean ignoring NaNs. If that option does not exist, I suppose I would have to find the NaN values and
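Keras has no built-in NaN-ignoring fit option that I know of, so the two usual workarounds are filtering the NaN rows out beforehand or writing a loss that masks them. A hedged sketch of both (x, y, and model are the asker's objects and are assumed here):

    import numpy as np
    import tensorflow as tf

    # option 1: simply drop the NaN rows before fitting
    # mask = ~np.isnan(y)
    # model.fit(x[mask], y[mask], epochs=100)

    # option 2: a loss that treats NaN targets as "no label"
    def masked_mse(y_true, y_pred):
        mask = tf.math.is_finite(y_true)
        safe_true = tf.where(mask, y_true, tf.zeros_like(y_true))
        diff = tf.where(mask, y_pred - safe_true, tf.zeros_like(y_pred))
        # average the squared error over the labelled entries only
        return tf.reduce_sum(tf.square(diff)) / tf.reduce_sum(tf.cast(mask, y_pred.dtype))

    # model.compile(optimizer="adam", loss=masked_mse)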

How can I get the layer type in pycaffe?

a 夏天 submitted on 2019-12-12 19:03:27
Question: Is it possible at all to get each layer's type (e.g. Convolution, Data, etc.) in pycaffe? I searched the examples provided, but I couldn't find anything. Currently I'm using layer names to do my job, which is extremely bad and limiting.

Answer 1: It's easy!

    import caffe

    net = caffe.Net('/path/to/net.prototxt', '/path/to/weights.caffemodel', caffe.TEST)

    # get the type of the 5-th layer
    print "type of 5-th layer is ", net.layers[5].type

To map between layer names and indices you can use this simple trick:
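The answer breaks off at the name-to-index trick; based on pycaffe's net._layer_names attribute, it is presumably along these lines (an assumption, since the original is cut off):

    # look up a layer's index by its name, then query its type as above
    idx = list(net._layer_names).index('conv1')  # 'conv1' is a hypothetical layer name
    print "type of layer conv1 is ", net.layers[idx].type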