neural-network

OpenCV Neural Network Sigmoid Output

Submitted by 人盡茶涼 on 2019-12-28 13:38:29
Question: I have been using OpenCV for quite some time and recently decided to explore its machine-learning capabilities, so I ended up implementing a neural network for face recognition. To summarize my strategy for face recognition: (1) read images from a CSV of a face database; (2) roll the images into a Mat array row-wise; (3) apply PCA for dimensionality reduction; (4) use the PCA projections to train the network; (5) predict the test data using the trained network. Everything was fine until the prediction stage. I was using …
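A minimal sketch of this pipeline (not the asker's code; layer sizes, the number of PCA components, and the stand-in data are illustrative assumptions), using OpenCV's ANN_MLP with the symmetric sigmoid activation:

```python
import cv2
import numpy as np

# Stand-in data: one flattened face image per row, plus integer class ids.
faces = np.random.rand(20, 100 * 100).astype(np.float32)
labels = np.random.randint(0, 4, 20)
num_classes = 4

# PCA for dimensionality reduction (15 components, chosen arbitrarily here).
mean = np.empty((0))
mean, eigenvectors = cv2.PCACompute(faces, mean, maxComponents=15)
projections = cv2.PCAProject(faces, mean, eigenvectors).astype(np.float32)

# One-hot targets: ANN_MLP expects one output neuron per class.
targets = np.zeros((len(labels), num_classes), dtype=np.float32)
targets[np.arange(len(labels)), labels] = 1.0

ann = cv2.ml.ANN_MLP_create()
ann.setLayerSizes(np.array([15, 10, num_classes], dtype=np.int32))
ann.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM, 1.0, 1.0)
ann.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP)
ann.train(projections, cv2.ml.ROW_SAMPLE, targets)

# predict() returns raw sigmoid activations, not a class id, so the
# predicted class is the index of the strongest output neuron.
_, out = ann.predict(projections[:1])
print(out.argmax(axis=1))
```

Because the output activation is a symmetric sigmoid, predict() returns real-valued scores per output neuron rather than a class label; taking the argmax over the outputs is the usual way to obtain the predicted class.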

Tensorflow: Cannot interpret feed_dict key as Tensor

Submitted by 不羁的心 on 2019-12-28 12:15:12
Question: I am trying to build a neural-network model with one hidden layer (1024 nodes); the hidden layer is just a ReLU unit. I am processing the input data in batches of 128, and the inputs are images of size 28 * 28. In the following code I get the error on the line _, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y}). Error: TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder_64:0", shape=(128, 784), dtype=float32) is not an element of this graph.
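The usual cause of this error is that the placeholders were created in a different graph than the one the session runs (for example, by re-executing a notebook cell and creating fresh placeholders). A minimal sketch, assuming TensorFlow 1.x and stand-in data, that keeps the placeholders, ops, and session on one graph:

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=(128, 784))
    y = tf.placeholder(tf.float32, shape=(128, 10))
    hidden = tf.layers.dense(x, 1024, activation=tf.nn.relu)   # the ReLU hidden layer
    logits = tf.layers.dense(hidden, 10)
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
    init = tf.global_variables_initializer()

# Stand-in batch of 128 flattened 28*28 images with one-hot labels.
batch_x = np.random.rand(128, 784).astype(np.float32)
batch_y = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, 128)]

# The session is opened on the same graph the placeholders belong to,
# so the feed_dict keys resolve.
with tf.Session(graph=graph) as sess:
    sess.run(init)
    _, c = sess.run([optimizer, loss], feed_dict={x: batch_x, y: batch_y})
    print(c)
```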

Training feedforward neural network for OCR [closed]

Submitted by 大憨熊 on 2019-12-28 11:46:16
Question (closed as needing more focus; not currently accepting answers): Currently I'm learning about neural networks and trying to create an application that can be trained to recognize handwritten characters. For this problem I use a feed-forward neural network, and it seems to work when I train it to recognize 1, 2, or 3 different characters, but …
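A minimal sketch of such a network in Keras (MNIST digits as stand-in data; layer sizes are illustrative assumptions), using one softmax output neuron per character class and one-hot targets:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils
from keras.datasets import mnist

# Stand-in "characters": the ten MNIST digit classes, flattened to 784 inputs.
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(-1, 784).astype('float32') / 255.0
X_test = X_test.reshape(-1, 784).astype('float32') / 255.0
Y_train = np_utils.to_categorical(y_train, 10)
Y_test = np_utils.to_categorical(y_test, 10)

model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))   # one output neuron per character class
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=128, epochs=3,
          validation_data=(X_test, Y_test))
```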

Keras: How to use fit_generator with multiple inputs

Submitted by 血红的双手。 on 2019-12-28 05:33:12
Question: Is it possible to have two fit_generator calls? I'm creating a model with two inputs; the model configuration is shown below. Label Y uses the same labels for the X1 and X2 data. The following error keeps occurring: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[[0.75686276, 0.75686276, 0.75686276], [0.75686276, 0.75686276, 0 …
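A single generator that yields both inputs as a list is enough; two fit_generator calls are not needed. A minimal sketch under assumed shapes (gen1/gen2 are hypothetical generators that yield (x, y) batches in the same order with identical labels):

```python
def multi_input_generator(gen1, gen2):
    """Yield ([x1, x2], y) batches from two generators that produce (x, y)
    batches in the same order with identical labels (an assumption here)."""
    while True:
        x1, y = next(gen1)
        x2, _ = next(gen2)      # second label set is identical, so it is dropped
        yield [x1, x2], y       # list of two inputs matches a two-input model

# model.fit_generator(multi_input_generator(flow1, flow2),
#                     steps_per_epoch=steps_per_epoch, epochs=10)
```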

Tensor is not an element of this graph

Submitted by 大兔子大兔子 on 2019-12-28 05:21:48
Question: I'm getting the error ValueError: Tensor Tensor("Placeholder:0", shape=(1, 1), dtype=int32) is not an element of this graph. The code runs perfectly fine without with tf.Graph().as_default():. However, I need to call M.sample(...) multiple times, and each time the memory is not freed after session.close(); there is probably a memory leak, but I'm not sure where it is. I want to restore a pre-trained neural network, set it as the default graph, and test it multiple times (say 10000) over …
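A minimal sketch of one way to do this, assuming TensorFlow 1.x (the checkpoint path and tensor names below are hypothetical, not the asker's): restore the graph once, keep a single long-lived session, and reuse it for every sampling call instead of rebuilding the graph and session per call, which is the usual source of both the error and the growing memory:

```python
import tensorflow as tf  # assumes TensorFlow 1.x

graph = tf.Graph()
with graph.as_default():
    # Checkpoint path and tensor names are placeholder examples.
    saver = tf.train.import_meta_graph('model.ckpt.meta')
    input_ph = graph.get_tensor_by_name('Placeholder:0')
    output = graph.get_tensor_by_name('output:0')

sess = tf.Session(graph=graph)
saver.restore(sess, 'model.ckpt')

def sample(value):
    # Reuses the long-lived session; no new ops are added per call,
    # so memory stays flat across many invocations.
    return sess.run(output, feed_dict={input_ph: value})

# for _ in range(10000):
#     sample(...)
# sess.close()  # once, at the very end
```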

Tensorflow One Hot Encoder?

Submitted by a 夏天 on 2019-12-28 04:54:12
Question: Does TensorFlow have something similar to scikit-learn's one-hot encoder for processing categorical data? Would using a placeholder of tf.string behave as categorical data? I realize I can manually pre-process the data before sending it to TensorFlow, but having it built in would be very convenient. Answer 1: As of TensorFlow 0.8 there is a native one-hot op, tf.one_hot, that can convert a set of sparse labels to a dense one-hot representation. This is in addition to tf.nn.sparse_softmax_cross …
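A minimal usage sketch of tf.one_hot, assuming TensorFlow 1.x:

```python
import tensorflow as tf  # assumes TensorFlow 1.x (0.8 or later for tf.one_hot)

labels = tf.constant([0, 2, 1, 3])        # sparse integer labels
one_hot = tf.one_hot(labels, depth=4)     # dense one-hot matrix, shape (4, 4)

with tf.Session() as sess:
    print(sess.run(one_hot))
# [[1. 0. 0. 0.]
#  [0. 0. 1. 0.]
#  [0. 1. 0. 0.]
#  [0. 0. 0. 1.]]
```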

How to Create CaffeDB training data for siamese networks out of image directory

Submitted by 牧云@^-^@ on 2019-12-28 04:04:28
Question: I need some help creating a CaffeDB for a siamese CNN out of a plain directory of images and a label text file; a Python way to do it would be best. The problem is not walking through the directory and making pairs of images; my problem is turning those pairs into a CaffeDB. So far I have only used convert_imageset to create a CaffeDB from an image directory. Thanks for the help! Answer 1: Why don't you simply make two datasets using good old convert_imageset? layer { name: "data_a" top: "data_a" …
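A minimal sketch along the lines of that answer (file names and pair tuples are illustrative assumptions): write two list files whose rows line up pair by pair, build one database per side with convert_imageset, and point two Data layers at them:

```python
# Assumed input: a list of (image_a_path, image_b_path, same_label) pairs,
# already produced by walking the directory (the part the asker has solved).
pairs = [
    ('faces/a/img001.jpg', 'faces/b/img017.jpg', 1),
    ('faces/a/img002.jpg', 'faces/b/img098.jpg', 0),
]

with open('list_a.txt', 'w') as fa, open('list_b.txt', 'w') as fb:
    for path_a, path_b, label in pairs:
        # The pair label is written on both sides; the loss only needs it once.
        fa.write('%s %d\n' % (path_a, label))
        fb.write('%s %d\n' % (path_b, label))

# Then, from the shell (leave shuffling off so rows stay aligned pair-wise):
#   convert_imageset ./ list_a.txt train_a_lmdb
#   convert_imageset ./ list_b.txt train_b_lmdb
# and give the net two Data layers ("data_a" / "data_b"), one per database.
```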

Why doesn't plt.imshow() display the image?

Submitted by ↘锁芯ラ on 2019-12-28 03:51:08
Question: I am a newbie to Keras, and when I tried to run my first Keras program on my Linux machine, something didn't go as I wished. Here is my Python code: import numpy as np np.random.seed(123) from keras.models import Sequential from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Convolution2D, MaxPooling2D from keras.utils import np_utils from keras.datasets import mnist (X_train,y_train),(X_test,y_test) = mnist.load_data() print X_train.shape from matplotlib import …
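In a plain script, imshow() only draws once a blocking show() is called, and a GUI backend must be available on the Linux machine. A minimal sketch:

```python
import matplotlib
# On a headless or misconfigured Linux setup, an interactive backend may be
# needed before importing pyplot, e.g.:
# matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
from keras.datasets import mnist

(X_train, y_train), (X_test, y_test) = mnist.load_data()

plt.imshow(X_train[0], cmap='gray')   # a single 28x28 MNIST digit
plt.show()                            # without this call nothing is rendered
```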

Using pre-trained word2vec with LSTM for word generation

Submitted by 拥有回忆 on 2019-12-28 03:24:08
Question: LSTM/RNN can be used for text generation. This shows a way to use pre-trained GloVe word embeddings with a Keras model. How can pre-trained Word2Vec word embeddings be used with a Keras LSTM model? This post did help. How do I predict / generate the next word when the model is given a sequence of words as its input? Sample approach tried: # Sample code to prepare word2vec word embeddings import gensim documents = ["Human machine interface for lab abc computer applications", "A survey of user opinion …
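A minimal sketch of the general approach, assuming gensim 3.x and Keras 2 (the tiny corpus and the layer sizes are illustrative): train or load word2vec vectors, copy them into a frozen Embedding layer, and stack an LSTM that predicts a distribution over the vocabulary for the next word:

```python
import numpy as np
import gensim                                  # assumes gensim 3.x
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

sentences = [doc.split() for doc in [
    "Human machine interface for lab abc computer applications",
    "A survey of user opinion of computer system response time",
]]
w2v = gensim.models.Word2Vec(sentences, size=50, min_count=1)

vocab = list(w2v.wv.vocab)                      # fixes the row order of the matrix
word2idx = {w: i for i, w in enumerate(vocab)}  # used to encode input sequences
weights = np.array([w2v.wv[w] for w in vocab])

model = Sequential()
model.add(Embedding(input_dim=len(vocab), output_dim=50,
                    weights=[weights], trainable=False))  # frozen word2vec vectors
model.add(LSTM(64))
model.add(Dense(len(vocab), activation='softmax'))        # next-word distribution
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')

# Generation loop (after training on index sequences built with word2idx):
# feed the current sequence, take the argmax of the prediction, map it back
# through vocab, append it to the sequence, and repeat.
```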

How to feed caffe multi label data in HDF5 format?

Submitted by 孤人 on 2019-12-28 02:04:33
Question: I want to use Caffe with a vector label, not an integer. I have checked some answers, and it seems HDF5 is the better way. But then I'm stuck with an error like: accuracy_layer.cpp:34] Check failed: outer_num_ * inner_num_ == bottom[1]->count() (50 vs. 200) Number of labels must match number of predictions; e.g., if label axis == 1 and prediction shape is (N, C, H, W), label count (number of labels) must be N*H*W, with integer values in {0, 1, ..., C-1}. The HDF5 was created as: f = h5py.File('train …
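A minimal sketch of writing vector labels to HDF5 for an HDF5Data layer (the array shapes are assumptions). Note that Caffe's Accuracy layer expects one integer label per sample, which is exactly what the check above enforces, so with vector labels it should be removed or replaced with a loss that accepts vectors (e.g. EuclideanLoss or SigmoidCrossEntropyLoss):

```python
import h5py
import numpy as np

# Stand-in arrays: 50 samples of 3x32x32 data, each with a 4-vector label
# (consistent with the 50-vs-200 counts in the error message above).
data = np.random.rand(50, 3, 32, 32).astype(np.float32)   # N x C x H x W
labels = np.random.rand(50, 4).astype(np.float32)         # one 4-vector per sample

with h5py.File('train.h5', 'w') as f:
    f.create_dataset('data', data=data)
    f.create_dataset('label', data=labels)

# The HDF5Data layer's "source" parameter points at a text file listing the
# .h5 files, one per line.
with open('train_h5_list.txt', 'w') as f:
    f.write('train.h5\n')
```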