neural-network

Is there a way to import a RapidMiner MLP-ANN in OpenCV?

家住魔仙堡 submitted on 2019-12-23 02:57:11

Question: I trained and validated an MLP model in RapidMiner Studio. My input values are already normalized to [-1, 1]. As far as I understand, the MLP is fully defined by its weights. As you can see here, the ANN has one hidden layer: http://i.stack.imgur.com/qhVP0.png Now I'm trying to import this into OpenCV, as I don't want to retrain the whole model. I got all weights per node plus the bias from RapidMiner. OpenCV offers the function CvANN_MLP::load(), which can load an XML or YML file. I tried to
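
A minimal sketch of one route (using the modern cv2.ml Python bindings rather than the legacy CvANN_MLP API; layer sizes and file names below are my own assumptions): build an MLP with the same topology, run a brief dummy training pass so OpenCV will serialize it, save it, then splice the RapidMiner weights into the resulting YML before reloading.

    import cv2
    import numpy as np

    # Assumed topology: 4 inputs, 3 hidden units, 1 output (match your model).
    mlp = cv2.ml.ANN_MLP_create()
    mlp.setLayerSizes(np.array([4, 3, 1], dtype=np.int32))
    mlp.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM)
    mlp.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP)

    # Dummy training pass so the network counts as "trained" and can be saved.
    samples = np.random.rand(10, 4).astype(np.float32)
    targets = np.random.rand(10, 1).astype(np.float32)
    mlp.train(samples, cv2.ml.ROW_SAMPLE, targets)

    mlp.save("template.yml")                    # edit the weight matrices here...
    mlp2 = cv2.ml.ANN_MLP_load("template.yml")  # ...then load the patched copy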

How to modify Caffe network input for C++ API?

微笑、不失礼 submitted on 2019-12-23 02:54:17

Question: I'm trying to use the MNIST Caffe example via the C++ API, but I'm having a bit of trouble working out how to restructure the network prototxt file I'll deploy after training. I've trained and tested the model with the original file (lenet_train_test.prototxt), but when I want to deploy it and make predictions as in the C++ and OpenCV example, I realise I have to modify the input section to make it similar to their deploy.prototxt file. Can I replace the information in the training
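
For reference, a deploy-style input block replaces the lmdb Data layers; here is a sketch in standard Caffe prototxt syntax (the batch size and dimensions are assumptions for MNIST LeNet):

    layer {
      name: "data"
      type: "Input"
      top: "data"
      # batch size, channels, height, width
      input_param { shape: { dim: 1 dim: 1 dim: 28 dim: 28 } }
    }

Deploy files also drop the accuracy and loss layers and usually append a plain Softmax layer to produce class probabilities.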

Understanding output of Dense layer for higher dimension

你。 submitted on 2019-12-23 02:52:57

Question: I have no problem understanding the output shape of a Dense layer that follows a Flatten layer. The output shape matches my understanding, i.e. (batch size, units).

    nn = keras.Sequential()
    nn.add(keras.layers.Conv2D(8, kernel_size=(2,2), input_shape=(4,5,1)))
    nn.add(keras.layers.Conv2D(1, kernel_size=(2,2)))
    nn.add(keras.layers.Flatten())
    nn.add(keras.layers.Dense(5))
    nn.add(keras.layers.Dense(1))
    nn.summary()

Output is: _________________________________________________________________
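
A minimal sketch of the higher-dimensional case (my own example, not from the question): without Flatten, Dense is applied to the last axis only, so the leading spatial axes pass through unchanged.

    import tensorflow as tf
    from tensorflow import keras

    nn = keras.Sequential([
        keras.layers.Conv2D(8, kernel_size=(2, 2),
                            input_shape=(4, 5, 1)),  # -> (None, 3, 4, 8)
        keras.layers.Dense(5),  # acts on the last axis -> (None, 3, 4, 5)
    ])
    nn.summary()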

Pybrain Text Classification: data and input

岁酱吖の submitted on 2019-12-23 02:31:38

Question: I have 3 sets of sentences (varying in word counts), but I don't know how to extract features from the text such that the input dimension remains the same. For example, I've tried bag-of-words, but since the variation in word counts causes the input dimension to vary, I eventually get errors. I would much appreciate it if you could show me an approach to preparing the string data for the neural network. Thank you! (Python 2.7 on Windows 7) Answer 1: How to format the input This is an extraction from
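
A sketch of the usual fix (function names are my own): build the vocabulary once over the whole training corpus, then map every sentence to a count vector of that fixed length, so the input dimension never varies.

    from collections import Counter

    def build_vocab(sentences, size=100):
        # Keep the `size` most frequent words across the whole corpus.
        counts = Counter(w for s in sentences for w in s.lower().split())
        return [w for w, _ in counts.most_common(size)]

    def vectorize(sentence, vocab):
        # Count vector over the fixed vocabulary: always len(vocab) long.
        words = sentence.lower().split()
        return [float(words.count(w)) for w in vocab]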

Passing Individual Channels of Tensors to Layers in Keras

烂漫一生 submitted on 2019-12-23 02:18:33

Question: I am trying to emulate something equivalent to a SeparableConvolution2D layer for the Theano backend (it already exists for the TensorFlow backend). As the first step, what I need to do is pass ONE channel from a tensor into the next layer. So say I have a 2D convolution layer called conv1 with 16 filters, which produces an output with shape (batch_size, 16, height, width). I need to select the subtensor with shape (:, 0, :, :) and pass it to the next layer. Simple enough, right? This is my
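
A minimal sketch with a Lambda layer (assumes channels-first ordering, as in the Theano backend; conv1_out stands in for conv1's output tensor):

    from keras.layers import Lambda

    # Slice with 0:1 so the channel axis is kept and the result stays 4-D:
    # (batch, 1, height, width) rather than (batch, height, width).
    single_channel = Lambda(
        lambda t: t[:, 0:1, :, :],
        output_shape=lambda s: (s[0], 1, s[2], s[3]),
    )(conv1_out)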

Mixing numerical and categorical data into keras sequential model with Dense layers

瘦欲@ submitted on 2019-12-23 02:03:32

Question: I have a training set in a Pandas dataframe, and I pass this data frame into model.fit() with df.values. Here is some information about the df:

    df.values.shape  # (981, 5)
    df.values[0]
    # array([163, 0.6, 83, 0.52,
    #        array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    #               0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    #               0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    #               0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    #               0,
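
A sketch of one way to make this trainable (the column layout is inferred from the printout above): the fifth column holds an entire one-hot array per row, so flatten each row into a single numeric vector before calling model.fit().

    import numpy as np

    def flatten_row(row):
        parts = []
        for value in row:
            parts.extend(np.ravel(value))  # scalars and the nested array alike
        return np.asarray(parts, dtype=np.float32)

    # X has shape (981, 4 + one_hot_length) and feeds Dense layers directly.
    X = np.stack([flatten_row(r) for r in df.values])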

Neural Network Output :Scaling the output range

一世执手 submitted on 2019-12-23 01:41:58

Question: The output layer of my neural network (3-layered) uses sigmoid as its activation, which outputs only in the range [0, 1]. However, if I want to train it for outputs beyond [0, 1], say in the thousands, what should I do? For example, if I want to train:

    input    ------> output
    0 0      ------> 0
    0 1      ------> 1000
    1000 1   ------> 1
    1 1      ------> 0

My program works for AND, OR, XOR etc., as the inputs and outputs are all binary. There were some suggestions to use the activation y = lambda*(abs(x)*1/(1+exp(-1*(x))))
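
A sketch of the standard workaround (variable names are my own): keep the sigmoid, divide the targets by a known scale factor before training, and multiply predictions back afterwards.

    import numpy as np

    SCALE = 1000.0                      # largest target in the training data
    y_scaled = y_train / SCALE          # y_train: raw targets, e.g. [0, 1000, 1, 0]
    # ... train the network against y_scaled, so targets lie in [0, 1] ...
    y_pred = net_output * SCALE         # net_output: sigmoid output in [0, 1]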

Error when checking input: expected dense_input to have shape (21,) but got array with shape (1,)

走远了吗. submitted on 2019-12-22 18:48:10

Question: How do I fix the input array to meet the input shape? I tried to transpose the input array, as described here, but the error is the same. ValueError: Error when checking input: expected dense_input to have shape (21,) but got array with shape (1,)

    import tensorflow as tf
    import numpy as np

    model = tf.keras.models.Sequential([
        tf.keras.layers.Dense(40, input_shape=(21,), activation=tf.nn.relu),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1, activation=tf.nn.softmax)
    ])
    model.compile
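
A minimal sketch of what this model expects (x is a hypothetical 21-feature sample): Keras wants a leading batch dimension, so a single sample must arrive with shape (1, 21), not as 21 rows of shape (1,).

    import numpy as np

    x = np.arange(21, dtype=np.float32)  # one sample with 21 features
    batch = x.reshape(1, -1)             # shape (1, 21): (batch_size, features)
    # model.predict(batch)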

Neural network activation function

老子叫甜甜 submitted on 2019-12-22 17:48:41

Question: This is a beginner-level question. I have several training inputs in binary, and for the neural network I am using a sigmoid thresholding function SigmoidFn(Input1*Weights), where

    SigmoidFn(x) = 1./(1+exp(-1.*x));

Using the above function will give continuous real numbers. But I want the output to be binary, since the network is a Hopfield neural net (single layer, 5 input nodes and 5 output nodes). The problem I am facing is that I am unable to correctly understand the usage and
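
A sketch of the common fix (a Python stand-in for the MATLAB snippet above; names are my own): threshold the sigmoid output at 0.5 to recover the binary states a Hopfield-style network expects.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def binary_step(inputs, weights):
        # 1 where the sigmoid crosses 0.5 (i.e. the net input is >= 0), else 0.
        return (sigmoid(inputs @ weights) >= 0.5).astype(np.int8)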

How can I have multiple losses in a network in Caffe?

风流意气都作罢 submitted on 2019-12-22 14:53:10

Question: If I define multiple loss layers in a network, will multiple back-propagations happen from those ends to the beginning of the network? I mean, do they even work that way? Suppose I have something like this:

    Layer1 { }
    Layer2 { }
    ...
    Layer_n { }
    Layer_cls1 { bottom: layer_n  top: cls1 }
    Layer_cls_loss1 { type: some_loss  bottom: cls1  top: loss1 }
    Layer_n1 { bottom: layer_n  .. }
    Layer_n2 { }
    ...
    layer_n3 { }
    Layer_cls2 { bottom: layer_n3  top: cls2 }
    Layer_cls_loss2 { type: some_loss  bottom: cls2  top: loss2
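
For context: in Caffe there is a single backward pass, and the gradients from every loss layer are summed, each scaled by its loss_weight. A sketch in standard Caffe prototxt syntax (the layer names and weights are assumptions, not from the question):

    layer {
      name: "loss1"
      type: "SoftmaxWithLoss"
      bottom: "cls1"
      bottom: "label"
      top: "loss1"
      loss_weight: 1.0   # default for loss layers
    }
    layer {
      name: "loss2"
      type: "SoftmaxWithLoss"
      bottom: "cls2"
      bottom: "label"
      top: "loss2"
      loss_weight: 0.5   # scales this loss's gradient contribution
    }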