neural-network

Keras dimensionality mismatch in convolutional layer

Posted by ℡╲_俬逩灬 on 2019-12-24 02:22:27
Question: I'm trying to play around with Keras to build my first neural network. I have zero experience and can't seem to figure out why my dimensionality isn't right. I can't tell from the docs what this error is complaining about, or even which layer is causing it. My model takes in a 32-byte array of numbers and is supposed to give a boolean value on the other side. I want a 1D convolution on the input byte array. arr1 is the 32-byte array, arr2 is an array of booleans. inputData = np
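The usual cause of a Conv1D dimensionality error is that Keras expects 3D input of shape (samples, steps, channels), while a plain array of 32-value rows is only 2D. A minimal sketch of the reshape, with hypothetical data standing in for the truncated arr1/arr2:

```python
import numpy as np

# Hypothetical stand-ins for arr1/arr2 from the question:
# 100 samples, each a 32-value byte array, plus a boolean label per sample.
inputData = np.random.randint(0, 256, size=(100, 32))
labels = np.random.randint(0, 2, size=(100, 1))

# Conv1D expects (samples, steps, channels); add a channel axis.
inputData = inputData.reshape((inputData.shape[0], 32, 1))
print(inputData.shape)  # (100, 32, 1)
```

With that shape, a Conv1D layer would declare input_shape=(32, 1).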

Neural Network in python: Decision/Classification always gives 0.5

Posted by 北城以北 on 2019-12-24 02:10:00
Question: First of all, I want to say that I am a Python beginner and also completely new to neural networks. When I read about them I was very excited and decided to set up a little piece of code from scratch (see code below). But somehow my code is not working properly. I guess there are some major bugs (in the algorithm and the programming?), but I cannot find them at the moment. In the handwritten notes you can see my system (and some formulas). I want to solve a decision problem where I have data in the form
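A constant 0.5 output is typically a sign that the network's pre-activation is stuck at zero, because sigmoid(0) = 0.5: common causes are zero-initialized weights (which never break symmetry) or a sign error in the backprop update. A minimal numpy illustration of the symptom (hypothetical shapes, not the question's actual code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.3, -1.2, 0.7])

# Zero-initialized weights: the pre-activation is 0, so the
# sigmoid output is exactly 0.5 regardless of the input.
w_zero = np.zeros(3)
print(sigmoid(w_zero @ x))  # 0.5

# Small random initialization breaks the symmetry.
rng = np.random.default_rng(0)
w_rand = rng.normal(scale=0.1, size=3)
print(sigmoid(w_rand @ x))  # no longer pinned at 0.5
```

Checking the initial weights and the sign of the gradient step is usually the fastest way to locate this kind of bug.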

How to load a reference .caffemodel for training

Posted by 不想你离开。 on 2019-12-24 01:55:13
Question: I'm using AlexNet to train my own dataset. The example code in Caffe comes with bvlc_reference_caffenet.caffemodel, solver.prototxt, train_val.prototxt, and deploy.prototxt. When I train with the following command: ./build/tools/caffe train --solver=models/bvlc_reference_caffenet/solver.prototxt I'd like to start with the weights given in bvlc_reference_caffenet.caffemodel. My questions are: How do I do that? Is it a good idea to start from those weights? Would this converge faster? Would this be bad
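Fine-tuning from an existing snapshot is done with Caffe's --weights flag, which loads the named .caffemodel before training starts instead of using random initialization. A sketch, assuming the stock BVLC directory layout:

```shell
# Start training from the reference weights (fine-tuning).
./build/tools/caffe train \
    --solver=models/bvlc_reference_caffenet/solver.prototxt \
    --weights=models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel
```

Starting from pretrained weights usually converges faster when the new dataset resembles ImageNet; any layer whose shape differs (e.g. a final classifier resized for a different number of classes) must be renamed in train_val.prototxt so it is reinitialized rather than loaded.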

Use different optimizers depending on an if statement in TensorFlow

Posted by ♀尐吖头ヾ on 2019-12-24 01:47:24
Question: I'm currently trying to implement a neural network with two training steps. First I want to reduce the loss_first_part function and then I want to reduce the loss_second_part. tf.global_variable_initializer().run() for epoch in range(nb_epochs) if epoch < 10 : train_step = optimizer.minimize(loss_first_part) else : train_step = optimizer.minimize(loss_second_part) The problem is that the initializer should be defined after the optimizer.minimize call. Indeed, I get the following error
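In graph-mode TensorFlow the usual fix is to build both minimize ops before running the variable initializer, and merely choose which op to run inside the loop; creating a new minimize op after initialization leaves the optimizer's internal slot variables uninitialized. The selection pattern, sketched in plain Python with stand-in step functions (the TF specifics are assumed from the question):

```python
# Stand-ins for optimizer.minimize(loss_first_part) and
# optimizer.minimize(loss_second_part): both "ops" exist up front,
# before any initialization would run.
steps_taken = []

def train_step_first():
    steps_taken.append("first")

def train_step_second():
    steps_taken.append("second")

nb_epochs = 15
for epoch in range(nb_epochs):
    # Choose a pre-built op per epoch instead of constructing a new one.
    train_step = train_step_first if epoch < 10 else train_step_second
    train_step()

print(steps_taken.count("first"), steps_taken.count("second"))  # 10 5
```

In the actual TF code, both train ops would be created, then the initializer run once, and session.run would be given whichever op the epoch calls for.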

Generating predictions using a back-propagation neural network model in R returns the same value for all observations

Posted by 只谈情不闲聊 on 2019-12-24 01:45:06
Question: I'm trying to generate predictions on a new data set using a back-propagation neural network trained with the neuralnet package, but when I use the 'compute' function I end up with the same value for all observations. What did I do wrong? # the data Var1 <- runif(50, 0, 100) sqrt.data <- data.frame(Var1, Sqrt=sqrt(Var1)) # training the model backnet = neuralnet(Sqrt~Var1, sqrt.data, hidden=2, err.fct="sse", linear.output=FALSE, algorithm="backprop", learningrate=0.01) print (backnet) Call: neuralnet
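With linear.output=FALSE the output neuron applies a logistic activation, so predictions are confined to (0, 1); targets like sqrt(100) = 10 are unreachable and the network saturates to roughly the same value for every input. The usual fix is to scale targets into [0, 1] before training and invert the scaling on the predictions; the idea in a small Python sketch (the question itself uses R's neuralnet):

```python
import numpy as np

rng = np.random.default_rng(7)
var1 = rng.uniform(0, 100, size=50)
target = np.sqrt(var1)   # ranges up to 10: outside a sigmoid's (0, 1)

# Min-max scale the targets into [0, 1] so a logistic output can fit them.
t_min, t_max = target.min(), target.max()
scaled = (target - t_min) / (t_max - t_min)

# After prediction, invert the scaling to recover sqrt values.
recovered = scaled * (t_max - t_min) + t_min
print(np.allclose(recovered, target))  # True
```

Alternatively, setting linear.output=TRUE for this regression task lets the output range be unbounded.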

How to keep the weight value at zero in a particular location using theano or lasagne?

Posted by 心不动则不痛 on 2019-12-24 01:37:08
Question: I'm a theano and lasagne user. I have a problem dealing with variable-length input matrices, e.g.: x1 = [0, 1, 3] x2 = [1, 2] matrix_embedding = [ [ 0.1, 0.2, 0.3], [ 0.4, 0.5, 0.6], [ 0.2, 0.3, 0.5], [ 0.5, 0.6, 0.7], ] matrix_embedding[x1] = [ [ 0.1, 0.2, 0.3], [ 0.4, 0.5, 0.6], [ 0.5, 0.6, 0.7] ] matrix_embedding[x2] = [ [ 0.4, 0.5, 0.6], [ 0.2, 0.3, 0.5], ] So, I try to use padding. matrix_padding_embedding = [ [ 0.1, 0.2, 0.3], [ 0.4, 0.5, 0.6], [ 0.2, 0.3, 0.5], [ 0.5, 0.6,
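A common workaround is to reserve one extra embedding row as a padding index, keep that row at zero, and pad the shorter index lists up to a common length; padded positions then contribute zero vectors (and can additionally be masked out of any sum). A numpy sketch of the idea, using the question's matrix (the lasagne specifics are omitted):

```python
import numpy as np

# Embedding matrix from the question, plus one extra all-zero padding row.
matrix_embedding = np.array([
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
    [0.2, 0.3, 0.5],
    [0.5, 0.6, 0.7],
    [0.0, 0.0, 0.0],   # row 4: reserved padding row, kept at zero
])
PAD = 4

# Pad variable-length index lists to a common length with PAD.
x1 = [0, 1, 3]
x2 = [1, 2, PAD]       # x2 = [1, 2] padded to length 3

emb1 = matrix_embedding[x1]
emb2 = matrix_embedding[x2]
print(emb2[-1])        # the padded position is a zero vector
```

In a trainable setting, the gradient for the padding row must also be zeroed (e.g. via a mask) so it stays at zero during updates.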

Python/Sklearn - ValueError: could not convert string to float

Posted by 不打扰是莪最后的温柔 on 2019-12-24 01:15:31
Question: I'm trying to run a kNN classifier across my dataset using 10-fold CV. I have some experience with models in WEKA but am struggling to transfer this over to Sklearn. Below is my code: filename = 'train4.csv' names = ['attribute names are here'] df = pandas.read_csv(filename, names=names) num_folds = 10 kfold = KFold(n_splits=10, random_state=7) model = KNeighborsClassifier() results = cross_val_score(model, df.drop('mix1_instrument', axis=1), df['mix1_instrument'], cv=kfold) print(results.mean())
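This error usually means a feature column still contains strings: either a categorical column that needs encoding, or the file's own header row being read as data (passing names= to read_csv when train4.csv already has a header causes exactly that). A minimal sketch of integer-encoding a string column, using stdlib only; the feature name here is a hypothetical stand-in, only mix1_instrument comes from the question:

```python
# Minimal integer encoding for a categorical column; in practice
# pandas.factorize or sklearn's LabelEncoder/OneHotEncoder do this.
rows = [
    {"spectral_centroid": 0.42, "mix1_instrument": "piano"},
    {"spectral_centroid": 0.91, "mix1_instrument": "violin"},
    {"spectral_centroid": 0.55, "mix1_instrument": "piano"},
]

# Build a stable category -> integer mapping for the string column.
categories = sorted({r["mix1_instrument"] for r in rows})
encoding = {cat: i for i, cat in enumerate(categories)}

y = [encoding[r["mix1_instrument"]] for r in rows]
X = [[r["spectral_centroid"]] for r in rows]
print(encoding)  # {'piano': 0, 'violin': 1}
print(y)         # [0, 1, 0]
```

Checking df.dtypes after read_csv quickly shows which columns are still object (string) typed.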

TensorFlow MNIST example prediction using an external image does not work

Posted by 隐身守侯 on 2019-12-24 01:10:15
Question: I am new to neural networks. I have gone through the basic TensorFlow MNIST tutorial (MNIST for ML Beginners) and am trying to get a prediction using an external image. I have updated the MNIST example provided by TensorFlow. On top of that I have added a few things: 1. saving trained models locally; 2. loading the saved models; 3. preprocessing the image into 28 * 28. I have attached the image for reference. 1. While training the models, save them locally, so I can reuse them at any point of time.
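External images usually fail on MNIST models because the preprocessing doesn't match the training data: MNIST digits are 28x28 grayscale, white-on-black, with pixel values scaled to [0, 1] and flattened to a 784-vector. A numpy sketch of the normalization step (the resize itself would use e.g. PIL, omitted here; the input array is a stand-in):

```python
import numpy as np

# Stand-in for a 28x28 grayscale image loaded from disk
# (0..255, black-on-white as scanners typically produce it).
img = np.random.randint(0, 256, size=(28, 28)).astype(np.float32)

# MNIST is white-on-black in [0, 1]: invert, then scale.
img = (255.0 - img) / 255.0

# The classic tutorial model expects a flattened (1, 784) batch.
batch = img.reshape(1, 784)
print(batch.shape)  # (1, 784)
```

Forgetting the inversion is a classic cause of near-random predictions even when the shape is right.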

How to organize the Recurrent Neural Network?

Posted by 雨燕双飞 on 2019-12-24 00:55:47
Question: I want to model the following: y(t)=F(x(t-1),x(t-2),...,x(t-k)), or let's say a function whose current output depends on the last k inputs. 1- I know one way is to have a classic neural network with k inputs {x(t-1),x(t-2),...,x(t-k)} for each y(t) and train it. Then what's the benefit of using an RNN to solve that problem? 2- Assuming we use an RNN, should I use only x(t) (or x(t-1)) and assume the hidden layer(s) can find the relation of y(t) to the past k inputs through having the in
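The fixed-window approach in point 1 can be built directly: each training example pairs y(t) with the window {x(t-k), ..., x(t-1)}. An RNN's benefit is that k need not be fixed in advance: its hidden state carries a summary of the whole past, so it can in principle learn dependencies longer than any hand-chosen window. A sketch of constructing the fixed windows (pure Python, hypothetical data where the target is simply the next value):

```python
def make_windows(xs, k):
    """Turn a sequence into (window, target) pairs:
    the target at time t is paired with the k previous inputs."""
    examples = []
    for t in range(k, len(xs)):
        window = xs[t - k:t]   # x(t-k) .. x(t-1)
        examples.append((window, xs[t]))
    return examples

xs = [10, 11, 12, 13, 14, 15]
pairs = make_windows(xs, k=3)
print(pairs[0])  # ([10, 11, 12], 13)
```

The trade-off: the windowed feed-forward net is simpler to train, while the RNN shares weights across time steps and handles variable-length histories.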

Caffe | data augmentation by random cropping

Posted by ╄→гoц情女王★ on 2019-12-24 00:52:58
Question: I am trying to train my own network on Caffe, similar to the ImageNet model. But I am confused by the crop layer. As far as I understand the crop layer in the ImageNet model, during training it takes random 227x227 image crops and trains the network, but during testing it takes the center 227x227 crop. Don't we lose information from the image when we crop the center 227x227 patch from the 256x256 image? And a second question: how can we define the number of crops to be taken
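Random cropping happens once per iteration via the data layer's crop_size parameter, so there is no fixed "number of crops": each pass over an image draws a fresh random 227x227 window, and over many epochs the network effectively sees many different crops. At test time the deterministic center crop does discard the borders, trading a little information for reproducibility. Both crops in a numpy sketch (the image is a stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256, 3))   # stand-in for a 256x256 RGB image
crop = 227

# Training: a fresh random 227x227 window each iteration.
y = rng.integers(0, image.shape[0] - crop + 1)
x = rng.integers(0, image.shape[1] - crop + 1)
random_crop = image[y:y + crop, x:x + crop]

# Testing: the deterministic center crop.
cy = (image.shape[0] - crop) // 2
cx = (image.shape[1] - crop) // 2
center_crop = image[cy:cy + crop, cx:cx + crop]
print(random_crop.shape, center_crop.shape)  # (227, 227, 3) (227, 227, 3)
```

Averaging predictions over multiple crops (corners, center, and their mirrors) at test time is a common way to recover some of the discarded border information.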