neural-network

Neural Network in TensorFlow works worse than Random Forest and predicts the same label each time

戏子无情 submitted on 2020-01-14 05:07:06
Question: I am new to DNNs and TensorFlow. I have a problem using a neural network for binary classification. As input data I have a text dataset, which was transformed by TF-IDF into numerical vectors. The training dataset has 43,000 rows and 4,235 features. I tried the TFLearn library and then Keras, but the result is the same: the NN predicts only one label, 0 or 1, and gives worse accuracy than a Random Forest. I will add the script which I use for building the NN. Please tell me what is wrong. …
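The asker's script is cut off above; for context, here is a minimal sketch of the kind of model being described, with illustrative layer sizes and hyperparameters (these are assumptions, not the asker's actual code):

    import numpy as np
    from tensorflow.keras import layers, models

    # 4235 TF-IDF features per row, one sigmoid unit for the binary label.
    model = models.Sequential([
        layers.Dense(128, activation='relu', input_shape=(4235,)),
        layers.Dropout(0.5),
        layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])

    # X_train: (43000, 4235) dense array from the TF-IDF vectorizer,
    # y_train: (43000,) array of 0/1 labels.
    # model.fit(X_train, y_train, epochs=10, batch_size=64,
    #           validation_split=0.1)

If a model like this still predicts a single class, the usual suspects are an imbalanced label distribution, a learning rate that is too high or too low, or unshuffled training data.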

Inverse_transform method (LabelEncoder)

℡╲_俬逩灬. submitted on 2020-01-14 04:39:05
Question: You can find below some code I found on the internet to build a simple neural network. Everything works fine, but since I encoded the y labels, the predictions I get look like this: 2 0 1 2 1 2 2 0 2 1 0 0 0 1 1 1 1 1 1 1 2 1 2 1 0 1 0 1 0 2. Now I need to convert them back to the original flower classes (Iris-virginica, etc.). I believe I need to use the inverse_transform method; can you help out?

    import pandas as pd
    from sklearn import preprocessing
    from sklearn.model_selection import train_test_split
    ...
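Assuming the labels were encoded with sklearn's LabelEncoder (as the question implies), mapping integer predictions back to class names is a one-liner; the variable names below are illustrative:

    import numpy as np
    from sklearn.preprocessing import LabelEncoder

    le = LabelEncoder()
    # Fit on the original string labels, exactly as done before training.
    y = le.fit_transform(['Iris-setosa', 'Iris-versicolor',
                          'Iris-virginica', 'Iris-setosa'])

    predictions = np.array([2, 0, 1, 2, 1])      # integer model output
    class_names = le.inverse_transform(predictions)
    print(class_names)
    # ['Iris-virginica' 'Iris-setosa' 'Iris-versicolor'
    #  'Iris-virginica' 'Iris-versicolor']

Note that the same fitted LabelEncoder instance used to encode the training labels must be reused, since inverse_transform relies on the mapping learned by fit.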

How to batch inputs together for TensorFlow?

一世执手 submitted on 2020-01-14 03:55:21
Question: I'm trying to batch together the inputs for a neural network I'm working on so I can feed them into TensorFlow as in the TensorFlow MNIST tutorial, but I can't find any way of doing this, and it isn't covered in the tutorial.

    input = tf.placeholder(tf.float32, [10, 10])
    ...
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
    inputs = # A list containing 50 of the inputs
    sess.run(accuracy, feed_dict={input: inputs})

This throws the following error: ValueError: Cannot feed value of …
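A minimal sketch of the usual fix, assuming the TF 1.x graph API the question uses: declare the leading (batch) dimension as None so the placeholder accepts any number of stacked inputs, then feed one array of shape (50, 10, 10):

    import numpy as np
    import tensorflow as tf  # TF 1.x graph-mode API, as in the question

    # The question's placeholder [10, 10] only accepts a single 10x10 input.
    # With None as the batch dimension, one example or fifty both fit.
    x = tf.placeholder(tf.float32, [None, 10, 10])
    per_example = tf.reduce_sum(x, axis=[1, 2])  # stand-in for the real graph

    inputs = np.random.rand(50, 10, 10).astype(np.float32)  # 50 inputs at once

    with tf.Session() as sess:
        print(sess.run(per_example, feed_dict={x: inputs}).shape)  # (50,)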

How can I implement a checkerboard stride for Conv2d in pytorch?

。_饼干妹妹 submitted on 2020-01-13 20:31:10
Question: I am trying to create a convnet using PyTorch to work on an input of 2D matrices. I am using a 3x5 filter and I want it to have a custom stride as follows: on even line numbers I want the filter to start from the element at position 0 (red in the image), on odd line numbers I want it to start at the element at position 1 (blue in the image), and in both cases to have a stride of 2 in the x direction. That means that if I have a matrix as in the image as my input, I want the filter to have only …
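PyTorch's Conv2d has no built-in checkerboard stride, but one way to emulate it is to run the same kernel twice with a horizontal stride of 2 (once on the full input, once on the input shifted left by one column) and interleave the resulting rows. A sketch of that idea, where the shapes and the interleaving convention are assumptions about the intended layout:

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(1, 8, kernel_size=(3, 5), stride=(1, 2), bias=False)

    def checkerboard_conv(x):
        # Windows starting at column 0 (used for even-numbered rows) and
        # at column 1 (used for odd-numbered rows), both striding 2 in x.
        even = conv(x)
        odd = conv(x[:, :, :, 1:])
        h = min(even.shape[2], odd.shape[2])
        w = min(even.shape[3], odd.shape[3])
        even, odd = even[:, :, :h, :w], odd[:, :, :h, :w]
        out = even.clone()
        out[:, :, 1::2, :] = odd[:, :, 1::2, :]  # interleave by output row
        return out

    x = torch.randn(1, 1, 10, 12)
    print(checkerboard_conv(x).shape)  # torch.Size([1, 8, 8, 4])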

How do I combine two Keras generator functions

白昼怎懂夜的黑 submitted on 2020-01-13 20:15:49
Question: I am trying to implement a Siamese network in Keras, and I want to apply image transformations to the two input images using Keras image data generators. As per the example in the docs (https://keras.io/preprocessing/image/), I've tried to implement it like this:

    datagen_args = dict(rotation_range=10, width_shift_range=0.1,
                        height_shift_range=0.1, horizontal_flip=True)
    in_gen1 = ImageDataGenerator(**datagen_args)
    in_gen2 = ImageDataGenerator(**datagen_args)
    train_generator = zip(in_gen1, in_gen2)
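zip cannot be applied to the ImageDataGenerator objects themselves; one common workaround (a sketch, with illustrative array and variable names) is to create an iterator from each generator with flow() and wrap both in a small generator that yields the two-input structure a Siamese model expects:

    from keras.preprocessing.image import ImageDataGenerator

    datagen_args = dict(rotation_range=10, width_shift_range=0.1,
                        height_shift_range=0.1, horizontal_flip=True)
    in_gen1 = ImageDataGenerator(**datagen_args)
    in_gen2 = ImageDataGenerator(**datagen_args)

    def pair_generator(X1, X2, y, batch_size=32, seed=42):
        # Identical seeds keep the two image streams aligned pair-for-pair.
        flow1 = in_gen1.flow(X1, y, batch_size=batch_size, seed=seed)
        flow2 = in_gen2.flow(X2, y, batch_size=batch_size, seed=seed)
        while True:
            xb1, yb = next(flow1)
            xb2, _ = next(flow2)
            yield [xb1, xb2], yb   # ([left, right], label) for a
                                   # two-input Siamese model

    # model.fit_generator(pair_generator(X_left, X_right, labels),
    #                     steps_per_epoch=len(labels) // 32, epochs=10)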

How to make sure your computation graph is differentiable

[亡魂溺海] submitted on 2020-01-13 16:50:11
Question: Some TensorFlow operations (e.g. tf.argmax) are not differentiable (i.e. no gradients are calculated for them or used in back-propagation). An answer to "Tensorflow what operations are differentiable and what are not?" suggests searching for RegisterGradient in the TensorFlow code. I also noticed TensorFlow has a tf.NotDifferentiable API call for declaring an operation to be non-differentiable. Is there a warning issued if I use non-differentiable functions? Is there a programmatic way to …
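One programmatic check that works in the TF 1.x graph API (a sketch; by default no warning is issued) is to ask tf.gradients for the gradient of a value with respect to your inputs and look for None, which is what a non-differentiable path produces:

    import tensorflow as tf  # TF 1.x graph-mode API assumed

    x = tf.placeholder(tf.float32, [None, 3])

    # Differentiable path: gradients flow back to x.
    smooth_loss = tf.reduce_sum(tf.square(x))
    print(tf.gradients(smooth_loss, [x]))   # [<tf.Tensor ...>]

    # Non-differentiable path: argmax has no registered gradient, so
    # tf.gradients silently returns None instead of raising an error.
    hard_loss = tf.reduce_sum(tf.cast(tf.argmax(x, axis=1), tf.float32))
    print(tf.gradients(hard_loss, [x]))     # [None]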

How Many Epochs Should a Neural Net Need to Learn to Square? (Testing Results Included)

强颜欢笑 submitted on 2020-01-13 14:31:38
Question: Okay, let me preface this by saying that I am well aware that this depends on MANY factors; I'm looking for some general guidelines from people with experience. My goal is not to make a neural net that can compute squares of numbers for me, but I thought it would be a good experiment to see whether I implemented the backpropagation algorithm correctly. Does this seem like a good idea? Anyway, I am worried that I have not implemented the learning algorithm (fully) correctly. My testing (results): …
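As a point of comparison for this kind of sanity check, here is a minimal from-scratch backprop implementation that learns y = x² on [0, 1]; the network size, learning rate, and epoch count are arbitrary choices, not a recommendation, but a correct implementation typically drives the loss down steadily within a few thousand epochs:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, (256, 1))
    Y = X ** 2

    # 1-8-1 network: tanh hidden layer, linear output.
    W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
    lr = 0.3

    for epoch in range(3000):
        H = np.tanh(X @ W1 + b1)           # forward pass
        P = H @ W2 + b2
        loss = np.mean((P - Y) ** 2)

        dP = 2.0 * (P - Y) / len(X)        # backward pass (MSE gradient)
        dW2 = H.T @ dP;  db2 = dP.sum(axis=0)
        dH = (dP @ W2.T) * (1 - H ** 2)    # tanh'(z) = 1 - tanh(z)^2
        dW1 = X.T @ dH;  db1 = dH.sum(axis=0)

        W1 -= lr * dW1; b1 -= lr * db1     # gradient descent step
        W2 -= lr * dW2; b2 -= lr * db2

        if epoch % 500 == 0:
            print(epoch, loss)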

How to reuse the same network twice within a new network in Caffe

泪湿孤枕 submitted on 2020-01-13 13:10:14
Question: I have a pretrained network (let's call it N) that I would like to use twice within a new network. Does anybody know how to duplicate it? Then I would like to assign a different learning rate to each copy. For example (N1 is the first copy of N, N2 is the second copy of N), the new network might look like:

    N1 --> [joint ip
    N2 --> layer]

I know how to reuse N with a single copy; however, since N1 and N2 will have different (finetune) learning rates, I don't know how I can make two copies of N and assign …
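Since the two copies need different learning rates, they cannot simply share weights via Caffe's named-param mechanism (shared blobs are literally the same weights, so both copies would update identically). One approach, sketched below with pycaffe's NetSpec and hypothetical file and layer names ('N_deploy.prototxt', 'N.caffemodel', and the layer name 'ip' stand in for the real network N), is to define the branch twice under distinct layer names with different lr_mult values and then copy the pretrained weights into both branches from Python:

    import caffe
    from caffe import layers as L

    n = caffe.NetSpec()
    n.data1 = L.Input(shape=[dict(dim=[1, 3, 224, 224])])
    n.data2 = L.Input(shape=[dict(dim=[1, 3, 224, 224])])

    # Two structurally identical branches with distinct layer names, so each
    # branch owns its own weight blobs and its own lr_mult.
    n.ip_a = L.InnerProduct(n.data1, num_output=64,
                            param=[dict(lr_mult=1), dict(lr_mult=2)])
    n.ip_b = L.InnerProduct(n.data2, num_output=64,
                            param=[dict(lr_mult=0.1), dict(lr_mult=0.2)])
    n.joint = L.Concat(n.ip_a, n.ip_b)

    with open('siamese.prototxt', 'w') as f:
        f.write(str(n.to_proto()))

    # Copy N's pretrained weights into both branches by hand.
    net = caffe.Net('siamese.prototxt', caffe.TEST)
    pre = caffe.Net('N_deploy.prototxt', 'N.caffemodel', caffe.TEST)
    for branch in ('ip_a', 'ip_b'):
        for i, blob in enumerate(pre.params['ip']):
            net.params[branch][i].data[...] = blob.data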