neural-network

Keras Text Preprocessing - Saving Tokenizer object to file for scoring

Submitted by 风流意气都作罢 on 2019-12-23 18:53:39
Question: I've trained a sentiment classifier model using the Keras library by following the steps below (broadly):

1. Convert the text corpus into sequences using the Tokenizer object/class
2. Build a model using the model.fit() method
3. Evaluate this model

Now, for scoring with this model, I was able to save the model to a file and load it from a file. However, I have not found a way to save the Tokenizer object to a file. Without it, I would have to process the corpus every time I need to score even a single sentence. Is there a
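The question is truncated above, but a common way to address it is to pickle the fitted Tokenizer next to the saved model. A minimal sketch, where corpus, num_words=10000, and the tokenizer.pickle filename are illustrative assumptions:

import pickle
from keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(num_words=10000)
tokenizer.fit_on_texts(corpus)  # `corpus` stands in for your training texts

# Persist the fitted tokenizer alongside the saved model
with open('tokenizer.pickle', 'wb') as f:
    pickle.dump(tokenizer, f, protocol=pickle.HIGHEST_PROTOCOL)

# At scoring time, load it back instead of re-processing the corpus
with open('tokenizer.pickle', 'rb') as f:
    tokenizer = pickle.load(f)
sequences = tokenizer.texts_to_sequences(["a sentence to score"])

Newer Keras versions also offer tokenizer.to_json() together with keras.preprocessing.text.tokenizer_from_json() as a plain-text alternative to pickling.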

PyTorch - shape of nn.Linear weights

Submitted by 我怕爱的太早我们不能终老 on 2019-12-23 18:52:49
Question: Yesterday I came across this question and for the first time noticed that the weights of the linear layer nn.Linear need to be transposed before applying matmul. Code for applying the weights:

output = input.matmul(weight.t())

What is the reason for this? Why aren't the weights stored in the transposed shape from the beginning, so that they don't need to be transposed every time the layer is applied?

Answer 1: I found an answer here: Efficient forward pass in nn.Linear #2159 It seems like there is
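For concreteness, a small sketch of the stored weight shape and the on-the-fly transpose in the forward pass (the layer and batch sizes are arbitrary):

import torch
import torch.nn as nn

layer = nn.Linear(in_features=3, out_features=5)
print(layer.weight.shape)  # torch.Size([5, 3]): stored as (out_features, in_features)

x = torch.randn(2, 3)  # a batch of 2 inputs
# The forward pass computes x @ W^T + b, transposing the stored weight on the fly
out = x.matmul(layer.weight.t()) + layer.bias
assert torch.allclose(out, layer(x))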

AttributeError: 'History' object has no attribute 'predict' - Fitting a List of train and test data

Submitted by 六月ゝ 毕业季﹏ on 2019-12-23 18:13:13
Question: I am trying an NN model using this example. I am fitting a list of values to an NN model. However, I am getting an AttributeError. This has been asked before and has been answered. Unfortunately, it is not working for me. As shown in the example, I created the following:

from keras.models import Sequential
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from keras.layers import Dense

def
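The error in the title typically means predict() was called on the return value of model.fit(), which is a History object, not the model itself. A minimal sketch of the mistake and the fix, with toy data and layer sizes as illustrative assumptions:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

X = np.random.rand(100, 4)  # toy data, purely illustrative
y = np.random.rand(100)

model = Sequential()
model.add(Dense(8, activation='relu', input_dim=4))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

history = model.fit(X, y, epochs=5, verbose=0)  # fit() returns a History object
# history.predict(X)  # AttributeError: 'History' object has no attribute 'predict'
preds = model.predict(X)  # predict on the model itself, not on fit()'s return value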

How to use outputs from previous time steps as input along with other inputs in an RNN using TensorFlow?

Submitted by 我是研究僧i on 2019-12-23 17:16:16
Question: In the following example, there are three time series and I want to predict another time series y which is a function of the three. How can I use four inputs to predict the time series, where the fourth input is the output at the previous time step?

import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# clean computation graph
tf.reset_default_graph()
tf.set_random_seed(777)  # reproducibility
np.random.seed(0)

def MinMaxScaler(data):
    numerator = data - np
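One way to do this, sketched below in the TF 1.x style of the question (the sequence length, hidden size, projection layer, and zero-initialized feedback are all illustrative assumptions), is to unroll the RNN manually and concatenate the previous step's prediction onto the three series at each step:

import tensorflow as tf

seq_len, n_hidden = 10, 16
inputs = tf.placeholder(tf.float32, [None, seq_len, 3])  # the three given series
cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden)

batch_size = tf.shape(inputs)[0]
state = cell.zero_state(batch_size, tf.float32)
y_prev = tf.zeros([batch_size, 1])  # fed-back output, assumed to start at zero
outputs = []
for t in range(seq_len):
    # fourth input = previous step's prediction, concatenated with the three series
    x_t = tf.concat([inputs[:, t, :], y_prev], axis=1)
    h, state = cell(x_t, state)
    y_prev = tf.layers.dense(h, 1, name='proj', reuse=(t > 0))
    outputs.append(y_prev)
y_hat = tf.stack(outputs, axis=1)  # shape (batch, seq_len, 1)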

Should I subtract the imagenet pretrained inception_v3 model mean value at inception_v3.py keras?

Submitted by 梦想的初衷 on 2019-12-23 17:06:30
Question:

def preprocess_input(x):
    x /= 255.
    x -= 0.5
    x *= 2.
    return x

I am using the Keras inception_v3 ImageNet pretrained model (inception_v3.py) to finetune on my own dataset. When I wanted to subtract the ImageNet mean values [123.68, 116.779, 103.939] and reverse the channel axis from RGB to BGR, as we often do, I found that the author provides a preprocess_input() function at the end. I am confused about this. Should I use the provided preprocess_input() function, or subtract the mean values and reverse the axis as usual?
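Note that the provided function does no mean subtraction or channel reversal at all: it rescales pixels from [0, 255] to [-1, 1], which is the scaling the Keras Inception V3 weights were trained with. A quick check (the dummy image is an illustrative assumption):

import numpy as np

def preprocess_input(x):
    # Inception-style preprocessing: map pixel values from [0, 255] to [-1, 1]
    x /= 255.
    x -= 0.5
    x *= 2.
    return x

img = np.random.randint(0, 256, (1, 299, 299, 3)).astype('float32')  # dummy RGB batch
out = preprocess_input(img.copy())  # copy, since the function modifies x in place
print(out.min(), out.max())  # approximately -1.0 and 1.0

Since the pretrained weights expect this scaling, use the provided preprocess_input() rather than the mean-subtraction/BGR convention, which belongs to the Caffe-trained weights used by models such as VGG.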

Writing Custom Python Layer With Learnable Parameters in Caffe

Submitted by 人走茶凉 on 2019-12-23 16:43:16
Question: I know that this example is supposed to illustrate how to add trainable parameters to a Python layer using the add_blob() method. However, I am still unable to understand how this can be used to set the dimensions of the blob based on user-defined parameters. There is a better example of how to write a Python layer here, but that layer does not contain any trainable parameters. Please explain how to write a custom Python layer with trainable parameters. Answer 1: When you add a
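The answer is truncated above; as a rough sketch of the pattern it describes, self.blobs.add_blob() in setup() creates a trainable blob whose dimensions can be taken from the bottom blob (or from user-defined layer parameters). The per-channel scale operation, class name, and initialization below are illustrative assumptions, not the answer's actual example:

import caffe
import numpy as np

class PerChannelScaleLayer(caffe.Layer):
    # Hypothetical layer: one trainable scale per input channel.
    def setup(self, bottom, top):
        channels = bottom[0].channels  # size the parameter from the input
        self.blobs.add_blob(channels)  # create the trainable blob
        self.blobs[0].data[...] = 1.0  # initialize the weights

    def reshape(self, bottom, top):
        top[0].reshape(*bottom[0].shape)

    def forward(self, bottom, top):
        w = self.blobs[0].data.reshape(1, -1, 1, 1)
        top[0].data[...] = bottom[0].data * w

    def backward(self, top, propagate_down, bottom):
        # gradient w.r.t. the trainable blob goes into self.blobs[0].diff
        self.blobs[0].diff[...] = (top[0].diff * bottom[0].data).sum(axis=(0, 2, 3))
        if propagate_down[0]:
            w = self.blobs[0].data.reshape(1, -1, 1, 1)
            bottom[0].diff[...] = top[0].diff * w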

Why does prediction using nn.predict in the deepnet package in R return a constant value?

Submitted by ≯℡__Kan透↙ on 2019-12-23 15:51:28
Question: I work with the CIFAR-10 dataset. Here is how I prepare the data:

library(R.matlab)
A1 <- readMat("data_batch_1.mat")
A2 <- readMat("data_batch_2.mat")
A3 <- readMat("data_batch_3.mat")
A4 <- readMat("data_batch_4.mat")
A5 <- readMat("data_batch_5.mat")
meta <- readMat("batches.meta.mat")
test <- readMat("test_batch.mat")
A <- rbind(A1$data, A2$data, A3$data, A4$data, A5$data)
Gtrain <- 0.21*A[,1:1024] + 0.71*A[,1025:2048] + 0.07*A[,2049:3072]
ytrain <- c(A1$labels, A2$labels, A3$labels, A4

Shuffling two numpy arrays for an NN

Submitted by 北战南征 on 2019-12-23 15:44:45
Question: I have two numpy arrays, input data X and output data y.

X = np.array(([2, 3],                 # sample 1 x
              [16, 4]), dtype=float)  # sample 2 x
y = np.array(([1, 0],                 # sample 1 y
              [0, 1]), dtype=float)   # sample 2 y

I want to use mini-batches to train an NN. How can I shuffle both arrays so that the corresponding outputs stay aligned? Answer 1: You can have an array of indices with the same length as the respective arrays and shuffle the index array each time. In that case you can use the
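A minimal sketch of that idea with a single shared permutation (variable names are illustrative):

import numpy as np

X = np.array([[2, 3], [16, 4]], dtype=float)
y = np.array([[1, 0], [0, 1]], dtype=float)

perm = np.random.permutation(len(X))       # one shared, shuffled index array
X_shuffled, y_shuffled = X[perm], y[perm]  # rows of X and y stay paired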

How to test if my implementation of a back propagation neural network is correct

Submitted by 我们两清 on 2019-12-23 15:37:58
Question: I am working on an implementation of the back propagation algorithm. What I have implemented so far seems to work, but I can't be sure that the algorithm is well implemented; here is what I have noticed during training tests of my network. Specification of the implementation: a data set containing almost 100,000 rows, each with 3 variables as input and the sine of the sum of those three variables as the expected output. The network has 7 layers; all layers use the sigmoid activation
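The standard way to validate a backprop implementation is gradient checking: compare the analytic gradients from backprop against central-difference numerical gradients on a small network. A sketch, where loss_fn and params stand in for your own loss function and weight array:

import numpy as np

def numerical_gradient(loss_fn, params, eps=1e-5):
    # Central-difference estimate of d(loss)/d(params), one weight at a time.
    grad = np.zeros_like(params)
    for i in range(params.size):
        old = params.flat[i]
        params.flat[i] = old + eps
        loss_plus = loss_fn()
        params.flat[i] = old - eps
        loss_minus = loss_fn()
        params.flat[i] = old  # restore the weight
        grad.flat[i] = (loss_plus - loss_minus) / (2 * eps)
    return grad

# Compare against your backprop gradient for the same weights:
# analytic = backprop_gradient(...)
# numeric = numerical_gradient(lambda: network_loss(...), weights)
# rel_err = np.linalg.norm(analytic - numeric) / (np.linalg.norm(analytic) + np.linalg.norm(numeric))
# A correct implementation typically yields rel_err well below 1e-6.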

Why do I get good accuracy on the IRIS dataset with a single hidden node?

Submitted by 耗尽温柔 on 2019-12-23 14:02:11
Question: I have a minimal example of a neural network with a back-propagation trainer, and I am testing it on the IRIS data set. I started off with 7 hidden nodes and it worked well. I lowered the number of nodes in the hidden layer to 1 (expecting it to fail), but was surprised to see that the accuracy went up. I set up the experiment in Azure ML, just to validate that it wasn't my code. Same thing there: 98.3333% accuracy with a single hidden node. Can anyone explain to me what is happening here? Answer 1: First,
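This is easy to reproduce: Iris is nearly linearly separable, so projecting the 4 features onto a single well-chosen direction (one hidden unit) already gives the output layer enough to separate the classes. A quick check with scikit-learn, where the split and hyperparameters are arbitrary choices:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)

# A single hidden node: hidden_layer_sizes=(1,)
clf = MLPClassifier(hidden_layer_sizes=(1,), max_iter=5000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print(clf.score(scaler.transform(X_test), y_test))  # often surprisingly high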