neural-network

Receptive fields for a feed-forward network

╄→尐↘猪︶ㄣ submitted on 2020-01-22 02:30:09
Question: I am fairly new to artificial intelligence and neural networks. I have implemented a feed-forward neural network in PyTorch for classification on the MNIST data set. Now I want to visualize the receptive fields of (a subset of) the hidden neurons, but I am having trouble understanding the concept of receptive fields, and when I search for it all the results are about CNNs. Can anyone explain how I could do this in PyTorch and how to interpret the results? Answer 1: I have previously
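
A minimal sketch of one common interpretation, assuming the network's first layer is a fully connected nn.Linear(784, H): in a dense layer every hidden unit is connected to the whole 28x28 input, so its weight vector, reshaped to 28x28, is the closest analogue of a receptive field (the unit's preferred input pattern). The tiny model below is a placeholder, not the asker's code.

import torch.nn as nn
import matplotlib.pyplot as plt

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))  # placeholder MLP

weights = model[0].weight.detach()              # shape (128, 784), one row per hidden unit
fig, axes = plt.subplots(2, 5, figsize=(10, 4))
for i, ax in enumerate(axes.flat):              # show the first 10 hidden units
    ax.imshow(weights[i].reshape(28, 28), cmap='gray')
    ax.set_title('hidden unit %d' % i)
    ax.axis('off')
plt.tight_layout()
plt.show()

For neurons in deeper layers there is no single weight image; a common workaround is gradient-based visualization (optimizing an input to maximize that neuron's activation), which is also why most search results discuss CNNs.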

Would Richardson–Lucy deconvolution work for recovering the latent kernel?

半城伤御伤魂 submitted on 2020-01-21 19:46:27
Question: I am aware that Richardson–Lucy deconvolution is for recovering the latent image, but suppose we have a blurred, noisy image and the original image: can we find the kernel that caused the transformation? Below is MATLAB code for Richardson–Lucy deconvolution, and I am wondering whether it is easy to modify it so that it recovers the kernel instead of the latent image. My thought is to change the convolution option to 'valid' so that the output would represent the kernel; what do you think? function latent
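
In principle, yes: because convolution is commutative, the known sharp image can play the role of the PSF, and Richardson–Lucy will then estimate the kernel as its "latent image". Below is a minimal sketch of that idea in Python with scikit-image rather than a modification of the MATLAB function; the test image, kernel size and iteration count are made up for illustration, and the recovered kernel sits at the centre of the estimate only up to boundary and centering effects.

import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

sharp = np.random.rand(64, 64)                       # stand-in for the known original image
h = np.hanning(15)
true_kernel = np.outer(h, h)
true_kernel /= true_kernel.sum()
blurred = convolve2d(sharp, true_kernel, mode='same')

# Swap the roles: pass the sharp image as the PSF, so the "latent image"
# returned by RL is an estimate of the kernel, padded to the image size.
estimate = richardson_lucy(blurred, sharp, 30, clip=False)
cy, cx = np.array(estimate.shape) // 2
recovered = estimate[cy - 7:cy + 8, cx - 7:cx + 8]   # crop the central 15x15 patch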

Keras - Autoencoder for Text Analysis

帅比萌擦擦* submitted on 2020-01-21 05:40:27
Question: I'm trying to create an autoencoder that takes text reviews and finds a lower-dimensional representation. I'm using Keras, and I want my loss function to compare the output of the autoencoder with the output of the embedding layer. Unfortunately, it gives me the following error. I'm fairly sure the problem is with my loss function, but I can't seem to resolve the issue. Autoencoder:

print X_train.shape
input_i = Input(shape=(200,))
embedding = Embedding(input_dim=weights.shape[0], output_dim=weights
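
One way around the loss-function problem (a sketch, not the asker's model) is to compute the embedding output once with a frozen embedding model and then train the autoencoder directly on those embedded sequences, so a plain MSE loss already compares the reconstruction with the embedding layer's output. The vocabulary size, layer sizes and random weight matrix below are assumptions for illustration.

import numpy as np
from tensorflow.keras.layers import Input, Embedding, Flatten, Dense, Reshape
from tensorflow.keras.models import Model

vocab_size, embed_dim, seq_len = 10000, 100, 200     # assumed sizes
weights = np.random.rand(vocab_size, embed_dim).astype('float32')

# Frozen embedding model: maps padded integer sequences to embedded sequences.
tok_in = Input(shape=(seq_len,))
embedded = Embedding(input_dim=vocab_size, output_dim=embed_dim,
                     weights=[weights], trainable=False)(tok_in)
embedder = Model(tok_in, embedded)

# Dense autoencoder trained on the embedded sequences themselves.
ae_in = Input(shape=(seq_len, embed_dim))
code = Dense(64, activation='relu')(Flatten()(ae_in))          # lower-dimensional representation
ae_out = Reshape((seq_len, embed_dim))(Dense(seq_len * embed_dim)(code))
autoencoder = Model(ae_in, ae_out)
autoencoder.compile(optimizer='adam', loss='mse')

# X_emb = embedder.predict(X_train)          # X_train: integer sequences of length seq_len
# autoencoder.fit(X_emb, X_emb, epochs=10, batch_size=32)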

What is “metrics” in Keras?

不羁岁月 submitted on 2020-01-20 14:16:53
Question: It is not yet clear to me what metrics are (as used in the code below). What exactly do they evaluate? Why do we need to define them in the model? Why can we have multiple metrics in one model? And, more importantly, what are the mechanics behind all this? Any scientific reference is also appreciated.

model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['mae', 'acc'])

Answer 1: In order to understand what metrics are, it helps to start by understanding what a loss function is.
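
As a rough sketch of the mechanics (assuming tf.keras; the tiny model is made up): the loss is the quantity the optimizer differentiates and minimizes, while metrics are extra functions evaluated on (y_true, y_pred) purely for monitoring. They appear in the training logs and history but never produce gradients, which is also why several of them can coexist in one model.

import tensorflow as tf
from tensorflow.keras import layers, models

# Any callable taking (y_true, y_pred) can be used as a metric.
def my_mae(y_true, y_pred):
    return tf.reduce_mean(tf.abs(y_true - y_pred))

model = models.Sequential([layers.Dense(1, input_shape=(10,))])
model.compile(loss='mean_squared_error',        # drives the gradient updates
              optimizer='sgd',
              metrics=['mae', my_mae])          # only reported, never optimized
# model.fit(...) then reports loss, mae and my_mae for every epoch.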

How to calculate the number of parameters of an LSTM network?

佐手、 submitted on 2020-01-19 02:55:12
Question: Is there a way to calculate the total number of parameters in an LSTM network? I have found an example, but I'm unsure how correct it is or whether I have understood it correctly. For example, consider the following:

from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import LSTM

model = Sequential()
model.add(LSTM(256, input_dim=4096, input_length=16))
model.summary()

Output: _____________________________
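
For reference, a quick way to check the count by hand, assuming the standard Keras LSTM with four gates, each having an input kernel, a recurrent kernel and a bias:

units, input_dim = 256, 4096
# each of the 4 gates: units*input_dim (kernel) + units*units (recurrent) + units (bias)
params = 4 * (units * (input_dim + units) + units)
print(params)   # 4457472, which should match the model.summary() output above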

I was training an ANN machine learning model with GridSearchCV and got stuck on an IndexError in GridSearchCV

∥☆過路亽.° submitted on 2020-01-17 15:33:27
Question: My model starts to train, but after executing for some time it raises an error: IndexError: index 37 is out of bounds for axis 0 with size 37. The model runs properly without GridSearchCV, using fixed parameters. Here is my code:

from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense

def build_classifier(optimizer, nb_layers, unit):
    classifier = Sequential()
    classifier
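
For reference, a minimal sketch of how build_classifier is usually wired into KerasClassifier and GridSearchCV; the input dimension, the binary output layer and the grid values below are placeholders, not the asker's data or a claimed fix for the IndexError.

from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense

def build_classifier(optimizer='adam', nb_layers=2, unit=16):
    classifier = Sequential()
    classifier.add(Dense(unit, activation='relu', input_dim=30))   # placeholder input size
    for _ in range(nb_layers - 1):
        classifier.add(Dense(unit, activation='relu'))
    classifier.add(Dense(1, activation='sigmoid'))                 # assumes a binary target
    classifier.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])
    return classifier

clf = KerasClassifier(build_fn=build_classifier, epochs=10, batch_size=32, verbose=0)
param_grid = {'optimizer': ['adam', 'rmsprop'], 'nb_layers': [2, 3], 'unit': [16, 32]}
grid = GridSearchCV(estimator=clf, param_grid=param_grid, cv=3)
# grid.fit(X_train, y_train)   # X_train: (n_samples, 30) floats, y_train: 0/1 labels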

Draw the MLP dividing line together with the chart in MATLAB

混江龙づ霸主 submitted on 2020-01-17 09:27:13
Question: I need to plot the dividing line together with the graph below. The code I used to train the MLP neural network is here:

circles = [1 1; 2 1; 2 2; 2 3; 2 4; 3 2; 3 3; 4 1; 4 2; 4 3];
crosses = [1 2; 1 3; 1 4; 2 5; 3 4; 3 5; 4 4; 5 1; 5 2; 5 3];
net = feedforwardnet(3);
net = train(net, circles, crosses);
plot(circles(:, 1), circles(:, 2), 'ro');
hold on
plot(crosses(:, 1), crosses(:, 2), 'b+');
hold off;

But I'd also like to show the line separating the two groups in the chart. How do I proceed?
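
The usual recipe is to evaluate the trained classifier on a dense grid of points and draw the contour where its output crosses the class threshold. As a sketch of that idea, in Python with scikit-learn rather than the MATLAB toolbox, and with an explicit 0/1 target instead of the circles-to-crosses mapping above:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPClassifier

circles = np.array([[1,1],[2,1],[2,2],[2,3],[2,4],[3,2],[3,3],[4,1],[4,2],[4,3]])
crosses = np.array([[1,2],[1,3],[1,4],[2,5],[3,4],[3,5],[4,4],[5,1],[5,2],[5,3]])
X = np.vstack([circles, crosses])
y = np.array([0] * len(circles) + [1] * len(crosses))

clf = MLPClassifier(hidden_layer_sizes=(3,), max_iter=5000, random_state=0).fit(X, y)

xx, yy = np.meshgrid(np.linspace(0, 6, 300), np.linspace(0, 6, 300))
zz = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1].reshape(xx.shape)

plt.plot(circles[:, 0], circles[:, 1], 'ro')
plt.plot(crosses[:, 0], crosses[:, 1], 'b+')
plt.contour(xx, yy, zz, levels=[0.5])   # the dividing line
plt.show()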

Gensim Doc2Vec Exception AttributeError: 'str' object has no attribute 'words'

≯℡__Kan透↙ submitted on 2020-01-17 07:49:08
Question: I am learning the Doc2Vec model from the gensim library and using it as follows:

class MyTaggedDocument(object):
    def __init__(self, dirname):
        self.dirname = dirname

    def __iter__(self):
        for fname in os.listdir(self.dirname):
            with open(os.path.join(self.dirname, fname), encoding='utf-8') as fin:
                print(fname)
                for item_no, sentence in enumerate(fin):
                    yield LabeledSentence(
                        [w for w in sentence.lower().split() if w in stopwords.words('english')],
                        [fname.split('.')[0].strip() + '_%s' % item_no])

sentences =
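
For context, a minimal sketch of the pattern gensim's Doc2Vec expects: each yielded item must expose .words and .tags (for example a TaggedDocument), so yielding or passing plain strings raises exactly this AttributeError. The directory name, tagging scheme and hyperparameters below are illustrative only.

import os
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

class MyCorpus(object):
    def __init__(self, dirname):
        self.dirname = dirname

    def __iter__(self):
        for fname in os.listdir(self.dirname):
            with open(os.path.join(self.dirname, fname), encoding='utf-8') as fin:
                for item_no, sentence in enumerate(fin):
                    yield TaggedDocument(words=sentence.lower().split(),
                                         tags=['%s_%s' % (fname.split('.')[0], item_no)])

# corpus = MyCorpus('reviews/')              # hypothetical directory of text files
# model = Doc2Vec(vector_size=100, min_count=2, epochs=20)
# model.build_vocab(corpus)
# model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)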