lasagne

Lasagne autoencoder: how do I just use the decoder part?

自作多情 submitted on 2019-12-10 18:27:59
Question: Let's say I have an autoencoder in Lasagne, with two encoding layers and two InverseLayers as a decoder:

    input = InputLayer(...)
    l1 = Conv1DLayer(input, ...)
    l2 = DenseLayer(l1, ...)
    # decoder part:
    l2p = InverseLayer(l2, l2)
    l1p = InverseLayer(l2p, l1)

Let's say I've trained this autoencoder to my satisfaction and now just wish to use the decoder; that is, I have data that I want to feed as an input to l2p (the first layer of the decoder part). How do I do this? I can't construct a network
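A rough sketch of one possible workaround, assuming the trained l1 and l2 above are still in scope, that l2 uses a linear nonlinearity, and that l1's output shape is fully specified except for the batch dimension. The InverseLayers cannot be fed directly, since they reference the encoder's forward pass, so this builds a small standalone decoder with its own InputLayer for the code and copies the trained weights; code_var, code_in and dense_inv are hypothetical names:

    import numpy as np
    import theano
    import theano.tensor as T
    import lasagne
    from lasagne.layers import InputLayer, DenseLayer, get_output

    # fresh input layer that takes the code (the output of l2) directly
    code_var = T.matrix('code')
    code_in = InputLayer((None, l2.output_shape[1]), input_var=code_var)

    # undo the DenseLayer: map the code back to l1's flattened feature space
    # using the transpose of the trained weight matrix (copied, not tied)
    dense_inv = DenseLayer(code_in,
                           num_units=int(np.prod(l1.output_shape[1:])),
                           W=l2.W.get_value().T,
                           b=None,
                           nonlinearity=lasagne.nonlinearities.linear)

    # undoing the Conv1DLayer would analogously need a transposed convolution
    # that reuses l1.W; that part is omitted from this sketch
    decode_fn = theano.function([code_var], get_output(dense_inv))

Reshaping the flat output back to l1's (channels, length) layout and inverting the convolution are left out; the point is only that a decoder used on its own needs its own InputLayer rather than the InverseLayer stack.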

Error while using Conv2DLayer with lasagne NeuralNet

£可爱£侵袭症+ submitted on 2019-12-10 15:55:44
Question: I have Windows 8.1 64-bit and use the WinPython distribution (Python 3.4) recommended here: http://deeplearning.net/software/theano/install_windows.html#installing-theano. I've gone through every step of the tutorial (excluding the CUDA and GPU configuration), uninstalled everything and did it again, but my problem persists. I am trying to build a convolutional neural network using Lasagne. Every layer I've tested so far is working - only Conv2DLayer throws errors. The code is as follows:

    net2 = NeuralNet(
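Since the NeuralNet definition is cut off above, here is a minimal, hypothetical standalone test of Conv2DLayer outside nolearn; it can help tell whether the Theano convolution itself compiles on this Windows setup or whether the problem lies in the NeuralNet configuration:

    import numpy as np
    import theano
    import theano.tensor as T
    import lasagne

    # build a single Conv2DLayer on a dummy 1-channel 28x28 input
    X = T.tensor4('X')
    l_in = lasagne.layers.InputLayer((None, 1, 28, 28), input_var=X)
    l_conv = lasagne.layers.Conv2DLayer(l_in, num_filters=8, filter_size=(3, 3))

    # compile and run the forward pass on a dummy batch
    f = theano.function([X], lasagne.layers.get_output(l_conv))
    print(f(np.zeros((2, 1, 28, 28), dtype=theano.config.floatX)).shape)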

Keras - Text Classification - LSTM - How to input text?

谁说我不能喝 submitted on 2019-12-10 14:24:39
Question: I'm trying to understand how to use an LSTM to classify a certain dataset that I have. I researched and found this Keras/IMDB example: https://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py However, I'm confused about how the dataset must be processed for input. I know Keras has text pre-processing methods, but I'm not sure which to use. x contains n lines of text and y classifies each text by happiness/sadness: basically, 1.0 means 100% happy and 0.0 means totally sad.
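A minimal sketch of the usual preprocessing with a recent Keras version (argument names differ slightly between releases); texts and labels are placeholders for the questioner's own data, with texts a list of strings and labels a list of floats in [0.0, 1.0]:

    import numpy as np
    from keras.preprocessing.text import Tokenizer
    from keras.preprocessing.sequence import pad_sequences
    from keras.models import Sequential
    from keras.layers import Embedding, LSTM, Dense

    max_words, maxlen = 20000, 100

    # turn each text into a fixed-length sequence of word indices
    tokenizer = Tokenizer(num_words=max_words)
    tokenizer.fit_on_texts(texts)
    X = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=maxlen)
    y = np.asarray(labels, dtype='float32')

    # embed the indices, run them through an LSTM, and predict a happiness score
    model = Sequential()
    model.add(Embedding(max_words, 128))
    model.add(LSTM(128))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam')
    model.fit(X, y, batch_size=32, epochs=3)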

Can I (selectively) invert Theano gradients during backpropagation?

浪尽此生 submitted on 2019-12-05 18:47:45
Question: I'm keen to make use of the architecture proposed in the recent paper "Unsupervised Domain Adaptation by Backpropagation" in the Lasagne/Theano framework. What makes this paper a bit unusual is that it incorporates a 'gradient reversal layer', which inverts the gradient during backpropagation (the arrows along the bottom of the paper's figure are the backpropagation paths whose gradients are inverted). In the paper the authors claim that the approach "can be implemented using any
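One way this is commonly done in Theano is with a custom Op that is the identity in the forward pass but flips (and scales) the gradient in the backward pass, wrapped in a small Lasagne layer so it can sit between the feature extractor and the domain classifier; a sketch, with ReverseGradient and ReverseGradientLayer as hypothetical names:

    import theano
    import theano.tensor as T
    import lasagne

    class ReverseGradient(theano.gof.Op):
        """Identity on the forward pass; multiplies the gradient by -hp_lambda."""
        view_map = {0: [0]}
        __props__ = ('hp_lambda',)

        def __init__(self, hp_lambda):
            super(ReverseGradient, self).__init__()
            self.hp_lambda = hp_lambda

        def make_node(self, x):
            x = T.as_tensor_variable(x)
            return theano.gof.Apply(self, [x], [x.type()])

        def perform(self, node, inputs, output_storage):
            # forward pass: pass the input through unchanged
            output_storage[0][0] = inputs[0]

        def grad(self, inputs, output_gradients):
            # backward pass: reverse and scale the incoming gradient
            return [-self.hp_lambda * output_gradients[0]]

    class ReverseGradientLayer(lasagne.layers.Layer):
        """Drop-in Lasagne layer that applies the gradient-reversal op."""
        def __init__(self, incoming, hp_lambda, **kwargs):
            super(ReverseGradientLayer, self).__init__(incoming, **kwargs)
            self.op = ReverseGradient(hp_lambda)

        def get_output_for(self, input, **kwargs):
            return self.op(input)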

Realtime Data augmentation in Lasagne

房东的猫 submitted on 2019-12-04 16:13:39
I need to do real-time augmentation on my dataset as input to a CNN, but I am having a really tough time finding suitable libraries for it. I have tried Caffe, but its DataTransform doesn't support many real-time augmentations such as rotation. So for ease of implementation I settled on Lasagne, but it seems that it also doesn't support real-time augmentation. I have seen some posts on facial keypoint detection where the author uses the BatchIterator of nolearn.lasagne, but I am not sure whether that is real-time or not, and there's no proper tutorial for it. So, finally, how should I do real-time
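With nolearn.lasagne, real-time augmentation is usually done by subclassing BatchIterator and overriding its transform method, so each mini-batch is perturbed on the fly during training; a sketch with random horizontal flips (rotations would go in the same place), assuming images arrive as (batch, channels, rows, cols):

    import numpy as np
    from nolearn.lasagne import BatchIterator

    class AugmentingBatchIterator(BatchIterator):
        def transform(self, Xb, yb):
            Xb, yb = super(AugmentingBatchIterator, self).transform(Xb, yb)
            Xb = Xb.copy()
            # flip a random half of the images in the batch along the width axis
            flip = np.random.choice([True, False], size=Xb.shape[0])
            Xb[flip] = Xb[flip, :, :, ::-1]
            return Xb, yb

    # net = NeuralNet(..., batch_iterator_train=AugmentingBatchIterator(batch_size=128), ...)

Because the augmentation happens in transform, it is applied anew to every batch of every epoch, which is what makes it "real-time" rather than a pre-computed, enlarged dataset.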

How to implement Weighted Binary CrossEntropy on theano?

China☆狼群 submitted on 2019-12-04 03:15:37
How do I implement a weighted binary cross-entropy on Theano? My convolutional neural network only predicts values between 0 and 1 (sigmoid), and I want to penalize its predictions as follows: basically, I want to penalize MORE when the model predicts 0 but the truth was 1. Question: how can I create this weighted binary cross-entropy function using Theano and Lasagne? I tried the following:

    prediction = lasagne.layers.get_output(model)

    import theano.tensor as T

    def weighted_crossentropy(predictions, targets):
        # Copy the tensor
        tgt = targets.copy("tgt")
        # Make it a vector
        # tgt = tgt.flatten()
        # tgt = tgt.reshape(3000)
        #
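A sketch of one way to write it, continuing from the snippet above; w_pos and w_neg are hypothetical weights, with w_pos > w_neg so that predicting near 0 when the target is 1 costs more, and target_var stands in for the question's targets tensor:

    import theano.tensor as T
    import lasagne

    def weighted_binary_crossentropy(predictions, targets, w_pos=2.0, w_neg=1.0):
        # clip to avoid log(0); the weights are illustrative and should be tuned
        predictions = T.clip(predictions, 1e-7, 1.0 - 1e-7)
        return -(w_pos * targets * T.log(predictions) +
                 w_neg * (1.0 - targets) * T.log(1.0 - predictions))

    prediction = lasagne.layers.get_output(model)
    loss = weighted_binary_crossentropy(prediction, target_var).mean()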

How can you train multiple neural networks simultaneously in nolearn/lasagne/theano on Python?

拟墨画扇 submitted on 2019-12-02 12:21:33
Question: I am writing a calibration pipeline to learn the hyperparameters of neural networks that detect properties of DNA sequences*. This therefore requires training a large number of models on the same dataset with different hyperparameters, and I am trying to optimise this to run on a GPU. DNA sequence datasets are quite small compared to image datasets (typically tens or hundreds of base pairs in 4 'channels' representing the 4 DNA bases A, C, G and T, compared to tens of thousands of pixels in 3 RGB channels), and
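One pattern that keeps the GPU busy with such small networks is to build several Lasagne stacks in the same Theano graph and train them with one compiled function, so every call performs an update step for all of them at once; a sketch with made-up layer sizes and a flattened one-hot DNA input:

    import theano
    import theano.tensor as T
    import lasagne

    def build_net(input_var, num_hidden):
        # tiny dense network standing in for one hyperparameter setting
        l_in = lasagne.layers.InputLayer((None, 4 * 100), input_var=input_var)
        l_hid = lasagne.layers.DenseLayer(l_in, num_units=num_hidden)
        return lasagne.layers.DenseLayer(l_hid, num_units=1,
                                         nonlinearity=lasagne.nonlinearities.sigmoid)

    X = T.matrix('X')
    y = T.matrix('y')

    nets = [build_net(X, n) for n in (8, 16, 32)]    # three hyperparameter settings
    losses = [lasagne.objectives.binary_crossentropy(
                  lasagne.layers.get_output(net), y).mean() for net in nets]

    # merge the per-network Adam updates into a single update dictionary
    updates = {}
    for net, loss in zip(nets, losses):
        params = lasagne.layers.get_all_params(net, trainable=True)
        updates.update(lasagne.updates.adam(loss, params))

    # one function call now takes a training step for all three networks
    train_fn = theano.function([X, y], losses, updates=updates)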

Get output from Lasagne (python deep neural network framework)

耗尽温柔 submitted on 2019-12-02 04:10:11
I loaded the mnist_conv.py example from the official Lasagne GitHub repository. At the end, I would like to predict on my own example. I saw in the official documentation that lasagne.layers.get_output() should handle numpy arrays, but it doesn't work and I cannot figure out how to do it. Here's my code:

    if __name__ == '__main__':
        output_layer = main()             # the output layer from the net
        exampleChar = np.zeros((28, 28))  # the example I would like to predict
        outputValue = lasagne.layers.get_output(output_layer, exampleChar)
        print(outputValue.eval())

but it gives me: TypeError: ConvOp (make_node) requires input be a 4D
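A sketch of the usual fix: get_output returns a symbolic expression, so pass it a symbolic input variable, compile a Theano function once, and reshape the example to the 4D (batch, channels, rows, cols) layout the convolution expects:

    import numpy as np
    import theano
    import theano.tensor as T
    import lasagne

    X = T.tensor4('X')
    prediction = lasagne.layers.get_output(output_layer, X, deterministic=True)
    predict_fn = theano.function([X], prediction)

    # a single 28x28 example becomes a batch of one single-channel image
    exampleChar = np.zeros((1, 1, 28, 28), dtype=theano.config.floatX)
    print(predict_fn(exampleChar))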

How to calculate the number of parameters for convolutional neural network?

≯℡__Kan透↙ submitted on 2019-11-27 10:10:36
I'm using Lasagne to create a CNN for the MNIST dataset, following closely this example: Convolutional Neural Networks and Feature Extraction with Python. The CNN architecture I have at the moment, which doesn't include any dropout layers, is:

    NeuralNet(
        layers=[('input', layers.InputLayer),        # Input Layer
                ('conv2d1', layers.Conv2DLayer),     # Convolutional Layer
                ('maxpool1', layers.MaxPool2DLayer), # 2D Max Pooling Layer
                ('conv2d2', layers.Conv2DLayer),     # Convolutional Layer
                ('maxpool2', layers.MaxPool2DLayer), # 2D Max Pooling Layer
                ('dense', layers.DenseLayer),        # Fully connected layer
                (
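For reference, a convolutional layer has num_filters * (filter_rows * filter_cols * input_channels) weights plus num_filters biases, and Lasagne can also report the total directly; a small worked sketch with hypothetical 5x5 filters and 32 filters per layer:

    # conv2d1: 32 filters of size 5x5 over the 1-channel MNIST input
    conv2d1_params = 32 * (5 * 5 * 1) + 32    # = 832
    # conv2d2: 32 filters of size 5x5 over the 32 feature maps from conv2d1
    conv2d2_params = 32 * (5 * 5 * 32) + 32   # = 25632

    # a dense layer adds in_units * out_units weights plus out_units biases;
    # once the network object is built, Lasagne can count everything for you:
    # total = lasagne.layers.count_params(output_layer)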