theano

Theano: change `base_compiledir` to save compiled files in another directory

Submitted by ぐ巨炮叔叔 on 2019-12-06 05:35:26
Question: theano.base_compiledir refers to the directory where compiled files are stored. Is there a way to permanently set theano.base_compiledir to a different location, perhaps by modifying some internal Theano files? http://deeplearning.net/software/theano/library/config.html explains some aspects of configuring Theano, but it didn't answer my question. I am using Ubuntu. Thanks & cheers! Answer 1: As the documentation explains, you can set this, or any
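
A minimal sketch of the standard fix: Theano reads a per-user configuration file, `~/.theanorc` (or the file named by the `THEANORC` environment variable), at import time, so setting `base_compiledir` there makes the change permanent. The path below is a placeholder:

```ini
# ~/.theanorc -- read every time Theano is imported
[global]
base_compiledir = /home/username/.theano_cache
```

The same option can also be set per-process with `THEANO_FLAGS=base_compiledir=/home/username/.theano_cache python script.py`, which overrides the file for that run.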

keras + scikit-learn wrapper, appears to hang when GridSearchCV with n_jobs >1

Submitted by ♀尐吖头ヾ on 2019-12-06 04:54:42
Question: UPDATE: I have to rewrite this question, as after some investigation I realised this is a different problem. Context: running Keras in a grid-search setting using the KerasClassifier wrapper with scikit-learn. System: Ubuntu 16.04; libraries: Anaconda distribution 5.1, Keras 2.0.9, scikit-learn 0.19.1, TensorFlow 1.3.0 or Theano 0.9.0, using CPUs only. Code: I simply used the code here for testing: https://machinelearningmastery.com/use-keras-deep-learning-models-scikit-learn-python/, the
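
For context, a minimal sketch of the pattern the question describes (the model-building function and the random data are placeholders). With a Keras model, setting n_jobs=1 is the usual workaround: the worker processes forked for n_jobs > 1 can deadlock on the backend's global state:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV

def build_model():
    # placeholder single-hidden-layer model for an 8-feature binary task
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

X = np.random.rand(100, 8)
y = np.random.randint(2, size=100)

clf = KerasClassifier(build_fn=build_model, verbose=0)
param_grid = {'batch_size': [10, 20], 'epochs': [10, 50]}
# n_jobs=1 avoids the hang; n_jobs=-1 forks workers that can deadlock
grid = GridSearchCV(estimator=clf, param_grid=param_grid, n_jobs=1)
grid_result = grid.fit(X, y)
print(grid_result.best_score_, grid_result.best_params_)
```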

Recurrent convolutional BLSTM neural network - arbitrary sequence lengths

Submitted by 时光怂恿深爱的人放手 on 2019-12-06 04:50:56
Question: Using Keras + Theano, I successfully made a recurrent bidirectional LSTM neural network that is capable of training on and classifying DNA sequences of arbitrary length, using the following model (for fully working code see http://pastebin.com/jBLv8B72):

```python
sequence = Input(shape=(None, ONE_HOT_DIMENSION), dtype='float32')
dropout = Dropout(0.2)(sequence)
# bidirectional LSTM
forward_lstm = LSTM(
    output_dim=50, init='uniform', inner_init='uniform',
    forget_bias_init='one', return_sequences=True,
```
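
A self-contained sketch of the arbitrary-length bidirectional pattern, assuming the Keras 1 functional API the snippet uses; the layer sizes and ONE_HOT_DIMENSION = 4 are assumptions, and the backward direction is obtained with go_backwards=True and concatenated with the forward pass:

```python
from keras.models import Model
from keras.layers import Input, Dropout, LSTM, merge, TimeDistributed, Dense

ONE_HOT_DIMENSION = 4  # assumed: one-hot channels for A, C, G, T

sequence = Input(shape=(None, ONE_HOT_DIMENSION), dtype='float32')
dropout = Dropout(0.2)(sequence)

# forward and backward LSTM passes over the variable-length time axis
forward_lstm = LSTM(output_dim=50, return_sequences=True)(dropout)
backward_lstm = LSTM(output_dim=50, return_sequences=True,
                     go_backwards=True)(dropout)
blstm = merge([forward_lstm, backward_lstm], mode='concat', concat_axis=-1)

# per-timestep classification keeps the sequence length arbitrary
output = TimeDistributed(Dense(1, activation='sigmoid'))(blstm)
model = Model(input=sequence, output=output)
model.compile(loss='binary_crossentropy', optimizer='adam')
```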

Keras ValueError: I/O operation on closed file

Submitted by 拈花ヽ惹草 on 2019-12-06 04:48:30
Question: I am trying to write a single-layer network. When it starts to train through model.fit, at some random epoch it throws the following error: ValueError: I/O operation on closed file. Here is how I am using model.fit:

```python
my_model = model.fit(train_x, train_y, batch_size=100, nb_epoch=20,
                     show_accuracy=True, verbose=1)
```

Please let me know if you have any thoughts or are encountering the same problem. Thanks. Here is the full output of the error:

```
Epoch 1/20
47900/60816 [======================>......
```
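
A common cause (an assumption here, since the truncated question doesn't confirm it) is the verbose=1 progress bar writing to a stdout stream that the environment, often a notebook or redirected log, has closed mid-run. Switching to one-line-per-epoch logging avoids the per-batch writes:

```python
# verbose=2 prints one line per epoch instead of a live progress bar;
# verbose=0 suppresses output entirely -- both avoid per-batch stdout writes
my_model = model.fit(train_x, train_y, batch_size=100, nb_epoch=20,
                     show_accuracy=True, verbose=2)
```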

Python Numpy Error: ValueError: setting an array element with a sequence

Submitted by 落爺英雄遲暮 on 2019-12-06 03:00:00
I am trying to build a dataset similar to the mnist.pkl.gz provided in Theano's logistic_sgd.py implementation. Following is my code snippet:

```python
import numpy as np
import csv
from PIL import Image
import gzip, cPickle
import theano
from theano import tensor as T

def load_dir_data(csv_file=""):
    print(" reading: %s" % csv_file)
    dataset = []
    labels = []
    cr = csv.reader(open(csv_file, "rb"))
    for row in cr:
        print row[0], row[1]
        try:
            image = Image.open(row[0] + '.jpg').convert('LA')
            pixels = [f[0] for f in list(image.getdata())]
            dataset.append(pixels)
            labels.append(row[1])
            del image
        except:
            print("image not found")
    ret
```
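
The error in the title usually means NumPy was asked to build a rectangular array from rows of unequal length, which happens here when the source images differ in size so each pixels list has a different length. A sketch of one fix; resizing every image to a fixed size is an assumption about the intended dataset:

```python
import numpy as np
from PIL import Image

def load_pixels(path, size=(28, 28)):
    # Resizing forces every pixel list to the same length (28*28 = 784),
    # so np.asarray can build a proper 2-D array instead of raising
    # "ValueError: setting an array element with a sequence"
    image = Image.open(path).convert('LA').resize(size)
    return [f[0] for f in image.getdata()]

# Ragged rows are what trigger the error:
# np.asarray([[1, 2, 3], [4, 5]], dtype='float32')  # -> ValueError
```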

Deep Learning: Neural Networks and Support Vector Machines

Submitted by 旧城冷巷雨未停 on 2019-12-06 02:46:35
Reposted from Renren. Preface: this article is translated from the deeplearning.net website and mainly surveys a number of papers, algorithms, and toolboxes. Introduction: Neural Networks and Support Vector Machines (SVMs) are the representative methods of statistical learning. Both can be seen as descendants of the Perceptron, the linear classification model invented by Rosenblatt in 1958. The perceptron handles linear classification well, but real-world classification problems are usually nonlinear. Neural networks and SVMs (together with kernel methods) are both nonlinear classification models. In 1986, Rumelhart and McClelland introduced the backpropagation learning algorithm for neural networks; Vapnik and colleagues then proposed the support vector machine in 1992. A neural network is a multi-layer (usually three-layer) nonlinear model, whereas an SVM uses the kernel trick to turn a nonlinear problem into a linear one. Neural networks and SVMs have long been in "competition". Schölkopf, Vapnik's foremost student and a leading figure in SVM and kernel-method research, has said that Vapnik originally invented the SVM precisely to "kill" neural networks ("He wanted to kill Neural Network"). SVMs are indeed very effective, and for a while the SVM camp had the upper hand. In recent years, Hinton, one of the masters of the neural-network camp, proposed the Deep Learning algorithm for neural networks (2006)

Keras Convolution2D Input: Error when checking model input: expected convolution2d_input_1 to have shape

Submitted by 本小妞迷上赌 on 2019-12-06 00:38:11
Question: I am working through this great tutorial on creating an image classifier using Keras. Once I have trained the model, I save it to a file and later reload it into a model in the test script shown below. I get the following exception when I evaluate the model using a new, never-before-seen image: Error: Traceback (most recent call last): File "test_classifier.py", line 48, in <module> score = model.evaluate(x, y, batch_size=16) File "/Library/Python/2.7/site-packages/keras/models.py", line
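
The usual cause of this shape mismatch (an assumption, since the traceback is truncated) is that a single test image lacks the batch dimension the Convolution2D input expects. A sketch of reshaping it before evaluate; 'test.jpg', img_width, img_height, model, and y are placeholders:

```python
import numpy as np
from keras.preprocessing.image import load_img, img_to_array

# img_width/img_height must match the input_shape the model was trained with
img = load_img('test.jpg', target_size=(img_width, img_height))
x = img_to_array(img)          # shape: (channels, h, w) or (h, w, channels)
x = np.expand_dims(x, axis=0)  # add the batch dimension: (1, ...)
score = model.evaluate(x, y, batch_size=16)
```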

how to get the outputs from the embedding layer

Submitted by 跟風遠走 on 2019-12-05 19:39:44
```python
from keras.models import Sequential
from keras.layers.embeddings import Embedding
from theano import function

model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
```

I want to get the outputs from the embedding layer. I read through the Keras source but didn't find any suitable function or attribute. Can anyone help me with this? Answer: You can get the output of any layer, not just an embedding layer, as described here:

```python
from keras import backend as K

get_3rd_layer_output = K.function([model.layers[0].input],
                                  [model.layers[3].output])
layer_output = get_3rd_layer_output([X])[0]  # X is a batch of inputs
```
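
Applied to the model above, the embedding layer is model.layers[0], so a minimal self-contained sketch looks like this (max_features, maxlen, and the dummy batch are assumptions):

```python
import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers.embeddings import Embedding

max_features, maxlen = 20000, 100  # assumed vocabulary size, sequence length
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))

# index 0 is the embedding layer itself
get_embedding_output = K.function([model.layers[0].input],
                                  [model.layers[0].output])

X = np.random.randint(max_features, size=(32, maxlen))  # dummy token ids
embeddings = get_embedding_output([X])[0]
print(embeddings.shape)  # (32, maxlen, 128)
```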

keras ignoring values in $HOME/.keras/keras.json file

Submitted by 坚强是说给别人听的谎言 on 2019-12-05 18:58:11
Question: I know the default backend for Keras has switched from Theano to TensorFlow, but with the dev version of Theano I can train on the GPU with OpenCL (I have an AMD card). However, when I import Keras, it only uses the TensorFlow backend, even after I changed the values in the Keras configuration file:

```
~ $ cat $HOME/.keras/keras.json
{"epsilon": 1e-07, "floatx": "float32", "backend": "theano"}
~ $ python -c 'import keras'
Using TensorFlow backend.
~ $ KERAS_BACKEND=theano python -c 'import keras
```
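
If editing keras.json has no effect, the KERAS_BACKEND environment variable, which takes precedence over the file, can also be set from Python itself; a minimal sketch:

```python
import os
# Must be set before keras is imported; overrides ~/.keras/keras.json
os.environ['KERAS_BACKEND'] = 'theano'

import keras  # should now print "Using Theano backend."
```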

Can I (selectively) invert Theano gradients during backpropagation?

Submitted by 浪尽此生 on 2019-12-05 18:47:45
Question: I'm keen to make use of the architecture proposed in the recent paper "Unsupervised Domain Adaptation by Backpropagation" in the Lasagne/Theano framework. What makes this paper a bit unusual is that it incorporates a 'gradient reversal layer', which inverts the gradient during backpropagation. (The arrows along the bottom of the paper's figure are the backpropagation paths whose gradient is inverted.) In the paper the authors claim that the approach "can be implemented using any
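
One way to realise this in Theano (a sketch, not necessarily how the paper's authors implemented it) is a custom Op whose forward pass is the identity and whose grad method returns the incoming gradient scaled by -hp_lambda:

```python
import theano
import theano.tensor as T

class ReverseGradient(theano.gof.Op):
    """Identity in the forward pass; multiplies the gradient by -hp_lambda."""
    view_map = {0: [0]}          # output 0 is a view of input 0
    __props__ = ('hp_lambda',)

    def __init__(self, hp_lambda):
        super(ReverseGradient, self).__init__()
        self.hp_lambda = hp_lambda

    def make_node(self, x):
        return theano.gof.Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        output_storage[0][0] = inputs[0]   # pass the value through unchanged

    def grad(self, inputs, output_gradients):
        return [-self.hp_lambda * output_gradients[0]]

# usage: y equals x in the forward pass, but backprop through it flips sign
x = T.matrix('x')
y = ReverseGradient(hp_lambda=1.0)(x)
g = theano.grad(y.sum(), x)   # -1 everywhere instead of +1
```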