Theano

Theano: change `base_compiledir` to save compiled files in another directory

瘦欲@ submitted on 2019-12-04 12:04:26
theano.base_compiledir refers to the directory where the compiled files are stored. Is there a way I could permanently set theano.base_compiledir to a different location, perhaps by modifying some internal Theano files? http://deeplearning.net/software/theano/library/config.html explains ways of configuring Theano, but I still couldn't find an answer to my question. I am using Ubuntu. Thanks & Cheers! As the documentation explains, you can set this, or any other Theano config flag, permanently by altering either the THEANO_FLAGS environment variable (e.g. in your
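The two standard places to set the flag are the THEANO_FLAGS environment variable and the ~/.theanorc file; a minimal sketch (the cache path here is hypothetical, substitute your own):

```shell
# One-off, or place this line in your shell profile to make it permanent:
export THEANO_FLAGS="base_compiledir=$HOME/.theano_cache"

# Equivalently, the permanent file-based form in ~/.theanorc would be:
#   [global]
#   base_compiledir = /home/user/.theano_cache
```

THEANO_FLAGS takes precedence over ~/.theanorc, so the environment variable is handy for per-run overrides.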

Recurrent convolutional BLSTM neural network - arbitrary sequence lengths

走远了吗. submitted on 2019-12-04 11:50:57
Using Keras + Theano, I successfully made a recurrent bidirectional-LSTM neural network capable of training on and classifying DNA sequences of arbitrary length, using the following model (for fully working code see http://pastebin.com/jBLv8B72 ): sequence = Input(shape=(None, ONE_HOT_DIMENSION), dtype='float32') dropout = Dropout(0.2)(sequence) # bidirectional LSTM forward_lstm = LSTM( output_dim=50, init='uniform', inner_init='uniform', forget_bias_init='one', return_sequences=True, activation='tanh', inner_activation='sigmoid', )(dropout) backward_lstm = LSTM( output_dim=50, init=
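The arbitrary-length input rests on one-hot encoding each sequence into shape (length, ONE_HOT_DIMENSION) before feeding it to Input(shape=(None, ONE_HOT_DIMENSION)). A minimal numpy sketch of that encoding step, assuming a 4-letter DNA alphabet (the function name and base ordering are illustrative, not from the original post):

```python
import numpy as np

# Assumed base-to-column mapping; ONE_HOT_DIMENSION would be 4 here.
DNA_ALPHABET = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot_dna(seq):
    """Encode a DNA string of any length as a (len(seq), 4) float32 array."""
    arr = np.zeros((len(seq), len(DNA_ALPHABET)), dtype="float32")
    for i, base in enumerate(seq):
        arr[i, DNA_ALPHABET[base]] = 1.0
    return arr
```

Each encoded sequence then becomes one sample of shape (None, 4), which is exactly what the variable-length Input layer above accepts.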

keras + scikit-learn wrapper, appears to hang when GridSearchCV with n_jobs >1

帅比萌擦擦* submitted on 2019-12-04 11:05:28
UPDATE: I have had to re-write this question, as after some investigation I realised that this is a different problem. Context: running Keras in a grid-search setting using the KerasClassifier wrapper with scikit-learn. System: Ubuntu 16.04; libraries: Anaconda distribution 5.1, Keras 2.0.9, scikit-learn 0.19.1, TensorFlow 1.3.0 or Theano 0.9.0, using CPUs only. Code: I simply used the code here for testing: https://machinelearningmastery.com/use-keras-deep-learning-models-scikit-learn-python/ , the second example, 'Grid Search Deep Learning Model Parameters'. Pay attention to line 35, which reads: grid
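One common cause of this hang: with n_jobs > 1, GridSearchCV dispatches work to joblib worker processes, which must pickle the estimator, and a Keras-backed estimator often cannot be pickled, so the dispatch stalls. A stdlib-only sketch for checking whether an object survives that round trip (the helper name is mine):

```python
import pickle

def is_picklable(obj):
    """Return True if obj can be serialized for a joblib worker process."""
    try:
        pickle.dumps(obj)
        return True
    except Exception:
        return False
```

Running this check on the wrapped classifier before launching the grid search tells you whether n_jobs > 1 can work at all; if it cannot, falling back to n_jobs=1 is the usual workaround.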

Theano import error: No module named cPickle

时光总嘲笑我的痴心妄想 submitted on 2019-12-04 10:43:21
Question: >>> import theano Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Python/2.7/site-packages/theano/__init__.py", line 52, in <module> from theano.gof import ( File "/Library/Python/2.7/site-packages/theano/gof/__init__.py", line 38, in <module> from theano.gof.cc import \ File "/Library/Python/2.7/site-packages/theano/gof/cc.py", line 30, in <module> from theano.gof import link File "/Library/Python/2.7/site-packages/theano/gof/link.py", line 18, in
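For context: cPickle exists only on Python 2; in Python 3 its C implementation was folded into the plain pickle module. This error therefore usually means a copy of Theano built for one Python major version is being imported under the other. The classic compatibility shim, shown as a sketch you could apply in your own code (the real fix is reinstalling Theano for the interpreter you actually run):

```python
try:
    import cPickle as pickle  # Python 2: the C-accelerated pickler
except ImportError:
    import pickle  # Python 3: the C implementation is used automatically

# Round-trip check that the chosen module works
data = pickle.loads(pickle.dumps({"theano": "ok"}))
```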

Keras uses way too much GPU memory when calling train_on_batch, fit, etc

天涯浪子 submitted on 2019-12-04 07:59:32
Question: I've been messing with Keras and like it so far. There's one big issue I have been having when working with fairly deep networks: when calling model.train_on_batch, model.fit, etc., Keras allocates significantly more GPU memory than the model itself should need. This is not caused by training on really large images; it's the network model itself that seems to require a lot of GPU memory. I have created this toy example to show what I mean. Here's essentially what's going
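Much of that "extra" memory is the forward activations of every layer, which the backend keeps alive for backpropagation, so weight counts alone understate the footprint. A rough back-of-the-envelope estimator in plain Python (the example shapes and the float32 assumption are mine, not from the original post):

```python
def activation_bytes(batch_size, layer_output_shapes, bytes_per_value=4):
    """Estimate memory held by per-layer activations (float32 by default)."""
    total = 0
    for shape in layer_output_shapes:
        n = batch_size
        for dim in shape:
            n *= dim
        total += n * bytes_per_value
    return total

# e.g. three conv feature maps on a 64x64 input, batch size 32:
est = activation_bytes(32, [(64, 64, 64), (128, 32, 32), (256, 16, 16)])
```

Even this modest stack holds tens of megabytes of activations per batch before gradients and workspace buffers are counted, which is why deep models blow past the size of their weights.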

Unsupervised pre-training for convolutional neural network in theano

孤者浪人 submitted on 2019-12-04 07:46:04
Question: I would like to design a deep net with one (or more) convolutional layers (CNN) and one or more fully connected hidden layers on top. For deep networks with fully connected layers, there are methods in Theano for unsupervised pre-training, e.g. using denoising auto-encoders or RBMs. My question is: how can I implement (in Theano) an unsupervised pre-training stage for convolutional layers? I do not expect a full implementation as an answer, but I would appreciate a link to a good tutorial or a
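The denoising-autoencoder recipe carries over to conv layers: train each conv layer as the encoder of a convolutional autoencoder that reconstructs a corrupted input, then keep the learned filters as initialization. The corruption step itself is simple; a numpy sketch of masking noise (the rate and RNG choices are mine, and a Theano version would express the same with a shared random stream):

```python
import numpy as np

def corrupt(x, rate, rng):
    """Zero out roughly a fraction `rate` of the entries (denoising-AE style)."""
    mask = rng.uniform(size=x.shape) >= rate
    return x * mask
```

Pairing this corruption with a conv/deconv reconstruction and a mean-squared-error cost gives layer-wise pre-training; the deeplearning.net dA tutorial covers the fully connected case that this generalizes.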

merging recurrent layers with dense layer in Keras

筅森魡賤 submitted on 2019-12-04 07:13:10
I want to build a neural network where the first two layers are feedforward and the last one is recurrent. Here is my code: model = Sequential() model.add(Dense(150, input_dim=23,init='normal',activation='relu')) model.add(Dense(80,activation='relu',init='normal')) model.add(SimpleRNN(2,init='normal')) adam =OP.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08) model.compile(loss="mean_squared_error", optimizer="rmsprop") and I get this error: Exception: Input 0 is incompatible with layer simplernn_11: expected ndim=3, found ndim=2. model.compile(loss='mse', optimizer=adam) It is
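The exception is about tensor rank: Dense emits a 2-D (batch, features) tensor, while SimpleRNN consumes a 3-D (batch, timesteps, features) tensor. Inserting a time axis of length 1 reconciles the two; in numpy terms (on the Keras side, the analogous fix is a Reshape or RepeatVector layer placed before the RNN):

```python
import numpy as np

dense_out = np.zeros((32, 80))        # Dense output: (batch, features), ndim=2
rnn_in = dense_out[:, np.newaxis, :]  # add a timestep axis: (batch, 1, features), ndim=3
```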

anaconda python error importing theano

放肆的年华 submitted on 2019-12-04 05:48:16
I'm quite new to Python, and of course I'm also new to Theano. I'm trying to use it on Windows along with Anaconda Python. I have installed all the compulsory requirements (except CUDA, since this laptop has no NVIDIA GPU). I installed the suggested GCC and set the path as described in the walkthrough page. Still I get the following error: Problem occurred during compilation with the command line below: C:\TDM-GCC-64\bin\g++.exe -shared -g -march=broadwell -mmmx -mno-3dnow -msse -msse2 -msse3 -mssse3 -mno-sse4a -mcx16 -msahf -mmovbe -maes -mno-sha -mpclmul -mpopcnt -mabm -mno-lwp
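Before digging into Theano's compiler flags, it is worth confirming that the g++ Theano found is actually on PATH and runnable. A stdlib-only sketch (the Windows path in the comment is just the one from the error above):

```python
import shutil
import subprocess

gxx = shutil.which("g++")  # e.g. C:\TDM-GCC-64\bin\g++.exe on this setup
if gxx is not None:
    # A working toolchain answers --version with exit code 0
    ok = subprocess.run([gxx, "--version"], capture_output=True).returncode == 0
else:
    ok = False
```

If this reports no usable g++, the problem is the PATH or the GCC install itself rather than Theano's configuration.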

How to implement Weighted Binary CrossEntropy on theano?

China☆狼群 submitted on 2019-12-04 03:15:37
How can I implement weighted binary cross-entropy in Theano? My convolutional neural network only predicts values between 0 and 1 (sigmoid). I want to penalize my predictions in this way: basically, I want to penalize MORE when the model predicts 0 but the truth was 1. Question: how can I create this weighted binary cross-entropy function using Theano and Lasagne? I tried this below: prediction = lasagne.layers.get_output(model) import theano.tensor as T def weighted_crossentropy(predictions, targets): # Copy the tensor tgt = targets.copy("tgt") # Make it a vector # tgt = tgt.flatten() # tgt = tgt.reshape(3000) #
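For reference, the cost itself is ordinary binary cross-entropy with separate weights on the two error types; here is a numpy version of the math (the weight values and clipping constant are my choices, and a Theano version would swap np for theano.tensor and feed the result to Lasagne's training loop):

```python
import numpy as np

def weighted_binary_crossentropy(pred, tgt, w_pos=2.0, w_neg=1.0, eps=1e-7):
    """Penalize false negatives (truth 1, prediction near 0) w_pos/w_neg times harder."""
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    loss = -(w_pos * tgt * np.log(pred)
             + w_neg * (1.0 - tgt) * np.log(1.0 - pred))
    return loss.mean()
```

With w_pos > w_neg, predicting 0.1 when the truth is 1 costs more than predicting 0.9 when the truth is 0, which is exactly the asymmetry asked for.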

Theano: Initialisation of device gpu failed! Reason=CNMEM_STATUS_OUT_OF_MEMORY

纵饮孤独 submitted on 2019-12-04 03:01:51
I am running the Keras example kaggle_otto_nn.py with the Theano backend. When I set cnmem=1 , the following error comes out: cliu@cliu-ubuntu:keras-examples$ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32,lib.cnmem=1 python kaggle_otto_nn.py Using Theano backend. ERROR (theano.sandbox.cuda): ERROR: Not using GPU. Initialisation of device gpu failed: initCnmem: cnmemInit call failed! Reason=CNMEM_STATUS_OUT_OF_MEMORY. numdev=1 /usr/local/lib/python2.7/dist-packages/Theano-0.8.0rc1-py2.7.egg/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the
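A likely explanation: lib.cnmem=1 asks CNMeM to reserve 100% of GPU memory up front, which fails if anything else (the desktop session, another process) already holds some. Reserving a fraction instead is the usual workaround; a sketch of the adjusted invocation (the 0.8 value is an assumption to tune for your GPU):

```shell
export THEANO_FLAGS="mode=FAST_RUN,device=gpu,floatX=float32,lib.cnmem=0.8"
# then: python kaggle_otto_nn.py
```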