Theano

PyMC3 with custom likelihood function from kernel density estimation

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-10 10:18:54
Question: I'm trying to use PyMC3 with a likelihood function derived from some observed data. This observed data doesn't fit any nice, standard distribution, so I want to define my own distribution based on these observations. One approach is to use kernel density estimation over the observations. This was possible in PyMC2, but it doesn't play nicely with the Theano variables in PyMC3. In my code below I'm just generating some dummy data that is normally distributed. As my prior, I'm essentially assuming a uniform…
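
One workaround worth sketching here, assuming PyMC3 ≥ 3.2 and SciPy: tabulate the KDE on a grid and hand it to pm.Interpolated, which wraps tabulated (x, pdf) values as a proper PyMC3 distribution, keeping the SciPy call out of the Theano graph. The data and grid below are stand-ins, not the question's actual observations:

    import numpy as np
    from scipy import stats
    import pymc3 as pm

    # Stand-in for the awkwardly distributed observations
    samples = np.random.normal(5.0, 2.0, size=2000)

    # Tabulate the kernel density estimate on a fixed grid
    kde = stats.gaussian_kde(samples)
    grid = np.linspace(samples.min() - 1.0, samples.max() + 1.0, 500)

    with pm.Model():
        # Interpolated turns the (x, pdf) table into a distribution object;
        # here it encodes the empirical density as a prior on mu
        mu = pm.Interpolated('mu', x_points=grid, pdf_points=kde(grid))
        trace = pm.sample(1000)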

Converting a Theano model built on GPU to CPU?

Submitted by 限于喜欢 on 2019-12-10 05:45:06
Question: I have some pickle files of deep learning models built on a GPU. I'm trying to use them in production, but when I try to unpickle them on the server I get the following error:

    Traceback (most recent call last):
      File "score.py", line 30, in <module>
        model = (cPickle.load(file))
      File "/usr/local/python2.7/lib/python2.7/site-packages/Theano-0.6.0-py2.7.egg/theano/sandbox/cuda/type.py", line 485, in CudaNdarray_unpickler
        return cuda.CudaNdarray(npa)
    AttributeError: ("'NoneType' object has no…
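
A hedged sketch of two ways out, since CudaNdarray objects only unpickle where CUDA is available. The flag in option 1 is, if I remember right, experimental.unpickle_gpu_on_cpu and needs a newer Theano than the 0.6.0 in the traceback; the file names and the .params attribute in option 2 are assumptions about the model class:

    # Option 1 -- on the CPU server: ask Theano's unpickler to hand back
    # plain numpy arrays instead of CudaNdarrays (newer Theano only).
    import cPickle
    import theano
    theano.config.experimental.unpickle_gpu_on_cpu = True
    with open('model.pkl', 'rb') as f:               # hypothetical pickle file
        model = cPickle.load(f)

    # Option 2 -- on the GPU machine: re-dump only the parameter values,
    # which pickle portably, and rebuild the model on the CPU side.
    params = [p.get_value() for p in model.params]   # assumes a .params list
    with open('params.pkl', 'wb') as f:
        cPickle.dump(params, f, -1)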

How to reuse Theano function with different shared variables without rebuilding graph?

Submitted by 夙愿已清 on 2019-12-10 04:24:44
Question: I have a Theano function that is called several times, each time with different shared variables. The way it is implemented now, the Theano function gets redefined every time it is run. I assume this makes the whole program slow, because every time the Theano function gets defined the graph is rebuilt.

    def sumprod_shared(T_shared_array1, T_shared_array2):
        f = theano.function([], (T_shared_array1 * T_shared_array2).sum(axis=0))
        return f()

    for factor in range(10):
        m1 = theano.shared…
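
The usual fix, sketched under the question's own setup: compile the function once against a fixed pair of shared variables and swap the data in with set_value, so the graph is built and optimized a single time. The shapes here are arbitrary placeholders:

    import numpy as np
    import theano

    # Compile once against fixed shared variables...
    s1 = theano.shared(np.zeros((3, 4)))
    s2 = theano.shared(np.zeros((3, 4)))
    sumprod = theano.function([], (s1 * s2).sum(axis=0))

    # ...then only swap the data on each call
    for factor in range(10):
        s1.set_value(np.random.rand(3, 4) * factor)
        s2.set_value(np.random.rand(3, 4))
        result = sumprod()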

Keras: convert pretrained weights between Theano and TensorFlow

Submitted by 廉价感情. on 2019-12-10 02:30:05
Question: I would like to use this pretrained model. It is in Theano layout, and my code depends on TensorFlow image dimension ordering. There is a guide on converting weights between the formats, but it seems broken. In the section on converting Theano to TensorFlow, the first instruction is to load the weights into the TensorFlow model: "Keras backend should be TensorFlow in this case. First, load the Theano-trained weights into your TensorFlow model:"

    model.load_weights('my_weights_theano.h5')

This raises…
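
For reference, a sketch of the recipe as I understand it for Keras 1.x/2.0 with the TensorFlow backend; build_model is a hypothetical function that recreates the architecture, and whether set_image_dim_ordering is still available depends on the Keras version:

    from keras import backend as K
    from keras.utils.layer_utils import convert_all_kernels_in_model

    K.set_image_dim_ordering('th')       # the weights were saved in Theano layout
    model = build_model()                # hypothetical: rebuild the architecture
    model.load_weights('my_weights_theano.h5')
    convert_all_kernels_in_model(model)  # flip conv kernels between th/tf layout
    model.save_weights('my_weights_tensorflow.h5')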

Configuring Theano so that it doesn't directly crash when a GPU memory allocation fails

Submitted by 时光怂恿深爱的人放手 on 2019-12-09 23:41:54
Question: When a Theano script tries to obtain more GPU memory than is currently available, it immediately crashes:

    Error allocating 26,214,400 bytes of device memory (out of memory).
    Driver report 19,365,888 bytes free and 1,073,414,144 bytes total

Is there any way to configure Theano so that it doesn't crash outright when a GPU memory allocation fails? E.g., it could instead retry every X seconds and give up after Y tries. (One use case: I have several Theano scripts running on the same GPU and using…
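
As far as I know Theano has no built-in retry option, so one workaround is a wrapper around calls to the compiled function; a naive sketch, assuming the failure surfaces as a Python exception whose message mentions allocation (allocations made at compile time are not covered):

    import time

    def call_with_retry(fn, *args, **kwargs):
        # Retry every 30 s, give up after 20 attempts; tune to taste.
        retries, wait = 20, 30
        for attempt in range(retries):
            try:
                return fn(*args, **kwargs)
            except (MemoryError, RuntimeError) as exc:
                if 'allocat' not in str(exc).lower():
                    raise                 # not an out-of-memory failure
                time.sleep(wait)
        raise RuntimeError('GPU allocation still failing after %d tries'
                           % retries)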

Using multiprocessing with Theano

Submitted by 六眼飞鱼酱① on 2019-12-09 19:10:18
Question: I'm trying to use Theano with CPU multiprocessing together with a neural network library, Keras. I use the device=gpu flag and load the Keras model. Then, to extract features for over a million images, I'm using a multiprocessing pool. The function looks something like this:

    from keras import backend as K

    f = K.function([model.layers[0].input, K.learning_phase()],
                   [model.layers[-3].output,])

    def feature_gen(flz):
        im = imread(flz)
        cPickle.dump(f([im, 0])[0][0], open(flz, 'wb'), -1)

    pool = mp.Pool(processes…
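
The classic pitfall here is forking after the GPU context already exists in the parent process. One common workaround, sketched with hypothetical names (model.h5, image_paths): import Keras/Theano only inside each worker via the Pool initializer, so every process builds its own Theano state:

    import multiprocessing as mp

    def init_worker():
        # Import Keras only after the fork; each worker gets its own backend.
        global f
        from keras import backend as K
        from keras.models import load_model
        model = load_model('model.h5')               # hypothetical model file
        f = K.function([model.layers[0].input, K.learning_phase()],
                       [model.layers[-3].output])

    def feature_gen(flz):
        import cPickle
        from scipy.misc import imread
        im = imread(flz)
        cPickle.dump(f([im, 0])[0][0], open(flz + '.feat', 'wb'), -1)

    if __name__ == '__main__':
        image_paths = ['img_0001.png', 'img_0002.png']   # hypothetical list
        pool = mp.Pool(processes=4, initializer=init_worker)
        pool.map(feature_gen, image_paths)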

Purpose of 'givens' variables in theano.function

Submitted by ∥☆過路亽.° on 2019-12-09 08:16:00
Question: I was reading the logistic regression code given at http://deeplearning.net/tutorial/logreg.html. I am confused about the difference between the inputs and givens variables for a function. The functions that compute the mistakes made by a model on a minibatch are:

    test_model = theano.function(
        inputs=[index],
        outputs=classifier.errors(y),
        givens={
            x: test_set_x[index * batch_size: (index + 1) * batch_size],
            y: test_set_y[index * batch_size: (index + 1) * batch_size]})

    validate_model = theano…
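
A toy example contrasting the two: inputs are supplied at call time, while givens rewrite the graph at compile time, substituting one node for another (here, a slice of a shared dataset stands in for x, exactly as in the tutorial's minibatch functions):

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.dvector('x')
    w = T.dvector('w')
    out = (x * w).sum()

    data = theano.shared(np.arange(5.0))
    i = T.lscalar('i')

    # x never has to be passed in: givens replaces it with data[i:i + 3],
    # so only i and w remain as runtime inputs.
    f = theano.function(inputs=[i, w], outputs=out,
                        givens={x: data[i:i + 3]})

    print(f(1, np.ones(3)))   # uses data[1:4] for x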

Getting Keras (with Theano) to work with Celery

Submitted by 爷,独闯天下 on 2019-12-09 06:14:28
Question: I have some Keras code which works synchronously to predict a given input. I have even amended it so it can work with standard multi-threading (using locks, in a separate class from this one). However, when running via asynchronous Celery (even with one worker and one task) I get an error when calling predict on the Keras model.

    @app.task
    def predict_task(param):
        """Run task."""
        json_file = open('keras_model.json', 'r')
        loaded_model_json = json_file.read()
        json_file.close()
        model = model_from…
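
One pattern that usually sidesteps this, sketched with a hypothetical Redis broker and weights file: build the model lazily, once per worker process, and cache it, rather than rebuilding it inside every task call:

    from celery import Celery

    app = Celery('tasks', broker='redis://localhost:6379/0')  # hypothetical broker

    _model = None

    def get_model():
        # Load the model once per worker process and cache it.
        global _model
        if _model is None:
            from keras.models import model_from_json
            with open('keras_model.json') as f:
                _model = model_from_json(f.read())
            _model.load_weights('keras_model.h5')     # hypothetical weights file
        return _model

    @app.task
    def predict_task(param):
        import numpy as np
        model = get_model()
        return model.predict(np.asarray(param)[None, ...]).tolist()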

Accessing gradient values of keras model outputs with respect to inputs

Submitted by 送分小仙女□ on 2019-12-09 00:08:10
Question: I made a pretty simple NN model to do some non-linear regression for me in Keras, as an introductory exercise. I uploaded my Jupyter notebook as a gist here (it renders properly on GitHub), which is pretty short and to the point. It just fits the 1D function y = (x - 5)^2 / 25. I know that Theano and TensorFlow are, at their core, graph-based derivative (gradient) passing frameworks, and that using the gradients of loss functions with respect to weights for gradient-step-based optimization is…
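
Since the backends expose those graphs, the gradient of the model output with respect to its input can be read out directly through the Keras backend; a sketch, assuming model is the fitted 1-D regression net from the notebook:

    import numpy as np
    from keras import backend as K

    # Symbolic dy/dx (summing makes the cost scalar, which the Theano
    # backend requires), then a callable that evaluates it numerically
    grads = K.gradients(K.sum(model.output), model.input)[0]
    grad_fn = K.function([model.input, K.learning_phase()], [grads])

    x = np.linspace(0.0, 10.0, 50).reshape(-1, 1)
    dydx = grad_fn([x, 0])[0]   # should approximate 2 * (x - 5) / 25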

PyMC3 & Theano - Theano code that works stops working after pymc3 import

Submitted by 余生长醉 on 2019-12-08 17:06:13
Question: Some simple Theano code that works perfectly stops working when I import PyMC3. Here are some snippets to reproduce the error:

    # Initial Theano code (this works)
    import theano.tensor as tsr

    x = tsr.dscalar('x')
    y = tsr.dscalar('y')
    z = x + y

    # Snippet 1
    import pymc3 as pm
    import theano.tensor as tsr

    x = tsr.dscalar('x')
    y = tsr.dscalar('y')
    z = x + y

    # Snippet 2
    import theano.tensor as tsr
    import pymc3 as pm

    x = tsr.dscalar('x')
    y = tsr.dscalar('y')
    z = x + y

    # Snippet 3
    import pymc3 as pm

    x =…
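
If memory serves, the culprit is that importing pymc3 turns on theano.config.compute_test_value, so bare symbolic variables without test values start raising as soon as they are combined; a hedged sketch of two workarounds:

    import pymc3 as pm
    import theano
    import theano.tensor as tsr

    # Workaround 1: switch test-value computation back off after the import
    theano.config.compute_test_value = 'off'
    x = tsr.dscalar('x')
    y = tsr.dscalar('y')
    z = x + y

    # Workaround 2: keep it on, but attach a test value to every variable
    theano.config.compute_test_value = 'raise'
    a = tsr.dscalar('a')
    a.tag.test_value = 0.0
    b = tsr.dscalar('b')
    b.tag.test_value = 0.0
    c = a + b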