theano

Theano tensor slicing… how to use boolean to slice?

坚强是说给别人听的谎言 Submitted on 2019-12-05 18:34:24
In numpy, if I have a boolean array, I can use it to select elements of another array:

    >>> import numpy as np
    >>> x = np.array([1, 2, 3])
    >>> idx = np.array([True, False, True])
    >>> x[idx]
    array([1, 3])

I need to do this in theano. This is what I tried, but I got an unexpected result.

    >>> from theano import tensor as T
    >>> x = T.vector()
    >>> idx = T.ivector()
    >>> y = x[idx]
    >>> y.eval({x: np.array([1, 2, 3]), idx: np.array([True, False, True])})
    array([ 2.,  1.,  2.])

Can someone explain the theano result and suggest how to get the numpy result? I need to know how to do this in order to properly …
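To explain the result: Theano casts the True/False values to the integers 1 and 0 and then performs ordinary integer indexing, so the call returns x[[1, 0, 1]], i.e. [2., 1., 2.]. A minimal sketch of one common workaround, assuming Theano's nonzero() method on tensor variables, is to turn the mask into index positions first:

    # Sketch only: boolean-style masking via nonzero(); variable names are illustrative.
    import numpy as np
    from theano import tensor as T

    x = T.dvector()
    idx = T.bvector()                 # int8 vector holding the 0/1 mask
    y = x[idx.nonzero()]              # index with the positions where the mask is nonzero

    print(y.eval({x: np.array([1., 2., 3.]),
                  idx: np.array([1, 0, 1], dtype='int8')}))
    # expected: array([ 1.,  3.])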

How do you free up gpu memory?

白昼怎懂夜的黑 Submitted on 2019-12-05 18:21:41
When running theano, I get an error: not enough memory. See below. What are some possible actions that can be taken to free up memory? I know I can close applications etc., but I just want to see if anyone has other ideas. For example, is it possible to reserve memory?

    THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python conv_exp.py
    Using gpu device 0: GeForce GT 650M
    Trying to run under a GPU. If this is not desired, then modify network3.py to set the GPU flag to False.
    Error allocating 156800000 bytes of device memory (out of memory). Driver report 64192512 bytes free and 1073414144 bytes …
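A hedged configuration sketch, not taken from the post: with the old device=gpu backend shown above, Theano's allow_gc flag controls whether intermediate GPU buffers are freed between ops, and lib.cnmem pre-reserves a memory pool, which is the closest thing to "reserving memory". Flag names assume Theano 0.7/0.8; adjust for your version.

    # Sketch: memory-related flags must be set before theano is imported.
    import os
    os.environ["THEANO_FLAGS"] = (
        "mode=FAST_RUN,device=gpu,floatX=float32,"
        "allow_gc=True,"      # free intermediate GPU buffers between ops (the default)
        "lib.cnmem=0.8"       # ask CNMeM to pre-reserve ~80% of GPU memory as a pool
    )

    import theano  # the flags above only take effect if set before this import
    print(theano.config.allow_gc)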

print out the shape of each layer in the net architecture

让人想犯罪 __ Submitted on 2019-12-05 14:37:04
In Keras, we can define the network as follows. Is there any way to output the shape after each layer? For instance, I want to print out the shape of inputs after the line defining inputs, then print out the shape of conv1 after the line defining conv1, etc.

    inputs = Input((1, img_rows, img_cols))
    conv1 = Convolution2D(64, 3, 3, activation='relu', init='lecun_uniform',
                          W_constraint=maxnorm(3), border_mode='same')(inputs)
    conv1 = Convolution2D(64, 3, 3, activation='relu', init='lecun_uniform',
                          W_constraint=maxnorm(3), border_mode='same')(conv1)
    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
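A minimal sketch of two ways to do this, assuming Keras 1.x with the Theano backend as in the post (the img_rows/img_cols values are hypothetical): every Keras tensor carries its static shape in the _keras_shape attribute, and a built Model prints all layer shapes at once with summary().

    # Sketch only: print shapes layer by layer, or all at once via summary().
    from keras.layers import Input, Convolution2D, MaxPooling2D
    from keras.models import Model

    img_rows, img_cols = 64, 64               # hypothetical sizes, not from the post

    inputs = Input((1, img_rows, img_cols))
    print(inputs._keras_shape)                # e.g. (None, 1, 64, 64)

    conv1 = Convolution2D(64, 3, 3, activation='relu', border_mode='same')(inputs)
    print(conv1._keras_shape)                 # e.g. (None, 64, 64, 64)

    pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
    print(pool1._keras_shape)                 # e.g. (None, 64, 32, 32)

    model = Model(input=inputs, output=pool1)
    model.summary()                           # prints every layer's output shape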

Theano CNN on CPU: AbstractConv2d Theano optimization failed

半城伤御伤魂 Submitted on 2019-12-05 12:30:09
Question: I'm trying to train a CNN for object detection on images with the CIFAR10 dataset for a seminar at my university, but I get the following error:

    AssertionError: AbstractConv2d Theano optimization failed: there is no implementation
    available supporting the requested options. Did you exclude both "conv_dnn" and
    "conv_gemm" from the optimizer? If on GPU, is cuDNN available and does the GPU support
    it? If on CPU, do you have a BLAS library installed Theano can link against?

I am running Anaconda 2 …
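A hedged diagnostic sketch, not from the post: on CPU this assertion usually means Theano found no concrete implementation to lower AbstractConv2d to, which in turn often means no BLAS library could be linked. Checking the relevant config values is a reasonable first step (attribute names assume Theano 0.8+).

    # Sketch: quick checks for the CPU convolution / BLAS situation.
    import numpy as np
    import theano

    print(theano.__version__)            # very old versions lack some CPU conv paths
    print(theano.config.device)          # should be 'cpu' in this scenario
    print(theano.config.blas.ldflags)    # empty string -> no BLAS to link against
    np.__config__.show()                 # shows which BLAS numpy itself was built with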

Keras imageGenerator Exception: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: None

我是研究僧i Submitted on 2019-12-05 12:16:02
I'm currently trying to follow the example here using a dataset I generated by myself. The back end is run using Theano. The directory structure is exactly the same:

    image_sets/
        dogs/
            dog001.jpg
            dog002.jpg
            ...
        cats/
            cat001.jpg
            cat002.jpg
            ...
    validation/
        dogs/
            dog001.jpg
            dog002.jpg
            ...
        cats/
            cat001.jpg

Here is my code for the keras convolutional neural network.

    img_width, img_height = 150, 150
    train_data_dir = './image_sets'
    validation_data_dir = './validation'
    nb_train_samples = 267
    print nb_train_samples
    # number of validation images I have
    nb_validation …
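Since the post is cut off, the sketch below is only a sanity check, not a diagnosis: pulling one batch directly from flow_from_directory before calling fit_generator shows whether the generator actually yields (x, y) tuples and finds images under the class subfolders (paths and sizes are taken from the post; the batch size is assumed).

    # Sketch: verify that the generator produces (x, y) tuples before training.
    from keras.preprocessing.image import ImageDataGenerator

    img_width, img_height = 150, 150

    train_datagen = ImageDataGenerator(rescale=1. / 255)
    train_generator = train_datagen.flow_from_directory(
        './image_sets',                       # must contain one subfolder per class
        target_size=(img_width, img_height),
        batch_size=32,
        class_mode='binary')                  # dogs vs. cats -> binary labels

    x_batch, y_batch = next(train_generator)  # each item should be an (x, y) tuple
    print(x_batch.shape, y_batch.shape)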

Python won't find variable in module

风格不统一 Submitted on 2019-12-05 11:54:45
I just started playing around with Theano but have a strange problem in Eclipse. I am trying to import the config module to run some example code. The import works fine and I can see what's in the module. Here is the simple code I am trying:

    from theano import config
    print config

This works fine and I get an output like:

    floatX (('float64', 'float32'))
        Doc:  Default floating-point precision for python casts
        Value:  float32

... and some more lines like that. Unfortunately, if I use the following code, I get an "undefined variable from import" error for floatX:

    from theano import config
    print …
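For what it's worth, a minimal sketch showing why the IDE warning can usually be ignored: theano.config attributes such as floatX are created dynamically at import time, so static analysis in Eclipse may not see them even though the code runs fine.

    # Sketch: the attribute exists at runtime even if the IDE flags it as undefined.
    from theano import config

    print(config.floatX)                  # e.g. 'float32'
    print(getattr(config, 'floatX'))      # equivalent, and usually silences static checkers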

theano g++ not detected

自作多情 Submitted on 2019-12-05 11:39:17
Question: I installed theano, but when I try to use it I get this error:

    WARNING (theano.configdefaults): g++ not detected! Theano will be unable to execute
    optimized C-implementations (for both CPU and GPU) and will default to Python
    implementations. Performance will be severely degraded.

I installed g++ and put the correct path in the environment variables, so it seems that theano does not detect it. Does anyone know how to solve the problem, or what might be the cause?

Answer 1: I had this occur on OS X …
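A hedged diagnostic sketch, separate from the answer above: check whether g++ is visible from inside the Python process (the PATH an IDE uses can differ from the shell's) and which compiler Theano ended up with.

    # Sketch: compiler visibility checks from within Python.
    import os
    from distutils.spawn import find_executable

    print(find_executable('g++'))    # None -> g++ is not on this process's PATH
    print(os.environ.get('PATH'))

    import theano
    print(theano.config.cxx)         # empty string -> Theano found no C++ compiler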

How to reuse Theano function with different shared variables without rebuilding graph?

喜欢而已 Submitted on 2019-12-05 09:25:00
I have a Theano function that is called several times, each time with different shared variables. The way it is implemented now, the Theano function gets redefined every time it is run. I assume that this makes the whole program slow, because every time the Theano function gets defined the graph is rebuilt.

    def sumprod_shared(T_shared_array1, T_shared_array2):
        f = theano.function([], (T_shared_array1 * T_shared_array2).sum(axis=0))
        return f()

    for factor in range(10):
        m1 = theano.shared(factor * array([[1, 2, 4], [5, 6, 7]]))
        m2 = theano.shared(factor * array([[1, 2, 4], [5, 6, 7]]))
        print …
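A minimal sketch of the usual fix, not the poster's exact code: compile the graph once with ordinary symbolic inputs and then feed each pair of arrays to the already-compiled function, so the graph is built and optimized only a single time.

    # Sketch: one compilation, many calls.
    import numpy as np
    import theano
    import theano.tensor as T

    a = T.dmatrix('a')
    b = T.dmatrix('b')
    sumprod = theano.function([a, b], (a * b).sum(axis=0))   # compiled once

    base = np.array([[1, 2, 4], [5, 6, 7]], dtype='float64')
    for factor in range(10):
        print(sumprod(factor * base, factor * base))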

Keras 1.0: getting intermediate layer output

萝らか妹 Submitted on 2019-12-05 09:02:14
I am currently trying to visualize the output of an intermediate layer in Keras 1.0 (which I could do with Keras 0.3), but it does not work anymore.

    x = model.input
    y = model.layers[3].output
    f = theano.function([x], y)

But I get the following error:

    MissingInputError: ("An input of the graph, used to compute
    DimShuffle{x,x,x,x}(keras_learning_phase), was not provided and not given a value.
    Use the Theano flag exception_verbosity='high', for more information on this error.",
    keras_learning_phase)

Prior to Keras 1.0, with my graph model, I could just do:

    x = graph.inputs['input'].input
    y = graph …
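A minimal sketch of the usual Keras 1.x workaround, assuming the model contains layers such as Dropout that depend on the learning phase: pass the learning-phase flag as an extra input when building the backend function (model and X_batch stand in for the poster's objects).

    # Sketch: include keras_learning_phase as an input of the compiled function.
    from keras import backend as K

    get_layer3_output = K.function([model.input, K.learning_phase()],
                                   [model.layers[3].output])

    # 0 = test phase (dropout disabled), 1 = train phase
    layer3_output = get_layer3_output([X_batch, 0])[0]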

Keras GRU NN KeyError when fitting : “not in index”

醉酒当歌 Submitted on 2019-12-05 07:15:34
I'm currently facing an issue while trying to fit my GRU model with my training data. After a quick look on StackOverflow, I found this post to be quite similar to my issue: Simplest Lstm training with Keras io

My own model is as follows:

    nn = Sequential()
    nn.add(Embedding(input_size, hidden_size))
    nn.add(GRU(hidden_size_2, return_sequences=False))
    nn.add(Dropout(0.2))
    nn.add(Dense(output_size))
    nn.add(Activation('linear'))
    nn.compile(loss='mse', optimizer="rmsprop")

    history = History()
    nn.fit(X_train, y_train, batch_size=30, nb_epoch=200, validation_split=0.1, callbacks=[history])

And the …
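Since the post is truncated, the following is only an assumption-laden sketch: a pandas-style "not in index" KeyError typically appears when X_train/y_train are DataFrames or Series that Keras slices by position during fit(), and converting them to plain numpy arrays avoids it (X_train, y_train, nn and history are the poster's objects).

    # Sketch: feed numpy arrays rather than pandas objects to fit().
    import numpy as np

    X_train_arr = np.asarray(X_train)    # or X_train.values for a DataFrame
    y_train_arr = np.asarray(y_train)

    nn.fit(X_train_arr, y_train_arr,
           batch_size=30, nb_epoch=200,
           validation_split=0.1, callbacks=[history])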