rnn

Making predictions with TensorFlow

Submitted by China☆狼群 on 2019-12-13 21:47:51
Question: I'm really a beginner with TensorFlow and this whole field, but I've watched all of Andrej Karpathy's CS231n lectures, so I can follow the code. This is the code (not mine): https://github.com/nfmcclure/tensorflow_cookbook/tree/master/09_Recurrent_Neural_Networks/02_Implementing_RNN_for_Spam_Prediction

    # Implementing an RNN in TensorFlow
    # ----------------------------------
    #
    # We implement an RNN in TensorFlow to predict spam/ham from texts
    #
    # https://github.com
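
For orientation, here is a minimal sketch (assumed sizes, not the cookbook's exact code) of the TF 1.x pattern that recipe follows: embed token ids, run a basic RNN cell over the sequence, and classify the final hidden state as spam or ham.

```python
import tensorflow as tf

vocab_size, embed_dim, rnn_units, max_len = 10000, 50, 10, 25  # assumed sizes

x = tf.placeholder(tf.int32, [None, max_len])   # token ids per text
y = tf.placeholder(tf.int32, [None])            # 0 = ham, 1 = spam

embeddings = tf.Variable(tf.random_uniform([vocab_size, embed_dim], -1.0, 1.0))
embedded = tf.nn.embedding_lookup(embeddings, x)  # [batch, time, embed_dim]

cell = tf.nn.rnn_cell.BasicRNNCell(rnn_units)
outputs, state = tf.nn.dynamic_rnn(cell, embedded, dtype=tf.float32)

logits = tf.layers.dense(state, 2)              # classify the last hidden state
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.RMSPropOptimizer(0.0005).minimize(loss)
```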

AttributeError: module 'tensorflow.python.pywrap_tensorflow' has no attribute 'TFE_Py_RegisterExceptionClass'

Submitted by 北城余情 on 2019-12-13 16:04:06
Question: I'm trying to run an example from the TensorFlow timeseries contrib module, but I'm getting this error: AttributeError: module 'tensorflow.python.pywrap_tensorflow' has no attribute 'TFE_Py_RegisterExceptionClass'. I'm using Anaconda; the current environment is Python 3.5 with TensorFlow 1.2.1. I also tried TF 1.3, but nothing changed. Here is the code I'm trying to run: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/timeseries/examples/predict.py I cannot find anything about this
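
In similar reports this error points at a broken or mixed installation rather than the example itself: the compiled pywrap extension and the Python sources disagree about the eager-execution API. A quick diagnostic sketch (an assumption about the cause, not a guaranteed fix) that avoids importing tensorflow, since the import itself is what fails:

```python
# List installed TensorFlow distributions without importing tensorflow.
# Multiple or leftover distributions are the usual culprit for this error.
import pkg_resources

for dist in pkg_resources.working_set:
    if "tensorflow" in dist.project_name.lower():
        print(dist.project_name, dist.version, dist.location)
```

If this shows duplicates or a stale location, uninstalling tensorflow until pip reports nothing left and reinstalling into a fresh conda environment is the usual remedy.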

How can I build an RNN without using nn.RNN?

Submitted by 泪湿孤枕 on 2019-12-13 04:04:17
Question: I need to build an RNN (without using nn.RNN) to the following specifications. It is a character RNN and should have one hidden layer with this set of weights: Wxh (from the input layer to the hidden layer), Whh (the recurrent connection in the hidden layer), and Who (from the hidden layer to the output layer). I need to use tanh for the hidden layer and softmax for the output layer. I have implemented the code and am using CrossEntropyLoss() as the loss function, which gives me an error: RuntimeError
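
A minimal sketch of such a hand-rolled cell in PyTorch (names and sizes are illustrative). One common cause of errors with CrossEntropyLoss in this setup is applying softmax before the loss: nn.CrossEntropyLoss computes log-softmax internally and expects raw logits plus integer class targets, so the explicit softmax belongs only at sampling time.

```python
import torch
import torch.nn as nn

class CharRNN(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.Wxh = nn.Linear(vocab_size, hidden_size, bias=False)  # input -> hidden
        self.Whh = nn.Linear(hidden_size, hidden_size)             # recurrent weights
        self.Who = nn.Linear(hidden_size, vocab_size)              # hidden -> output

    def forward(self, x, h):
        # x: [batch, vocab_size] one-hot chars, h: [batch, hidden_size]
        h = torch.tanh(self.Wxh(x) + self.Whh(h))
        return self.Who(h), h          # raw logits for the loss, new hidden state

model = CharRNN(vocab_size=65, hidden_size=128)
criterion = nn.CrossEntropyLoss()      # expects logits + class indices
h = torch.zeros(4, 128)
x = torch.zeros(4, 65)
logits, h = model(x, h)
loss = criterion(logits, torch.randint(0, 65, (4,)))  # targets: indices, not one-hot
```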

What should the generator return if it is used in a multi-model functional API?

Submitted by ╄→гoц情女王★ on 2019-12-13 03:49:59
Question: Following this article, I'm trying to implement a generative RNN. In the article, the training and validation data are passed as fully loaded np.arrays. But I'm trying to use the model.fit_generator method and provide a generator instead. I know that if it were a straightforward model, the generator should return:

    def generator():
        ...
        yield (samples, targets)

But this is a generative model, which means there are two models involved:

    encoder_inputs = Input(shape=(None,))
    x = Embedding
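
When a functional-API model has several Input layers, fit_generator expects each yield from the generator to pair a list of input arrays (one per Input, in order) with the target array. A minimal sketch with illustrative names:

```python
import numpy as np

def generator(encoder_data, decoder_data, targets, batch_size):
    n = len(encoder_data)
    while True:                                  # Keras generators must loop forever
        for i in range(0, n, batch_size):
            yield ([encoder_data[i:i + batch_size],   # one array per Input layer
                    decoder_data[i:i + batch_size]],
                   targets[i:i + batch_size])

# model.fit_generator(generator(enc, dec, tgt, 32),
#                     steps_per_epoch=len(enc) // 32)
```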

GRU with the same configuration but built in two different ways produces two different outputs in TensorFlow

Submitted by 徘徊边缘 on 2019-12-11 14:40:04
Question: I would like to do some sequence prediction in TensorFlow using a GRU, so I have created the same model in two different ways, as follows. In model 1 I have two GRUs, one after the other; that is, new_state1, the final hidden state of the first GRU, acts as the initial state of the second GRU, so the model outputs new_state1 and then new_state2. Note that this is not a 2-layer model, but only 1 layer. In the code below, I divided the input and the output into 2 parts
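
A hedged sketch of the pattern described (assumed shapes). For the two constructions to agree, the second dynamic_rnn call must reuse the same weights as the first, otherwise the "one GRU in two halves" really is two different GRUs:

```python
import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 20, 8])   # [batch, time, features]
first_half, second_half = inputs[:, :10, :], inputs[:, 10:, :]

cell = tf.nn.rnn_cell.GRUCell(32)
with tf.variable_scope("gru"):
    out1, new_state1 = tf.nn.dynamic_rnn(cell, first_half, dtype=tf.float32)
with tf.variable_scope("gru", reuse=True):           # same weights: one layer, not two
    out2, new_state2 = tf.nn.dynamic_rnn(cell, second_half,
                                         initial_state=new_state1)
```

Dropout settings and initial states also have to match exactly, or the two builds will diverge even with shared weights.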

Data parallelism for RNN in TensorFlow

Submitted by 守給你的承諾、 on 2019-12-11 14:11:39
Question: Recently I have been using TensorFlow to develop an NMT system. I tried to train it on multiple GPUs using the data-parallelism method to speed it up, following the standard data-parallel approach widely used with TensorFlow. For example, to run on an 8-GPU machine: first, we construct a large batch that is 8 times the size of the batch used on a single GPU; then we split this large batch equally into 8 mini-batches and train them separately on different GPUs. In the end, we collect
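
For context, a condensed sketch of the standard TF 1.x "tower" pattern the question describes: split the large batch across GPUs, build the model once per GPU with shared variables, and average the per-tower gradients into a single update. build_loss and the shapes are stand-ins, not the asker's NMT graph.

```python
import tensorflow as tf

def build_loss(batch):
    # stand-in for the real NMT graph: one shared dense layer
    pred = tf.layers.dense(batch, 1, name="proj")
    return tf.reduce_mean(tf.square(pred))

num_gpus = 8
big_batch = tf.placeholder(tf.float32, [64, 16])   # 8x the single-GPU batch of 8
splits = tf.split(big_batch, num_gpus, axis=0)     # one mini-batch per GPU

optimizer = tf.train.AdamOptimizer(1e-3)
tower_grads = []
for i, mini in enumerate(splits):
    with tf.device("/gpu:%d" % i), tf.variable_scope("model", reuse=(i > 0)):
        tower_grads.append(optimizer.compute_gradients(build_loss(mini)))

# average each variable's gradient across the towers, then apply once
avg_grads = []
for grads_and_vars in zip(*tower_grads):
    grads = tf.stack([g for g, _ in grads_and_vars])
    avg_grads.append((tf.reduce_mean(grads, axis=0), grads_and_vars[0][1]))
train_op = optimizer.apply_gradients(avg_grads)
```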

Keras GRUCell missing 1 required positional argument: 'states'

Submitted by 女生的网名这么多〃 on 2019-12-11 10:48:56
Question: I am trying to build a 3-layer RNN with Keras. Part of the code is here:

    model = Sequential()
    model.add(Embedding(input_dim = 91, output_dim = 128, input_length = max_length))
    model.add(GRUCell(units = self.neurons, dropout = self.dropval, bias_initializer = bias))
    model.add(GRUCell(units = self.neurons, dropout = self.dropval, bias_initializer = bias))
    model.add(GRUCell(units = self.neurons, dropout = self.dropval, bias_initializer = bias))
    model.add(TimeDistributed(Dense(target.shape[2])))

Then I
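
The usual explanation for this error: GRUCell computes a single time step and is called as cell(inputs, states), so Sequential cannot use it as a layer directly; either wrap the cell in keras.layers.RNN or use the full GRU layer. A hedged sketch of the latter, with the asker's self.neurons, self.dropval, and output dimension replaced by assumed concrete values:

```python
from keras.models import Sequential
from keras.layers import Embedding, GRU, TimeDistributed, Dense

model = Sequential()
model.add(Embedding(input_dim=91, output_dim=128, input_length=50))  # assumed length
model.add(GRU(units=64, dropout=0.2, return_sequences=True))
model.add(GRU(units=64, dropout=0.2, return_sequences=True))
model.add(GRU(units=64, dropout=0.2, return_sequences=True))  # keep the time axis
model.add(TimeDistributed(Dense(10)))                         # assumed output dim
```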

TensorFlow - Op type not registered 'CudnnRNN'

Submitted by 风流意气都作罢 on 2019-12-11 09:51:51
Question: I am new to TensorFlow and trying to set it up. When I try to train a model using CuDNNGRU, it seems to load correctly and then gives an error:

    tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'CudnnRNN'

I do see a Cudnn_rnn directory in tensorflow/contrib, for what that is worth. I have Python 3.6 and VS2013. I have tried the following, but am still getting the error: both CUDA 8 and 9; uninstalling/reinstalling tensorflow/Theano/Keras/TensorFlow. Honestly the setup seems
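
The 'CudnnRNN' op is compiled only into the GPU build of TensorFlow, so a CPU-only install (or a GPU build whose CUDA/cuDNN libraries fail to load) cannot run graphs containing CuDNNGRU. A hedged sketch of a runtime check with a fallback, assuming the tf.keras layer variants:

```python
import tensorflow as tf

# CuDNNGRU emits the GPU-only 'CudnnRNN' op; plain GRU runs anywhere.
if tf.test.is_gpu_available(cuda_only=True):
    rnn = tf.keras.layers.CuDNNGRU(64)
else:
    rnn = tf.keras.layers.GRU(64)
```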

What is the difference between 'call' and '__call__' in the TensorFlow BasicLSTMCell implementation?

Submitted by ε祈祈猫儿з on 2019-12-11 08:42:29
Question: I am studying TensorFlow's BasicLSTMCell and found that there are two similar methods within the class: __call__ and call. The two methods have the same parameters, and the documentation does not explain the difference; referring to the source code does not give me any clue either. I am guessing that the __call__ method is inherited from somewhere and that call overrides __call__. If this is the case, why not just use __call__ instead of call in the source code?
Answer 1: I ran into a similar problem
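
The short version of the usual answer: __call__ is inherited from the base layer class, which wraps the subclass's call with bookkeeping such as scope and weight handling, so subclasses implement only call. A minimal illustration of the pattern in plain Python (not TensorFlow's actual source):

```python
class Layer:
    def __call__(self, inputs):
        # framework bookkeeping would go here (scopes, building weights, ...)
        print("entering scope")
        return self.call(inputs)

class BasicLSTMCellLike(Layer):
    def call(self, inputs):
        return inputs * 2          # the cell's actual math lives in call()

cell = BasicLSTMCellLike()
print(cell(21))                    # prints "entering scope", then 42
```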

ValueError: ConvLSTMCell and dynamic_rnn

Submitted by 泄露秘密 on 2019-12-11 07:32:26
Question: I'm trying to build a seq2seq model in TensorFlow (1.4) using the tf.contrib.rnn.ConvLSTMCell API together with the tf.nn.dynamic_rnn API, but I get an error about the dimensions of the inputs. My code is:

    # features is an image sequence with shape [600, 400, 10],
    # so features is a tensor with shape [batch_size, 600, 400, 10]
    features = tf.transpose(features, [0,3,1,2])
    features = tf.reshape(features, [params['batch_size'],10,600,400])
    encoder_cell = tf.contrib.rnn.ConvLSTMCell(conv_ndims=2,
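
In TF 1.4, ConvLSTMCell's input_shape argument is the per-time-step shape [height, width, channels], and tf.nn.dynamic_rnn then expects a 5-D tensor [batch, time, height, width, channels]. A hedged sketch under the assumption that the 10 frames are the time axis, so each step is one single-channel 600x400 image:

```python
import tensorflow as tf

batch_size = 2
features = tf.placeholder(tf.float32, [batch_size, 600, 400, 10])

x = tf.transpose(features, [0, 3, 1, 2])   # [batch, time=10, 600, 400]
x = tf.expand_dims(x, -1)                  # [batch, 10, 600, 400, 1]: add channels

cell = tf.contrib.rnn.ConvLSTMCell(conv_ndims=2,
                                   input_shape=[600, 400, 1],  # per-step shape
                                   output_channels=8,
                                   kernel_shape=[3, 3])
outputs, state = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)
```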