Question:
I am trying to build a 3-layer RNN with Keras. Here is part of the code:
model = Sequential()
model.add(Embedding(input_dim = 91, output_dim = 128, input_length =max_length))
model.add(GRUCell(units = self.neurons, dropout = self.dropval, bias_initializer = bias))
model.add(GRUCell(units = self.neurons, dropout = self.dropval, bias_initializer = bias))
model.add(GRUCell(units = self.neurons, dropout = self.dropval, bias_initializer = bias))
model.add(TimeDistributed(Dense(target.shape[2])))
Then I got this error:
call() missing 1 required positional argument: 'states'
The error details are as follows:
~/anaconda3/envs/hw3/lib/python3.5/site-packages/keras/models.py in add(self, layer)
487 output_shapes=[self.outputs[0]._keras_shape])
488 else:
--> 489 output_tensor = layer(self.outputs[0])
490 if isinstance(output_tensor, list):
491 raise TypeError('All layers in a Sequential model '
~/anaconda3/envs/hw3/lib/python3.5/site-packages/keras/engine/topology.py in __call__(self, inputs, **kwargs)
601
602 # Actually call the layer, collecting output(s), mask(s), and shape(s).
--> 603 output = self.call(inputs, **kwargs)
604 output_mask = self.compute_mask(inputs, previous_mask)
605
Answer 1:
Don't use the Cell classes (i.e. GRUCell or LSTMCell) in Keras directly. They are computation cells that are wrapped by the corresponding layers. Instead, use the Layer classes (i.e. GRU or LSTM):

model.add(GRU(units = self.neurons, dropout = self.dropval, bias_initializer = bias))
model.add(GRU(units = self.neurons, dropout = self.dropval, bias_initializer = bias))
model.add(GRU(units = self.neurons, dropout = self.dropval, bias_initializer = bias))

The LSTM and GRU layers use their corresponding cells to perform the computation over all timesteps. Read this SO answer to learn more about the difference.

Further, when you stack multiple RNN layers on top of each other, you need to set their return_sequences argument to True in order to produce the output of each timestep, which in turn is consumed by the next RNN layer. Note that you may or may not do this on the last RNN layer (it depends on your architecture and the problem you are trying to solve):

model.add(GRU(units = self.neurons, dropout = self.dropval, bias_initializer = bias, return_sequences=True))
model.add(GRU(units = self.neurons, dropout = self.dropval, bias_initializer = bias, return_sequences=True))
model.add(GRU(units = self.neurons, dropout = self.dropval, bias_initializer = bias))
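Putting the pieces together, here is a minimal runnable sketch of the corrected model. The concrete values (neurons=64, dropval=0.2, max_length=20, an output dimension of 91) are placeholder assumptions, not from the original post, and the last GRU keeps return_sequences=True here because the TimeDistributed(Dense) head expects a 3-D (batch, timesteps, features) input:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GRU, TimeDistributed, Dense

# Placeholder hyperparameters (assumptions for illustration).
neurons, dropval, max_length = 64, 0.2, 20

model = Sequential()
model.add(Embedding(input_dim=91, output_dim=128))
# Stacked GRU *layers* (not GRUCell): intermediate layers must return the
# full sequence so the next layer receives one vector per timestep.
model.add(GRU(units=neurons, dropout=dropval, return_sequences=True))
model.add(GRU(units=neurons, dropout=dropval, return_sequences=True))
# The last GRU also returns sequences because TimeDistributed(Dense)
# applies the Dense layer to every timestep independently.
model.add(GRU(units=neurons, dropout=dropval, return_sequences=True))
model.add(TimeDistributed(Dense(91)))

# Dummy batch of 2 integer-encoded sequences.
x = np.random.randint(0, 91, size=(2, max_length))
print(model(x).shape)  # (2, 20, 91)
```

If instead you only need a single prediction per sequence (e.g. classification), drop return_sequences from the last GRU and replace the TimeDistributed wrapper with a plain Dense layer.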
Source: https://stackoverflow.com/questions/51254706/keras-grucell-missing-1-required-positional-argument-states