How to use tensorflow seq2seq without embeddings?

Submitted by 一世执手 on 2019-11-29 15:13:48

Question


I have been working on LSTMs for time-series forecasting using TensorFlow. Now I want to try sequence-to-sequence (seq2seq). The official site has a tutorial showing NMT with embeddings. So, how can I use this new seq2seq module without embeddings (directly on time-series sequences)?

import tensorflow as tf

# 1. Encoder
encoder_cell = tf.contrib.rnn.BasicLSTMCell(LSTM_SIZE)
encoder_outputs, encoder_state = tf.nn.static_rnn(
  encoder_cell,
  x,
  dtype=tf.float32)

# 2. Decoder
decoder_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_SIZE)


helper = tf.contrib.seq2seq.TrainingHelper(
    decoder_emb_inp, decoder_lengths, time_major=True)


decoder = tf.contrib.seq2seq.BasicDecoder(
  decoder_cell, helper, encoder_state)

# Dynamic decoding
outputs, _ = tf.contrib.seq2seq.dynamic_decode(decoder)
outputs = outputs[-1]

# output is result of linear activation of last layer of RNN
weight = tf.Variable(tf.random_normal([LSTM_SIZE, N_OUTPUTS]))
bias = tf.Variable(tf.random_normal([N_OUTPUTS]))
predictions = tf.matmul(outputs, weight) + bias

What should the arguments to TrainingHelper() be if I use input_seq=x and output_seq=label?

decoder_emb_inp ??? decoder_lengths ???

Here input_seq is the first 8 points of the sequence, and output_seq is the last 2 points. Thanks in advance!
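To make the shapes concrete, that split might look roughly like this (a sketch, assuming each training window has 10 points and dim = 1; the name series is only for illustration):

# Each window has 10 points: the first 8 feed the encoder, the last 2 are the targets.
x = series[:, :8, :]      # input_seq, shape [batch_size, 8, 1]
label = series[:, 8:, :]  # output_seq, shape [batch_size, 2, 1]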


Answer 1:


I got it to work without embeddings using a very rudimentary InferenceHelper:

from tensorflow.python.framework import dtypes  # dtypes.float32 is the same as tf.float32

inference_helper = tf.contrib.seq2seq.InferenceHelper(
        sample_fn=lambda outputs: outputs,
        sample_shape=[dim],
        sample_dtype=dtypes.float32,
        start_inputs=start_tokens,
        end_fn=lambda sample_ids: False)

My inputs are floats with shape [batch_size, time, dim]. For the example below, dim is 1, but this can easily be extended to more dimensions. Here's the relevant part of the code:

projection_layer = tf.layers.Dense(
    units=1,  # = dim
    kernel_initializer=tf.truncated_normal_initializer(
        mean=0.0, stddev=0.1))

# Training Decoder
training_decoder_output = None
with tf.variable_scope("decode"):
    # target_data doesn't exist during the prediction phase.
    if target_data is not None:
        # Prepend the "go" token
        go_tokens = tf.constant(go_token, shape=[batch_size, 1, 1])
        dec_input = tf.concat([go_tokens, target_data], axis=1)

        # Helper for the training process.
        training_helper = tf.contrib.seq2seq.TrainingHelper(
            inputs=dec_input,
            sequence_length=[output_size] * batch_size)

        # Basic decoder
        training_decoder = tf.contrib.seq2seq.BasicDecoder(
            dec_cell, training_helper, enc_state, projection_layer)

        # Perform dynamic decoding using the decoder
        training_decoder_output = tf.contrib.seq2seq.dynamic_decode(
            training_decoder, impute_finished=True,
            maximum_iterations=output_size)[0]

# Inference Decoder
# Reuses the same parameters trained by the training process.
with tf.variable_scope("decode", reuse=tf.AUTO_REUSE):
    start_tokens = tf.constant(
        go_token, shape=[batch_size, 1])

    # The sample_ids are the actual output in this case (not dealing with any logits here).
    # My end_fn is always False because I'm working with a generator that will stop giving 
    # more data. You may extend the end_fn as you wish. E.g. you can append end_tokens 
    # and make end_fn be true when the sample_id is the end token.
    inference_helper = tf.contrib.seq2seq.InferenceHelper(
        sample_fn=lambda outputs: outputs,
        sample_shape=[1],  # again because dim=1
        sample_dtype=dtypes.float32,
        start_inputs=start_tokens,
        end_fn=lambda sample_ids: False)

    # Basic decoder
    inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
                                                        inference_helper,
                                                        enc_state,
                                                        projection_layer)

    # Perform dynamic decoding using the decoder
    inference_decoder_output = tf.contrib.seq2seq.dynamic_decode(
        inference_decoder, impute_finished=True,
        maximum_iterations=output_size)[0]
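
The first element returned by dynamic_decode is a BasicDecoderOutput whose rnn_output field holds the projected values. A minimal sketch of turning that into a regression loss and into forecasts (assuming target_data has shape [batch_size, output_size, 1], as above):

# Training: compare the decoder's projected outputs against the targets.
train_preds = training_decoder_output.rnn_output      # [batch_size, output_size, 1]
loss = tf.losses.mean_squared_error(labels=target_data, predictions=train_preds)
train_op = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)

# Inference: the forecast is the same field of the inference decoder's output.
test_preds = inference_decoder_output.rnn_output      # [batch_size, output_size, 1]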

Have a look at this question. I also found this tutorial very useful for understanding seq2seq models, although it does use embeddings. Just replace their GreedyEmbeddingHelper with an InferenceHelper like the one posted above.
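
For completeness, the snippets above assume an encoder state enc_state and a decoder cell dec_cell coming from somewhere. A minimal sketch of that part, reusing LSTM_SIZE from the question (input_data is a placeholder name for the [batch_size, time, dim] input, not part of the original answer):

import tensorflow as tf

# Encoder: a plain LSTM over the input time series, no embeddings needed.
enc_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_SIZE)
enc_outputs, enc_state = tf.nn.dynamic_rnn(enc_cell, input_data, dtype=tf.float32)

# Decoder cell; enc_state is passed to BasicDecoder as its initial state above.
dec_cell = tf.nn.rnn_cell.BasicLSTMCell(LSTM_SIZE)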

P.S. I posted the full code at https://github.com/Andreea-G/tensorflow_examples



Source: https://stackoverflow.com/questions/49134432/how-to-use-tensorflow-seq2seq-without-embeddings
