How to use the Tensorflow Dataset Pipeline for Variable Length Inputs?

Submitted by 风流意气都作罢 on 2021-02-06 12:49:53

Question


I am training a recurrent neural network in TensorFlow on a dataset of number sequences of varying lengths, and I have been trying to use the tf.data API to create an efficient input pipeline. However, I can't seem to get it to work.

My approach

My dataset is a NumPy array of shape [10000, ?, 32, 2], saved on disk as a .npy file. Here ? denotes that the elements have variable length along the second dimension, 10000 is the number of mini-batches in the dataset, and 32 is the size of each mini-batch.

I open this dataset with np.load and try to create a tf.data.Dataset object using the from_tensor_slices method, but it seems that this only works if all the input tensors have the same shape!

I tried reading the docs, but they only give a very simple example.

My code

So the .npy file has been generated as follows -

dataset = []
for i in xrange(num_items):
  # add an element of shape [?, 32, 2] to the list, where `?` takes
  # a random value in [1, 40]
  dataset.append(generate_random_rnn_input())

# np.save writes binary data, so the file is opened in 'wb' mode
with open('data.npy', 'wb') as f:
  np.save(f, dataset)
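
Note that since the elements have different lengths, NumPy can only store this list as a dtype=object array, so the file is pickled under the hood. On NumPy >= 1.16.3, np.load therefore refuses it unless allow_pickle=True is passed. A small sketch of reading it back (assuming the file written above):

# Loading the ragged dataset back: the result is an object array of
# shape (num_items,) whose elements are arrays of shape [?, 32, 2].
# Newer NumPy versions require opting in to pickle explicitly:
dataset_list = np.load('data.npy', allow_pickle=True)
print(dataset_list.dtype)      # object -- not a regular numeric dtype
print(dataset_list[0].shape)   # e.g. (17, 32, 2)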

The code given below is my attempt to create a tf.data.Dataset object

# dataset_list is a list containing `num_items` items,
# each of shape [?, 32, 2]
dataset_list = np.load('data.npy')

# error, this doesn't work!
dataset = tf.data.Dataset.from_tensor_slices(dataset_list)

The error I get is "TypeError: Expected binary or unicode string, got array([[[0.0875, 0. ], ..."

Continued, still need help!

So I tried @mrry's answer, and I am now able to create a Dataset object. However, I am not able to iterate through this dataset using iterators as described in the tutorial. This is what my code looks like now -

dataset_list = np.load('data.npy')

dataset = tf.data.Dataset.from_generator(lambda: dataset_list, 
                                         dataset_list[0].dtype,
                                         tf.TensorShape([None, 32, 2]))

dataset = dataset.map(lambda x : tf.cast(x, tf.float32))

iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()

with tf.Session() as sess:
  print sess.run(next_element) # The code fails on this line

The error I get is AttributeError: 'numpy.dtype' object has no attribute 'as_numpy_dtype'. I have absolutely no clue what this means.

This is the complete stack trace -

2018-05-15 04:19:25.559922: W tensorflow/core/framework/op_kernel.cc:1261] Unknown: exceptions.AttributeError: 'numpy.dtype' object has no attribute 'as_numpy_dtype'
Traceback (most recent call last):

  File "/home/vastolorde95/virtualenvs/thesis/local/lib/python2.7/site-packages/tensorflow/python/ops/script_ops.py", line 147, in __call__
    ret = func(*args)

  File "/home/vastolorde95/virtualenvs/thesis/local/lib/python2.7/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 378, in generator_py_func
    nest.flatten_up_to(output_types, values), flattened_types)

AttributeError: 'numpy.dtype' object has no attribute 'as_numpy_dtype'


2018-05-15 04:19:25.559989: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at iterator_ops.cc:891 : Unknown: exceptions.AttributeError: 'numpy.dtype' object has no attribute 'as_numpy_dtype'
(traceback identical to the one above)


     [[Node: PyFunc = PyFunc[Tin=[DT_INT64], Tout=[DT_DOUBLE], token="pyfunc_1"](arg0)]]
Traceback (most recent call last):
  File "pipeline_test.py", line 320, in <module>
    tf.app.run()
  File "/home/vastolorde95/virtualenvs/thesis/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 126, in run
    _sys.exit(main(argv))
  File "pipeline_test.py", line 316, in main
    train(FLAGS.num_training_iterations, FLAGS.report_interval, FLAGS.report_interval_verbose)
  File "pipeline_test.py", line 120, in train
    print(sess.run(next_element))
  File "/home/vastolorde95/virtualenvs/thesis/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 905, in run
    run_metadata_ptr)
  File "/home/vastolorde95/virtualenvs/thesis/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1140, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/vastolorde95/virtualenvs/thesis/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1321, in _do_run
    run_metadata)
  File "/home/vastolorde95/virtualenvs/thesis/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1340, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: exceptions.AttributeError: 'numpy.dtype' object has no attribute 'as_numpy_dtype'
(traceback identical to the one above)


     [[Node: PyFunc = PyFunc[Tin=[DT_INT64], Tout=[DT_DOUBLE], token="pyfunc_1"](arg0)]]
     [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,32,2]], output_types=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](OneShotIterator)]]

Answer 1:


As you have noticed, tf.data.Dataset.from_tensor_slices() only works on objects that can be converted to a (dense) tf.Tensor or a tf.SparseTensor, which rules out an object array of variable-length elements. The easiest way to get variable-length NumPy data into a Dataset is to use tf.data.Dataset.from_generator(). The catch in your updated code is the second argument: from_generator() expects a tf.DType for output_types, but dataset_list[0].dtype is a numpy.dtype, which is why TensorFlow fails when it looks for as_numpy_dtype. Wrapping it in tf.as_dtype() fixes it:

dataset = tf.data.Dataset.from_generator(lambda: dataset_list, 
                                         tf.as_dtype(dataset_list[0].dtype),
                                         tf.TensorShape([None, 32, 2]))
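
Putting it together, here is a minimal runnable sketch (TF 1.x style, matching the question; randomly generated data stands in for the real data.npy) showing that the one-shot iterator now works end to end:

import numpy as np
import tensorflow as tf

# Stand-in for np.load('data.npy'): a list of float64 arrays whose
# first dimension varies, matching the question's [?, 32, 2] elements.
dataset_list = [np.random.rand(np.random.randint(1, 41), 32, 2)
                for _ in range(10)]

dataset = tf.data.Dataset.from_generator(lambda: dataset_list,
                                         tf.as_dtype(dataset_list[0].dtype),
                                         tf.TensorShape([None, 32, 2]))

# cast to float32, as in the question
dataset = dataset.map(lambda x: tf.cast(x, tf.float32))

iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()

with tf.Session() as sess:
  print(sess.run(next_element).shape)  # e.g. (17, 32, 2)

Since each stored element is already a [length, 32, 2] mini-batch, no further batching is needed before feeding it to the RNN.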


Source: https://stackoverflow.com/questions/50329855/how-to-use-the-tensorflow-dataset-pipeline-for-variable-length-inputs
