I have a 1D tensor that I wish to partition into overlapping blocks. I'm thinking of something like:
tensor = tf.constant([1, 2, 3, 4, 5, 6, 7])
If you're looking for a way to get each rolling window as an individual tensor (i.e. each time you call window.eval(), your window moves one over), you can use tf.FIFOQueue together with tf.train.range_input_producer to build a queue that does this:
EDIT: updated to work with variable length tensors as requested in your original question
def window_input_producer(tensor, window_size, capacity=32, num_epochs=None):
    # A window starts at each index 0 .. len - window_size, so there are
    # len - window_size + 1 windows in total.
    num_windows = tf.shape(tensor)[0] - window_size + 1

    # Produces the window start indices 0, 1, 2, ... in order.
    range_queue = tf.train.range_input_producer(
        num_windows,
        shuffle=False,
        capacity=capacity,
        num_epochs=num_epochs
    )
    index = range_queue.dequeue()
    window = tensor[index:index + window_size]

    # Each enqueue slices one window out of the tensor; a QueueRunner
    # keeps the queue filled in a background thread.
    queue = tf.FIFOQueue(capacity=capacity,
                         dtypes=[tensor.dtype.base_dtype],
                         shapes=[window_size])
    enq = queue.enqueue(window)
    tf.train.add_queue_runner(
        tf.train.QueueRunner(queue, [enq])
    )
    return queue.dequeue()
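If you just want to sanity-check what the windows should look like (or don't need the streaming queue machinery at all), here is a minimal NumPy sketch of the same overlapping partition using stride tricks; `sliding_windows` is a hypothetical helper name, not part of the TensorFlow code above:

```python
import numpy as np

def sliding_windows(arr, window_size):
    # Build a (num_windows, window_size) view over the 1-D array using
    # stride tricks; each row starts one element after the previous one.
    arr = np.asarray(arr)
    num_windows = arr.shape[0] - window_size + 1
    step = arr.strides[0]
    return np.lib.stride_tricks.as_strided(
        arr,
        shape=(num_windows, window_size),
        strides=(step, step),
    )

windows = sliding_windows([1, 2, 3, 4, 5, 6, 7], 3)
# First window is [1, 2, 3], the next [2, 3, 4], and so on up to [5, 6, 7].
```

Note that as_strided returns a view into the original buffer, so copy it (windows.copy()) before writing to it.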