I'm trying to unstack a Tensor because I need a sequence as input for the RNN. I am using variable sequence lengths which prevents me from correctly using tf.unstack.
```python
def MapToSequences(x):
    # x.get_shape().as_list() = [64, 1, None, 512]
    x = tf.squeeze(x)
    # tf.shape(x) = [None, None, None], at runtime would be [64, seqlen, 512]
    x = tf.transpose(x, perm=[1, 0, 2])  # [seqlen, 64, 512]
    # Here I'd like to unstack with seqlen as num
    x = tf.unstack(x)  # Cannot infer num from shape (?, ?, ?)
    return x
```
I tried using tf.shape(x) to infer the seqlen and use it as num, but I get Expected int for argument 'num' not <tf.Tensor 'strided_slice:0' shape=() dtype=int32>
This may be answered elsewhere, but here is an answer anyway. You cannot use tf.unstack with a non-inferrable dimension. This is because of how TensorFlow is designed: a computation graph defines transformations of Tensors, where each operation adds a node and each Tensor is an edge between nodes. When you tf.unstack a Tensor you generate multiple new Tensors (edges). If the number of tensors created by a tf.unstack operation were undefined, the computation graph would have an undefined number of edges, which is not allowed. Operations that don't add multiple new edges to the graph (most operations) are allowed to have input Tensors with inferred dimensions.
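To make the "number of edges must be static" point concrete, here is a NumPy analogy (`unstack_np` is a hypothetical helper written for illustration, not part of TensorFlow or NumPy). NumPy can always "unstack" because an array carries a concrete shape at call time, whereas a graph-mode Tensor's dimension may still be unknown:

```python
import numpy as np

def unstack_np(x, axis=0):
    # The number of outputs equals the size of `axis`. NumPy always knows
    # this size because arrays carry concrete shapes; tf.unstack needs the
    # same number (`num`) as a static Python int at graph-construction time.
    return [np.squeeze(s, axis=axis)
            for s in np.split(x, x.shape[axis], axis=axis)]

x = np.zeros((7, 64, 512))        # seqlen=7 is concrete here
seqs = unstack_np(x)              # one output per time step
print(len(seqs), seqs[0].shape)   # → 7 (64, 512)
```

If `x.shape[axis]` were unknown (as with a `(?, ?, ?)` Tensor), the list comprehension above could not even decide how many elements to produce; that is exactly the situation tf.unstack refuses.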
To get around this you have two choices, both useful for batched operations, i.e. when you are trying to tf.unstack a Tensor with dimensions (batch_size, ...) and batch_size is not statically known.
Choice 1
I would use the batch_shape argument to keras.topology.Input. The weight Tensors produced will always be interchangeable with those of another model generated with a different batch_size.
Unless you need access to the computation graph with that non-inferrable dimension, there is no reason not to take this route.
Choice 2
A second option, in the case when you know a maximal batch_size, is to use tf.dynamic_partition.
```python
tensor = tf.placeholder(tf.float32, shape=(None, 10))
partitions = tf.range(max_batch_size)
num_partitions = max_batch_size
partitioned = tf.dynamic_partition(tensor, partitions, num_partitions,
                                   name='dynamic_unstack')
```
When you actually feed a batch_size it will produce unstacked Tensors for the first batch_size indices, and empty Tensors for the rest.
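To illustrate those semantics without a TF1 session, here is a NumPy sketch (`dynamic_partition_np` is a hypothetical stand-in that mimics tf.dynamic_partition's row-routing behavior):

```python
import numpy as np

def dynamic_partition_np(data, partitions, num_partitions):
    # Mimics tf.dynamic_partition: row i of `data` is routed to the
    # output list at index partitions[i].
    return [data[partitions == i] for i in range(num_partitions)]

max_batch_size = 5
batch_size = 3  # the batch actually fed at runtime
data = np.arange(batch_size * 10, dtype=np.float32).reshape(batch_size, 10)

# Tag row i with partition index i, as tf.range does in the graph above.
partitions = np.arange(batch_size)
parts = dynamic_partition_np(data, partitions, max_batch_size)

print([p.shape for p in parts])
# → [(1, 10), (1, 10), (1, 10), (0, 10), (0, 10)]
```

The first batch_size outputs each hold one row of the batch, and the remaining max_batch_size - batch_size outputs are empty, matching the behavior described above.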