how do I use a very large (>2M) word embedding in tensorflow?
Question: I am running a model with a very large word embedding (>2M words). When I use tf.embedding_lookup, it expects the full embedding matrix, which is large, and I subsequently get an out-of-GPU-memory error. If I reduce the size of the embedding, everything works fine. Is there a way to deal with a larger embedding?

Answer 1: The recommended way is to use a partitioner to shard this large tensor across several parts:

embedding = tf.get_variable("embedding", [1000000000, 20], partitioner=tf.fixed_size_partitioner(3))
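
A minimal sketch of how such a partitioned embedding might be created and looked up (TF 1.x API; the vocabulary size, embedding dimension, shard count, and placeholder name below are illustrative assumptions, not taken from the question):

import tensorflow as tf

vocab_size = 2000000   # assumed vocabulary size (>2M words)
embed_dim = 20         # assumed embedding dimension
num_shards = 3         # assumed number of partitions

# The partitioner splits the variable along its first dimension into
# num_shards pieces, so no single variable holds the whole matrix.
embedding = tf.get_variable(
    "embedding",
    shape=[vocab_size, embed_dim],
    partitioner=tf.fixed_size_partitioner(num_shards))

# tf.nn.embedding_lookup accepts a partitioned variable and gathers
# rows from the appropriate shard for each id.
word_ids = tf.placeholder(tf.int32, shape=[None])
embedded = tf.nn.embedding_lookup(embedding, word_ids)

Sharding this way mainly changes how the variable is stored and saved; the lookup call itself stays the same as with a single dense embedding matrix.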