Why are Embeddings in PyTorch implemented as Sparse Layers?

Submitted by 泪湿孤枕 on 2021-02-07 08:28:00

Question


Embedding Layers in PyTorch are listed under "Sparse Layers" with the limitation:

Keep in mind that only a limited number of optimizers support sparse gradients: currently it’s optim.SGD (cuda and cpu), and optim.Adagrad (cpu)

What is the reason for this? For example in Keras I can train an architecture with an Embedding Layer using any optimizer.


Answer 1:


Upon closer inspection, sparse gradients on Embeddings are optional and can be turned on or off with the sparse parameter:

class torch.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2, scale_grad_by_freq=False, sparse=False)

Where:

sparse (boolean, optional) – if True, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for more details regarding sparse gradients.

And the "Notes" mentioned are what I quoted in the question about a limited number of optimizers being supported for sparse gradients.
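To see what this means in practice, here is a minimal sketch showing that with sparse=True the gradient of the embedding weight comes back as a sparse tensor, which one of the supported optimizers (optim.SGD here) can then apply:

```python
import torch
import torch.nn as nn

# Embedding with sparse gradients enabled
emb = nn.Embedding(num_embeddings=10, embedding_dim=4, sparse=True)

ids = torch.tensor([1, 2, 5])   # only these rows will receive gradient
out = emb(ids)
out.sum().backward()

print(emb.weight.grad.is_sparse)  # True: gradient is a sparse tensor

# SGD is one of the optimizers that accepts sparse gradients
opt = torch.optim.SGD(emb.parameters(), lr=0.1)
opt.step()
```

Swapping optim.SGD for an unsupported optimizer would raise a runtime error at step(), which is exactly the limitation the Notes describe.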

Update:

It is theoretically possible but technically difficult to implement some optimization methods on sparse gradients. There is an open issue in the PyTorch repo to add support for all optimizers.

Regarding the original question, I believe Embeddings can be treated as sparse because the lookup can operate on the input indices directly rather than converting them to one-hot encodings for input into a dense layer; only the rows actually indexed receive nonzero gradients. This is explained in @Maxim's answer to my related question.
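The equivalence between the index lookup and the one-hot-times-dense formulation can be checked directly. This is a small sketch (not from the answer itself) comparing the two:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

emb = nn.Embedding(num_embeddings=10, embedding_dim=4)
ids = torch.tensor([1, 2, 5])

# 1) Direct index lookup, as nn.Embedding does internally
lookup = emb(ids)

# 2) Equivalent dense formulation: one-hot vectors times the weight matrix
one_hot = F.one_hot(ids, num_classes=10).float()
dense = one_hot @ emb.weight

print(torch.allclose(lookup, dense))  # True
```

Because each one-hot row selects exactly one weight row, the gradient w.r.t. the weight matrix is zero everywhere except those rows, which is why representing it as a sparse tensor is natural.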



Source: https://stackoverflow.com/questions/47868341/why-are-embeddings-in-pytorch-implemented-as-sparse-layers
