What is the difference between an Embedding Layer and a Dense Layer?
Question:

The docs for an Embedding layer in Keras say:

    Turns positive integers (indexes) into dense vectors of fixed size. e.g. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]

I believe this could also be achieved by encoding the inputs as one-hot vectors of length vocabulary_size and feeding them into a Dense layer. Is an Embedding layer merely a convenience for this two-step process, or is something fancier going on under the hood?

Answer 1:

Mathematically, the difference is this: an embedding layer performs a lookup — input index i selects row i of the layer's weight matrix directly. A Dense layer instead performs a matrix multiplication (plus a bias, and optionally an activation). Multiplying a one-hot vector by a weight matrix picks out exactly one row of that matrix, so the two are equivalent when the Dense layer has no bias and a linear activation; the embedding layer simply skips the wasteful multiply-by-mostly-zeros and does the row selection directly.
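A minimal NumPy sketch of this equivalence (the sizes and index values here are illustrative, not from Keras):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 10, 4

# A shared weight matrix, standing in for what either layer would learn.
W = rng.normal(size=(vocab_size, embed_dim))

indices = np.array([4, 7, 1])

# Embedding layer: a plain row lookup into the weight matrix.
embedded = W[indices]

# One-hot encoding fed to a bias-free, linear Dense layer: a matrix multiply.
one_hot = np.eye(vocab_size)[indices]
dense_out = one_hot @ W

# Both paths produce the same vectors.
assert np.allclose(embedded, dense_out)
```

The lookup avoids materializing the one-hot vectors and multiplying by mostly zeros, which matters when vocabulary_size is large.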