I am sorry for my naivety, but I don't understand why word embeddings that are the result of an NN training process (word2vec) are actually vectors.
Embedding is the process of mapping each word to a vector of real numbers.
Each word is mapped to a point in a d-dimensional space (d is usually 300 or 600, though it doesn't have to be), so it is called a vector: every point in a d-dimensional space is nothing but a vector in that space.
These points have some nice properties: words with similar meanings tend to lie close to each other, where proximity is measured by the cosine distance (or cosine similarity) between two word vectors.
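For concreteness, here is a minimal sketch with made-up 4-dimensional vectors; the words and numbers are purely illustrative, and a real word2vec model would learn ~300-dimensional values during training rather than having them hand-picked:

```python
import numpy as np

# Hypothetical embedding table: each word is mapped to a point (vector)
# in a d-dimensional space. The values here are invented for illustration.
embeddings = {
    "king":  np.array([0.8, 0.1, 0.7, 0.2]),
    "queen": np.array([0.7, 0.2, 0.8, 0.1]),
    "apple": np.array([0.1, 0.9, 0.0, 0.6]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors; closer to 1 means more similar."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # lower: unrelated words
```

Cosine distance is just 1 minus this similarity, so "close" words have a small cosine distance.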