word2vec is an open-source tool by Google:
For each word it provides a vector of float values. What exactly do they represent?
There is also a paper on paragraph vectors; how is word2vec used to obtain a fixed-length vector for a sentence or paragraph?
Fixed-width contexts for each word are used as input to a neural network. The output of the network is a vector of float values, also known as the word embedding, of a given dimension (typically 50 or 100). The network is trained so that it produces good word embeddings for the given training corpus.
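For a concrete picture, here is a minimal sketch using the gensim library (assuming gensim 4.x; the toy corpus and the parameter values are made up purely for illustration):

```python
from gensim.models import Word2Vec

# Tiny toy corpus: each sentence is a list of tokens.
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "log"],
]

# window=2 means 2 words of context on each side of the target word;
# vector_size is the dimension of the learned embedding.
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, epochs=50)

vec = model.wv["cat"]        # a 50-dimensional float vector for "cat"
print(vec.shape)             # (50,)

# Words that end up close to "cat" in the embedding space.
print(model.wv.most_similar("cat", topn=2))
```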
One can easily come up with a fixed-size input for any word, say, M words to the left and N words to the right of it. How to do so for a sentence or paragraph, whose sizes vary, is not as apparent, or at least it wasn't at first. Without having read the paper, my guess is that one can combine the fixed-width embeddings of all the words in the sentence or paragraph to come up with a fixed-length vector embedding for it.
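A rough sketch of that "combine the word embeddings" idea is simple averaging of the word vectors, which is an assumption on my part rather than what the paragraph-vector paper actually does; it reuses the model from the sketch above:

```python
import numpy as np

def sentence_vector(model, tokens):
    # Average the vectors of the words we have embeddings for.
    vectors = [model.wv[t] for t in tokens if t in model.wv]
    if not vectors:
        return np.zeros(model.vector_size)
    return np.mean(vectors, axis=0)

sent_vec = sentence_vector(model, ["the", "cat", "sat", "on", "the", "mat"])
print(sent_vec.shape)  # (50,) regardless of how long the sentence is
```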