In the word2vec model, there are two linear transforms that take a word from vocab space to a hidden layer (the "in" vector), and then back to the vocab space (the "out" vector).
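To make that concrete, here is a toy NumPy sketch of the two transforms, with made-up sizes (V = vocab size, H = hidden dimension); it is an illustration of the architecture, not word2vec's actual training code:

```python
import numpy as np

V, H = 10_000, 100
W_in = np.random.randn(V, H) * 0.01   # "in" vectors  (syn0): vocab -> hidden
W_out = np.random.randn(V, H) * 0.01  # "out" vectors (syn1): hidden -> vocab

word_idx = 42                          # index of some input word
hidden = W_in[word_idx]                # look up its "in" vector
scores = W_out @ hidden                # dot against every "out" vector
probs = np.exp(scores - scores.max())  # softmax over the vocabulary
probs /= probs.sum()
```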
While this might not be a proper answer (I can't comment yet) and no one has pointed it out, take a look here. The creator seems to answer a similar question, and that's also the place where you have a better chance of getting an authoritative answer.
Digging around in the word2vec source code he links there, you could change the syn1 deletion to suit your needs. Just remember to delete syn1 once you're done, since it proves to be a memory hog.
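A minimal sketch of getting at both matrices in gensim; attribute names vary across versions (the "out" matrix is `syn1` with hierarchical softmax and `syn1neg` with negative sampling, and the vocab-lookup API changed in gensim 4), so treat this as an assumption-laden example rather than the canonical way:

```python
# Assumes gensim >= 4.0 naming; older versions expose the vocab index
# as model.wv.vocab[word].index instead of key_to_index.
from gensim.models import Word2Vec

sentences = [["the", "quick", "brown", "fox"],
             ["jumps", "over", "the", "lazy", "dog"]]

# negative=5 -> the "out" matrix is stored as syn1neg; with hs=1 it
# would be syn1 (hierarchical softmax) instead.
model = Word2Vec(sentences, min_count=1, negative=5)

idx = model.wv.key_to_index["fox"]   # vocab index of the word
in_vec = model.wv["fox"]             # row of the "in" matrix (syn0)
out_vec = model.syn1neg[idx]         # row of the "out" matrix

# The deletion mentioned above: older gensim's init_sims(replace=True)
# dropped syn1/syn1neg to save memory, so skip (or patch) that call
# for as long as you still need the "out" vectors.
```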