Ensure gensim generates the same Word2Vec model for different runs on the same data
The LDA model in gensim generates different topics every time I train on the same corpus; by setting np.random.seed(0), the LDA model will always be initialized and trained in exactly the same way. Is the same true for gensim's Word2Vec models? By setting the random seed to a constant, would different runs on the same dataset produce the same model?

Strangely, it is already giving me the same vectors at different instances:

>>> from nltk.corpus import brown
>>> from gensim.models import Word2Vec
>>> sentences = brown.sents()[:100]
>>> model = Word2Vec(sentences, size=10, window=5, min