When I run gensim's LdaMulticore model on a machine with 12 cores, using:
lda = LdaMulticore(corpus, num_topics=64, workers=10)
First, make sure you have a fast BLAS library installed, because most of the time-consuming work is done inside low-level linear-algebra routines.
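A quick way to check which BLAS backend NumPy (and therefore gensim) is linked against, assuming nothing beyond a standard NumPy install, is:

import numpy as np

# Prints the BLAS/LAPACK libraries NumPy was built against; you want to see
# OpenBLAS, MKL or Accelerate rather than the plain reference BLAS.
np.show_config()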
On my machine, gensim.models.ldamulticore.LdaMulticore can saturate all 20 CPU cores with workers=4 during training. Setting workers higher than that did not speed up training; one likely reason is that the corpus iterator is too slow to feed LdaMulticore effectively.
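If the iterator is the bottleneck, one option is to pay the iteration cost once by serializing the corpus to disk and training from the serialized copy. A minimal sketch, assuming your corpus is currently produced by a slow on-the-fly iterator (here the hypothetical name streamed_corpus):

import gensim

# One-time cost: write the streamed corpus to Matrix Market format on disk.
gensim.corpora.MmCorpus.serialize('corpus.mm', streamed_corpus)

# Training then reads the fast on-disk copy instead of re-running the slow iterator.
mm = gensim.corpora.MmCorpus('corpus.mm')
lda = gensim.models.LdaMulticore(mm, num_topics=64, workers=4)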
You can try ShardedCorpus to serialize and replace the corpus, which should be much faster to read and write (a sketch follows the example below). Also, simply compressing your large .mm file so it takes up less space (= less I/O) may help too. E.g.,
import bz2, gensim
mm = gensim.corpora.MmCorpus(bz2.BZ2File('enwiki-latest-pages-articles_tfidf.mm.bz2'))
lda = gensim.models.ldamulticore.LdaMulticore(corpus=mm, id2word=id2word, num_topics=100, workers=4)  # id2word: your gensim Dictionary
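For the ShardedCorpus route, here is a minimal sketch. It assumes a gensim version that still ships gensim.corpora.sharded_corpus (it was removed in gensim 4.x), and that bow_corpus, num_terms and id2word are your existing bag-of-words corpus, vocabulary size and dictionary; the dim and gensim parameters follow the older gensim docs, so check them against your installed version:

from gensim.corpora.sharded_corpus import ShardedCorpus
from gensim.models import LdaMulticore

# Serialize the slow corpus once into on-disk shards (dim = vocabulary size);
# gensim=True makes the loaded corpus yield standard sparse BoW vectors again.
ShardedCorpus.serialize('corpus.shdat', bow_corpus, dim=num_terms, gensim=True)
sharded = ShardedCorpus.load('corpus.shdat')  # fast to iterate over during training

lda = LdaMulticore(sharded, id2word=id2word, num_topics=100, workers=4)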