Why isn't gensim LdaMulticore multiprocessing?

2022-01-12 python multiprocessing gensim lda

Problem Description

When I run gensim's LdaMulticore model on a machine with 12 cores, using:

lda = LdaMulticore(corpus, num_topics=64, workers=10)

I get a logging message that says

using serial LDA version on this node  

A few lines later, I see another logging message that says

training LDA model using 10 processes

When I run top, I see that 11 Python processes have been spawned, but 9 are sleeping, i.e. only one worker is active. The machine has 24 cores and is not overwhelmed by any means. Why isn't LdaMulticore running in parallel mode?


Solution

First, make sure you have installed a fast BLAS library, because most of the time-consuming work is done inside low-level linear algebra routines.
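A quick way to see which BLAS NumPy is linked against (a reasonable proxy for what gensim's linear algebra will use) is to inspect NumPy's build configuration:

```python
import numpy

# Prints the BLAS/LAPACK libraries NumPy was built against.
# Look for OpenBLAS, MKL, or Accelerate rather than a plain reference BLAS.
numpy.show_config()
```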

On my machine, gensim.models.ldamodel.LdaMulticore can saturate all 20 CPU cores with workers=4 during training. Setting workers any higher did not speed up training. One reason might be that the corpus iterator is too slow for LdaMulticore to be used effectively.
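You can check whether the iterator is the bottleneck by timing how fast it yields documents, independently of training. A minimal sketch, using a hypothetical in-memory dummy corpus in place of a real disk-backed one:

```python
import time

# Dummy stand-in corpus: 10,000 documents in bag-of-words (token_id, weight)
# form. In practice this would be e.g. a gensim MmCorpus streaming from disk.
dummy_corpus = [[(i % 50, 1.0) for i in range(20)] for _ in range(10_000)]

def docs_per_second(corpus):
    """Measure how many documents per second the iterator yields."""
    start = time.perf_counter()
    n = sum(1 for _ in corpus)
    elapsed = time.perf_counter() - start
    return n / elapsed if elapsed > 0 else float("inf")

print(f"iterator yields about {docs_per_second(dummy_corpus):,.0f} docs/sec")
```

If this rate is low compared to how fast the workers consume documents, the workers will sit idle waiting for input, which matches the "sleeping processes" symptom above.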

You can try using ShardedCorpus to serialize and replace the corpus, which should be much faster to read and write. Also, simply compressing your large .mm file so it takes up less space (= less I/O) may help too. E.g.,

import bz2
import gensim

# id2word: a gensim Dictionary mapping token ids to words, built beforehand
mm = gensim.corpora.MmCorpus(bz2.BZ2File('enwiki-latest-pages-articles_tfidf.mm.bz2'))
lda = gensim.models.ldamulticore.LdaMulticore(corpus=mm, id2word=id2word, num_topics=100, workers=4)
