gensim LdaMulticore not multiprocessing?

Date: 2023-05-25

This article covers the question "gensim LdaMulticore not multiprocessing?" and how to resolve it; the answer below should be a useful reference for anyone hitting the same problem.

Problem Description


When I run gensim's LdaMulticore model on a machine with 12 cores, using:

    from gensim.models import LdaMulticore  # import needed for this call

    lda = LdaMulticore(corpus, num_topics=64, workers=10)

I get a logging message that says

                using serial LDA version on this node  
                

A few lines later, I see another logging message that says

                training LDA model using 10 processes
                

When I run top, I see that 11 python processes have been spawned, but 9 are sleeping, i.e. only one worker is active. The machine has 24 cores and is not overwhelmed by any means. Why isn't LdaMulticore running in parallel mode?

Recommended Answer

First, make sure you have installed a fast BLAS library, because most of the time-consuming work happens inside low-level routines for linear algebra.
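One quick way to sanity-check the BLAS point above (my own sketch, not part of the original answer) is to time a large matrix multiply, since gensim's LDA updates ultimately run through numpy's linear algebra:

```python
# Time a large matrix multiply; with an optimized BLAS (OpenBLAS, MKL)
# this should finish in a small fraction of a second and may use
# multiple cores, which you can watch in top while it runs.
import time
import numpy as np

a = np.random.rand(1000, 1000)
b = np.random.rand(1000, 1000)

t0 = time.perf_counter()
c = a @ b  # dispatched to the BLAS numpy was linked against
elapsed = time.perf_counter() - t0
print(f"1000x1000 matmul: {elapsed:.3f}s")
```

`np.show_config()` also prints which BLAS/LAPACK libraries numpy was built against.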

On my machine, gensim.models.ldamodel.LdaMulticore can saturate all 20 CPU cores with workers=4 during training. Setting workers higher than that did not speed up training. One reason may be that the corpus iterator is too slow for LdaMulticore to be used effectively.

You can try ShardedCorpus to serialize and replace the corpus, which should be much faster to read and write. Also, simply compressing your large .mm file so it takes up less space (= less I/O) may help too. E.g.,

    import bz2
    import gensim

    # stream the bag-of-words corpus straight from the bz2-compressed .mm file
    mm = gensim.corpora.MmCorpus(bz2.BZ2File('enwiki-latest-pages-articles_tfidf.mm.bz2'))
    lda = gensim.models.ldamulticore.LdaMulticore(corpus=mm, id2word=id2word, num_topics=100, workers=4)
                
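The compression step assumed by the snippet above can be done with the standard library alone; here is a minimal, self-contained sketch (the file contents and names are stand-ins):

```python
import bz2
import shutil

# write a stand-in .mm file so the example is self-contained
payload = b"%%MatrixMarket matrix coordinate real general\n1 1 1\n1 1 1.0\n"
with open("demo.mm", "wb") as f:
    f.write(payload)

# compress it; MmCorpus can then read the .bz2 file via bz2.BZ2File
with open("demo.mm", "rb") as src, bz2.open("demo.mm.bz2", "wb") as dst:
    shutil.copyfileobj(src, dst)

# round-trip check: decompressing recovers the original bytes
restored = bz2.open("demo.mm.bz2", "rb").read()
```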

That concludes this article on "gensim LdaMulticore not multiprocessing?"; hopefully the recommended answer above is helpful.


