Python: Writing to a single file with a queue while using a multiprocessing Pool

Date: 2023-05-26

This post covers writing output to a single file while using a Python multiprocessing pool; the question and the recommended answer below may be a useful reference if you are facing the same problem.

Problem description


I have hundreds of thousands of text files that I want to parse in various ways. I want to save the output to a single file without synchronization problems. I have been using a multiprocessing pool to do this to save time, but I can't figure out how to combine Pool and Queue.

The following code will save the infile name as well as the maximum number of consecutive "x"s in the file. However, I want all processes to save their results to the same file, and not to different files as in my example. Any help on this would be greatly appreciated.

    import multiprocessing
    import re

    with open('infilenamess.txt') as f:
        filenames = f.read().splitlines()

    def mp_worker(filename):
        with open(filename, 'r') as f:
            text = f.read()
            m = re.findall("x+", text)
            count = len(max(m, key=len))
            # Each worker currently appends to its own per-input results file.
            outfile = open(filename + '_results.txt', 'a')
            outfile.write(str(filename) + '|' + str(count) + '\n')
            outfile.close()

    def mp_handler():
        p = multiprocessing.Pool(32)
        p.map(mp_worker, filenames)

    if __name__ == '__main__':
        mp_handler()


Recommended answer

Multiprocessing pools implement a queue for you. Just use a pool method that returns the workers' return values to the caller. imap works well:

    import multiprocessing
    import re

    def mp_worker(filename):
        with open(filename) as f:
            text = f.read()
        m = re.findall("x+", text)
        count = len(max(m, key=len))
        return filename, count

    def mp_handler():
        p = multiprocessing.Pool(32)
        with open('infilenamess.txt') as f:
            filenames = [line for line in (l.strip() for l in f) if line]
        with open('results.txt', 'w') as f:
            for result in p.imap(mp_worker, filenames):
                # (filename, count) tuples from mp_worker
                f.write('%s: %d\n' % result)

    if __name__ == '__main__':
        mp_handler()
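
Note that imap returns results lazily and in submission order, so the parent process is the only writer to results.txt and no extra locking is needed; imap_unordered behaves the same but returns results in completion order, and passing a chunksize argument can reduce overhead for very large input lists.

If you do want the explicit Pool-plus-Queue arrangement the question asks about, a common pattern is to share a multiprocessing.Manager().Queue() between the workers and a single listener job that owns the output file. The sketch below is an illustration under that assumption, not part of the original answer; the worker/listener function names and the 'DONE' sentinel are made up for the example.

    import multiprocessing
    import re

    def worker(filename, q):
        # Same counting logic, but the result goes onto the shared queue.
        with open(filename) as f:
            text = f.read()
        runs = re.findall("x+", text)
        count = len(max(runs, key=len)) if runs else 0
        q.put('%s|%d' % (filename, count))

    def listener(q):
        # One process owns the output file, so writes never interleave.
        with open('results.txt', 'w') as out:
            while True:
                line = q.get()
                if line == 'DONE':          # sentinel tells the listener to stop
                    break
                out.write(line + '\n')

    def main():
        with open('infilenamess.txt') as f:
            filenames = [line.strip() for line in f if line.strip()]
        manager = multiprocessing.Manager()
        q = manager.Queue()                 # a Manager queue can be passed to Pool workers
        pool = multiprocessing.Pool(8)      # one slot is taken by the listener
        watcher = pool.apply_async(listener, (q,))
        jobs = [pool.apply_async(worker, (name, q)) for name in filenames]
        for job in jobs:
            job.get()                       # wait for every worker to finish
        q.put('DONE')                       # then shut the listener down
        watcher.get()
        pool.close()
        pool.join()

    if __name__ == '__main__':
        main()

This is more machinery than the imap version and is mainly worth it when workers need to emit output incrementally or the writer has to do more than append one line per input file.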
                  

This concludes this post on writing to a single file with a queue while using a multiprocessing pool; we hope the recommended answer above is helpful.
