Using multiprocessing.Manager.list instead of a real list makes the calculation take ages

Date: 2023-05-26

This article discusses why using multiprocessing.Manager.list instead of a real list makes the computation take far longer, and how to work around it.

Problem description


                  I wanted to try different ways of using multiprocessing starting with this example:

                  $ cat multi_bad.py 
                  import multiprocessing as mp
                  from time import sleep
                  from random import randint
                  
                  def f(l, t):
                  #   sleep(30)
                      return sum(x < t for x in l)
                  
                  if __name__ == '__main__':
                      l = [randint(1, 1000) for _ in range(25000)]
                      t = [randint(1, 1000) for _ in range(4)]
                  #   sleep(15)
                      pool = mp.Pool(processes=4)
                      result = pool.starmap_async(f, [(l, x) for x in t])
                      print(result.get())
                  

                  Here, l is a list that gets copied 4 times when 4 processes are spawned. To avoid that, the documentation page offers using queues, shared arrays or proxy objects created using multiprocessing.Manager. For the last one, I changed the definition of l:

                  $ diff multi_bad.py multi_good.py 
                  10c10,11
                  <     l = [randint(1, 1000) for _ in range(25000)]
                  ---
                  >     man = mp.Manager()
                  >     l = man.list([randint(1, 1000) for _ in range(25000)])
                  

                  The results still look correct, but the execution time has increased so dramatically that I think I'm doing something wrong:

                  $ time python multi_bad.py 
                  [17867, 11103, 2021, 17918]
                  
                  real    0m0.247s
                  user    0m0.183s
                  sys 0m0.010s
                  
                  $ time python multi_good.py 
                  [3609, 20277, 7799, 24262]
                  
                  real    0m15.108s
                  user    0m28.092s
                  sys 0m6.320s
                  

                  The docs do say that this way is slower than shared arrays, but this just feels wrong. I'm also not sure how I can profile this to get more information on what's going on. Am I missing something?

                  P.S. With shared arrays I get times below 0.25s.

                  P.P.S. This is on Linux and Python 3.3.

Answer

                  Linux uses copy-on-write when subprocesses are os.forked. To demonstrate:

                  import multiprocessing as mp
                  import numpy as np
                  import logging
                  import os
                  
                  logger = mp.log_to_stderr(logging.WARNING)
                  
                  def free_memory():
                      total = 0
                      with open('/proc/meminfo', 'r') as f:
                          for line in f:
                              line = line.strip()
                              if any(line.startswith(field) for field in ('MemFree', 'Buffers', 'Cached')):
                                  field, amount, unit = line.split()
                                  amount = int(amount)
                                  if unit != 'kB':
                                      raise ValueError(
                                          'Unknown unit {u!r} in /proc/meminfo'.format(u = unit))
                                  total += amount
                      return total
                  
                  def worker(i):
                      x = data[i,:].sum()    # Exercise access to data
                      logger.warn('Free memory: {m}'.format(m = free_memory()))
                  
                  def main():
                      procs = [mp.Process(target = worker, args = (i, )) for i in range(4)]
                      for proc in procs:
                          proc.start()
                      for proc in procs:
                          proc.join()
                  
                  logger.warn('Initial free: {m}'.format(m = free_memory()))
                  N = 15000
                  data = np.ones((N,N))
                  logger.warn('After allocating data: {m}'.format(m = free_memory()))
                  
                  if __name__ == '__main__':
                      main()
                  

This produced:

                  [WARNING/MainProcess] Initial free: 2522340
                  [WARNING/MainProcess] After allocating data: 763248
                  [WARNING/Process-1] Free memory: 760852
                  [WARNING/Process-2] Free memory: 757652
                  [WARNING/Process-3] Free memory: 757264
                  [WARNING/Process-4] Free memory: 756760
                  

                  This shows that initially there was roughly 2.5GB of free memory. After allocating a 15000x15000 array of float64s, there was 763248 KB free. This roughly makes sense since 15000**2*8 bytes = 1.8GB and the drop in memory, 2.5GB - 0.763248GB is also roughly 1.8GB.
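The arithmetic checks out, using the numbers from the log above:

```python
# float64 is 8 bytes per element; /proc/meminfo reports in kB
array_kb = 15000**2 * 8 / 1024      # size of the allocated array, ~1.76e6 kB
drop_kb = 2522340 - 763248          # initial free minus free after allocating
# The two figures agree to within roughly 0.1%.
print(array_kb, drop_kb)
```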

                  Now after each process is spawned, the free memory is again reported to be ~750MB. There is no significant decrease in free memory, so I conclude the system must be using copy-on-write.

                  Conclusion: If you do not need to modify the data, defining it at the global level of the __main__ module is a convenient and (at least on Linux) memory-friendly way to share it among subprocesses.
