Chunksize irrelevant for multiprocessing / pool.map in Python?

Date: 2023-05-25
This article presents a solution to the question "Chunksize irrelevant for multiprocessing / pool.map in Python?", which may serve as a useful reference for readers facing the same problem.

Problem description


I try to utilize the pool multiprocessing functionality of Python.

Regardless of how I set the chunksize (under Windows 7 and Ubuntu; the latter is shown below, with 4 cores), the number of parallel worker processes seems to stay the same.

    from multiprocessing import Pool
    from multiprocessing import cpu_count
    import multiprocessing
    import time


    def f(x):
        print("ready to sleep", x, multiprocessing.current_process())
        time.sleep(20)
        print("slept with:", x, multiprocessing.current_process())


    if __name__ == '__main__':
        processes = cpu_count()
        print('-' * 20)
        print('Utilizing %d cores' % processes)
        print('-' * 20)
        pool = Pool(processes)
        myList = []
        runner = 0
        while runner < 40:
            myList.append(runner)
            runner += 1
        print("len(myList):", len(myList))

        # chunksize = int(len(myList) / processes)
        # chunksize = processes
        chunksize = 1
        print("chunksize:", chunksize)
        pool.map(f, myList, chunksize)
        pool.close()
        pool.join()

The behaviour is the same whether I use chunksize = int(len(myList) / processes), chunksize = processes, or 1 (as in the example above).

Could it be that the chunksize is set automatically to the number of cores?

Example with chunksize = 1:

    --------------------
    Utilizing 4 cores
    --------------------
    len(myList): 40
    chunksize: 10
    ready to sleep 0 <ForkProcess(ForkPoolWorker-1, started daemon)>
    ready to sleep 1 <ForkProcess(ForkPoolWorker-2, started daemon)>
    ready to sleep 2 <ForkProcess(ForkPoolWorker-3, started daemon)>
    ready to sleep 3 <ForkProcess(ForkPoolWorker-4, started daemon)>
    slept with: 0 <ForkProcess(ForkPoolWorker-1, started daemon)>
    ready to sleep 4 <ForkProcess(ForkPoolWorker-1, started daemon)>
    slept with: 1 <ForkProcess(ForkPoolWorker-2, started daemon)>
    ready to sleep 5 <ForkProcess(ForkPoolWorker-2, started daemon)>
    slept with: 2 <ForkProcess(ForkPoolWorker-3, started daemon)>
    ready to sleep 6 <ForkProcess(ForkPoolWorker-3, started daemon)>
    slept with: 3 <ForkProcess(ForkPoolWorker-4, started daemon)>
    ready to sleep 7 <ForkProcess(ForkPoolWorker-4, started daemon)>
    slept with: 4 <ForkProcess(ForkPoolWorker-1, started daemon)>
    ready to sleep 8 <ForkProcess(ForkPoolWorker-1, started daemon)>
    slept with: 5 <ForkProcess(ForkPoolWorker-2, started daemon)>
    ready to sleep 9 <ForkProcess(ForkPoolWorker-2, started daemon)>
    slept with: 6 <ForkProcess(ForkPoolWorker-3, started daemon)>
    ready to sleep 10 <ForkProcess(ForkPoolWorker-3, started daemon)>
    slept with: 7 <ForkProcess(ForkPoolWorker-4, started daemon)>
    ready to sleep 11 <ForkProcess(ForkPoolWorker-4, started daemon)>
    slept with: 8 <ForkProcess(ForkPoolWorker-1, started daemon)>
                  

Answer

Chunksize doesn't influence how many cores are used; that is set by the processes parameter of Pool. Chunksize sets how many items of the iterable you pass to Pool.map are distributed to a single worker process at once, in what Pool calls a "task" (the original answer includes a figure illustrating this for Python 3.7.1).

If you set chunksize=1, a worker process is fed a new item, in a new task, only after finishing the one it received before. For chunksize > 1, a worker gets a whole batch of items at once within a task, and when it is finished, it gets the next batch if any are left.
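The batching described above can be sketched with a small helper; `chunked` is a hypothetical name for illustration, not part of multiprocessing's public API (Pool does this internally when it builds tasks):

```python
from itertools import islice

def chunked(iterable, chunksize):
    """Yield successive tasks of up to `chunksize` items,
    mimicking how Pool groups the iterable into tasks."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, chunksize))
        if not batch:
            return
        yield batch

# With chunksize=1 every task holds a single item ...
print(list(chunked(range(6), 1)))  # [[0], [1], [2], [3], [4], [5]]
# ... with chunksize=3 a worker receives three items per task.
print(list(chunked(range(6), 3)))  # [[0, 1, 2], [3, 4, 5]]
```

Each yielded batch corresponds to one task handed to a single worker process.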

Distributing items one by one with chunksize=1 increases scheduling flexibility, but it decreases overall throughput, because drip-feeding requires more inter-process communication (IPC).

In my in-depth analysis of Pool's chunksize algorithm here, I define the unit of work for processing one item of the iterable as a taskel, to avoid naming conflicts with Pool's usage of the word "task". A task (as a unit of work) consists of chunksize taskels.

You would set chunksize=1 if you cannot predict how long a taskel will need to finish, for example in an optimization problem where the processing time varies greatly across taskels. Drip-feeding here prevents a worker process from sitting on a pile of untouched items while crunching on one heavy taskel, which would prevent the other items in its task from being distributed to idling worker processes.

Otherwise, if all your taskels need the same time to finish, you can set chunksize=len(iterable) // processes, so that tasks are distributed only once across all workers. Note that this will produce one more task than there are processes (processes + 1) in case len(iterable) / processes has a remainder. This has the potential to severely impact your overall computation time. Read more about this in the previously linked answer.
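The remainder effect can be checked with a little arithmetic; `task_count` is a hypothetical helper, used here only to make the counting concrete:

```python
def task_count(n_items, chunksize):
    """Number of tasks Pool.map creates for n_items and a given chunksize:
    one full task per whole chunk, plus one partial task for any remainder."""
    full, extra = divmod(n_items, chunksize)
    return full + (1 if extra else 0)

processes = 4
n_items = 42                      # 42 / 4 leaves a remainder
chunksize = n_items // processes  # 10
print(task_count(n_items, chunksize))  # 5 tasks for only 4 workers
```

With 42 items and chunksize 10 you get five tasks, so one worker ends up processing two tasks in sequence while the others idle after their first, roughly doubling the wall-clock time for uniform taskels.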

FYI, here is the part of the source code where Pool internally calculates the chunksize if it is not set:

    # Python 3.6, line 378 in multiprocessing/pool.py
    if chunksize is None:
        chunksize, extra = divmod(len(iterable), len(self._pool) * 4)
        if extra:
            chunksize += 1
    if len(iterable) == 0:
        chunksize = 0
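Replicating that formula as a standalone function (a sketch for illustration; the real logic lives inside Pool) shows why the question's run with 40 items and 4 workers still gets batches larger than 1 when no chunksize is passed:

```python
def default_chunksize(n_items, n_workers):
    """Mirror Pool.map's default: aim for roughly 4 tasks per worker,
    rounding the chunksize up when there is a remainder."""
    if n_items == 0:
        return 0
    chunksize, extra = divmod(n_items, n_workers * 4)
    if extra:
        chunksize += 1
    return chunksize

print(default_chunksize(40, 4))  # divmod(40, 16) -> (2, 8), so chunksize 3
```

So for 40 items on a 4-core pool the default chunksize is 3, producing 14 tasks rather than 40.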
                  

