Why does the time of multiple requests to an asyncio server in Python increase?

Date: 2023-05-25

This article covers the question "Why does the time of multiple requests to an asyncio server in Python increase?" and a recommended answer; it should be a useful reference for anyone running into the same problem.

Problem description

I wrote a Python server with sockets. It should receive requests at the same time (in parallel) and respond to them in parallel. When I send more than one request, the time it takes to answer increases more than I expected.

The server:

    import datetime
    import asyncio, timeit
    import json, traceback
    from asyncio import get_event_loop

    requestslist = []
    loop = asyncio.get_event_loop()


    async def handleData(reader, writer):
        message = ''
        clientip = ''
        data = bytearray()
        print("Async HandleData", datetime.datetime.utcnow())

        try:
            start = timeit.default_timer()
            # read the header block, terminated by a blank line
            data = await reader.readuntil(separator=b'\n\n')
            msg = data.decode(encoding='utf-8')
            # parse the announced body length out of the header
            len_csharp_message = int(msg[msg.find('content-length:') + 15:msg.find(';dmnid')])
            data = await reader.read(len_csharp_message)
            message = data.decode(encoding='utf-8')

            clientip = reader._transport._extra['peername'][0]
            clientport = reader._transport._extra['peername'][1]
            print('\nData Received from:', clientip, ':', clientport)
            if (clientip, message) in requestslist:
                # drop duplicate (ip, message) requests
                reader._transport._sock.close()
            else:
                requestslist.append((clientip, message))

                # adapter_result = parallel_members(message_dict, service, dmnid)
                adapter_result = '''[{"name": {"data": "data", "type": "str"}}]'''
                body = json.dumps(adapter_result, ensure_ascii=False)
                print(body)

                contentlen = len(bytes(str(body), 'utf-8'))
                header = bytes('Content-Length:{}'.format(contentlen), 'utf-8')
                # body is a str, so encode it before concatenating with the header bytes
                result = header + bytes('\n\n{', 'utf-8') + bytes(body, 'utf-8') + bytes('}', 'utf-8')
                stop = timeit.default_timer()
                print('total_time:', stop - start)
                writer.write(result)
                writer.close()
            writer.close()
            # del writer
        except Exception as ex:
            writer.close()
            print(traceback.format_exc())
        finally:
            try:
                requestslist.remove((clientip, message))
            except:
                pass


    def main(*args):
        print("ready")
        loop = get_event_loop()
        coro = asyncio.start_server(handleData, 'localhost', 4040, loop=loop, limit=204800000)
        srv = loop.run_until_complete(coro)
        loop.run_forever()


    if __name__ == '__main__':
        main()

When I send a single request, it takes 0.016 sec, but with more requests this time increases.

CPU info: Intel Xeon X5650

The client:

    import multiprocessing, subprocess
    import time
    from joblib import Parallel, delayed


    def worker(file):
        # launch the client script as a separate process
        subprocess.Popen(file, shell=False)


    def call_parallel(index):
        print('begin ', index)
        # worker(index) runs immediately in this thread; its return value (None)
        # becomes the Process target, so the subprocess is already started by then
        p = multiprocessing.Process(target=worker(index))
        p.start()
        print('end ', index)


    path = r'python "/test-Client.py"'     # client script command line
    files = [path, path, path, path, path, path, path, path, path, path, path, path]
    Parallel(n_jobs=-1, backend="threading")(delayed(call_parallel)(i) for index, i in enumerate(files))

With this client, which sends 12 requests at once, the total time per request is 0.15 sec.

I expected the time to stay fixed for any number of requests.

Recommended answer

What is a request

A single request (roughly speaking) consists of the following steps:

1. write the data to the network
2. waste time waiting for the answer
3. read the answer from the network

Steps 1 and 3 are handled by your CPU very quickly. Step 2 is the bytes' journey over the wire from your PC to some server (in another city, for example) and back: it usually takes much more time.

Asynchronous requests are not really "parallel" in terms of processing: it is still your single CPU core, which can only process one thing at a time. But running multiple async requests lets you use the step-2 wait of one request to do steps 1/3 of other requests instead of just wasting that time. That is why multiple async requests usually finish earlier than the same number of synchronous ones.
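
A minimal sketch of that difference, with asyncio.sleep standing in for the step-2 network wait (the fake_request coroutine and its 0.1-second delay are made up for the demonstration): ten requests awaited one after another take about one second, while the same ten run through asyncio.gather finish in roughly 0.1 second, because all the waits overlap.

    import asyncio, timeit

    async def fake_request(i):
        # stand-in for step 2: waiting on the network, no CPU work
        await asyncio.sleep(0.1)
        return i

    async def sequential():
        # each wait only starts after the previous one finished: ~10 * 0.1 s
        return [await fake_request(i) for i in range(10)]

    async def concurrent():
        # all ten waits overlap on the single event loop: ~0.1 s in total
        return await asyncio.gather(*(fake_request(i) for i in range(10)))

    for coro in (sequential, concurrent):
        start = timeit.default_timer()
        asyncio.run(coro())
        print(coro.__name__, timeit.default_timer() - start)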

But when you run everything locally, step 2 takes almost no time: your PC and the server are the same machine, and the bytes never go on a network journey. There is simply no step-2 wait that can be used to start a new request; your single CPU core just processes one thing at a time.
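
The same kind of sketch with the wait replaced by pure CPU work shows that limit: when nothing is awaited, asyncio.gather has nothing to overlap, and ten requests cost roughly ten times one request (busy_work and its loop count are arbitrary placeholders).

    import asyncio, timeit

    def busy_work():
        # CPU-bound loop: the event loop cannot switch tasks while this runs
        total = 0
        for _ in range(2_000_000):
            total += 1
        return total

    async def cpu_bound_request(i):
        busy_work()   # nothing is awaited, so there is nothing to overlap
        return i

    async def main():
        start = timeit.default_timer()
        await cpu_bound_request(0)
        print('one request :', timeit.default_timer() - start)

        start = timeit.default_timer()
        await asyncio.gather(*(cpu_bound_request(i) for i in range(10)))
        print('ten requests:', timeit.default_timer() - start)  # ~10x the single request

    asyncio.run(main())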

You should test your requests against a server that answers with some delay to see the results you expect.
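
As a minimal sketch of such a test (the 0.5-second delay is artificial and port 4040 is simply reused from the question), a tiny asyncio echo server that sleeps before replying behaves the way you expect: 12 concurrent requests finish in roughly the delay time, not 12 times the delay.

    import asyncio, timeit

    DELAY = 0.5   # artificial "remote server" delay

    async def handle(reader, writer):
        data = await reader.readline()
        await asyncio.sleep(DELAY)          # pretend the answer takes a while
        writer.write(b'echo: ' + data)
        await writer.drain()
        writer.close()

    async def one_request(i):
        reader, writer = await asyncio.open_connection('localhost', 4040)
        writer.write(f'request {i}\n'.encode())
        await writer.drain()
        reply = await reader.readline()
        writer.close()
        return reply

    async def main():
        server = await asyncio.start_server(handle, 'localhost', 4040)
        start = timeit.default_timer()
        await asyncio.gather(*(one_request(i) for i in range(12)))
        # all 12 delays overlap, so this prints roughly DELAY seconds
        print('12 requests took:', timeit.default_timer() - start)
        server.close()
        await server.wait_closed()

    asyncio.run(main())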

That's all for "Why does the time of multiple requests to an asyncio server in Python increase?". We hope the recommended answer helps, and thank you for supporting html5模板网!
