Why does the response time of an asyncio server in Python increase with multiple requests?
Problem description
I wrote a Python server with sockets. It should receive requests concurrently (in parallel) and respond to them in parallel. When I send more than one request to it, the response time increases more than I expected.
Server:
import datetime
import asyncio, timeit
import json, traceback
from asyncio import get_event_loop

requestslist = []
loop = asyncio.get_event_loop()

async def handleData(reader, writer):
    message = ''
    clientip = ''
    data = bytearray()
    print("Async HandleData", datetime.datetime.utcnow())
    try:
        start = timeit.default_timer()
        data = await reader.readuntil(separator=b'\r\n')
        msg = data.decode(encoding='utf-8')
        len_csharp_message = int(msg[msg.find('content-length:') + 15:msg.find(';dmnid')])
        data = await reader.read(len_csharp_message)
        message = data.decode(encoding='utf-8')
        clientip = reader._transport._extra['peername'][0]
        clientport = reader._transport._extra['peername'][1]
        print('\r\nData Received from:', clientip, ':', clientport)
        if (clientip, message) in requestslist:
            reader._transport._sock.close()
        else:
            requestslist.append((clientip, message))
            # adapter_result = parallel_members(message_dict, service, dmnid)
            adapter_result = '''[{"name": {"data": "data", "type": "str"}}]'''
            body = json.dumps(adapter_result, ensure_ascii=False)
            print(body)
            contentlen = len(bytes(str(body), 'utf-8'))
            header = bytes('Content-Length:{}'.format(contentlen), 'utf-8')
            result = header + bytes('\r\n{', 'utf-8') + bytes(body, 'utf-8') + bytes('}', 'utf-8')
            stop = timeit.default_timer()
            print('total_time:', stop - start)
            writer.write(result)
            writer.close()
            # del writer
    except Exception as ex:
        writer.close()
        print(traceback.format_exc())
    finally:
        try:
            requestslist.remove((clientip, message))
        except:
            pass

def main(*args):
    print("ready")
    loop = get_event_loop()
    coro = asyncio.start_server(handleData, 'localhost', 4040, loop=loop, limit=204800000)
    srv = loop.run_until_complete(coro)
    loop.run_forever()

if __name__ == '__main__':
    main()
When I send a single request, it takes 0.016 sec, but with more requests this time increases.
CPU info: Intel Xeon X5650
Client:
import multiprocessing, subprocess
import time
from joblib import Parallel, delayed

def worker(file):
    subprocess.Popen(file, shell=False)

def call_parallel(index):
    print('begin ', index)
    # pass the callable and its arguments separately, so the child
    # process runs worker(index) instead of the current thread
    p = multiprocessing.Process(target=worker, args=(index,))
    p.start()
    print('end ', index)

path = r'python "/test-Client.py"'  # client address
files = [path, path, path, path, path, path, path, path, path, path, path, path]

Parallel(n_jobs=-1, backend="threading")(delayed(call_parallel)(i) for index, i in enumerate(files))
For this client, which sends 12 requests at the same time, the total time per request is 0.15 sec.
I expected the time to stay fixed regardless of the number of requests.
Solution
What is a request
A single request (roughly speaking) consists of the following steps:
- write the data to the network
- waste time waiting for the answer
- read the answer from the network
Steps №1/№3 are processed by your CPU very quickly. Step №2 is the bytes' journey over the wire from your PC to some server (in another city, for example) and back: it usually takes much more time.
Asynchronous requests are not really "parallel" in terms of processing: it's still your single CPU core that can process one thing at a time. But running multiple async requests lets you use step №2 of one request to do steps №1/№3 of other requests instead of just wasting that huge amount of time. That's the reason multiple async requests usually finish earlier than the same number of synchronous ones.
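The overlap described above is easy to demonstrate with `asyncio.sleep` standing in for step №2, the network wait. This is a minimal sketch, not the asker's server; names like `fake_request` are invented for the demo:

```python
import asyncio
import time

async def fake_request(i: int) -> int:
    # Stand-in for step №2: waiting costs no CPU, so the event
    # loop is free to run steps №1/№3 of the other requests.
    await asyncio.sleep(0.1)
    return i

async def run_concurrently(n: int) -> float:
    start = time.perf_counter()
    await asyncio.gather(*(fake_request(i) for i in range(n)))
    return time.perf_counter() - start

elapsed = asyncio.run(run_concurrently(10))
print(f"10 overlapped 0.1s waits took {elapsed:.2f}s")  # close to 0.1s, not 1.0s
```

Ten 0.1-second waits finish in roughly 0.1 seconds total, because they all overlap inside the event loop.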
But when you run things locally, step №2 doesn't take much time: your PC and the server are the same machine, and the bytes never go on a network journey. There is simply no step-№2 time that can be used to start a new request; your single CPU core processes one thing at a time.
You should test your requests against a server that answers with some delay to see the results you expect.