MemoryError with json.dumps()

2021-12-30 00:00:00 python sqlalchemy json out-of-memory mysql

I would like to know which of json.dump() or json.dumps() is the most efficient when it comes to encoding a large array to JSON format.

Can you please show me an example of using json.dump()?

Actually I am making a Python CGI that gets a large amount of data from a MySQL database using the SQLAlchemy ORM, and after some user-triggered processing, I store the final output in an array that I finally convert to JSON.

But when converting to JSON with:

    print json.dumps({'success': True, 'data': data})  # data is my array

我收到以下错误:

Traceback (most recent call last):
  File "C:/script/cgi/translate_parameters.py", line 617, in <module>
    f.write(json.dumps(mytab,default=dthandler,indent=4))
  File "C:\Python27\lib\json\__init__.py", line 250, in dumps
    sort_keys=sort_keys, **kw).encode(obj)
  File "C:\Python27\lib\json\encoder.py", line 209, in encode
    chunks = list(chunks)
MemoryError

So, my guess is to use json.dump() to convert the data in chunks. Any ideas on how to do this?

Or any other ideas besides using json.dump()?

Recommended answer

You can simply replace

    f.write(json.dumps(mytab, default=dthandler, indent=4))

with

    json.dump(mytab, f, default=dthandler, indent=4)
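A minimal, self-contained sketch of that change (written for Python 3, while the question's traceback shows Python 2.7). The names `mytab` and `dthandler` come from the question's traceback, but their bodies here are assumptions: a plausible datetime-to-ISO-string handler and a stand-in row list, with an in-memory buffer in place of the real output file.

```python
import datetime
import io
import json

# Assumed implementation of the question's `dthandler`: serialize
# date/datetime objects as ISO 8601 strings, which json cannot encode natively.
def dthandler(obj):
    if isinstance(obj, (datetime.date, datetime.datetime)):
        return obj.isoformat()
    raise TypeError("Type %s is not JSON serializable" % type(obj).__name__)

# Hypothetical sample rows standing in for the SQLAlchemy results.
mytab = [{"id": 1, "created": datetime.datetime(2021, 12, 30)}]

# json.dump() writes each encoded chunk straight to the file object,
# so the full JSON string is never materialized in memory at once.
buf = io.StringIO()  # stand-in for open("output.json", "w")
json.dump(mytab, buf, default=dthandler, indent=4)

result = json.loads(buf.getvalue())
```

With a real file handle in place of the `StringIO` buffer, this is a drop-in replacement for the failing `f.write(json.dumps(...))` line.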

This should "stream" the data into the file.
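If you want explicit control over the chunking (for example, to flush or report progress between writes), the standard library's `json.JSONEncoder.iterencode()` yields the output piece by piece. A sketch under the same assumptions, again using an in-memory buffer in place of a real file:

```python
import io
import json

data = list(range(1000))  # stand-in for the question's large array

encoder = json.JSONEncoder(indent=4)
buf = io.StringIO()  # stand-in for open("output.json", "w")

# iterencode() yields the JSON output as a sequence of small string
# chunks, so each chunk can be written out as soon as it is produced.
for chunk in encoder.iterencode({"success": True, "data": data}):
    buf.write(chunk)

result = json.loads(buf.getvalue())
```

Internally, `json.dump()` uses the same iterative encoder, so this buys you control rather than extra memory savings.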
