Pickle thread.lock (PyMongo)

Problem description

I have a class with the following method:

import multiprocessing as mp
import os
from functools import partial

def get_add_new_links(self, max_num_links):
    self.get_links_m2(max_num_links)
    processes = mp.cpu_count()
    pool = mp.Pool(processes=processes)
    func = partial(worker, self)
    with open(os.path.join(self.report_path, "links.txt"), "r") as f:
        reports = pool.map(func, f.readlines())
    pool.close()
    pool.join()

where get_links_m2 is another method that creates the file "links.txt". The worker function is:

def worker(obje, link):
    doc, rep = obje.get_info_m2(link)
    obje.add_new_active(doc, sure_not_exists=True)
    return rep

The method get_info_m2 visits a link and extracts some information. The method add_new_active adds that information to MongoDB.

What could be wrong with my code? When I run it, I get this error (with traceback):

  File "...", line 234, in get_add_new_links
    reports = pool.map(func, f.readlines())
  File "/home/vladimir/anaconda3/lib/python3.5/multiprocessing/pool.py", line 260, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/home/vladimir/anaconda3/lib/python3.5/multiprocessing/pool.py", line 608, in get
    raise self._value
  File "/home/vladimir/anaconda3/lib/python3.5/multiprocessing/pool.py", line 385, in _handle_tasks
    put(task)
  File "/home/vladimir/anaconda3/lib/python3.5/multiprocessing/connection.py", line 206, in send
    self._send_bytes(ForkingPickler.dumps(obj))
  File "/home/vladimir/anaconda3/lib/python3.5/multiprocessing/reduction.py", line 50, in dumps
    cls(buf, protocol).dump(obj)
TypeError: can't pickle _thread.lock objects
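The error can be reproduced in isolation, without MongoDB at all: a thread lock (such as the ones a MongoClient holds internally) simply cannot be pickled, and pickling the arguments is exactly what pool.map does before sending them to a worker process. A minimal sketch (the exact wording of the message varies between Python versions):

```python
import pickle
import threading

# A bare thread lock, like those MongoClient keeps internally.
lock = threading.Lock()

try:
    pickle.dumps(lock)
    failed = False
except TypeError as e:
    # e.g. "can't pickle _thread.lock objects" on Python 3.5
    failed = True
    print(e)
```

Passing `partial(worker, self)` to pool.map means the whole object, MongoClient and all, goes through this same pickling step.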

Solution

As stated in the docs:

Never do this:

client = pymongo.MongoClient()

# Each child process attempts to copy a global MongoClient
# created in the parent process. Never do this.
def func():
    db = client.mydb
    # Do something with db.

proc = multiprocessing.Process(target=func)
proc.start()

Instead, the client must be initialized inside the worker function (or in a per-process initializer). In the code above, `partial(worker, self)` tries to pickle the entire object, including its MongoClient, and a MongoClient holds thread locks that cannot be pickled.
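One way to restructure the code along these lines is to create the client in a Pool initializer, so each worker process builds its own connection and nothing unpicklable crosses the process boundary. This is only a sketch: `init_worker` and `run` are illustrative names, and a plain dict stands in for the real `pymongo.MongoClient()` so the example runs without a database:

```python
import multiprocessing as mp

_client = None  # per-process "client", set by the initializer below

def init_worker():
    # Runs once in each child process. In the real code this would be:
    #     _client = pymongo.MongoClient()
    global _client
    _client = {"connected": True}  # stand-in for a per-process MongoClient

def worker(link):
    # _client was created inside this process, so nothing was pickled.
    return (link.strip(), _client["connected"])

def run(links):
    with mp.Pool(processes=2, initializer=init_worker) as pool:
        return pool.map(worker, links)

if __name__ == "__main__":
    print(run(["a\n", "b\n"]))
```

The worker no longer receives `self` at all; any per-link data it needs should be passed as plain, picklable values, and the database handle lives entirely inside each child process.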
