A simple Python multiprocessing function in Spyder produces no output

2022-01-12 · python · process · multiprocessing

Problem Description

I have this very simple function here that I'm trying to run and test; however, it doesn't output anything and it doesn't raise any errors either. I've checked the code multiple times and can't find anything wrong.

I printed jobs and here's what I got:

[<Process(Process-12, stopped[1])>, 
<Process(Process-13, stopped[1])>,
<Process(Process-14, stopped[1])>, 
<Process(Process-15, stopped[1])>,
<Process(Process-16, stopped[1])>]

Here's the code:

import multiprocessing

def worker(num):
    print "worker ", num
    return

jobs = []
for i in range(5):
    p = multiprocessing.Process(target = worker, args = (i,))
    jobs.append(p)
    p.start()

Here's the result I'm expecting, but it's not outputting anything:

Worker: 0
Worker: 1
Worker: 2
Worker: 3
Worker: 4


Solution

The comments revealed that the OP uses Windows as well as Spyder. Since Spyder redirects stdout and Windows does not support forking, a new child process won't print into the Spyder console. This is simply because the stdout of the new child process is Python's vanilla stdout, which can also be found in sys.__stdout__.
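To confirm that the workers really do run even though their prints are invisible in Spyder, here is a minimal sketch that has each child write to its own file; the worker_%d.txt filename pattern is just an assumption for this demonstration:

import multiprocessing
import os

def worker(num):
    # This print is invisible in the Spyder console on Windows...
    print("worker", num)
    # ...but the file proves the child process actually ran.
    with open("worker_%d.txt" % num, "w") as f:
        f.write("worker %d ran in process %d\n" % (num, os.getpid()))

if __name__ == '__main__':
    jobs = [multiprocessing.Process(target=worker, args=(i,)) for i in range(5)]
    for p in jobs:
        p.start()
    for p in jobs:
        p.join()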

There are two options:

  1. Use the logging module. This encompasses creating and logging all messages to one or several files. Using a single log file may lead to slightly garbled output, since the processes write to it concurrently; using one file per process solves this (see the sketch after this list).

  2. Don't use print within the child processes; instead, return the result to the main process, either by using a queue (or multiprocessing.Manager().Queue(), since forking is not possible) or, more simply, by relying on the multiprocessing Pool's map functionality (see the Pool example below and the queue sketch after it).
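A minimal sketch of option 1, assuming one log file per process; the worker_%d.log filename pattern is an illustration, not a requirement:

import logging
import multiprocessing

def worker(num):
    # Each process gets its own file handler, so concurrent
    # writes cannot garble each other.
    logger = logging.getLogger("worker")
    handler = logging.FileHandler("worker_%d.log" % num)
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    logger.info("worker %d", num)

if __name__ == '__main__':
    jobs = [multiprocessing.Process(target=worker, args=(i,)) for i in range(5)]
    for p in jobs:
        p.start()
    for p in jobs:
        p.join()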

Multiprocessing example with a Pool:

import multiprocessing

def worker(num):
    """Returns the string of interest"""
    return "worker %d" % num

def main():
    pool = multiprocessing.Pool(4)
    results = pool.map(worker, range(10))

    pool.close()
    pool.join()

    for result in results:
        # prints the result string in the main process
        print(result)

if __name__ == '__main__':
    # Guard the entry point; on Windows, multiprocessing re-imports
    # this module in each child process
    main()

which prints (in the main process):

worker 0
worker 1
worker 2
worker 3
worker 4
worker 5
worker 6
worker 7
worker 8
worker 9

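For completeness, here is a minimal sketch of the queue-based variant mentioned in option 2, using multiprocessing.Manager().Queue(); the function and variable names are illustrative:

import multiprocessing

def worker(num, queue):
    # Instead of printing in the child, hand the result to the main process.
    queue.put("worker %d" % num)

def main():
    manager = multiprocessing.Manager()
    queue = manager.Queue()

    jobs = [multiprocessing.Process(target=worker, args=(i, queue))
            for i in range(5)]
    for p in jobs:
        p.start()
    for p in jobs:
        p.join()

    # Drain the queue in the main process, where print is visible.
    while not queue.empty():
        print(queue.get())

if __name__ == '__main__':
    main()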

If you are too impatient to wait for the map function to finish, you can print your results immediately by using imap_unordered and slightly changing the order of the commands:

def main():
    pool = multiprocessing.Pool(4)
    results = pool.imap_unordered(worker, range(10))

    for result in results:
        # prints each result string in the main process as soon as it is ready
        # but results are now no longer in order!
        print(result)

    # The pool should join after printing all results
    pool.close()
    pool.join()
