Deliberately creating an orphan process in Python

Problem description

I have a python script (unix-like, based on RHEL), called MyScript, that has two functions, called A and B. I'd like them to run in different, independent processes (detach B and A):

  • Launch the script MyScript
  • Execute function A
  • Spawn a new process, passing data from function A to B
  • Continue function A while function B runs
  • When function A finishes, exit MyScript even if B is still running

I thought I should use multiprocessing to create a daemon process, but the documentation suggests that's not the right use case. So I decided to spawn a child process and a child^2 process (the child's child), and then force the child to terminate. While this workaround appears to work, it seems really ugly.
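For context, the reason a daemonic process does not fit here is documented behavior: a daemonic multiprocessing.Process is terminated as soon as its parent exits, and it is not allowed to create children of its own. A minimal sketch illustrating that (the names here are illustrative, not from the original script):

```python
import multiprocessing
import time

def worker():
    time.sleep(10)
    print('worker finished')  # never reached when run as a daemon below

if __name__ == '__main__':
    p = multiprocessing.Process(target=worker, daemon=True)
    p.start()
    time.sleep(0.1)
    # When the main process exits here, multiprocessing terminates every
    # daemonic child, so worker() is killed mid-sleep. That is why a
    # daemon process cannot outlive MyScript.
```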

Can you help me make it more Pythonic? Does the subprocess module have a method that will operate on a function? Sample code below.

import multiprocessing
import time
import sys
import os

def parent_child():
    p = multiprocessing.current_process()
    print('Starting parent child:', p.name, p.pid)
    sys.stdout.flush()
    cc = multiprocessing.Process(name='childchild', target=child_child)
    cc.daemon = False
    cc.start()
    print('Exiting parent child:', p.name, p.pid)
    sys.stdout.flush()

def child_child():
    p = multiprocessing.current_process()
    print('Starting child child:', p.name, p.pid)
    sys.stdout.flush()
    time.sleep(30)
    print('Exiting child child:', p.name, p.pid)
    sys.stdout.flush()

def main():
    print('starting main', os.getpid())
    d = multiprocessing.Process(name='parentchild', target=parent_child)
    d.daemon = False
    d.start()
    time.sleep(5)
    d.terminate()
    print('exiting main', os.getpid())

if __name__ == '__main__':
    main()


Solution

Here is a reworked version of your original code that moves the functionality into a single call, spawn_detached(func). It keeps the detached process running even after the program exits:

import time
import os
from multiprocessing import Process, current_process

def spawn_detached(func):
    p = _spawn_detached(0, func)
    # give the process a moment to set up
    # and then kill the first child to detach
    # the second.
    time.sleep(.001)
    p.terminate()

def _spawn_detached(count, func):
    count += 1
    p = current_process()
    print('Process #%d: %s (%d)' % (count, p.name, p.pid))

    if count < 2:
        name = 'child'
    elif count == 2:
        name = func.__name__
    else:
        # we should now be inside of our detached process
        # so just call the function
        return func()

    # otherwise, spawn another process, passing the counter as well
    p = Process(name=name, target=_spawn_detached, args=(count, func))
    p.daemon = False
    p.start()
    return p

def operation():
    """ Just some arbitrary function """
    print("Entered detached process")
    time.sleep(15)
    print("Exiting detached process")


if __name__ == "__main__":
    print('starting main', os.getpid())
    spawn_detached(operation)
    print('exiting main', os.getpid())
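For comparison, the traditional Unix answer to this problem is the double fork, which is essentially what the child/child-of-child workaround emulates: the parent forks a child, the child forks a grandchild and exits immediately, and the orphaned grandchild is adopted by init. A hedged sketch using os.fork (POSIX-only; detach() is an illustrative name, not part of any library):

```python
import os
import time

def detach(func):
    """Run func in a fully detached process via the classic double fork."""
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)   # reap the intermediate child; the parent carries on
        return
    # first child: start a new session, detaching from the controlling terminal
    os.setsid()
    if os.fork() > 0:
        os._exit(0)          # first child exits; the grandchild is orphaned
    # grandchild: now adopted by init (PID 1), run the payload and exit
    func()
    os._exit(0)

def operation():
    time.sleep(15)

if __name__ == "__main__":
    detach(operation)
    # main can exit immediately; operation() keeps running under init
```

Because os.waitpid reaps the intermediate child, this version also avoids leaving a zombie behind, which the terminate()-based approach does not guarantee.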
