Calling a Python script that asks for input from within another Python script, using subprocess
Problem description
I have a script a.py that, while executing, asks the user certain queries and frames the output in JSON format. Using the Python subprocess module, I am able to call this script from another script named b.py. Everything is working as expected except that I am not able to get the output into a variable. I am doing this in Python 3.
Solution
To call a Python script from another one using the subprocess module, pass it some input, and get its output:
#!/usr/bin/env python3
import os
import sys
from subprocess import check_output

script_path = os.path.join(get_script_dir(), 'a.py')
# feed the queries to a.py on its stdin and capture its stdout as a string
output = check_output([sys.executable, script_path],
                      input='\n'.join(['query 1', 'query 2']),
                      universal_newlines=True)
where the get_script_dir() function is defined here.
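The linked definition is not reproduced in this post; a minimal sketch of what such a helper might look like, assuming it only needs the directory of the currently running (non-frozen) script:

import os
import sys

def get_script_dir(follow_symlinks=True):
    # directory of the currently executing script (b.py in this case)
    path = os.path.abspath(sys.argv[0])
    if follow_symlinks:
        path = os.path.realpath(path)
    return os.path.dirname(path)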
A more flexible alternative is to import module a and call a function to get the result (make sure a.py uses an if __name__ == "__main__" guard, to avoid running undesirable code on import):
#!/usr/bin/env python
import a # the dir with a.py should be in sys.path
result = [a.search(query) for query in ['query 1', 'query 2']]
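For illustration, a.py could be structured along these lines. This is a hypothetical sketch: the body of search() and the way queries are read from stdin are assumptions, not taken from the original question.

# a.py -- hypothetical sketch: a reusable search() function plus a __main__ guard
import json
import sys

def search(query):
    # placeholder logic; the real script would compute an answer for the query
    return {'query': query, 'result': query.upper()}

if __name__ == "__main__":
    # runs only when a.py is executed directly, not when it is imported
    queries = [line.strip() for line in sys.stdin if line.strip()]
    json.dump([search(q) for q in queries], sys.stdout)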
You could use multiprocessing to run each query in a separate process (if performing a query is CPU-intensive, it might improve time performance):
#!/usr/bin/env python
from multiprocessing import freeze_support, Pool

import a

if __name__ == "__main__":
    freeze_support()  # needed when the script is frozen into a Windows executable
    pool = Pool()     # use all available CPUs
    result = pool.map(a.search, ['query 1', 'query 2'])
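On Python 3.3 and later the pool can also be used as a context manager (with Pool() as pool: ...), which stops the worker processes automatically when the block exits.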