Is it possible to run another spider from a Scrapy spider?

2022-01-12 00:00:00 python scrapy multiprocessing

Problem description

For now I have 2 spiders, and what I would like to do is:

  1. Spider 1 goes to url1 and, if url2 appears, calls spider 2 with url2. It also saves the content of url1 using a pipeline.
  2. Spider 2 goes to url2 and does something.

Due to the complexities of both spiders I would like to have them separated.
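
For reference, a minimal sketch of the layout described above (all names, URLs, and selectors here are illustrative, not taken from the question):

import scrapy

class SpiderOne(scrapy.Spider):
    # illustrative "spider 1": scrape url1, hand its content to a pipeline,
    # and note any url2 links that "spider 2" should handle later
    name = "spider1"
    start_urls = ["http://example.com/url1"]  # hypothetical url1

    def parse(self, response):
        # an item pipeline configured in settings.py would save this
        yield {"url": response.url, "body": response.text}
        for href in response.css("a::attr(href)").getall():
            if "url2" in href:  # hypothetical check for "url2 appears"
                self.logger.info("url2 found, spider 2 should take over: %s", href)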

My attempt using scrapy crawl:

def parse(self, response):
    p = multiprocessing.Process(
        target=self.testfunc())
    p.join()
    p.start()

def testfunc(self):
    settings = get_project_settings()
    crawler = CrawlerRunner(settings)
    crawler.crawl(<spidername>, <arguments>)

It does load the settings but doesn't crawl:

2015-08-24 14:13:32 [scrapy] INFO: Enabled extensions: CloseSpider, LogStats, CoreStats, SpiderState
2015-08-24 14:13:32 [scrapy] INFO: Enabled downloader middlewares: DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, HttpAuthMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-08-24 14:13:32 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-08-24 14:13:32 [scrapy] INFO: Spider opened
2015-08-24 14:13:32 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

The documentation has an example about launching from a script, but what I'm trying to do is launch another spider while using the scrapy crawl command.

Full code

from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings
from twisted.internet import reactor
from multiprocessing import Process
import scrapy
import os


def info(title):
    print(title)
    print('module name:', __name__)
    if hasattr(os, 'getppid'):  # only available on Unix
        print('parent process:', os.getppid())
    print('process id:', os.getpid())


class TestSpider1(scrapy.Spider):
    name = "test1"
    start_urls = ['http://www.google.com']

    def parse(self, response):
        info('parse')
        a = MyClass()
        a.start_work()


class MyClass(object):

    def start_work(self):
        info('start_work')
        p = Process(target=self.do_work)
        p.start()
        p.join()

    def do_work(self):
        info('do_work')
        settings = get_project_settings()
        runner = CrawlerRunner(settings)
        runner.crawl(TestSpider2)
        d = runner.join()
        d.addBoth(lambda _: reactor.stop())
        reactor.run()
        return

class TestSpider2(scrapy.Spider):

    name = "test2"
    start_urls = ['http://www.google.com']

    def parse(self, response):
        info('testspider2')
        return

What I'm hoping for is something like this:

  1. scrapy crawl test1 (for example, when response.status_code is 200:)
  2. In test1, call scrapy crawl test2


Solution

I won't go into depth since this question is really old, but I'll go ahead and drop this snippet from the official Scrapy docs... You are very close!

import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider1(scrapy.Spider):
    # Your first spider definition
    ...

class MySpider2(scrapy.Spider):
    # Your second spider definition
    ...

process = CrawlerProcess()
process.crawl(MySpider1)
process.crawl(MySpider2)
process.start() # the script will block here until all crawling jobs are finished
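
If you would rather keep CrawlerRunner (as in your attempt), the same practices page shows roughly this pattern, where you drive the Twisted reactor yourself instead of letting CrawlerProcess do it:

from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings

configure_logging()
runner = CrawlerRunner(get_project_settings())
runner.crawl(MySpider1)
runner.crawl(MySpider2)
d = runner.join()                     # deferred that fires when both crawls finish
d.addBoth(lambda _: reactor.stop())
reactor.run()                         # blocks here until both crawls are done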

https://doc.scrapy.org/en/latest/topics/practices.html

And then, using callbacks, you can pass items between your spiders to do whatever logic you're talking about.
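
One possible way to wire that up (a sketch only, assuming spider 1 appends every url2 it finds to a shared list and spider 2 accepts a start_urls argument) is to chain the two crawls with CrawlerRunner:

from twisted.internet import defer, reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings

found_urls = []                               # spider 1 is assumed to append url2 values here
runner = CrawlerRunner(get_project_settings())

@defer.inlineCallbacks
def crawl_sequentially():
    # run spider 1 first; the keyword argument becomes an attribute on the spider
    yield runner.crawl(TestSpider1, found_urls=found_urls)
    if found_urls:
        # then feed whatever it found into spider 2
        yield runner.crawl(TestSpider2, start_urls=found_urls)
    reactor.stop()

crawl_sequentially()
reactor.run()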
