
Is it possible to run another spider from a Scrapy spider?

Date: 2023-05-26

This post looks at whether you can run another spider from a Scrapy spider; hopefully it's a useful reference if you've hit the same problem.

Problem description


For now I have 2 spiders, and what I would like to do is:

1. Spider 1 goes to url1 and, if url2 appears, calls spider 2 with url2. It also saves the content of url1 through a pipeline.
2. Spider 2 goes to url2 and does something.

Due to the complexity of both spiders I would like to keep them separated.
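
Regardless of how the second crawl ends up being launched, the "keep them separated" part is easy to preserve if the second spider receives url2 as a spider argument instead of hard-coding it. A minimal sketch, assuming illustrative names (UrlSpider, start_url) that are not from the question:

    import scrapy


    class UrlSpider(scrapy.Spider):
        """Stand-alone spider that crawls whatever URL it is handed."""
        name = 'urlspider'

        def __init__(self, start_url=None, *args, **kwargs):
            super().__init__(*args, **kwargs)
            # Populated from -a start_url=... on the command line, or from
            # a keyword argument passed to a CrawlerRunner's crawl().
            self.start_urls = [start_url] if start_url else []

        def parse(self, response):
            yield {'url': response.url}

Both `scrapy crawl urlspider -a start_url=http://url2` and `runner.crawl(UrlSpider, start_url='http://url2')` deliver the argument the same way, so the spider stays oblivious to who launched it.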

My attempt using scrapy crawl:

                def parse(self, response):
                    p = multiprocessing.Process(
                        target=self.testfunc())
                    p.join()
                    p.start()
                
                def testfunc(self):
                    settings = get_project_settings()
                    crawler = CrawlerRunner(settings)
                    crawler.crawl(<spidername>, <arguments>)
                

It does load the settings but doesn't crawl:

                2015-08-24 14:13:32 [scrapy] INFO: Enabled extensions: CloseSpider, LogStats, CoreStats, SpiderState
                2015-08-24 14:13:32 [scrapy] INFO: Enabled downloader middlewares: DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, HttpAuthMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
                2015-08-24 14:13:32 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
                2015-08-24 14:13:32 [scrapy] INFO: Spider opened
                2015-08-24 14:13:32 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
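
For the record, the attempt above has three separate problems: `target=self.testfunc()` calls the function immediately and passes its return value (None) as the target; `p.join()` is called before `p.start()`, which fails because only a started process can be joined; and `CrawlerRunner.crawl()` only schedules a crawl and returns a Deferred, it does not run one by itself. A corrected sketch of the same idea follows, with the caveat that running the second crawl from a child process is my reading of the intent, not something the question confirms, and `start_url` is a hypothetical spider argument:

    import multiprocessing

    import scrapy
    from scrapy.crawler import CrawlerProcess
    from scrapy.utils.project import get_project_settings


    def run_spider2(url):
        # CrawlerProcess starts and stops its own reactor, which is why it
        # needs a fresh process: the parent's reactor is already running.
        process = CrawlerProcess(get_project_settings())
        process.crawl('test2', start_url=url)  # spider looked up by name
        process.start()  # blocks this child process until the crawl is done


    class TestSpider1(scrapy.Spider):
        name = 'test1'
        start_urls = ['http://www.google.com']

        def parse(self, response):
            # 'spawn' gives the child a clean interpreter, so it does not
            # inherit the parent's already-running reactor.
            ctx = multiprocessing.get_context('spawn')
            p = ctx.Process(target=run_spider2, args=(response.url,))
            p.start()  # start first ...
            p.join()   # ... then join; note this blocks spider 1's reactor
                       # until spider 2 finishes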
                

The documentation has an example of launching from a script, but what I'm trying to do is launch another spider while using the scrapy crawl command.

Full code

                from scrapy.crawler import CrawlerRunner
                from scrapy.utils.project import get_project_settings
                from twisted.internet import reactor
                from multiprocessing import Process
                import scrapy
                import os
                
                
                def info(title):
                    print(title)
                    print('module name:', __name__)
                    if hasattr(os, 'getppid'):  # only available on Unix
                        print('parent process:', os.getppid())
                    print('process id:', os.getpid())
                
                
                class TestSpider1(scrapy.Spider):
                    name = "test1"
                    start_urls = ['http://www.google.com']
                
                    def parse(self, response):
                        info('parse')
                        a = MyClass()
                        a.start_work()
                
                
                class MyClass(object):
                
                    def start_work(self):
                        info('start_work')
                        p = Process(target=self.do_work)
                        p.start()
                        p.join()
                
                    def do_work(self):
                
                        info('do_work')
                        settings = get_project_settings()
                        runner = CrawlerRunner(settings)
                        runner.crawl(TestSpider2)
                        d = runner.join()
                        d.addBoth(lambda _: reactor.stop())
                        reactor.run()
                        return
                
                class TestSpider2(scrapy.Spider):
                
                    name = "test2"
                    start_urls = ['http://www.google.com']
                
                    def parse(self, response):
                        info('testspider2')
                        return
                

What I'm hoping for is something like this:

1. scrapy crawl test1 (and, for example, when response.status_code is 200:)
2. Inside test1, call scrapy crawl test2

Recommended answer

I won't go into depth since this question is really old, but I'll go ahead and drop this snippet from the official Scrapy docs... You are very close! lol

                import scrapy
                from scrapy.crawler import CrawlerProcess
                
                class MySpider1(scrapy.Spider):
                    # Your first spider definition
                    ...
                
                class MySpider2(scrapy.Spider):
                    # Your second spider definition
                    ...
                
                process = CrawlerProcess()
                process.crawl(MySpider1)
                process.crawl(MySpider2)
                process.start() # the script will block here until all crawling jobs are finished
                

https://doc.scrapy.org/en/latest/topics/practices.html

And then, using callbacks, you can pass items between your spiders and do whatever logic functions you're talking about.
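
The same practices page also shows how to run spiders sequentially on a single reactor with CrawlerRunner, and that pattern can be bent into the conditional flow the question asks for. A minimal sketch, assuming made-up spider and field names (found_url, start_url) and using the item_scraped signal as glue between the two crawls; the glue is mine, not part of the answer above:

    import scrapy
    from scrapy import signals
    from scrapy.crawler import CrawlerRunner
    from scrapy.utils.log import configure_logging
    from twisted.internet import defer, reactor


    class Spider1(scrapy.Spider):
        name = 'spider1'
        start_urls = ['http://example.com']

        def parse(self, response):
            # Pretend url2 was found on the page; it goes out as a normal
            # item, so a pipeline saving url1's content still runs.
            yield {'found_url': 'http://example.com/page2'}


    class Spider2(scrapy.Spider):
        name = 'spider2'

        def __init__(self, start_url=None, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.start_urls = [start_url] if start_url else []

        def parse(self, response):
            yield {'visited': response.url}


    configure_logging()
    runner = CrawlerRunner()
    found = []


    def collect(item, response, spider):
        # item_scraped fires once per item Spider1 yields.
        if 'found_url' in item:
            found.append(item['found_url'])


    @defer.inlineCallbacks
    def crawl():
        crawler = runner.create_crawler(Spider1)
        crawler.signals.connect(collect, signal=signals.item_scraped)
        yield runner.crawl(crawler)
        if found:  # only launch spider 2 if url2 actually appeared
            yield runner.crawl(Spider2, start_url=found[0])
        reactor.stop()


    crawl()
    reactor.run()

Because the second `runner.crawl()` only fires after the first crawl's deferred resolves, spider 2 sees whatever spider 1 collected.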

That wraps up the question of whether you can run another spider from a Scrapy spider; hopefully the recommended answer helps.
