To run our spider, execute the following command from the first_scrapy directory:
$ scrapy crawl first
Here, first is the spider name that was specified when the spider was created.
Once the spider finishes crawling, you should see output similar to the following:
2022-08-09 18:13:07-0400 [scrapy] INFO: Scrapy started (bot: tutorial)
2022-08-09 18:13:07-0400 [scrapy] INFO: Optional features available: ...
2022-08-09 18:13:07-0400 [scrapy] INFO: Overridden settings: {}
2022-08-09 18:13:07-0400 [scrapy] INFO: Enabled extensions: ...
2022-08-09 18:13:07-0400 [scrapy] INFO: Enabled downloader middlewares: ...
2022-08-09 18:13:07-0400 [scrapy] INFO: Enabled spider middlewares: ...
2022-08-09 18:13:07-0400 [scrapy] INFO: Enabled item pipelines: ...
2022-08-09 18:13:07-0400 [scrapy] INFO: Spider opened
2022-08-09 18:13:08-0400 [scrapy] DEBUG: Crawled (200) (referer: None)
2022-08-09 18:13:09-0400 [scrapy] DEBUG: Crawled (200) (referer: None)
2022-08-09 18:13:09-0400 [scrapy] INFO: Closing spider (finished)
As the output shows, there is one log line for each URL, and (referer: None) indicates that these are start URLs, which have no referrer. Next, you should find that two new files, books.html and resources.html, have been created in the first_scrapy directory.
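Those files get their names from the URLs the spider crawled: a parse callback typically takes the last path segment of response.url and writes response.body to a file with that name. The following is a minimal sketch of that naming logic in plain Python (the helper name filename_from_url and the example URL are illustrative, not part of Scrapy's API):

```python
def filename_from_url(url: str) -> str:
    # Derive a local filename from the last path segment of the URL,
    # e.g. "http://example.com/resources/books.html" -> "books.html".
    return url.rstrip("/").split("/")[-1]

# Inside a Scrapy parse callback, the equivalent idea would be:
#   filename = filename_from_url(response.url)
#   with open(filename, "wb") as f:
#       f.write(response.body)
print(filename_from_url("http://example.com/resources/books.html"))
```

With start URLs ending in books.html and resources.html, this naming scheme produces exactly the two files observed above.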