
[![Build Status](https://github.com/easypi/docker-scrapyd/actions/workflows/build.yaml/badge.svg)](https://github.com/EasyPi/docker-scrapyd)
[scrapy](https://github.com/scrapy/scrapy) is an open source and collaborative framework for extracting the data you need from websites, in a fast, simple, yet extensible way.

[scrapyd](https://github.com/scrapy/scrapyd) is a service for running Scrapy spiders. It allows you to deploy your Scrapy projects and control their spiders using an HTTP JSON API.

[scrapyd-client](https://github.com/scrapy/scrapyd-client) is a client for scrapyd. It provides the scrapyd-deploy utility, which allows you to deploy your project to a Scrapyd server.

[scrapy-splash](https://github.com/scrapinghub/scrapy-splash) provides Scrapy+JavaScript integration using Splash.

[scrapyrt](https://github.com/scrapinghub/scrapyrt) allows you to easily add an HTTP API to your existing Scrapy project.

[spidermon](https://github.com/scrapinghub/spidermon) is a framework to build monitors for Scrapy spiders.

[scrapy-poet](https://github.com/scrapinghub/scrapy-poet) is the web-poet Page Object pattern implementation for Scrapy.
This image is based on debian:bullseye, with the latest stable releases of the seven Python packages described above installed.
Please use it as the base image for your own projects.
:warning: Scrapy (since 2.0.0) has dropped support for Python 2.7, which reached end-of-life on 2020-01-01.
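If you build on top of this image, a minimal Dockerfile can be written inline the same way the spider file is created below. This is only a sketch: the project layout, `requirements.txt`, and the `myproject-scrapyd` tag are illustrative assumptions, not part of this repository.

```bash
$ cat > Dockerfile << _EOF_
# start from this image so scrapy, scrapyd and the other packages are preinstalled
FROM vimagick/scrapyd
# hypothetical layout: copy extra dependencies and your project code
COPY requirements.txt /code/requirements.txt
RUN pip install -r /code/requirements.txt
COPY . /code
WORKDIR /code
_EOF_
$ docker build -t myproject-scrapyd .
```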
File: docker-compose.yml

```yaml
version: "3.8"

services:

  scrapyd:
    image: vimagick/scrapyd
    ports:
      - "6800:6800"
    volumes:
      - ./data:/var/lib/scrapyd
      - /usr/local/lib/python3.9/dist-packages
    restart: unless-stopped

  scrapy:
    image: vimagick/scrapyd
    command: bash
    volumes:
      - .:/code
    working_dir: /code
    restart: unless-stopped

  scrapyrt:
    image: vimagick/scrapyd
    command: scrapyrt -i 0.0.0.0 -p 9080
    ports:
      - "9080:9080"
    volumes:
      - .:/code
    working_dir: /code
    restart: unless-stopped
```
```bash
$ docker-compose up -d scrapyd
$ docker-compose logs -f scrapyd
$ docker cp scrapyd_scrapyd_1:/var/lib/scrapyd/items .
$ tree items
└── myproject
    └── myspider
        └── ad6153ee5b0711e68bc70242ac110005.jl
```
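With the scrapyd service up, its JSON API can also be polled directly to see what is pending, running, or finished. These are standard scrapyd endpoints; `myproject` matches the example project deployed below.

```bash
$ curl http://localhost:6800/daemonstatus.json                  # overall pending/running/finished counts
$ curl http://localhost:6800/listprojects.json                  # projects deployed to this scrapyd
$ curl 'http://localhost:6800/listjobs.json?project=myproject'  # per-project job ids and timestamps
```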
```bash
$ mkvirtualenv -p python3 webbot
$ pip install scrapy scrapyd-client
$ scrapy startproject myproject
$ cd myproject
$ setvirtualenvproject
$ scrapy genspider myspider mydomain.com
$ scrapy edit myspider
$ scrapy list
$ vi scrapy.cfg
$ scrapyd-client deploy
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=myspider
$ firefox http://localhost:6800
```
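The `schedule.json` call returns a job id, and extra `-d` parameters are passed through to the spider as arguments or, via `setting=`, as Scrapy settings. A sketch using a hypothetical `start_url` spider argument, plus the standard `cancel.json` endpoint to stop a run:

```bash
# schedule with a custom setting and a (hypothetical) spider argument
$ curl http://localhost:6800/schedule.json \
    -d project=myproject -d spider=myspider \
    -d setting=DOWNLOAD_DELAY=2 -d start_url=http://mydomain.com/
# cancel a run using the job id returned by schedule.json
$ curl http://localhost:6800/cancel.json -d project=myproject -d job=<jobid>
```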
File: scrapy.cfg
```ini
[settings]
default = myproject.settings

[deploy]
url = http://localhost:6800/
project = myproject
```
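The scrapyd-deploy utility mentioned above also understands named deploy targets, so the same project can be pushed to more than one Scrapyd instance. The `production` target name and host below are assumptions, shown only as a sketch:

```bash
# append a named target to scrapy.cfg, then deploy to it explicitly
$ cat >> scrapy.cfg << _EOF_
[deploy:production]
url = http://scrapyd.example.com:6800/
project = myproject
_EOF_
$ scrapyd-deploy production
```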
```bash
$ cat > stackoverflow_spider.py << _EOF_
import scrapy

class StackOverflowSpider(scrapy.Spider):
    name = 'stackoverflow'
    start_urls = ['http://stackoverflow.com/questions?sort=votes']

    def parse(self, response):
        for href in response.css('.question-summary h3 a::attr(href)'):
            full_url = response.urljoin(href.extract())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_question(self, response):
        yield {
            'title': response.css('h1 a::text').extract()[0],
            'votes': response.css('.question div[itemprop="upvoteCount"]::text').extract()[0],
            'body': response.css('.question .postcell').extract()[0],
            'tags': response.css('.question .post-tag::text').extract(),
            'link': response.url,
        }
_EOF_
$ docker-compose run --rm scrapy
>>> scrapy runspider stackoverflow_spider.py -o top-stackoverflow-questions.jl
>>> cat top-stackoverflow-questions.jl
>>> exit
```
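Stack Overflow's markup changes over time, so the CSS selectors above may need adjusting. `scrapy shell`, shipped with Scrapy itself, is a convenient place to try selectors interactively before editing the spider:

```bash
$ docker-compose run --rm scrapy
>>> scrapy shell 'http://stackoverflow.com/questions?sort=votes'
# inside the shell, experiment with selectors, e.g.
#   response.css('.question-summary h3 a::attr(href)').getall()[:3]
>>> exit
```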
```bash
$ git clone https://github.com/scrapy/quotesbot.git .
$ docker-compose up -d scrapyrt
$ curl -s 'http://localhost:9080/crawl.json?spider_name=toscrape-css&callback=parse&url=http://quotes.toscrape.com/&max_requests=5' | jq -c '.items[]'
```
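ScrapyRT also accepts POST requests to `/crawl.json` with a JSON body (see the ScrapyRT docs), which is handy when the start request is easier to express as JSON than as query-string parameters. A sketch against the same quotesbot spider:

```bash
$ curl -s -X POST 'http://localhost:9080/crawl.json' \
    -H 'Content-Type: application/json' \
    -d '{"spider_name": "toscrape-css", "request": {"url": "http://quotes.toscrape.com/", "callback": "parse"}}' \
    | jq -c '.items[]'
```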





