
mcp/firecrawl🔥 Official Firecrawl MCP Server - Adds powerful web scraping and search to Cursor, Claude and any other LLM clients.
What is an MCP Server?
| Attribute | Details |
|---|---|
| Docker Image | mcp/firecrawl |
| Author | firecrawl |
| Repository | [***] |
| Attribute | Details |
|---|---|
| Dockerfile | [***] |
| Commit | d757025e2e4758eb073a8b26a***c64221755 |
| Docker Image built by | Docker Inc. |
| Docker Scout Health Score | !Docker Scout Health Score |
| Verify Signature | COSIGN_REPOSITORY=mcp/signatures cosign verify mcp/firecrawl --key [***] |
| Licence | MIT License |
| Tools provided by this Server | Short Description |
|---|---|
firecrawl_check_crawl_status | Check the status of a crawl job. |
firecrawl_crawl | Starts a crawl job on a website and extracts content from all pages. |
firecrawl_extract | Extract structured information from web pages using LLM capabilities. |
firecrawl_map | Map a website to discover all indexed URLs on the site. |
firecrawl_scrape | Scrape content from a single URL with advanced options. |
firecrawl_search | Search the web and optionally extract content from search results. |
firecrawl_check_crawl_statusCheck the status of a crawl job.
Usage Example:
json{ "name": "firecrawl_check_crawl_status", "arguments": { "id": "550e8400-e29b-41d4-a716-446655440000" } }
Returns: Status and progress of the crawl job, including results if available.
| Parameters | Type | Description |
|---|---|---|
id | string |
firecrawl_crawlStarts a crawl job on a website and extracts content from all pages.
Best for: Extracting content from multiple related pages, when you need comprehensive coverage. Not recommended for: Extracting content from a single page (use scrape); when token limits are a concern (use map + batch_scrape); when you need fast results (crawling can be slow). Warning: Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control. Common mistakes: Setting limit or maxDiscoveryDepth too high (causes token overflow) or too low (causes missing pages); using crawl for a single page (use scrape instead). Using a /* wildcard is not recommended. Prompt Example: "Get all blog posts from the first two levels of example.com/blog." Usage Example:
json{ "name": "firecrawl_crawl", "arguments": { "url": "[***]", "maxDiscoveryDepth": 5, "limit": 20, "allowExternalLinks": false, "deduplicateSimilarURLs": true, "sitemap": "include" } }
Returns: Operation ID for status checking; use firecrawl_check_crawl_status to check progress.
| Parameters | Type | Description |
|---|---|---|
url | string | |
allowExternalLinks | boolean optional | |
allowSubdomains | boolean optional | |
crawlEntireDomain | boolean optional | |
deduplicateSimilarURLs | boolean optional | |
delay | number optional | |
excludePaths | array optional | |
ignoreQueryParameters | boolean optional | |
includePaths | array optional | |
limit | number optional | |
maxConcurrency | number optional | |
maxDiscoveryDepth | number optional | |
prompt | string optional | |
scrapeOptions | object optional | |
sitemap | string optional | |
webhook | string optional |
firecrawl_extractExtract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
Best for: Extracting specific structured data like prices, names, details from web pages. Not recommended for: When you need the full content of a page (use scrape); when you're not looking for specific structured data. Arguments:
json{ "name": "firecrawl_extract", "arguments": { "urls": ["[***]", "[***]"], "prompt": "Extract product information including name, price, and description", "schema": { "type": "object", "properties": { "name": { "type": "string" }, "price": { "type": "number" }, "description": { "type": "string" } }, "required": ["name", "price"] }, "allowExternalLinks": false, "enableWebSearch": false, "includeSubdomains": false } }
Returns: Extracted structured data as defined by your schema.
| Parameters | Type | Description |
|---|---|---|
urls | array | |
allowExternalLinks | boolean optional | |
enableWebSearch | boolean optional | |
includeSubdomains | boolean optional | |
prompt | string optional | |
schema | object optional |
firecrawl_mapMap a website to discover all indexed URLs on the site.
Best for: Discovering URLs on a website before deciding what to scrape; finding specific sections of a website. Not recommended for: When you already know which specific URL you need (use scrape or batch_scrape); when you need the content of the pages (use scrape after mapping). Common mistakes: Using crawl to discover URLs instead of map. Prompt Example: "List all URLs on example.com." Usage Example:
json{ "name": "firecrawl_map", "arguments": { "url": "[***]" } }
Returns: Array of URLs found on the site.
| Parameters | Type | Description |
|---|---|---|
url | string | |
ignoreQueryParameters | boolean optional | |
includeSubdomains | boolean optional | |
limit | number optional | |
search | string optional | |
sitemap | string optional |
firecrawl_scrapeScrape content from a single URL with advanced options. This is the most powerful, fastest and most reliable scraper tool, if available you should always default to using this tool for any web scraping needs.
Best for: Single page content extraction, when you know exactly which page contains the information. Not recommended for: Multiple pages (use batch_scrape), unknown page (use search), structured data (use extract). Common mistakes: Using scrape for a list of URLs (use batch_scrape instead). If batch scrape doesnt work, just use scrape and call it multiple times. Other Features: Use 'branding' format to extract brand identity (colors, fonts, typography, spacing, UI components) for design analysis or style replication. Prompt Example: "Get the content of the page at [***]" Usage Example:
json{ "name": "firecrawl_scrape", "arguments": { "url": "[***]", "formats": ["markdown"], "maxAge": *** } }
Performance: Add maxAge parameter for 500% faster scrapes using cached data. Returns: Markdown, HTML, or other formats as specified.
| Parameters | Type | Description |
|---|---|---|
url | string | |
actions | array optional | |
excludeTags | array optional | |
formats | array optional | |
includeTags | array optional | |
location | object optional | |
maxAge | number optional | |
mobile | boolean optional | |
onlyMainContent | boolean optional | |
parsers | array optional | |
removeBase64Images | boolean optional | |
skipTlsVerification | boolean optional | |
storeInCache | boolean optional | |
waitFor | number optional |
firecrawl_searchSearch the web and optionally extract content from search results. This is the most powerful web search tool available, and if available you should always default to using this tool for any web search needs.
The query also supports search operators, that you can use if needed to refine the search:
| Operator | Functionality | Examples |
|---|---|---|
"" | Non-fuzzy matches a string of text | "Firecrawl" |
- | Excludes certain keywords or negates other operators | -bad, -site:firecrawl.dev |
site: | Only returns results from a specified website | site:firecrawl.dev |
inurl: | Only returns results that include a word in the URL | inurl:firecrawl |
allinurl: | Only returns results that include multiple words in the URL | allinurl:git firecrawl |
intitle: | Only returns results that include a word in the title of the page | intitle:Firecrawl |
allintitle: | Only returns results that include multiple words in the title of the page | allintitle:firecrawl playground |
related: | Only returns results that are related to a specific domain | related:firecrawl.dev |
imagesize: | Only returns images with exact dimensions | imagesize:1920x1080 |
larger: | Only returns images larger than specified dimensions | larger:1920x1080 |
Best for: Finding specific information across multiple websites, when you don't know which website has the information; when you need the most relevant content for a query. Not recommended for: When you need to search the filesystem. When you already know which website to scrape (use scrape); when you need comprehensive coverage of a single website (use map or crawl. Common mistakes: Using crawl or map for open-ended questions (use search instead). Prompt Example: "Find the latest research papers on AI published in 2023." Sources: web, images, news, default to web unless needed images or news. Scrape Options: Only use scrapeOptions when you think it is absolutely necessary. When you do so default to a lower limit to avoid timeouts, 5 or lower. Optimal Workflow: Search first using firecrawl_search without formats, then after fetching the results, use the scrape tool to get the content of the relevantpage(s) that you want to scrape
Usage Example without formats (Preferred):
json{ "name": "firecrawl_search", "arguments": { "query": "top AI companies", "limit": 5, "sources": [ "web" ] } }
Usage Example with formats:
json{ "name": "firecrawl_search", "arguments": { "query": "latest AI research papers 2023", "limit": 5, "lang": "en", "country": "us", "sources": [ "web", "images", "news" ], "scrapeOptions": { "formats": ["markdown"], "onlyMainContent": true } } }
Returns: Array of search results (with optional scraped content).
| Parameters | Type | Description |
|---|---|---|
query | string | |
filter | string optional | |
limit | number optional | |
location | string optional | |
scrapeOptions | object optional | |
sources | array optional | |
tbs | string optional |
json{ "mcpServers": { "firecrawl": { "command": "docker", "args": [ "run", "-i", "--rm", "-e", "FIRECRAWL_API_URL", "-e", "FIRECRAWL_RETRY_MAX_ATTEMPTS", "-e", "FIRECRAWL_RETRY_INITIAL_DELAY", "-e", "FIRECRAWL_RETRY_MAX_DELAY", "-e", "FIRECRAWL_RETRY_BACKOFF_FACTOR", "-e", "FIRECRAWL_CREDIT_WARNING_THRESHOLD", "-e", "FIRECRAWL_CREDIT_CRITICAL_THRESHOLD", "-e", "FIRECRAWL_API_KEY", "mcp/firecrawl" ], "env": { "FIRECRAWL_API_URL": "[***]", "FIRECRAWL_RETRY_MAX_ATTEMPTS": "5", "FIRECRAWL_RETRY_INITIAL_DELAY": "2000", "FIRECRAWL_RETRY_MAX_DELAY": "30000", "FIRECRAWL_RETRY_BACKOFF_FACTOR": "3", "FIRECRAWL_CREDIT_WARNING_THRESHOLD": "2000", "FIRECRAWL_CREDIT_CRITICAL_THRESHOLD": "500", "FIRECRAWL_API_KEY": "YOUR-API-KEY" } } } }
Why is it safer to run MCP Servers with Docker?






探索更多轩辕镜像的使用方法,找到最适合您系统的配置方式
通过 Docker 登录认证访问私有仓库
在 Linux 系统配置镜像服务
在 Docker Desktop 配置镜像
Docker Compose 项目配置
Kubernetes 集群配置 Containerd
K3s 轻量级 Kubernetes 镜像加速
VS Code Dev Containers 配置
MacOS OrbStack 容器配置
在宝塔面板一键配置镜像
Synology 群晖 NAS 配置
飞牛 fnOS 系统配置镜像
极空间 NAS 系统配置服务
爱快 iKuai 路由系统配置
绿联 NAS 系统配置镜像
QNAP 威联通 NAS 配置
Podman 容器引擎配置
HPC 科学计算容器配置
ghcr、Quay、nvcr 等镜像仓库
无需登录使用专属域名
需要其他帮助?请查看我们的 常见问题Docker 镜像访问常见问题解答 或 提交工单
免费版仅支持 Docker Hub 访问,不承诺可用性和速度;专业版支持更多镜像源,保证可用性和稳定速度,提供优先客服响应。
专业版支持 docker.io、gcr.io、ghcr.io、registry.k8s.io、nvcr.io、quay.io、mcr.microsoft.com、docker.elastic.co 等;免费版仅支持 docker.io。
当返回 402 Payment Required 错误时,表示流量已耗尽,需要充值流量包以恢复服务。
通常由 Docker 版本过低导致,需要升级到 20.x 或更高版本以支持 V2 协议。
先检查 Docker 版本,版本过低则升级;版本正常则验证镜像信息是否正确。
使用 docker tag 命令为镜像打上新标签,去掉域名前缀,使镜像名称更简洁。
来自真实用户的反馈,见证轩辕镜像的优质服务