alexsuntop/mineru Docker Image

The MinerU image is built from the official Dockerfile. It can run a vLLM backend server, a document-parsing API service, or a Gradio WebUI, supports local model loading and GPU resource configuration, and is intended for AI document parsing and interactive use.

MinerU image, built from the official Dockerfile.

Source repo: Sun-ZhenXing/compose-anything.

docker-compose.yaml:

```yaml
x-default: &default
  restart: unless-stopped
  volumes:
    - &localtime /etc/localtime:/etc/localtime:ro
    - &timezone /etc/timezone:/etc/timezone:ro
  logging:
    driver: json-file
    options:
      max-size: 100m

x-mineru-vllm: &mineru-vllm
  <<: *default
  image: ${MINERU_DOCKER_IMAGE:-alexsuntop/mineru:latest}
  environment:
    MINERU_MODEL_SOURCE: local
  ulimits:
    memlock: -1
    stack: 67108864
  ipc: host
  deploy:
    resources:
      limits:
        cpus: '8.0'
        memory: 4G
      reservations:
        cpus: '2.0'
        memory: 2G
        devices:
          - driver: nvidia
            device_ids: [ '0' ]
            capabilities: [ gpu ]

services:
  mineru-vllm-server:
    <<: *mineru-vllm
    container_name: mineru-vllm-server
    profiles: ["vllm-server"]
    ports:
      - 30000:30000
    entrypoint: mineru-vllm-server
    command:
      --host 0.0.0.0
      --port 30000
      # --data-parallel-size 2  # If using multiple GPUs, increase throughput with vLLM's multi-GPU parallel mode
      # --gpu-memory-utilization 0.5  # On a single GPU with VRAM shortages, use this parameter to reduce the KV cache size; if VRAM issues persist, lower it further to 0.4 or below
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:30000/health || exit 1"]

  mineru-api:
    <<: *mineru-vllm
    container_name: mineru-api
    profiles: ["api"]
    ports:
      - 8000:8000
    entrypoint: mineru-api
    command:
      --host 0.0.0.0
      --port 8000
      # parameters for the vllm engine
      # --data-parallel-size 2  # If using multiple GPUs, increase throughput with vLLM's multi-GPU parallel mode
      # --gpu-memory-utilization 0.5  # On a single GPU with VRAM shortages, use this parameter to reduce the KV cache size; if VRAM issues persist, lower it further to 0.4 or below

  mineru-gradio:
    <<: *mineru-vllm
    container_name: mineru-gradio
    profiles: ["gradio"]
    ports:
      - 7860:7860
    entrypoint: mineru-gradio
    command:
      --server-name 0.0.0.0
      --server-port 7860
      --enable-vllm-engine true  # Enable the vLLM engine for Gradio
      # --enable-api false  # Set to false to disable the API
      # --max-convert-pages 20  # Limit the number of pages per conversion
      # parameters for the vllm engine
      # --data-parallel-size 2  # If using multiple GPUs, increase throughput with vLLM's multi-GPU parallel mode
      # --gpu-memory-utilization 0.5  # On a single GPU with VRAM shortages, use this parameter to reduce the KV cache size; if VRAM issues persist, lower it further to 0.4 or below
```
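The compose file reads the image reference from the `MINERU_DOCKER_IMAGE` variable, falling back to `alexsuntop/mineru:latest`. A minimal `.env` sketch for pinning a specific tag (the variable name comes from the compose file above; `<tag>` is a placeholder to replace with a real tag):

```env
# .env — docker compose loads this file from the project directory
# Replace <tag> with the image tag you want to pin
MINERU_DOCKER_IMAGE=alexsuntop/mineru:<tag>
```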

Start the vLLM backend server:

```bash
docker compose --profile vllm-server up -d
```
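The commented `--data-parallel-size` option in the compose file pairs with reserving more than one GPU. A hedged `docker-compose.override.yaml` sketch for a two-GPU host (the device IDs and parallel size are assumptions to adapt to your hardware):

```yaml
# docker-compose.override.yaml — merged automatically by `docker compose`
services:
  mineru-vllm-server:
    command:
      --host 0.0.0.0
      --port 30000
      --data-parallel-size 2  # one data-parallel replica per GPU (assumes 2 GPUs)
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: [ '0', '1' ]  # assumed GPU indices on this host
              capabilities: [ gpu ]
```

The effective merged configuration can be inspected with `docker compose --profile vllm-server config` before starting.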

Start the document-parsing API:

```bash
docker compose --profile api up -d
```

Start the Gradio WebUI:

```bash
docker compose --profile gradio up -d
```

Test the vLLM backend from the host:

```bash
pip install mineru
mineru -p demo.pdf -o ./output -b vlm-http-client -u http://localhost:30000
```
