
The mitakad/sglang image is built for the NVIDIA Jetson Orin AGX (SM 87, i.e. compute capability 8.7) and integrates the SGLang framework, enabling efficient deployment of large-language-model inference services on edge devices. It ships a PyTorch build with experimental NCCL support for multi-node parallel inference, targeting AI model deployment in resource-constrained environments.
Run the image with jetson-containers:
```bash
jetson-containers run IMAGE_NAME
```
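For setups without the jetson-containers wrapper, a roughly equivalent plain Docker invocation is sketched below. The exact flags and mounts that jetson-containers adds vary by version, and the Hugging Face cache path is an assumption, so adjust as needed:

```bash
# Approximation of `jetson-containers run` with plain Docker.
# --runtime nvidia exposes the Orin GPU; --network host makes the
# served port (e.g. 8000) reachable without explicit port mapping.
# The cache mount (assumed path) avoids re-downloading model weights.
docker run --runtime nvidia -it --rm \
  --network host \
  -v "$HOME/.cache/huggingface:/root/.cache/huggingface" \
  IMAGE_NAME
```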
```bash
# Serve Qwen3.5-35B-A3B (GPTQ Int4) with NEXTN speculative decoding.
SGLANG_ENABLE_SPEC_V2=1 SGLANG_DISABLE_CUDNN_CHECK=1 \
sglang serve --host 0.0.0.0 --port 8000 \
  --model-path Qwen/Qwen3.5-35B-A3B-GPTQ-Int4 \
  --tp-size 1 \
  --mem-fraction-static 0.9 \
  --context-length 2048 \
  --reasoning-parser qwen3 \
  --tool-call-parser qwen3_coder \
  --speculative-algo NEXTN \
  --speculative-num-steps 3 \
  --speculative-eagle-topk 1 \
  --speculative-num-draft-tokens 4 \
  --quantization moe_wna16 \
  --mamba-scheduler-strategy extra_buffer
```
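After the server starts, readiness can be verified before sending real traffic. A minimal sketch, assuming the standard SGLang HTTP routes (`/health`, `/get_model_info`); check your version's docs if these differ:

```bash
# Returns HTTP 200 once the engine has finished loading the model.
curl -i http://localhost:8000/health

# Reports the model path the server actually loaded.
curl http://localhost:8000/get_model_info
```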
```bash
# Serve Qwen3-0.6B (GPTQ Int8) with the FlashInfer attention backend.
python -m sglang.launch_server --host 0.0.0.0 --port 8000 \
  --model-path Qwen/Qwen3-0.6B-GPTQ-Int8 \
  --tp-size 1 \
  --mem-fraction-static 0.8 \
  --context-length 2048 \
  --reasoning-parser qwen3 \
  --attention-backend flashinfer \
  --quantization gptq
```
Note: the Qwen/Qwen3.5-35B-A3B-FP8 model may hit timeouts during inference.
```bash
# Serve Qwen3-0.6B-FP8 with FP8 quantization.
python -m sglang.launch_server --host 0.0.0.0 --port 8000 \
  --model-path Qwen/Qwen3-0.6B-FP8 \
  --tp-size 1 \
  --mem-fraction-static 0.8 \
  --context-length 2048 \
  --reasoning-parser qwen3 \
  --attention-backend flashinfer \
  --quantization fp8
```
```bash
# Minimal launch: Qwen3-4B-Instruct-2507 with an 8K context window.
python3 -m sglang.launch_server --host 0.0.0.0 --port 8000 \
  --model-path Qwen/Qwen3-4B-Instruct-2507 \
  --mem-fraction-static 0.5 \
  --context-length 8192
```
```bash
# Smoke-test the OpenAI-compatible chat completions endpoint.
curl --location 'http://localhost:8000/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "Qwen/Qwen3-4B-Instruct-2507",
    "messages": [
      { "role": "user", "content": "Why is the sky blue?" }
    ]
  }'
```
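Because the endpoint is OpenAI-compatible, streaming works on the same route by adding `"stream": true`; chunks then arrive as server-sent events (`--no-buffer` keeps curl from holding them back):

```bash
curl --no-buffer --location 'http://localhost:8000/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "Qwen/Qwen3-4B-Instruct-2507",
    "stream": true,
    "messages": [
      { "role": "user", "content": "Why is the sky blue?" }
    ]
  }'
```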
Built with (Ubuntu 22.04 / CUDA 12.6 variant):

```bash
ENABLE_DISTRIBUTED_JETSON_NCCL=1 PYTORCH_FORCE_BUILD=on \
CUDA_VERSION=12.6 PYTHON_VERSION=3.10 LSB_RELEASE=22.04 PYTORCH_VERSION=2.9 \
jetson-containers build sglang:0.5.4-builder
```

Self-test output:

```
testing SGLang...
✅ Memory cleared
Python: 3.12.12 (main, Oct 14 2025, 21:26:46) [Clang 20.1.4 ]
CUDA available: True
GPU 0: Orin
GPU 0 Compute Capability: 8.7
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.9, V12.9.86
CUDA Driver Version: 540.4.0
PyTorch: 2.9.0
sglang: 0.5.3.post3
sgl_kernel: 0.3.16.post3
flashinfer_python: 0.4.1
triton: 3.4.0
transformers: 4.57.1
torchao: 0.9.0
numpy: 2.3.4
aiohttp: 3.13.1
fastapi: 0.119.1
hf_transfer: 0.1.9
huggingface_hub: 0.35.3
interegular: 0.3.3
modelscope: 1.31.0
orjson: 3.11.3
outlines: 1.2.7
packaging: 25.0
psutil: 7.1.1
pydantic: 2.12.3
python-multipart: 0.0.20
pyzmq: 27.1.0
uvicorn: 0.38.0
uvloop: 0.22.1
vllm: Module Not Found
xgrammar: 0.1.25
openai: 2.6.0
***en: 0.12.0
anthropic: 0.71.0
litellm: Module Not Found
decord: Module Not Found
ulimit soft: ***
SGLang OK
```
Built with (Ubuntu 24.04 / CUDA 12.9 variant):

```bash
ENABLE_DISTRIBUTED_JETSON_NCCL=1 PYTORCH_FORCE_BUILD=on \
CUDA_VERSION=12.9 PYTHON_VERSION=3.12 LSB_RELEASE=24.04 PYTORCH_VERSION=2.9 \
jetson-containers build sglang:0.5.4-builder
```

Self-test output:

```
testing SGLang...
✅ Memory cleared
Python: 3.12.12 (main, Oct 14 2025, 21:26:46) [Clang 20.1.4 ]
CUDA available: True
GPU 0: Orin
GPU 0 Compute Capability: 8.7
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.9, V12.9.86
CUDA Driver Version: 540.4.0
PyTorch: 2.9.0
sglang: 0.5.4
sgl_kernel: 0.3.16.post3
flashinfer_python: 0.4.1
triton: 3.4.0
transformers: 4.57.1
torchao: 0.9.0
numpy: 2.3.4
aiohttp: 3.13.1
fastapi: 0.120.0
hf_transfer: 0.1.9
huggingface_hub: 0.36.0
interegular: 0.3.3
modelscope: 1.31.0
orjson: 3.11.4
outlines: 1.2.7
packaging: 25.0
psutil: 7.1.1
pydantic: 2.12.3
python-multipart: 0.0.20
pyzmq: 27.1.0
uvicorn: 0.38.0
uvloop: 0.22.1
vllm: Module Not Found
xgrammar: 0.1.25
openai: 2.6.1
***en: 0.12.0
anthropic: 0.71.0
litellm: 1.79.0
decord2: 2.0.0
ulimit soft: ***
SGLang OK
```
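The dependency dumps above match the output format of SGLang's bundled environment checker, so the same report can be regenerated inside a running container (a minimal sketch, assuming the `sglang.check_env` module is present in this build):

```bash
# Prints Python/CUDA/driver details plus the versions of SGLang's
# dependencies, as shown in the build logs above.
python3 -m sglang.check_env
```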
Official SGLang documentation for the Jetson platform: []
Multi-node inference notes: []
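As a hedged sketch (not official guidance), a two-node tensor-parallel launch typically uses SGLang's multi-node flags (`--dist-init-addr`, `--nnodes`, `--node-rank`); the address 192.168.1.10:5000 below is illustrative, and the flags should be checked against the SGLang version shipped in this image:

```bash
# Node 0 (its IP serves as the rendezvous address):
python -m sglang.launch_server --host 0.0.0.0 --port 8000 \
  --model-path Qwen/Qwen3-4B-Instruct-2507 \
  --tp-size 2 \
  --dist-init-addr 192.168.1.10:5000 \
  --nnodes 2 --node-rank 0

# Node 1:
python -m sglang.launch_server --host 0.0.0.0 --port 8000 \
  --model-path Qwen/Qwen3-4B-Instruct-2507 \
  --tp-size 2 \
  --dist-init-addr 192.168.1.10:5000 \
  --nnodes 2 --node-rank 1
```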



