
localai/localai

💡 Get help - ❓ FAQ 💭 Discussions 💬

***

📖 Documentation website

💻 Quickstart 🖼️ Models 🚀 Roadmap 🥽 Demo 🌍 Explorer 🛫 Examples
API specifications for local AI inferencing. It allows you to run LLMs and generate images, audio, and more, locally or on-prem with consumer-grade hardware, supporting multiple model families. No GPU is required. It is created and maintained by Ettore Di Giacinto.
🆕 LocalAI is now part of a comprehensive suite of AI tools designed to work together:
| Talk Interface | Generate Audio |
|---|---|
| *Screenshot: LocalAI - Talk* | *Screenshot: LocalAI - Generate audio with voice-en-us-ryan-low* |

| Models Overview | Generate Images |
|---|---|
| *Screenshot: LocalAI - Models* | *Screenshot: LocalAI - Generate images with flux.1-dev* |

| Chat Interface | Home |
|---|---|
| *Screenshot: LocalAI - Chat with localai-functioncall-qwen2.5-7b-v0.5* | *Screenshot: LocalAI API home (c2a39e3)* |

| Login | Swarm |
|---|---|
| *Screenshot: LocalAI - Login* | *Screenshot: LocalAI - P2P dashboard* |
Run the installer script:
```bash
curl [***] | sh
```
Or run with docker:
```bash
# CPU-only image:
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-cpu

# NVIDIA GPU image (CUDA 12):
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12

# Standard image:
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest

# All-in-one CPU image, preloaded with a set of models:
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
```
To load models:
```bash
# From the model gallery (see available models with `local-ai models list`, in the WebUI from the model tab, or visiting [***])
local-ai run llama-3.2-1b-instruct:q4_k_m

# Start LocalAI with the phi-2 model directly from huggingface
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf

# Install and run a model from the Ollama OCI registry
local-ai run ollama://gemma:2b

# Run a model from a configuration file
local-ai run [***]

# Install and run a model from a standard OCI registry (e.g., Docker Hub)
local-ai run oci://localai/phi-2:latest
```
For more information, see 💻 Getting started
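Once the server is up with a model loaded, it can be queried through LocalAI's OpenAI-compatible REST API. A minimal sketch (not part of the steps above) using the gallery model installed earlier; the host and port assume the default `-p 8080:8080` mapping:

```shell
# Request body for the OpenAI-compatible chat completions endpoint.
# The model name assumes llama-3.2-1b-instruct:q4_k_m was installed above.
PAYLOAD='{
  "model": "llama-3.2-1b-instruct:q4_k_m",
  "messages": [{"role": "user", "content": "How are you doing?"}]
}'

# Send the request (prints a hint instead of failing if the server is down).
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "request failed: is LocalAI running?"
```

Because the API mirrors OpenAI's endpoints, existing OpenAI client libraries can typically be pointed at `http://localhost:8080/v1` instead.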