
# beloved70020/bge-m3

This Docker image provides a ready-to-use, high-performance API service for generating text embeddings using the BAAI/bge-m3 model. It is built with FastAPI and Uvicorn, offering an OpenAI-compatible `/v1/embeddings` endpoint for an immediate, out-of-the-box experience. The service inside the container listens on port 8080.

## Features

- **BAAI/bge-m3 model** for high-quality text embeddings.
- Listens on port **8080** inside the container.
- OpenAI-compatible **`/v1/embeddings`** endpoint.
- **FlagEmbedding library** for efficient inference. Supports FP16 on CUDA-enabled GPUs for faster processing (if the host environment provides GPU access to the container).
- **`/health`** endpoint to monitor service status and the loaded BAAI/bge-m3 model.

## Model

This image is specifically built and configured for **BAAI/bge-m3**. The model is downloaded and cached within the image during its build process.
## Running the Container

The service inside the container listens on port 8080. Map your desired host port to the container's port 8080.
To run the service, mapping host port 8080 to the container's port 8080:
```bash
docker run -d -p 8080:8080 -v /dev/shm:/dev/shm beloved70020/bge-m3:latest
```
If you want to use a different host port (e.g., 8100):
```bash
docker run -d -p 8100:8080 -v /dev/shm:/dev/shm beloved70020/bge-m3:latest
```
## API

### POST `/v1/embeddings`

This endpoint is compatible with the OpenAI embeddings API. Ensure the `model` field in your request is `BAAI/bge-m3`.
Request:
`POST /v1/embeddings` with `Content-Type: application/json`:

```json
{
  "input": "Your text string goes here",
  "model": "BAAI/bge-m3"
}
```

The `input` field accepts a single string or a list of strings. The `model` field must be `BAAI/bge-m3` for this service.
Example using cURL (assuming service is mapped to host port 8080):
```bash
curl -X POST http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Hello, world!",
    "model": "BAAI/bge-m3"
  }'
```
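The same request can be made from any HTTP client. A minimal Python sketch using only the standard library (the helper names `build_payload` and `embed` are illustrative, not part of the image):

```python
import json
import urllib.request

def build_payload(texts):
    """Build the JSON body expected by the /v1/embeddings endpoint."""
    return {"input": texts, "model": "BAAI/bge-m3"}

def embed(texts, base_url="http://localhost:8080"):
    """POST the payload and return one embedding vector per input text."""
    req = urllib.request.Request(
        f"{base_url}/v1/embeddings",
        data=json.dumps(build_payload(texts)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Results are matched to inputs via the "index" field of each item.
    return [item["embedding"] for item in sorted(body["data"], key=lambda d: d["index"])]

# Usage (requires the container to be running):
#   vectors = embed(["Hello, world!"])
```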
Response:
```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "embedding": [0.0123, -0.0456, ...],
      "index": 0
    }
  ],
  "model": "BAAI/bge-m3",
  "usage": {
    "prompt_tokens": 4,
    "total_tokens": 4
  }
}
```

The `embedding` array is truncated here for brevity, and the token counts in `usage` are illustrative.
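Embedding vectors returned by the endpoint are typically compared with cosine similarity. A minimal, dependency-free sketch (the toy vectors below are made up for illustration; real bge-m3 dense vectors are 1024-dimensional):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional vectors for illustration only.
v1 = [0.0123, -0.0456, 0.0789]
v2 = [0.0120, -0.0450, 0.0800]
print(round(cosine_similarity(v1, v2), 4))
```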
### GET `/health`

Example using cURL (assuming the service is mapped to host port 8080):
```bash
curl http://localhost:8080/health
```
Response:
```json
{
  "status": "ok",
  "models_loaded": { "embedding": true },
  "model_ids": { "embedding": "BAAI/bge-m3" }
}
```

The `model_ids.embedding` field confirms the specific model loaded.
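A readiness check can parse this response before sending traffic to the service. A sketch (the helper name `is_ready` is illustrative):

```python
import json

def is_ready(health_body: str) -> bool:
    """Return True when /health reports the BAAI/bge-m3 model is loaded."""
    data = json.loads(health_body)
    return (
        data.get("status") == "ok"
        and data.get("models_loaded", {}).get("embedding") is True
        and data.get("model_ids", {}).get("embedding") == "BAAI/bge-m3"
    )

# Sample body matching the documented /health response:
sample = '{"status": "ok", "models_loaded": {"embedding": true}, "model_ids": {"embedding": "BAAI/bge-m3"}}'
print(is_ready(sample))  # → True
```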
## Configuration

This image is designed for an out-of-the-box experience and does not require users to set any environment variables for standard operation. Internal environment variables such as `PORT` (fixed to 8080), `EMBEDDING_MODEL_ID` (fixed to `BAAI/bge-m3`), `HF_HOME`, `HOST`, etc., are pre-configured for optimal functionality and are not intended for user modification.





