openeuler/codetrans
The official OPEA docker images
Maintained by: openEuler CloudNative SIG
Where to get help: openEuler CloudNative SIG, openEuler
Current OPEA docker images are built on openEuler. This repository is free to use and exempted from per-user rate limits.
OPEA is an open platform project that lets you create open, multi-provider, robust, and composable GenAI solutions that harness the best innovation across the ecosystem.
The OPEA platform includes:
Detailed framework of composable building blocks for state-of-the-art generative AI systems including LLMs, data stores, and prompt engines
Architectural blueprints of retrieval-augmented generative AI component stack structure and end-to-end workflows
A four-step assessment for grading generative AI systems around performance, features, trustworthiness, and enterprise-grade readiness
Read more about OPEA at opea.dev and explore the OPEA technical documentation at opea-project.github.io
The tag of each CodeTrans docker image consists of the CodeTrans version and the base image version. The details are as follows:
| Tags | Currently | Architectures |
|---|---|---|
| 1.0-oe2403lts | CodeTrans 1.0 on openEuler 24.03-LTS | amd64 |
| 1.2-oe2403lts | CodeTrans 1.2 on openEuler 24.03-LTS | amd64 |
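For reproducible deployments, you can pin one of the tags above instead of `latest`. A minimal sketch (the guard simply skips the pull on machines without a Docker client):

```bash
# Pin a release tag from the table above rather than :latest.
IMAGE="openeuler/codetrans"
TAG="1.2-oe2403lts"   # CodeTrans 1.2 on openEuler 24.03-LTS
REF="${IMAGE}:${TAG}"

# Only attempt the pull where a Docker client is available.
if command -v docker >/dev/null 2>&1; then
  docker pull "${REF}" || echo "docker pull failed; check network/registry access"
fi
```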
The CodeTrans service can be effortlessly deployed on either Intel Gaudi2 or Intel Xeon Scalable Processor.
Currently we support two ways of deploying CodeTrans services with docker compose:
Start services using the docker image on docker hub:
```bash
docker pull openeuler/codetrans:latest
```
Start services using the docker images built from source.
By default, the LLM model is set to a default value as listed below:
| Service | Model |
|---|---|
| LLM | mistralai/Mistral-7B-Instruct-v0.3 |
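The default model can typically be replaced by exporting the model ID before sourcing the environment script. A sketch, assuming the variable is named `LLM_MODEL_ID` (confirm the exact name in `set_env.sh` for your CodeTrans release):

```bash
# NOTE: the variable name LLM_MODEL_ID is an assumption; check set_env.sh
# for the exact name used by your CodeTrans release.
export LLM_MODEL_ID="mistralai/Mistral-7B-Instruct-v0.3"

# To swap in another instruction-tuned model, export it before `source set_env.sh`:
# export LLM_MODEL_ID="your-org/your-model"
echo "LLM model: ${LLM_MODEL_ID}"
```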
To set up environment variables for deploying CodeTrans services, follow these steps:
Set the required environment variables:
```bash
# Example: host_ip="192.168.1.1"
export host_ip="External_Public_IP"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
export no_proxy="Your_No_Proxy"
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
```
If you are in a proxy environment, also set the proxy-related environment variables:
```bash
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
```
Set up other environment variables:
Get set_env.sh here: set_env.sh

```bash
source set_env.sh
```
Get compose.yml here: compose.yml

```bash
docker compose -f compose.yml up -d
```
It will automatically download the docker images from Docker Hub:

```bash
docker pull openeuler/codetrans:latest
docker pull openeuler/codetran***:latest
```
Use the cURL command in a terminal:

```bash
curl http://${host_ip}:7777/v1/codetrans \
  -H "Content-Type: application/json" \
  -d '{"language_from": "Golang","language_to": "Python","source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n}"}'
```
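The same call can be scripted with the HTTP status captured, so failures surface in automation. A sketch, assuming the gateway is listening on port 7777 as above (`host_ip` falls back to `localhost` here purely for illustration):

```bash
# Capture the HTTP status of a CodeTrans request; curl reports 000 when
# the service is unreachable, so the script never aborts mid-check.
status="000"
if command -v curl >/dev/null 2>&1; then
  status=$(curl -s -o /tmp/codetrans_resp.json -w '%{http_code}' \
    "http://${host_ip:-localhost}:7777/v1/codetrans" \
    -H "Content-Type: application/json" \
    -d '{"language_from": "Golang", "language_to": "Python", "source_code": "fmt.Println(42)"}' \
    || true)
fi
echo "HTTP status: ${status}"
```

A `200` status means the translation succeeded and the response body is in `/tmp/codetrans_resp.json`; anything else points at the service or network.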
Access via frontend
To access the frontend, open the following URL in your browser: http://{host_ip}:5173.
By default, the UI runs on port 5173 internally.
