dustynv/text-generation-webui

Upstream: https://github.com/dusty-nv/jetson-containers/packages/llm/text-generation-webui
Downloads: 0 · Status: community image · Maintainer: dustynv · Repository type: image · Last updated: 2 years ago

text-generation-webui

CONTAINERS IMAGES RUN BUILD

  • text-generation-webui from https://github.com/oobabooga/text-generation-webui (found under /opt/text-generation-webui)
  • includes CUDA-optimized model loaders for:
    • https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/llama_cpp
    • https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/exllama
    • https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/auto_gptq
    • https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/transformers
  • see the tutorial at the Jetson Generative AI Lab

[!WARNING]
If you're using the llama.cpp loader, the model format has changed from GGML to GGUF. Existing GGML models can be converted using the convert-llama-ggmlv3-to-gguf.py script in https://github.com/ggerganov/llama.cpp (or you can often find the GGUF conversions on HuggingFace Hub)
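For reference, a minimal conversion sketch, assuming the script's --input/--output flags (check its --help in your llama.cpp checkout, since the converter's options have changed across versions); the .bin input filename is hypothetical:

```bash
# run from a llama.cpp checkout; converts a GGML v3 model to GGUF in place
python3 convert-llama-ggmlv3-to-gguf.py \
  --input /data/models/text-generation-webui/llama-2-13b-chat.ggmlv3.q4_K_M.bin \
  --output /data/models/text-generation-webui/llama-2-13b-chat.Q4_K_M.gguf
```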

This container has a default run command that will automatically start the webserver like this:

```bash
cd /opt/text-generation-webui && python3 server.py \
  --model-dir=/data/models/text-generation-webui \
  --listen --verbose
```

To launch the container, run the command below, and then navigate your browser to http://HOSTNAME:7860

```bash
./run.sh $(./autotag text-generation-webui)
```

Command-Line Options

While the server and models are dynamically configurable from within the webui at runtime, see here for optional command-line settings:

  • https://github.com/oobabooga/text-generation-webui/tree/main#basic-settings

For example, after you've downloaded a model, you can load it directly at startup like so:

```bash
./run.sh $(./autotag text-generation-webui) /bin/bash -c \
  "cd /opt/text-generation-webui && python3 server.py \
    --model-dir=/data/models/text-generation-webui \
    --model=llama-2-13b-chat.Q4_K_M.gguf \
    --loader=llamacpp \
    --n-gpu-layers=128 \
    --listen --chat --verbose"
```

Downloading Models

See https://github.com/oobabooga/text-generation-webui/tree/main#downloading-models for instructions for downloading models - you can do this from within the webui, or by running their https://github.com/oobabooga/text-generation-webui/blob/main/download-model.py script:

```bash
./run.sh --workdir=/opt/text-generation-webui $(./autotag text-generation-webui) /bin/bash -c \
  'python3 download-model.py --output=/data/models/text-generation-webui TheBloke/Llama-2-7b-Chat-GPTQ'
```

This will download the specified model from HuggingFace Hub and place it under the mounted /data/models/text-generation-webui directory (which is where you should store models so they aren't lost when the container exits).
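On the host, the /data mount normally corresponds to the data/ directory of your jetson-containers checkout (path assumed from the project's default mounts), so you can confirm a download persisted after the container exits:

```bash
# on the host, outside the container (path assumes a jetson-containers checkout)
ls -lh jetson-containers/data/models/text-generation-webui
```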

Tips and Tricks

  • The fastest model loader to use is currently https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/llama_cpp with 4-bit quantized GGUF models
    • Remember to set n-gpu-layers to 128 in the loader settings
    • If you're using Llama-2-70B, set n_gqa to 8 (otherwise an error will occur); see the sketch after this list
    • Tested using the recommended Q4_K_M model quantizations
  • Unless you loaded a model fine-tuned for chat, use text completion mode in the Default or Notebook tab
  • If you're using a Llama-2 chat model, use the Instruct chat mode and set the Instruction Template to Llama-v2 (in the Parameters tab)
    • This will make sure the correct chat prompt format is being used for Llama-2
    • You can alter your agent's behavior/personality/etc. by tweaking this prompt template (the "Answer the questions." part)
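Putting the llama.cpp tips together, a hedged launch sketch for a Llama-2-70B chat model; the --n_gqa flag and the 70B GGUF filename are assumptions (verify against python3 server.py --help inside the container, as flags vary across webui versions):

```bash
# hypothetical 70B example: n-gpu-layers=128 offloads all layers, n_gqa=8 is required for 70B
./run.sh $(./autotag text-generation-webui) /bin/bash -c \
  "cd /opt/text-generation-webui && python3 server.py \
    --model-dir=/data/models/text-generation-webui \
    --model=llama-2-70b-chat.Q4_K_M.gguf \
    --loader=llamacpp \
    --n-gpu-layers=128 \
    --n_gqa=8 \
    --listen --verbose"
```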

Things to do with your LLM

>> What games do you like to play?

I'm a large language model, so I can play text-based games and answer questions on a wide variety of topics. Here are some game ideas:

- 20 Questions:  You or I can think of an object, and the other player tries to guess what it is by asking up to 20 yes-or-no questions.  The goal is to guess the object in as few questions as possible.
- Hangman: I can think of a word or phrase and you try to guess it by suggesting letters. Each incorrect guess will bring the hangman one step closer to being complete!
- Word association: I'll give you a word and you have to come up with a word that is related to it. We can keep playing back and forth as long as you'd like!
- Trivia: I can provide questions on a wide range of topics, from history to science to pop culture. See how many questions you can answer correctly!
- Storytelling: I can start telling a story and then stop at a cliffhanger. You can then try to guess what happens next or even take over the storytelling and continue it in your own
- Jokes: I love to tell jokes and make people laugh with my "Dad humor"!  Knock knock!  *giggles*
CONTAINERS

text-generation-webui:main
   Aliases: text-generation-webui
   Requires: L4T ['>=34.1.0']
   Dependencies: https://github.com/dusty-nv/jetson-containers/tree/master/packages/build/build-essential https://github.com/dusty-nv/jetson-containers/tree/master/packages/cuda/cuda https://github.com/dusty-nv/jetson-containers/tree/master/packages/cuda/cudnn https://github.com/dusty-nv/jetson-containers/tree/master/packages/build/python https://github.com/dusty-nv/jetson-containers/tree/master/packages/numpy https://github.com/dusty-nv/jetson-containers/tree/master/packages/build/cmake/cmake_pip https://github.com/dusty-nv/jetson-containers/tree/master/packages/onnx https://github.com/dusty-nv/jetson-containers/tree/master/packages/pytorch https://github.com/dusty-nv/jetson-containers/tree/master/packages/pytorch/torchvision https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/huggingface_hub https://github.com/dusty-nv/jetson-containers/tree/master/packages/build/rust https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/transformers https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/auto_gptq https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/exllama https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/llama_cpp
   Dockerfile: https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/text-generation-webui/Dockerfile
   Images: https://hub.docker.com/r/dustynv/text-generation-webui/tags (2023-12-18, 8.1GB)

text-generation-webui:1.7
   Requires: L4T ['>=34.1.0']
   Dependencies: https://github.com/dusty-nv/jetson-containers/tree/master/packages/build/build-essential https://github.com/dusty-nv/jetson-containers/tree/master/packages/cuda/cuda https://github.com/dusty-nv/jetson-containers/tree/master/packages/cuda/cudnn https://github.com/dusty-nv/jetson-containers/tree/master/packages/build/python https://github.com/dusty-nv/jetson-containers/tree/master/packages/numpy https://github.com/dusty-nv/jetson-containers/tree/master/packages/build/cmake/cmake_pip https://github.com/dusty-nv/jetson-containers/tree/master/packages/onnx https://github.com/dusty-nv/jetson-containers/tree/master/packages/pytorch https://github.com/dusty-nv/jetson-containers/tree/master/packages/pytorch/torchvision https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/huggingface_hub https://github.com/dusty-nv/jetson-containers/tree/master/packages/build/rust https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/transformers https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/auto_gptq https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/exllama https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/llama_cpp
   Dockerfile: https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/text-generation-webui/Dockerfile
   Images: https://hub.docker.com/r/dustynv/text-generation-webui/tags (2023-12-05, 6.4GB)

text-generation-webui:6a7cd01
   Requires: L4T ['>=34.1.0']
   Dependencies: https://github.com/dusty-nv/jetson-containers/tree/master/packages/build/build-essential https://github.com/dusty-nv/jetson-containers/tree/master/packages/cuda/cuda https://github.com/dusty-nv/jetson-containers/tree/master/packages/cuda/cudnn https://github.com/dusty-nv/jetson-containers/tree/master/packages/build/python https://github.com/dusty-nv/jetson-containers/tree/master/packages/numpy https://github.com/dusty-nv/jetson-containers/tree/master/packages/build/cmake/cmake_pip https://github.com/dusty-nv/jetson-containers/tree/master/packages/onnx https://github.com/dusty-nv/jetson-containers/tree/master/packages/pytorch https://github.com/dusty-nv/jetson-containers/tree/master/packages/pytorch/torchvision https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/huggingface_hub https://github.com/dusty-nv/jetson-containers/tree/master/packages/build/rust https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/transformers https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/auto_gptq https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/exllama https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/llama_cpp
   Dockerfile: https://github.com/dusty-nv/jetson-containers/tree/master/packages/llm/text-generation-webui/Dockerfile
CONTAINER IMAGES

| Repository/Tag | Date | Arch | Size |
|---|---|---|---|
| https://hub.docker.com/r/dustynv/text-generation-webui/tags | 2023-12-05 | arm64 | 6.4GB |
| https://hub.docker.com/r/dustynv/text-generation-webui/tags | 2023-12-18 | arm64 | 8.1GB |
| https://hub.docker.com/r/dustynv/text-generation-webui/tags | 2024-02-01 | arm64 | 6.6GB |
| https://hub.docker.com/r/dustynv/text-generation-webui/tags | 2024-02-03 | arm64 | 6.6GB |
| https://hub.docker.com/r/dustynv/text-generation-webui/tags | 2024-02-01 | arm64 | 6.6GB |
| https://hub.docker.com/r/dustynv/text-generation-webui/tags | 2024-04-12 | arm64 | 6.4GB |
| https://hub.docker.com/r/dustynv/text-generation-webui/tags | 2024-02-03 | arm64 | 8.3GB |

Container images are compatible with other minor versions of JetPack/L4T:
    • L4T R32.7 containers can run on other versions of L4T R32.7 (JetPack 4.6+)
    • L4T R35.x containers can run on other versions of L4T R35.x (JetPack 5.1+)
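To check which L4T release your Jetson is running (and hence which of the images above are compatible), one common check is reading the release file that JetPack installs:

```bash
# prints a line like: # R35 (release), REVISION: 4.1, ...
cat /etc/nv_tegra_release
```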

RUN CONTAINER

To start the container, you can use jetson-containers run (https://github.com/dusty-nv/jetson-containers/tree/master/docs/run.md) and autotag (https://github.com/dusty-nv/jetson-containers/tree/master/docs/run.md#autotag), or manually put together a docker run command:

```bash
# automatically pull or build a compatible container image
jetson-containers run $(autotag text-generation-webui)

# or explicitly specify one of the container images above
jetson-containers run dustynv/text-generation-webui:r35.4.1-cp310

# or if using 'docker run' (specify image and mounts/etc)
sudo docker run --runtime nvidia -it --rm --network=host dustynv/text-generation-webui:r35.4.1-cp310
```

jetson-containers run (https://github.com/dusty-nv/jetson-containers/tree/master/docs/run.md) forwards arguments to docker run with some defaults added (like --runtime nvidia, mounting a /data cache, and detecting devices).
autotag (https://github.com/dusty-nv/jetson-containers/tree/master/docs/run.md#autotag) finds a container image that's compatible with your version of JetPack/L4T: either locally, pulled from a registry, or by building it.
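For illustration, a rough docker run equivalent of those defaults; the exact flags the launcher adds vary by version, and the /data path below assumes a jetson-containers checkout in your home directory:

```bash
# approximate what 'jetson-containers run' constructs: GPU runtime, host networking, /data cache
sudo docker run --runtime nvidia -it --rm --network=host \
  --volume ~/jetson-containers/data:/data \
  dustynv/text-generation-webui:r35.4.1-cp310
```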

To mount your own directories into the container, use the -v or --volume flags:

```bash
jetson-containers run -v /path/on/host:/path/in/container $(autotag text-generation-webui)
```

To launch the container running a command, as opposed to an interactive shell:

```bash
jetson-containers run $(autotag text-generation-webui) my_app --abc xyz
```

You can pass any options to it that you would to docker run, and it'll print out the full command that it constructs before executing it.

BUILD CONTAINER

If you use autotag (https://github.com/dusty-nv/jetson-containers/tree/master/docs/run.md#autotag) as shown above, it'll ask to build the container for you if needed. To manually build it, first follow the system setup (https://github.com/dusty-nv/jetson-containers/tree/master/docs/setup.md), then run:

```bash
jetson-containers build text-generation-webui
```

The dependencies from above will be built into the container, and it'll be tested during the build. See https://github.com/dusty-nv/jetson-containers/tree/master/jetson_containers/build.py for build options.

Pulling the Image

You can pull this image with the commands below. Replace <tag> with a specific tag version; to see all available tags, visit the tag list page.

Xuanyuan Mirror accelerated pull command:

docker pull docker.xuanyuan.run/dustynv/text-generation-webui:<tag>

Two usage options are supported:

  • Login-authenticated pull
  • Auth-free pull via a dedicated domain

Native Docker Hub pull command:

docker pull dustynv/text-generation-webui:<tag>
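For example, an end-to-end sketch using the r35.4.1-cp310 tag listed above: pull through the mirror, then retag to strip the mirror prefix so scripts that reference the Docker Hub name keep working:

```bash
# pull via the Xuanyuan mirror, then retag to the original Docker Hub name
docker pull docker.xuanyuan.run/dustynv/text-generation-webui:r35.4.1-cp310
docker tag docker.xuanyuan.run/dustynv/text-generation-webui:r35.4.1-cp310 \
  dustynv/text-generation-webui:r35.4.1-cp310
```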
