raaftech/elasticsearch

Elasticsearch on Red Hat's OpenJDK 8 image for OpenShift. Can run standalone.

Downloads: 0 · Status: community image · Maintainer: raaftech · Repository type: image · Last updated: 6 years ago

Elasticsearch with Docker and Kubernetes or OpenShift

Although Elasticsearch has some great documentation about using Elasticsearch in a Dockerized environment, it focuses mainly on Docker Compose for anything beyond a single instance. Later, @pires did some great work to get Elasticsearch to play nicely with Kubernetes.

This project, inspired by the work done by @pires, lets you run your own large-scale Elasticsearch production environment on Kubernetes or OpenShift. It simplifies the Kubernetes side of things somewhat (among other things, eliminating the requirement to run privileged initContainers) and does some extra magic to make various older and newer (latest) versions of Elasticsearch play nicely with regard to the introduction and deprecation of certain environment variables.

In the sections below, you'll find out how to build and run this project's Docker image standalone and how to use the included Kubernetes YAML files to deploy an n-scale cluster, tested on Kubernetes 1.10+ and OpenShift 3.9.

As of this writing (2019-04-08) these Dockerfiles have been used with Elasticsearch 6.4.3, 6.5.4, 6.6.2 and 6.7.1.

Table of Contents

  • Pre-requisites
  • Building Docker images
  • Deployment using Kubernetes
  • Using OpenShift
  • Environment Variables and Arguments

Pre-requisites

You need a reasonably recent version of Docker to build and run the Docker image. To run locally, in standalone mode, without the need to actually serve a large number of requests, you should be able to get away with about 4G of memory and a core or two for computation.

To run on Kubernetes, you need a Kubernetes cluster. I tested with versions 1.10 and 1.12 and the Kubernetes services included with OpenShift 3.9. Memory and compute requirements might vary wildly, but to give you an idea: We're running a fairly simple 12 node Elasticsearch cluster with 3 masters, 3 data nodes, 3 ingest nodes and 3 client nodes, totalling about 12 cores, 60GB of RAM and 100GB of storage.

Speaking about storage, bear in mind that there are significant known issues with Elasticsearch data on GlusterFS backed storage. The GlusterFS team is aware of these issues and is tracking them on this GitHub issue page. Red Hat is also aware of these issues as documented here, here and here.

If you are using GlusterFS before version 4.1, you must use the GlusterFS Block variant of GlusterFS storage or you will run into lockfile modification issues due to a known issue with ctime/mtime/utime variance (see links in the previous paragraph for more details). If you're running a version of GlusterFS 4.1 or newer, make sure you enable the ctime feature on your data volumes.

A small note about memory settings: This setup assumes a JVM of version 1.8.0_191 or newer. The reason for that is that these versions of the JVM have the -XX:+UseContainerSupport, -XX:InitialRAMPercentage and -XX:MaxRAMPercentage features, which we use to dynamically provide the dockerized JVM with the right amount of heap based on the memory assigned to the Pod via the Kubernetes YAML file. In other words, you don't need to tell the JVM explicitly how much memory to use; it does that automatically based on the maximum memory assigned to the pod.
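As a rough sketch of what that container-aware heap sizing boils down to (the exact wiring lives in the image's scripts, and the 50% figure here is an illustrative assumption, not a value taken from this repository), the JVM ends up being handed flags along these lines:

```text
# Respect the container's cgroup memory limit rather than the host's RAM.
-XX:+UseContainerSupport
# Size the initial and maximum heap as a percentage of container memory;
# with a pod memory limit of 4Gi, 50% yields a 2GiB heap.
-XX:InitialRAMPercentage=50.0
-XX:MaxRAMPercentage=50.0
```

This is also why the general rule of assigning a pod roughly double the JVM heap still holds: the remaining memory covers off-heap needs such as metaspace, thread stacks and the filesystem cache.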

Finally, I'm assuming a fairly modern OS environment where you have the docker and kubectl commands available via your PATH environment variable and know how to get by using either cmd.exe, powershell, ksh or bash.

Building Docker images

Everything is essentially built around a minimal Linux + OpenJDK image, onto which we extract the standard Elasticsearch tar distribution, which is then installed and started by the custom setup.sh and run.sh scripts.

The Dockerfile inherits from Red Hat's redhat-openjdk-18/openjdk18-openshift image. Essentially, any image with an OpenJDK of version 8 or higher (yes, Elasticsearch 6.2 and higher can actually run with OpenJDK 9/10/11) and the bash and curl commands could run this. The advantage of the Red Hat images is that they promise to keep these updated, as opposed to the state of affairs with the regular OpenJDK images.

Specifically, Red Hat's "OpenJDK Life Cycle and Support Policy" document mentions: "Q: Do the lifecycle dates apply to the OpenJDK images available in OpenShift? A: Yes. The lifecycle for OpenJDK 8 applies to the container image available in the Red Hat Container Catalog, and the OpenJDK 11 lifecycle will apply when it is released."

Alright, without further ado, let's get that standalone Docker image up + running! First off, you should git clone this repository somewhere, start your command shell of choice and cd into the cloned repository's directory. Run the following:

  • docker build -t someorg/elasticsearch .
  • docker run -d -p 9200:9200 -p 9300:9300 --rm --name es someorg/elasticsearch
  • docker logs -f es (<ctrl-c> when you've seen enough)
  • curl http://localhost:9200 (returns the main info json)

And to clean up afterwards:

  • docker stop es
  • docker container prune (optional, not needed when --rm was passed to docker run)
  • docker volume prune (optional, cleans up the volume entries)
  • docker rmi someorg/elasticsearch (removes the previously built image)
  • docker rmi registry.access.redhat.com/redhat-openjdk-18/openjdk18-openshift (removes the parent image)

If you completed the steps above, congratulations, you ran your first (I'm assuming) Elasticsearch! This instance did not get configured with any of the options that the various environment variables make possible, as the default values of those variables essentially enable a standalone Elasticsearch node with all bells + whistles.

Changing those environment variables, along with a few Docker-specific settings, is what makes it possible for a single Docker image to assume different roles in the Elasticsearch environment, and it's this that is key to running this setup in a Kubernetes environment. For more info on that, read on!

Deployment using Kubernetes

In the kubernetes subdirectory you'll find a selection of yaml files that set up various resources within a Kubernetes environment. In this case, there are four types of resources:

  • route: An HTTP endpoint that exposes the service for the outside world to consume;
  • service: A port definition that defines which ports should be open amongst pods;
  • statefulset: A set of pods that have identity and state associated with them;
  • deployment: A set of pods that have no persistent identity and usually no state;

Essentially, the resources in a Kubernetes environment represent the things you care about when configuring a service on a system: how the service is accessed externally (routes), what ports the services use when talking to each other (services), which systems have data and identity that should be consistent across multiple lifecycles/restarts (statefulsets) and which systems are simply interchangeable workers without state or identity I care about (deployments).

The yaml files in the kubernetes directory set up the routes and services and parameterize the Docker image to run in a specific way for each role of an Elasticsearch node in the cluster. If you read the yaml files, you'll see them setting various environment variables in a certain way: this configures the Docker image to assume a certain set of Elasticsearch responsibilities.
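To illustrate what that parameterization looks like, here is a hypothetical excerpt of such a pod spec (not a verbatim copy of this repository's files; the variable values are assumptions) configuring the image as a dedicated data node:

```yaml
# Hypothetical statefulset excerpt: the same Docker image, told via
# environment variables to take on only the data node role.
env:
  - name: ES_CLUSTER_NAME
    value: "elasticsearch-default"
  - name: ES_NODE_MASTER
    value: "false"
  - name: ES_NODE_DATA
    value: "true"
  - name: ES_NODE_INGEST
    value: "false"
  - name: ES_DISCOVERY_SERVICE
    value: "es-transport"
```

The master, ingest and client variants differ only in which of the three ES_NODE_* flags are set to true.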

When you have your Kubernetes environment set up and available for interaction with the kubectl command, cd into the kubernetes subdirectory, take a look at the default sizings in the statefulset and deployment files (in particular, look at the size of the storage claims, the number of cpu cores and the amount of memory assigned) and create your cluster as follows:

  • kubectl create -f service-es-transport.yaml
  • kubectl create -f service-es-http.yaml
  • kubectl create -f route-es-http.yaml
  • kubectl create -f statefulset-es-master.yaml
  • kubectl create -f statefulset-es-data.yaml
  • kubectl create -f deployment-es-ingest.yaml
  • kubectl create -f deployment-es-client.yaml

Note that the defaults currently defined in the yaml files are sized for a medium-scale real-world deployment; that means about 60GiB of RAM and about 12 cores of CPU available in your cluster. If you're just playing around, feel free to lower these to whatever you think you can get away with. Bear in mind that as a general rule, you need to assign double the amount of RAM to a pod compared to the amount of RAM you assign to the JVM using the -Xms and -Xmx parameters. Finally, keep in mind that the persistent storage classes in your Kubernetes cluster might be named differently than the ones mentioned in the yaml files. To check names that would work in your cluster, issue a kubectl get sc command, which will show you the available storage classes.

Using OpenShift

OpenShift v3 and later are based on Kubernetes. OpenShift adds a ton of nice features related to image building and versioning, authentication and isolation, and is definitely worth checking out. You can read more about OpenShift on the okd.io site.

OpenShift tries to keep its Kubernetes-related parts as compatible as feasible, so you can run this cluster setup on your OpenShift environment by issuing an oc login and simply replacing the kubectl part of the commands above with oc, for example: oc create -f service-es-transport.yaml, etc.

Environment Variables and Arguments

As mentioned before, the Docker image can be parameterized at build and runtime with various arguments and environment variables. Arguments (the ARG keyword in a Dockerfile) are things which exist at build time (i.e., during docker build). Environment variables exist during build and runtime (i.e., also during docker run).

ARG PROXY_URL

Default: none

Specifies a proxy URL that can be used during build time to make curl use a proxy when fetching the necessary artifacts during a setup.sh run. Example value: [***].

ARG NO_PROXY

Default: none

Allows one to explicitly specify a comma-separated list of IP addresses and (partial) hostnames which should not be accessed using a proxy. You can partially specify a hostname as follows: .example.com, which would match all hosts ending in .example.com. Example value: localhost,127.0.0.1,.example.com.

ENV HOME

Default: /elasticsearch

The home directory of the Elasticsearch installation. Don't change this; doing so will definitely give unexpected results.

ENV PATH

Default: /elasticsearch/bin:$PATH

The default path for the image, prefixed with the bin directory of the Elasticsearch installation. If you change this, be sure to keep the /elasticsearch/bin directory as a first entry.

ENV ES_ALLOW_MMAPFS

Default: true

Available since Elasticsearch 6.5.0; has no effect on versions prior to 6.5.0. Allows or disallows mmapfs as an index backend. Can be set to false when you don't have root permissions on the underlying platform to set vm.max_map_count to at least 262144. Be sure to also set ES_INDEX_STORE_TYPE to niofs or simplefs.

When Elasticsearch boots up and detects it is not just running on localhost, it will invoke the bootchecks mechanism to make sure various things are sanely configured. One of those checks is for the value of vm.max_map_count to be at least 262144. Prior to Elasticsearch version 6.5.0, there was no way to avoid that bootcheck, not even when you'd set index.store.type to anything other than mmapfs; the reason for that is that index.store.type simply defines a default and can be overridden during index creation.

The value of vm.max_map_count is only relevant when index.store.type is mmapfs and when node.store.allow_mmapfs is true which is the resulting default on at least Linux and macOS when index.store.type is set to fs or mmapfs.
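Putting the two related knobs together, disabling mmapfs without root access on the host would, per the description above, render an elasticsearch.yml fragment roughly like this (a sketch of the resulting settings, assuming Elasticsearch 6.5.0 or newer):

```yaml
# Resulting settings when ES_ALLOW_MMAPFS=false and
# ES_INDEX_STORE_TYPE=niofs; sidesteps the vm.max_map_count bootcheck.
node.store.allow_mmapfs: false
index.store.type: niofs
```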

ENV ES_JAVA_OPTS

Default: -Xms1g -Xmx1g -XX:ParallelGCThreads=1

Can be used to set a selection of JVM parameters. The default shown above sets the minimum and maximum heap sizes to an equal amount, disabling the JVM's dynamic heap growth and shrink behaviour (which can incur a performance penalty), and sets the ParallelGCThreads option to 1, guaranteeing at most one concurrent garbage collection thread running at any given moment. This last setting is specific to the CMS (ConcurrentMarkSweep) garbage collector which is configured in the jvm.options configuration file.

Note that these settings can also be set in the jvm.options file, but setting them here allows you to override them on a per-instance basis.

ENV ES_ARCHIVE_BASEURL

Default: [***]

Controls where setup.sh retrieves its installation payloads from. When you specify a SNAPSHOT version of Elasticsearch in ES_VERSION, you need to set this to [***]. For regular stable releases, the default is fine.

ENV ES_ARCHIVE_KEYID

Default: 46095ACC8548582C1A2699A9D27D666CD88E42B4

The public key id which Elastic Co uses to sign their released artifacts. The setup.sh script uses this id and the downloaded hash file to retrieve the associated public key from a PGP keyserver and afterwards determine if the downloaded artifact is valid.

ENV ES_CLUSTER_NAME

Default: elasticsearch-default

The name of your Elasticsearch instance.

ENV ES_DISCOVERY_SERVICE

Default: none

This effectively sets discovery.zen.ping.unicast.hosts in the Elasticsearch configuration file. Zen Discovery is the default built-in discovery module for cluster nodes in Elasticsearch. The Kubernetes configuration files set this value to es-transport which is the name of the Kubernetes Service that defines an inter-node transport port 9300 for nodes to talk to each other.

Note: the documentation from Elastic Co tells us that this value can be a list of hosts. I surmise that Kubernetes actually creates a DNS entry for a service name that resolves to multiple hosts (DNS round-robin style), but I'm not sure of this (Edit: the Kubernetes documentation on Services seems to indeed indicate that that's the case).
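For reference, the rendered Elasticsearch configuration sketched here (assuming the es-transport service name used by this project's yaml files) would contain:

```yaml
# Zen discovery pointed at the es-transport Kubernetes service; the
# service's DNS name resolves to the pods behind transport port 9300.
discovery.zen.ping.unicast.hosts: es-transport
```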

ENV ES_HTTP_CORS_ALLOW_ORIGIN

Default: *

Which origins to allow (see ES_HTTP_CORS_ENABLE below for more details on cross-origin resource sharing). If you prepend and append a / to the value, it will be treated as a regular expression, allowing you to support both HTTP and HTTPS. For example, using /https?:\/\/localhost(:[0-9]+)?/ would return the request header appropriately in both cases.

The default in our case, *, is a valid value but is considered a security risk, as your Elasticsearch instance is open to cross-origin requests from anywhere; it is strongly suggested you change this.

Also check out Elastic Co's page documenting the HTTP Module for more details.
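The regular-expression behaviour can be sanity-checked outside Elasticsearch. The sketch below uses grep -E as a stand-in for Elasticsearch's own matcher (which may differ in detail) to show that the example pattern accepts both schemes and an optional port:

```shell
# The origin pattern from the example above, without the surrounding slashes.
pattern='https?://localhost(:[0-9]+)?'

# Both HTTP and HTTPS origins, with or without a port, should match.
for origin in http://localhost https://localhost:9200; do
  if printf '%s\n' "$origin" | grep -Eq "^${pattern}\$"; then
    echo "$origin matches"
  else
    echo "$origin does not match"
  fi
done
```

An origin such as http://evil.example would fall through to the non-matching branch, which is exactly why anchoring the expression matters.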

ENV ES_HTTP_CORS_ENABLE

Default: true

Enable or disable cross-origin resource sharing, i.e. whether a client on another origin can execute requests against Elasticsearch. For more details, see the rather excellent *** page on Cross-Origin Resource Sharing.

ENV ES_INDEX_AUTO_CREATE

Default: true

By default, POST'ing a document to a non-existent index automatically creates that index. Automatic index creation can be disabled by setting this environment variable to .kibana*,.logstash*,.management*,.monitoring*,.security*,.triggered_watches*,.watcher-history*,.watches*. In that case, an index needs to be created explicitly before POST'ing documents to it. Notice that it cannot simply be false; the value has to be a whitelist of indexes that may be auto-created, and the list in this example is an up-to-date list of well-known system indices.
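Concretely, that whitelist would correspond to an entry along these lines in the rendered Elasticsearch configuration (a sketch; action.auto_create_index is the setting the Elasticsearch reference documents for this behaviour):

```yaml
# Only well-known system indices may be auto-created; all other indices
# must be created explicitly before documents are POSTed to them.
action.auto_create_index: ".kibana*,.logstash*,.management*,.monitoring*,.security*,.triggered_watches*,.watcher-history*,.watches*"
```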

ENV ES_INDEX_STORE_TYPE

Default: fs

The default leaves the selection of an index store type implementation to Elasticsearch. On Linux and macOS, that would be mmapfs and on Windows it's simplefs.

Note that this value simply specifies the default index store type and does not actually restrict the specification of index store types at index creation time. Also see ES_ALLOW_MMAPFS.

See the reference documentation page on the Store Module for more detailed information about the possible values here.

ENV ES_MAX_LOCAL_STORAGE_NODES

Default: 1

This setting limits how many Elasticsearch processes can interact with a local data path and should in almost all cases be set to 1. If you're running multiple Elasticsearch processes locally, and you want them to all access the same data path, you can increase this number. Should usually not be higher than 1 in production. Check the Node Data Path Settings in the Elasticsearch reference guide for more details.

ENV ES_MEMORY_LOCK

Default: false

From the Elasticsearch reference guide: When the JVM does a major garbage collection it touches every page of the heap. If any of those pages are swapped out to disk they will have to be swapped back in to memory. That causes lots of disk thrashing that Elasticsearch would much rather use to service requests.

There are several ways to configure a system to disallow swapping. One way is by requesting the JVM to lock the heap in memory through mlockall (Unix) or VirtualLock (Windows). This is done via the Elasticsearch setting bootstrap.memory_lock.

However, there are cases where this setting can be passed to Elasticsearch but Elasticsearch is not able to lock the heap (e.g., if the elasticsearch user does not have memlock unlimited). The memory lock check verifies that, if the bootstrap.memory_lock setting is enabled, the JVM was indeed able to lock the heap.

The default in our case is false, which disables the check. If you know you have memlock unlimited, you can set this value to true.

ENV ES_NETWORK_HOST

Default: _site_

Elasticsearch will bind to this hostname or IP address and publish (advertise) this host to other nodes in the cluster. Accepts an IP address, hostname, a special value, or an array of any combination of these.

Special values are: _[networkInterface]_ (addresses of a network interface, for example _en0_), _local_ (any loopback addresses on the system, for example 127.0.0.1), _site_ (any site-local addresses on the system, for example 192.168.0.1, 172.16.0.1 or 10.0.0.1) and _global_ (any globally-scoped addresses on the system, for example 8.8.8.8).

ENV ES_NODE_DATA

Default: true

Will this Elasticsearch instance fulfill a Data Node role?

ENV ES_NODE_INGEST

Default: true

Will this Elasticsearch instance fulfill an Ingest Node role?

ENV ES_NODE_MASTER

Default: true

Will this Elasticsearch instance fulfill a Master Node role?

ENV ES_NUMBER_OF_MASTERS

Default: 1

Sets discovery.zen.minimum_master_nodes. This is the minimum number of master-eligible nodes that need to join a newly elected master in order for an election to complete and for the elected node to accept its mastership.

The same setting also controls the minimum number of active master eligible nodes that should be a part of any active cluster. If this requirement is not met the active master node will step down and a new master election will begin.

This setting must be set to a quorum of your master eligible nodes ((master_eligible_nodes/2)+1). It is recommended to avoid having only two master eligible nodes, since a quorum of two is two. Therefore, a loss of either master eligible node will result in an inoperable cluster. In practice, three master eligible nodes and a minimum_master_nodes of two is a good option.

Be sure to read the documentation on Avoiding Split Brain.
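The quorum arithmetic above is simple enough to sketch in shell (the variable names here are illustrative, not anything the image itself reads):

```shell
# Quorum of master-eligible nodes: (n / 2) + 1, using integer division.
master_eligible_nodes=3
minimum_master_nodes=$(( master_eligible_nodes / 2 + 1 ))

# prints: with 3 master-eligible nodes, set ES_NUMBER_OF_MASTERS to 2
echo "with $master_eligible_nodes master-eligible nodes, set ES_NUMBER_OF_MASTERS to $minimum_master_nodes"
```

Note how the formula makes the two-node case pointless: (2 / 2) + 1 is also 2, so losing either node leaves the cluster without a quorum.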

ENV ES_REPO_LOCATIONS

Default: none

The shared file system repository ("type": "fs") uses the shared file system to store snapshots. In order to register the shared file system repository it is necessary to mount the same shared filesystem to the same location on all master and data nodes. This location (or one of its parent directories) must be registered in the path.repo setting on all master and data nodes.

Setting ES_REPO_LOCATIONS to /mnt/backups would result in path.repo being set to /mnt/backups in the Elasticsearch configuration.

See the documentation related to the Shared File System Repository in the Elasticsearch reference guide for more information.

ENV ES_SHARD_ALLOCATION_AWARENESS_ENABLED

Default: false

When running nodes on multiple VMs on the same physical server, on multiple racks, or across multiple zones or domains, it is more likely that two nodes on the same physical server, in the same rack, or in the same zone or domain will crash at the same time, rather than two unrelated nodes crashing simultaneously.

If Elasticsearch is aware of the physical configuration of your hardware, it can ensure that the primary shard and its replica shards are spread across different physical servers, racks, or zones, to minimise the risk of losing all shard copies at the same time.

See the reference documentation on Shard Allocation Awareness in the Elasticsearch reference guide for more information about this.

ENV ES_SHARD_ALLOCATION_AWARENESS_ATTRIBUTE_KEY

Default: none

Specifies the attribute key name. Ends up as node.attr.<some-key-name> in the Elasticsearch configuration.

ENV ES_SHARD_ALLOCATION_AWARENESS_ATTRIBUTE_VALUE

Default: none

Specifies the attribute value for the attribute key name specified with ES_SHARD_ALLOCATION_AWARENESS_ATTRIBUTE_KEY. Ends up as the value of node.attr.<some-key-name>.

When this value is a path to a file within the container, the last line of that file will be used as the value instead.
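Taken together with ES_SHARD_ALLOCATION_AWARENESS_ENABLED, a key of zone and a value of zone-a (both hypothetical names chosen for illustration) would end up in the Elasticsearch configuration roughly as:

```yaml
# Hypothetical attribute key/value: tags this node with its zone and
# tells the allocator to spread shard copies across different zones.
node.attr.zone: zone-a
cluster.routing.allocation.awareness.attributes: zone
```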

ENV ES_VERSION

Default: whatever is the latest stable

Specifies the Elasticsearch version. Only takes effect at docker build time. You can specify a stable version like 6.7.1 or 6.6.2. Snapshot versions can be specified as 7.0.0-SNAPSHOT. Note that snapshots are not on the regular downloadable artifacts server; see ES_ARCHIVE_BASEURL for details about how to change where the artifacts are fetched from.
