nearpod/kafka
A kafka image meant to run in a Kubernetes StatefulSet.
Downloads: 0 · Status: automated build · Maintainer: nearpod · Repository type: image · Last updated: 8 years ago

docker-infra

Special Notes

  • This image is based on Alpine Linux. Especially for JVM images, this could have unintended consequences and everything may break. YMMV
  • This project uses Zulu, a fully tested, compatibility verified, and trusted binary distribution of the OpenJDK 9, 8, and earlier platforms. Zulu terms of use

kafka

This project contains a Docker image meant to facilitate the deployment of Apache Kafka on Kubernetes using StatefulSets.

Heavily inspired by Kubernetes Kafka (K8SKafka).

Limitations
  1. Persistent Volumes must be used. emptyDirs will likely result in a loss of data (a volumeClaimTemplates sketch follows this list).
  2. Storage media I/O isolation is not generally possible at this time. Consider using Pod Anti-Affinity rules to place noisy neighbors on separate Nodes.
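
A minimal sketch of requesting Persistent Volumes through the StatefulSet (the claim name, storage class, and size are assumptions; the claim name must match a volumeMount in your container spec):

yaml
  volumeClaimTemplates:
  - metadata:
      name: datadir            # hypothetical claim name
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi       # assumption: size for partition count * retention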
Docker Image

The Docker image contained in this repository is built from a base Ubuntu 16.04 image, the latest release of the OpenJDK JRE based on the 1.8 JVM (JDK 8u111), and the latest stable release of Kafka (0.10.2.0) using Scala 2.11. Ubuntu is a much larger image than BusyBox or Alpine, but those images ship musl or uClibc, which requires a custom version of OpenJDK built against a libc runtime other than glibc. While there are smaller Kafka images based on Alpine and BusyBox, the interactions between Kafka, the JVM, and glibc are better understood and easier to debug.

The image is built such that the Kafka JVM process runs as a non-root user. By default, this user is kafka, with UID 1000 and GID 1000. The Kafka package is installed into the /opt/kafka directory, all configuration is installed into /opt/kafka/config, and all executables are in /opt/kafka/bin. Due to the implementation of the scripts in /opt/kafka/bin, it is not feasible to symbolically link them into the /usr/bin directory. As such, the /opt/kafka/bin directory is added to the PATH environment variable.
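
A hedged way to verify those defaults on a running broker (assumes a Pod named kafka-0, as in the testing examples below):

shell
> kubectl exec kafka-0 -- id
# expect: uid=1000(kafka) gid=1000(kafka)
> kubectl exec kafka-0 -- which kafka-topics.sh
# expect: /opt/kafka/bin/kafka-topics.sh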

ZooKeeper

Kafka requires an installation of Apache ZooKeeper for broker configuration storage and coordination. An example of how to deploy a ZooKeeper ensemble on Kubernetes can be found here. For testing purposes, an ensemble of 1-3 servers is sufficient. For production use, you should consider deploying at least 5 servers so that you can tolerate the loss of one server during the planned maintenance of another. If you are running ZooKeeper on Kubernetes, it is best to use a separate ensemble for each Kafka cluster. For production use, you should ensure that each ZooKeeper server has at least 2 GiB of heap with at least 4 GiB of reserved memory for the Pod. As ZooKeeper is not particularly CPU intensive, 2 CPUs per server should be sufficient for most use cases. If you are running Kubernetes on a Cloud Provider (e.g. GCP, Azure, or AWS), you should provision a fast storage class for the ZooKeeper PVs. As the PVs are backed by network attached storage, there is little to be gained from isolating the write ahead log from the snapshots directory.
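
A minimal sketch of that sizing as a Pod resource spec (the container name and heap variable are assumptions; the values mirror the guidance above):

yaml
    containers:
    - name: zookeeper          # hypothetical container name
      resources:
        requests:
          memory: "4Gi"        # at least 4 GiB reserved for the Pod
          cpu: "2"             # 2 CPUs per server
      env:
      - name: ZK_HEAP_SIZE     # hypothetical variable; give the JVM >= 2 GiB of heap
        value: "2G"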

Headless Service

The Kafka StatefulSet requires a Headless Service to control the network domain for the Kafka brokers. The YAML below creates a Headless Service that allows brokers to be discovered and exposes port 9093 for client connections.

yaml
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
  labels:
    app: kafka
spec:
  ports:
  - port: 9093
    name: server
  clusterIP: None
  selector:
    app: kafka
StatefulSet

The Kafka StatefulSet deploys a configurable number of replicas on the Kubernetes cluster. The StatefulSet's serviceName must match the Headless Service above, and replicas specifies the desired number of brokers.

yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-svc
  replicas: 3
  ...
Configuration

This section details the configuration of the Kafka cluster.

Broker Configuration

The configuration for each broker is generated by overriding the default configuration with command line flags. The overrides below cover the high and medium importance configuration parameters from the Kafka documentation.

shell
kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
          --override listeners=PLAINTEXT://:9093 \
          --override zookeeper.connect=zk-0.zk-svc.default.svc.cluster.local:2181,zk-1.zk-svc.default.svc.cluster.local:2181,zk-2.zk-svc.default.svc.cluster.local:2181 \
          --override auto.create.topics.enable=true \
          --override auto.leader.rebalance.enable=true \
          --override background.threads=10 \
          --override compression.type=producer \
          --override delete.topic.enable=false \
          --override leader.imbalance.check.interval.seconds=300 \
          --override leader.imbalance.per.broker.percentage=10 \
          --override log.flush.interval.messages=9223372036854775807 \
          --override log.flush.offset.checkpoint.interval.ms=60000 \
          --override log.flush.scheduler.interval.ms=9223372036854775807 \
          --override log.retention.bytes=-1 \
          --override log.retention.hours=168 \
          --override log.roll.hours=168 \
          --override log.roll.jitter.hours=0 \
          --override log.segment.bytes=*** \
          --override log.segment.delete.delay.ms=60000 \
          --override message.max.bytes=*** \
          --override min.insync.replicas=1 \
          --override num.io.threads=8 \
          --override num.network.threads=3 \
          --override num.recovery.threads.per.data.dir=1 \
          --override num.replica.fetchers=1 \
          --override offset.metadata.max.bytes=4096 \
          --override offsets.commit.required.acks=-1 \
          --override offsets.commit.timeout.ms=5000 \
          --override offsets.load.buffer.size=5242880 \
          --override offsets.retention.check.interval.ms=600000 \
          --override offsets.retention.minutes=1440 \
          --override offsets.topic.compression.codec=0 \
          --override offsets.topic.num.partitions=50 \
          --override offsets.topic.replication.factor=3 \
          --override offsets.topic.segment.bytes=*** \
          --override queued.max.requests=500 \
          --override quota.consumer.default=9223372036854775807 \
          --override quota.producer.default=9223372036854775807 \
          --override replica.fetch.min.bytes=1 \
          --override replica.fetch.wait.max.ms=500 \
          --override replica.high.watermark.checkpoint.interval.ms=5000 \
          --override replica.lag.time.max.ms=*** \
          --override replica.socket.receive.buffer.bytes=65536 \
          --override replica.socket.timeout.ms=30000 \
          --override request.timeout.ms=30000 \
          --override socket.receive.buffer.bytes=*** \
          --override socket.request.max.bytes=*** \
          --override socket.send.buffer.bytes=*** \
          --override unclean.leader.election.enable=true \
          --override zookeeper.session.timeout.ms=6000 \
          --override zookeeper.set.acl=false \
          --override broker.id.generation.enable=true \
          --override connections.max.idle.ms=600000 \
          --override controlled.shutdown.enable=true \
          --override controlled.shutdown.max.retries=3 \
          --override controlled.shutdown.retry.backoff.ms=5000 \
          --override controller.socket.timeout.ms=30000 \
          --override default.replication.factor=1 \
          --override fetch.purgatory.purge.interval.requests=1000 \
          --override group.max.session.timeout.ms=300000 \
          --override group.min.session.timeout.ms=6000 \
          --override inter.broker.protocol.version=0.10.2-IV0 \
          --override log.cleaner.backoff.ms=*** \
          --override log.cleaner.dedupe.buffer.size=*** \
          --override log.cleaner.delete.retention.ms=86400000 \
          --override log.cleaner.enable=true \
          --override log.cleaner.io.buffer.load.factor=0.9 \
          --override log.cleaner.io.buffer.size=524288 \
          --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
          --override log.cleaner.min.cleanable.ratio=0.5 \
          --override log.cleaner.min.compaction.lag.ms=0 \
          --override log.cleaner.threads=1 \
          --override log.cleanup.policy=delete \
          --override log.index.interval.bytes=4096 \
          --override log.index.size.max.bytes=*** \
          --override log.message.timestamp.difference.max.ms=9223372036854775807 \
          --override log.message.timestamp.type=CreateTime \
          --override log.preallocate=false \
          --override log.retention.check.interval.ms=300000 \
          --override max.connections.per.ip=2147483647 \
          --override num.partitions=1 \
          --override producer.purgatory.purge.interval.requests=1000 \
          --override replica.fetch.backoff.ms=1000 \
          --override replica.fetch.max.bytes=*** \
          --override replica.fetch.response.max.bytes=*** \
          --override reserved.broker.max.id=1000
  • Note that the broker.id is extracted from the ordinal index of the StatefulSet's Pods (see the sketch after this list).
  • The listeners configuration must specify the port indicated by the headless service (9093 in this case).
  • The zookeeper.connect string is a comma separated list of the host:port pairs of the ZooKeeper servers in the ensemble.
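
A quick illustration of the ${HOSTNAME##*-} parameter expansion used above, which strips everything up to the last - to recover the Pod's ordinal:

shell
> HOSTNAME=kafka-2
> echo ${HOSTNAME##*-}
2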

OS Image tuning

For production use, it is important to configure the base OS image to allow for a sufficient number of file descriptors for your workload.

  • For each broker, (partitions) * (partition_size / segment_size) approximates the number of log files the Broker will have open at any given time. You must ensure that this will not result in the Broker process dying because it has exhausted its allowable number of file descriptors (a worked example follows this list).
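
For example (all numbers assumed for illustration): a broker hosting 1,000 partitions averaging 50 GiB each, with 1 GiB segments, keeps on the order of 1,000 * (50 / 1) = 50,000 segment files open, before counting index files and sockets. Check the limit the broker process actually sees:

shell
> kubectl exec kafka-0 -- sh -c 'ulimit -n'
# the reported limit should sit comfortably above your estimate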

CPU

Typical production Kafka broker deployments run on dual-processor Xeons with multiple hardware threads per core. However, CPU is highly unlikely to be your bottleneck: an 8 CPU deployment should be more than sufficient for good performance, and you should start by simulating your workload with 2-4 CPUs and scaling up from there.

Memory

Kafka utilizes the OS page cache heavily to buffer data. To fully understand the interaction of Kafka and Linux containers you should read this and this. In particular, it is important to understand the accounting and isolation offered for the page cache by a mem cgroup. If your primary concern is isolation and performance, you should do the following.

  • Determine the number of seconds of data you want to buffer t (time).
  • Determine the total write throughput of the deployment tp (storage/time).
  • tp * t gives the memory requirement that you should reserve. This should be set as the memory request for the container (a worked example follows this list).
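
For instance (numbers assumed for illustration): buffering t = 30 seconds of data at tp = 100 MB/s requires roughly 3 GB, which you would round up and express as the container's memory request:

yaml
      resources:
        requests:
          memory: "3Gi"   # ~ tp (100 MB/s) * t (30 s), rounded up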

Disk

Disk throughput is the most common bottleneck that users encounter with Kafka. Given that Persistent Volumes are backed by network attached storage, the throughput is, in most cases, capped on a per Node basis without respect to the number of Persistent Volumes attached to the Node. For instance, if you are deploying Kafka onto a GKE or GCP based Kubernetes cluster, and if you use the standard PD type, your maximum sustained per instance throughput is 120 MB/s (Write) and 180 MB/s (Read). If you have multiple applications, each with a Persistent Volume mounted, these numbers represent the total achievable throughput. If you find that you have contention, you should consider using Pod Anti-Affinity rules to ensure that noisy neighbors are not collocated on the same Node.
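
On the provisioning side, a faster disk type raises that per-instance ceiling. A hedged sketch of an SSD-backed StorageClass for GKE/GCP (the name is arbitrary; reference it from your Persistent Volume Claims):

yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                 # hypothetical name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd               # SSD persistent disk for higher sustained throughput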

Pod Affinity

The Kafka Pod in the StatefulSet's PodTemplateSpec contains a Pod Anti-Affinity rule and a Pod Affinity rule.

yaml
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: "app"
              operator: In
              values:
              - kafka
          topologyKey: "kubernetes.io/hostname"
      podAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"

The Pod Anti-Affinity rule ensures that two Kafka Brokers will never be launched on the same Node. This isn't strictly necessary, but it helps provide stronger availability guarantees in the presence of Node failure, and it helps alleviate disk throughput bottlenecks. The Pod Affinity rule attempts to collocate Kafka and ZooKeeper on the same Nodes. You will likely have more Kafka brokers than ZooKeeper servers, but the Kubernetes scheduler will attempt to, where possible, collocate Kafka brokers and ZooKeeper servers while respecting the hard spreading enforced by the Pod Anti-Affinity rule. This optimization attempts to minimize the amount of network I/O between the ZooKeeper ensemble and the Kafka cluster. However, if disk contention becomes an issue, it is equally valid to express a Pod Anti-Affinity rule to ensure that ZooKeeper servers and Kafka brokers are not scheduled onto the same Node (a sketch follows).
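
A hedged sketch of that inverse rule, placed in the ZooKeeper Pod template so that ZooKeeper servers avoid Nodes running Kafka:

yaml
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: "app"
              operator: In
              values:
              - kafka
          topologyKey: "kubernetes.io/hostname"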

Testing

The easiest way to test your deployment is to create a topic and use the console producer and consumer. Use kubectl exec to execute a bash shell on one of the brokers.

shell
> kubectl exec -ti kafka-0 -- bash

From the command line, create a topic using kafka-topics.sh.

shell
> kafka-topics.sh --create \
--topic test \
--zookeeper zk-0.zk-svc.default.svc.cluster.local:2181,zk-1.zk-svc.default.svc.cluster.local:2181,zk-2.zk-svc.default.svc.cluster.local:2181 \
--partitions 3 \
--replication-factor 2

Run the console consumer as below.

shell
> kafka-console-consumer.sh --topic test --bootstrap-server localhost:9093

Use kubectl exec to execute a bash shell on another one of the brokers. You can use the same broker, but using a different broker will demonstrate that the system is working across multiple Nodes.

shell
> kubectl exec -ti kafka-1 -- bash

Run the console producer and generate a few messages by typing into stdin. Every time you press Enter you will flush a message to the consumer.

shell
> kafka-console-producer.sh --topic test --broker-list localhost:9093
hello
I like kafka
goodbye

You will see the messages in the console where the consumer is running.

shell
> kafka-console-consumer.sh --topic test --bootstrap-server localhost:9093
hello
I like kafka
goodbye
Horizontal Scaling

You can use kubectl scale to horizontally scale your cluster. The command below will scale the number of brokers to two.

shell
> kubectl scale statefulset kafka --replicas=2

Note that, when you scale a Kafka cluster up or down, you will have to use kafka-reassign-partitions.sh to ensure that your data is correctly replicated and assigned after scaling (a sketch follows).
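
A hedged sketch of that workflow for the test topic created above (the file names and the target broker list are assumptions; run from a broker shell as before):

shell
> cat > topics.json <<EOF
{"version": 1, "topics": [{"topic": "test"}]}
EOF
> kafka-reassign-partitions.sh \
--zookeeper zk-0.zk-svc.default.svc.cluster.local:2181 \
--topics-to-move-json-file topics.json \
--broker-list "0,1" \
--generate
# save the proposed reassignment it prints to reassign.json, then:
> kafka-reassign-partitions.sh \
--zookeeper zk-0.zk-svc.default.svc.cluster.local:2181 \
--reassignment-json-file reassign.json \
--execute
# and confirm completion:
> kafka-reassign-partitions.sh \
--zookeeper zk-0.zk-svc.default.svc.cluster.local:2181 \
--reassignment-json-file reassign.json \
--verify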
