hugegraph/store

HugeGraph Distributed Store - NEW

Downloads: 0 · Status: Community image · Maintainer: hugegraph · Repository type: Mirror · Last updated: 2 days ago

HugeGraph Store

Note: Since revision 1.5.0, the HugeGraph-Store code has been migrated to this location.

Overview

HugeGraph Store is a distributed storage backend for HugeGraph that provides high availability, horizontal scalability, and strong consistency for production graph database deployments. Built on RocksDB and Apache JRaft, it serves as the data plane for large-scale graph workloads requiring enterprise-grade reliability.

Core Capabilities

  • Distributed Storage: Hash-based partitioning with automatic data distribution across multiple Store nodes
  • High Availability: Multi-replica data replication using Raft consensus, tolerating node failures without data loss
  • Horizontal Scalability: Dynamic partition allocation and rebalancing for seamless cluster expansion
  • Query Optimization: Advanced query pushdown (filter, aggregation, index) and multi-partition parallel execution
  • Metadata Coordination: Tight integration with HugeGraph PD for cluster management and service discovery
  • High Performance: gRPC-based communication with streaming support for large result sets

Technology Stack

  • Storage Engine: RocksDB 7.7.3 (optimized for graph workloads)
  • Consensus Protocol: Apache JRaft (Ant Financial's Raft implementation)
  • RPC Framework: gRPC + Protocol Buffers
  • Deployment: Java 11+, Docker/Kubernetes support

When to Use HugeGraph Store

Use Store for:

  • Production deployments requiring high availability (99.9%+ uptime)
  • Workloads exceeding single-node storage capacity (100GB+)
  • Multi-tenant or high-concurrency scenarios (1000+ QPS)
  • Environments requiring horizontal scalability and fault tolerance

Use RocksDB Backend for:

  • Development and testing environments
  • Single-node deployments with moderate data size (<100GB)
  • Embedded scenarios where simplicity is preferred over distribution

Architecture

HugeGraph Store is a Maven multi-module project consisting of 9 modules:

| Module | Description |
|---|---|
| hg-store-grpc | gRPC protocol definitions (7 .proto files) and generated Java stubs for Store communication |
| hg-store-common | Shared utilities, query abstractions, constants, and buffer management |
| hg-store-rocksdb | RocksDB abstraction layer with session management and optimized scan iterators |
| hg-store-core | Core storage engine: partition management, Raft integration, metadata coordination, business logic |
| hg-store-client | Java client library for applications to connect to Store cluster and perform operations |
| hg-store-node | Store node server implementation with gRPC services, Raft coordination, and PD integration |
| hg-store-cli | Command-line utilities for Store administration and debugging |
| hg-store-test | Comprehensive unit and integration tests for all Store components |
| hg-store-dist | Distribution assembly: packaging, configuration templates, startup scripts |

Three-Tier Architecture

Client Layer (hugegraph-server)
    ↓ (hg-store-client connects via gRPC)
Store Node Layer (hg-store-node)
    ├─ gRPC Services (Session, Query, State)
    ├─ Partition Engines (each partition = one Raft group)
    └─ PD Integration (heartbeat, partition assignment)
         ↓
Storage Engine Layer (hg-store-core + hg-store-rocksdb)
    ├─ HgStoreEngine (manages all partition engines)
    ├─ PartitionEngine (per-partition Raft state machine)
    └─ RocksDB (persistent storage)

Key Architectural Features

  • Partition-based Distribution: Data is split into partitions (default: hash-based) and distributed across Store nodes
  • Raft Consensus per Partition: Each partition is a separate Raft group with 1-3 replicas (typically 3 in production)
  • PD Coordination: Store nodes register with PD for partition assignment, metadata synchronization, and health monitoring
  • Query Pushdown: Filters, aggregations, and index scans are pushed to Store nodes for parallel execution

For detailed architecture, Raft consensus mechanisms, and partition management, see Distributed Architecture.


Quick Start

Prerequisites

  • Java: 11 or higher
  • Maven: 3.5 or higher
  • HugeGraph PD Cluster: Store requires a running PD cluster for metadata coordination (see PD README)
  • Disk Space: At least 10GB per Store node for data and Raft logs
  • Network: Low-latency network (<5ms) between Store nodes for Raft consensus
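
Before building, you can quickly confirm that the toolchain meets these requirements (simple sanity checks):

bash
# Should report Java 11 or higher
java -version

# Should report Maven 3.5 or higher
mvn -version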

Build

Important: Build hugegraph-struct first, as it's a required dependency.

From the project root:

bash
# Build struct module
mvn install -pl hugegraph-struct -am -DskipTests

# Build Store and all dependencies
mvn clean package -pl hugegraph-store/hg-store-dist -am -DskipTests

The assembled distribution will be available at:

hugegraph-store/apache-hugegraph-store-<version>/lib/hg-store-node-<version>.jar
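
If the build succeeded, the assembled node jar should be present under the distribution's lib directory (a quick check, assuming the default output layout shown above):

bash
# List the assembled Store node jar (the version number will vary)
ls hugegraph-store/apache-hugegraph-store-*/lib/hg-store-node-*.jar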

Configuration

Extract the distribution package and edit conf/application.yml:

Core Settings

| Parameter | Default | Description |
|---|---|---|
| pdserver.address | localhost:8686 | Required: PD cluster endpoints (comma-separated, e.g., 192.168.1.10:8686,192.168.1.11:8686) |
| grpc.host | 127.0.0.1 | gRPC server bind address (use actual IP for production) |
| grpc.port | 8500 | gRPC server port for client connections |
| raft.address | 127.0.0.1:8510 | Raft service address for this Store node |
| raft.snapshotInterval | 1800 | Raft snapshot interval in seconds (30 minutes) |
| server.port | 8520 | REST API port for management and metrics |
| app.data-path | ./storage | Directory for RocksDB data storage (supports multiple paths for multi-disk setups) |
| app.fake-pd | false | Enable built-in PD mode for standalone testing (not for production) |

Single-Node Development Example (with fake-pd)

yaml
pdserver:
  address: localhost:8686  # Ignored when fake-pd is true

grpc:
  host: 127.0.0.1
  port: 8500

raft:
  address: 127.0.0.1:8510
  snapshotInterval: 1800

server:
  port: 8520

app:
  data-path: ./storage
  fake-pd: true  # Built-in PD mode (development only)

3-Node Cluster Example (production)

Prerequisites: A running 3-node PD cluster at 192.168.1.10:8686, 192.168.1.11:8686, 192.168.1.12:8686

Store Node 1 (192.168.1.20):

yaml
pdserver:
  address: 192.168.1.10:8686,192.168.1.11:8686,192.168.1.12:8686

grpc:
  host: 192.168.1.20
  port: 8500

raft:
  address: 192.168.1.20:8510

app:
  data-path: ./storage
  fake-pd: false

Store Node 2 (192.168.1.21):

yaml
pdserver:
  address: 192.168.1.10:8686,192.168.1.11:8686,192.168.1.12:8686

grpc:
  host: 192.168.1.21
  port: 8500

raft:
  address: 192.168.1.21:8510

app:
  data-path: ./storage
  fake-pd: false

Store Node 3 (192.168.1.22):

yaml
pdserver:
  address: 192.168.1.10:8686,192.168.1.11:8686,192.168.1.12:8686

grpc:
  host: 192.168.1.22
  port: 8500

raft:
  address: 192.168.1.22:8510

app:
  data-path: ./storage
  fake-pd: false

For detailed configuration options, RocksDB tuning, and deployment topologies, see Deployment Guide.

Run

Start the Store server:

bash
# Replace {version} with your hugegraph version
# For historical 1.7.0 and earlier releases, use
# apache-hugegraph-store-incubating-{version} instead.
cd apache-hugegraph-store-{version}

# Start Store node
bin/start-hugegraph-store.sh

# Stop Store node
bin/stop-hugegraph-store.sh

# Restart Store node
bin/restart-hugegraph-store.sh

Startup Options

bash
bin/start-hugegraph-store.sh [-g GC_TYPE] [-j "JVM_OPTIONS"]
  • -g: GC type (g1 or ZGC, default: g1)
  • -j: Custom JVM options (e.g., -j "-Xmx16g -Xms8g")

Default JVM memory settings (defined in start-hugegraph-store.sh):

  • Max heap: 32GB
  • Min heap: 512MB
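
For example, to start a node with ZGC and an explicit heap size instead of the defaults above (a usage sketch; tune the values to your hardware):

bash
# ZGC with a 16GB max / 8GB initial heap
bin/start-hugegraph-store.sh -g ZGC -j "-Xmx16g -Xms8g"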

Verify Deployment

Check if Store is running and registered with PD:

bash
# Check process
ps aux | grep hugegraph-store

# Test gRPC endpoint (requires grpcurl)
grpcurl -plaintext localhost:8500 list

# Check REST API health
curl http://localhost:8520/v1/health

# Check logs
tail -f logs/hugegraph-store.log

# Verify registration with PD (from PD node)
curl http://localhost:8620/v1/stores

For production deployment, see Deployment Guide and Best Practices.


Integration with HugeGraph Server

HugeGraph Store serves as a pluggable backend for HugeGraph Server. To use Store as the backend:

1. Configure HugeGraph Server Backend

Edit hugegraph-server/conf/graphs/<graph-name>.properties:

properties
# Backend configuration
backend=hstore
serializer=binary

# Store connection (PD addresses)
store.provider=org.apache.hugegraph.backend.store.hstore.HstoreProvider
store.pd_peers=192.168.1.10:8686,192.168.1.11:8686,192.168.1.12:8686

# Connection pool settings
store.max_sessions=4
store.session_timeout=30000

2. Start HugeGraph Server

Ensure PD and Store clusters are running, then start HugeGraph Server:

bash
cd hugegraph-server
bin/init-store.sh  # Initialize schema
bin/start-hugegraph.sh

3. Verify Backend

bash
# Check backend via REST API
curl --location --request GET 'http://localhost:8080/metrics/backend' \
--header 'Authorization: Bearer <YOUR_ACCESS_TOKEN>'
# Response should show:
# {"backend": "hstore", "nodes": [...]}

Testing

Run Store tests:

bash
# All tests (from hugegraph root)
mvn test -pl hugegraph-store/hg-store-test -am

# Specific test module
mvn test -pl hugegraph-store/hg-store-test -am -Dtest=HgStoreEngineTest

# From hugegraph-store directory
cd hugegraph-store
mvn test

Test Profiles

Store tests are organized into 6 profiles (all active by default):

  • store-client-test: Client library tests
  • store-core-test: Core storage and partition management tests
  • store-common-test: Common utilities and query abstraction tests
  • store-rocksdb-test: RocksDB abstraction layer tests
  • store-server-test: Store node server and gRPC service tests
  • store-raftcore-test: Raft consensus integration tests
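
To exercise a single area, the profile names above can be activated individually with Maven's -P flag (assuming the profile ids match the names listed here):

bash
# Run only the core storage and partition management tests
mvn test -pl hugegraph-store/hg-store-test -am -P store-core-test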

For development workflows and debugging, see Development Guide.


Docker

Build Docker Image

From the project root:

bash
docker build -f hugegraph-store/Dockerfile -t hugegraph-store:latest .

Run Container

bash
docker run -d \
  -p 8520:8520 \
  -p 8500:8500 \
  -p 8510:8510 \
  -v /path/to/conf:/hugegraph-store/conf \
  -v /path/to/storage:/hugegraph-store/storage \
  -e PD_ADDRESS=192.168.1.10:8686,192.168.1.11:8686 \
  --name hugegraph-store \
  hugegraph-store:latest

Exposed Ports:

  • 8520: REST API (management, metrics)
  • 8500: gRPC (client connections)
  • 8510: Raft consensus
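
Once the container is up, you can confirm it is healthy from the host (assuming the same REST health endpoint as a bare-metal deployment):

bash
# Follow the container logs
docker logs -f hugegraph-store

# Probe the management API from the host
curl http://localhost:8520/v1/health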

Docker Compose Example

For a complete HugeGraph distributed deployment (PD + Store + Server), see:

hugegraph-server/hugegraph-dist/docker/example/
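
A minimal way to try that example locally (assuming a docker-compose.yml is provided in that directory):

bash
cd hugegraph-server/hugegraph-dist/docker/example
docker compose up -d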

For Docker and Kubernetes deployment details, see Deployment Guide.


Documentation

Comprehensive documentation for HugeGraph Store:

| Documentation | Description |
|---|---|
| Distributed Architecture | Deep dive into three-tier architecture, Raft consensus, partition management, and PD coordination |
| Deployment Guide | Production deployment topologies, configuration reference, Docker/Kubernetes setup |
| Integration Guide | Integrating Store with HugeGraph Server, client API usage, migrating from other backends |
| Query Engine | Query pushdown mechanisms, multi-partition queries, gRPC API reference |
| Operations Guide | Monitoring and metrics, troubleshooting common issues, backup and recovery, rolling upgrades |
| Best Practices | Hardware sizing, performance tuning, security configuration, high availability design |
| Development Guide | Development environment setup, module architecture, testing strategies, contribution workflow |

Production Deployment Notes

Cluster Topology

Minimum Cluster (development/testing):

  • 3 PD nodes
  • 3 Store nodes
  • 1-3 Server nodes

Recommended Production Cluster:

  • 3-5 PD nodes (odd number for Raft quorum)
  • 6-12 Store nodes (depends on data size and throughput)
  • 3-6 Server nodes (depends on query load)

Large-Scale Cluster:

  • 5 PD nodes
  • 12+ Store nodes (horizontal scaling)
  • 6+ Server nodes (load balancing)

High Availability

  • Store uses Raft consensus for leader election and data replication
  • Each partition has 1-3 replicas (default: 3 in production)
  • Cluster can tolerate up to (N-1)/2 Store node failures per partition (e.g., 1 failure in 3-replica setup)
  • Automatic failover and leader re-election (typically <10 seconds)
  • PD provides cluster-wide coordination and metadata consistency

Partition Strategy

  • Default Partitioning: Hash-based (configurable in PD)
  • Partition Count: Recommended 3-5x the number of Store nodes for balanced distribution
  • Replica Count: 3 replicas per partition for production (configurable)
  • Rebalancing: Automatic partition rebalancing triggered by PD patrol (default: 30 minutes interval)

Network Requirements

  • Latency: <5ms between Store nodes for Raft consensus performance
  • Bandwidth: 1Gbps+ recommended for data replication and query traffic
  • Ports: Ensure firewall allows traffic on 8500 (gRPC), 8510 (Raft), 8520 (REST)
  • Topology: *** rack-aware or availability-zone-aware placement for fault isolation
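
A quick way to confirm the required ports are reachable between nodes (a simple connectivity sketch using netcat; substitute your own Store node IPs):

bash
# From any node, check gRPC, Raft, and REST ports on a peer Store node
for port in 8500 8510 8520; do
  nc -z -w 2 192.168.1.20 "$port" && echo "port $port open" || echo "port $port blocked"
done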

Monitoring

Store exposes metrics via:

  • REST API: http://<store-host>:8520/actuator/metrics
  • Health Check: http://<store-host>:8520/actuator/health
  • Prometheus Integration: Metrics exported in Prometheus format
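
For example, to spot-check these endpoints from a shell (the Prometheus scrape path below is an assumption based on the usual Spring Boot actuator layout; verify it against your deployed version):

bash
# Health status and metric name listing
curl http://localhost:8520/actuator/health
curl http://localhost:8520/actuator/metrics

# Prometheus-format export (path assumed)
curl http://localhost:8520/actuator/prometheus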

Key Metrics to Monitor:

  • Raft leader election count and duration
  • Partition count and distribution
  • RocksDB read/write latency and throughput
  • gRPC request QPS and error rate
  • Disk usage and I/O metrics

For detailed operational guidance, see Operations Guide and Best Practices.


Community

  • Website: [***]
  • Documentation: [***]
  • GitHub: https://github.com/apache/hugegraph
  • Mailing List: ***
  • Issue Tracker: https://github.com/apache/hugegraph/issues

Contributing

Contributions are welcome! Please read our Development Guide and follow the Apache HugeGraph contribution guidelines.

For development workflows, code structure, and testing strategies, see the Development Guide.

License

HugeGraph Store is licensed under the Apache License 2.0.


HugeGraph Store is under active development. Please report issues via GitHub or the mailing list.

How to Pull This Image

You can pull this image with the commands below. Replace <tag> with the specific tag version you need; for all available tags, see the tag list page.

Xuanyuan Mirror accelerated pull command:

docker pull docker.xuanyuan.run/hugegraph/store:<tag>

Usage options:

  • Login-authenticated pull
  • Authentication-free pull (dedicated domain)

Docker Hub native pull command:

docker pull hugegraph/store:<tag>
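
If you want locally cached images to keep their original names after pulling through the mirror, you can retag them and drop the mirror domain prefix (a small convenience step; <tag> is whichever tag you pulled):

bash
docker pull docker.xuanyuan.run/hugegraph/store:<tag>
docker tag docker.xuanyuan.run/hugegraph/store:<tag> hugegraph/store:<tag>
docker rmi docker.xuanyuan.run/hugegraph/store:<tag>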
