| hg-store-grpc | gRPC protocol definitions and generated Java stubs for Store communication |
| hg-store-common | Shared utilities, query abstractions, constants, and buffer management |
| hg-store-rocksdb | RocksDB abstraction layer with session management and optimized scan iterators |
| hg-store-core | Core storage engine: partition management, Raft integration, metadata coordination, business logic |
| hg-store-client | Java client library for applications to connect to Store cluster and perform operations |
| hg-store-node | Store node server implementation with gRPC services, Raft coordination, and PD integration |
| hg-store-cli | Command-line utilities for Store administration and debugging |
| hg-store-test | Comprehensive unit and integration tests for all Store components |
| hg-store-dist | Distribution assembly: packaging, configuration templates, startup scripts |
```
Client Layer (hugegraph-server)
  ↓ (hg-store-client connects via gRPC)
Store Node Layer (hg-store-node)
 ├─ gRPC Services (Session, Query, State)
 ├─ Partition Engines (each partition = one Raft group)
 └─ PD Integration (heartbeat, partition assignment)
  ↓
Storage Engine Layer (hg-store-core + hg-store-rocksdb)
 ├─ HgStoreEngine (manages all partition engines)
 ├─ PartitionEngine (per-partition Raft state machine)
 └─ RocksDB (persistent storage)
```
For detailed architecture, Raft consensus mechanisms, and partition management, see Distributed Architecture.
Important: Build hugegraph-struct first, as it's a required dependency.
From the project root:
```bash
# Build struct module
mvn install -pl hugegraph-struct -am -DskipTests

# Build Store and all dependencies
mvn clean package -pl hugegraph-store/hg-store-dist -am -DskipTests
```
The assembled distribution will be available at:
hugegraph-store/apache-hugegraph-store-<version>/lib/hg-store-node-<version>.jar
Extract the distribution package and edit conf/application.yml:
Core Settings
| Parameter | Default | Description |
|---|---|---|
| pdserver.address | localhost:8686 | Required: PD cluster endpoints (comma-separated, e.g., 192.168.1.10:8686,192.168.1.11:8686) |
| grpc.host | 127.0.0.1 | gRPC server bind address (use actual IP for production) |
| grpc.port | 8500 | gRPC server port for client connections |
| raft.address | 127.0.0.1:8510 | Raft service address for this Store node |
| raft.snapshotInterval | 1800 | Raft snapshot interval in seconds (30 minutes) |
| server.port | 8520 | REST API port for management and metrics |
| app.data-path | ./storage | Directory for RocksDB data storage (supports multiple paths for multi-disk setups) |
| app.fake-pd | false | Enable built-in PD mode for standalone testing (not for production) |
Single-Node Development Example (with fake-pd)
```yaml
pdserver:
  address: localhost:8686  # Ignored when fake-pd is true
grpc:
  host: 127.0.0.1
  port: 8500
raft:
  address: 127.0.0.1:8510
  snapshotInterval: 1800
server:
  port: 8520
app:
  data-path: ./storage
  fake-pd: true  # Built-in PD mode (development only)
```
3-Node Cluster Example (production)
Prerequisites: A running 3-node PD cluster at 192.168.1.10:8686, 192.168.1.11:8686, 192.168.1.12:8686
Store Node 1 (192.168.1.20):
```yaml
pdserver:
  address: 192.168.1.10:8686,192.168.1.11:8686,192.168.1.12:8686
grpc:
  host: 192.168.1.20
  port: 8500
raft:
  address: 192.168.1.20:8510
app:
  data-path: ./storage
  fake-pd: false
```
Store Node 2 (192.168.1.21):
```yaml
pdserver:
  address: 192.168.1.10:8686,192.168.1.11:8686,192.168.1.12:8686
grpc:
  host: 192.168.1.21
  port: 8500
raft:
  address: 192.168.1.21:8510
app:
  data-path: ./storage
  fake-pd: false
```
Store Node 3 (192.168.1.22):
```yaml
pdserver:
  address: 192.168.1.10:8686,192.168.1.11:8686,192.168.1.12:8686
grpc:
  host: 192.168.1.22
  port: 8500
raft:
  address: 192.168.1.22:8510
app:
  data-path: ./storage
  fake-pd: false
```
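The three node files differ only in the local gRPC and Raft addresses. As a sketch (not part of the distribution), a small helper can render all of them from one template, which avoids copy-paste drift between nodes:

```python
# Hypothetical helper: render per-node application.yml contents for a Store
# cluster from one template. Only the grpc/raft host differs per node.
PD_PEERS = "192.168.1.10:8686,192.168.1.11:8686,192.168.1.12:8686"

TEMPLATE = """\
pdserver:
  address: {pd_peers}
grpc:
  host: {host}
  port: 8500
raft:
  address: {host}:8510
app:
  data-path: ./storage
  fake-pd: false
"""

def render_configs(hosts, pd_peers=PD_PEERS):
    """Return a dict mapping each Store host to its application.yml content."""
    return {h: TEMPLATE.format(pd_peers=pd_peers, host=h) for h in hosts}

configs = render_configs(["192.168.1.20", "192.168.1.21", "192.168.1.22"])
print(configs["192.168.1.21"])
```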
For detailed configuration options, RocksDB tuning, and deployment topologies, see Deployment Guide.
Start the Store server:
```bash
# Replace {version} with your HugeGraph version.
# For historical 1.7.0 and earlier releases, use
# apache-hugegraph-store-incubating-{version} instead.
cd apache-hugegraph-store-{version}

# Start Store node
bin/start-hugegraph-store.sh

# Stop Store node
bin/stop-hugegraph-store.sh

# Restart Store node
bin/restart-hugegraph-store.sh
```
Startup Options
```bash
bin/start-hugegraph-store.sh [-g GC_TYPE] [-j "JVM_OPTIONS"]
```
- -g: GC type (g1 or ZGC; default: g1)
- -j: Custom JVM options (e.g., -j "-Xmx16g -Xms8g")

Default JVM memory settings are defined in start-hugegraph-store.sh.
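For example, to start a node with ZGC and a larger heap (the flag values here are illustrative, not recommendations), the command line composes as follows. The snippet builds the command as a string only to show the final flag layout:

```shell
# Illustrative invocation of the startup script with both options.
GC_TYPE="ZGC"
JVM_OPTS="-Xmx16g -Xms8g"
CMD="bin/start-hugegraph-store.sh -g ${GC_TYPE} -j \"${JVM_OPTS}\""
echo "$CMD"
```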
Check if Store is running and registered with PD:
```bash
# Check process
ps aux | grep hugegraph-store

# Test gRPC endpoint (requires grpcurl)
grpcurl -plaintext localhost:8500 list

# Check REST API health
curl http://localhost:8520/v1/health

# Check logs
tail -f logs/hugegraph-store.log

# Verify registration with PD (from PD node)
curl http://localhost:8620/v1/stores
```
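If you prefer scripting the health check, a minimal sketch like the one below parses the health endpoint's JSON. The payload shape (a Spring-Boot-style top-level `status` field) is an assumption; adjust the key if your Store version reports health differently:

```python
import json
from urllib.request import urlopen

def is_healthy(body: str) -> bool:
    """Return True if the health response JSON reports an UP status.

    Assumes a payload like {"status": "UP"} (an assumption about this
    endpoint, not a documented contract).
    """
    try:
        return json.loads(body).get("status") == "UP"
    except (ValueError, AttributeError):
        return False

def check_store(host: str = "localhost", port: int = 8520) -> bool:
    """Fetch http://{host}:{port}/v1/health and evaluate the response."""
    with urlopen(f"http://{host}:{port}/v1/health", timeout=5) as resp:
        return is_healthy(resp.read().decode("utf-8"))
```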
For production deployment, see Deployment Guide and Best Practices.
HugeGraph Store serves as a pluggable backend for HugeGraph Server. To use Store as the backend:
Edit hugegraph-server/conf/graphs/<graph-name>.properties:
```properties
# Backend configuration
backend=hstore
serializer=binary

# Store connection (PD addresses)
store.provider=org.apache.hugegraph.backend.store.hstore.HstoreProvider
store.pd_peers=192.168.1.10:8686,192.168.1.11:8686,192.168.1.12:8686

# Connection pool settings
store.max_sessions=4
store.session_timeout=30000
```
Ensure PD and Store clusters are running, then start HugeGraph Server:
```bash
cd hugegraph-server
bin/init-store.sh      # Initialize schema
bin/start-hugegraph.sh
```
```bash
# Check backend via REST API
curl --location --request GET 'http://localhost:8080/metrics/backend' \
  --header 'Authorization: Bearer <YOUR_ACCESS_TOKEN>'

# Response should show:
# {"backend": "hstore", "nodes": [...]}
```
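To assert the expected backend programmatically, a small check against a response like the one above can be scripted. The `backend` and `nodes` fields come from the sample response; any structure inside `nodes` is an assumption:

```python
import json

def verify_hstore_backend(body: str) -> bool:
    """Return True when a /metrics/backend response reports the hstore
    backend with at least one registered node."""
    data = json.loads(body)
    return data.get("backend") == "hstore" and len(data.get("nodes", [])) > 0

# Sample payload mirroring the expected response shape.
sample = '{"backend": "hstore", "nodes": [{"address": "192.168.1.20:8500"}]}'
print(verify_hstore_backend(sample))
```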
Run Store tests:
```bash
# All tests (from hugegraph root)
mvn test -pl hugegraph-store/hg-store-test -am

# Specific test class
mvn test -pl hugegraph-store/hg-store-test -am -Dtest=HgStoreEngineTest

# From hugegraph-store directory
cd hugegraph-store
mvn test
```
Store tests are organized into 6 profiles (all active by default):
- store-client-test: Client library tests
- store-core-test: Core storage and partition management tests
- store-common-test: Common utilities and query abstraction tests
- store-rocksdb-test: RocksDB abstraction layer tests
- store-server-test: Store node server and gRPC service tests
- store-raftcore-test: Raft consensus integration tests

For development workflows and debugging, see Development Guide.
From the project root:
```bash
docker build -f hugegraph-store/Dockerfile -t hugegraph-store:latest .
```
```bash
docker run -d \
  -p 8520:8520 \
  -p 8500:8500 \
  -p 8510:8510 \
  -v /path/to/conf:/hugegraph-store/conf \
  -v /path/to/storage:/hugegraph-store/storage \
  -e PD_ADDRESS=192.168.1.10:8686,192.168.1.11:8686 \
  --name hugegraph-store \
  hugegraph-store:latest
```
Exposed Ports:
- 8520: REST API (management, metrics)
- 8500: gRPC (client connections)
- 8510: Raft consensus

For a complete HugeGraph distributed deployment (PD + Store + Server), see:
hugegraph-server/hugegraph-dist/docker/example/
For Docker and Kubernetes deployment details, see Deployment Guide.
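As a sketch of the same single-container setup in Compose form (the image name, mounted paths, and exposed ports mirror the docker run example above; everything else is an assumption, and the official example directory should be preferred for multi-service deployments):

```yaml
# Hypothetical docker-compose.yml for one Store node; not an official file.
services:
  store:
    image: hugegraph-store:latest
    ports:
      - "8520:8520"   # REST API
      - "8500:8500"   # gRPC
      - "8510:8510"   # Raft
    environment:
      PD_ADDRESS: 192.168.1.10:8686,192.168.1.11:8686
    volumes:
      - ./conf:/hugegraph-store/conf
      - ./storage:/hugegraph-store/storage
```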
Comprehensive documentation for HugeGraph Store:
| Documentation | Description |
|---|---|
| Distributed Architecture | Deep dive into three-tier architecture, Raft consensus, partition management, and PD coordination |
| Deployment Guide | Production deployment topologies, configuration reference, Docker/Kubernetes setup |
| Integration Guide | Integrating Store with HugeGraph Server, client API usage, migrating from other backends |
| Query Engine | Query pushdown mechanisms, multi-partition queries, gRPC API reference |
| Operations Guide | Monitoring and metrics, troubleshooting common issues, backup and recovery, rolling upgrades |
| Best Practices | Hardware sizing, performance tuning, security configuration, high availability design |
| Development Guide | Development environment setup, module architecture, testing strategies, contribution workflow |
Minimum Cluster (development/testing):
Recommended Production Cluster:
Large-Scale Cluster:
- Tolerates (N-1)/2 Store node failures per partition (e.g., 1 failure in a 3-replica setup)

Store exposes metrics via:
- http://<store-host>:8520/actuator/metrics
- http://<store-host>:8520/actuator/health

For the key metrics to monitor and detailed operational guidance, see Operations Guide and Best Practices.
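The fault-tolerance figure quoted above follows from Raft majority quorums: a partition with N replicas stays available as long as a majority survives. A one-liner makes the arithmetic concrete:

```python
def max_tolerable_failures(replicas: int) -> int:
    """Raft needs a majority of replicas alive, so a partition with N
    replicas survives floor((N - 1) / 2) Store node failures."""
    return (replicas - 1) // 2

for n in (3, 5, 7):
    print(n, "replicas ->", max_tolerable_failures(n), "failures tolerated")
```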
Contributions are welcome! Please read our Development Guide and follow the Apache HugeGraph contribution guidelines.
For development workflows, code structure, and testing strategies, see the Development Guide.
HugeGraph Store is licensed under the Apache License 2.0.
HugeGraph Store is under active development. Please report issues via GitHub or the mailing list.