
edrusb/atlas
DATA_ROOT must be set and point to the path inside the container where the persistent storage will be mounted.
ATLAS_HOSTNAME must also be set to a hostname that resolves to an IP address external Atlas hook clients will be
able to connect to, in order to publish messages through Kafka (port 9027).
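Since external hook clients must be able to resolve that name, it can be worth checking resolution before starting the container. A minimal sketch, using `localhost` only as a stand-in for your real externally visible hostname:

```shell
#!/bin/sh
# Check that the name chosen for ATLAS_HOSTNAME actually resolves.
# "localhost" is just a placeholder default for this demo; use the
# hostname that external hook clients will see.
ATLAS_HOSTNAME=${ATLAS_HOSTNAME:-localhost}
if getent hosts "$ATLAS_HOSTNAME" > /dev/null; then
  echo "resolves: $ATLAS_HOSTNAME"
else
  echo "does not resolve: $ATLAS_HOSTNAME" >&2
  exit 1
fi
```

A name that only resolves inside the container (or only on the docker host) will make hook clients fail to reach Kafka even though the port is published.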
Here is an example invocation:
docker run -d -p 8983:8983 -p 9838:9838 -p 8838:8838 -p 9026:9026 -p 9027:9027 -p 61500:61500 -p 61510:61510 -p 61520:61520 -p 61530:61530 -p 21000:21000 -e DATA_ROOT=/data -e ATLAS_HOSTNAME=`hostname -A` -v /mapr/clustername/volume:/data edrusb/atlas:1.18
(here /data is only an example value for DATA_ROOT; the target of the -v option must match whatever path you choose)
Remove the corresponding -p option to make the matching service unavailable outside the container.
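To make dropping a port less error-prone, the -p options can be generated from a list; a sketch (the port list simply mirrors the example invocation above, with no claim here about which service owns which port):

```shell
#!/bin/sh
# Build the -p arguments from a list of ports; delete an entry from PORTS
# to keep that service private to the container.
PORTS="8983 9838 8838 9026 9027 61500 61510 61520 61530 21000"
PUBLISH=""
for p in $PORTS; do
  PUBLISH="$PUBLISH -p $p:$p"
done
# The "..." stands for the -e and -v options shown in the example above.
echo "docker run -d$PUBLISH ... edrusb/atlas:1.18"
```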
To avoid data corruption when terminating the container while Atlas is managing a lot of data, it may be wise to increase the stop timeout to 60 seconds or more so that a graceful shutdown stays possible:
docker stop -t 60 <container ID>
TCP ports used and the corresponding process listening behind each:
Since container version 1.15 the Atlas hook binaries (for Hive, Falcon, Impala, Kafka, Sqoop, Storm, HBase) are extracted from the Docker image to the directory $DATA_ROOT/hook.packages.
Version 1.16 brings support for Persistent Volume Claims, which cannot be set up by the root user; the container switches to a non-privileged user to prepare the storage areas for Atlas, HBase, Solr and Zookeeper.
Version 1.17 fixes a minor bug in the modification made to /etc/hosts inside the container: $ATLAS_HOSTNAME is now added as an alias of the existing IP entry rather than as a new line.
Version 1.18 exports the atlas-application.properties file alongside the hooks to the $DATA_ROOT/hook.packages directory.
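Once the container is running, the exported file can be inspected from the host through the mounted volume. A sketch, faking the container-written file under a temporary DATA_ROOT (the paths and the stub content are examples only; `atlas.kafka.bootstrap.servers` is a standard Atlas client property):

```shell
#!/bin/sh
# Simulate the layout the container produces under $DATA_ROOT, then look up
# the Kafka settings a hook client would need. Paths are examples only.
DATA_ROOT=$(mktemp -d)        # stands in for your real persistent mount point
mkdir -p "$DATA_ROOT/hook.packages"
# The container writes this file itself; we create a stub for the demo:
printf 'atlas.kafka.bootstrap.servers=myhost:9027\n' \
  > "$DATA_ROOT/hook.packages/atlas-application.properties"
grep '^atlas.kafka' "$DATA_ROOT/hook.packages/atlas-application.properties"
```

On a real deployment you would copy this properties file, together with the relevant hook package, onto the client host (for example into Hive's configuration directory) so the hook knows where to publish its Kafka messages.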
