
[![build](https://github.com/philips-software/docker-blackduck/workflows/build/badge.svg)](https://github.com/philips-software/docker-blackduck/actions/)

```bash
docker run -v $(pwd):/code philipssoftware/blackduck:7 /app/detect.sh \
  --blackduck.url=<your-blackduck-url> \
  --blackduck.api.token=<your-token> \
  --detect.policy.check=true \
  --detect.source.path=/code \
  --detect.project.name=<your-project-name> \
  --detect.project.version.name=<your-version>
```
## Docker image scan
```bash
# If you can share the Docker socket with the Black Duck imageinspector
docker run -v /var/run/docker.sock:/var/run/docker.sock --network="host" philipssoftware/blackduck:7-docker \
  /app/detect.sh --blackduck.url=<your-blackduck-url> --blackduck.api.token=<your-token> --detect.policy.check=true \
  --detect.project.name=<your-project-name> --detect.project.version.name=<your-version> --detect.docker.image=<your-image>

# If you want to mount and provide the Black Duck imageinspector working directory
mkdir $(pwd)/shared
docker run -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd):$(pwd) --network="host" -w $(pwd) philipssoftware/blackduck:7-docker \
  /airgap/packaged-inspectors/docker/blackduck-docker-inspector.sh --blackduck.url=<your-blackduck-url> --blackduck.api.token=<your-token> \
  --detect.policy.check=true --detect.project.name=<your-project-name> --detect.project.version.name=<your-version> \
  --detect.docker.image=<your-image> --shared.dir.path.local=$(pwd)/shared
```
By setting the environment variable DETECT_AIR_GAP to true you can enable Air Gap mode. This eliminates the need for the internet access that Detect normally requires to download its dependencies. Currently only the Gradle inspector is supported. This mode is particularly useful when you are behind a corporate firewall that blocks connections to JFrog Artifactory.
Example:
```bash
docker run -e DETECT_AIR_GAP=true -v $(pwd):/code philipssoftware/blackduck:6 /app/detect.sh \
  --blackduck.url=<your-blackduck-url> --blackduck.api.token=<your-token> --blackduck.trust.cert=true \
  --detect.policy.check=true --detect.source.path=/code \
  --detect.project.name=<your-project-name> --detect.project.version.name=<your-version>
```
The images contain Black Duck Detect and Java 8, but also two other files:
### REPO

This file contains a URL to the repository at the specific commit SHA of the build. Example:
```bash
$ docker run philipssoftware/blackduck:6 cat REPO
https://github.com/philips-software/docker-blackduck/tree/facb2271e5a563e5d6f65ca3f475cefac37b8b6c
```
### TAGS

This file contains all the equivalent tags at the time of creation.
```bash
$ docker run philipssoftware/blackduck:6 cat TAGS
blackduck
blackduck:6
blackduck:6.7
blackduck:6.7.0
```
You can use this file to pin down the version of a container from an existing development build for production. When you use blackduck:6 for development, you always get the latest security updates in your build. When you then want to pin the image version for production, you can read this file inside the container and pick the most specific tag, which is the last one.
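For illustration, a small shell sketch of that pinning step. The TAGS listing is hard-coded here in place of the real `docker run philipssoftware/blackduck:6 cat TAGS` output, so the snippet runs without Docker:

```shell
# Simulated output of: docker run philipssoftware/blackduck:6 cat TAGS
TAGS_OUTPUT='blackduck
blackduck:6
blackduck:6.7
blackduck:6.7.0'

# The last line is the most specific tag; use it to pin production builds.
PINNED=$(printf '%s\n' "$TAGS_OUTPUT" | tail -n 1)
echo "philipssoftware/$PINNED"   # -> philipssoftware/blackduck:6.7.0
```

In a real pipeline you would capture the `docker run ... cat TAGS` output instead of the hard-coded variable.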
| Tags | Dockerfile |
| --- | --- |
| blackduck, blackduck:7, blackduck:7.14, blackduck:7.14.0 | 7/java/Dockerfile |
| blackduck:node, blackduck:7-node, blackduck:7.14-node, blackduck:7.14.0-node | 7/node/Dockerfile |
| blackduck:python, blackduck:7-python, blackduck:7.14-python, blackduck:7.14.0-python | 7/python/Dockerfile |
| blackduck:golang, blackduck:7-golang, blackduck:7.14-golang, blackduck:7.14.0-golang | 7/golang/Dockerfile |
| blackduck:dotnetcore-2.2.110, blackduck:7-dotnetcore-2.2, blackduck:7.14-dotnetcore-2.2.110, blackduck:7.14.0-dotnetcore-2.2.110 | 7/dotnetcore-2.2.110/Dockerfile |
| blackduck:7.14-dotnetcore-3.0, blackduck:7.14.0-dotnetcore-3.0.101 | 7/dotnetcore-3.0.101/Dockerfile |
| blackduck:7.14.0-dotnetcore-3.1.102 | 7/dotnetcore-3.1.102/Dockerfile |
| blackduck:dotnetcore, blackduck:7-dotnetcore, blackduck:7-dotnetcore-3, blackduck:7-dotnetcore-3.1, blackduck:7.14-dotnetcore, blackduck:7.14-dotnetcore-3.1, blackduck:7.14.0-dotnetcore, blackduck:7.14.0-dotnetcore-3.1.302 | 7/dotnetcore-3.1.302/Dockerfile |
| blackduck:docker, blackduck:7-docker, blackduck:7.14-docker, blackduck:7.14.0-docker | 7/docker/Dockerfile |

All images above are also available for version 8.1.1, but because some heavily used arguments were deprecated, we have not made 8 the latest version yet.
## Why do we have our own Docker image definitions?
We often need certain tools in a container, e.g. jq, aws-cli and curl. We could install these every time we need a container, but baking them into the image is a better approach.
That's why we maintain our own Dockerfile definitions.
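As an illustration, a minimal Dockerfile sketch of baking such tools into an image. The base image and package names are assumptions for the example, not the actual Dockerfiles in this repository:

```dockerfile
# Hypothetical example: bake commonly needed CLI tools into one image
FROM debian:bookworm-slim

# jq, curl and the AWS CLI are then available in every container run,
# instead of being installed ad hoc each time
RUN apt-get update \
    && apt-get install -y --no-install-recommends jq curl awscli \
    && rm -rf /var/lib/apt/lists/*
```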
Currently this image only contains Java. Running a project with yarn or npm will not work yet.
The license is MIT. See the LICENSE file.
- https://github.com/JeroenKnoops
- https://github.com/bartgolsteijn
- https://github.com/loafoe
- https://github.com/kishoreinvits
- https://github.com/marcofranssen
- https://github.com/prakashguru
- https://github.com/dmixonphilips
- https://github.com/sudheeshps
- https://github.com/marcel-dias
- https://github.com/Wetula
- https://github.com/timovandeput
This module is part of the Philips Forest.
```
  ___                    _
 / __\___  _ __ ___  ___| |_
/ _\/ _ \| '__/ _ \/ __| __|
/ /  | (_) | | |  __/\__ \ |_
\/    \___/|_|  \___||___/\__|
```

Infrastructure
Talk to the forestkeepers in the docker-images-channel on Slack.




