The Weather Research and Forecasting (WRF) Model is a numerical weather prediction model designed for atmospheric modeling and operational forecasting. WRF's build process is considered fairly complicated because it includes a series of dependencies.
intel/intel-optimized-wrf:latest: It is recommended to use this image on a 4th Generation Intel® Xeon® Scalable Processor.
The configuration used to run this image depends on your hardware. You can check the number of physical CPUs on the machine where you intend to run the Docker image with the following command:
lscpu
Based on the number of physical CPUs in the machine, you can decide how many cores to dedicate to the WRF Docker image.
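As an illustration, the physical core count can be derived from the `Socket(s)` and `Core(s) per socket` fields that `lscpu` reports. The sketch below uses hypothetical values (2 sockets × 4 cores) instead of querying live hardware, and derives the matching `--cpuset-cpus` range:

```shell
# Hypothetical values as reported by `lscpu` on your machine:
#   Socket(s):           2
#   Core(s) per socket:  4
SOCKETS=2
CORES_PER_SOCKET=4

# Total physical cores, and the matching --cpuset-cpus range.
# CPUs are numbered starting from 0, so the range ends at N-1.
PHYSICAL_CORES=$(( SOCKETS * CORES_PER_SOCKET ))
CPUSET_RANGE="0-$(( PHYSICAL_CORES - 1 ))"
echo "$PHYSICAL_CORES physical cores -> --cpuset-cpus $CPUSET_RANGE"
```

On a machine with 2 sockets and 4 cores per socket, this prints `8 physical cores -> --cpuset-cpus 0-7`.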
docker run --cpuset-cpus <num>-<num> -it --name wrf-hpckit -v <path_to_run_file>/run.sh:/home/run.sh -v <local_directory_to_get_logfiles>:/log wrf-hpckit:latest
Replace <num>-<num> with a valid number range.
--cpuset-cpus - Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use if you have more than one CPU. The first CPU is numbered 0.
For example, a valid value might be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU).
Replace <path_to_run_file> with the absolute path to your saved run.sh file. Replace <local_directory_to_get_logfiles> with the absolute path where you want the log files generated by WRF to be written.
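Putting the placeholders together, a filled-in invocation might look like the sketch below. The CPU range and both paths are hypothetical values; substitute your own:

```shell
# Hypothetical values -- replace with your own core range and paths.
CPUSET="0-7"
RUN_SH="/home/user/wrf/run.sh"
LOG_DIR="/home/user/wrf/logs"

# Assemble the docker run command from the placeholders above.
CMD="docker run --cpuset-cpus ${CPUSET} -it --name wrf-hpckit -v ${RUN_SH}:/home/run.sh -v ${LOG_DIR}:/log wrf-hpckit:latest"
echo "$CMD"
```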
Benchmark test case files are needed to run the WRF model.
Write a run.sh file with the following contents to download the files for benchmarking with the WRF model and to set some variables.
This run.sh file was written for a 4th Generation Intel® Xeon® Scalable Processor with 8 CPUs.
#!/bin/bash
wget -c https://www2.mmm.ucar.edu/wrf/users/benchmark/v44/v4.4_bench_conus12km.tar.gz
tar -zxf v4.4_bench_conus12km.tar.gz
ln -sf /v4.4_bench_conus12km/* /WRF/run
ln -sf /v4.4_bench_conus12km/wrfbdy_d* /WRF/run
ln -sf /v4.4_bench_conus12km/wrfinput_d* /WRF/run
ln -sf /v4.4_bench_conus12km/*.dat /WRF/run
ln -sf /v4.4_bench_conus12km/namelist.input.restart /WRF/run/namelist.input
ln -sf /v4.4_bench_conus12km/wrfrst_d01_2019-11-26_23:00:00.ifort /WRF/run/wrfrst_d01_2019-11-26_23:00:00
export PROCESS_PER_NODE=4
export OMP_NUM_THREADS=2
export I_MPI_PIN_DOMAIN=auto
export I_MPI_PIN_ORDER=bunch
export OMP_PROC_BIND=close
export OMP_PLACES=threads
export KMP_BLOCKTIME=10
export KMP_STACKSIZE=128M
export WRF_NUM_TILES=48
ulimit -S -s unlimited
ulimit -S -m unlimited
ulimit -S -d unlimited
cd /WRF/run
mpiexec.hydra -genvall -n $PROCESS_PER_NODE -ppn $PROCESS_PER_NODE ./wrf.exe
The run.sh file downloads the CONUS-12km benchmark files for benchmarking with the WRF model. You can change the benchmark dataset to other test cases (see examples of other WRF test cases at [**]) and configure the necessary variables accordingly.
The product PROCESS_PER_NODE * OMP_NUM_THREADS should always match the number of physical CPUs.
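This constraint can be sanity-checked before launching. A minimal sketch, assuming a hypothetical machine with 8 physical CPUs and the values from the run.sh above:

```shell
# Hypothetical machine with 8 physical CPUs.
PHYSICAL_CPUS=8
PROCESS_PER_NODE=4
OMP_NUM_THREADS=2

# The product of MPI ranks and OpenMP threads must equal the physical CPU count.
if [ $(( PROCESS_PER_NODE * OMP_NUM_THREADS )) -eq "$PHYSICAL_CPUS" ]; then
  RESULT=ok
else
  RESULT=mismatch
fi
echo "$RESULT"
```

With 4 processes × 2 threads on 8 CPUs, this prints `ok`; any other combination that does not multiply out to the CPU count prints `mismatch`.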
PROCESS_PER_NODE The requested number of MPI processes per node; it should correspond to the number of cores available in your hardware instance.
OMP_NUM_THREADS Environment variable sets the number of threads to use for parallel regions by setting the initial value of the nthreads-var.
I_MPI_PIN_DOMAIN Controls how MPI processes are pinned to non-overlapping sets of logical processors (domains) within a node; the OpenMP threads of each process stay within that process's domain.
I_MPI_PIN_ORDER Set this environment variable to define the mapping order for MPI processes to domains as specified by the I_MPI_PIN_DOMAIN environment variable.
OMP_PROC_BIND Environment variable controls the thread affinity policy and whether OpenMP threads can be moved between places.
OMP_PLACES Environment variable specifies a list of places that are available when the OpenMP program is executed.
KMP_BLOCKTIME Sets the time, in milliseconds, that a thread should wait, after completing the execution of a parallel region, before sleeping.
KMP_STACKSIZE Sets the number of bytes to allocate for each parallel thread to use as its private stack. Use the optional suffix b, k, m, g, or t, to specify bytes, kilobytes, megabytes, gigabytes, or terabytes.
WRF_NUM_TILES The optimal number of tiles depends on the characteristics of the model and on the hardware.
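As an illustration of scaling these settings to different hardware, the sketch below adapts the 8-CPU values from run.sh to a hypothetical machine with 16 physical CPUs (8 MPI ranks × 2 OpenMP threads). These are assumed values, not measured recommendations, and WRF_NUM_TILES in particular should be tuned empirically for your model and hardware:

```shell
# Hypothetical 16-physical-CPU machine: 8 MPI ranks x 2 OpenMP threads each.
export PROCESS_PER_NODE=8
export OMP_NUM_THREADS=2
export I_MPI_PIN_DOMAIN=auto
export I_MPI_PIN_ORDER=bunch
export OMP_PROC_BIND=close
export OMP_PLACES=threads

# Sanity check: ranks x threads should cover all physical CPUs.
echo "Using $(( PROCESS_PER_NODE * OMP_NUM_THREADS )) CPUs"
```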
Reference image with suggested values depending on the number of physical CPUs.
Please refer to the following links for detailed information on the variables described above:
PROCESS_PER_NODE
OMP_NUM_THREADS
I_MPI_PIN_DOMAIN
I_MPI_PIN_ORDER
OMP_PROC_BIND
OMP_PLACES
KMP_BLOCKTIME
KMP_STACKSIZE
WRF_NUM_TILES