
Commit ea1da1c

Merge pull request #737 from mselensky/amrwind-hbw
AMR-Wind hbw updates
2 parents: 14e6b7d + f564ba1


docs/Documentation/Applications/amrwind.md (+68, -30)
@@ -12,15 +12,80 @@ turbines within a wind farm. For more information see [the AMR-Wind documentatio

AMR-Wind is only supported on Kestrel.

## Using the AMR-Wind Modules

NREL provides several AMR-Wind modules built for CPUs and GPUs with different toolchains.
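
You can check which AMR-Wind builds are currently installed and load one with the module system. This is a minimal sketch: the version string shown is the GPU build used in the example later on this page, so treat it as illustrative and use `module avail` to see the current list.

```
# List the AMR-Wind modules installed on Kestrel
module avail amr-wind

# Inspect what a specific build loads (compiler, MPI, dependencies)
module show amr-wind/main-craympich-nvhpc

# Load the chosen build into your environment
module load amr-wind/main-craympich-nvhpc
```
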
### Running on the CPU nodes

On Kestrel, AMR-Wind performs best on CPU nodes in the [`hbw` ("high bandwidth") partition](../Systems/Kestrel/Running/index.md#high-bandwidth-partition), each of which has two network interface cards (NICs). **We strongly recommend submitting multi-node AMR-Wind jobs to the `hbw` partition for the best performance and to save AUs** compared to running on the single-NIC nodes in `short`, `standard`, or `long`.

Additionally, our benchmarks show that AMR-Wind achieves its best performance on Kestrel CPU nodes with 72 MPI ranks per node. An example job script that uses the current CPU module of AMR-Wind and follows all of these best-practice recommendations is provided below.

!!! Note
    Single-node jobs cannot be submitted to `hbw`; continue to submit them to the "general" CPU partitions such as `short`, `standard`, or `long`.

??? example "Sample job script: Kestrel - High-Bandwidth Nodes"

    ```
    #!/bin/bash
    #SBATCH --account=<user-account> # Replace with your HPC account
    #SBATCH --partition=hbw
    #SBATCH --time=01:00:00
    #SBATCH --nodes=16 # May need to change depending on your problem

    export FI_MR_CACHE_MONITOR=memhooks
    export FI_CXI_RX_MATCH_MODE=software
    export MPICH_SMP_SINGLE_COPY_MODE=NONE
    export MPICH_OFI_NIC_POLICY=NUMA

    # Optimal number of launcher (srun) tasks per node benchmarked on Kestrel
    export SRUN_TASKS_PER_NODE=72

    # Replace <input> with your input file
    srun -N $SLURM_JOB_NUM_NODES \
        -n $(($SRUN_TASKS_PER_NODE * $SLURM_JOB_NUM_NODES)) \
        --ntasks-per-node=$SRUN_TASKS_PER_NODE \
        --distribution=block:block \
        --cpu_bind=rank_ldom \
        amr_wind <input>
    ```
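
To run it, save the script to a file and submit it with `sbatch` (the file name below is only an example); with the settings above, Slurm launches 16 × 72 = 1152 MPI ranks.

```
# Submit the high-bandwidth CPU job (script name is illustrative)
sbatch amr_wind_hbw.sh

# Confirm the job is queued or running on the hbw partition
squeue -u $USER
```
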
### Running on the GPU nodes

AMR-Wind modules are also available for the GPU nodes, which generally provide the best performance.

Here is a sample script for submitting an AMR-Wind run across multiple GPU nodes, with the user's input file and mesh grid in the current working directory.

??? example "Sample job script: Kestrel - Multiple GPUs across nodes"

    ```
    #!/bin/bash
    #SBATCH --time=1:00:00
    #SBATCH --account=<user-account> # Replace with your HPC account
    #SBATCH --nodes=2
    #SBATCH --gpus=h100:4
    #SBATCH --exclusive
    #SBATCH --mem=0

    module load PrgEnv-nvhpc
    module load amr-wind/main-craympich-nvhpc

    # Replace <input> with your input file
    srun -K1 -n 16 --gpus-per-node=4 amr_wind <input>
    ```
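
In this example, `-n 16` on 2 nodes corresponds to 8 MPI ranks per node (2 ranks per GPU). If you change `--nodes`, a simple way to keep the rank count consistent, sketched below rather than taken from the module documentation, is to derive it from Slurm's node count:

```
# Derive the total MPI rank count from the node count so the launch line
# scales with --nodes (8 ranks per node reproduces the 2-node example above)
RANKS_PER_NODE=8
srun -K1 -n $(($RANKS_PER_NODE * $SLURM_JOB_NUM_NODES)) \
    --gpus-per-node=4 amr_wind <input>
```
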
79+
80+
81+
## Custom `cmake` installation

In this section we provide cmake scripts for installing AMR-Wind.
Make sure to add cmake lines for any additional dependencies (OpenFAST, NETCDF, HELICS, etc.), as sketched below.
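
As a rough sketch of what those extra lines might look like (the `AMR_WIND_ENABLE_*` option names and the install paths below are illustrative; verify them against the AMR-Wind build documentation for the version you are building):

```
# Illustrative cmake flags for optional dependencies; adjust option names
# and install paths for your AMR-Wind version and your own builds
cmake .. \
  -DAMR_WIND_ENABLE_NETCDF=ON \
  -DAMR_WIND_ENABLE_OPENFAST=ON \
  -DAMR_WIND_ENABLE_HELICS=ON \
  -DCMAKE_PREFIX_PATH="/path/to/openfast/install;/path/to/helics/install"
```
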
### Installation of AMR-Wind on CPU Nodes
AMR-Wind can be installed by following the instructions [here](https://exawind.github.io/amr-wind/user/build.html#building-from-source).
On Kestrel CPU nodes, this can be achieved by executing the following script:

```
@@ -170,30 +235,3 @@ ml PrgEnv-nvhpc
ml cray-libsci/22.12.1.1
```

