docs/Documentation/Applications/ansys.md
## Ansys
The current Ansys license is an unlimited license that covers all Ansys products, with no restrictions on quantities. However, since Ansys is unable to provide a license file that includes all products in unlimited quantities, we have requested licenses based on our anticipated needs. You can check the available licenses on Kestrel using the command `lmstat.ansys`. If the module you need is not listed, please submit a ticket by emailing [[email protected]](mailto:[email protected]) so that we can request an updated license to include the specific module you require.
The main workflow that we support has two stages. The first is interactive graphical usage, e.g., for interactively building meshes or visualizing boundary geometry. For this, Ansys should be run on a [FastX desktop](https://nrel.github.io/HPC/Documentation/Viz_Analytics/virtualgl_fastx/). The second stage is batch (i.e., non-interactive) parallel processing, which should be run on compute nodes via a Slurm job script. Of course, if you have Ansys input from another location ready to run in batch mode, the first stage is not needed. We unfortunately cannot support running parallel jobs on the DAV nodes, nor launching parallel jobs from interactive sessions on compute nodes.
### Shared License Etiquette
Network floating licenses are a shared resource. Whenever you open an Ansys Fluent window, a license is pulled from the pool and becomes unavailable to other users. *Please do not keep idle windows open if you are not actively using the application*; close the window and return the associated licenses to the pool. Excessive retention of software licenses falls under the inappropriate use policy.
### A Note on Licenses and Job Scaling
HPC Pack licenses are used to distribute Ansys batch jobs across many compute cores in parallel. The HPC Pack model is designed to enable exponentially more computational resources with each additional license, roughly 2 × 4^(num_hpc_packs) cores. A table summarizing this relationship is shown below.
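The scaling rule above can be sketched numerically. The following is a minimal illustration, not an Ansys tool: the function name is ours, and it assumes the enabled core count is approximately 2 × 4^N for N HPC Packs.

```shell
#!/bin/bash
# Illustrative only: approximate cores enabled by N Ansys HPC Pack licenses,
# following the roughly 2 x 4^N relationship described above.
hpc_pack_cores() {
  local n=$1
  echo $(( 2 * 4**n ))
}

for n in 1 2 3 4; do
  echo "$n HPC Pack(s) -> $(hpc_pack_cores "$n") cores"
done
```

Under this rule, one pack covers a small 8-core job, while four packs cover runs in the hundreds of cores, which is why large parallel solves consume packs rather than per-core licenses.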
docs/Documentation/Applications/comsol.md
The job script can be submitted to SLURM just the same as above for the single-node example. The option `-mpibootstrap slurm` helps COMSOL deduce runtime parameters such as `-nn`, `-nnhost`, and `-np`. For large jobs that require more than one node, this approach, which uses MPI and/or OpenMP, can be used to efficiently utilize the available resources. Note that in this case we choose 32 MPI ranks, 8 per node, with each rank using 13 threads, for demonstration purposes, *not* as an optimal performance recommendation. The optimal configuration depends on your particular problem, workload, and choice of solver, so some experimentation may be required.
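The layout just described (32 ranks at 8 per node, 13 threads per rank) can be sketched as a submission script. This is an assumption-laden sketch rather than the exact script from the docs: the module name, account handle, and file names are placeholders, though `-mpibootstrap slurm`, `-inputfile`, `-outputfile`, and `-batchlog` are standard COMSOL batch options.

```shell
#!/bin/bash
# Sketch: 4 nodes x 8 MPI ranks/node = 32 ranks, 13 threads per rank.
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=13
#SBATCH --time=01:00:00
#SBATCH --account=<allocation handle>

cd $SLURM_SUBMIT_DIR
module load comsol

# -mpibootstrap slurm lets COMSOL deduce -nn, -nnhost, and -np
# from the Slurm allocation above.
comsol batch -mpibootstrap slurm \
    -inputfile myinputfile.mph \
    -outputfile myoutputfile.mph \
    -batchlog mylogfile.log
```

Adjusting `--ntasks-per-node` and `--cpus-per-task` trades MPI ranks against threads per rank; keep their product at or below the cores per node.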
## Running a COMSOL Model with GPU
In COMSOL Multiphysics®, GPU acceleration can significantly increase performance for time-dependent simulations that use the discontinuous Galerkin (dG) method, such as those using the Pressure Acoustics, Time Explicit interface, and for training deep neural network (DNN) surrogate models. The following is a job script example used to run COMSOL jobs on GPU nodes.
???+ example "Example GPU Submission Script"

    ```
    #!/bin/bash
    #SBATCH --job-name=comsol-batch-GPUs
    #SBATCH --time=00:20:00
    #SBATCH --gres=gpu:1             # request 1 GPU per node; each GPU has 80 GB of memory
    #SBATCH --mem-per-cpu=2G         # requested memory per CPU core
    #SBATCH --ntasks-per-node=30
    #SBATCH --nodes=2
    #SBATCH --account=<allocation handle>
    #SBATCH --output=comsol-%j.out
    #SBATCH --error=comsol-%j.err

    # This helps ensure your job runs from the directory
    # from which you ran the sbatch command
    cd $SLURM_SUBMIT_DIR

    # Set up environment, and list to stdout for verification
    module load comsol
    echo " "
    module list
    echo " "

    inputfile=$SLURM_SUBMIT_DIR/myinputfile.mph
    outputfile=$SLURM_SUBMIT_DIR/myoutputfilename
    logfile=$SLURM_SUBMIT_DIR/mylogfilename

    # Run a 2-node parallel COMSOL job with 1 thread per rank and 1 GPU per node
    ```
Note: when launching a GPU job on Kestrel, be sure to do so from one of its dedicated [GPU login nodes](../Systems/Kestrel/index.md).
The Complex Systems Simulation and Optimization group has hosted introductory and advanced COMSOL trainings. The introductory training covered how to use the COMSOL GUI and run COMSOL in batch mode on Kestrel. The advanced training showed how to do a parametric study using different sweeps (running an interactive session is also included) and introduced equation-based simulation and parameter estimation. To learn more about using COMSOL on Kestrel, please refer to the training. The recording can be accessed at [Computational Sciences Tutorials](https://nrel.sharepoint.com/sites/ComputationalSciencesTutorials/Lists/Computational%20Sciences%20Tutorial%20Recordings/AllItems.aspx?viewid=7b97e3fa%2Dedf6%2D48cd%2D91d6%2Df69848525ba4&playlistLayout=playback&itemId=75) and the slides and models used in the training can be downloaded from [Github](https://github.com/NREL/HPC/tree/master/applications/comsol/comsol-training).