SegregatedVMSSolver.jl solves the incompressible Navier-Stokes equations using stabilized Finite Element Methods, specifically the Streamline-Upwind Petrov-Galerkin (SUPG) and the Variational MultiScale (VMS) method.
The package solves the incompressible Navier-Stokes equations in a Finite Element framework using the SUPG and VMS methods. VMS was originally introduced by Hughes2000. Specifically, a linearized and segregated version of SUPG (following the steps illustrated by Janssens2014) and of VMS is solved.
Both methods belong to the Large Eddy Simulation (LES) family. The package can solve the Taylor-Green vortices, the lid-driven cavity flow (2D only), cylinder vortex shedding and general airfoils (2D and 3D).
It works fully in parallel. It is specialized for resolving the flow over airfoils, testing the capability of detecting the Laminar Separation Bubble. It is equipped with utility modules for reading the output files and creating proper initial conditions.
The package is registered, so you can install it as:
```julia
using Pkg
Pkg.add("SegregatedVMSSolver")
```
or from the REPL just press `]`:
```
(@v1.8) pkg> add SegregatedVMSSolver
```
You can install the most recent version directly from the repository:
```julia
using Pkg
Pkg.add(url="https://github.com/carlodev/SegregatedVMSSolver.jl")
```
For a complete and smooth experience it is suggested to install the free software ParaView, which allows you to graphically visualize the results and open `.vtu` and `.pvtu` files.
For creating meshes and physical boundary conditions it is suggested to install the free software gmsh.
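As a minimal sketch (the `.geo` and `.msh` file names below are placeholders, not files shipped with the package), a 2D mesh can be generated from a gmsh geometry file either from the gmsh GUI or directly from Julia:
```julia
# Hypothetical example: generate a 2D mesh from a gmsh geometry file.
# `airfoil.geo` and `airfoil.msh` are placeholder names; gmsh must be on the PATH.
run(`gmsh -2 airfoil.geo -o airfoil.msh`)
```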
- Implementation of SUPG and VMS formulation for same-order elements for velocity and pressure
- Solve 3D airfoil geometries: time-dependent, fully parallelized code
- Using custom meshes created with `gmsh`. For airfoils the package `AirfoilGmsh.jl` has been developed for speeding up the process
- Solve 2D and 3D cases
- Possibility of choosing the backend thanks to `PartitionedArrays.jl`. It can be run in the `REPL` for debugging or in `MPI` (see the sketch after this list)
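The snippet below is a minimal sketch of the two `PartitionedArrays.jl` backends in isolation (assuming the PartitionedArrays ≥ 0.3 API); how SegregatedVMSSolver consumes the resulting `distribute` function is described in its own documentation, so the solver call itself is omitted here.
```julia
using PartitionedArrays

nparts = 4

# Debug backend: everything runs in a single Julia process,
# convenient for stepping through the code in the REPL.
with_debug() do distribute
    ranks = distribute(LinearIndices((nparts,)))
    map(ranks) do rank
        println("debug backend, part $rank of $nparts")
    end
end

# MPI backend: the same code, but launched with e.g.
#   mpiexec -n 4 julia --project script.jl
# with_mpi() do distribute
#     ranks = distribute(LinearIndices((nparts,)))
#     ...
# end
```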
Example cases: Taylor-Green Vortices, Lid Driven Cavity Flow, Cylinder Vortex Shedding, Airfoil.
MPI is used for parallelization, and PETSc is used to solve the sparse, distributed numerical systems.
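As a hedged sketch of how PETSc is typically driven from Julia in the Gridap ecosystem: the pattern below follows GridapPETSc.jl, and the option string is only an illustrative choice; SegregatedVMSSolver may wrap this differently through its own interface.
```julia
using GridapPETSc

# Illustrative PETSc options: GMRES with an additive Schwarz preconditioner,
# a common choice for distributed, non-symmetric systems.
options = "-ksp_type gmres -pc_type asm -ksp_rtol 1.0e-8 -ksp_monitor"

# A parallel run is usually launched with: mpiexec -n <nprocs> julia --project script.jl
GridapPETSc.with(args=split(options)) do
    # set up the discrete problem and solve it here,
    # e.g. using a PETScLinearSolver() inside the time-stepping loop
end
```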
The benchmark case is the 2D Taylor-Green vortex; the times reported here are per time-step. The order of the elements for this simulation is always 2, and the CFL is kept constant at 0.32.
Strong scalability evaluates how efficiently a parallel code reduces execution time when the problem size remains fixed and the number of processing units increases. There are 400 elements on each side, leading to 160,000 elements and 1,920,000 dofs in total.
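A back-of-the-envelope check of that figure, assuming the Taylor-Green domain is fully periodic (so boundary nodes are not duplicated) and that both velocity components and the pressure live on the same order-2 nodes:
```julia
# 400x400 order-2 cells, periodic in both directions, with u, v, p on every node.
n = 400
cells = n^2            # 160_000 elements
nodes = (2n)^2         # order-2 nodes per scalar field on a periodic grid
dofs  = 3 * nodes      # two velocity components + pressure = 1_920_000 dofs
```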

Weak scalability measures how well a parallel code maintains performance when the problem size per processor is kept constant and the number of processors increases. On each processor there are 50x50 elements, so the number of dofs is kept constant at about 30K dofs/processor.
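The same counting as above, applied to the 50x50 patch owned by each processor:
```julia
# 50x50 order-2 cells per processor, same periodic counting assumption as above.
n = 50
dofs_per_proc = 3 * (2n)^2   # = 30_000, i.e. the ~30K dofs/processor quoted above
```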

It relies on the Gridap ecosystem and is completely written in Julia. The parallelization is built on MPI and PartitionedArrays.jl.
It is a collaborative project open to contributions. You can:
- Open a new issue
- Contact the project administrator
- Open a PR with the contribution



