Running zCFD

zCFD is designed to exploit parallelism in the target computing hardware, making use of multiple processes (MPI), multiple threads per CPU process (OpenMP) and/or many threads per GPU. zCFD will handle most parallel functions automatically, so the user just needs to specify how much, and what type, of resource to use for a given run.

Command Line Execution

zCFD includes a built-in command-line environment, which should be activated whether the solver is run directly or via a cluster management and job scheduling system such as Slurm:

source /INSTALL_LOCATION/zCFD-version/bin/activate

This sets up the environment so that the specified version can be executed. While the command-line environment is active, the command prompt shows a zCFD prefix:

(zCFD) >

To deactivate the command-line environment, returning your shell to its previous state, use:

deactivate

Your command prompt should return to its normal appearance.
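As an illustration, a typical session might look like the following; the installation path shown is hypothetical and should be replaced with the location of your own installation:

$ source /opt/zcfd/zCFD-2024.1/bin/activate
(zCFD) > deactivate
$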

To run zCFD on the command-line:

run_zcfd -n <num_tasks> -d <device_type> -p <mesh_name> -c <case_name>

where:

Parameter      Value         Description
num_tasks      Integer       The number of partitions (one per device socket - CPU or GPU)
device_type    cpu or gpu    Whether to run on the CPU or on GPU(s), if present
mesh_name      String        The name of the mesh file (<mesh>.h5)
case_name      String        The name of the control dictionary (<control_dict>.py)

For example, to run zCFD from the command-line environment on a desktop computer with a single 12-core CPU and no GPU:

run_zcfd -n 1 -d cpu -p <mesh_name> -c <case_name>

By default, zCFD will use all 12 cores via OpenMP. To restrict the run to a specific number of cores, for example 6 (one OpenMP thread per CPU core), use either:

run_zcfd -n 1 -d cpu -o 6 -p <mesh_name> -c <case_name>

or

export OMP_NUM_THREADS=6; run_zcfd -n 1 -d cpu -p <mesh_name> -c <case_name>
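Since zCFD assigns one partition per device socket, a machine with two CPU sockets would use two tasks. As a minimal sketch, assuming a hypothetical dual-socket workstation with 8 cores per socket, and assuming the -o option sets the OpenMP thread count for each task:

run_zcfd -n 2 -d cpu -o 8 -p <mesh_name> -c <case_name>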

To run zCFD on a desktop computer with a single CPU and two GPUs:

run_zcfd -n 2 -d gpu -p <mesh_name> -c <case_name>

Input Validation

zCFD provides an input validation script which can be used to check the solver control dictionary before run time, reducing the likelihood of a job spending a long time in the queue only to fail at start-up.

The script can be executed from the zCFD environment as follows:

validate_input <case_name> [-m <mesh_name>]

If the -m option is given with a mesh file, the script will also check that any zones listed in the boundary conditions, transforms or reports in the control dictionary actually exist in the mesh.
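For example, assuming a control dictionary named my_case.py and a mesh named my_mesh.h5 (hypothetical file names used purely for illustration):

validate_input my_case.py -m my_mesh.h5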

Overriding Mesh and Case Names

For more advanced simulations with overset meshes (see overset), an override dictionary can be supplied as an additional command-line argument, either interactively or within a batch submission script, to specify multiple meshes and their associated control dictionaries:

run_zcfd -n 2 -d gpu -f <override_file>

An example override.py file for a background domain with a single rotating turbine would be:

override = {'mesh_case_pair': [('background.h5', 'background.py'),
                               ('turbine.h5', 'turbine.py')]}
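
Assuming the dictionary above is saved as override.py, the overset case would then be launched with the -f option in place of the -p and -c options:

run_zcfd -n 2 -d gpu -f override.py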

Batch Queue Submission

To run zCFD on a compute cluster with a queuing system (such as Slurm), include the command-line environment activation within the submission script.

Example Slurm submission script “run_zcfd.sub”:

#!/bin/bash
#SBATCH -J job_name
#SBATCH --output zcfd.out
#SBATCH --nodes 2
#SBATCH --ntasks 4
#SBATCH --exclusive
#SBATCH --time=10:00:00
#SBATCH --gres=gpu:2
#SBATCH --cpus-per-task=16
source /INSTALL_LOCATION/zCFD-version/bin/activate
run_zcfd -n 4 -d gpu -p <mesh_name> -c <case_name>

In this system we have 2 nodes, each with 2 GPUs. We allocate one task per GPU, giving a total of 4 tasks (2 nodes × 2 GPUs per node). Note that num_tasks, in this case 4, on the last line of the script should match the Slurm “ntasks” setting on line 5. The case is run in “exclusive” mode, which means that zCFD has uncontested use of the devices. Here there are 64 CPU cores in total, so “cpus-per-task” assigns 16 CPU cores to each of the 4 tasks. The “gres” line tells Slurm that 2 GPUs are required on each node.

The submission script would be submitted from the command-line:

sbatch run_zcfd.sub

The job can then be managed using the standard Slurm commands.
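For example, squeue and scancel can be used to monitor and cancel jobs; the job ID shown below is hypothetical and would be reported by sbatch at submission time:

squeue -u $USER    # list your queued and running jobs
scancel 123456     # cancel the job with the given job ID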

Note

Users unfamiliar with the batch submission parameters in a specific system should consult the system administrator.