Running zCFD
============

zCFD is designed to exploit parallelism in the target computing hardware, making use of multiple processes (MPI), multiple threads per CPU process (OpenMP) and/or many threads per GPU. zCFD handles most parallel functions automatically, so the user only needs to specify how much resource, and of what type, to use for a given run.

.. contents:: Table of Contents
    :depth: 1
    :local:
    :backlinks: none

.. _`zcfd-command`:

Command Line Execution
----------------------

zCFD includes an in-built command-line environment, which should be activated whether zCFD is run directly or via a cluster management and job scheduling system such as Slurm.

.. code-block:: bash

    source /INSTALL_LOCATION/zCFD-version/bin/activate

This sets up the environment to enable execution of the specific version. When within the command-line environment, the command prompt will show a zCFD prefix:

.. code-block:: bash

    (zCFD) >

To deactivate the command-line environment, returning it to its previous state, use:

.. code-block:: bash

    deactivate

Your command prompt should return to its normal appearance.

To run zCFD from the command line:

.. code-block:: bash

    run_zcfd -n <num_tasks> -d <device_type> -p <mesh_name> -c <case_name>

where:

.. list-table::
    :widths: 20 20 70
    :header-rows: 1

    * - Parameter
      - Value
      - Description
    * - **num_tasks**
      - Integer
      - The number of partitions (one per device socket - CPU or GPU)
    * - **device_type**
      - CPU or GPU
      - Specifies whether or not to use GPU(s) if present
    * - **mesh_name**
      - String
      - The name of the mesh file (.h5)
    * - **case_name**
      - String
      - The name of the control dictionary (.py)

For example, to run zCFD from the command-line environment on a desktop computer with a single 12-core CPU and no GPU:

.. code-block:: bash

    run_zcfd -n 1 -d cpu -p <mesh_name> -c <case_name>

By default, zCFD will use all 12 cores via OpenMP unless a specific number is requested. To run the above case using only 6 cores (there will be one thread per CPU core):

.. code-block:: bash

    run_zcfd -n 1 -d cpu -o 6 -p <mesh_name> -c <case_name>

or

.. code-block:: bash

    export OMP_NUM_THREADS=6; run_zcfd -n 1 -d cpu -p <mesh_name> -c <case_name>

To run zCFD on a desktop computer with a single CPU and two GPUs:

.. code-block:: bash

    run_zcfd -n 2 -d gpu -p <mesh_name> -c <case_name>

.. _`input-validation`:

Input Validation
----------------

zCFD provides an input validation script which can be used to check the solver control dictionary before run time, reducing the likelihood of a job spending a long time queuing only to fail at start-up. The script can be executed from the zCFD environment as follows:

.. code-block:: bash

    validate_input <case_name> [-m <mesh_name>]

If the -m option is given with a mesh file, the script will also check whether any zones specified as lists to boundary conditions, transforms or reports in the input exist in the mesh.

Overriding Mesh and Case Names
------------------------------

For more advanced simulations with overset meshes (see :ref:`overset`), an override dictionary can be supplied as an additional argument on the command line, in interactive mode or within a batch submission script, to specify multiple meshes and associated control dictionaries:

.. code-block:: bash

    run_zcfd -n 2 -d gpu -f <override_file>

An example ``override.py`` file for a background domain with a single rotating turbine would be:
.. code-block:: python

    override = {'mesh_case_pair': [('background.h5', 'background.py'),
                                   ('turbine.h5', 'turbine.py')]}

Batch Queue Submission
----------------------

To run zCFD on a compute cluster with a queuing system (such as Slurm), the command-line environment activation is included within the submission script.

Example Slurm submission script “run_zcfd.sub”:

.. code-block:: bash

    #!/bin/bash
    #SBATCH -J account_name
    #SBATCH --output zcfd.out
    #SBATCH --nodes 2
    #SBATCH --ntasks 4
    #SBATCH --exclusive
    #SBATCH --time=10:00:00
    #SBATCH --gres=gpu:2
    #SBATCH --cpus-per-task=16

    source /INSTALL_LOCATION/zCFD-version/bin/activate
    run_zcfd -n 4 -d gpu -p <mesh_name> -c <case_name>

In this system there are 2 nodes, each with 2 GPUs. We allocate one task per GPU, giving a total of 4 tasks. Note that the num_tasks value passed to run_zcfd on the last line, in this case 4, should match the Slurm “ntasks” setting. The case is run in “exclusive” mode, which means that zCFD has uncontested use of the devices. In this case 64 CPU cores are available, giving 16 CPU cores for each of the 4 tasks. The “gres” line requests 2 GPUs on each node.

The submission script would be submitted from the command line:

.. code-block:: bash

    sbatch run_zcfd.sub

The job can then be managed using the standard Slurm commands.

.. note::

    Users unfamiliar with the batch submission parameters in a specific system should consult the system administrator.
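The same pattern applies on a CPU-only cluster: one task per CPU socket, with OpenMP threads filling the cores allocated to each task. The script below is a minimal sketch for illustration only, assuming two nodes that each contain a single 32-core CPU socket; the node count, core count, job name and the ``<mesh_name>``/``<case_name>`` placeholders should be replaced with values appropriate to the target system.

.. code-block:: bash

    #!/bin/bash
    #SBATCH -J account_name
    #SBATCH --output zcfd.out
    #SBATCH --nodes 2
    #SBATCH --ntasks 2
    #SBATCH --cpus-per-task=32
    #SBATCH --exclusive
    #SBATCH --time=10:00:00

    source /INSTALL_LOCATION/zCFD-version/bin/activate

    # One OpenMP thread per CPU core allocated to each task
    export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

    # One task (partition) per CPU socket: -n matches the "ntasks" setting above
    run_zcfd -n 2 -d cpu -p <mesh_name> -c <case_name>

As in the GPU example, the value passed to ``-n`` should match the Slurm “ntasks” setting, and the number of OpenMP threads per task should not exceed the cores requested with “cpus-per-task”.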