Using OpenMPI on AMD EPYC with AOCC compiler

OpenMPI is available in a wide variety of versions and can be combined with different compilers. This page describes OpenMPI together with the LLVM-based AOCC compilers clang/flang. If you use Fortran: we do not use it that frequently ourselves, but we can assist you in writing working scripts as well.

First you have to load the environment. As this is subject to change, the example below just refers to my installation - adapt the paths to your own setup:

. /home/schuele/Projekte/AMD/setenv_AOCC.sh
. /home/schuele/Projekte/AMD/aocl/4.1.0/aocc/amd-libs.cfg
export MPI_INSTALL_PATH=~/Projekte/AMD/MPI_aocc/
export PATH=$PATH:$MPI_INSTALL_PATH/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$MPI_INSTALL_PATH/lib
export C_INCLUDE_PATH=$C_INCLUDE_PATH:$MPI_INSTALL_PATH/include/
export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:$MPI_INSTALL_PATH/include/
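
To verify that the AOCC-built OpenMPI is picked up, a quick sanity check is the following (the exact version strings depend on your installation):

which mpicc mpirun		# should point into $MPI_INSTALL_PATH/bin
mpicc --version			# the wrapper should report clang (AOCC) as the underlying compiler
mpirun --version		# should report the Open MPI version you installed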

Now you may compile your source - either with OpenMP (hybrid parallel) or without OpenMP (pure MPI parallel); both variants are shown below.

COPTS="-ffast-math -O3 -fopenmp -march=znver2 -mavx2 -fnt-store \  # example with OpenMP
-ffp-contract=fast -std=c17 -flto"
mpicc ${COPTS} prog.c -o your_program -lalm -lm		# using the LLVM math library
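
For the pure MPI case the same command line works without -fopenmp; a sketch with the same (illustrative) optimization flags:

# example without OpenMP (pure MPI)
COPTS="-ffast-math -O3 -march=znver2 -mavx2 -fnt-store \
-ffp-contract=fast -std=c17 -flto"
mpicc ${COPTS} prog.c -o your_program -lalm -lm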

Best practice for starting a program in the batch environment is to use a so-called job script that contains the parameters for the batch system. You may copy the example script below into a file called openmpi_job.sh and submit it to the batch system:

sbatch openmpi_job.sh

#!/bin/bash -l			# required in first line (the -l is needed !)
#SBATCH -p partition		# select a partition containing EPYC CPUs
#SBATCH --account project 	# specify your project for prioritization
#SBATCH --mail-type=END		# send a mail notification at the end of the job
#SBATCH -J jobname		# name of the job 
#SBATCH -o jobname.%j.out	# Output: %j expands to jobid
#SBATCH -e jobname.%j.err	# Error: %j expands to jobid
#SBATCH -L aocc_openmpi		# request a license for openmpi
#SBATCH --nodes=2		# requesting 2 nodes (identical to -N 2)
#SBATCH --ntasks=4		# requesting 4 MPI tasks (identical to -n 4)
#SBATCH --ntasks-per-node=2     # 2 MPI tasks will be started per node
#SBATCH --cpus-per-task=3       # each MPI task starts 3 OpenMP threads

### generic part: load the environment and set the variables as described above

# AMD specific:
MAP=""
OMP=""
SLOT=""
if [ "$SLURM_CPUS_PER_TASK" != "" ]; then	# Is this a Hybrid job?
 if [ $SLURM_CPUS_PER_TASK -ge 2 ]; then	# Are at least 2 threads used per task ?
   SLOT="slot"
 fi
 OMP="-x OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK"  # set the ENV-Variable to be used
 MAP="--map-by $SLOT:PE=$SLURM_CPUS_PER_TASK"	# organize the threads on the cores of the node
else
 MAP="--map-by $SLOT:PE=1"
fi

# start the MPI tasks in a symmetric layout across the nodes (recommended)
SNT=""
if [ "$SLURM_NTASKS_PER_NODE" != "" ]; then
   SNT="-N $SLURM_NTASKS_PER_NODE"
fi

# Communication options for the network (site-specific - adapt to your interconnect)
HGN="--mca mtl psm2 --mca btl self,vader --mca btl_openib_allow_ib 1"

OPTS="${MAP} ${OMP} ${SNT} ${HGN}"

mpirun -np $SLURM_NTASKS $OPTS ./your_program
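
With the SBATCH header above (2 nodes, 4 tasks, 2 tasks per node, 3 CPUs per task) the variables expand so that the script effectively runs a command like the following (shown only for illustration):

mpirun -np 4 --map-by slot:PE=3 -x OMP_NUM_THREADS=3 -N 2 \
       --mca mtl psm2 --mca btl self,vader --mca btl_openib_allow_ib 1 ./your_program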

This example covers both the hybrid OpenMP/MPI case and the pure MPI case. For a pure MPI job the following SBATCH line has to be commented out (additional # at the beginning) or removed:

#SBATCH --cpus-per-task=3       # each MPI task starts 3 OpenMP threads

and of course the -fopenmp option has to be removed from the compilation (see the pure MPI compile example above).
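
After submitting, you can follow the job with the standard Slurm tools (a generic sketch; <jobid> is the number printed by sbatch):

squeue -u $USER			# list your pending and running jobs
scontrol show job <jobid>	# detailed information while the job is known to Slurm
cat jobname.<jobid>.out		# stdout, as defined by the -o line above
cat jobname.<jobid>.err		# stderr, as defined by the -e line above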