Starting Jobs with Intel MPI

General

Please load the desired version of the Intel tools, for example:

module load intel/latest
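
If you do not want the latest version, you can first list the installed versions and then pick one explicitly (a short sketch using the standard module commands; the version name is a placeholder):

module avail intel		# list the installed Intel module versions
module load intel/your-version	# load a specific version instead of latest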

This enables the use of mpiicc or mpiicpc in conjunction with the Intel compilers icc and icpc and the appropriate MPI libraries. You may now compile your program as in the following example:

mpiicc -o hello_world hello_world.c

Please note: mpicc and mpicxx use the GNU compilers. The additional i in mpiicc/mpiicpc is not a misspelling.
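
For illustration, both wrapper families compile the same source; only the underlying compiler differs (a small sketch, the output names are just examples):

mpiicc -o hello_world_intel hello_world.c	# Intel compiler icc with Intel MPI
mpicc  -o hello_world_gnu   hello_world.c	# GNU compiler gcc with Intel MPI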

Best practice for starting a program in the batch environment is to use a so-called job script that contains the parameters for the batch system. An example may look like this:

#!/bin/bash			# required in first line
#SBATCH -p your_partition	# select your partition, or comment out with # in first column
#SBATCH --mail-type=END		# send a mail notification at the end of the job
#SBATCH -J jobname		# name of the job 
#SBATCH -o jobname.%j.out	# Output: %j expands to jobid
#SBATCH -e jobname.%j.err	# Error: %j expands to jobid
#SBATCH -C XEON_SP_6126		# selects only nodes of that CPU type
#SBATCH -C IB			# selects only nodes with InfiniBand
#SBATCH -L impi			# request a license for intel mpi
#SBATCH --nodes=2		# requesting 2 nodes (identical -N 2)
#SBATCH --ntasks=4		# requesting 4 MPI tasks (identical -n 4)
#SBATCH --ntasks-per-node=2     # 2 MPI tasks will be started per node
#SBATCH --cpus-per-task=4       # each MPI task starts 4 OpenMP threads

#Empty line above indicates end of SBATCH options
module purge			# clean all module versions
module load intel/your-version	# load your version of Intel compilers

mpiexec.hydra ./your_program
#mpirun -env OMP_NUM_THREADS $SLURM_CPUS_PER_TASK ./your_program # alternative with OpenMP

In case your program is pure MPI without OpenMP threads, simply comment out the corresponding line in the example above (this does not apply to the mpirun alternative):
##SBATCH --cpus-per-task=4	# add a # in 1st column
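
Put together, the relevant lines of the header for a pure MPI run would then read (a sketch based on the example above):

#SBATCH --nodes=2		# requesting 2 nodes
#SBATCH --ntasks=4		# requesting 4 MPI tasks
#SBATCH --ntasks-per-node=2	# 2 MPI tasks will be started per node
##SBATCH --cpus-per-task=4	# commented out: no OpenMP threads needed

mpiexec.hydra ./your_program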
You may copy this example into a file called intelmpi_job.sh and submit it to the batch system:
sbatch intelmpi_job.sh
Please check in advance which node types exist in the project you are going to use. Find a list of available nodes via
scontrol show partition your_partition
and look for the node features: scontrol show node | grep Features | sort | uniq
This has to be done only once. 

OpenMP and Intel 2019

The 2019 version introduces changes that cause problems on nodes using Mellanox InfiniBand as the high-speed network. In the case of InfiniBand, the following two lines have to be added after loading the intel/latest module:
source mpivars.sh release_mt
export FI_PROVIDER=verbs
Without this, the program may run only on a single core or only on the cores of one CPU instead of all cores of a node, or it may even fail with errors.
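
Inside a job script, the relevant part could look like this (a sketch; module and program names as in the example above):

module purge
module load intel/latest
source mpivars.sh release_mt	# switch to the multi-threaded (release_mt) variant of the MPI library
export FI_PROVIDER=verbs	# select the verbs OFI provider for InfiniBand

mpiexec.hydra ./your_program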