Environment and Customization

The Software Catalog

CINECA offers a variety of third-party applications and community codes that are installed on its HPC systems. Most of the third-party software is installed using the software modules mechanism (see The module command section). Information on the available packages and their detailed descriptions is organized in a catalog, divided by discipline (link).

The catalog is also accessible directly on the HPC clusters through the module and modmap commands described in the next sections.

The module command

All the software installed on the CINECA clusters is available as modules. By default, a set of basic modules is preloaded in the environment at login. To manage modules in the production environment, the user can execute the module command with a variety of options. A short description of the most useful module commands is reported in the following table; a brief usage example follows the table.

Command                          Action
module avail                     show the available modules on the machine
module load <appl>               load the module in the current shell session, preparing the environment for the application
module load autoload <appl>      load the module and all its dependencies in the current session
module help <appl>               show specific information and basic help on the application
module list                      show the modules currently loaded in the shell session
module purge                     unload all the loaded modules
module unload <appl>             unload a specific module
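As a minimal illustrative example of a typical session (the application name is a placeholder):

$ module avail                      # browse the available modules
$ module load autoload <appl>       # load the application and its dependencies
$ module list                       # verify what is loaded
$ module purge                      # clean the environment when done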

The modmap command

For easier browsing, the modules are grouped into different profiles. Only the base profile is automatically loaded at login. modmap is a very useful command to look for a specific module in all the profiles at once: it prints to standard output all the modules matching the searched name, showing in which profile they can be found. For example, suppose you are looking for the lammps software:

$ modmap -m lammps
     Profile: archive
        applications
             lammps
              20220623--openmpi--4.1.4--gcc--11.3.0-cuda-11.8
     Profile: astro
     Profile: base
     Profile: bioinf
     Profile: chem-phys
        applications
             lammps
              29aug2024
              2aug2023
              2aug2023--intel-oneapi-compilers--2023.2.1
     Profile: deeplrn
     Profile: eng
     Profile: geo-inquire
     Profile: lifesc
     Profile: meteo
     Profile: quantum
     Profile: spoke7
     Profile: statistics

The output of modmap shows that several lammps versions are present in the chem-phys profile and an older one in the archive profile. Loading the module is now easy:

$ module load profile/chem-phys
$ module load lammps/29aug2024

Compilers

You can check the complete list of available compilers on a specific cluster with the command:

$ modmap -c compilers

For GPU compilation the available compilers are:

  • For NVIDIA GPUs (CUDA-aware)

    • GNU Compilers Collection (GCC)

    • NVIDIA nvhpc (ex PGI)

    • NVIDIA cuda

For CPU compilation the available compilers are:

  • For INTEL CPUs

    • Intel oneAPI compilers (oneAPI LLVM-based and classic compilers)

    • GNU Compilers Collection (GCC)

  • For AMD CPUs

    • AOCC compilers

    • GNU Compilers Collection (GCC)

GCC

The GNU compilers are always available: a system GCC version is available (gcc --version) without the need to load any module. More recent versions can be found in the module environment:

$ modmap -m gcc

To use a specific version:

$ module load gcc/<version>

The names of the GNU compilers are:

  • gfortran: fully compliant with the Fortran 95 Standard and includes legacy F77 support

  • gcc: C compiler

  • g++: C++ compiler

Loading the gcc module sets a specific environment variable for each compiler (see the sketch after this list):

  • CC: gcc

  • CXX: g++

  • FC: gfortran

  • F90: gfortran

  • F77: gfortran
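Since these variables simply expand to the corresponding compiler, they can be used directly on the command line or passed to Makefiles and configure scripts; a minimal sketch (file names are illustrative):

$ module load gcc/<version>
$ $CC -O2 -o myexec myprog.c         # equivalent to: gcc -O2 -o myexec myprog.c
$ $FC -O2 -o myexec myprog.f90       # equivalent to: gfortran -O2 -o myexec myprog.f90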

The documentation can be obtained with the “man” command after loading the gcc module:

$ module load gcc/<version>
$ man gcc

On the accelerated clusters, the available gcc modules support offloading to the device. For NVIDIA GPUs the target is nvptx.

On clusters where accelerated and non-accelerated partitions share the same module environment, the offloading-enabled gcc modules can be used on both: there is a single installation of each gcc version, which supports device offload and can also be used on the CPU partitions.

The GCC OpenMPI implementation is always available on both accelerated and non-accelerated clusters.

The version installed for NVIDIA GPUs is configured with CUDA support, but it can also be used on the non-accelerated partitions of a cluster. In this case, however, it is highly recommended to compile with the MPI implementation specific to that architecture (e.g. the intel-oneapi-mpi module for Intel CPUs).

You can check the list of available OpenMPI modules on a specific cluster with the command:

$ modmap -m openmpi

To use a specific one:

$ module load openmpi/<version>

After loading a specific GCC openmpi module, select the MPI compiler wrapper for Fortran, C or C++ code:

  • mpicc: MPI wrapper for the gcc compiler

  • mpic++ mpiCC mpicxx: MPI wrappers for the g++ compiler

  • mpif77 mpif90 mpifort: MPI wrappers for the gfortran compiler

e.g. Compiling C code:

$ module load openmpi/<version>

$ mpicc -o myexec  myprog.c
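Since the wrapper names are the same for every MPI installation, it can be useful to check which underlying compiler and flags a wrapper actually invokes. With OpenMPI this can be done with the --showme option (the output depends on the installation):

$ mpicc --showme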

NVIDIA nvhpc

(ex PORTLAND PGI + NVIDIA CUDA)

The NVHPC compilers are always available on the NVIDIA GPU clusters. More recent versions can be found in the module environment:

$ modmap -m nvhpc

To use a specific version:

$ module load nvhpc/<version>

The names of the NVHPC compilers are:

  • nvc: Compile C source files (C11 compiler. It supports GPU programming with OpenACC, and supports multicore CPU programming with OpenACC and OpenMP)

  • nvc++: Compile C++ source files (C++17 compiler. It supports GPU programming with C++17 parallel algorithms (pSTL) and OpenACC, and supports multicore CPU programming with OpenACC and OpenMP)

  • nvfortran: Compile FORTRAN source files (supports ISO Fortran 2003 and many features of ISO Fortran 2008. It supports GPU programming with CUDA Fortran and OpenACC, and supports multicore CPU programming with OpenACC and OpenMP)

  • nvcc: CUDA C and CUDA C++ compiler driver for NVIDIA GPUs

As of August 5, 2020, the “PGI Compilers and Tools” technology is a part of the NVIDIA HPC SDK product, available as a free download from NVIDIA. For legacy reasons, the NVIDIA nvhpc suite also offers the PGI C, C++, and Fortran compilers with their original names, as follows.

  • pgcc: Compile C source files.

  • pgc++: Compile C++ source files.

  • pgf77: Compile FORTRAN77 source files.

  • pgf90: Compile FORTRAN90 source files.

  • pgf95: Compile FORTRAN95 source files.

  • pgfortran: Compile PGI Fortran

The documentation can be obtained with the “man” command after loading the nvhpc module:

$ module load nvhpc/<version>

$ man nvc

To enable CUDA C++ or CUDA Fortran, and link with the CUDA runtime libraries, use the -cuda option (-Mcuda is deprecated). Use the -gpu option to tailor the compilation of target accelerator regions.

The OpenACC parallelization is enabled by the -acc flag. GPU targeting and code generation can be controlled by adding the -gpu flag to the compiler command line.

The OpenMP parallelization is enabled by the -mp compiler option. The GPU offload via OpenMP is enabled by the -mp=gpu option.
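As an illustrative sketch of how these flags are combined on the compile line (file names are placeholders; the -gpu sub-option, e.g. cc80, depends on the GPU generation of the cluster):

$ module load nvhpc/<version>
$ nvc -acc -gpu=cc80 -o myexec_acc myprog.c          # OpenACC offload to the GPU
$ nvfortran -mp=gpu -o myexec_omp myprog.f90         # OpenMP offload to the GPU
$ nvfortran -cuda -o myexec_cuf myprog.cuf           # CUDA Fortran, linked with the CUDA runtime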

The NVHPC MPI implementation is always available on the clusters equipped with NVIDIA GPUs.

The OpenMPI build for nvhpc, if installed, is available as the openmpi/<nvhpc-version> module. The MPI version bundled by NVIDIA within the nvhpc installation is available as the hpcx-mpi/<version> module.

You can check the list of available NVHPC OpenMPI/hpcx-mpi modules on a specific cluster with the command:

$ modmap -m openmpi          # or: modmap -m hpcx-mpi

To use a specific one:

$ module load openmpi/<version>          # or: module load hpcx-mpi/<version>

After loading a specific nvhpc openmpi module, select the MPI compiler wrapper for Fortran, C or C++ code:

  • mpicc: MPI wrapper for the nvc compiler

  • mpic++ mpiCC mpicxx: MPI wrappers for the nvc++ compiler

  • mpif77 mpif90 mpifort: MPI wrappers for the nvfortran compiler

e.g. Compiling C code:

$ module load openmpi/<version>          # or: module load hpcx-mpi/<version>

$ mpicc -o myexec myprog.c          # uses the nvc compiler

Intel oneAPI

The Intel compilers are the best choice on the Intel CPU clusters. The available versions can be found in the module environment:

$ modmap -m intel-oneapi-compilers

To use a specific version:

$ module load intel-oneapi-compilers/<version>

From the 2021 up to the 2023 version, the intel-oneapi-compilers module makes available two types of compilers, classic and oneAPI.

Intel classic compilers:

  • icc: Compile C source files

  • icpc: Compile C++ source files

  • ifort: Compile FORTRAN source files

LLVM-based Intel oneAPI compilers:

  • icx: Compile C source files

  • icpx: Compile C++ source files

  • ifx: Compile FORTRAN source files

  • dpcpp: Compile C++ source files with SYCL extensions

Starting from the 2024 version, the intel-oneapi-compilers module makes available only the oneAPI compiler set plus the classic ifort compiler, which is no longer available starting from the 2025 version.

In order to use the Intel classic compilers load:

$ module load intel-oneapi-compilers-classic

e.g. Compiling Fortran code with oneAPI:

$ module load intel-oneapi-compilers/<version>

$ ifx -o myexec myprog.f90

The Intel MPI implementation is the best choice on the Intel CPU clusters. The available versions can be found in the module environment:

$ modmap -m intel-oneapi-mpi

To use a specific module:

$ module load intel-oneapi-mpi/<version>

This module provides both classic and oneAPI compiler wrappers.

After loading a specific intel-oneapi-mpi module, select the MPI compiler wrapper, classic or oneAPI, for Fortran, C or C++ code.

Intel oneAPI compiler wrappers:

  • mpiicx (C code)

  • mpiicpx (C++ code)

  • mpiifx (Fortran code)

Intel classic compiler wrappers:

  • mpiifort (Fortran code)

  • mpiicc (C code)

  • mpiicpc (C++ code)

Intel wrappers for the GNU compilers:

  • mpifc, mpif77, mpif90 (Fortran MPI wrappers)

  • mpicc (C MPI wrapper)

  • mpicxx (C++ MPI wrapper)

e.g. Compiling C code:

$ module load intel-oneapi-mpi/<version>

$ mpiicx -o myexec  myprog.c

AMD AOCC

The AOCC compilers are available on the AMD CPU clusters. The available versions can be found in the module environment:

$ modmap -m aocc

To use a specific version:

$ module load aocc/<version>

The AOCC compilers support the development of x86 applications written in C, C++, and Fortran.

AMD AOCC compilers:

  • clang: Compile C source files

  • clang++: Compile C++ source files

  • flang: Compile FORTRAN source files

e.g. Compiling Fortran code with AOCC:

$ module load aocc/<version>

$ flang [command line flags] -o myexec myprog.f90

The AOCC compilers offer target-dependent and target-independent optimizations, with a particular focus on AMD “Zen” processors.

You can read more about these in the command-line options section of the AMD AOCC user guide: https://docs.amd.com/r/en-US/57222-AOCC-user-guide/Command-line-Options
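As an illustrative sketch of typical optimization flags for Zen CPUs (the exact -march value, e.g. znver3, depends on the CPU generation of the cluster; file names are placeholders):

$ module load aocc/<version>
$ clang -O3 -march=znver3 -o myexec myprog.c
$ flang -O3 -march=znver3 -o myexec myprog.f90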

The AOCC OpenMPI implementation is available on AMD clusters.

You can check the list of available OpenMPI modules on a specific cluster with the command:

$ modmap -m openmpi

To use a specific one:

$ module load openmpi/<version>

After loading a specific AOCC openmpi module, select the MPI compiler wrapper for Fortran, C or C++ code:

  • mpicc: MPI wrapper for the clang compiler

  • mpic++ mpiCC mpicxx: MPI wrappers for the clang++ compiler

  • mpif77 mpif90 mpifort: MPI wrappers for the flang compiler

e.g. Compiling C code:

$ module load openmpi/<version>

$ mpicc -o myexec  myprog.c

Basic MPI execution

To test if your parallel executable works, you can execute it with mpirun on the login node and with a single process:

module load <mpi module used to install your exec>

mpirun ./myexec

To run it in parallel, you have to allocate compute nodes via an interactive or batch (sbatch) job and execute the code with the mpirun or srun launcher.

Example: allocation of 2 GPU compute nodes and execution of 2 tasks. This can be done in three ways.

1) Allocation with salloc, execution with srun:

module load <mpi module used to install your exec>

salloc -N 2 --ntasks-per-node=1 --cpus-per-task=1 --gres=gpu:1 -A <name account> --time=<execution time> --partition=<partition name> --qos=<qos name>

srun -n 2 ./myexec

2) Interactive job with srun --pty, execution with mpirun:

module load <mpi module used to install your exec>

srun -N 2 --ntasks-per-node=1 --cpus-per-task=1 --gres=gpu:1 -A <name account> --time=<execution time> --partition=<partition name> --qos=<qos name> --pty /bin/bash

mpirun -n 2 ./myexec

3) Batch job with sbatch:

sbatch my_batch_script.sh

cat my_batch_script.sh

#!/bin/sh
#SBATCH --job-name=osu
#SBATCH -N 2
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:1
#SBATCH --time=<hh:mm:ss>
#SBATCH --account=<account name>
#SBATCH --partition=<partition name>
#SBATCH --qos=<qos name if necessary>

module load <mpi module used to install your exec>

mpirun ./myexec        # or: srun ./myexec

Totalview

This section explains how to launch Totalview through an Access via Remote Visualization (RCM) session.

Unlike other GUIs that can be run on RCM, Totalview is a little peculiar: it must be run directly on the nodes that execute the parallel code. In the following, we detail how to establish a Totalview debugging session through RCM with a SLURM job.

Once you have established a connection through RCM with one of our systems, GALILEO100 or Leonardo, please follow the instructions below.

1) Set up the .tvdrc file (only the first time)

The first time you establish a Totalview session, a folder named .totalview is created in your $HOME (it is not visible with the standard “ls” command; you have to add the -a flag to show hidden directories and files). Inside it, create a text file named .tvdrc that should contain the following lines, also documented in the official Slurm manual:

dset -set_as_default TV::bulk_launch_enabled true
dset -set_as_default TV::bulk_launch_string {srun --mem-per-cpu=0 -N%N -n%N -w`awk -F. 'BEGIN {ORS=","} {if (NR==%N) ORS=""; print $1}' %t1` -l --input=none %B/tvdsvr%K -callback_host %H -callback_ports %L -set_pws %P -verbosity %V -working_directory %D %F}
dset -set_as_default TV::bulk_launch_tmpfile1_host_lines {%R}
2) Prepare the job (job.sh script) and submit it

Example job.sh for GALILEO100:

#!/bin/bash

#SBATCH -t 30:00
#SBATCH -N 1
#SBATCH -o totaljob.out
#SBATCH -e totaljob.err
#SBATCH -A <your_account>
#SBATCH -p g100_usr_prod

module load totalview
module load <modules-needed-to-your_executable>

tvconnect srun ./your_executable

Submit the job via:

$ sbatch job.sh
3) Open a Totalview terminal

In the RCM shell, load the Totalview module and launch “totalview” to open the GUI. When the job starts, a prompt will ask you to connect to it and you will see that the tool is trying to debug the “srun” command.

4) Launch the simulation

Press the green “Go” button to launch the simulation. At some point, a prompt will ask whether you want to stop the parallel job: if you choose “Yes”, you will finally see the main code of the executable you want to debug and you can start working on it.

Installing packages with a Python virtual environment

On CINECA clusters you can list the available versions of python and py-mpi4py with the commands modmap -m python and modmap -m py-mpi4py, respectively. If you need to install packages in a Python virtual environment, you can do:

$ module load python/<version>
# In case you need py-mpi4py
$ module load py-mpi4py/<version>
$ python -m venv my_env_test
$ source my_env_test/bin/activate
$ pip install <package>
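For example, a minimal illustrative session (numpy is just an example package; any installable package works the same way):

$ pip install numpy
$ python -c "import numpy; print(numpy.__version__)"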

Note

  • my_env_test: choose an arbitrary name for your personal virtual env.

  • It is advised to create your personal envs in your $WORK area, since the $HOME disk quota is limited to 50 GB.

  • Once you source your virtual environment, its name is shown in your shell prompt (before the user and login node name), e.g.: (my_env_test) [otrocon1@login02 UserGuideTests]$ .

  • Once you have finished working in your environment, you can deactivate it with the deactivate command.

  • In case you need specific python or artificial intelligence packages optimized for Cineca’s clusters you can refer to the section: Cineca-ai and Cineca-hpyc modules.
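Beyond interactive use, if you want to use the environment inside a batch job, a minimal sketch could look like the following (account, partition, time, versions and paths are placeholders; adapt them to your case):

#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH -N 1
#SBATCH --ntasks-per-node=1
#SBATCH --time=<hh:mm:ss>
#SBATCH --account=<account name>
#SBATCH --partition=<partition name>

module load python/<version>
# activate the virtual environment created above (assuming it was created in $WORK)
source $WORK/my_env_test/bin/activate
python my_script.py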

SPACK

To assist users in customizing their production environment by installing fresh software, we offer a powerful tool named Spack. Spack is a multi-platform package manager that facilitates the easy installation of multiple versions and configurations of software. Below, you will find a step-by-step guide to install software using Spack. For a comprehensive and detailed guide, please refer to the official Spack documentation.

Quick usage

$ ml spack
$ spack spec -Il <package>  # to check current specs
$ spack install <package>   # to actually install
$ ml <package>              # load the created module

For fine-grained control, you can select the Spack version (see Loading the preconfigured Spack available on the cluster) and add specs (see Variants and dependencies) to the spec and install commands (see Spec and install commands). The module created by Spack may miss some dependencies; in that case you can create the missing modulefiles via spack module tcl refresh (see Module command and Spack managing).

Additional useful steps are described in the sections below.

Installing a new package

Loading the preconfigured Spack available on the cluster

We provide a module to load a pre-configured Spack instance:

$ modmap -m spack
$ module load spack/<version>

A spack-<version> directory is automatically created in a default area; it contains the sub-directories created and used by Spack during the package installation. On GALILEO100 the default area is $WORK/$USER, while on LEONARDO it is $PUBLIC. For example, on LEONARDO you will find:

  • software installation root: $PUBLIC/spack-<version>/install

  • modulefiles location: $PUBLIC/spack-<version>/modules

  • user scope: $PUBLIC/spack-<version>/user_cache

  • sources cache: $PUBLIC/spack-<version>/cache

For GALILEO100 users, please be aware that the $WORK space is removed six months after project expiration. If you want to define different paths for installations, modules, user scope directories, and cache, please refer to the Spack manual (a simple workaround is to redefine WORK to a different path, e.g. export WORK=/your/different/path, before loading the Spack module).

Listing the software that can be installed via Spack

You can check if the software package you want to install is known to Spack via the command spack list, which will print out the list of all the packages you can install via Spack. You can also specify the name of the package (or only part of its name):

$ spack list <package_name>
$ spack list <partial_package_name>

or

$ spack list | grep <package_name>
Find already installed packages

You will find a suite of compilers, libraries, tools and applications already installed by the Cineca staff via Spack. It is strongly recommended to use them when installing additional software.

Find the already installed packages

$ spack find

Check whether a specific package is already installed, or which installed packages provide a specific virtual package (e.g. mpi):

$ spack find <package_name>
$ spack find <virtual_package_name>

List the packages already installed and see e.g. the used variants (-v), dependencies (-d), the installation path (-p) and the hash (-l). The meaning of the hash is discussed in the next paragraph.

$ spack find -ldvp <package>

You can also list the packages already installed with a specific variant

$ spack find -l +<variant>
e.g. $ spack find -l +cuda

or those that depend on a specific package (e.g. openmpi) or on a generic virtual package (e.g. mpi)

$ spack find -l ^<package_name>
e.g. $ spack find -l ^openmpi
e.g. $ spack find -l ^mpi

or installed with a specific compiler

$ spack find %<compiler>
Add a new compiler to Spack compilers

The list of all the compilers already installed and ready to be used can be seen with

$ spack compilers

To add a compiler to the ones known to Spack:

$ module load <compiler>
$ spack compiler add
$ module unload <compiler>
Variants and dependencies

If the package of your interest is listed by spack list, you can inspect its build variants via

$ spack info <package_name>

You can activate (+) or deactivate (-) variants via

$ spack spec -Il <package_name> +variant_1 -variant_2 variant_3=value
$ spack install <package_name> +variant_1 -variant_2 variant_3=value

and also for a dependency

$ spack spec -Il <package_name> ^"<dependency_package_name> +variant_1 -variant_2 variant_3=value"
$ spack install <package_name> ^"<dependency_package_name> +variant_1 -variant_2 variant_3=value"

Spec and install commands

In order to install a package with the Spack module, you have to select a version (@), a compiler (%), the dependencies (^) and the build variants (+/-). The combination of all these parameters is the spec with which the package will be installed.

If you don’t select any combination during the installation, a default spec is selected. Before installing a package, it is strongly recommended to check the default spec with which the package would be installed:

$ spack spec -Il <package_name>

The suggested options to the spec command used in the example above are: -I (install status), which shows the installation status of the package and its dependencies with a symbol preceding the hash of the spec (- not installed, + installed, ^ installed upstream, e.g. by another user); -l (long), which shows the unique identifier (“hash”) of the package installation (e.g. aouyzha).

Important

On Cineca clusters it is recommended to always execute the spec command before installing a package, to make sure its dependencies are satisfied by the Cineca installations (^) where available. The Cineca installations are optimised and tested for the architecture of the specific cluster. This is especially important for e.g. openmpi.

Note

Even when a Cineca installation is available to satisfy a dependency, the default spec for that dependency may differ, so a - symbol may be shown. If possible, force the spec to match the Cineca one (so that the symbol becomes ^). A simple way to do this is to pin the dependency via its hash:

$ spack spec -Il <package_name> ^/hash
$ spack install <package_name> ^/hash
e.g. $ spack spec -Il <package_name> ^/aouyzha
e.g. $ spack install <package_name> ^/aouyzha

Once you select the spec, a spack install is all you need:

$ # default spec
$ spack install <package_name>
$
$ # custom spec
$ spack install <package_name>@<version> +/~<variant> <variant>=<value> %<compiler>@<version> ^<dependency_name>
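As a concrete but purely illustrative example (package, version, variant and compiler names are placeholders and may not correspond to what is actually available on a given cluster):

$ # e.g. an OpenMP-enabled fftw built with a given gcc against the available openmpi
$ spack spec -Il fftw@3.3.10 +openmp %gcc@12.2.0 ^openmpi
$ spack install fftw@3.3.10 +openmp %gcc@12.2.0 ^openmpi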

Module command and Spack managing

You can load the installed software by loading the corresponding modulefile that Spack automatically creates. To force its creation, you can run:

$ spack module tcl refresh --upstream-modules <package_name>

Then you can find and load the new modulefile by adding the “modules” folder to the search path via module use (this is done implicitly also when loading Spack), e.g. on Leonardo:

$ module use $PUBLIC/spack-<version>/modules
$ module av <package_module>
$ module load <package_module>

Please refer to section Loading the preconfigured Spack available on the cluster to know the correct path to the modulefiles folder.