Intel compilers
New Intel compilers (Intel OneAPI Base and HPC Toolkits)
The newest versions of the Intel compilers, available under the name Intel OneAPI, no longer require a license; they are freely available. The 2023.2 version has been installed on C&CZ-managed Ubuntu systems. On machines where they are available, the command
module avail
will list, among others, “compiler/latest”; the Intel C (icc) and Fortran (ifort) compilers can then be used after typing
module add compiler/latest
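To check that the compilers are then available on your PATH, you can for example print their versions:
icc --version
ifort --version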
Old Intel compilers (2019u5 / 2019u4 / 2019 / 2014)
Together with TCM and Theoretical Chemistry, C&CZ has bought two licences for concurrent use of the Intel Parallel Studio XE for Linux. Different versions have been installed in /vol/opt/intelcompilers and are available on, among others, cluster nodes and login servers. To set the environment variables correctly, SH/BASH users must first run:
source /vol/opt/intelcompilers/intel-2019u5/composerxe/bin/compilervars.sh intel64
and CSH users must run:
setenv arch intel64
source /vol/opt/intelcompilers/intel-2019u5/composerxe/bin/compilervars.csh intel64
After that,
icc -V
returns the version number as output. For older versions, replace “2019u5” with “2019u4”, “2019” or “2014”.
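As a quick test of the environment, a minimal Fortran program can be compiled and run (hello.f90 is just an example file name):
cat > hello.f90 <<'EOF'
program hello
  print *, 'Hello from ifort'
end program hello
EOF
ifort -o hello hello.f90 && ./hello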
A very useful resource is the intel-mkl-link-line-advisor, which will advise you on compiler and linker options for using the MKL.
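As an illustration: for a 64-bit dynamic build with the LP64 interface and OpenMP threading, the advisor typically suggests a link line along these lines (myprog.c is a placeholder; check the advisor for your exact setup):
icc myprog.c -L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl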
Documentation for the previous version (2011)
Compiling Fortran (/opt/intel/bin/ifort)
- Intel Fortran Composer XE 2011 Getting Started Tutorials
- Intel Fortran Compiler XE 12.0 User and Reference Guides
Math Kernel Library (mkl, linking blas, lapack)
Intel Cluster Studio 2011
- Intel Cluster Studio 2011 for Linux* OS - index to all local documentation
- Intel® MPI Library for Linux OS Documentation Index
How to create a standalone MKL version of the BLAS and LAPACK shared libraries?
This is described in detail in Building Custom Shared Objects
- Create a new directory (e.g. ~/lib)
mkdir ~/lib
cd ~/lib
- Copy these files:
cp /opt/intel/composerxe/mkl/tools/builder/{makefile,blas_list,lapack_list} ~/lib
- Set the MKLROOT variable (in bash):
MKLROOT=/opt/intel/mkl
export MKLROOT
In tcsh use:
setenv MKLROOT /opt/intel/mkl
- Make the shared libraries libblas_mkl.so and liblapack_mkl.so
make libintel64 export=blas_list interface=lp64 threading=parallel name=libblas_mkl
make libintel64 export=lapack_list interface=lp64 threading=parallel name=liblapack_mkl
The options are described here.
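As a quick sanity check, you can list the dynamic symbols of a newly built library and look for a well-known BLAS routine such as dgemm:
nm -D ~/lib/libblas_mkl.so | grep -i dgemm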
The newly created libblas_mkl.so and liblapack_mkl.so require
/opt/intel/lib/intel64/libiomp5.so
to work. On the cluster nodes this file is found automatically when required.
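On other machines you may have to make it findable yourself, for example by extending LD_LIBRARY_PATH (bash syntax):
LD_LIBRARY_PATH=/opt/intel/lib/intel64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH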
Using the MKL BLAS and LAPACK shared libraries (with Scilab)
This should work for any executable that uses a dynamically linked BLAS or LAPACK. We use Scilab as an example.
- Make sure we have an executable, not just a script that calls the executable:
file scilab-bin
The output looks something like this:
scilab-bin: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15 ...
- Determine the exact name that is used by the executable:
ldd scilab-bin | grep blas
The output could be:
libblas.so.3gf => ~/scilab-5.4.1/lib/thirdparty/libblas.so.3gf
- Replace the library with a link to the MKL version
cd ~/scilab-5.4.1/lib/thirdparty/
rm libblas.so.3gf
ln -s ~/lib/libblas_mkl.so libblas.so.3gf
Also follow this procedure for LAPACK.
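You can then verify with ldd that the executable picks up the MKL versions:
ldd scilab-bin | grep -E 'blas|lapack'
The listed paths should now point to the symbolic links created above.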
- To use more than one thread, i.e., for parallel computation, set:
MKL_NUM_THREADS=4
export MKL_NUM_THREADS
This example will use 4 cores.
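In tcsh use:
setenv MKL_NUM_THREADS 4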
- To check the number of cores available, use:
cat /proc/cpuinfo | grep processor | wc -l
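Alternatively, most Linux systems provide the nproc command for this:
nproc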