We live in the era of multicore computers, where almost every
computing machine has several CPU cores. To benefit from this
computational power, we need programs explicitly written for
parallel machines. Nowadays there are several paradigms for writing
parallel code, and in this crash course we will focus on OpenMP. This is
an open standard for shared-memory parallelism that is minimally
invasive: in many cases it allows fast and straightforward
parallelization of existing, working serial codes. OpenMP is meant for
compiled languages (e.g. C, C++, or Fortran) running on a single
compute node, but we will also visit some alternatives for Python,
along with a short overview of parallelization across
distributed-memory machines via MPI.