diff --git a/docs/environment/spack.md b/docs/environment/spack.md new file mode 100644 index 000000000..9fe19c119 --- /dev/null +++ b/docs/environment/spack.md @@ -0,0 +1,141 @@ +# Spack + + + + + +[Spack](https://spack.io/about/) is an open-source package manager designed for installing, building, and managing software on High Performance Computing systems. It facilitates the efficient installation of software packages with an added emphasis on flexibility, as it allows for extensive and easy customization of dependencies through command-line options. It provides a large number of software packages ([more than 8,600](https://packages.spack.io/) as of the `v1.0.0` release), and supports multiple versions, compilers, and configurations, all of which can coexist on a single system without conflict. + + + +??? question "Why use automatic building tools like [Easybuild](https://docs.easybuild.io) or [Spack](https://spack.io/) in HPC environments?" + + While it may seem obvious to some of you, scientific software is often surprisingly difficult to build. Not all software packages rely on standard building tools like Autotools/Automake (the famous `configure; make; make install`) or CMake. Even with standard building tools, parsing the available options to ensure that the build matches the underlying hardware is time-consuming and error-prone. Furthermore, scientific software often contains hardcoded build parameters, or the documentation on how to optimize the build is poorly maintained. + + Software build and installation frameworks like Easybuild or Spack allow reproducible builds, handle complex dependency trees, and automatically generate corresponding environment modulefiles (e.g., for Lmod) for easy usage. On the ULHPC platform, EasyBuild is the primary tool used to ensure optimized software builds. However, Spack is also available and can be valuable in more flexible or user-driven contexts. Some HPC sites use both [1]. + + _Resources_ + + 1. See this [talk](https://www.archer2.ac.uk/training/courses/200617-spack-easybuild/) by William Lucas at EPCC for instance. + +??? question "[Spack](https://spack.io/) or [Easybuild](https://docs.easybuild.io)? Use cases" + + + In theory, Spack and Easybuild are two different software management frameworks for HPC systems and may therefore be seen as direct competitors. In practice, however, their distinct features make each one well-suited for different tasks within the ULHPC cluster. + + Easybuild mainly focuses on stability and robustness. It provides a user-facing software stack built out of the same common toolchain and based on stable versions. For these reasons, it has been the main framework to manage the software set on ULHPC for several years. + + On the other hand, Spack is designed to provide flexibility and easy-to-tweak software installations. These features make Spack particularly suitable for personal or software development setups, where users can easily tweak software installations via the command line by selecting specific compilers, enabling/disabling features like MPI or CUDA, or managing large and complex dependency chains. Additionally, Spack supports user-level installations, allowing users to create isolated environments without administrative privileges. In the context of the ULHPC cluster, Spack is particularly suitable for setting up user and experimental software environments, in cases where the required software is not included in the software stack or when it is necessary to build multiple configurations of the same software, as in software development.
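    For illustration, a minimal sketch of this flexibility (the package name and variants below are only examples): the same package can be installed several times with different configurations, and all builds coexist side by side.

    ``` { .sh .copy }
    # Hypothetical example: two coexisting builds of the same package
    spack install hdf5 +mpi     # MPI-enabled build
    spack install hdf5 ~mpi     # serial build
    spack find -lv hdf5         # list both installations with their variants
    ```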
+ +## Pivot elements of Spack + +### Spec syntax +Thanks to its [spec syntax](https://spack.readthedocs.io/en/latest/spec_syntax.html), Spack offers extensive and easy-to-tweak customization of software installations. The constraints used to install a specific configuration of a software package are referred to as *specs*. These can be used to specify multiple options such as the compiler version, compile options, architecture, microarchitecture, direct and transitive dependencies, and even Git versions. Its recursive syntax allows these elements to be combined in many ways, enabling the installation of highly specific software configurations. The strength of this syntax is illustrated in the example below, which shows how to use these constraints to specify an installation of `mpileaks`: +``` { .sh .copy } +spack install mpileaks@1.2:1.4 +debug ~qt target=x86_64_v3 %gcc@15 ^libelf@1.1 %clang@20 +``` +Package versions are selected after `@`, direct dependencies after the `%` sigil, and transitive dependencies after the `^` sigil. By combining them, we can target a very specific installation of the `mpileaks` software.
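Before actually installing anything, the result of such a spec can be previewed. As a small sketch (reusing a simplified version of the spec above), `spack spec` prints the fully concretized dependency tree without triggering a build, and the `-I` flag additionally shows the install status of each node:

``` { .sh .copy }
# Preview the concretized spec; nothing is installed
spack spec -I mpileaks@1.2:1.4 +debug ~qt target=x86_64_v3 %gcc@15
```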
note "Expected output resembles:" + + ``` { .sh } + spack () + { + : this is a shell function from: /home/users//spack/share/spack/setup-env.sh; + : the real spack script is here: /home/users//spack/bin/spack; + _spack_shell_wrapper "$@"; + return $? + } + ``` + +We can check the multiple [configuration scopes](https://spack.readthedocs.io/en/latest/configuration.html#configuration-files) available, displayed from highest to lowest precedence order: + +```{ .sh .copy } +spack config scopes -p +Scope Path +command_line +spack /mnt/aiongpfs/users/jdelgadoguerrero/.local/easybuild/software/Spack/1.1.0/etc/spack/ +user /home/users/jdelgadoguerrero/.spack/ +site /mnt/aiongpfs/users/jdelgadoguerrero/.local/easybuild/software/Spack/1.1.0/etc/spack/site/ +defaults /mnt/aiongpfs/users/jdelgadoguerrero/.local/easybuild/software/Spack/1.1.0/etc/spack/defaults/ +defaults:linux /mnt/aiongpfs/users/jdelgadoguerrero/.local/easybuild/software/Spack/1.1.0/etc/spack/defaults/linux/ +defaults:base /mnt/aiongpfs/users/jdelgadoguerrero/.local/easybuild/software/Spack/1.1.0/etc/spack/defaults/base/ +_builtin +``` + +The highest scope corresponds to the `$EBROOTSPACK/etc/spack`. This setup is intended for *single-user* or *project-specific* installations of Spack. The second highest scope corresponds to the `~/.spack` folder which is automatically created in your `$HOME` sdirectory when the module is loaded. This implies that any [configuration file](https://spack.readthedocs.io/en/latest/configuration.html) placed in that folder has precedence over any other scope, offering the possibility of easily configure and customize Spack at the user level. + +The YAML file responsible of basic configuration options is the `config.yaml`. A minimum version of the `config.yaml` can be found on `$EBROOTSPACK/etc/spack/site/`. It can be copied to the `~/.spack` via the following command: + +```{ .sh .copy } +cp $EBROOTSPACK/etc/spack/site/config.yaml ~/.spack/. +``` + +#### Define System-Provided Packages + +Package settings are determined in Spack through [`packages.yaml`](https://spack.readthedocs.io/en/latest/packages_yaml.html) configuration file. Following [spec syntax](https://spack.readthedocs.io/en/latest/spec_syntax.html), Spack allows fine-grained control over how software is built. This enables users to choose preferred implementations for virtual dependencies, select specific compilers, and configure Spack to use external software already installed on the ULHPC cluster, thus avoiding the need to rebuild everything from source at the `$HOME` directory. + +!!! question "Why is it crucial for users to define external packages in packages.yaml?" + + While Spack can build everything from source, fundamental libraries like [MPI](../software/swsets/mpi.md) and [BLAS](https://www.netlib.org/blas/)/[LAPACK](https://www.netlib.org/lapack/)are often highly optimized and meticulously tuned by system administrators to leverage the specific hardware capabilities of the HPC clusters (e.g., network interconnects, CPU features, GPU architectures). + + Using Spack's generic builds for these core libraries often results in sub-optimal performance compared to the finely-tuned system-provided versions. Declaring optimized external packages in `packages.yaml` ensures that Spack-built applications link against the most performant versions available in the [ULHPC software collection](https://hpc-docs.uni.lu/software/), thereby maximizing the efficiency of scientific computations. 
+A `packages.yaml` file is provided via `$EBROOTSPACK/etc/spack/site/packages.yaml`. It contains the configuration of different external system build dependencies, as well as the `GCC` and `LLVM` compilers provided in the `2023b` release. Moreover, it also contains all the packages corresponding to the `2023b` *foss* common compiler toolchain and other relevant packages to compile scientific code. During a Spack software installation, the software packages specified in `packages.yaml` are labeled as externals and Spack automatically loads the corresponding environment module. To do so, the module name must be specified in the package configuration. For example, the GCC compiler was added to `packages.yaml` with the following specifications: + +```yaml + gcc: + externals: + - spec: gcc@13.3.0 languages:='c,c++,fortran' + prefix: /opt/apps/easybuild/systems/aion/rhel810-20251006/2024a/epyc/software/GCCcore/13.3.0 + modules: + - compiler/GCC/13.3.0 + extra_attributes: + compilers: + c: /opt/apps/easybuild/systems/aion/rhel810-20251006/2024a/epyc/software/GCCcore/13.3.0/bin/gcc + cxx: /opt/apps/easybuild/systems/aion/rhel810-20251006/2024a/epyc/software/GCCcore/13.3.0/bin/g++ + fortran: /opt/apps/easybuild/systems/aion/rhel810-20251006/2024a/epyc/software/GCCcore/13.3.0/bin/gfortran + buildable: false +``` +With this setup, Spack labels the GCC compiler as external and will load the corresponding module during the installation of software built with GCC. Moreover, it has also been labeled as not buildable, which implies that Spack will not try to build the package unless it is forced to do so. The list of the available compilers detected by Spack can be displayed with the command `spack compiler list`. + +In order to customize the package settings at the user-level scope, `$EBROOTSPACK/etc/spack/site/packages.yaml` can be copied into the `~/.spack` folder. Alternatively, a new `packages.yaml` file can be created in `~/.spack` to add new external packages or override the `$EBROOTSPACK/etc/spack/site` configuration. + +The process of [adding external packages](https://spack.readthedocs.io/en/latest/packages_yaml.html#packages-config) to the user settings can be automated as follows: + +``` +module load $package_module + +spack external find --not-buildable $package_name@version +``` + diff --git a/docs/software/cae/fenics.md b/docs/software/cae/fenics.md index 40fabfbb7..551b6a0c6 100644 --- a/docs/software/cae/fenics.md +++ b/docs/software/cae/fenics.md @@ -1,162 +1,135 @@ [![](https://fenicsproject.org/pub/tutorial/sphinx1/_static/fenics_banner.png){: style="width:200px;float: right;" }](https://fenicsproject.org/) -[FEniCS](https://fenicsproject.org/) is a popular open-source (LGPLv3) computing platform for -solving partial differential equations (PDEs). -FEniCS enables users to quickly translate scientific models -into efficient finite element code. With the high-level -Python and C++ interfaces to FEniCS, it is easy to get started, -but FEniCS offers also powerful capabilities for more -experienced programmers. FEniCS runs on a multitude of -platforms ranging from laptops to high-performance clusters. - -## How to access the FEniCS through [Anaconda](https://www.anaconda.com/products/individual) -The following steps provides information about how to installed -on your local path.
-```bash -# From your local computer -$ ssh -X iris-cluster # OR ssh -Y iris-cluster on Mac - -# Reserve the node for interactive computation with grahics view (plots) -$ si --x11 --ntasks-per-node 1 -c 4 -# salloc -p interactive --qos debug -C batch --x11 --ntasks-per-node 1 -c 4 - -# Go to scratch directory -$ cds - -/scratch/users/ $ Anaconda3-2020.07-Linux-x86_64.sh -/scratch/users/ $ chmod +x Anaconda3-2020.07-Linux-x86_64.sh -/scratch/users/ $ ./Anaconda3-2020.07-Linux-x86_64.sh - -Do you accept the license terms? [yes|no] -yes -Anaconda3 will now be installed into this location: -/home/users//anaconda3 - - - Press ENTER to confirm the location - - Press CTRL-C to abort the installation - - Or specify a different location below - -# You can choose your path where you want to install it -[/home/users//anaconda3] >>> /scratch/users//Anaconda3 - -# To activate the anaconda -/scratch/users/ $ source /scratch/users//Anaconda3/bin/activate - -# Install the fenics in anaconda environment -/scratch/users/ $ conda create -n fenicsproject -c conda-forge fenics - -# Install matplotlib for the visualization -/scratch/users/ $ conda install -c conda-forge matplotlib -``` -Once you have installed the anaconda, you can always -activate it by calling the `source activate` path where `anaconda` -has been installed. - -## Working example -### Interactive mode -```bash -# From your local computer -$ ssh -X iris-cluster # or ssh -Y iris-cluster on Mac - -# Reserve the node for interactive computation with grahics view (plots) -$ si --ntasks-per-node 1 -c 4 --x11 -# salloc -p interactive --qos debug -C batch --x11 --ntasks-per-node 1 -c 4 - -# Activate anaconda -$ source /${SCRATCH}/Anaconda3/bin/activate - -# activate the fenicsproject -$ conda activate fenicsproject - -# execute the Poisson.py example (you can uncomment the plot lines in Poission.py example) -$ python3 Poisson.py -``` - -### Batch script -```bash -#!/bin/bash -l -#SBATCH -J FEniCS -#SBATCH -N 1 -###SBATCH -A -###SBATCH --ntasks-per-node=1 -#SBATCH -c 1 -#SBATCH --time=00:05:00 -#SBATCH -p batch - -echo "== Starting run at $(date)" -echo "== Job ID: ${SLURM_JOBID}" -echo "== Node list: ${SLURM_NODELIST}" -echo "== Submit dir. : ${SLURM_SUBMIT_DIR}" - -# activate the anaconda source -source ${SCRATCH}/Anaconda3/bin/activate - -# activate the fenicsproject from anaconda -conda activate fenicsproject - -# execute the poisson.py through python -srun python3 Poisson.py -``` - -### Example (Poisson.py) -```bash -# FEniCS tutorial demo program: Poisson equation with Dirichlet conditions. -# Test problem is chosen to give an exact solution at all nodes of the mesh. 
-# -Laplace(u) = f in the unit square -# u = u_D on the boundary -# u_D = 1 + x^2 + 2y^2 -# f = -6 - -from __future__ import print_function -from fenics import * -import matplotlib.pyplot as plt - -# Create mesh and define function space -mesh = UnitSquareMesh(8, 8) -V = FunctionSpace(mesh, 'P', 1) - -# Define boundary condition -u_D = Expression('1 + x[0]*x[0] + 2*x[1]*x[1]', degree=2) - -def boundary(x, on_boundary): - return on_boundary - -bc = DirichletBC(V, u_D, boundary) - -# Define variational problem -u = TrialFunction(V) -v = TestFunction(V) -f = Constant(-6.0) -a = dot(grad(u), grad(v))*dx -L = f*v*dx - -# Compute solution -u = Function(V) -solve(a == L, u, bc) - -# Plot solution and mesh -#plot(u) -#plot(mesh) - -# Save solution to file in VTK format -vtkfile = File('poisson/solution.pvd') -vtkfile << u - -# Compute error in L2 norm -error_L2 = errornorm(u_D, u, 'L2') - -# Compute maximum error at vertices -vertex_values_u_D = u_D.compute_vertex_values(mesh) -vertex_values_u = u.compute_vertex_values(mesh) -import numpy as np -error_max = np.max(np.abs(vertex_values_u_D - vertex_values_u)) - -# Print errors -print('error_L2 =', error_L2) -print('error_max =', error_max) - -# Hold plot -#plt.show() -``` + + + +[FEniCS](https://fenicsproject.org/) is a popular open-source computing platform for solving partial differential equations (PDEs) using the finite element method ([FEM](https://en.wikipedia.org/wiki/Finite_element_method)). Originally developed in 2003, the earlier version is now known as legacy FEniCS. In 2020, the next-generation framework [FEniCSx](https://docs.fenicsproject.org/) was introduced, with the latest stable [release v0.9.0](https://fenicsproject.org/blog/v0.9.0/) published in October 2024. Although it builds on legacy FEniCS, it introduces significant improvements over the legacy libraries. FEniCSx is composed of the following building blocks, which together support the typical workflow: [UFL](https://github.com/FEniCS/ufl) → [FFCx](https://github.com/FEniCS/ffcx) → [Basix](https://github.com/FEniCS/basix) → [DOLFINx](https://github.com/FEniCS/dolfinx). New users are encouraged to adopt [FEniCSx](https://fenicsproject.org/documentation/) for its modern features and active development support. + +FEniCSx can be installed on [ULHPC](https://www.uni.lu/research-en/core-facilities/hpc/) systems using [Easybuild](https://docs.easybuild.io) or [Spack](https://spack.io/). Below are detailed instructions for each method. + + + +### Building FEniCS With Spack + +Building FEniCSx with Spack on the [ULHPC](https://www.uni.lu/research-en/core-facilities/hpc/) system requires that users have already installed Spack and sourced its environment on the cluster. If Spack is not yet configured, follow the [Spack documentation](../../environment/spack.md) for installation and configuration. + +!!! note + + Spack is a good choice for building FEniCSx because it automatically manages complex dependencies, isolates all installations in a dedicated environment, leverages system-provided packages declared in `~/.spack/packages.yaml` for optimal performance, and simplifies reproducibility and maintenance across different systems. + +Create and activate a Spack environment: + +To maintain an isolated installation, create a dedicated Spack environment in a chosen directory. The following example sets up the stable release `v0.9.0` of FEniCSx in the `fenicsx-test` directory inside the home directory: + + cd ~ + spack env create -d fenicsx-test/ + spack env activate fenicsx-test/
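Optionally, the newly created environment can be sanity-checked before any specs are added; the following sketch is purely informational:

    # Show which environment is currently active
    spack env status
    # List root specs and installed packages in the active environment (empty at this point)
    spack find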
+ +Add the core FEniCSx components and common dependencies: + + spack add py-fenics-dolfinx@0.9.0+petsc4py fenics-dolfinx+adios2+petsc adios2+python petsc+mumps + +!!! note "Additional information" + + The `spack add` command adds abstract specs of packages to the currently active environment and registers them as root specs in the environment's `spack.yaml` file. Alternatively, packages can be predefined directly in the `spack.yaml` file located in `$SPACK_ENV`. + + spack: + # add package specs to the `specs` list + specs: + - py-fenics-dolfinx@0.9.0+petsc4py + - fenics-dolfinx+adios2+petsc + - petsc+mumps + - adios2+python + + view: true + concretizer: + unify: true + !!! note + Replace `@0.9.0` with a different version if you prefer to install another release. + +??? question "Why `unify: true`?" + + `unify: true` ensures all packages share the same dependency versions, preventing multiple builds of the same library. Without it, each spec could resolve dependencies independently, leading to potential conflicts and redundant installations. + +Once package specs have been added to the current environment, they need to be concretized and installed: + + spack concretize + spack install -j16 + +!!! note + + Here, [`spack concretize`](https://spack.readthedocs.io/en/latest/environments.html#spec-concretization) resolves all dependencies and selects compatible versions for the specified packages. In addition to adding individual specs to an environment, the `spack install` command installs the entire environment at once, and the `-j16` option sets the number of CPU cores used for building, which can speed up the installation. + Once installed, the FEniCSx environment is ready to use on the cluster. + +The following are also common dependencies used in FEniCS scripts and can be added in the same way: + + spack add gmsh+opencascade py-numba py-scipy py-matplotlib + +It is possible to build a specific version (Git ref) of DOLFINx. +Note that the hash must be the full commit hash. +It is best to specify appropriate Git refs on all components. + + # This is a Spack Environment file. + # It describes a set of packages to be installed, along with + # configuration settings. + spack: + # add package specs to the `specs` list + specs: + - fenics-dolfinx@git.4f575964c70efd02dca92f2cf10c125071b17e4d=main+adios2 + - py-fenics-dolfinx@git.4f575964c70efd02dca92f2cf10c125071b17e4d=main + + - py-fenics-basix@git.2e2a7048ea5f4255c22af18af3b828036f1c8b50=main + - fenics-basix@git.2e2a7048ea5f4255c22af18af3b828036f1c8b50=main + + - py-fenics-ufl@git.b15d8d3fdfea5ad6fe78531ec4ce6059cafeaa89=main + + - py-fenics-ffcx@git.7bc8be738997e7ce68ef0f406eab63c00d467092=main + + - fenics-ufcx@git.7bc8be738997e7ce68ef0f406eab63c00d467092=main + + - petsc+mumps + - adios2+python + view: true + concretizer: + unify: true + +It is also possible to build only the C++ layer. Note that the Python package `py-fenics-ffcx` is still included, since the FFCx form compiler is a Python tool used to generate the C code for variational forms: + + spack add fenics-dolfinx@0.9.0+adios2 py-fenics-ffcx@0.9.0 petsc+mumps + +To rebuild FEniCSx from the main branches inside an existing environment: + + spack install --overwrite -j16 fenics-basix py-fenics-basix py-fenics-ffcx fenics-ufcx py-fenics-ufl fenics-dolfinx py-fenics-dolfinx + +#### Testing the build + +Quickly test the build with: + + srun python -c "from mpi4py import MPI; import dolfinx"
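Beyond this quick check, the environment can also be used from a regular batch job. A minimal sketch is shown below; the job parameters, the Spack module version, and the `poisson.py` script name are placeholders to adapt to your own setup:

    #!/bin/bash -l
    #SBATCH -J FEniCSx
    #SBATCH -N 1
    #SBATCH --ntasks-per-node=4
    #SBATCH -c 1
    #SBATCH --time=00:10:00
    #SBATCH -p batch

    # Load Spack and activate the FEniCSx environment created above
    module load devel/Spack/0.21.2
    spack env activate ~/fenicsx-test/

    # Run a DOLFINx script in parallel (poisson.py is a placeholder)
    srun python3 poisson.py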
info "Try the Build Explicitly" + + After installation, the [FEniCSx](https://fenicsproject.org/documentation/) build can be tried explicitly by running the demo problems corresponding to the installed release version, as provided in the [FEniCSx documentation](https://docs.fenicsproject.org/). + For [DOLFINx](https://docs.fenicsproject.org/dolfinx/main/python/) Python bindings, see for example the demos in the [stable release v0.9.0](https://docs.fenicsproject.org/dolfinx/v0.9.0/python/demos.html). + + +#### Known issues + +Workaround for inability to find gmsh Python package: + + export PYTHONPATH=$SPACK_ENV/.spack-env/view/lib64/:$PYTHONPATH + +Workaround for inability to find adios2 Python package: + + export PYTHONPATH=$(find $SPACK_ENV/.spack-env -type d -name 'site-packages' | grep venv):$PYTHONPATH + + + + +### Building FEniCS With EasyBuild + + + ## Additional information FEniCS provides the [technical documentation](https://fenicsproject.org/documentation/), diff --git a/mkdocs.yml b/mkdocs.yml index 5d0cfdd10..ba9ade0ae 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -69,10 +69,17 @@ nav: # - Multi-Factor Authentication: 'connect/mfa.md' ############## - Environment: +<<<<<<< HEAD + - Overview: 'environment/index.md' + - Modules: 'environment/modules.md' + - Easybuild: 'environment/easybuild.md' + - Spack: 'environment/spack.md' +======= - Overview: 'environment/index.md' - Modules: 'environment/modules.md' - Easybuild: 'environment/easybuild.md' - EESSI software stack: 'environment/eessi.md' +>>>>>>> upstream - Containers: 'environment/containers.md' - Conda: 'environment/conda.md' ###########