About Us

NHR South-West

NHR South-West is a strategic collaboration between Goethe University Frankfurt, RPTU Kaiserslautern-Landau, Johannes Gutenberg University Mainz, and Saarland University, and is part of the German National High Performance Computing (NHR) Alliance.

By combining the strengths of its partners in High Performance Computing (HPC) and Artificial Intelligence (AI), NHR South-West provides researchers with access to cutting-edge HPC systems and services. Our teams work closely with domain scientists in Simulation and Data Labs, while the Method Labs focus on developing innovative computational methods.

Core and Research Competencies

Simulation and Data Labs (SDLs)

Life sciences and Molecular Systems

The life sciences comprise some of the disciplines with the fastest-growing demands for compute performance, data storage, and AI/ML support, and they overlap substantially with the physics of molecular systems. An overarching topic of this SDL is multi-scale modeling, which requires new approaches to data organization and management. The SDL interacts closely with the SDL Biology at JSC and supports researchers from medicine. It focuses on developing scalable HPC methods, providing the corresponding user support, and assisting the various user groups and applications with machine learning.

Nuclear, Particle and Astrophysics

The SDL combines physical research in nuclear, particle, and astrophysics with the development and application of algorithms and software for current and future HPC architectures. The first topical focus will be on QCD applications, which require large core counts, low-latency interconnects, and high-bandwidth memory. The computational scientists of this SDL will help to extend the established "Lattice Practices" workshop series, currently organized by DESY/Zeuthen and Jülich, with an emphasis on the use of new hardware architectures, strengthening the community's ability to make the most efficient use of the allocated computing resources. The second main focus of this SDL will be to assist users from astrophysics in optimizing scientific codes such as Whisky or Llama, applying vectorization and GPU acceleration, and incorporating new methods such as the CCZ4 formulation.

Quantum Computer Systems

This SDL focuses on the development of software tools, modeling concepts, and solution algorithms specific to the various quantum computer technologies. It also supports users in applying hybrid quantum-HPC algorithms and works closely with the Jülich Supercomputing Centre. Furthermore, the SDL serves as a bridge, transferring basic research into innovative applications and knowledge from research to industrial users. In addition, it organizes training events on the topic as well as short stays, annual summer schools, and international workshops in cooperation with the JSC.

Method Labs

Performance Engineering

Standard support offered by this lab includes code optimization, vectorization, and adaptation of code for GPUs. Novel tools developed in our other method labs will also be made available through this lab. The greatest potential for efficiency gains lies in the algorithms themselves; several joint projects with our user groups to improve the efficiency of scientific software have led to improvements of typically more than one order of magnitude. Training activities include workshops on (parallel) programming, vectorization, GPU programming, debugging, and performance tuning.
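
To give a concrete flavor of this kind of support, the sketch below shows a loop opened up for compiler vectorization with an OpenMP simd directive. The axpy kernel and the compile line are illustrative assumptions, not code from any user project:

    // axpy.cpp: a minimal loop-vectorization sketch (illustrative only).
    // Compile e.g. with: g++ -O3 -fopenmp-simd -march=native axpy.cpp
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // y[i] += a * x[i]; the pragma asks the compiler to vectorize the loop.
    void axpy(float a, const std::vector<float>& x, std::vector<float>& y) {
        #pragma omp simd
        for (std::size_t i = 0; i < x.size(); ++i)
            y[i] += a * x[i];
    }

    int main() {
        std::vector<float> x(1000, 1.0f), y(1000, 2.0f);
        axpy(3.0f, x, y);
        std::printf("y[0] = %f\n", y[0]); // expected: 5.0
    }

Whether the compiler actually vectorized such a loop can be checked with its optimization report (e.g. -fopt-info-vec for GCC), which is part of the standard analysis workflow in these projects.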

AI and ML

AI is rapidly transforming nearly all scientific disciplines, with an ever-expanding ecosystem of techniques and tools. For individual researchers, keeping pace, let alone identifying and effectively applying the right solution, has become a major challenge. This method lab aims to help scientists navigate the evolving AI/ML landscape by providing targeted support tailored to their needs. We focus on visualization, debugging, and explainability (key components of Explainable AI) to foster deeper understanding and more efficient use of AI in concrete projects. We also support users interested in integrating domain-specific models with neural approaches, such as Hybrid or Neurosymbolic AI. Our services include tool deployment, optimization, and user training for frameworks like TensorFlow, PyTorch, and Caffe. In collaboration with other method labs, we are also addressing the limited HPC support in many AI tools to ensure scalable, high-performance integration.

Parallel Programming Environments and Domain-Specific Compilation

Modern HPC systems increasingly rely on heterogeneous accelerators, posing major challenges for code portability and programmability. Legacy applications written in low-level languages like C/C++ are often ill-suited for these architectures.

This method lab supports the adoption of domain-specific languages (DSLs), which allow scientists to express application logic at a higher abstraction level while enabling efficient execution across diverse hardware. We focus on adapting AnyDSL and related technologies to user needs, offering regular tutorials and hands-on support for DSL implementation. Our expertise includes metaprogramming, DSL embedding, LLVM-based compilation, and automatic differentiation with tools like CoDiPack for high-performance derivative computations.
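
As an illustration of the automatic differentiation mentioned above, the following minimal sketch computes a first derivative with CoDiPack's forward mode, assuming its standard codi::RealForward type; the toy function and the build line are illustrative, not taken from a user code:

    // codi_forward.cpp: minimal forward-mode AD with CoDiPack (a sketch).
    // Assumes the CoDiPack headers are on the include path, e.g.:
    //   g++ -O2 -I/path/to/CoDiPack/include codi_forward.cpp
    #include <codi.hpp>
    #include <cstdio>

    using Real = codi::RealForward;

    // Toy function f(x) = x^2 * sin(x).
    Real f(Real x) { return x * x * sin(x); }

    int main() {
        Real x = 2.0;
        x.setGradient(1.0);  // seed the tangent direction dx = 1
        Real y = f(x);       // value and derivative propagate together
        std::printf("f(x)  = %f\n", y.getValue());
        std::printf("df/dx = %f\n", y.getGradient());
    }

In large simulation codes the same pattern is typically applied by making the floating-point type generic, so that the code compiles with either plain double or an AD type.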

As HPC systems scale in size, parallel programming models become critical to maintain performance. We offer guidance and training in models such as MPI, GASPI, GPI-2, and task parallelism, along with consultancy for integrating these into existing codes. We also assist users with end-to-end HPC workflow setup, including advanced topics like differentiation workflows and parallel visualization.
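
To make the message-passing model concrete, the sketch below sends a single value from MPI rank 0 to rank 1; it is purely illustrative and not taken from a user code:

    // send_recv.cpp: minimal MPI point-to-point sketch (illustrative only).
    // Build and run, e.g.: mpicxx send_recv.cpp && mpirun -np 2 ./a.out
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size >= 2) {
            if (rank == 0) {
                double value = 3.14;  // payload to send
                MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                double value = 0.0;
                MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                std::printf("rank 1 received %f\n", value);
            }
        }

        MPI_Finalize();
    }

GASPI and GPI-2 follow a one-sided communication model instead, but the basic workflow of initializing a runtime and addressing remote ranks is similar.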