Accelerating HPC Workloads with FPGA Reconfigurable Hardware

Accelerator devices, notably GPUs, are now the workhorse for floating-point-intensive applications such as machine learning and large computational simulations. These devices are also energy efficient, which is critical for the upcoming exascale era of computation. Reconfigurable hardware such as Field Programmable Gate Arrays (FPGAs) presents a promising alternative for increased performance per watt. The advancement of high-level synthesis techniques combined with well-known parallel programming APIs (e.g., OpenCL) now allows HPC software developers to express program logic, parallelism, and data dependencies using familiar paradigms, as opposed to the traditional Hardware Description Language (e.g., Verilog, VHDL) approach typically better suited to hardware designers. The goal of this minisymposium is to gather researchers and developers to discuss their experiences with application development on FPGA devices. It will review the current state of the art and development trends in this domain by engaging with experts in the field, facilitating knowledge transfer and possible collaborations. Speakers will be asked to describe their experience, the techniques they implemented, and any new developments, covering a wide range of perspectives, including those of domain scientists and technical engineers.
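
To make the programming model concrete, the sketch below writes a trivial vector-add kernel in OpenCL C and launches it from Python via pyopencl. This is a hedged, illustrative example only: on an FPGA the same kernel source would typically be compiled offline by a vendor high-level synthesis toolchain into a bitstream rather than built just-in-time as shown here.

```python
import numpy as np
import pyopencl as cl

# Host-side setup: pick any available OpenCL platform/device and create a queue.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel expresses the data-parallel logic; an FPGA HLS flow would compile
# equivalent OpenCL C offline into a hardware pipeline instead of JIT-building it.
kernel_src = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""
prg = cl.Program(ctx, kernel_src).build()
prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
print("max error:", np.max(np.abs(result - (a + b))))
```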

Organizer(s): Alfio Lazzaro (HPE Switzerland), Timothy Dykes (HPE HPC/AI EMEA Research Lab), and Utz-Uwe Haus (HPE HPC/AI EMEA Research Lab)

Domain: Computer Science and Applied Mathematics


Advances in Methods and Applications in Computational Chemistry and Materials Science

The COVID-19 pandemic put a pause on scientific meetings and the exchange of scientific ideas. This minisymposium is a timely opportunity to catch up with current developments in method and software development related to computational chemistry and materials science, as well as the novel applications to which they are being applied. The minisymposium will highlight opportunities for scientific discoveries made possible by the emerging synergies between recent advances in high-performance computing software and hardware, method development, big data science, and artificial intelligence/machine learning for computational chemistry and materials science applications. The workshop will feature three speakers and a discussion panel composed of the speakers and organizers.

Organizer(s): Rika Kobayashi (Australian National University), Stephan Irle (Oak Ridge National Laboratory), and Bryan Wong (University of California Riverside)

Domain: Chemistry and Materials


Advancing Scientific Computing Across the Globe Through DEI: Successes and Challenges in Normalizing Inclusion

This minisymposium will examine how diversity, equity and inclusion (DEI) in computational science are defined and manifested across the globe, along with related topics. DEI takes on different forms in various contexts and settings across the world, and the related issues and challenges may have common as well as differing features. The goal of this minisymposium is to gather a variety of perspectives and to identify commonalities and differences, so that patterns can be recognized and approaches and lessons learned can be shared. Three major topics will be addressed: 1) how DEI is defined or manifested in your region, 2) what successful strategies have been implemented to address diversity, and 3) what challenges exist in promoting DEI.

Organizer(s): Mary Ann Leung (Sustainable Horizons Institute), and Michelle Barker (Research Software Alliance)

Domain: Humanities and Social Sciences


Architecture and Performance of Hardware-Independent Frameworks for Particle-In-Cell and Grid-Based Methods

Heterogeneous computing architectures are unavoidable as we move towards the era of exascale computing. Computing nodes are being built with ever-increasing depths of hierarchy, and hardware as well as performance portability are key capabilities for making efficient use of them. Particle-in-cell (PIC) is the method of choice for computational simulations of many physical applications including, but not limited to, particle accelerators, nuclear fusion, and astrophysics. Closely related to PIC schemes are semi-Lagrangian methods, which offer a grid-based alternative for many plasma physics applications. Hence, portability in the context of PIC and semi-Lagrangian schemes is of utmost importance for carrying out extreme-scale simulations on current and next-generation architectures. This minisymposium will serve as a platform to discuss the architecture and performance of hardware-independent, and thus portable, frameworks for particle and grid computations.
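
For readers unfamiliar with the method, the following minimal sketch (plain NumPy, purely illustrative and far from a portable production framework) shows the basic PIC cycle on a 1D periodic electrostatic plasma: deposit charge on a grid, solve for the field, gather it back to the particles, and push them. All parameter values are placeholders.

```python
import numpy as np

# Parameters of a tiny 1D periodic electrostatic plasma (illustrative values only).
ng, n_part, length, dt = 64, 10_000, 2.0 * np.pi, 0.1
dx = length / ng
q_over_m = -1.0

rng = np.random.default_rng(0)
x = rng.uniform(0.0, length, n_part)      # particle positions
v = rng.normal(0.0, 1.0, n_part)          # particle velocities

def pic_step(x, v):
    # 1) Charge deposition (nearest grid point for brevity; CIC is more common),
    #    with a uniform neutralising background so the mean charge is ~0.
    idx = (x / dx).astype(int) % ng
    rho = np.bincount(idx, minlength=ng) * (ng / n_part) - 1.0
    # 2) Field solve on the grid: dE/dx = rho via FFT on the periodic domain.
    k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_hat = np.fft.fft(rho)
    e_hat = np.zeros_like(rho_hat)
    e_hat[1:] = rho_hat[1:] / (1j * k[1:])
    e_grid = np.real(np.fft.ifft(e_hat))
    # 3) Gather the field at particle locations and push (leapfrog-style update).
    v_new = v + dt * q_over_m * e_grid[idx]
    x_new = (x + dt * v_new) % length
    return x_new, v_new

x, v = pic_step(x, v)
print("mean kinetic energy after one step:", 0.5 * np.mean(v**2))
```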

Organizer(s): Matthias Frey (University of St Andrews), Katharina Kormann (Uppsala University, Max Planck Institute for Plasma Physics), Sriramkrishnan Muralikrishnan (Paul Scherrer Institut), and Andreas Adelmann (Paul Scherrer Institute, ETH Zurich)

Domain: Computer Science and Applied Mathematics


Cell-Based Tissue Modeling

This minisymposium focuses on state-of-the-art computational approaches and their validation to explore the growth, dynamics and organization of biological tissues at various scales, from organs to the subcellular level. Recent advances in imaging technologies, algorithms, and computer hardware enable more realistic models of cell and tissue dynamics with subcellular features. By exploiting concepts from high-performance computing and advanced algorithms, computer simulations with high spatial and temporal resolution are becoming accessible. Coupling biomechanical models with chemical signaling or intra- or extracellular fluids and components further enhances the complexity of tissue processes that can be accurately simulated. Collective phenomena involving dozens or hundreds of cells need to be simulated within useful simulation times. In combination with theory development, computational approaches that enable this provide novel insights into the biomechanical principles of cell aggregation and tissue growth and structure, with important implications for our understanding of biological development, morphogenesis and diseases. In this minisymposium, recent advances in numerical high-performance simulations of cellular systems and their experimental validation are discussed.

Organizer(s): Dagmar Iber (ETH Zurich, Swiss Institute of Bioinformatics), and Roman Vetter (ETH Zurich, Swiss Institute of Bioinformatics)

Domain: Life Sciences


Challenges in Fine-Grained Task Scheduling on Exascale Heterogeneous Architectures: A Molecular Dynamics Perspective

Scientific and technical computing has been facing challenges posed by hardware trends characterized by increasing hardware parallelism, cost of data movement, specialization, and heterogeneity. To fully utilize the potential of upcoming hardware, efficient task parallelization frameworks are required. Successful frameworks need to be capable of efficiently expressing fine-grained concurrency, scheduling microsecond-granularity tasks and data movement, and integrating tightly with communication libraries to allow critical-path optimizations in a fully asynchronous schedule. Most current tasking frameworks have either struggled to provide low overhead or have mainly targeted accelerators, and integration with communication standards like MPI is still an active research area. Molecular dynamics (MD) has been facing these challenges for a decade, owing to the typically fixed problem size that motivates strong scaling and the very high, sub-millisecond iteration rates. Hence, the experience gained in this field can benefit the broader community, which is likely to face similar challenges on future exascale architectures. In this session, experts in runtimes and developers of the flagship MD codes GROMACS and NAMD will provide a cross-disciplinary view of the state of the art in tasking frameworks and APIs, as well as the algorithmic and implementation advances made in tackling fine-grained task scheduling challenges while strong-scaling MD on heterogeneous architectures.

Organizer(s): Szilárd Páll (KTH Royal Institute of Technology), and Berk Hess (KTH Royal Institute of Technology)

Domain: Chemistry and Materials


The Climate in Climate Economics

Anthropogenic climate change and the associated economic damages constitute a substantial negative externality. Economic policy can mitigate this externality and potentially even lead to significant welfare gains across all economic agents. In order to determine optimal mitigation strategies, economists need to develop quantitative models that produce a realistic link between CO2 emissions and global warming and that are informed by research in climate science as presented in the Intergovernmental Panel on Climate Change (IPCC) reports, that is, the "state of the art" in climate science. One fundamental challenge is that the computational costs of Earth system models are so significant that they are not suitable for studying the two-way feedback between the Earth system and human behavior. Therefore, economic models focusing on this feedback have to rely on a much-simplified representation of the Earth system component. To this end, this minisymposium aims to bring together leading scientists from computational economics, machine learning, and climate science who work at the intersection of the two fields and couple them in a sound and tractable way.
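
As a hedged illustration of what such a much-simplified Earth system component can look like inside an economic model, the toy sketch below steps a one-box carbon stock and a linear temperature response forward in time. All parameter values and function names are placeholders for illustration, not calibrated values from any specific integrated assessment model.

```python
# Toy stand-in for the simplified Earth system component of an economic model:
# a single atmospheric carbon box relaxing towards a pre-industrial level, and
# warming roughly proportional to cumulative emissions (TCRE-like).
def simple_climate(annual_emissions_gtc, co2_0=850.0, temp_0=1.1,
                   co2_preind=590.0, uptake_rate=0.005, tcre_per_gtc=0.0018):
    co2, temp = co2_0, temp_0                      # carbon stock (GtC), warming (K)
    trajectory = []
    for e in annual_emissions_gtc:                 # one entry per year
        co2 += e - uptake_rate * (co2 - co2_preind)
        temp += tcre_per_gtc * e
        trajectory.append((co2, temp))
    return trajectory

# State after five years of constant 10 GtC/yr emissions in this toy model:
print(simple_climate([10.0] * 5)[-1])
```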

Organizer(s): Simon Scheidegger (University of Lausanne), and Doris Folini (ETH Zurich)

Domain: Humanities and Social Sciences


Communicating High Performance Computing for Democratizing Science

More than ever before, high-performance computing (HPC) is at the forefront of grand scientific challenges that significantly impact public life, ranging from the virulence of the SARS-CoV-2 pandemic to our understanding of the future climate with ever-changing weather patterns. Meanwhile, the massive infrastructure projects and sheer expense of high-performance computing systems limit the public's access to, and understanding of, the methodology, and limit the inclusion of global scientists and perspectives. Political and social theorists have long recognized the relationship of inclusion, communication, and democratization with public adoption, uptake, and trust. This minisymposium reviews recent efforts of HPC scientists to disseminate their work through public media and public policy. It will also discuss broader implications for HPC, including how simulation interfaces with public reason and civil society and what a model of global inclusion might look like for HPC.

Organizer(s): Austin Clyde (University of Chicago, Harvard University), Arvind Ramanathan (Argonne National Laboratory, University of Chicago), and Ian Foster (Argonne National Laboratory, University of Chicago)

Domain: Humanities and Social Sciences


Computation Powered by Machine Learning in Material Science

Machine learning (ML) methods have been widely adopted in material science in recent years. While these approaches have been used for years in engineering and science in general, their widespread application in computational materials science is relatively new [1]. For modeling computationally heavy quantum-chemistry calculations, two major approaches can be distinguished. In the first, one tries to replace certain parts of already established frameworks with ML models, e.g., the parameterization of molecular force fields [2] or the functionals in density-functional theory [3]. The second approach tries to create a surrogate model that predicts materials properties given only fingerprints as input (a minimal sketch of this idea follows the references below). Recent efforts also focus on the prospects of creating "new" materials from generative models or directly feeding the structural graph to a neural-network approximator [4]. This minisymposium aims to give an overview of some of the most relevant developments concerning ML methods in material science.
References:
[1] Rogers, D.; Hahn, M.; J. Chem. Inf. Model. 2010, 50, 742−754.
[2] Behler, J.; Parrinello, M.; Phys. Rev. Lett. 2007, 98, No. 146401.
[3] Rupp, M.; et al.; Phys. Rev. Lett. 2012, 108, No. 058301.
[4] Xie, T.; Grossman, J. C.; Phys. Rev. Lett. 2018, 120, No. 145301.
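
A hedged, minimal illustration of the surrogate-model approach mentioned above: given fixed-length fingerprints as input, a kernel ridge regression model (in the spirit of [3]) is fitted to predict a target property. The data here are random placeholders; real applications would use physically meaningful descriptors and reference calculations.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

# Placeholder data: each row stands in for a fixed-length fingerprint of a
# structure; y stands in for a computed property (e.g., an atomization energy).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))
y = rng.normal(size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit the surrogate and check its error on held-out structures.
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1e-2)
model.fit(X_train, y_train)
mae = np.mean(np.abs(model.predict(X_test) - y_test))
print("held-out MAE:", mae)
```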

Organizer(s): Alessio Gagliardi (TU Munich)

Domain: Chemistry and Materials


The Current State of High-Performance Cellular Flow Simulations: Can We Define a Consistent Vision?

With the development of large-scale computing platforms, cellular flows can now be simulated while taking into account the deformations and interactions of individual cells. The continuous increase in computational capacity has enabled high-fidelity flow simulations employing detailed cell models to become a valuable complement to experimental studies, giving access to more detail and aiding the design of microfluidic setups. Some of these developments happen in competition, and some individually, around a diverse set of scientific questions. However, the functional overlap, the limits, and the differences of the proposed methods are not necessarily clear. This minisymposium aims to benefit our community by setting the following goals: 1) to connect prominent members of the bleeding-edge research community in cellular flow simulations and display the state of the art of these simulations; and 2) to provide a platform for an open, directed forum discussion after the presentations to deliberate on current limitations, necessary validations, overlaps and unique features in functionality, and upcoming targets in the development of these codes.

Organizer(s): Gabor Zavodszky (University of Amsterdam), and Igor Pivkin (Università della Svizzera italiana)

Domain: Life Sciences


Domain Specific Languages (DSLs) for Revolutionising HPC Code Development: A Panacea or Empty Promises?

Programming efficient HPC code is difficult and remains the domain of a few experts. These challenges look set to become more severe in the exascale era, as hardware heterogeneity and the scale of parallelism increase significantly. Traditionally, HPC application developers have used sequential languages, such as C or Fortran, combined with MPI, OpenMP, or CUDA to determine all aspects of parallelism in their code. But specifying such low-level and tricky details is time consuming and does not scale to future exascale machines or scientific ambition. There is, however, another way: Domain Specific Languages (DSLs). Often embedded into existing languages, such as Python, these provide abstractions which can then be used as a basis for writing an HPC application. By working within the confines of these abstractions, which are specific to a domain, there is a rich amount of information from which the compiler can determine key, but complex, optimisation details. However, DSLs are yet to gain ubiquity in HPC, and in this minisymposium we bring together those interested in application development, compilers, and DSL design to discuss how we can cooperate and address the challenges that are limiting the role of DSLs in HPC codes.
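
As a toy illustration of what "embedding" a DSL in Python means, the sketch below lets a user write an array-update expression against symbolic fields; instead of computing anything, the framework records an expression tree that a compiler pass could later analyse, fuse, and lower to efficient parallel code. All class and variable names are invented for illustration and do not correspond to any particular DSL framework.

```python
# Toy embedded DSL: expressions over symbolic fields are recorded as a small
# expression tree (an intermediate representation) instead of being evaluated,
# so a compiler pass could later fuse, parallelise, or retarget them.
class Expr:
    def __add__(self, other): return BinOp("+", self, to_expr(other))
    def __radd__(self, other): return BinOp("+", to_expr(other), self)
    def __mul__(self, other): return BinOp("*", self, to_expr(other))
    def __rmul__(self, other): return BinOp("*", to_expr(other), self)

class Field(Expr):
    def __init__(self, name): self.name = name
    def __repr__(self): return self.name

class Const(Expr):
    def __init__(self, value): self.value = value
    def __repr__(self): return str(self.value)

class BinOp(Expr):
    def __init__(self, op, lhs, rhs): self.op, self.lhs, self.rhs = op, lhs, rhs
    def __repr__(self): return f"({self.lhs} {self.op} {self.rhs})"

def to_expr(x):
    return x if isinstance(x, Expr) else Const(x)

# The domain scientist writes the "what"; nothing is computed here, only an IR is built.
u, v = Field("u"), Field("v")
update = u + 0.5 * v
print(update)   # (u + (0.5 * v))
```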

Organizer(s): Nick Brown (EPCC), Tobias Grosser (School of Informatics, University of Edinburgh), and Paul Kelly (Imperial College London)

Domain: Computer Science and Applied Mathematics


Edge-Computing towards Sustainability, Resilience, and Decarbonization: What Are We Overlooking?

Data-driven computational approaches that bring computing and storage nearer to where data is requested, especially those operating on real-time sensor data, have contributed immensely to our state of knowledge of the impacts of climate change on our built environment and human systems. A holistic understanding of these issues demands a cross-disciplinary approach that is rigorous, based on data, and embedded in the domain. This minisymposium focuses on bringing together experts and researchers in edge computing with knowledge of the driving issues, for a discussion of the existing challenges and the advances needed to drive edge-computing deployments towards well-understood goals of sustainability, resilience, and decarbonization. The international panel of invited speakers will cover a wide range of topics, from a data- and computation-centric perspective on IoT, streaming data, and real-time analytics, to the traceability of data sources for deep decarbonization and their impact on policy, energy, and economics. Issues of data scarcity and of inequity in the Global South will be discussed. We expect to round off the discussion with a forward-looking view of integrated communities of the future as highly controllable entities in which the built environment (buildings, vehicles, energy generation, and the grid) collectively maximizes the users' wellbeing.

Organizer(s): Jibonananda Sanyal (Oak Ridge National Laboratory), Juliette Ugirumurera (National Renewable Energy Laboratory), and Ronita Bardhan (University of Cambridge)

Domain: Computer Science and Applied Mathematics


Exascale Computing, Artificial Intelligence, and Data: Opportunities and Challenges for Weather, Climate, Ocean, and Environmental Prediction

Exascale computing, artificial intelligence (AI), and data science have the potential to significantly improve predictive capabilities and services across the weather, climate, ocean, and broader environment domains; however, successful application of these technologies requires overcoming many challenges. The World Meteorological Organization (WMO) Research Board commissioned the Task Team on Exascale, AI, and Data to produce whitepapers providing a synthesis of the current state of activity in these areas, as well as recommendations to guide progress and initiate coordinated activity in the WMO community to enable effective uptake. An overview of the whitepapers and how they fit into the WMO's long-term strategic plan will be presented. Subsequent talks will provide a deep dive into the whitepapers on exascale computing and data production, and on AI and data analysis. Focus will be placed on challenges that must be overcome to realize the benefits of these new technologies, as well as critical gaps between regions with and without access to significant computing resources, and ways to address these gaps. Concrete short-term actions recommended in the whitepapers will be presented. A discussion panel will feature the whitepapers' authors, representing international weather, climate, and supercomputing centers, providing an opportunity for feedback and collaboration with the broader scientific computing community.

Organizer(s): Kris Rowe (Argonne National Laboratory), Veronique Bouchet (Environment and Climate Change Canada), and Wenchao Cao (World Meteorological Organization)

Domain: Climate, Weather and Earth Sciences


High Performance Computing in Kinetic Simulations of Plasmas - Part I: HPC Opportunities

Recent advances in cutting-edge physics applications of kinetic theory in plasmas will be presented. Plasmas are subject to a multitude of collective effects, including electromagnetic global modes, instabilities, and turbulence, which span several orders of magnitude in space and time scales. Various dynamically reduced approaches, such as gyrokinetics for magnetized plasmas or hybrid fluid-kinetic models, have been developed to circumvent some of these difficulties, but even with these reductions the problem remains far from trivial. Thanks to progress in numerical methods, algorithmic development, and massive parallelization schemes enabling the efficient use of the most powerful HPC platforms, numerical simulations have gained increasing levels of realism, making possible scientific application studies that were previously out of reach. This session is the first of three minisymposia on simulations of plasmas and is dedicated more specifically to issues related to the efficient implementation of the codes on exascale architectures. Three talks will be devoted to different Particle-In-Cell codes targeting applications in magnetic fusion and plasma accelerators. The final talk will present semi-Lagrangian simulations on a six-dimensional grid.

Organizer(s): Eric Sonnendrücker (Max Planck Institute for Plasma Physics, TU Munich), Axel Huebl (Lawrence Berkeley National Laboratory), and Laurent Villard (EPFL)

Domain: Physics


High Performance Computing in Kinetic Simulations of Plasmas - Part II: Physics Applications

Recent advances in cutting-edge physics applications of kinetic theory in magnetized plasmas will be presented. Plasmas are subject to a multitude of collective effects, including electromagnetic global modes, instabilities, and turbulence, which span several orders of magnitude in space and time scales. Various dynamically reduced approaches, such as gyrokinetics or hybrid fluid-kinetic models, have been developed to circumvent some of these difficulties, but even with these reductions the problem remains far from trivial. Thanks to progress in numerical methods, algorithmic development, and massive parallelization schemes enabling the efficient use of the most powerful HPC platforms, numerical simulations have gained increasing levels of realism, making possible scientific application studies that were previously out of reach. The minisymposium will address four aspects in particular, all exploring frontiers of the domain: (1) the multiscale nature of turbulence; (2) edge and SOL gyrokinetic turbulence; (3) electromagnetic gyrokinetic turbulence and its interaction with global modes; and (4) effects of suprathermal particles on MHD-like modes.

Organizer(s): Laurent Villard (EPFL), Stephan Brunner (EPFL), and Eric Sonnendrücker (Max Planck Institute for Plasma Physics)

Domain: Physics


High Performance Computing in Kinetic Simulations of Plasmas - Part III: Advanced Numerical Methods and Algorithms

Recent advances in cutting-edge physics applications of kinetic theory in magnetized plasmas will be presented. Plasmas are subject to a multitude of collective effects, including electromagnetic global modes, instabilities, and turbulence, which span several orders of magnitude in space and time scales. Various dynamically reduced approaches, such as gyrokinetics or hybrid fluid-kinetic models, have been developed to circumvent some of these difficulties, but even with these reductions the problem remains extremely challenging. Thanks to progress in numerical methods, algorithmic development, and massive parallelization schemes enabling the efficient use of the most powerful HPC platforms, numerical simulations have achieved increasing levels of realism, making possible scientific application studies that were previously out of reach. This session is the third and last part of this minisymposium series on simulations of plasmas and is dedicated to the ongoing effort to develop innovative numerical methods for carrying out efficient, flexible, and robust kinetic simulations. The four presentations will focus in particular on recent efforts to develop alternative Eulerian-based schemes for gyrokinetic simulations of the edge region of tokamak plasmas. Among other topics, the talks will address discontinuous Galerkin and high-order finite volume discretizations, block-structured grids, and polynomial basis representations.

Organizer(s): Stephan Brunner (EPFL), Eric Sonnendrücker (Max Planck Institute for Plasma Physics, TU Munich), and Laurent Villard (EPFL)

Domain: Physics


High-Performance Machine Learning: Scale and Performance

With the growing range of machine learning approaches and the availability of large-scale data, performant and scalable algorithmic techniques have become central topics. Nowadays, high-performance computing (HPC) technologies have become essential for modern large-scale machine learning applications. The convergence of HPC and machine learning has shown compelling success across many application fields. With that said, real-world applications are rife with inherent challenges associated with high dimensionality, scarcity of data, and ill-posedness, among many others. This minisymposium will serve as a platform to discuss cutting-edge developments for modern, scalable, and efficient machine learning approaches.

Organizer(s): Aryan Eftekhari (University of Lausanne, Università della Svizzera italiana), and Olaf Schenk (Università della Svizzera italiana)

Domain: Computer Science and Applied Mathematics


HPC in Reduced Order Modelling for Advanced Mechanics Simulations and Digital Twinning

Computational mechanics simulations offer a valuable analysis tool for engineering applications ranging from design and conceptualization all the way to the operation and maintenance of engineered systems. Such simulations often involve very detailed High-Fidelity Models (HFM) accounting for complex physics, such as failure, nonlinearities and time-dependence, as well as multi-physics interactions. The resulting computational complexity can be prohibitive for several applications. This is particularly true for Digital Twinning applications, where the digital representation is often coupled with monitoring data derived from the physical counterpart of the Twin. In this case, models have to be evaluated repeatedly to reliably update the digital representation, in certain cases even in real time. Reduced Order Modelling (ROM) techniques make it possible to substantially reduce the computational cost involved while preserving precision. These involve an offline stage, where a ROM is constructed based on information from the HFM, and an online stage, where the ROM is deployed. The exploitation of HPC infrastructure is indispensable in both the offline and online phases, for ensuring robust training and real-time estimation capabilities, respectively. In this minisymposium, recent advances in ROM methods, algorithms, and application challenges will be discussed, with a particular focus on the exploitation of HPC capabilities and the assimilation of data.
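
The offline/online split mentioned above can be illustrated with a minimal, hedged projection-based ROM sketch: snapshots of a (here synthetic) high-fidelity model are compressed into a low-dimensional basis offline via a truncated SVD (POD), and states are then represented by a handful of coefficients online. Matrix sizes and data are placeholders.

```python
import numpy as np

# Offline stage: compress synthetic "high-fidelity" snapshots into a small POD basis.
rng = np.random.default_rng(1)
n_dof, n_snap, r = 10_000, 200, 10
snapshots = rng.normal(size=(n_dof, n_snap))       # placeholder HFM snapshot matrix

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :r]                                       # reduced basis, n_dof x r

# Online stage: any full-order state is represented by r coefficients; full-order
# operators would likewise be projected once to small r x r reduced operators.
x_full = snapshots[:, 0]
x_reduced = V.T @ x_full                           # r numbers instead of n_dof
x_approx = V @ x_reduced                           # cheap reconstruction
err = np.linalg.norm(x_full - x_approx) / np.linalg.norm(x_full)
print("relative reconstruction error:", err)
```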

Organizer(s): Eleni Chatzi (ETH Zurich), Paolo Tiso (ETH Zurich), and Konstantinos Agathos (University of Exeter)

Domain: Engineering


HPC Workflow Automation and Management

Computational science projects today consist of numerous steps and methods, involve multiple software projects, and may run on a variety of platforms, from classical HPC systems to cloud-based services. This makes the workflow itself an integral part of the result, and workflow managers one of the main tools computational scientists will have to use in their work. By using proper workflow tools, scientific results can more easily be shared and reproduced, while keeping the process used to obtain them flexible and transferable. Together with our speakers, we look at the current state of software packages for workflow orchestration and the building blocks that make such automation possible across supercomputers. We give concrete examples demonstrating how to integrate machine learning into scientific workflows in a domain-agnostic way, and show how smart data management in a workflow allows for asynchronous data analysis to improve time to solution.
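
In essence, a workflow manager executes a directed acyclic graph of tasks and tracks the artifacts flowing between them. The minimal sketch below (standard-library Python only, with invented task names) shows the dependency-ordered execution that real orchestration tools wrap with provenance tracking, retries, scheduling on HPC or cloud back ends, and data staging.

```python
from graphlib import TopologicalSorter   # Python 3.9+ standard library

# Invented task names; real workflow managers add provenance, retries,
# data staging, and execution on HPC or cloud back ends.
def preprocess():   print("preprocess raw data")
def simulate():     print("run the simulation")
def train_model():  print("train an ML surrogate on the simulation output")
def analyze():      print("analyse and plot the results")

# Each task maps to the set of tasks it depends on.
dag = {
    simulate: {preprocess},
    train_model: {simulate},
    analyze: {simulate, train_model},
}

# Execute in a valid dependency order (a manager could also run independent tasks in parallel).
for task in TopologicalSorter(dag).static_order():
    task()
```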

Organizer(s): Tiziano Müller (HPE), Nina Mujkanović (HPE), and Alfio Lazzaro (HPE)

Domain: Computer Science and Applied Mathematics


ICON-Next: Current Developments in Rewriting the ICOsahedral Non-hydrostatic (ICON) Model for Emerging Architectures

The next step in climate modeling is so-called "storm-resolving" Earth system models (SR-ESMs), which operate at the kilometer scale. SR-ESMs make it possible for the first time to directly simulate small-scale processes such as the formation of thunderstorms. Emerging exascale high-performance computers have the necessary computing power to carry out such high-resolution simulations for longer simulation times. These mainly heterogeneous hardware architectures pose challenges for all scientific software development efforts, and the ICON weather and climate model is no exception. Several initiatives address these technical challenges in order to exploit the possibilities of km-scale ICON simulations, for example to improve forecast skill or to produce climate information of the highest possible quality. This minisymposium will present the research directions of four of these initiatives: ICON-22, WarmWorld, EXCLAIM, and ICON-C. While the first three are independent projects, they are all tied together through the common ICON model and the need to enable scalable development while ensuring performance portability. These three projects will also contribute to the fourth: the ICON-Consolidated effort will pull together various developments into a consolidated ICON software package. At PASC22, first results of each project will be presented along with their synergies towards the overarching goal of storm-resolving simulations.

Organizer(s): William Sawyer (ETH Zurich / CSCS), and Xavier Lapillonne (MeteoSwiss)

Domain: Climate, Weather and Earth Sciences


Interdisciplinary Challenges towards Exascale Fluid Dynamics

With exascale computing capabilities on the horizon, we have seen a transition to more heterogeneous architectures with various accelerators, such as GPUs. While these systems offer high theoretical peak performance and high memory bandwidth, significant programming investments are necessary to exploit them efficiently, a challenge that can no longer be ignored with pre- and exascale systems like LUMI and Frontier. CFD is a natural driver for exascale computing, with a virtually unbounded need for computational resources for the accurate simulation of turbulent fluid flow, for both academic and engineering usage. However, established CFD codes build on years of verification and validation of their underlying numerical methods, potentially precluding a complete rewrite and rendering disruptive code changes a delicate task. Therefore, porting established codes to accelerators poses several interdisciplinary challenges, from formulating suitable numerical methods to applying sound software engineering practices to cope with disruptive code changes. The wide range of topics makes the exascale CFD transition relevant to a broader audience, extending outside the traditional fluid dynamics community. This minisymposium aims to bring together the CFD community as a whole, from domain scientists to HPC experts, to discuss current and future challenges towards enabling exascale fluid dynamics simulations on anticipated accelerated systems.

Organizer(s): Philipp Schlatter (KTH Royal Institute of Technology), and Niclas Jansson (KTH Royal Institute of Technology)

Domain: Engineering


Julia for HPC

Natural sciences and engineering applications increasingly leverage advanced computational methods to further improve our understanding of complex natural systems, using predictive modelling or data analysis. However, the flow of large amounts of data and the constant increase in spatiotemporal model resolution pose new challenges in scientific software development. In addition, high-performance computing (HPC) resources rely massively on hardware accelerators such as graphics processing units (GPUs) that need to be utilised efficiently, representing a further challenge. Performance portability and scalability, as well as fast development on large-scale heterogeneous hardware, are crucial aspects of scientific software development that can be addressed with the capabilities of the Julia language. The goal of this minisymposium is to bring together scientists who work on, or show interest in, large-scale Julia HPC development, including but not restricted to software ecosystems and portable programming models, GPU computing, multiphysics solvers, and more. The selection of speakers, with expertise spanning from computational to domain science, offers a unique opportunity to learn about the latest developments in Julia for HPC to drive discoveries in Earth system sciences and geodynamics using the next generation of integrated climate models and 3D lithospheric models at unprecedented resolution.

Organizer(s): Ludovic Räss (ETH Zurich), and Boris Kaus (Johannes Gutenberg University Mainz)

Domain: Climate, Weather and Earth Sciences


Lattice QCD in the Exascale Era: Prospects, Challenges and Impact

This minisymposium will explore the physics potential of exascale computing for Lattice Quantum Chromodynamics (QCD) and its impact on our understanding of fundamental particle physics. Novel algorithms and new computing paradigms, in particular machine learning at the level of Monte Carlo integration and in data analysis, will be discussed. To fully and efficiently exploit next-generation computing, lattice QCD requires significant algorithm development and optimisation, including the development of efficient, resilient sparse solvers, redesigned workflows, and data management on highly scalable heterogeneous architectures, as well as the porting and optimisation of codes for accelerators. These challenges are common across computational domains and across the societal grand challenges addressed in the conference theme. Through talks and a panel discussion, the minisymposium will elucidate how scientific computing motivated by open questions in fundamental physics leads to computing solutions at exascale that are relevant and timely for many HPC applications.

Organizer(s): Sinéad Ryan (Trinity College Dublin, Hamilton Mathematics Institute)

Domain: Physics


Leveraging Data Lakes to Manage and Process Scientific Data

In recent years, data lakes have become increasingly popular as central storage, particularly for unstructured data. Generally, data lakes aim to integrate heterogeneous data from diverse sources into a unified information management system in which data is retained in its original format. Storing data in raw format, as opposed to imposing a schema on write as is commonly done in a data warehouse, supports the reuse and sharing of already collected data. The idea is basically to dump the data into the lake and later fish for knowledge using sophisticated analysis tools. This approach, however, is quite challenging, since it has to be ensured that all data, no matter the number or size of the different data sets, can be found and accessed later on. In addition, especially for domain researchers in public research institutions, a research data management solution should not only ensure the preservation of the data but also support and guide scientists in complying with good scientific practice from the very beginning. In order to discuss the current challenges and their possible solutions, and to share personal insights into data lakes, we bring together different experts to discuss the potential and the technical approaches with the scientific community.
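
The "schema on read" idea mentioned above can be illustrated with a small, hedged sketch: heterogeneous records are dumped into the lake in their original form, and a schema is only imposed later, when an analysis reads them back. The file name and field names are purely illustrative.

```python
import json
import pandas as pd

# Heterogeneous records are ingested exactly as produced; no common schema is enforced.
raw_records = [
    {"instrument": "sensor-A", "t": "2022-06-01T12:00:00", "temperature": 21.3},
    {"instrument": "sensor-B", "t": "2022-06-01T12:00:05", "humidity": 0.41},
]
with open("lake_dump.jsonl", "w") as fh:
    for record in raw_records:
        fh.write(json.dumps(record) + "\n")

# Much later, an analysis imposes its own schema when reading the raw records back.
df = pd.read_json("lake_dump.jsonl", lines=True)
print(df[["instrument", "temperature"]])   # fields absent in a record simply become NaN
```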

Organizer(s): Hendrik Nolte (Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen), and Julian Kunkel (Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen)

Domain: Computer Science and Applied Mathematics


Looking into the Earth: Modelling, Inversion and Uncertainty Quantification in Computational Geophysics

The central goal of computational geophysics is to infer physical properties of the Earth's interior that are inaccessible to direct measurement (such as density or wave velocity) from data typically acquired near the surface with appropriate instruments. Inversion techniques are applied to infer the unknown parameters of a mathematical model so that it fits the observed data. The resulting model provides a view of the current physical state and can be used to simulate previous and future states. Such models lead to complicated nonlinear numerical systems, which are usually too large for standard computers. Fortunately, the ever-increasing power of HPC facilities allows continuous growth in model resolution and complexity. Nevertheless, harnessing this power brings new implementation challenges. Despite the success of deterministic inversion methods, the nonuniqueness and nonlinearity of inverse problems can reduce the meaningfulness of the solutions. Bayesian inference represents an attractive tool to overcome these issues and provide uncertainty quantification. However, it brings another level of complexity that needs to be reassessed in light of recent advances in methodology and technology. This three-session minisymposium aims to bring together scientists engaged in theory, numerical methods, algorithms, and scientific software engineering for scalable numerical modelling, inversion, and uncertainty quantification in computational geophysics.
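
To make the inversion idea concrete, the hedged toy example below fits a synthetic linear forward model d = G m + noise with Tikhonov-regularised least squares, a minimal stand-in for the deterministic inversions discussed above (a Bayesian treatment would additionally deliver posterior uncertainties). The operator, model, and noise level are all synthetic placeholders.

```python
import numpy as np

# Synthetic linear inverse problem d = G m + noise (G, m, noise are placeholders).
rng = np.random.default_rng(42)
n_data, n_model = 80, 50
G = rng.normal(size=(n_data, n_model))           # forward operator (e.g., sensitivity kernels)
m_true = np.zeros(n_model)
m_true[10:20] = 1.0                              # the "true" Earth model to recover
d_obs = G @ m_true + 0.05 * rng.normal(size=n_data)

# Tikhonov-regularised least squares: minimise ||G m - d||^2 + lam * ||m||^2.
lam = 1.0
m_est = np.linalg.solve(G.T @ G + lam * np.eye(n_model), G.T @ d_obs)
print("relative model error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```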

Organizer(s): Václav Hapla (ETH Zurich), Andrea Zunino (ETH Zurich), and Joeri Brackenhoff (ETH Zurich)

Domain: Climate, Weather and Earth Sciences


Nexus of AI and HPC for Weather, Climate, and Earth System Modelling

Accurately and reliably predicting weather, climate change, and the associated extreme weather events is critical for planning for disastrous impacts well in advance and for adapting to sea level rise, ecosystem shifts, and food and water security needs. The ever-growing demands of high-resolution weather and climate modelling require exascale systems. Simultaneously, petabytes of weather and climate data are produced from models and observations each year. Artificial Intelligence (AI) offers novel ways to learn predictive models from complex datasets, at scale, that can benefit every step of the workflow in weather and climate modelling, including data assimilation, process emulation, solver acceleration, and ensemble prediction. Further, how do we make the best use of AI to build Earth digital twins for a wide range of applications, from extreme weather to renewable energy, including at highly localized scales such as cities? The next generation of breakthroughs will require a true nexus of high-performance computing (HPC) and large-scale AI. This minisymposium will delve into the challenges and opportunities at the nexus of HPC and AI. Presenters will describe scientific and computing challenges and the development of efficient and scalable AI solutions for weather, climate, and Earth system modeling.

Organizer(s): Karthik Kashinath (NVIDIA Inc., Lawrence Berkeley National Laboratory), Bing Gong (Forschungszentrum Jülich), and Peter Dueben (ECMWF)

Domain: Climate, Weather and Earth Sciences


Numerical Methods for Solving Large Scale Linear Systems on Modern Exascale Computers for Industrial Applications

Numerical simulation is a very common tool in a wide range of applications, from structural mechanics to computational fluid dynamics, underground processes, electromagnetism, and many others. Nowadays, the industrial world is constantly pushing for increasingly large, accurate, and real-time solutions, and modern engineers need efficient tools to address numerical simulations that require the solution of sparse linear systems with millions or even billions of unknowns. A common bottleneck in large simulations is the cost of solving linear systems of equations, which tends to grow superlinearly with the problem size. Thus, the use of High-Performance Computing (HPC) is increasingly necessary, and the development of efficient and scalable sparse linear solvers is an important research topic. From a hardware point of view, the trend in HPC is towards massively parallel computing through smart interconnected nodes equipped with accelerators featuring thousands of simple computing units (cores). The availability of these novel computational platforms motivates both a theoretical renovation of well-established algorithms and the development of brand-new ideas. Algorithms must be designed and optimized with target platforms consisting of multiple nodes with thousands of simple computing units and a memory hierarchy that is much more exposed to the developer's control.
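
As a small, hedged illustration of the kind of solver at stake, the sketch below assembles a sparse symmetric positive-definite system (a 1D Poisson matrix) and solves it with preconditioned conjugate gradients using SciPy. Production-scale work would instead use distributed, GPU-enabled libraries (for example along the lines of PETSc or hypre) and more sophisticated preconditioners such as algebraic multigrid.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, spilu, LinearOperator

# A sparse SPD system: the classic 1D Poisson (tridiagonal) matrix.
n = 100_000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU as a simple preconditioner; large-scale codes would typically
# use algebraic multigrid or domain-decomposition preconditioners instead.
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = cg(A, b, M=M, atol=1e-8)
print("converged" if info == 0 else f"cg stopped with info={info}")
```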

Organizer(s): Carlo Cavazzoni (Leonardo), Giovanni Isotton (M3E), Carlo Janna (M3E), and Andrea Franceschini (University of Padova)

Domain: Computer Science and Applied Mathematics


Portable Solutions for High Energy Physics Workflows on Heterogeneous Architectures

High-energy physics (HEP) experiments have developed millions of lines of code over decades that are optimized to run on traditional x86 CPU systems. However, we are seeing a rapidly increasing fraction of the floating-point computing power in leadership-class computing facilities and traditional data centers coming from new heterogeneous accelerator architectures, such as GPUs. Though the GPU field is currently led by NVIDIA, other manufacturers such as Intel and AMD are making increasing inroads into this territory, each with its own architecture and compiler languages. Many HEP experiments are also using FPGAs for their front-end detector readouts. Rewriting current CPU-based high-energy physics code for multiple accelerator architectures is not a viable scenario, given the available person power and code maintenance issues. Furthermore, as the number of architectures proliferates, it becomes increasingly onerous to validate the code, and it is vital to ensure that workflows on different hardware produce identical results. Developing portable solutions that allow HEP codebases to run on multiple heterogeneous architectures is essential. We explore how major HEP experiments are addressing these issues, as well as a domain-wide investigation comparing the efficacy of the various portability layers such as Kokkos, SYCL, Alpaka, OpenMP, and std::execution::parallel.

Organizer(s): Charles Leggett (Lawrence Berkeley National Laboratory)

Domain: Computer Science and Applied Mathematics


Research Software Science: Applying the Scientific Method to Understand and Improve How We Develop, Maintain, and Use Software for Research

The discipline of scientific software development, maintenance and use has incorporated an increasing diversity of skills. Early products were typically developed by single "heroes" or small teams of domain experts. More recently, teams have increasingly included computer science and mathematics expertise to improve algorithmic choice and implementation, and software engineering expertise to improve software design, use of tools and new workflows. In addition to factors associated with individual software teams, software ecosystems (based on software stacks, disciplines, and/or other community identification) have become more prevalent and visible, with common policies, practices, and priorities across teams and a broad interest in sustaining developer and user communities. For community open source projects, community engagement expertise is also needed. This responsibility can be an explicit team role for large projects, but for smaller projects is more often an additional role that one or more developers take on. In this minisymposium we explore both possible and demonstrated successes resulting from introducing expertise in social and cognitive sciences, organizational psychology, community development and policies, and other human factors fields.

Organizer(s): Michael Heroux (Sandia National Laboratories, St. John University), and Daniel Katz (University of Illinois Urbana-Champaign)

Domain: Humanities and Social Sciences


The Rise of Low Dimensionality Materials: Opportunities and Challenges from Cutting-Edge Computational Investigations

Since the pivotal work of Geim and Novoselov highlighting the outstanding properties of graphene nanosheets, there has been increasing excitement about structurally confined two-dimensional (2D) materials. These systems have been proven to possess distinctive physical characteristics compared to their parent 3D analogues, making them very appealing for many technological applications. Still, the symmetry breaking associated with the confinement of the material in two dimensions results in 1) an immense structural and compositional flexibility, whose smart and rational exploration represents an important challenge; and 2) the breakdown of conventional theoretical models for the description of their properties, and hence the need for advanced approaches. Computer simulations can have a unique impact in facing these and many other challenges, thanks to continuous efforts in translating advanced physical models into effective computer codes, together with the availability of cutting-edge computing resources. This minisymposium will focus on the use of computational tools for scientific studies related to the structures and properties of 2D materials. These include the use of artificial intelligence and high-throughput strategies for their discovery or design, and large-scale parallel computing for cutting-edge atomistic simulations.

Organizer(s): Claudio Quarti (University of Mons), and Silvio Osella (University of Warsaw)

Domain: Chemistry and Materials


"Share without Sharing" - Designing an Integrated Approach for Health Data, Distributed Computing, Privacy and Advanced Analytics

With the modern-day explosion of health data, we are witnessing a fast-rising number of disconnected solutions, data silos, and varying opinions on how best to integrate the oceans of available information. There are evolving discussions on what is considered acceptable regarding how these data should be used. Even if the discussions are productive, it is still questionable whether we have practical and robust technologies to enable those ambitions. Are technology developers keeping track of what society will accept? What are the real obstacles that prevent distributed computing from becoming a routine way of analysing health data? How can practical interdisciplinary ideas be effectively created and tested? In this minisymposium we tackle the basics and promises of federated learning, as well as the complexities of this quickly growing field and the challenges it faces. The presentations will describe current data sharing systems and federated learning approaches, how these approaches are becoming integrated into hospital operations, as well as the legal and value considerations to be aware of. The talks will be followed by interactive discussions and an exchange of ideas with the audience.
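
For readers new to the topic, the hedged toy sketch below shows the core of federated averaging: each site fits a model on its own data, and only model parameters, never raw records, are shared and combined. The linear "model" and synthetic data are placeholders; real deployments add secure aggregation, differential privacy, and iterative training rounds.

```python
import numpy as np

rng = np.random.default_rng(7)

def local_fit(X, y):
    # Ordinary least squares on one site's private data; only the weights leave the site.
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Three hospitals with synthetic private datasets drawn from the same underlying model.
w_true = np.array([1.0, -2.0, 0.5])
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ w_true + 0.1 * rng.normal(size=200)
    sites.append((X, y))

# Federated averaging: aggregate locally fitted parameters, weighted by local data size.
local_weights = [local_fit(X, y) for X, y in sites]
sizes = np.array([len(y) for _, y in sites], dtype=float)
w_global = np.average(local_weights, axis=0, weights=sizes)
print("aggregated model:", w_global)
```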

Organizer(s): Leila Tamara Alexander (Swiss Institute of Bioinformatics, University of Basel), Cristina Golfieri (University of Basel), and Torsten Schwede (University of Basel, Swiss Institute of Bioinformatics)

Domain: Life Sciences


Software and Data Sustainability in Computational Science and Engineering

The role of research software in scientific computing has grown tremendously over the last few decades. In addition to traditional research areas, data-intensive science has seen a rapid rise in prominence. This has been accompanied by disruptive changes in computer architectures and increasing complexity in the system software stack. At the same time, software development teams are growing in size and disciplinary diversity. While we focus on the scientific discoveries enabled by modern computational and data science, we also understand that the science is only as credible as the software that underlies it. The changing hardware environment, the growing demand for solutions from modeling, simulation, and data analytics, the growing emphasis on transparency and reproducibility, as well as the social challenges of larger, more diverse teams needing to collaborate effectively, pose serious challenges for the scientific computing community. This minisymposium will explore the current state and future prospects for the sustainability of software and data, spanning multiple scientific areas and computational and human scales -- from individuals and small teams to broader communities. We hope to bring a variety of experience and perspectives to understand the challenges being faced and to explore insights that can be spread to other areas of computational and data science.

Organizer(s): Rinku Gupta (Argonne National Laboratory), Carlos Martinez Ortiz (Netherlands eScience Center), and David Bernholdt (Oak Ridge National Laboratory)

Domain: Computer Science and Applied Mathematics


Storage Systems at Extreme Scale for Data-Centric Workflows

In the past few years, applications in scientific domains such as weather forecasting have evolved into complex data-centric workflows. This evolution has resulted in a data deluge, observed in large computing centers such as the National Energy Research Scientific Computing Center (NERSC), where the volume of data stored increased by a factor of 40 over the last ten years. However, the gap between computing and I/O capabilities continues to grow: the ratio of I/O bandwidth to computing power has decreased by an order of magnitude over the last ten years for the top three supercomputers listed in the Top500. Faced with this critical situation, storage systems must adapt. In this minisymposium, we will examine the needs of scientific workflows in terms of massive data management through a concrete example. Then, we will explore the solutions proposed by the computer science research community, both from an infrastructure and an execution software stack perspective. We will see how these lines of research can improve data orchestration in scientific workflows where storage is central, and identify which research topics remain to be further developed.

Organizer(s): François Tessier (INRIA)

Domain: Computer Science and Applied Mathematics


Swiss Chapter Women in HPC

WHPC is the only international organization working to improve equity, diversity, and inclusion in High Performance Computing. The Swiss chapter of WHPC is being formed by senior professionals, scientists, and engineers working in Switzerland, representing academia, large research centres, the national HPC centre, and the IT industry. At PASC22 we are holding the second minisymposium dedicated to the Swiss chapter of Women in HPC. The first edition of the minisymposium was held virtually last year at PASC21. We hope to have an in-person gathering and networking event with prominent speakers. The mission of WHPC is to "promote, build and leverage a diverse and inclusive HPC workforce by enabling and energising those in the HPC community to increase the participation of women and highlight their contribution to the success of supercomputing. To ensure that women are treated fairly and have equal opportunities to succeed in their chosen HPC career. To ensure everyone understands the benefits of promoting and achieving inclusivity".

Organizer(s): Marie-Christine Sawley (ICES FOUNDATION), Sadaf Alam (ETH Zurich / CSCS), Florina Ciorba (University of Basel), and Maria Girone (CERN)

Domain: Computer Science and Applied Mathematics


Towards ESM-ML Hybrids with Practical Workflows and Model Integration

A promising application area for machine learning (ML) in the domain of Earth System Modelling (ESM) and Numerical Weather Prediction (NWP) revolves around scalable solutions that embed ML in running ESM/NWP code. For example, in order to ensure the long-term stability of ML models that can substitute for existing model parameterizations, it may be necessary to tune an ML model against a live numerical model, in addition to the challenge of extracting initial training data from the model, possibly at high spatial and temporal resolutions, in the first place. Such setups are much more complex than offline training settings in terms of software integration and CPU-GPU data exchange in an HPC setup. Standard ML workflows generate models for inference whose utility is currently limited to the Python ecosystem; exporting them to work with ESMs in Fortran or C environments results in cumbersome workflows. Tackling such challenges is of key value to research and development efforts that aim to build ESM-ML hybrids. To encourage collaboration and the formulation of best practices, the minisymposium will bring together ESM and ML developers, HPC experts, and software engineers working on Python/Fortran/C bridges, and will foster exchange with other application domains.
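
One commonly discussed bridge (shown here as a hedged sketch, not as a recommendation from the organizers) is to export a trained PyTorch surrogate as TorchScript so that a Fortran or C host model can call it through libtorch or a similar runtime without embedding a Python interpreter. The small network, the training placeholder, and the file name below are illustrative.

```python
import torch

# A small stand-in for an ML parameterization trained against ESM data.
class TinyParam(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(16, 64), torch.nn.Tanh(), torch.nn.Linear(64, 4)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyParam()
# ... training against data extracted from the running ESM/NWP model would go here ...

# Export a self-contained, Python-free graph that a C or Fortran host can invoke
# through libtorch (e.g., via a thin C++ shim wrapped for Fortran).
scripted = torch.jit.script(model)
scripted.save("tiny_param.pt")
```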

Organizer(s): Tobias Weigel (German Climate Computing Centre (DKRZ), Helmholtz AI)

Domain: Climate, Weather and Earth Sciences