Author Name:  Nina Akerman
Poster Title:  Simulating jellyfish galaxies
Poster Abstract: 

Jellyfish galaxies are peculiar objects that are in the process of losing their gas as they move through space. This gas forms tails behind the galaxy, making it resemble a jellyfish, hence the name. The process that strips galaxies of their gas is known as ram-pressure stripping (RPS). There is observational evidence, however, that RPS can also make some of the gas flow towards the galaxy centre. This is important, because at the centre of almost all galaxies there is a supermassive black hole that can ‘consume’ a certain amount of gas, and this amount can determine a galaxy’s evolution.
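
For context, the classical criterion of Gunn & Gott (1972), given here as an illustrative sketch rather than the exact condition used in this work, states that gas is removed wherever the ram pressure exceeds the gravitational restoring force per unit area of the disc:

    \rho_{\mathrm{ICM}} \, v^{2} \;>\; 2\pi G \, \Sigma_{\star} \, \Sigma_{\mathrm{gas}}

where \rho_{\mathrm{ICM}} is the density of the surrounding medium, v the galaxy's velocity through it, and \Sigma_{\star}, \Sigma_{\mathrm{gas}} the stellar and gas surface densities.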

To answer the question of whether RPS makes gas flow to the galaxy centre at a higher rate than in regular galaxies, I use galaxy-scale simulations. Such simulations allow us to control the initial conditions of a simulated galaxy (its mass, size, etc.) and to model it at very high resolution. On the other hand, they are computationally expensive, since hydrodynamics, gravity, chemistry, cooling, models for star formation and other physics all need to be included. For my modelling I use the well-established code Enzo, which I also plan to modify later this year.

Poster File URL:  View Poster File


Author Name:  Bieito Beceiro
Poster Title:  Acceleration of Information-Based Feature Selection Methods for HPC Systems
Poster Abstract: 

Feature selection is a dimension reduction technique used to identify the most relevant features in a dataset while eliminating irrelevant, redundant, or noisy ones. It is commonly used as a preprocessing step to accelerate model training, particularly in neural networks. However, with the exponential growth in data production and consumption in recent years, feature selection can become unfeasible, especially when dealing with larger datasets, due to its high computational complexity. Our research focuses on adapting information-based feature selection methods to take advantage of powerful HPC environments, such as multinode systems with multicore nodes and GPU-accelerated systems. We started by developing an MPI/multithreaded approach that achieved speedups of up to 229x on a 256-core cluster when compared to a sequential counterpart written in C. We then moved to GPUs and developed a CUDA approach that achieved speedups of up to 283x on an Nvidia A100 when compared to the same sequential counterpart.
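
As a rough illustration of the kind of computation being accelerated (a minimal sketch, not the actual code; the histogram-based mutual-information estimator and the toy data are assumptions), the sequential baseline amounts to scoring every feature against the class labels and keeping the best-ranked ones. It is this embarrassingly parallel per-feature loop that maps naturally onto MPI ranks, threads, or CUDA blocks.

    import numpy as np

    def mutual_information(x, y, bins=16):
        """Histogram estimate of I(X;Y) in nats for one feature x and labels y."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    def rank_features(X, y, k=10):
        """Score every column of X against the labels and return the k best indices.
        This per-feature loop is the part that is distributed in the parallel versions."""
        scores = np.array([mutual_information(X[:, j], y) for j in range(X.shape[1])])
        return np.argsort(scores)[::-1][:k]

    # toy usage
    X = np.random.default_rng(0).normal(size=(1000, 50))
    y = (X[:, 3] + 0.1 * np.random.default_rng(1).normal(size=1000) > 0).astype(int)
    print(rank_features(X, y, k=5))   # feature 3 should typically rank first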

Poster File URL:  View Poster File


Author Name:  Tyler Tippens
Poster Title:  Modeling the Emission of Energetic Neutral Atoms at Titan
Poster Abstract: 

We combine the electromagnetic fields from a hybrid plasma model with a particle tracing tool to study the spatial distribution of energetic neutral atoms (ENAs) emitted from Titan's atmosphere when the moon is exposed to different magnetospheric upstream regimes. These ENAs are generated when energetic magnetospheric ions undergo charge exchange within Titan's atmosphere. The spatial distribution of the emitted ENA flux is largely determined by the parent ions' trajectories through the draped fields in Titan's interaction region. Since images from the ENA detector aboard Cassini captured only a fraction of the ENA population, we provide context for such observations by calculating maps of the ENA flux through a spherical detector concentric with Titan. We determine the global distribution of ENA emissions and constrain deviations between the locations of ENA production and detection. We find that the ENA flux is highest in a band that encircles Titan perpendicular to the ambient magnetospheric field, which was strictly perpendicular to the moon's orbital plane during only one Cassini flyby. The field line draping strongly attenuates the emitted ENA flux, but does not alter the overall morphology of the detectable flux pattern. The majority of detectable ENAs leave Titan's atmosphere far from where they are produced, i.e., even a spacecraft located directly above the moon's atmosphere would detect ENAs generated beyond its immediate environment. Some energetic parent ions produce ENAs only after they are mirrored by the field perturbations in Titan's wake and return to the moon, demonstrating the complex histories of detectable ENAs.
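
A minimal sketch of the particle-tracing side of such a study is given below (illustrative only; the field callbacks, the uniform 5 nT field, the O+ ion and the time step are placeholder assumptions, not the hybrid-model output used in this work). Energetic parent ions are advanced through prescribed E and B fields with the standard Boris scheme; a charge-exchange event would then convert an ion into an ENA that travels ballistically.

    import numpy as np

    def trace_ion(x0, v0, E_field, B_field, q_over_m, dt, n_steps):
        """Integrate one ion trajectory through prescribed E and B fields (Boris push)."""
        x, v = np.array(x0, float), np.array(v0, float)
        traj = [x.copy()]
        for _ in range(n_steps):
            # half electric kick, magnetic rotation, half electric kick
            v_minus = v + 0.5 * q_over_m * E_field(x) * dt
            t = 0.5 * q_over_m * B_field(x) * dt
            s = 2.0 * t / (1.0 + np.dot(t, t))
            v_prime = v_minus + np.cross(v_minus, t)
            v_plus = v_minus + np.cross(v_prime, s)
            v = v_plus + 0.5 * q_over_m * E_field(x) * dt
            x = x + v * dt
            traj.append(x.copy())
        return np.array(traj)

    # toy usage: gyration of an O+ ion in a uniform ~5 nT field
    E = lambda x: np.zeros(3)
    B = lambda x: np.array([0.0, 0.0, 5e-9])
    q_over_m = 1.602e-19 / (16 * 1.673e-27)      # O+ charge-to-mass ratio [C/kg]
    path = trace_ion([0, 0, 0], [1e4, 0, 0], E, B, q_over_m, dt=1.0, n_steps=500)
    print(path.shape)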

Poster File URL:  View Poster File


Author Name:  Michael Haahr
Poster Title:  Towards Realistic Solar Flare Models
Poster Abstract: 

Understanding the coronal heating problem, which involves the maintenance of extremely high temperatures in the solar corona, remains a challenge in astrophysics. This research focuses on investigating the role of high-velocity particles as an energy source for the corona. 

Magnetic reconnection events, like solar flares, are known to accelerate particles and convert magnetic energy into thermal and kinetic energy in the plasma. However, the underlying mechanisms of this conversion process are poorly understood.

While magnetohydrodynamic (MHD) models provide a common framework for plasma simulations, they are limited in capturing small-scale dynamics within the reconnection region. Here, the Particle-In-Cell (PIC) method, a kinetic approach, becomes crucial. However, PIC simulations are computationally expensive, and modeling an entire flare solely with PIC is impractical.
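
To illustrate what the PIC method involves at its simplest, the sketch below implements a minimal 1D electrostatic PIC loop (deposit charge to a grid, solve Poisson's equation, gather the field back to the particles, push). It is purely illustrative, in normalised units, and is unrelated to the actual DISPATCH implementation; all parameters are placeholders.

    import numpy as np

    ng, n_part, L = 64, 10000, 2 * np.pi        # grid cells, macro-particles, domain length
    dx, dt, steps = L / ng, 0.05, 200

    rng = np.random.default_rng(0)
    x = rng.uniform(0, L, n_part)               # electron positions
    v = 0.01 * np.sin(2 * np.pi * x / L)        # small velocity perturbation
    qp = -L / n_part                            # charge per macro-particle
    k = 2 * np.pi * np.fft.rfftfreq(ng, d=dx)   # wavenumbers for the spectral Poisson solve

    for _ in range(steps):
        # 1) deposit charge to the grid with cloud-in-cell weighting
        g = x / dx
        i0 = np.floor(g).astype(int) % ng
        w1 = g - np.floor(g)
        rho = np.bincount(i0, weights=qp * (1 - w1), minlength=ng) \
            + np.bincount((i0 + 1) % ng, weights=qp * w1, minlength=ng)
        rho = rho / dx + 1.0                    # add a uniform neutralising ion background

        # 2) solve div E = rho spectrally
        rho_k = np.fft.rfft(rho)
        E_k = np.zeros_like(rho_k)
        E_k[1:] = -1j * rho_k[1:] / k[1:]
        E = np.fft.irfft(E_k, n=ng)

        # 3) gather E to the particles and advance them (kick, then drift)
        Ep = E[i0] * (1 - w1) + E[(i0 + 1) % ng] * w1
        v += -Ep * dt                           # electrons: q/m = -1 in these units
        x = (x + v * dt) % L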

To overcome these challenges, this project introduces a new PIC solver integrated into the DISPATCH framework, designed for exascale supercomputing. Leveraging modern graphics processing units (GPUs), the PIC solver will coexist with existing MHD solvers to create a comprehensive model of a solar flare, encompassing both small- and large-scale dynamics. This holistic approach aims to advance our understanding of the coronal heating problem.

This study showcases the power of high-performance computing in unraveling complex astrophysical phenomena and emphasizes the importance of combined modeling techniques. The integration of MHD and PIC solvers within DISPATCH offers a promising avenue to elucidate the intricate processes governing the solar corona, with implications for plasma physics and astrophysical research.

Poster File URL:  View Poster File


Author Name:  Grzegorz Florczyk
Poster Title:  How to model the evolution of the polluted Planetary Boundary Layer? - Exploring a new approach
Poster Abstract: 

Understanding the feedback loop between the height of the planetary boundary layer (PBL) and the concentration of absorbing aerosols is crucial for developing more accurate forecasting models. For this purpose, we enhanced the existing PBL model developed by M. Witek et al. [1] at the NASA Jet Propulsion Laboratory, which is based on the so-called Eddy Diffusivity-Mass Flux (EDMF) scheme [2]. Although that study focused on a new form of the entrainment coefficient in the EDMF model, it produced a robust and fast 1D numerical model written entirely in MATLAB.
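
For reference, the EDMF closure underlying [1] and [2] decomposes the turbulent flux of a scalar into a local eddy-diffusivity term and a non-local mass-flux term (standard form, written here as a sketch):

    \overline{w'\phi'} = -K \, \frac{\partial \bar{\phi}}{\partial z} + M \, (\phi_u - \bar{\phi})

where K is the eddy diffusivity, M the updraft mass flux, and \phi_u the value of \phi inside the updraft.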

To investigate the aforementioned feedback loop in a PBL with aerosols (the polluted PBL), an additional radiative transfer module was added to the EDMF scheme. We used the NASA Langley Fu-Liou radiative transfer model [3] (2004 version), with added functionality enabling the input of custom vertical profiles of aerosol properties.

The model was supplied with synthetic and realistic profiles of the PBL and aerosol parameters. We varied the optical depth and single-scattering albedo and recorded the model output. Initial results show that the model calculations are qualitatively in line with expectations. We plan to further develop, refine and validate the model through comparison with experimental observations. We expect it to be a valuable tool for understanding the influence of aerosols on the strongly unstable polluted PBL.

[1] Witek, M. L., J. Teixeira, and G. Matheou, 2011: An integrated TKE-based eddy-diffusivity/mass-flux boundary layer closure for the dry convective boundary layer. J. Atmos. Sci., 68, 1526–1540, https://doi.org/10.1175/2011JAS3548.1.

[2] Siebesma, A. P., P. M. M. Soares, and J. Teixeira, 2007: A combined eddy-diffusivity mass-flux approach for the convective boundary layer. J. Atmos. Sci., 64, 1230–1248, https://doi.org/10.1175/JAS3888.1.

[3] fredgrose/Ed4_LaRC_FuLiou (GitHub): Edition 4 version of the LaRC Fu-Liou broadband correlated-k SW & LW radiative transfer code.

Poster File URL:  View Poster File


Author Name:  Iñaki Amatria Barral
Poster Title:  Accelerating lncRNA functional analyses
Poster Abstract: 
Non-coding RNA refers to RNA that is not translated into protein. With the emergence of Next Generation Sequencing (NGS) technologies, it has been discovered that only a small fraction of the genome encodes proteins, and that the vast majority of transcribed RNA is non-coding. Among non-coding RNAs, long non-coding RNA (lncRNA) sequences, which are transcripts at least 200 nucleotides in length, have drastically changed how scientists approach genetics.

NGS technologies have allowed the rapid and inexpensive mining of large biological datasets. This has shown that lncRNAs play key roles in several biological processes, such as immune response, cell cycle regulation, and chromatin modification. In addition, it has been shown that the dysfunction of many lncRNAs is associated with severe conditions, such as cancer, Parkinson’s disease, preeclampsia, and SARS-CoV-2 infection.

There is still much work to be done in the functional analysis of lncRNAs, as the functions of most of them, as well as the diseases to which they may be linked, remain unknown. Several bioinformatics tools can already help determine the function of lncRNAs. However, analysing the large datasets obtained with NGS procedures is very time-consuming, and more often than not the analysis cannot be completed at all because it requires more memory than conventional computers have.

The objective of this work, therefore, is to develop tools capable of exploiting High Performance Computing (HPC) hardware to accelerate the functional analysis of lncRNAs.
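
As a hypothetical illustration of the intended direction (a sketch only; the pair data, the score_pair placeholder and the chunking strategy are assumptions, not an existing tool), large collections of lncRNA-target pairs could be scored in parallel across the distributed memory of an HPC cluster, so that neither the pair list nor the per-pair work has to fit on a single machine:

    from mpi4py import MPI

    # hypothetical sketch: distribute pairwise lncRNA-target scoring over MPI ranks
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    def score_pair(lnc, target):
        # placeholder for a real interaction/similarity score
        return float(len(set(lnc) & set(target)))

    if rank == 0:
        # toy data; in practice these would be read from NGS-derived FASTA files
        pairs = [("ACGU" * 60, "UGCA" * 60) for _ in range(1000)]
        chunks = [pairs[i::size] for i in range(size)]
    else:
        chunks = None

    local = comm.scatter(chunks, root=0)
    local_scores = [score_pair(a, b) for a, b in local]
    scores = comm.gather(local_scores, root=0)
    if rank == 0:
        print(sum(len(s) for s in scores), "pairs scored")
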
Poster File URL:  View Poster File


Author Name:  Harry McHugh
Poster Title:  On the suitability of physics informed neural networks to accelerate computational fluid dynamics
Poster Abstract: 

The scientific study of fluids benefits many areas of human life, including energy production (e.g. improving the efficiency of wind turbines), transport (more efficient air travel through wing design) and health (modelling airborne diseases such as COVID-19), ultimately improving quality of life.

Unfortunately, computational fluid dynamics, like many other fields in HPC, can be exceptionally computationally expensive, sometimes prohibitively so, as in the case of full-fidelity direct numerical simulations.

Other fields have benefited greatly from shifting computational cost "offline" into the training of machine learning models, so that only inference is needed to generate predictions, often achieving speed-ups of more than two orders of magnitude compared with the original simulations.

However, traditional machine learning models can struggle to learn the complex physics that governs fluid flow. Therefore, significant research attention has been devoted to incorporating physical constraints into these deep learning models so that non-physical predictions are heavily penalised during training.
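
A minimal sketch of a physics-informed neural network is shown below (illustrative only; the PyTorch architecture, the 1D viscous Burgers equation, and all hyperparameters are assumptions chosen for brevity). The key idea is that the PDE residual, evaluated with automatic differentiation at random collocation points, is added to the loss alongside initial- and boundary-condition terms, so that non-physical predictions are penalised during training.

    import torch

    # PINN sketch for the 1D viscous Burgers equation u_t + u u_x = nu u_xx
    torch.manual_seed(0)
    nu = 0.01 / torch.pi

    net = torch.nn.Sequential(
        torch.nn.Linear(2, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )

    def pde_residual(xt):
        xt = xt.requires_grad_(True)
        u = net(xt)
        grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
        u_x, u_t = grads[:, :1], grads[:, 1:]
        u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]
        return u_t + u * u_x - nu * u_xx

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(2000):
        # collocation points with x in [-1, 1] and t in [0, 1]
        xt = torch.rand(256, 2) * torch.tensor([2.0, 1.0]) - torch.tensor([1.0, 0.0])
        # initial condition u(x, 0) = -sin(pi x) and boundary condition u(+-1, t) = 0
        x0 = torch.rand(64, 1) * 2 - 1
        ic_in = torch.cat([x0, torch.zeros_like(x0)], dim=1)
        bc_in = torch.cat([torch.sign(torch.rand(64, 1) - 0.5), torch.rand(64, 1)], dim=1)

        loss = (pde_residual(xt) ** 2).mean() \
             + ((net(ic_in) + torch.sin(torch.pi * x0)) ** 2).mean() \
             + (net(bc_in) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()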

This project explores the potential of physics informed neural networks (PINNs) to accelerate computational fluid dynamics.

Poster File URL:  View Poster File


Author Name:  Moaad Khamlich
Poster Title:  Optimal Transport-inspired Deep Learning Framework for Slow-Decaying Problems
Poster Abstract: 

Many practical problems have dominant patterns that characterize the variation of the solution with respect to parameters. Reduction techniques are therefore used to compress the dataset while preserving its essential properties.

These techniques can be classified as linear or nonlinear methods. Linear methods, such as proper orthogonal decomposition (POD) and the greedy algorithm, approximate the solution within a linear subspace. However, they face challenges with problems whose approximation error decays slowly with the dimension of the subspace, and they struggle to represent certain phenomena in nonlinear problems.

Nonlinear reduction techniques, including deep learning (DL) frameworks, offer advancements in spanning the nonlinear manifold associated with parameterized partial differential equations (PDEs). DL-based methods efficiently extract relevant features from high-dimensional data, but they require increased training costs during the offline stage.

Our framework exploits the optimal transport (OT) distance metric known as the Wasserstein distance to construct a custom kernel that captures the underlying hidden features of the data. The dimensionality reduction is carried out nonlinearly using kPOD, a kernel-based extension of POD that leverages the idea that datasets with nonlinearly separable attributes can be embedded into a higher-dimensional feature space, where the data becomes more spread out and easier to separate. kPOD maps the training data to this feature space and applies the POD algorithm there; when projected back to the original space, the resulting modes yield a nonlinear projection that better represents the data. To avoid the explicit calculation of the higher-dimensional coordinates, kPOD uses the kernel function to implicitly map the data to the feature space, reducing the method's computational complexity. As a result, kPOD performs the computation in the original space while still benefiting from the advantages of the nonlinear projection.
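
A minimal sketch of this procedure on 1D snapshots is given below (illustrative only; the Gaussian kernel built from pairwise Wasserstein distances, the bandwidth sigma, and the travelling-pulse toy data are assumptions, and the real framework operates on parameterized PDE solutions rather than this toy):

    import numpy as np
    from scipy.stats import wasserstein_distance

    def kpod_modes(grid, snapshots, n_modes=4, sigma=1.0):
        """Kernel PCA with an OT-inspired kernel: pairwise Wasserstein distances between
        snapshots (treated as distributions on the grid) define a Gaussian kernel,
        which is centred and diagonalised to obtain the reduced coordinates."""
        n = len(snapshots)
        W = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                W[i, j] = W[j, i] = wasserstein_distance(
                    grid, grid, u_weights=snapshots[i], v_weights=snapshots[j])
        K = np.exp(-W**2 / (2 * sigma**2))          # custom kernel
        H = np.eye(n) - np.ones((n, n)) / n         # centering matrix
        vals, vecs = np.linalg.eigh(H @ K @ H)
        order = np.argsort(vals)[::-1][:n_modes]
        # reduced coordinates of each snapshot (rows) in the kPOD latent space
        return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

    # toy usage: snapshots of a travelling pulse, a classic slow-decaying example
    x = np.linspace(0, 1, 200)
    snaps = [np.exp(-((x - c) ** 2) / 0.002) for c in np.linspace(0.2, 0.8, 30)]
    coords = kpod_modes(x, snaps, n_modes=2)
    print(coords.shape)   # (30, 2)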

To train our models, i.e. to recover the forward and backward mappings between the full-order solution and the latent space, we use an autoencoder architecture in which the encoding layer is forced to learn the reduced representation. In this way, the encoder acts as a proxy for the forward map from the full-order space to the kPOD coefficients, and the decoder recovers the backward map.


Poster File URL:  View Poster File


Author Name:  Benedetta Santoro
Poster Title:  Risk assessment of contagion from airborne pathogens in closed settings
Poster Abstract: 

Closed environments, where interpersonal distance is reduced and ventilation is low, are more likely to become sites of infection clusters from airborne pathogens. 


Thus, starting from the need for an easy-to-use model to compute the probability of infection by SARS-CoV-2 in such at-risk settings, we modified the Wells-Riley model to take into account the characteristics of the new pathogen and other environmental factors in addition to ventilation. This model was embedded in a graphical user interface and in an Android app; the goal was to create software that can also be used by those unfamiliar with mathematical modelling.
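
For reference, the classical Wells-Riley relation on which the modified model builds gives the infection probability as P = 1 - exp(-I q p t / Q). The sketch below shows this baseline form only, with placeholder parameter values, and does not include the pathogen- and environment-specific corrections described above.

    import numpy as np

    def wells_riley(I, q, p, t, Q):
        """Classical Wells-Riley infection probability.
        I: number of infectors, q: quanta generation rate [quanta/h],
        p: breathing rate [m^3/h], t: exposure time [h], Q: room ventilation rate [m^3/h]."""
        return 1.0 - np.exp(-I * q * p * t / Q)

    # toy usage: one infector, 1 h of exposure in a poorly ventilated room
    print(wells_riley(I=1, q=50, p=0.5, t=1.0, Q=100))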


The other aim of the project is to demonstrate that this stationary model is reliable by simulating an infection cluster that occurred in the local hospital at the beginning of the pandemic in 2020. The calculations for this simulation will benefit from parallelisation.



Poster File URL:  View Poster File


Author Name:  Edric Matwiejew
Poster Title:  High Performance Simulation for Quantum Algorithm Design
Poster Abstract: 

Quantum Variational Algorithms (QVAs) are a class of hybrid quantum-classical algorithms in which a classically parameterised quantum kernel is tuned via classical optimisation techniques. These algorithms are robust to noise and have a flexible structure that targets near-term Noisy Intermediate-Scale Quantum (NISQ) processors. Accordingly, QVAs have been increasingly spotlighted as a practical application for near-term quantum processors, promising to solve complex computational problems in logistics, chemistry, finance and more. However, noise in even state-of-the-art quantum hardware still imposes significant limitations on the complexity and size of QVAs that can be executed in practice. To study the limiting performance and scaling behaviour of QVAs, researchers must rely on classical simulation tools built on High-Performance Computing (HPC) resources and numerical techniques that scale efficiently on shared- and distributed-memory systems.
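
To make the structure of such algorithms concrete, the sketch below simulates a small QVA with a QAOA-like ansatz directly on a state vector (illustrative only, written with plain NumPy/SciPy; it is not the QuOp_MPI interface, and the random diagonal cost function, circuit depth and optimiser are placeholder assumptions). Alternating phase-separation and mixing unitaries are applied to the state, and a classical optimiser tunes their parameters to minimise the expected cost.

    import numpy as np
    from scipy.optimize import minimize

    n = 8                                                    # qubits
    costs = np.random.default_rng(0).uniform(0, 1, 2**n)    # diagonal cost over bitstrings

    def apply_mixer(state, t):
        """Apply exp(-i t X) on every qubit (transverse-field mixing unitary)."""
        state = state.reshape([2] * n)
        c, s = np.cos(t), -1j * np.sin(t)
        for q in range(n):
            a = np.take(state, 0, axis=q)
            b = np.take(state, 1, axis=q)
            state = np.stack([c * a + s * b, s * a + c * b], axis=q)
        return state.reshape(-1)

    def expectation(params, depth):
        state = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)   # equal superposition
        gammas, ts = params[:depth], params[depth:]
        for gamma, t in zip(gammas, ts):
            state = np.exp(-1j * gamma * costs) * state            # phase-separation unitary
            state = apply_mixer(state, t)                          # mixing unitary
        return float(np.real(np.vdot(state, costs * state)))

    depth = 3
    res = minimize(expectation, x0=np.full(2 * depth, 0.1), args=(depth,), method="Nelder-Mead")
    print(res.fun, costs.min())    # optimised expected cost vs. the true minimum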

To address this need, we developed QuOp_MPI, a scalable framework for designing and simulating QVAs on massively parallel systems. It provides an approachable modular Python interface that allows researchers with minimal experience in parallel computing to explore novel QVAs at large scales, with a backend optimised to the structure of unitary operations common to QVAs. Most recently, QuOp_MPI was integral to developing the Quantum Multivariable Optimisation Algorithm (QMOA), a QVA that can efficiently optimise continuous multivariable functions by exploiting general structural properties of a discretised continuous solution space. Results obtained with QuOp_MPI demonstrated that the QMOA can achieve a degree of convergence that exceeds pre-existing methods, particularly for high-dimensional oscillatory functions, which are challenging for classical optimisers. Current development efforts are working towards integrating cutting-edge GPU-accelerated methods into the existing QuOp_MPI codebase. Dubbed 'QuOp_Wavefront', this new backend utilises the open-source HIP programming model and targets the new generation of GPU-accelerated HPE Cray EX systems. 

Poster File URL:  View Poster File