Author Name:  Oscar Amaro
Poster Title:  Reduced semi-analytical model to accelerate the study of laser-electron collisions
Poster Abstract: 

In an intense electromagnetic background, leptons reach relativistic velocities and emit energetic photons. A fraction of these photons decays into electron-positron pairs, which can themselves be accelerated by the fields and radiate new photons. These phenomena occur in extreme astrophysical environments such as pulsars, and in laboratory collisions between Petawatt-class lasers and ultrarelativistic electron beams.
To benchmark the fundamental models of this field, called High-Field QED plasma physics, against experiments, it is crucial to run accurate simulations of these processes and to develop analytical models that describe them. However, realistic simulations can take up to millions of CPU hours on HPC systems, limiting the parameter space that can be explored when supporting the design of experiments. In this work, we propose a reduced yet accurate predictive model for the final lepton and photon spectra, and for the number of positrons produced in these collisions, which can be run on a single CPU in a matter of minutes. We hope to provide experimentalists with a lightweight semi-analytical tool for faster design of experiments.
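
The regime of these collisions is commonly characterized by the electron's quantum nonlinearity parameter chi; a minimal sketch of the standard head-on estimate (the beam and laser values below are illustrative assumptions):

```python
import math

# Schwinger critical field (V/m), E_S = m_e^2 c^3 / (e * hbar)
E_SCHWINGER = 1.32e18

def chi_head_on(gamma, E_laser):
    """Quantum nonlinearity parameter for an electron of Lorentz factor
    gamma colliding head-on with a laser field of peak strength E_laser
    (V/m): chi ~ 2 * gamma * E_laser / E_S."""
    return 2.0 * gamma * E_laser / E_SCHWINGER

# Illustrative values: a ~1 GeV electron beam (gamma ~ 2000) meeting a
# petawatt-class laser focused to ~1e14 V/m.
chi = chi_head_on(2000.0, 1e14)
```

Values of chi approaching unity mark the transition into the quantum regime that such reduced models must capture.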

Poster File URL:  View Poster File


Author Name:  Mahmoud Abouamer
Poster Title:  Resource Allocation in IRS-assisted Wireless Communications System
Poster Abstract: 

Current wireless technologies focus on designing the transmitter or the receiver, while the configuration of the wireless channel itself remains largely beyond the reach of the designer. Intelligent reflecting surfaces (IRS) can fill this gap by tuning the wireless propagation channel to enhance the performance of a communication system. Specifically, an IRS comprises a large array of configurable reflecting elements that can collectively reshape the phase and amplitude of the incident signal. This research aims at leveraging wireless physical-layer design, optimization, and machine learning to improve the performance and reliability of IRS-assisted wireless communications systems. Wireless physical-layer design traditionally hinges upon mathematical models and signal-processing techniques that face severe computational challenges when it comes to optimizing practical wireless systems in a reasonable time. Indeed, acquiring channel state information accurately, with low complexity and delay, is the key to improving the performance and reliability of wireless systems. The research therefore aims at advancing robust, distributed, and data-driven estimation of channel side-information with low overhead and delay. Moreover, we aim at exploiting structured deep learning to allocate resources in wireless systems in real time while reducing overhead and latency. HPC is critical for training machine learning models under practical time and complexity constraints. Furthermore, HPC is essential to this research because modeling and configuring a wireless channel is a computationally intensive and time-sensitive task that requires HPC and distributed mechanisms to enable efficient and reliable communications in near real time.
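
The channel-tuning idea admits a compact textbook illustration: with known cascaded channels, each reflecting element cancels its path's phase so that all reflections add coherently (the random channel draws below are illustrative):

```python
import cmath
import random

def cophase(h, g):
    """Choose IRS phase shifts that align all cascaded paths h_n * g_n,
    so the per-element channel magnitudes add coherently."""
    return [-cmath.phase(hn * gn) for hn, gn in zip(h, g)]

def composite_gain(h, g, theta):
    """|sum_n h_n * e^{j theta_n} * g_n| for a given IRS configuration."""
    return abs(sum(hn * cmath.exp(1j * t) * gn
                   for hn, gn, t in zip(h, g, theta)))

# Illustrative Rayleigh-style channels for a 16-element surface.
random.seed(0)
h = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(16)]
g = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(16)]
best = composite_gain(h, g, cophase(h, g))  # coherent combining
rand = composite_gain(h, g, [0.0] * 16)     # untuned surface
```

The co-phased configuration attains the sum of the cascaded path magnitudes, the upper bound for any phase choice.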


Poster File URL:  View Poster File


Author Name:  Mark Klaisoongnoen
Poster Title:  I feel the need for speed: Exploiting latest generation FPGAs in providing new capabilities for high frequency trading
Poster Abstract: 

Field Programmable Gate Arrays (FPGAs) have enjoyed significant popularity in financial algorithmic trading. Such systems typically involve high-velocity data, for instance arriving from markets, streaming through FPGAs which then undertake real-time transformations to deliver insights within tight time constraints. Such high-bandwidth, low-latency data processing approaches have proven highly successful in delivering important insights to financial trading floors.

However, due to these real-time requirements, there is only a small window in which such data manipulations can occur. These transformations are therefore, by necessity, fairly simplistic, as there is no time for more advanced workloads. However, the past few years have seen very significant improvements in both the hardware and the software ecosystem for FPGAs, which is potentially a game changer in this regard. New hardware technologies such as Xilinx's Alveo and Intel's Stratix ranges provide far more capability than ever before, and exciting developments such as the AI engines in Xilinx's Versal ACAP, due for release later this year, open up significant possibilities. Furthermore, investment in the software ecosystem has not only improved the programmability of these devices but also grown the set of open-source libraries, potentially reducing programming time significantly and enabling the development of more complex codes.

Exploiting the concurrency within FPGAs requires rethinking CPU-optimized codes in a dataflow style of computing. Targeting low latency in real-time settings, the fundamental question this PhD research looks to answer is: given the aforementioned advances in FPGAs, what algorithmic techniques are most appropriate to enable a step change in capability in high-frequency algorithmic trading through HLS? This research will involve the development of new dataflow techniques enabling high-throughput, low-latency processing of streaming network data on next-generation FPGAs, ultimately increasing the processing work that can be completed in an acceptable time window.
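
The dataflow style referred to above can be sketched in software as a chain of streaming stages, each consuming and emitting a stream much like pipelined hardware blocks (a toy Python analogy, not HLS code; the moving-average and threshold stages are illustrative):

```python
from collections import deque

def moving_average(ticks, window):
    """Streaming stage: emit the running mean of the last `window` prices."""
    buf = deque(maxlen=window)
    for price in ticks:
        buf.append(price)
        yield sum(buf) / len(buf)

def threshold_signal(averages, level):
    """Streaming stage: emit True whenever the average exceeds `level`."""
    for avg in averages:
        yield avg > level

# Stages are chained like hardware pipeline blocks: each tick flows
# through every stage without the stream ever being materialized whole.
prices = [100.0, 101.0, 103.0, 99.0, 98.0]
signals = list(threshold_signal(moving_average(prices, 3), 100.0))
```

On an FPGA the same topology would be expressed as HLS dataflow regions connected by streams, with one result emitted per clock cycle once the pipeline fills.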

Poster File URL:  View Poster File


Author Name:  Mehdi Salakhi
Poster Title:  Using HPC for computational fluid dynamics modeling of methane pyrolysis in a microwave-assisted fluidized bed
Poster Abstract: 

Methane pyrolysis can be exploited for zero-emission hydrogen production. The process also produces solid carbon as a valuable by-product. Computational fluid dynamics (CFD) modeling of methane pyrolysis in a microwave-assisted fluidized bed requires combining detailed reaction kinetics, fluid flow, particle dynamics, a population balance model, and an electromagnetic model. Due to this complexity and the expensive computation involved, typical desktops are not capable of modeling such phenomena. Therefore, the current project seeks to perform numerical simulations of methane pyrolysis as a CO2-free technology by employing high-performance computing (HPC). Supercomputers provide a massively parallel computing infrastructure, and running cases with GNU Parallel not only drastically reduces the computational time but also increases the amount of output data obtained from the CFD simulations. Afterward, the obtained data will be used to train a reliable artificial neural network (ANN). The ANN obviates the need to simulate every combination of input data, and HPC is also a precious asset for training it to predict the objectives of the research almost instantaneously. All in all, HPC will play an integral role in the project, reducing the time required to obtain results from months to a fraction of a second.


Poster File URL:  View Poster File


Author Name:  Aaron Mishkin
Poster Title:  Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions
Poster Abstract: 

We develop fast algorithms and robust software for convex optimization of two-layer neural networks with ReLU activation functions. Our work leverages a convex reformulation of the standard weight-decay-penalized training problem as a set of group-l1-regularized data-local models, where locality is enforced by polyhedral cone constraints. In the special case of zero regularization, we show that this problem is exactly equivalent to unconstrained optimization of a convex "gated ReLU" network. For problems with non-zero regularization, we show that convex gated ReLU models obtain data-dependent approximation bounds for the ReLU training problem. To optimize the convex reformulations, we develop an accelerated proximal gradient method and a practical augmented Lagrangian solver. We show that these approaches are faster than standard training heuristics for the non-convex problem, such as SGD, and outperform commercial interior-point solvers. Experimentally, we verify our theoretical results, explore the group-l1 regularization path, and scale convex optimization for neural networks to image classification on MNIST and CIFAR-10.
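
The key property of the gated ReLU reformulation can be seen in a few lines: once the gate vectors are fixed, the model output is linear in the trainable weights, so least-squares training becomes convex (the gates, weights, and data below are illustrative):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gated_relu(x, gates, weights):
    """f(x) = sum_i 1[g_i . x > 0] * (w_i . x): with the gates g_i held
    fixed, f is linear in the weights w_i, so squared-error training
    over the weights is a convex problem."""
    return sum(dot(w, x) for g, w in zip(gates, weights) if dot(g, x) > 0)

x = [1.0, -2.0]
gates = [[1.0, 0.0], [0.0, 1.0]]   # fixed gate vectors
w1 = [[0.5, 0.5], [1.0, 0.0]]
w2 = [[-0.2, 0.1], [0.3, 0.3]]
wsum = [[a + b for a, b in zip(u, v)] for u, v in zip(w1, w2)]
# Linearity in the weights: f(x; w1 + w2) == f(x; w1) + f(x; w2)
```

A standard ReLU network lacks this property because its on/off pattern depends on the weights themselves, which is exactly what the gates decouple.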

Poster File URL:  View Poster File


Author Name:  Nikolas Wittek
Poster Title:  Worldtube excision method for intermediate-mass-ratio black hole inspirals: scalar-field toy model in 3+1D
Poster Abstract: 

Numerical relativity solves the full Einstein equations on supercomputers to predict the properties of gravitational waves emitted by merging binary black holes. These simulations are essential for finding and interpreting gravitational waves. However, they become particularly costly when one black hole is much more massive than the other, because the Courant limit forces a reduction in the time step. We attempt to make simulations of mass ratios 10-1000 feasible in order to produce much-needed waveforms for the detectors of the near future. We do this by incorporating analytical calculations into the simulations to greatly reduce the required numerical resolution and thereby avoid the limitation imposed by the Courant limit. We use our new code SpECTRE, by the Simulating eXtreme Spacetimes (SXS) collaboration, which employs a nodal discontinuous Galerkin method, task-based parallelism, and adaptive mesh refinement to run on exascale computing clusters.
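
The mass-ratio cost follows from a back-of-the-envelope Courant argument: the grid must resolve the smaller horizon, so the allowed time step shrinks roughly in proportion to the inverse mass ratio (the grid resolution and CFL factor below are illustrative assumptions, in geometric units G = c = 1):

```python
def courant_dt(m_small, points_per_horizon=20, cfl=0.25):
    """Smallest grid spacing ~ horizon diameter (2*m in geometric units)
    divided by the number of points needed to resolve it; the Courant
    limit then caps the time step at cfl * dx."""
    dx = 2.0 * m_small / points_per_horizon
    return cfl * dx

# For a fixed total mass M = 1, a mass ratio q gives a small hole of
# mass m ~ 1/(1+q), so the time step drops roughly as 1/q.
dt_equal = courant_dt(0.5)        # q = 1
dt_q100 = courant_dt(1.0 / 101)   # q = 100
```

This is the step-size penalty the worldtube excision method aims to remove by treating the region around the small hole analytically.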
Poster File URL:  View Poster File


Author Name:  Khodr Jaber
Poster Title:  Multi-node Multi-GPU Simulation of a 3D Lid-Driven Cavity at High Reynolds Number using a Lattice Boltzmann Framework
Poster Abstract: 

Simulation of turbulent flows is usually prohibitively expensive due to the resolution required to resolve the large range of physical scales of the underlying coherent structures. This has motivated the development of turbulence models such as Large Eddy Simulation (LES), in which the effects of the smaller scales are filtered out, and of efficient, parallelizable numerical methods for the governing equations of fluid flow such as the Lattice Boltzmann Method (LBM), an emerging competitor to standard methods for computational fluid dynamics. Recently, it has been demonstrated that turbulence modeling can be combined with the LBM to produce a class of methods with significant advantages such as purely explicit time-marching and straightforward parallelization. In the present work, a multi-node, multi-GPU parallel scheme based on a CUDA-aware MPI framework has been developed for simulating a three-dimensional lid-driven cavity at Re up to 10^5. The LBM is used in conjunction with Smagorinsky subgrid-scale modeling for the unresolved scales. An in-house code has been written in C++, and simulations have been performed on SciNet's Mist GPU cluster. The Tecplot software was used to generate isosurfaces of the coherent structures, defined in this work by three vortex identification criteria: vorticity magnitude, and the Q- and Lambda2-criteria.
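
The LBM/LES coupling is often realized by folding the Smagorinsky eddy viscosity into a locally varying relaxation time; a sketch of one common form from the LBM literature (lattice units, filter width of one cell; the numeric values are illustrative):

```python
import math

def effective_tau(tau0, c_smag, pi_neq, rho=1.0):
    """Local effective relaxation time combining the molecular value
    tau0 with a Smagorinsky eddy viscosity computed from pi_neq, the
    magnitude of the non-equilibrium momentum flux (a quantity already
    available locally in every LBM cell, so no extra stencils are
    needed and the update stays purely explicit)."""
    return 0.5 * (tau0 + math.sqrt(
        tau0**2 + 18.0 * math.sqrt(2.0) * c_smag**2 * pi_neq / rho))

# Quiescent flow recovers the molecular relaxation time; strong local
# stress raises tau and hence the effective viscosity.
tau_quiet = effective_tau(0.6, 0.1, 0.0)
tau_sheared = effective_tau(0.6, 0.1, 0.05)
```

Because the correction is local to each cell, it maps directly onto the per-thread GPU update described above.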

Poster File URL:  View Poster File


Author Name:  Filippo Gori
Poster Title:  Impact of CFD modeling and optimization algorithms on wake steering control strategies in wind energy applications
Poster Abstract: 

Wind energy is gaining momentum around the globe, driven by the urgent environmental need to reduce carbon emissions and the significant decrease in the cost of renewables. On average, aerodynamic interactions between turbines placed in a farm arrangement lead to power losses ranging from 10% to 30%. Wind farm control plays a major role in reducing the drawbacks of wake effects, with wake steering being a promising strategy. Currently, feedback control remains a major challenge due to the great computational cost of CFD simulations. The current research aims at determining the sensitivity of wake steering strategies to both the choice of CFD wake model and the optimization algorithm. Analytical wake models, as well as the high-order flow solver Incompact3d, are employed to conduct the optimization. A further research objective is the development of a novel, computationally efficient feedback control framework that includes state reconstruction, parameter tuning, and solution correction algorithms to account for time-varying inflow conditions. The use of High-Performance Computing is a key aspect of the project, as it determines the suitability of the control framework for real-life applications.
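
The trade-off between analytical wake models and CFD can be made concrete with the classic Jensen (top-hat) model, which collapses the whole wake to a single algebraic expression (the thrust coefficient and wake-expansion constant below are typical illustrative values):

```python
import math

def jensen_deficit(x, D, ct=0.8, k=0.05):
    """Fractional velocity deficit a distance x downstream of a turbine
    with rotor diameter D, per the Jensen/Park wake model:
    deficit = (1 - sqrt(1 - Ct)) / (1 + 2*k*x/D)**2."""
    return (1.0 - math.sqrt(1.0 - ct)) / (1.0 + 2.0 * k * x / D) ** 2

# The deficit decays monotonically downstream, which is what makes yaw
# optimization over such models cheap compared to a full CFD solve.
near = jensen_deficit(x=2 * 126.0, D=126.0)   # 2 diameters downstream
far = jensen_deficit(x=10 * 126.0, D=126.0)   # 10 diameters downstream
```

Evaluating such a model costs microseconds, versus hours for a solver like Incompact3d, which is precisely the sensitivity gap this research quantifies.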

Poster File URL:  View Poster File


Author Name:  Michail Papadourakis
Poster Title:  FEPrepare: A web-based tool for automating the setup of relative binding free energy calculations
Poster Abstract: 

Relative binding free energy perturbation (FEP) calculations in drug design are becoming a useful tool for facilitating lead binding affinity optimization in a cost- and time-efficient manner (1). However, they have been limited by technical challenges such as the manual creation of large numbers of input files to set up, run, and analyze free energy simulations. Automation of this procedure is therefore required to enhance the general applicability of these protocols. For this purpose, we created FEPrepare, a novel web-based tool that automates the setup procedure for relative binding FEP calculations for the dual-topology scheme of NAMD, one of the major MD engines, using OPLS-AA force field topology and parameter files (2). FEPrepare uses a hybrid single/dual-topology protocol, allowing the simultaneous assignment of a single- and a dual-topology region. In this approach, the linear scaling of atomic parameters from one state to the other is performed only for the perturbed atoms of the simulation (dual topology), while the non-perturbed atoms are simulated using the single-topology approach. This reduces the number of perturbations needed for FEP/MD calculations and leads to faster convergence of the simulations. FEPrepare provides the user with all the files needed to run a FEP/MD simulation with NAMD. FEPrepare can be accessed and used at https://feprepare.vi-seem.eu/ (3).
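
The lambda-coupling at the core of such FEP protocols can be sketched in a few lines: perturbed-atom energies are scaled linearly between the two end states, and each window's free-energy difference is estimated by exponential averaging, the Zwanzig relation (the energies below are toy values in reduced units):

```python
import math

def coupled_energy(lam, u_a, u_b):
    """Linear lambda-scaling of the perturbed region:
    U(lambda) = (1 - lambda) * U_A + lambda * U_B."""
    return (1.0 - lam) * u_a + lam * u_b

def zwanzig(delta_us, kT=1.0):
    """Free-energy difference between adjacent lambda windows from
    sampled energy differences: dF = -kT * ln <exp(-dU / kT)>."""
    avg = sum(math.exp(-du / kT) for du in delta_us) / len(delta_us)
    return -kT * math.log(avg)

# Sanity check: if the energy difference is identical in every sampled
# frame, the free-energy difference equals that constant shift.
dF = zwanzig([0.3, 0.3, 0.3])
```

FEPrepare's role is to generate the per-window input files that drive exactly this kind of staged calculation inside NAMD.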

 

References

(1) Cournia, Z.; Allen, B.; Sherman, W. Relative Binding Free Energy Calculations in Drug Discovery: Recent Advances and Practical Considerations. J. Chem. Inf. Model. 2017, 57 (12), 2911–2937.

(2) Dodda, L. S.; Cabeza de Vaca, I.; Tirado-Rives, J.; Jorgensen, W. L. LigParGen Web Server: An Automatic OPLS-AA Parameter Generator for Organic Ligands. Nucleic Acids Res. 2017, 45 (W1), W331–W336.

(3) Zavitsanou, S.; Tsengenes, A.; Papadourakis, M.; Amendola, G.; Chatzigoulas, A.; Dellis, D.; Cosconati, S.; Cournia, Z. FEPrepare: A Web-Based Tool for Automating the Setup of Relative Binding Free Energy Calculations. J. Chem. Inf. Model. 2021, 61 (9), 4131–4138.


Poster File URL:  View Poster File


Author Name:  Alexios Chatzigoulas
Poster Title:  Predicting protein-membrane interfaces of peripheral membrane proteins using ensemble machine learning
Poster Abstract: 

Abnormal protein-membrane attachment is involved in deregulated cellular pathways and in disease. Therefore, the ability to modulate protein-membrane interactions represents a promising new therapeutic strategy for peripheral membrane proteins, which have so far been considered undruggable [1]. A major obstacle in this drug design strategy is that the membrane binding domains of peripheral membrane proteins are usually not known. Fast and efficient algorithms for predicting the protein-membrane interface would shed light on the accessibility of membrane-protein interfaces to drug-like molecules. Herein, we describe an ensemble machine learning methodology and algorithm for predicting membrane-penetrating amino acids. We utilize experimental data available in the literature to train 21 machine learning classifiers and a voting classifier [2]. Evaluation of the ensemble classifier produced a macro-averaged F1 score of 0.92 and an MCC of 0.84 for correctly predicting membrane-penetrating amino acids on previously unseen proteins of a validation set. Predictions were further verified in independent test sets. The Python code is available at https://github.com/zoecournia/DREAMM.
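
For reference, both reported metrics follow directly from the binary confusion matrix; a self-contained sketch (the example counts are illustrative):

```python
import math

def f1(tp, fp, fn):
    """F1 score for one class from its confusion-matrix counts."""
    return 2.0 * tp / (2.0 * tp + fp + fn)

def macro_f1(tp, tn, fp, fn):
    """Unweighted mean of the per-class F1 scores (positive class, and
    negative class with the roles of the counts swapped)."""
    return 0.5 * (f1(tp, fp, fn) + f1(tn, fn, fp))

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient, a balanced single-number
    summary that stays informative on imbalanced residue classes."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den

# A perfect classifier scores 1.0 on both metrics.
perfect_f1 = macro_f1(tp=50, tn=50, fp=0, fn=0)
perfect_mcc = mcc(tp=50, tn=50, fp=0, fn=0)
```

Macro-averaging weights both classes equally, which matters here because membrane-penetrating residues are a small minority of each protein's sequence.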

The allosteric modulation of peripheral membrane proteins by targeting protein-membrane interactions with drug-like molecules represents a promising new therapeutic strategy. We also present a drug design pipeline for drugging protein-membrane interfaces using the DREAMM (Drugging pRotein mEmbrAne Machine learning Method) web server [3]. In the back end, DREAMM runs a fast and robust ensemble machine learning algorithm for identifying protein-membrane interfaces of peripheral membrane proteins. DREAMM also identifies binding pockets in the vicinity of the predicted membrane-penetrating residues in protein conformational ensembles provided by the user or generated by DREAMM. DREAMM is accessible via a web server at https://dreamm.ni4os.eu.

 

References

1) Cournia Z, Chatzigoulas A (2020) Allostery in Membrane Proteins. Curr Opin Struct Biol 62, 197-204. doi: 10.1016/j.sbi.2020.03.006

2) Chatzigoulas A, Cournia Z (2022) Predicting protein-membrane interfaces of peripheral membrane proteins using ensemble machine learning. Brief Bioinform 23, bbab518. doi: 10.1093/bib/bbab518

3) Chatzigoulas A, Cournia Z (2022) DREAMM: a web-based server for drugging protein-membrane interfaces as a novel workflow for targeted drug design. Bioinformatics, submitted


Poster File URL:  View Poster File