
Poster Title:  Towards Dynamic Pharmacophore Models Through the Use of CG MD Simulations
Poster Abstract: 

Pharmacophore models play a crucial role in computer-aided drug discovery, e.g. in virtual screening, de novo drug design, and lead optimization. With the increasing number of elucidated protein structures, structure-based methods for developing pharmacophore models have been gaining popularity and are becoming particularly important. A number of studies have combined such methods with MD simulations to model protein flexibility. In the MARTINI force field, four heavy atoms are represented by a single interaction centre. Four types of interactions have been parametrised in the model: polar, charged, non-polar, and apolar. These interactions coincide with some of the typical features found in a pharmacophore model, allowing the CG particles to be used as pharmacophoric probes. These probes are then used in CG MD simulations to explore protein interaction propensities. This approach, in combination with cavity detection methods, allows for the identification of potential ligand binding sites and the detection of ‘hotspot’ interactions that enhance ligand binding. Using a wide range of test targets, we demonstrate the ability of this method to recapitulate the positions of moieties that contribute to binding interactions commonly observed for ligands, generating a comprehensive and accurate map of the interactions that play a role in ligand binding. The interaction sites identified are then compared with holo crystal structures and are shown to correctly locate the moieties that contribute to the binding interactions. By calculating the ΔGint values of the interaction maps, we are able to further focus on the areas of interest and identify which parts of the moieties are the driving force behind binding.

Poster ID:  C-15
Poster File: 
Poster Image: 
Poster URL: 


Poster Title:  Cross-phenotypic Association Testing Using Biobank PheWAS Results
Poster Abstract: 

Mapping the relationships among phenotypic traits is an essential way to help understand the mechanisms underlying common human diseases. However, limited research has been conducted to investigate the interconnections among diseases using a combination of genotype data and phenotypic information, and existing methods for evaluating such associations do not scale to large numbers of phenotypic trait pairs. Our study uses parallel programming to efficiently quantify the cross-phenotypic associations among 1,403 common human diseases with a hypergeometric test, based on phenotype information extracted from electronic health records (EHR) and phenome-wide association study (PheWAS) results from the UK Biobank.
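The hypergeometric test for one disease pair can be sketched as follows. This is a generic illustration with made-up counts, not the poster's code: given M tested variants, of which n are associated with disease A and N with disease B, the p-value is the probability of observing at least k shared associations by chance.

```python
from math import comb

def hypergeom_sf(k, M, n, N):
    """P(X >= k) for a hypergeometric draw: N items drawn from a pool
    of M containing n 'successes' (here: the associations of disease B
    drawn from all variants, n of which are associated with disease A)."""
    total = comb(M, N)
    return sum(comb(n, i) * comb(M - n, N - i)
               for i in range(k, min(n, N) + 1)) / total

# Hypothetical counts: 1,000 tested variants, 50 associated with
# disease A, 40 with disease B, 10 shared between the two.
p = hypergeom_sf(10, 1000, 50, 40)
assert 0.0 < p < 0.05   # far more overlap than the ~2 shared expected by chance
```

In practice the study evaluates this test for all ~1,403 × 1,403 / 2 disease pairs, which is what motivates the parallel implementation.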

Poster ID:  D-20
Poster File:  PDF document Cross-phenotypic Association Testing Using Biobank PheWAS Results.pdf
Poster Image: 
Poster URL: 


Poster Title:  Unlocking the History of the Galaxy: Modeling the Milky Way Stellar Halo
Poster Abstract: 

The stars currently present in the outskirts of the Milky Way (called the “stellar halo”) preserve a record of the Galaxy’s formation history. Galaxies form primarily through mergers, absorbing smaller galaxies to grow larger. While most of the stars in the center and disk of a galaxy are formed in situ, the stars in the stellar halo primarily originated from the many small galaxies that the central host galaxy accreted over billions of years. Currently, though, we lack ways to identify which halo stars originated in which dwarf galaxies, or even to reliably identify which stars were accreted at all. Selecting stars with specific chemical signatures may provide a way forward. Using high-resolution cosmological simulations, we find that r-II halo stars may have chiefly originated in early dwarf galaxies that merged to form the Milky Way. The r-II stars we observe today could thus play a key role in understanding the smallest building blocks of the Milky Way. This work is a first step towards creating a detailed theoretical model of stellar halo evolution. With such a model, we will be able to interpret forthcoming stellar halo data and obtain a deep understanding of how our galaxy, and the galaxies around us, formed.

Poster ID:  D-10
Poster File:  Powerpoint 2007 presentation kbrauer_ihpcss.pptx
Poster Image: 
Poster URL: 


Poster Title:  Simulations of black holes and gravitational waves with discontinuous Galerkin methods
Poster Abstract: 

I develop numerical techniques to simulate black holes, neutron stars, and their gravitational radiation, with a particular focus on enabling current and future gravitational wave observatories to detect signals from these astrophysical objects. To this end, I work with the Simulating eXtreme Spacetimes (SXS) collaboration on a next-generation numerical relativity code that aims to dynamically resolve the violent interactions between gravity and matter that make these simulations so demanding. We incorporate nodal discontinuous Galerkin methods, task-based parallelism, and adaptive mesh refinement to distribute the computation across supercomputers. My main area of research is the development of numerical schemes for the general relativistic initial data problems in our simulations. Since these involve solving large linear systems of equations, I am currently developing preconditioning techniques that are optimized for our simulations and remain highly parallelizable.
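As a minimal illustration of the kind of solver involved — a hypothetical sketch, not the SXS code, which uses far more elaborate, parallel preconditioners — here is a Jacobi-preconditioned conjugate gradient method for a small symmetric positive-definite system:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, v):
    return [dot(row, v) for row in A]

def jacobi_pcg(A, b, tol=1e-10, max_iter=200):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner:
    each residual is rescaled by the inverse diagonal of A."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                                   # r = b - A x with x = 0
    minv = [1.0 / A[i][i] for i in range(n)]      # Jacobi preconditioner
    z = [minv[i] * r[i] for i in range(n)]
    p = list(z)
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [z[i] + beta * p[i] for i in range(n)]
    return x

# Example: a small SPD system (hypothetical numbers)
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 2.0]]
b = [1.0, 2.0, 4.0]
x = jacobi_pcg(A, b)
```

For the elliptic initial data problems mentioned above, the matrices are vastly larger and the preconditioner must itself be applied in parallel, but the structure of the iteration is the same.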


Poster ID:  A-3
Poster File: 
Poster Image: 
Poster URL:  https://trello.com/b/g5Tcs6yy/simulations-of-black-holes-and-gravitational-waves-with-discontinuous-galerkin-methods


Poster Title:  Numerical Study of Heat Transfer Phenomena from Oil Flow to Air Flow in Heat Exchangers for Aero Engines
Poster Abstract: 
Nowadays, fin-type heat exchangers are used for oil cooling in aero engines. The air-cooling fin itself is a familiar device. However, air-cooling fins in aero engines are exposed to faster flows than conventional fins, so it is necessary to understand the optimal fin shapes. In our previous study, several calculations of the flow field around the fin-type heat exchanger were performed for that purpose.

During that study, we faced difficulties performing the calculation without an HPC system. Previously, we modeled the heat transfer from the oil flow to the solid structure and simulated only the heat transfer from the solid structure to the air. However, it has been found that the heat transfer between the oil and the solid structure is also an important part of the phenomena around air-cooling fins. We therefore need to simulate the heat transfer from the oil to the air through the solid structure simultaneously, which requires HPC.
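The oil-to-air coupling can be illustrated with a deliberately simple, hypothetical model (not the in-house code, which solves the full 3-D conjugate problem with flow): steady 1-D conduction from a hot oil side to a cold air side through layers of different conductivity, treated as thermal resistances in series.

```python
def steady_profile(k_iface, t_left, t_right):
    """Exact steady-state 1-D conduction through unit-thickness layers
    in series: the same heat flux q crosses every layer, and each layer
    of conductivity k contributes a temperature drop q / k."""
    q = (t_left - t_right) / sum(1.0 / k for k in k_iface)  # heat flux
    T = [t_left]
    for k in k_iface:
        T.append(T[-1] - q / k)
    return T, q

# Hypothetical values: 10 well-conducting solid-wall layers followed by
# 10 poorly conducting air-side layers, oil at 100, air at 20.
T, q = steady_profile([50.0] * 10 + [1.0] * 10, t_left=100.0, t_right=20.0)
```

With these made-up numbers, almost the entire temperature drop occurs across the poorly conducting air side, which is one way to see why the air-side fin shape dominates the design — and why the coupled oil/solid/air problem must be solved together rather than piecewise.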

The computational code for the new study is still under development. In this presentation, the ongoing work on the new in-house code is shown.

Poster ID:  D-3
Poster File:  PDF document D-3.pdf
Poster Image: 
Poster URL: 


Poster Title:  Hierarchical Coded Matrix Multiplication
Poster Abstract: 

Coded matrix multiplication is a technique to enable straggler-resistant multiplication of large matrices in distributed computing systems. In this work, we first present a conceptual framework to represent the division of work amongst processors in coded matrix multiplication as a cuboid partitioning problem. This framework allows us to unify existing methods and motivates new techniques. Building on this framework, we propose hierarchical coded matrix multiplication which is able to exploit the work completed by all processors (fast and slow), rather than ignoring the slow ones, even if the amount of work completed by stragglers is much less than that completed by the fastest workers. On Amazon EC2, we achieve a 37% improvement in average finishing time compared to non-hierarchical schemes.
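The basic idea of coded matrix multiplication can be sketched with a toy example: split the matrix into row blocks, add a parity block, and use linearity to recover a straggler's result from the others. This shows only the plain erasure-coded scheme, not the hierarchical scheme of the poster, which additionally exploits the partial work completed by stragglers.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def vsub(u, v):
    return [a - b for a, b in zip(u, v)]

# Split A into two row blocks plus one parity block A3 = A1 + A2,
# so any 2 of the 3 worker results determine the full product A x.
A = [[1, 2], [3, 4], [5, 6], [7, 8]]
x = [1, 1]
A1, A2 = A[:2], A[2:]
A3 = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(A1, A2)]

# Workers compute their block products; suppose worker 2 straggles.
y1 = matvec(A1, x)
y3 = matvec(A3, x)
y2_recovered = vsub(y3, y1)   # linearity: (A1 + A2) x - A1 x = A2 x
assert y1 + y2_recovered == matvec(A, x)
```

The hierarchical scheme generalizes this by subdividing each worker's task, so that even a slow worker's partially completed sub-blocks contribute to the decoding.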



Poster ID:  A-5
Poster File:  PDF document HMM.pdf
Poster Image: 
Poster URL: 


Poster Title:  ESPRESO - Highly Parallel Finite Element Package for Engineering Simulations
Poster Abstract: 

The latest technological advances in computing have brought a significant change in the concept of new product design, production control, and autonomous systems. In the last few years, we have witnessed a considerable transition to virtual prototyping and gradual pressure to integrate a large part of the industrial sector into the fourth industrial revolution, or Industry 4.0.

The main objective of our work is to create a robust open-source package applicable to a wide range of complex engineering simulations in areas such as mechanical engineering, civil engineering, biomechanics, and the energy industry. The free licence of the developed package allows automated simulation chains, such as systems for shape or topology optimization, to be built on top of the ESPRESO framework. For all the framework components, the development of highly scalable methods that fully utilize the computational capacity of state-of-the-art supercomputers was strictly enforced. ESPRESO is a massively parallel framework based on the finite element method for engineering applications.

Poster ID:  A-9
Poster File: 
Poster Image: 
Poster URL: 


Poster Title:  Data-enhanced high-fidelity earthquake simulator for improved seismic risk analysis
Poster Abstract: 

Nowadays, modern disaster mitigation strategies are strongly supported by wider access to solid and efficient computational resources, as well as to vast labeled databases. Earthquake engineering and seismology are among the disciplines that thrive on this auspicious tide.
In the past, the analysis of the very limited amount of then-available seismic data steered the comprehension of the earthquake phenomenon. Supported by solid geophysical and mathematical models, observational geophysics drove earthquake prediction science, fostered by an increasing number of high-quality seismic traces recorded worldwide. From these, empirical and statistical methods and correlations sprouted, providing engineers with reliable estimations of earthquake intensity measures and realistic synthetic waveforms, alongside the associated uncertainty.
When High-Performance Computing started gaining ground in seismology, it assumed a prominent role in constructing 3-D numerical deterministic physics-based earthquake scenarios and in rendering realistic ground motion in urban areas. However, numerical simulations did not supplant data analysis, due to (1) the unresolved trade-off between computational burden and desired wave-motion resolution; and (2) the difficulty of performing several realizations of the same earthquake scenario, spanning the complex and multivariate uncertainty space, so as to attach consistent statistics to the synthetic prediction.

Therefore, a new strategy is emerging that overcomes the traditional dichotomy between data analysis and numerical simulations, providing a data-integrated computational tool for earthquake prediction and uncertainty quantification. This has been made possible by the prominent role of machine learning in modern engineering.

This talk clarifies some aspects of this strategy, showing possible ways to exploit numerical simulations in conjunction with data analysis.

First of all, I present some achievements we made in constructing high-fidelity 3-D broad-band (0-7Hz) Source-to-Site (BBS2S) earthquake scenarios, as the result of the synergistic efforts of the three French research teams at CentraleSupélec, Institut de Physique du Globe de Paris (IPGP) and the Commissariat à l'Énergie Atomique et aux Énergies Alternatives (CEA), within the SINAPS@ project framework.
The innovative holistic philosophy used to investigate a ground-shaking event is portrayed. With this all-embracing approach in mind, the community research aims at providing a High-Performance (HP) and portable multi-tool computational platform capable of dealing with the manifold nature of the earthquake phenomenon itself, i.e. spanning the simulation of the source mechanism, the reproduction of the heterogeneous rheology of the geomaterials composing the Earth's crust, and the presence of surface/buried topography, bathymetry, and the ocean. All these features feed a high-performance 3-D wave propagation numerical solver capable of virtually reproducing the multi-scale, multi-dimensional earthquake phenomenon with ever-decreasing numerical dispersion.

In a second phase, I present the ANN2BB kernel, i.e. a tool to produce broad-band (0-30Hz) synthetic ground-motion waveforms by exploiting BBS2S numerical simulations and Artificial Neural Networks (ANN). ANN2BB was crafted during our collaboration with Politecnico di Milano (Italy) with the intent of addressing the present need to transition from engineering seismology studies, limited to 10 Hz at most, towards structural dynamics analyses, which need to be fed with realistic input motion up to 30 Hz. The proposed approach makes use of ANNs trained on a set of strong-motion records to predict the response spectral ordinates at short periods. The essence of the procedure is, first, to use the trained ANN to estimate the short-period response spectral ordinates, using as input the long-period ones obtained by the BBS2S, and, then, to enrich the BBS2S time histories at short periods by iteratively scaling their Fourier spectrum, with no phase change, until their response spectrum matches the ANN target spectrum. The proposed approach realistically reproduces the engineering features of earthquake ground motion, including the peak values and their spatial correlation structure.
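The core operation — rescaling Fourier amplitudes while preserving phases — can be sketched in simplified form. In ANN2BB the target is a response spectrum, so the scaling must be iterated; here, purely for illustration, the target is a Fourier amplitude spectrum, for which a single pass suffices. Everything below is a hypothetical sketch, not the ANN2BB code.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * t / n) for f in range(n)).real / n
            for t in range(n)]

def match_amplitude(x, target_amp):
    """Rescale each Fourier amplitude of x to target_amp while keeping
    the phase of every bin unchanged (the 'no phase change' step)."""
    X = dft(x)
    Y = [target_amp[f] * (Xf / abs(Xf)) if abs(Xf) > 1e-12 else 0.0
         for f, Xf in enumerate(X)]
    return idft(Y)
```

In the actual procedure, the amplitude correction at each pass is driven by the ratio of the ANN-predicted response spectrum to the current response spectrum, and the loop runs until the two spectra agree at the short periods.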

Finally, the talk concludes by showing the outcomes of BBS2S and ANN2BB applied to three test cases, including the 2007 Niigata Chuetsu Oki earthquake and the seismic behaviour of Unit 7 of the Kashiwazaki-Kariwa Nuclear Power Plant.

Poster ID:  A-15
Poster File:  PDF document Data-enhanced high-fidelity earthquake simulator for improved seismic risk analysis.pdf
Poster Image: 
Poster URL: 


Poster Title:  Big data assimilation for weather forecasting
Poster Abstract: 

Data assimilation is a statistical method to estimate the state of a system (and its time evolution) by combining observations with numerical model simulations. Numerical weather forecasting has greatly improved since its first operational use in the 1950s. Along with the exponential growth of computational resources and of high-resolution observation data from remote sensing technology, the development of forecast models and data assimilation methods has also played an important role. The effective parallelization of the computation is also crucially important for today's numerical weather forecasting in two different ways: domain decomposition and ensemble forecasting.
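The core idea — combining a model forecast with an observation, weighted by their respective uncertainties — can be sketched for a single scalar state. This is the textbook Kalman update with made-up numbers, not the operational system:

```python
def assimilate(forecast, var_f, obs, var_o):
    """Combine a model forecast and an observation of the same scalar
    state, weighting each by the inverse of its error variance."""
    gain = var_f / (var_f + var_o)                 # Kalman gain
    analysis = forecast + gain * (obs - forecast)  # corrected state
    var_a = (1.0 - gain) * var_f                   # analysis error variance
    return analysis, var_a

# Hypothetical numbers: forecast 10.0 with variance 4, observation 12.0
# with variance 4 -> analysis 11.0 with variance 2.
analysis, var_a = assimilate(10.0, 4.0, 12.0, 4.0)
```

The analysis is always more certain than either input alone; ensemble methods estimate the forecast variance from an ensemble of model runs, which is one of the two parallelization axes mentioned above.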

Poster ID:  A-11
Poster File:  PDF document Slides_IHPCSS_Amemiya_short.pdf
Poster Image: 
Poster URL: 


Poster Title:  Parallelization of Matrix Partitioning in Construction of Hierarchical Matrices using Task Parallel Languages
Poster Abstract: 

A hierarchical matrix (H-matrix) is an approximate format for representing the N*N correlations of N objects. H-matrix construction proceeds by partitioning a matrix into submatrices and then calculating the element values of these submatrices. This research proposes implementations of the matrix partitioning using the task parallel languages Cilk Plus and Tascell. The matrix partitioning is divided into two steps: cluster tree construction, which divides the objects into clusters hierarchically, and block cluster tree construction, which finds all cluster pairs in the cluster tree that satisfy an admissibility condition. Because the cluster tree constructed and traversed in these steps is unpredictably unbalanced, we expect task parallel languages to parallelize both steps efficiently. To obtain sufficient parallelism in the cluster tree construction, I not only execute recursive calls in parallel but also parallelize the work inside each recursive step. In the block cluster tree construction, I assign each worker its own space to store the cluster pairs, avoiding locks. As a result, compared to a sequential implementation in C, I achieved up to a 12.2-fold speedup with Cilk Plus and a 14.5-fold speedup with Tascell.
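The recursive structure of the cluster tree construction can be sketched as follows — a hypothetical Python sketch of geometric bisection, not the Cilk Plus/Tascell code; in those languages the two child calls marked below would be spawned as parallel tasks, which is why the unbalanced recursion suits task parallelism:

```python
def build_cluster_tree(indices, points, leaf_size=2):
    """Recursive bisection cluster tree: split each cluster along the
    coordinate axis with the largest extent until leaves are small."""
    if len(indices) <= leaf_size:
        return {"indices": indices, "children": []}
    dim = max(range(len(points[0])),
              key=lambda d: max(points[i][d] for i in indices) -
                            min(points[i][d] for i in indices))
    order = sorted(indices, key=lambda i: points[i][dim])
    mid = len(order) // 2
    left = build_cluster_tree(order[:mid], points, leaf_size)    # cilk_spawn here
    right = build_cluster_tree(order[mid:], points, leaf_size)
    return {"indices": order, "children": [left, right]}
```

The block cluster tree step then walks pairs of nodes of this tree, keeping the pairs that satisfy the admissibility condition; in the proposed implementation each worker writes its pairs into its own buffer, avoiding locks.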

Poster ID:  B-13
Poster File: 
Poster Image: 
Poster URL: