
Poster Title:  Prediction of Pressure Reduction Rate in Large Liquid Hydrogen Tank
Poster Abstract: 

Kawasaki Heavy Industries, Ltd. (KHI) established a CO2-free Hydrogen Energy Supply Chain project, which aims to introduce a large amount of liquefied hydrogen (LH2) from Australia to Japan. In this project, it will be necessary to reduce the pressure in the tank to 1 atmosphere for safety reasons. The purpose of this research is to predict the pressure reduction rate through a thermodynamic approach and to investigate the liquid behavior by numerical simulation. The theoretical results agreed with the experimental results, demonstrating that the theoretical model can predict the pressure reduction rate. To reveal the influence of heat conduction / heat transfer and liquid behavior, numerical simulations were conducted with the in-house code 'CIP-LSM' (CIP-based Level Set & MARS). As a preliminary study, one case of a rapid pressure reduction was calculated. The result shows that the boiling gas pushed up the liquid level because of the slow rising speed of the bubbles.
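The thermodynamic model itself is not given in the abstract; as a minimal toy illustration only, the sketch below integrates an isothermal ideal-gas ullage model in which venting at a fixed mass flow rate lowers the tank pressure. The tank volume, temperature, and vent rate are hypothetical stand-ins, not KHI's model or data, and real depressurization also involves flash evaporation that this toy model ignores.

```python
# Toy ullage-pressure model: P = m * R_s * T / V (ideal gas),
# vapor vented at a constant mass flow; isothermal for simplicity.
R_S_H2 = 4124.0      # specific gas constant of hydrogen, J/(kg K)

def vent_pressure(p0_pa, volume_m3, temp_k, mdot_kg_s, dt_s, n_steps):
    """Euler-integrate the ullage pressure while vapor is vented."""
    mass = p0_pa * volume_m3 / (R_S_H2 * temp_k)   # initial vapor mass
    history = [p0_pa]
    for _ in range(n_steps):
        mass = max(mass - mdot_kg_s * dt_s, 0.0)   # vented mass leaves the tank
        history.append(mass * R_S_H2 * temp_k / volume_m3)
    return history

# Depressurize a hypothetical 1000 m^3 ullage from 1.2 atm toward 1 atm.
p = vent_pressure(p0_pa=121_590.0, volume_m3=1000.0, temp_k=21.0,
                  mdot_kg_s=0.05, dt_s=10.0, n_steps=100)
```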

Poster ID:  D-8
Poster File: 
Poster Image: 
Poster URL: 


Poster Title:  Moment representation in the lattice Boltzmann method on massively parallel hardware
Poster Abstract: 
The widely-used lattice Boltzmann method (LBM) for computational fluid dynamics is highly scalable, but also significantly memory bandwidth-bound on current architectures. This work introduces a new regularized LBM implementation that reduces the memory footprint by only storing macroscopic, moment-based data. We show that the amount of data that must be stored in memory during a simulation is reduced by up to 47%. We also present a technique for cache-aware data re-utilization and show that optimizing cache utilization to limit data motion results in a similar improvement in time to solution. These new algorithms are implemented in the hemodynamics solver HARVEY and demonstrated using both idealized and realistic biological geometries. We develop a performance model for the moment representation algorithm and evaluate the performance on Summit.
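The HARVEY implementation is not shown in the abstract; as an illustration of the moment-representation idea, the numpy sketch below stores only the macroscopic moments of a D2Q9 lattice (density, velocity, and the non-equilibrium stress: 6 values instead of 9 populations; the analogous set in D3Q19 is 10 of 19 values, in line with the up-to-47% reduction quoted above) and reconstructs the regularized populations from them.

```python
import numpy as np

# D2Q9 lattice: velocities c_i and weights w_i (c_s^2 = 1/3).
C = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def feq(rho, u):
    """Second-order equilibrium populations."""
    cu = C @ u                                   # c_i . u for each direction
    return W * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(u @ u))

def to_moments(f):
    """Compress 9 populations into the stored moment set (rho, u, Pi1)."""
    rho = f.sum()
    u = (f @ C) / rho
    fneq = f - feq(rho, u)
    pi1 = np.einsum('i,ia,ib->ab', fneq, C, C)   # non-equilibrium stress
    return rho, u, pi1

def from_moments(rho, u, pi1):
    """Reconstruct regularized populations from the stored moments."""
    Q = np.einsum('ia,ib->iab', C, C) - np.eye(2)/3   # Q_i = c_i c_i - cs^2 I
    return feq(rho, u) + 4.5 * W * np.einsum('iab,ab->i', Q, pi1)
```

Compressing and re-expanding leaves rho, u, and Pi1 unchanged, which is why the reduced storage is lossless for a regularized collision operator.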
Poster ID:  D-17
Poster File:  PDF document D-17Vardhan_IHPCSS2019.pdf
Poster Image: 
Poster URL: 


Poster Title:  Qualitative and quantitative prediction of protein complexes through parallel simulations
Poster Abstract: 

In the last few decades, new high-throughput techniques such as mass spectrometry have yielded a large amount of data about protein-protein interactions and domain-domain interactions. This opened the opportunity for computational methods to predict protein complexes. Here we propose a simulation-based protein complex prediction tool, which uses Gillespie's multiparticle algorithm to calculate how the composition and abundance of protein complexes change proteome-wide. The approach combines the classic Gillespie algorithm with diffusion by splitting the simulation space into several thousand sub-volumes (SVs) that are small enough for us to assume that the reactants are able to find each other. At given times the molecules diffuse between the SVs and participate in binding and unbinding reactions. An earlier prototype could run these simulations only with a greatly reduced number of proteins and only considered a two-dimensional space. In our new simulation tool we implemented a three-dimensional simulation space and improved the performance. To achieve this, we are using OpenMP for CPU-level parallelization and plan to use MPI as well as GPUs to further speed up the spatial simulations.
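The operator splitting described above can be sketched in a heavily simplified one-dimensional form: Gillespie reaction steps run inside each sub-volume, and at fixed intervals molecules hop between neighbouring SVs. The single A + B <-> C reaction, the rate constants, and the hop probability below are all made-up placeholders; the actual tool is three-dimensional and parallel.

```python
import random

random.seed(1)

def gillespie_step(counts, k_on, k_off):
    """One Gillespie SSA event in a single sub-volume for A + B <-> C."""
    a, b, c = counts
    r_bind, r_unbind = k_on * a * b, k_off * c
    total = r_bind + r_unbind
    if total == 0.0:
        return counts, float('inf')
    dt = random.expovariate(total)               # time to the next event
    if random.random() < r_bind / total:
        return (a - 1, b - 1, c + 1), dt         # binding
    return (a + 1, b + 1, c - 1), dt             # unbinding

def react(counts, k_on, k_off, t_end):
    """Run SSA events in one SV until the next diffusion time."""
    t = 0.0
    while True:
        new_counts, dt = gillespie_step(counts, k_on, k_off)
        t += dt
        if t > t_end:                            # event falls past t_end
            return counts
        counts = new_counts

def diffuse(svs):
    """Hop each molecule to a random neighbouring SV with probability 0.5."""
    n = len(svs)
    new = [[0, 0, 0] for _ in range(n)]
    for i, counts in enumerate(svs):
        for species, count in enumerate(counts):
            for _ in range(count):
                j = i
                if random.random() < 0.5:
                    j = (i + random.choice((-1, 1))) % n   # periodic boundary
                new[j][species] += 1
    return [tuple(c) for c in new]

def simulate(svs, k_on=0.01, k_off=0.1, t_diffuse=1.0, n_rounds=20):
    """Alternate in-SV reactions with diffusion between SVs."""
    for _ in range(n_rounds):
        svs = [react(c, k_on, k_off, t_diffuse) for c in svs]
        svs = diffuse(svs)
    return svs

svs = simulate([(50, 50, 0), (0, 0, 0), (20, 10, 0), (0, 0, 0)])
```

Molecule totals are conserved: binding and unbinding trade A and B against C, and diffusion only relocates molecules.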


Poster ID:  D-14
Poster File:  Powerpoint 2007 presentation KHBence_posterlikestuff.pptx
Poster Image: 
Poster URL: 


Poster Title:  Geometry optimization of polymers via ONIOM combined elongation method
Poster Abstract: 

Quantum chemical methods are computationally expensive, especially for large molecules. Therefore, many approximate reduced-scaling methods have been proposed. In this work, the elongation (ELG) method, one such approach, is considered. ELG is based on a simulation of the polymerization process. In this method, a polymer is divided into small units. At every step of the calculation, the molecular orbitals of only several units are calculated and then localized. In the next step, the next several units are calculated, and so on, until the last unit is reached. Reduced scaling is achieved since only a small part of the polymer is considered throughout the whole calculation. In previous works, electrostatic embedding was implemented via the introduction of partial atomic charges at the positions of atoms in units not included in the current step of the calculation. This approach improved the reproduction of the electronic structure of the full polymer via the inclusion of long-range electrostatic interactions. In this work, we also introduce mechanical embedding in ELG, based on the ONIOM method. Following ONIOM, the whole system is divided into parts simulated at different levels of theory: a high-level part and a low-level part. In this approach, we consider the simulated units as the high-level part and all other units as the low-level part. Therefore, the high-level part is conventional ELG, and the low-level part provides mechanical constraints for units in the high-level part. Mechanical embedding combined with electrostatic embedding inside conventional ELG is called elongation with mechanical and electrostatic embedding (ELG-IMEE). Such an approach improves the geometry optimization of the full polymer. Results of optimization for a number of polymers will be shown in the presentation.
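For reference, the subtractive ONIOM scheme underlying this kind of embedding combines three energies: E = E_high(model) + E_low(real) - E_low(model). The sketch below expresses only this bookkeeping, with trivial per-atom energy functions standing in for real quantum-chemistry calculations (the numbers and the per-atom form are hypothetical, not the methods used in the poster).

```python
def oniom_energy(e_high, e_low, model, real):
    """Subtractive ONIOM: high-level model region plus low-level correction."""
    return e_high(model) + e_low(real) - e_low(model)

# Placeholder "levels of theory": per-atom energies with made-up values.
e_high = lambda atoms: -1.10 * len(atoms)   # stand-in for the high-level method
e_low  = lambda atoms: -1.00 * len(atoms)   # stand-in for the cheap method

model_units = ['C'] * 6                      # units treated at high level
real_system = ['C'] * 20                     # the full polymer

energy = oniom_energy(e_high, e_low, model_units, real_system)
```

The low-level terms cancel over the model region, so the low-level method only contributes the environment's effect on the high-level part, which is exactly the role the low-level units play in ELG-IMEE.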


Poster ID:  B-6
Poster File:  PDF document B-6.pdf
Poster Image: 
Poster URL: 


Poster Title:  Bifidelity Data-assisted Neural Networks in Nonintrusive Reduced-order Modeling
Poster Abstract: 

We present a new nonintrusive reduced basis method for settings where a cheap low-fidelity model and an expensive high-fidelity model are available. The method relies on proper orthogonal decomposition (POD) to generate the high-fidelity reduced basis and on a shallow multilayer perceptron to learn the high-fidelity reduced coefficients. In contrast to other methods, one distinct feature of the proposed method is to incorporate features extracted from the low-fidelity data as the input features. This not only improves the predictive capability of the neural network but also decouples the high-fidelity simulation from the online stage. Due to its nonintrusive nature, the method is applicable to general parameterized problems.
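As a schematic of the offline/online split, the numpy sketch below builds a POD basis from synthetic high-fidelity snapshots and fits a surrogate from low-fidelity features to the high-fidelity reduced coefficients. A linear least-squares map stands in for the shallow multilayer perceptron, and the snapshot family is invented for illustration, not taken from the poster.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic parameterized snapshots u(x; mu) for a set of training mus.
x = np.linspace(0.0, 1.0, 200)
mus = rng.uniform(0.5, 2.0, size=40)
hi = np.stack([np.sin(mu * np.pi * x) + 0.3 * np.cos(mu * x) for mu in mus])
lo = np.stack([np.sin(mu * np.pi * x[::10]) for mu in mus])   # coarse model

# Offline: POD basis from high-fidelity snapshots (rows = snapshots).
_, s, vt = np.linalg.svd(hi, full_matrices=False)
basis = vt[:8]                          # keep 8 POD modes
coeffs = hi @ basis.T                   # high-fidelity reduced coefficients

# Surrogate: low-fidelity features -> reduced coefficients (linear stand-in
# for the shallow MLP described in the abstract).
feats = np.hstack([lo, np.ones((len(mus), 1))])
w, *_ = np.linalg.lstsq(feats, coeffs, rcond=None)

# Online: for a new mu, only the cheap low-fidelity model is evaluated.
mu_new = 1.3
f_new = np.append(np.sin(mu_new * np.pi * x[::10]), 1.0)
u_pred = (f_new @ w) @ basis
u_true = np.sin(mu_new * np.pi * x) + 0.3 * np.cos(mu_new * x)
```

The online stage never touches the high-fidelity solver, which is the decoupling property the abstract emphasizes.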

Poster ID:  B-5
Poster File:  PDF document ICERM.pdf
Poster Image: 
Poster URL: 


Poster Title:  Tools for Easing your Work in Numerical Simulations
Poster Abstract: 

My research has always concentrated on improving numerical simulations. I first worked on the automatic detection of parallelism in source code and the generation of an efficient counterpart for heterogeneous architectures. The proposal generates OpenMP code for multicores and OpenHMPP (an OpenACC precursor) code for GPUs, focusing on locality of reference. It has been licensed and is being exploited by the spin-off company Appentra for the creation of the Parallware tools.

I then continued collaborating on improving the locality of reference in scientific codes by rebuilding affine loop nests from a trace of memory accesses. The method requires neither user intervention nor access to the source or binary code. Potential applications include hardware and software prefetching, data placement for locality optimization, dependence analysis, and the optimal design of embedded memory systems.

My next project is to break the traditional paradigm in the analysis of the results of computational simulations. Analysis typically starts after the simulation has finished, but thanks to the distributed streaming dataflow engine Apache Flink, data can be processed while it is being generated. Using the same cluster for simulation and analysis could then reduce communications. Resources would need to be managed carefully, and real-time resource scaling for Big Data workloads seems a good point to explore.

Poster ID:  A-2
Poster File:  PDF document slides.pdf
Poster Image: 
Poster URL:  http://gac.udc.es/~jandion/


Poster Title:  Minimizing User Input Effort for Chinese and English Languages
Poster Abstract: 

The goal of the larger sub-project is to minimize the gestures required to input text in any world language. Chinese and English (presumably two very different languages) are used for prototyping. The poster discusses the graphical user interface as well as the software strategy for producing the underlying data structures. Preliminary research suggests that effectively 99+% of text can be input with 4 or fewer gestures. In particular:

  • 1 Gesture: Chinese(33%) English(44%)
  • 2 Gestures: Both Languages (~70%)
  • 3 Gestures: Both Languages (~97%)
  • 4 Gestures: "Approaching"/"Effective" (100%) 
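Coverage figures like those above can be computed from a frequency-ranked dictionary. The sketch below shows the bookkeeping with a tiny made-up corpus and a hypothetical gestures-per-word rule; the real gesture mapping and frequency data belong to the poster, not this sketch.

```python
from collections import Counter

def coverage_by_gestures(word_freq, gestures_for):
    """Cumulative fraction of running text reachable within k gestures."""
    total = sum(word_freq.values())
    by_k = Counter()
    for word, freq in word_freq.items():
        by_k[gestures_for(word)] += freq       # bucket text mass by gesture count
    coverage, running = {}, 0
    for k in sorted(by_k):
        running += by_k[k]
        coverage[k] = running / total          # cumulative coverage at k gestures
    return coverage

# Hypothetical toy data: word frequencies and a stand-in gesture-count rule.
freqs = {'the': 500, 'of': 300, 'and': 250, 'hydrogen': 20, 'lattice': 10}
cov = coverage_by_gestures(freqs, lambda w: min(len(w), 4))
```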
Poster ID:  B-15
Poster File:  PDF document Minimizing Chinese Input User Effort.pdf
Poster Image: 
Poster URL: 


Poster Title:  Utilizing Visualization and Interactivity to Analyze Scientific Data
Poster Abstract: 

As datasets continue to become larger and more complex, simulations and experiments must be adjusted to handle such data, and their output is generally just as large and complex, if not more so. Because of this, we need alternative ways not only to visualize such data but also to interact with it. One way to do this is through virtual and augmented reality (VR/AR). By adding a step to the traditional pipeline of using scientific visualization for HPC simulation output, we can visualize and interact with additional sets of data that would previously have made a visualization too difficult to understand due to information overload. In this way we can interact with the data in a meaningful fashion that could potentially lead to additional scientific discoveries.

Poster ID:  A-17
Poster File:  Powerpoint 2007 presentation Heinemann_Poster.pptx
Poster Image: 
Poster URL: 


Poster Title:  Hydrodynamics of flocking chiral ferromagnetic nanoparticles
Poster Abstract: 

Nanoparticles have applications in many fields. For example, they are used as drug carriers, as they can keep drugs out of healthy tissue and slip selectively into cancerous tissue (actively targeted drug delivery). We perform nanoscale simulations to study their hydrodynamic behavior using the fluctuating lattice-Boltzmann method and its extensions developed in the Softsimu group [1], where, for the first time, thermal fluctuations were implemented in the lattice-Boltzmann method in a physically correct way. In our simulations, chiral ferromagnetic nanoparticles energized by a global rotating magnetic field are propelled through the fluid. We study their individual and collective behavior and report on the emergence of large-scale collective motion: global rotation in the system of swimming nanoparticles. We identify the spontaneous symmetry breaking by the rotation of the left-handed chiral particles and the rotational alignment of particle velocities as the physical mechanisms leading to this behavior. Our findings provide new information on the onset of spatial and temporal coherence in a large class of active synthetic systems.

Reference:

[1] Ollila, S. T., Denniston, C., Karttunen, M., & Ala-Nissila, T. (2011). Fluctuating lattice-Boltzmann model for complex fluids. The Journal of Chemical Physics, 134(6), 064902.

Poster ID:  D-2
Poster File:  PDF document slides.pdf
Poster Image: 
Poster URL: 


Poster Title:  Extensive deep neural networks for transferring small scale learning to large scale systems
Poster Abstract: 

We present a physically-motivated topology of a deep neural network, the extensive deep neural network (EDNN), that can efficiently infer extensive parameters (such as energy, entropy, or number of particles) of arbitrarily large systems, doing so with linear scaling in system size. We use a form of domain decomposition for training and inference, where each sub-domain (tile) is comprised of a non-overlapping focus region surrounded by an overlapping context region. The size of these regions is motivated by the physical interaction length scales of the problem. We demonstrate the application of EDNNs to three physical systems: the Ising model and two hexagonal/graphene-like datasets. In the latter, an EDNN was able to make total energy predictions of a 60-atom system, with accuracy comparable to density functional theory (DFT), in 57 milliseconds. Additionally, EDNNs are well suited for massively parallel evaluation, as no communication is necessary during neural network evaluation. We demonstrate that EDNNs can be used to make an energy prediction of a two-dimensional 35.2-million-atom system, covering over 1.0 μm² of material, at an accuracy comparable to DFT, in under 25 minutes. Such a system exists on a length scale visible with optical microscopy and larger than some living organisms.
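The focus/context decomposition can be illustrated without a trained network: for a nearest-neighbour Ising model, assigning each bond to the focus site on its left/top makes the independent per-tile contributions sum exactly to the total energy, which is the extensivity property EDNNs exploit. The exact per-tile function below is a stand-in for the learned network, and the lattice size and couplings are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
J = 1.0
spins = rng.choice([-1, 1], size=(16, 16))      # periodic Ising lattice

def direct_energy(s):
    """Total nearest-neighbour Ising energy, each bond counted once."""
    return -J * (np.sum(s * np.roll(s, -1, axis=0)) +
                 np.sum(s * np.roll(s, -1, axis=1)))

def tile_energy(tile):
    """Per-tile contribution: the 1x1 focus site plus its right/down context.
    A trained EDNN would replace this hand-written function."""
    focus = tile[0, 0]
    return -J * focus * (tile[1, 0] + tile[0, 1])

def ednn_style_energy(s):
    """Sum independent tile evaluations (trivially parallel, no communication)."""
    n, m = s.shape
    total = 0.0
    for i in range(n):
        for j in range(m):
            # 2x2 window = focus site plus overlapping context region
            tile = s[np.ix_([i, (i + 1) % n], [j, (j + 1) % m])]
            total += tile_energy(tile)
    return total

e_direct = direct_energy(spins)
e_tiled = ednn_style_energy(spins)
```

Because no tile needs any other tile's result, the double loop can be distributed over arbitrarily many workers, which is why the evaluation needs no communication.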

Poster ID:  B-8
Poster File: 
Poster Image: 
Poster URL: