CFD: parallel sustainability with TPLS

Author: Mike Jackson
Posted: 28 Apr 2014 | 16:20

Mathematical modelling of complex fluid flows has practical applications in many industrial sectors, including energy, the environment and health. Flow modelling can cover oil and gas flows in long-distance pipelines or refinery distillation columns, liquid cooling of micro-electronic devices, carbon capture and cleaning processes, water treatment plants, blood flows in arteries, and enzyme interactions. Multi-phase flow modelling simulates flows consisting of gases, liquids and solids within a single system, e.g. steam and water or oil and gas within a pipe, or coal dust in the air.

Simulations of this sort are highly computationally intensive, so high-performance computing (HPC) resources are required. However, current commercial computational fluid dynamics (CFD) codes are limited by a lack of efficient multi-phase models, poor numerical resolution and inefficient parallelisation. This severely restricts their application within both academia and industry. Industry, for example, continues to rely on empirical modelling and trial-and-error pilot-scale runs, which incur significant capital costs and delays before commissioning.

TPLS

TPLS (Two-Phase Level-Set) is a CFD code developed by Prashant Valluri of the Institute of Materials and Processes, School of Engineering at The University of Edinburgh and Lennon Ó Náraigh of the School of Mathematical Sciences, University College Dublin. TPLS uses an ultra-high-resolution 3D Direct Numerical Simulation approach combined with the Level-Set method for tracking the developing interface between phases. TPLS employs a 2D message passing interface (MPI) domain decomposition coupled with a hybrid OpenMP parallelisation scheme, allowing it to scale to thousands of CPU cores. TPLS has been optimised for HECToR, the UK's former national supercomputer; this performance and scalability work was undertaken in collaboration with Iain Bethune and David Scott of EPCC and funded by HECToR dCSE and EPSRC grants. TPLS is designed to address the limitations of commercial CFD codes and to provide a simulation capability that is unrivalled in computational efficiency and numerical accuracy.
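
To give a flavour of how such a scheme fits together, the sketch below sets up a 2D Cartesian grid of MPI processes, with OpenMP threads sharing the work inside each process's subdomain. It is a minimal, hypothetical illustration of the technique, not code from TPLS itself.

    ! Minimal sketch of a 2D MPI domain decomposition with hybrid OpenMP,
    ! in the spirit of TPLS's parallelisation scheme. All names are
    ! illustrative; this is not TPLS code.
    program hybrid_decomp_sketch
      use mpi
      use omp_lib
      implicit none
      integer :: ierr, rank, nprocs, comm2d, nthreads
      integer :: dims(2), coords(2)
      logical :: periods(2)

      call MPI_Init(ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

      ! Let MPI pick a balanced 2D process grid, e.g. 32 x 32 for 1024 ranks.
      dims = 0
      call MPI_Dims_create(nprocs, 2, dims, ierr)

      ! Each rank owns one rectangular subdomain of the flow domain.
      periods = .false.
      call MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, .true., &
                           comm2d, ierr)
      call MPI_Comm_rank(comm2d, rank, ierr)
      call MPI_Cart_coords(comm2d, rank, 2, coords, ierr)

      ! Within a subdomain, grid-point loops are shared among OpenMP
      ! threads -- the hybrid part of the scheme.
      nthreads = omp_get_max_threads()

      if (rank == 0) then
         print '(a,i0,a,i0,a,i0,a)', 'Process grid ', dims(1), ' x ', &
              dims(2), ' with up to ', nthreads, ' threads per rank'
      end if

      call MPI_Finalize(ierr)
    end program hybrid_decomp_sketch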

The recent launch of ARCHER, the UK's new national supercomputing service and successor to HECToR, also saw the launch of a complementary eCSE (Embedded Computational Science and Engineering) funding scheme. The TPLS team applied for eCSE funding to continue the parallelisation and optimisation of TPLS. At the same time, the team applied to The Software Sustainability Institute for consultancy in software development and open source best practice as part of the Institute's open call. Both applications were successful, and EPCC is now working with the TPLS team on both projects.

Parallelisation and optimisation

The ARCHER eCSE project will continue to improve the robustness, flexibility and performance of TPLS. This will allow TPLS to continue to serve Lennon's and Prashant's research groups and other TPLS users (including Imperial College London, Brunel University London, the University of Science and Technology of China, the Tata Institute of Fundamental Research, India, and the Université de Lyon), and will provide further incentives for increasing its uptake within both academia and industry.

The eCSE funding is for 8 months and the key activities include:

  • Parallel I/O: TPLS's serial I/O routines, which write formatted text files containing the flow variables (pressure, velocities, phase-field), will be replaced with a more scalable and performant parallel I/O method based on NetCDF with HDF5 compression (a sketch of such output follows this list).
  • Optimised solvers: currently, the Level-Set equations are solved using a bespoke Jacobi/SOR approach (a textbook illustration follows this list), but a pre-release version of TPLS now supports the Diffuse Interface Method (DIM) as an optional alternative. Additional physics (heat/mass transfer and density difference across the interface) is currently in development under several EPSRC research grants. This will be merged into TPLS, and TPLS will be updated to use the PETSc (Portable, Extensible Toolkit for Scientific Computation) solvers.
  • Scaling and correctness: Evidence of scalability and performance improvements arising from the above, alongside evidence that the correctness of TPLS has been preserved, will be gathered and presented.
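
As a concrete illustration of the parallel I/O activity, the sketch below writes a single flow variable to a NetCDF-4 file with HDF5 deflate compression via the netcdf-fortran library. It is deliberately serial and simplified; the variable name, grid size and compression level are placeholder assumptions, and the actual TPLS implementation will write in parallel across MPI processes.

    ! Sketch of writing a flow variable to NetCDF-4 with HDF5 deflate
    ! compression, using the netcdf-fortran library. Serial and simplified;
    ! the names and sizes are placeholders, not TPLS's actual output.
    program netcdf_sketch
      use netcdf
      implicit none
      integer, parameter :: nx = 64, ny = 64, nz = 64
      real(kind=8) :: pressure(nx, ny, nz)
      integer :: ncid, dimids(3), varid

      pressure = 0.0d0   ! stand-in for a computed pressure field

      ! NF90_NETCDF4 selects the HDF5-based format, which supports compression.
      call check( nf90_create('flow.nc', NF90_NETCDF4, ncid) )
      call check( nf90_def_dim(ncid, 'x', nx, dimids(1)) )
      call check( nf90_def_dim(ncid, 'y', ny, dimids(2)) )
      call check( nf90_def_dim(ncid, 'z', nz, dimids(3)) )
      call check( nf90_def_var(ncid, 'pressure', NF90_DOUBLE, dimids, varid) )

      ! Enable HDF5 deflate (zlib) compression, level 5 of 9.
      call check( nf90_def_var_deflate(ncid, varid, shuffle=1, deflate=1, &
                                       deflate_level=5) )
      call check( nf90_enddef(ncid) )
      call check( nf90_put_var(ncid, varid, pressure) )
      call check( nf90_close(ncid) )

    contains

      subroutine check(status)
        integer, intent(in) :: status
        if (status /= nf90_noerr) then
           print *, trim(nf90_strerror(status))
           stop 1
        end if
      end subroutine check

    end program netcdf_sketch

Because NF90_NETCDF4 files are HDF5 underneath, the compressed binary output is both far smaller than formatted text and readable by standard analysis tools.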
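
For readers unfamiliar with the current solver, successive over-relaxation (SOR) sweeps through the grid repeatedly, blending each point's Gauss-Seidel update with its previous value. The textbook kernel below, written for a 2D Poisson-type pressure equation, illustrates the kind of bespoke iteration involved; it is not TPLS's actual implementation.

    ! Textbook successive over-relaxation (SOR) sweep for a 2D Poisson-type
    ! equation -- the kind of hand-written iteration that PETSc's
    ! preconditioned Krylov solvers are intended to replace.
    subroutine sor_sweep(p, rhs, nx, ny, h, omega)
      implicit none
      integer, intent(in) :: nx, ny
      real(kind=8), intent(inout) :: p(nx, ny)    ! pressure; boundaries fixed
      real(kind=8), intent(in)    :: rhs(nx, ny)  ! right-hand side
      real(kind=8), intent(in)    :: h            ! grid spacing
      real(kind=8), intent(in)    :: omega        ! relaxation factor, 1 < omega < 2
      integer :: i, j
      real(kind=8) :: gs

      do j = 2, ny - 1
         do i = 2, nx - 1
            ! Gauss-Seidel update for the 5-point Laplacian stencil...
            gs = 0.25d0 * (p(i-1,j) + p(i+1,j) + p(i,j-1) + p(i,j+1) &
                           - h*h*rhs(i,j))
            ! ...blended with the old value: omega = 1 recovers Gauss-Seidel.
            p(i,j) = (1.0d0 - omega) * p(i,j) + omega * gs
         end do
      end do
    end subroutine sor_sweep

With PETSc, the equivalent linear system is handed to a KSP solver object instead, so the choice of Krylov method and preconditioner becomes a runtime option rather than hand-written code.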

Completing these activities will deliver a number of benefits:

  • A new public release of TPLS, including parallel I/O and additional physics (including heat/mass transfer and density difference).
  • Parallel I/O, reducing file sizes by at least 3x and improving I/O performance by at least 20x on 1024 cores.
  • Addition of PETSc for the momentum and interface solvers, giving a 15% speedup in this part of the code on 1024 cores.

Usability, maintainability and sustainability

The Institute's open call consultancy will help to improve the usability, maintainability and sustainability of TPLS. The Institute's support is for 2.5 months and the key activities include:

  • Documentation: producing a quick start guide for new users and developers, a configuration options reference and an architecture document.
  • Usability: refactoring TPLS to make it configurable via the command line or input files (a namelist sketch follows this list), so that researchers do not need to modify the source code, or even be developers.
  • Code quality: proposing a set of coding standards, design and documentation improvements and developing an initial suite of automated tests.
  • Maintainability: reviewing a selection of researcher-specific versions of TPLS to extract application programming interfaces (APIs) that make TPLS more pluggable and configurable, refactoring TPLS to provide this API, and merging in one researcher-specific version as an example.
  • User community: drafting support, contributions and governance policies to help TPLS move towards being an open source project.
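
As a sketch of what file-based configuration might look like, Fortran's NAMELIST facility is one idiomatic option: defaults live in the code and an input file overrides them. The parameter names below are invented for illustration and are not TPLS's actual options.

    ! Sketch of file-based configuration via a Fortran NAMELIST. The
    ! parameter names are invented for illustration, not TPLS's options.
    program config_sketch
      implicit none
      integer :: nx = 256, ny = 128, nz = 128       ! default grid size
      real(kind=8) :: reynolds = 100.0d0            ! default Reynolds number
      character(len=256) :: output_prefix = 'tpls'  ! default output prefix
      integer :: ios

      namelist /simulation/ nx, ny, nz, reynolds, output_prefix

      ! Values in tpls.nml override the defaults above; if the file is
      ! missing, the defaults stand.
      open(unit=10, file='tpls.nml', status='old', action='read', iostat=ios)
      if (ios == 0) then
         read(10, nml=simulation)
         close(10)
      end if

      print *, 'Grid:', nx, ny, nz, '  Re:', reynolds
    end program config_sketch

A matching tpls.nml might then contain, for example:

    &simulation
      nx = 512
      reynolds = 250.0
    /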

Completing these activities will deliver a number of benefits:

  • Users will be able to get started with TPLS, understand its capabilities, deploy TPLS as a binary package and apply TPLS within their own research, all without having to modify any code or understand Fortran 90. This will open TPLS up to non-developers.
  • There will be no need to strip out researcher-specific configuration parameters from the code before updating the version of TPLS on SourceForge.
  • TPLS will be more configurable and modular, reducing the need for researchers to develop their own versions of TPLS. Instead, they will be able to develop components that plug in to TPLS.
  • TPLS developers will be able to test TPLS to ensure that any changes they have made have not introduced any bugs.
  • Users interested in extending, modifying or fixing TPLS will be able to understand its architecture, implement their changes, and know how to contribute these changes back to TPLS, if they wish to do so.
  • The TPLS team will be able to manage and integrate contributions from users and from their own PhD researchers, as well as requests for help and support in an open and systematic way.

Conclusion

It is intended that these complementary collaborations will help to increase the number of users and developers of TPLS, within both academia and industry, by improving both the ease with which users can adopt TPLS and the effectiveness with which TPLS can scale up to exploit powerful HPC resources like ARCHER. Together, it is hoped, these will allow TPLS to provide a simulation capability that is unrivalled in computational efficiency and numerical accuracy. We look forward to reporting on our progress.

Authors

By Mike Jackson and Iain Bethune, EPCC, The University of Edinburgh, Lennon Ó Náraigh, School of Mathematical Sciences, University College Dublin, and Prashant Valluri, Institute of Materials and Processes, School of Engineering, The University of Edinburgh.

Image: Julia Franco, Flickr