Nick Brown's blog
Posted: 5 Jul 2019 | 11:13
The EU VESTEC research project is focused on the use of HPC for urgent decision-making, and the project team will be running a workshop at SC’19.
VESTEC will build a flexible toolchain to combine multiple data sources, efficiently extract essential features, enable flexible scheduling and interactive supercomputing, and realise 3D visualisation environments for interactive exploration.
Posted: 17 Jun 2019 | 14:24
When is enough, enough? With so many parallel programming technologies, should we focus on consolidating them?
At the ISC conference in June I will moderate a panel discussion on whether it is time to focus on the consolidation and interoperability of existing parallel programming technologies, rather than the development of new ones.
Posted: 24 Apr 2019 | 11:51
The blog post below is based on the abstract of a talk at the PASC mini-symposium 'Modelling Cloud Physics: Preparing for Exascale' (Zurich, 13 June 2019).
The Met Office NERC Cloud model (MONC) is an atmospheric model used throughout the weather and climate community to study clouds and turbulent flows. It is often coupled with the CASIM microphysics model, which provides the capability to investigate interactions at the millimetre scale and to study the formation and development of moisture. One of the main targets of these models is the problem of fog, which is very hard to model due to the high resolution required: for context, the main UK weather forecast resolves down to 1 km, whereas the fog problem requires 1 metre or less.
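To put that resolution gap in perspective (my own back-of-the-envelope arithmetic, not a figure from the talk), refining the horizontal grid from 1 km to 1 m spacing multiplies the number of grid columns covering the same area by

\[ \left( \frac{1\,\mathrm{km}}{1\,\mathrm{m}} \right)^{2} = \left( 10^{3} \right)^{2} = 10^{6}, \]

a million-fold increase in horizontal grid points alone, before accounting for the shorter timestep that a finer grid also demands.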
Posted: 15 Nov 2018 | 16:21
With jobs submitted to a batch system, supercomputing has traditionally been centred around an offline, non-interactive approach to running codes such as simulations. However, it is our belief that there is great potential in fusing HPC with real-time data for use as part of urgent decision-making processes in response to natural disasters and crises.
Posted: 27 Jul 2018 | 15:25
We are working on a machine-learning project with Rock Solid Images (RSI), a geoscience consulting firm that provides borehole characterisation with the goal of reducing exploration drilling risk for oil and gas companies.
RSI is one of the main players in interpreting seismic data in combination with well log data, and it has built its business on using advanced rock physics methods combined with sophisticated geologic models to deliver highly reliable predictions of where oil and gas might be found.
Posted: 19 Jun 2018 | 18:39
“There is not a moment to lose” – I don’t know if you have ever read any of the Aubrey-Maturin books by the late Patrick O’Brian, set at the turn of the eighteenth to nineteenth centuries and describing life in the Royal Navy. Even if you have only flicked through one of the books, you will probably have picked up the almost constant sense of urgency that runs through them (a realistic representation of what pervaded the Navy at that time), much to the annoyance of the decidedly un-Navy-like Dr Maturin!
Considering the modern pace of change, I think this sentiment is truer today, especially in scientific fields, than it has ever been. Certainly from my perspective there is an urgency to try to push forward the state of the art in HPC and share it, before other people’s activities supersede my work. However, I think this same sense of urgency also applies to other, non-technical, aspects of our community. Diversity is a prime example here and, whilst there are some excellent initiatives being adopted by the likes of the Supercomputing (SC) and ISC conferences, we still have a long way to go.
Posted: 5 Apr 2018 | 10:28
The INTERTWinE project has spent a lot of effort addressing interoperability challenges raised by task-based models. By rethinking parallelism in the paradigm of tasks, one reduces synchronisation and decouples the management of parallelism from computation.
This is really attractive, but existing models typically rely on shared memory, where the programmer expresses the input and output dependencies of tasks through variables, which in turn limits the technology to a single memory space – often a node of an HPC machine. To scale up beyond a single memory space, we must therefore combine a task model with a distributed-memory technology, such as MPI or GASPI, and this has been the focus of a number of activities in INTERTWinE.
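As a minimal sketch of how such variable-based dependencies look in practice, here is an OpenMP tasking example (my own illustration, not INTERTWinE code; the variables and values are made up):

```c
#include <stdio.h>

int main(void)
{
    int a = 0, b = 0;

    #pragma omp parallel
    #pragma omp single
    {
        /* Ordering is inferred from the shared variables named in the
           depend clauses: the second task reads 'a', so it cannot start
           until the first task, which writes 'a', has completed. */
        #pragma omp task shared(a) depend(out: a)
        a = 42;

        #pragma omp task shared(a, b) depend(in: a) depend(out: b)
        b = a + 1;

        /* Wait for both tasks to finish before reading the result. */
        #pragma omp taskwait
        printf("b = %d\n", b);   /* prints: b = 43 */
    }
    return 0;
}
```

Because the dependencies are expressed through variables ('a' and 'b') that every task must be able to see, the mechanism stops at the boundary of a single memory space; expressing the same ordering between two MPI or GASPI processes is exactly the interoperability gap the project has been addressing.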
Posted: 3 Apr 2018 | 13:18
Here at EPCC we are looking forward to the 5th Exascale Applications and Software Conference (EASC 2018), which will be held here in Edinburgh in a couple of weeks. This will be the third time we have hosted EASC and it is always a great opportunity to hear about the cutting edge of HPC research.
Posted: 31 Jan 2018 | 11:17
At EPCC we are currently looking to hire both Application Developers and Application Consultants in data science (closing date: 22 February). I think this is a great opportunity for people to become involved in an exciting, fast-moving field with great potential, and EPCC is a major player in this area.
Our situation within the University makes us fairly unusual. We have a diverse mix of research and commercial projects, so it is possible both to develop the state of the art and to work with some big names on projects with real-world impact. Unlike many technical and research positions at the University, which are short-term post-docs, our positions at EPCC are longer term (initially a 2-year fixed-term contract) and many staff have worked here for several years.
Posted: 23 Jan 2018 | 09:11
Last month I attended a collaboration workshop in Japan between the Centre for Computational Sciences (CCS) at the University of Tsukuba and EPCC. I was talking about the INTERTWinE project, which addresses the problem of programming-model design and implementation for the Exascale, and specifically our work on the specification and implementation of a resource manager and directory/cache.