Rob Baxter's blog

Edinburgh International Data Facility Summer 2020 update

Author: Rob Baxter
Posted: 30 Jul 2020 | 11:44

Despite the challenges of COVID-19, work is proceeding on the development of the Edinburgh International Data Facility (EIDF) and the advanced computing facility that will host it.

The EIDF is the underpinning data infrastructure of the activities of the Edinburgh and South-East Scotland Data Driven Innovation (DDI) Programme. It supports data and computing activities across all areas of the Programme and was, until February 2020, known by its original name of World Class Data Infrastructure (WCDI).

Supporting the national response to COVID-19

Author: Rob Baxter
Posted: 29 Jul 2020 | 15:42

EPCC and the Edinburgh International Data Facility (EIDF) have been working with the NHS and Public Health Scotland to create a secure data and computing environment for urgent COVID-19 research in Scotland.

Edinburgh International Data Facility: an overview of Phase 1

Author: Rob Baxter
Posted: 22 Nov 2019 | 12:10

Developed by EPCC, the Edinburgh International Data Facility (EIDF) will facilitate new products, services, and research by bringing together regional, national and international datasets.

Investigating high-performance data engineering

Author: Rob Baxter
Posted: 8 Jan 2018 | 16:03

Big data has always been part of high-performance computing and the science it supports, but new open-source technologies are now being applied to a wider range of scientific and business problems. We've recently spent time testing some of these big data toolkits.
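
For a flavour of what these toolkits look like in practice, here is a minimal sketch using Apache Spark's Python API. Spark is my assumption for illustration (the post itself doesn't name the toolkits tested), and the input file and column names are invented:

```python
# Minimal PySpark sketch: a distributed group-by over a large CSV.
# Spark, the file name and the column names are illustrative
# assumptions, not a record of the actual tests.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("data-engineering-sketch").getOrCreate()

# Read a (hypothetical) CSV of sensor readings; Spark partitions the
# file and spreads the work across however many cores it is given.
df = spark.read.csv("measurements.csv", header=True, inferSchema=True)

# Aggregate per sensor -- the same code runs unchanged on a laptop
# or across a cluster, which is much of these toolkits' appeal.
summary = (df.groupBy("sensor_id")
             .agg(F.mean("value").alias("mean_value"),
                  F.count("*").alias("n_readings")))

summary.show()
spark.stop()
```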

Building earthquake resilience in at-risk communities

Author: Rob Baxter
Posted: 15 Jun 2017 | 13:33

Earthquakes have caused over three-quarters of a million deaths already this century, and economic losses of over a quarter of a trillion US dollars since 1980 make them by far the most destructive of the natural hazards. EPCC has been involved in developing a new app that will help lessen the danger posed by aftershocks.

Big data, big compute

Author: Rob Baxter
Posted: 17 Dec 2015 | 10:46

Gathering data for their own sake is pointless without the analysis techniques and computing power to process them. But what techniques, and what kind of computing power? The differing nature of data-driven analysis in different fields and areas of application demands a variety of computational approaches. 

Through our current activities in big data at EPCC, we’re aiming to understand how big data and big compute can be brought together to create, well, big value.

Research data infrastructure: where next?

Author: Rob Baxter
Posted: 30 Jul 2014 | 16:11

The rise of data-driven science, the increasing digitisation of the research process, has spawned its own jargon and acronyms. “Research data infrastructure” is one such term, but what does it mean? 

Data infrastructure: highlights of the EUDAT Conference 2013

Author: Rob Baxter
Posted: 13 Nov 2013 | 14:27

EUDAT - the European Data Infrastructure project - has reached the end of its second year and has, with some success, distilled the first version of a common, collaborative, horizontal data infrastructure from among the vertical stacks of its various partners.

Data interoperability is a state of mind

Author: Rob Baxter
Posted: 9 Sep 2013 | 09:58

The research data tsunami is firmly upon us. Open access to data is very much on the agenda. One of the hopes for capturing and preserving all these data is that reuse and recombination may yield new science. Improving the interoperability of data from different domains is key to making this a reality.

Now, data interoperability is not technically hard, so why are we not further on?

Wait a minute, where *are* my data?

Author: Rob Baxter
Posted: 7 Aug 2013 | 11:56

Policy restrictions on data storage can turn straightforward technological problems into complex, over-constrained and potentially insoluble ones.


As the slowly toppling wave of research data begins to overwhelm us all, we're increasingly looking for new ways to automate the management of all these bits. Keeping human curators and data managers in the loop becomes ever more unscalable and unsustainable. So, we're storing data in the Cloud, auto-replicating them five ways so we don't lose any, letting the systems manage the data for us.
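
To make that concrete, here is a toy sketch of the "replicate it five ways and let the system worry" approach. boto3/S3 and the bucket names are my assumptions for illustration only, not a description of any real system we run:

```python
# Toy sketch of naive five-way replication to cloud object storage.
# boto3/S3 and the bucket names are illustrative assumptions.
import boto3

s3 = boto3.client("s3")

# Five hypothetical replica locations -- note that nothing here
# records *where* in the world each bucket physically lives.
REPLICA_BUCKETS = [f"research-data-replica-{i}" for i in range(1, 6)]

def replicate(key: str, body: bytes) -> None:
    """Write the same object to every replica bucket."""
    for bucket in REPLICA_BUCKETS:
        s3.put_object(Bucket=bucket, Key=key, Body=body)

with open("run-042.nc", "rb") as f:
    replicate("datasets/run-042.nc", f.read())
```

The catch, of course, is this post's subject: if policy says certain data must stay within certain borders, a scheme like this needs to know where each replica actually resides.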
