Rob Baxter's blog
Posted: 22 Nov 2019 | 12:10
Developed by EPCC, the Edinburgh International Data Facility (EIDF) will facilitate new products, services, and research by bringing together regional, national and international datasets.
Posted: 8 Jan 2018 | 16:03
Big data has always been a part of high-performance computing and the science it supports, but new open-source technologies are now being applied to a wider range of scientific and business problems. We’ve spent time recently testing some of these big data toolkits.
Posted: 15 Jun 2017 | 13:33
Earthquakes have caused over three-quarters of a million deaths already this century, and economic losses of over a quarter of a trillion US dollars since 1980 make them by far the most destructive of the natural hazards. EPCC has been involved in developing a new app that will lessen the danger of aftershocks.
Posted: 17 Dec 2015 | 10:46
Gathering data for their own sake is pointless without the analysis techniques and computing power to process them. But what techniques, and what kind of computing power? The differing nature of data-driven analysis in different fields and areas of application demands a variety of computational approaches.
Through our current activities in big data at EPCC, we’re aiming to understand how big data and big compute can be brought together to create, well, big value.
Posted: 30 Jul 2014 | 16:11
The rise of data-driven science, the increasing digitisation of the research process, has spawned its own jargon and acronyms. “Research data infrastructure” is one such term, but what does it mean?
Posted: 13 Nov 2013 | 14:27
EUDAT - the European Data Infrastructure project - has reached the end of its second year and has, with some success, distilled the first version of a common, collaborative, horizontal data infrastructure from among the vertical stacks of its various partners.
Posted: 9 Sep 2013 | 09:58
The research data tsunami is firmly upon us. Open access to data is very much on the agenda. One of the hopes for capturing and preserving all these data is that reuse and recombination may yield new science. Improving the interoperability of data from different domains is key to making this a reality.
Now, data interoperability is not technically hard, so why are we not further on?
Posted: 7 Aug 2013 | 11:56
Policy restrictions on data storage can turn straightforward technological problems into complex, over-constrained and potentially insoluble ones.
As the slowly toppling wave of research data begins to overwhelm us all, we're increasingly looking for new ways to automate the management of all these bits. Keeping human curators and data managers in the loop becomes ever less scalable and sustainable. So, we're storing data in the Cloud, auto-replicating them five ways so we don't lose any, letting the systems manage the data for us.
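(That "five ways" is just a replication factor. As a purely illustrative sketch, not a description of any real system's machinery, an automated replica-placement policy could be as simple as hashing each object against a pool of storage sites and keeping the top five; the site names and function below are hypothetical.)

```python
import hashlib

REPLICATION_FACTOR = 5  # "replicating them five ways"

def choose_replica_sites(object_id, sites, k=REPLICATION_FACTOR):
    """Pick k distinct storage sites for an object, deterministically,
    by ranking sites on a hash of (object ID, site name)."""
    ranked = sorted(
        sites,
        key=lambda site: hashlib.sha256(f"{object_id}:{site}".encode()).hexdigest(),
    )
    return ranked[:k]

# Example: place one dataset across a (hypothetical) pool of sites.
sites = ["edinburgh", "juelich", "bologna", "barcelona", "espoo", "poznan"]
print(choose_replica_sites("doi:10.1234/example-dataset", sites))
```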
Posted: 1 May 2013 | 11:00
I've recently returned from a very interesting week-long tour of the southwestern USA. Work-related, of course. I and a handful of European colleagues from the EUDAT project were graciously hosted by three groups all engaged in data infrastructure work on the other side of the Atlantic.
After flying into what must be one of the world's smallest and cutest airports in Santa Fe, our first stop was Los Alamos National Lab and the Web science group led by Herbert Van de Sompel.
Posted: 21 Mar 2013 | 16:32
Monday 18th March, a chilly day in Gothenburg, Sweden, and the formal launch of the Research Data Alliance. With keynotes from EU Commissioner Neelie Kroes, Australian Ambassador to the EU Duncan Lewis and NSF Director of Computer and Information Science and Engineering Farnam Jahanian, this was a significant event, and an indication of the importance that policy makers and funders are now attaching to the management of, and access to, research data worldwide.