Posted: 11 May 2017 | 00:06
As part of the ARCHER Knights Landing (KNL) processor testbed, we have produced and collected a set of benchmark reports on the performance of various scientific applications on the system. This has involved the ARCHER CSE team, EPCC's Intel Parallel Computing Center (IPCC) team, and various users of the system all benchmarking and documenting the performance they have experienced.
Posted: 5 Sep 2016 | 10:35
06/09/16: As pointed out by my colleague Stephen in the comments after this post, the way to solve most of these issues is to tunnel the key authentication and therefore bypass the need to have private keys anywhere but on my local machine. I'm always learning :)
Password vs key
Having to remember a range of passwords for systems that I don't use regularly is hard.
I can use a password manager, but that only helps if I'm only ever trying to log in from my own laptop. If I have to log in from someone else's machine for any reason then I'd need to know the password itself.
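The tunnelling approach mentioned in the update above can be set up once so that the private key never leaves the local machine. A minimal sketch of SSH agent forwarding in `~/.ssh/config` (the hostnames and username here are illustrative placeholders, not real systems):

```
# ~/.ssh/config -- hostnames and username are hypothetical examples
Host gateway
    HostName gateway.example.ac.uk
    User myusername
    ForwardAgent yes   # key-authentication requests from the remote host
                       # are tunnelled back to the ssh-agent running on
                       # the local machine, so the private key itself is
                       # never copied onto any intermediate system
```

With the key loaded into the local agent (`ssh-add`), a further hop from the gateway to another system authenticates against the local agent rather than a key stored on the gateway.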
Posted: 6 Jul 2016 | 14:36
Safe havens allow data from electronic records to be used to support research, when it is not practicable to obtain individual patient consent, while protecting patient identity and privacy. EPCC is now the operator of the new NHS National Services Scotland (NSS) national safe haven, in collaboration with the Farr Institute of Health Informatics Research, which provides the infrastructure.
Posted: 27 Jun 2016 | 15:01
ARCHER Champions began with a vision: every research organisation that could benefit from ARCHER should have someone local who knows about the routes to access ARCHER and who can help potential users to get started.
We want Champions to tell us how we can improve support for them and their local users, and how to start joining up all the HPC facilities and the people with the expertise around the UK.
Posted: 21 Jun 2016 | 17:13
There's been a lot of discussion about the latest Top500 list, released this week at ISC16. Most of the interest has been in the whopping new Chinese system, Sunway TaihuLight, which has come in at number 1 on the list with a massive 93 PFlop/s Rmax Linpack performance and 125 PFlop/s Rpeak theoretical peak performance (3 times bigger than the previous number 1 system).
Whilst this is a very interesting system, and much bigger than is currently planned elsewhere, it's not unknown for very large systems to come in and dominate the list like this. Back in 2002, the Japanese Earth Simulator system became the number 1 machine with an Rmax of ~5x that of the previous number 1 system, and it stayed as the top machine for a number of years.
Posted: 26 Apr 2016 | 13:07
I had a recent query from some users who had hit a problem with the default version of the Intel Fortran compiler on ARCHER. It was a nice query to get because the users had done all the work already: they'd identified the problem, found a test code that demonstrated it, and told me what the solution would be for them.
Fortunately, the solution was easy. The bug has been fixed in a newer version of the compiler, which is installed and available on ARCHER but just isn't the default (we tend to keep the default version slightly behind the latest release, but as new as possible). They simply have to swap the compiler modules, and their code will compile and run correctly.
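On a Cray system like ARCHER, swapping compiler modules is a one-line change in the job script or interactive session. A sketch of the commands involved (the version string below is purely illustrative, not the actual fix release, so check what `module avail` reports on the system):

```
# See which Intel compiler versions are installed
module avail intel

# Swap the default Intel module for a newer one
# (version string is a hypothetical example)
module swap intel intel/15.0.2.164

# Rebuild with the usual Cray compiler wrapper, which now
# invokes the newer ifort underneath
ftn -o mycode mycode.f90
```

The Cray `ftn` wrapper picks up whichever compiler module is currently loaded, so no changes to the build scripts themselves are needed.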
Posted: 15 Feb 2016 | 15:38
In early December we added a visualisation of the most heavily used application codes to the ARCHER website. At the moment it only shows data for the current month, but we've been recording the data since the ARCHER service began back in 2013 (table below).
Posted: 11 Sep 2015 | 13:41
It's not often that the internecine rivalries of the HPC research and development community spill over into the public arena. However, a video recently posted on YouTube (and the associated comments), ostensibly a light-hearted advert for a SC15 tutorial on heterogeneous programming, shows how real and deep these rivalries can be.
Posted: 30 Jul 2015 | 14:40
Posted: 20 Jul 2015 | 17:12
Experiences of porting and optimising code for Xeon Phi processors
EPCC is jointly organising a symposium at the ParCo conference where those working on porting and optimising codes for the Xeon Phi will share the challenges and successes they have experienced with this architecture, and discuss how these lessons also apply to standard parallel computing hardware.