Hackathons as an educational tool

Author: Emmanouil Farsarakis
Posted: 23 Sep 2015 | 09:33

They go by many names: “Hackathons”, “Hackdays”, “Hackfests”, or my personal favourite, “Code Dungeons”. Despite having heard most of these terms repeatedly over the years, I had no personal experience of them. To me, they sounded like competitive events to show off one’s skills. However, after attending Eurohack 2015 (yes, another alias) this past July, as well as a similar event organised earlier in the year by Intel, I was surprised to discover a whole new aspect of Hackathons: education and scientific advancement.

A bit of background…

Eurohack 2015 was hosted by CSCS, the Swiss National Supercomputing Centre in Lugano, Switzerland. This was a GPU Hackathon, focused on the use of OpenACC for GPGPU programming. Groups of 3-6 developers working on a potentially scalable application could participate. The goal was to optimise an already functioning GPU application, to port a CPU-only application to GPU, or at the very least to leave the workshop with a clear roadmap for how the team could get there. This would be done through a 5-day hands-on workshop, with the assistance of talks, vendor representatives and on-site team mentors.

Team mentors included people from CSCS and elsewhere who are experienced in OpenACC, as well as a significant number of representatives from NVIDIA, PGI and Cray, the main players in OpenACC. I was one of the mentors to the team developing the cosmological n-body simulation application GADGET3, led by Dr Klaus Dolag.

Common problems with “traditional education”

Often with traditional courses, participants may find themselves lost, waiting for the point at which the lecturer will start talking about something that relates to their work. By the time they realise that the lecturer was merely “leading up” to the practical applications of the material, the student has fallen so far behind that it seems pointless to try to catch up.

Another possibility is that of someone who participates in a course or workshop, finds they understood everything and genuinely enjoyed the practical sessions on “example code”, but goes home without the faintest idea how to apply this newly acquired knowledge to their own research.

Using “real-world code”

Hackathons do away with these problems to a great extent by requiring participants to work on their own code. This characteristic is perhaps the most significant benefit of Hackathons. Participants must find how the technologies they are learning fit into their own research as a prerequisite to learning. Participants are already experts in the code they are working on, so no time is wasted on getting familiar with the code. The code itself is as “real-world” as it gets. And finally, having participants work on their own code provides a higher level of motivation to attendees than any set-up example ever could. The GADGET3 team is a perfect example of this: one of their goals was to come out of the Hackathon with an initial working GPU version of GADGET3, ideally one fast enough to justify funding for further investigation.

Ad-hoc lectures

Lecturing at this particular Hackathon also differed greatly from what you would find on a traditional course or workshop. Lectures were short, spread out across the 5 days, and had a very “unplanned” feel to them: there was no fixed schedule. When many teams were struggling with, or could benefit from, a particular aspect of OpenACC, Alistair Hart of the Cray Centre of Excellence at EPCC – who provided most of the training – would pick the most appropriate lecture from a pool of prepared material. In this way, participants did not need time to understand how the training material could benefit them. On the contrary, participants knew they needed the training material, and why, before it was even provided.

Collective learning

One final aspect of this experience, which greatly differed from most traditional training I have attended, was the role of the mentors.

Mentors assigned to teams, like myself, were advisors who had experience with OpenACC, the subject code, or both. The more interesting role was that of the “floating mentors”: people from CSCS, NVIDIA, PGI and Cray. Problems related to any aspect of the underlying technology (systems, accelerators, compilers) could be dealt with on the spot. The presence of vendor representatives was beneficial in two ways. The more obvious benefit was that teams could overcome such underlying problems in a matter of minutes through this “gold standard customer service”. What was surprisingly successful was the impact the workshop had in the other direction: on the underlying technology itself, and especially on the compiler developers.

Multiple compiler bugs were discovered by the teams at Eurohack 2015, and usability issues were highlighted. Notably, the GADGET3 group discovered a problem with the PGI compiler where passing a struct by value to an inlined function resulted in the function not being executed: the function merely returned a “zero” value and silently continued. The team also discovered some strange behaviour with the PGI compiler, where the two versions of (seemingly equivalent) code seen in Figure 1 mysteriously gave different results:

Version 1

NList[expNodeCount[task]] = 1;


Version 2

NList[expNodeCount[expNodeCount[task]++]] = 1;

Usability-wise, it was highlighted to the Cray representatives that their known issue of no type-checking in OpenACC compiler directives (i.e. misspelling a variable in an OpenACC reduction clause does not cause a compiler error) is indeed something that needs to be addressed.

In addition to this, a series of changes to the OpenACC standard itself were suggested by team members throughout the event. These included allowing structure members or single array elements to be used as reduction variables (currently only scalars are supported), and the introduction of a “zero-difference validation” mechanism for GPU execution, which is one of the primary methods of validating code in the climate modelling community. These suggestions were significant for two reasons: they were provided by people who are truly the end-users of these technologies, and they were made directly to people who can influence the future of those technologies.


I believe that from all the above it becomes clear that such an event is much more than just a course, workshop or competition. Participating teams at Eurohack 2015 did not only learn a few compiler directives and flags. They gained hands-on experience with OpenACC in action on real-world code (their own code), they interacted directly with the experts in the field, and they came away with a clear understanding of where their lack of experience ended and where the infancy of these technologies began. The policy makers of these technologies, on the other hand, were given a unique opportunity to discover how end-users interact with their product, where their product excels and where it falls short.

I am not blind to the fact that organising such an event would be far from easy. Obviously, bringing together so many people from so many fields would be much more demanding than organising a traditional course. However, having seen this event aid research and progress not only with regard to the future work of participating teams but also the OpenACC standard, compilers and GPU accelerators themselves, it seems to me that “Hackathons as an educational tool” is something which deserves a closer look.

In case you were wondering what happened with GADGET3…

The GADGET3 team started Eurohack 2015 with minimal knowledge of the basic principles of OpenACC. Over the first few days they managed to port this complex legacy code to GPUs with a speedup of 0.01. Through work carried out at the Hackathon, and further work carried out thereafter (based on the roadmap laid out at the Hackathon), they have so far brought the code to a speedup of 0.91 compared to the CPU-only MPI+OpenMP version (comparing one full CPU-only node against one full GPU-enabled node). This may not sound very promising but, given the nature of the code, it is.

Through contacts made at the Hackathon, they have been granted some time on the CSCS machine to continue their work a bit further. They hope to acquire future funding for additional porting and optimisation of a GPU-enabled GADGET code and have also been contacted by NVIDIA who are working on the same idea themselves.

Special thanks to Antonio Ragagnin from the GADGET3 team for sharing his thoughts on the experience and the team’s results on GADGET3.


Emmanouil (Manos) Farsarakis 

Hi Manos,

Great write-up, thanks. I agree that hackathons can be really successful in making progress on, and building understanding of, applications and technologies, and they don't need to be limited to new technologies or software. I know we've had success in the past with ARCHER hackathons, and I know software package developers who do similar intensive work to get code features implemented. The fact that you come out of the process with something concrete, i.e. your code working on GPUs or on ARCHER, as well as learning something, can be doubly beneficial.

I guess the only issue with them is scalability. As you mention, they must take a lot of effort to organise and run, and can only take a limited number of people, whereas a conventional course has the scope to teach many more. I often wonder if trying some small-cohort courses, to enable intensive tutoring of attendees, might be an interesting experiment as well.


Hi Adrian,

Regarding your point on small cohort courses, I think the answer is that it greatly depends on the topic.

For something like an MPI or OpenMP intro, or basic programming skills, this would probably make more sense. Software Carpentry courses are something similar to this: attendee numbers are not especially low, but modules are meant to be very hands-on, and in addition to a main instructor there are multiple "helpers" who go around the room, actively ensuring everyone is keeping up.

When you have people who are already at a fairly advanced level, though, where the topic of interest is something like OpenACC (or CUDA, or Intel Xeon Phi programming for that matter), where so many parties are involved and the technologies themselves are still relatively new (and unstable), perhaps not. One of the greatest benefits of the Eurohack was that it had representatives from most vendors involved with OpenACC. When something weird was happening at compilation, you called over the compiler guy and within 5-10 minutes you knew it was a compiler bug (or sooner, if it was a known bug). This avoids a great amount of futile effort, frustration and distraction. In fact, even at the Eurohack, with people from CSCS, NVIDIA, PGI and Cray Inc., the absence of people from the tools companies was apparent; GPU profiling and debugging was the only area in which I feel my team left without much confidence at all. For a small group of people, I don't see how bringing in so many people could be justified.

