The EPCC seminars listed here are open to everybody, and take place 2–3pm on Wednesdays unless otherwise advertised.
Real-world Machine Learning and AI (21/11/18 - 12:30 - Appleton Tower 2.14)
Ted Dunning (MapR)
Machine learning and AI don't work in the real world the way they appear to in the classroom. There are many reasons why, but two of the largest are logistics and the value of cheap learning (as opposed to deep learning). I will describe what you need to know about both topics, illustrating with stories of real systems that have made real billions.
Ted Dunning is Chief Application Architect at MapR and has years of experience with machine learning and other big data solutions across a range of sectors. Ted was the chief architect behind the MusicMatch (now Yahoo Music) and Veoh recommendation systems. He built fraud detection systems for ID Analytics (later purchased by LifeLock) and he has 24 patents issued to date plus a dozen pending. Ted has a PhD in computing science from the University of Sheffield and is active with open source projects as committer, PMC member, mentor and currently serving as a board member for the Apache Software Foundation. When he’s not doing data science, he plays guitar and mandolin. He also bought the beer at the first Hadoop user group meeting.
(Postponed - tbc) An introduction to Blockchain and Hyperledger Fabric
Charaka Palansuriya (EPCC)
Many of you may have heard about Blockchain in the context of crypto-currencies like Bitcoin. Blockchain is the technology behind Bitcoin, and it is one of the most promising technologies to emerge recently, with the potential to make a significant impact on pretty much anything you can think of, for example: Medical/Healthcare; Education; Cybersecurity; Real Estate; Legal; Networking and IoT; Government and Voting; Travel; Insurance; Public Transportation and Ride Sharing; Online Music; Banking and Payments - the list goes on. Whilst Machine Learning and AI are currently getting most of the attention, Blockchain and its capabilities are somewhat under the radar.
Hyperledger Fabric is an implementation of Blockchain that began at IBM and is now being developed under the Linux Foundation. Hyperledger Fabric is particularly suitable for business networks where participants may require a degree of privacy from one another.
In this presentation, I will provide a gentle introduction to Blockchain and explain how it works, highlighting how trust is provided in a purely peer-to-peer network without a central authority. I will point out the main advantages of using Hyperledger Fabric, introduce its key components, and explain how they work.
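The core of this trust mechanism is that every block commits to the hash of its predecessor, so past records cannot be altered unnoticed. A minimal sketch in Python (a toy illustration only; the function names are mine, and Hyperledger Fabric's actual data structures are far richer):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's full contents, including its predecessor's hash.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(transactions):
    # Each block records the hash of the block before it.
    chain, prev_hash = [], "0" * 64  # the genesis block has no predecessor
    for tx in transactions:
        block = {"tx": tx, "prev_hash": prev_hash}
        prev_hash = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain):
    # Recompute every link; tampering with any block breaks all later links.
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash:
            return False
        prev_hash = block_hash(block)
    return True
```

Changing a single early transaction changes that block's hash, which then no longer matches the `prev_hash` stored in the following block, so the whole chain fails verification - this is what lets peers detect tampering without a central authority.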
NEXTGenIO - the quest to improve I/O performance
Michele Weiland (EPCC)
Wednesday November 7th, 2pm, Bayes Centre ground floor seminar room (G.03)
One of the major challenges to achieving sustained performance in large-scale (HPC) applications is the I/O bottleneck. Processing performance has increased significantly over the years, but with I/O performance not improving at the same rate, reading and writing large amounts of data (in particular from/to a parallel file system) can be the limiting factor to an application's time to solution and scalability.
NEXTGenIO is an EC-funded project that is working on addressing this performance bottleneck by introducing byte-addressable persistent memory (Intel's Optane DC Persistent Memory) on the compute nodes, increasing both I/O performance and node-local memory and storage capacity. The project is developing (and will soon deploy) a prototype system, together with a system software stack, that will greatly speed up data-intensive applications.
In this talk, I will present the system architecture and the use cases we are going to support. I will also talk about how you may get access to the system once it has arrived in Edinburgh.
Detecting and correcting for measurement quality degradation in a laser triangulation system
Álvaro Fernández Millara (University of Oviedo, HPC Europa visitor at University of Edinburgh)
Tuesday October 30th, 3pm, Chrystal Macmillan Building Seminar Room 1 (G.01)
Laser triangulation is a commonly used technique for 3D scanning. Many of the applications of 3D scanning, and especially quality control, require narrow error margins. Therefore, some thought has to be given as to how the capabilities of a laser triangulation system may degrade with time, and what can be done about that.
In this talk, I will describe one specific laser triangulation system (namely, a quality control system for rail manufacturing) and some of the factors that may affect the quality of its measurements. I will then show some approaches that can be used to quantify these factors, and discuss how I am using high-performance computing to develop techniques to compensate for them.
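The geometry behind laser triangulation, and why calibration drift matters, can be sketched with a little trigonometry (a hypothetical single-point set-up of my own, not the rail inspection system described in the talk):

```python
import math

def spot_depth(baseline_m, camera_angle_rad):
    # The laser fires along the z-axis; a camera offset sideways by
    # `baseline_m` sees the laser spot at `camera_angle_rad` from its
    # optical axis.  Triangulation gives depth = baseline / tan(angle).
    return baseline_m / math.tan(camera_angle_rad)

def depth_error(baseline_m, camera_angle_rad, angle_error_rad):
    # A small systematic error in the measured angle (e.g. optics
    # drifting out of calibration) propagates into a depth error.
    return abs(spot_depth(baseline_m, camera_angle_rad + angle_error_rad)
               - spot_depth(baseline_m, camera_angle_rad))
```

With a 10 cm baseline and a spot seen at 45 degrees, the depth is 10 cm; an angular drift of just 1 mrad already shifts that estimate by about 0.2 mm, which is why degradation has to be quantified and compensated for in quality-control settings.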
Large Scale Mutation Testing
Pablo Cerro Cañizares (Complutense University of Madrid, HPC Europa visitor at EPCC)
Wednesday October 17th, 2pm, Dugald Stewart Building room 1.20
Large-scale systems have been widely adopted due to their cost-effectiveness and the evolution of networks. In general, large-scale systems can be used to reduce the long execution times of applications that require a vast amount of computational resources; in particular, techniques that are usually deployed in centralized environments - like testing - can be deployed on these systems. Currently, one of the main challenges in testing is to obtain an appropriate test suite. In essence, the main difficulty lies in the elevated number of potential test cases. Mutation testing is a valuable technique for measuring the quality of test suites that can be used to overcome this difficulty. However, one of the main drawbacks of mutation testing is the high computational cost of the process.
In this work, we propose two improvements based on our previous approach, called OUTRIDER, a set of strategies to optimize the mutation testing process in distributed systems. Although OUTRIDER efficiently exploits the computational resources of distributed systems, several bottlenecks have been detected when applying these strategies on large-scale systems. For this reason, this proposal is threefold: i) providing a hybrid algorithm designed to reduce communication between the master and worker processes while maintaining a high level of resource usage; ii) improving the compilation phase; iii) comparing the proposal with other distribution solutions, such as Spark or Cloud systems.
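The core idea of mutation testing - and where its computational cost comes from - can be shown with a toy example (illustrative only; the names are mine, and real tools such as OUTRIDER generate mutants automatically and execute them in parallel):

```python
def run_tests(fn, test_suite):
    # A test suite here is simply a list of (arguments, expected) pairs.
    return all(fn(*args) == expected for args, expected in test_suite)

def mutation_score(original, mutants, test_suite):
    # A mutant is "killed" if at least one test tells it apart from the
    # original program; the score is the fraction of mutants killed.
    assert run_tests(original, test_suite), "suite must pass on the original"
    killed = sum(1 for m in mutants if not run_tests(m, test_suite))
    return killed / len(mutants)

# Program under test, plus two hand-written mutants (operator swaps).
def add(a, b):
    return a + b

def mut_sub(a, b):
    return a - b  # mutant: '+' replaced by '-'

def mut_mul(a, b):
    return a * b  # mutant: '+' replaced by '*'
```

A weak suite such as `[((2, 2), 4)]` kills `mut_sub` but not `mut_mul` (since 2 * 2 = 4), scoring 0.5; the single test `[((2, 3), 5)]` kills both. Every mutant requires a full test-suite run, which is why the cost explodes for real programs with thousands of mutants.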
How do you optimise a system like Hyperloop?
Emil Hansen (Continuum Industries)
Wednesday October 24th, 2pm, Dugald Stewart Building room 1.20
Hyperloop is a new mode of ground-based transport to move people or cargo at speeds that will reduce journey times between cities from hours to minutes. Current efforts to develop and test the technology are almost entirely privately funded; however, governments in North America, Europe and Asia are now positioning themselves to be the first to take advantage, given its potential to help tackle housing shortages, reduce inequality and improve productivity.
Continuum Industries was formed with the goal of creating the first ever digital model of an entire Hyperloop system, to help cut the technology development time and identify ways to reduce costs and improve performance over the lifecycle of Hyperloop systems. This digital model will allow for rapid development, evaluation and optimisation of Hyperloop systems, spanning from the technical subsystems involved to the overall system behaviour and economics, using multivariable optimisation. During our presentation, we will talk about Hyperloop in general, our take on it and our needs for high performance computing.
What's in a NAME?
Kevin Stratford (EPCC)
Wednesday October 10th, 2pm, Bayes Centre ground floor seminar room (G.03)
NAME is a Lagrangian/Eulerian model used by the Met Office to study particle dispersion: this encompasses volcanic eruptions, smoke from fires, air pollution and air quality, the airborne spread of radioactivity, viruses and other disease vectors. (NAME also incorporates a midge model.) It is used both operationally and for research.
In some recent work I produced a distributed memory parallelisation of NAME to augment the existing thread-level parallelism. This talk will give an overview of the background of NAME, describe a little about how it works, and discuss the steps taken in the parallelisation. I will also mention the testing and development strategy, and the tools used to help this process.
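The Lagrangian approach NAME takes can be caricatured as a random walk: each model particle is advected by the mean wind and perturbed by a random turbulent displacement each step (a one-dimensional toy of my own, not NAME's actual scheme):

```python
import random

def disperse(n_particles, n_steps, wind, sigma, seed=1):
    # Advect every particle by the mean wind each step, plus a random
    # turbulent kick drawn from a Gaussian of width sigma.
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    for _ in range(n_steps):
        xs = [x + wind + rng.gauss(0.0, sigma) for x in xs]
    return xs

def mean(values):
    return sum(values) / len(values)
```

The plume centre drifts with the wind while its spread grows like sqrt(steps) * sigma. Because the particles evolve independently, distributing them across MPI processes is a natural fit for the kind of parallelisation described in the talk.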
Mesoscopic simulations for tailoring active particles
Francisco Alarcón Oseguera (Universidad Complutense de Madrid, HPC Europa visitor at University of Oxford)
Tuesday September 18th, 2pm, Bayes Centre ground floor seminar room (G.03).
Active matter is concerned with the study of systems composed of self-driven units: active particles capable of converting energy into systematic movement, in contrast to Brownian particles, which are subject only to thermal fluctuations. A remarkable feature of active matter is its natural tendency to self-organize. One striking instance of this ability is the generation of what have been called living clusters, where clusters broadly distributed in size constantly move and evolve through particle exchange, breaking or merging. Experiments in this field are developing at a rapid pace and a new theoretical framework is needed to establish a “universal” behaviour among these internally driven systems.
In this talk, I will show results of numerical simulations of active particles forming living clusters; such structures are similar to the clusters observed in experiments with both Janus chemotactic and dipolar active particles. I will present the influence of both hydrodynamic and anisotropic interactions on the formation of clusters by measuring morphological and dynamical features of the system.
New ways of detecting spatial modes of light using machine learning
Adam Valles Mari (Institute of Photonic Sciences Barcelona, HPC Europa visitor at Heriot-Watt University)
Wednesday September 19th, 2pm, Bayes Centre ground floor seminar room (G.03).
Photons can be described in terms of their spatial modes – the “patterns” of light. As there are an infinite number of spatial modes, entanglement in this degree of freedom offers the opportunity to realize high-dimensional quantum states. In this seminar, we will review some applications where the patterns of light can be used, studying the advantages and disadvantages of using such entangled states for ghost imaging experiments, or as a means to encode information for secure quantum communication channels, considering the preservation of entanglement through noisy channels, e.g., a turbulent atmosphere. We will explain how to create such states in the laboratory and how to improve their detection by using machine learning techniques.
Simulating rare events from astrophysical populations with application to double compact object mergers as gravitational-wave sources
Floor Broekgaarden (University of Amsterdam, HPC Europa visitor at Universities of Edinburgh and Birmingham)
Thursday September 13th, 2pm, Dugald Stewart Building room 1.20.
Rare events are often also among the most interesting in our Universe. An example is the merger of double compact objects (DCO), which produces gravitational waves that are observed by interferometers used in the LIGO and Virgo collaborations. Their detection has opened up a new era in Astronomy by providing unique insights into their extreme gravity, the plethora of possible accompanying electromagnetic transients, the death of massive stars, and the evolution of binary systems.
However, these systems are a rare outcome in current models of stellar evolution, making the study of a large population of DCO mergers an extremely computationally expensive endeavor. We therefore developed the algorithm STROOPWAFEL to substantially improve population synthesis studies of rare events. The method is based on adaptive importance sampling techniques and generates samples from a distribution function that is automatically adapted to focus on areas of the parameter space found to produce the events of interest. We implement the method in the stellar evolution code COMPAS and find that it can drastically improve the performance of double compact object simulations over traditional sampling methods. We describe the algorithm and showcase the performance of STROOPWAFEL explicitly by demonstrating that it maps the parameter space with higher resolution and estimates rates and distribution functions with smaller uncertainties.
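The importance-sampling idea at the heart of this approach can be illustrated on a one-dimensional stand-in problem (my own sketch with a fixed, non-adaptive proposal; STROOPWAFEL adapts the proposal automatically in a high-dimensional parameter space):

```python
import math
import random

def rare_event_prob(threshold, n_samples, shift, seed=0):
    # Estimate P(X > threshold) for X ~ N(0, 1) by sampling from a
    # proposal N(shift, 1) centred on the rare region, and reweighting
    # each hit by the density ratio p(x)/q(x).
    rng = random.Random(seed)

    def log_pdf(x, mu):  # log density of a unit-variance Gaussian
        return -0.5 * (x - mu) ** 2 - 0.5 * math.log(2 * math.pi)

    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(shift, 1.0)        # draw from the proposal q
        if x > threshold:                # indicator of the rare event
            total += math.exp(log_pdf(x, 0.0) - log_pdf(x, shift))
    return total / n_samples
```

P(X > 4) is about 3.2e-5, so plain Monte Carlo with 10^5 samples would typically see only a handful of events (often none); with the proposal shifted onto the threshold, roughly half of all samples land in the rare region and the weighted estimate is accurate to a few percent.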
Parallel-in-time integration for time-dependent partial differential equations
Daniel Ruprecht (University of Leeds)
Wednesday August 29th, 2pm, Appleton Tower LT3.
The rapidly increasing number of cores in high-performance computing systems causes a multitude of challenges for developers of numerical methods. New parallel algorithms are required to unlock future growth in computing power for applications and energy efficiency and algorithm-based fault tolerance are becoming increasingly important. So far, most approaches to parallelise the numerical solution of partial differential equations focussed on spatial solvers, leaving time as a bottleneck. Recently, however, time stepping methods that offer some degree of concurrency, so-called parallel-in-time integration methods, have started to receive more attention.
I will introduce two different numerical algorithms, Parareal (by Lions et al., 2001) and PFASST (by Emmett and Minion, 2012), that make it possible to exploit concurrency along the time dimension in parallel computer simulations solving partial differential equations. Performance results for both methods on different architectures and for different equations will be presented. The PFASST algorithm is based on merging ideas from Parareal, spectral deferred corrections (SDC, an iterative approach to deriving high-order time stepping methods by Dutt et al., 2000) and nonlinear multi-grid. Performance results for PFASST on close to half a million cores will illustrate the potential of the approach. Algorithmic modifications like IPFASST will be introduced that can further reduce solution times. I will also show recent results on how parallel-in-time integration can provide algorithm-based tolerance against hardware faults.
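As a concrete (if serial) sketch of Parareal, assuming only its standard predictor-corrector form: a cheap coarse propagator G makes a serial prediction, and expensive fine propagations F - the part run in parallel across time slices in practice - correct it iteratively via U[n+1] = G(U'[n]) + F(U[n]) - G(U[n]). The model problem and step counts below are my own choices:

```python
def parareal(coarse, fine, y0, n_slices, n_iters):
    # Serial coarse prediction across all time slices.
    U = [y0]
    for _ in range(n_slices):
        U.append(coarse(U[-1]))
    for _ in range(n_iters):
        # Fine sweeps over each slice: independent, hence parallelisable.
        F = [fine(u) for u in U[:-1]]
        G_old = [coarse(u) for u in U[:-1]]
        # Serial coarse correction sweep.
        new_U = [y0]
        for n in range(n_slices):
            new_U.append(coarse(new_U[-1]) + F[n] - G_old[n])
        U = new_U
    return U

# Model problem dy/dt = -y on [0, 1], split into 10 time slices.
dt = 0.1
coarse_step = lambda y: y - dt * y          # one explicit Euler step

def fine_step(y):                           # 100 Euler sub-steps per slice
    h = dt / 100
    for _ in range(100):
        y -= h * y
    return y
```

After k iterations the first k slices are exact, so with as many iterations as slices Parareal reproduces the serial fine solution (up to rounding); the speed-up in real deployments comes from stopping after far fewer iterations.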
Optimizing the mutation testing process using HPC systems
Currently, one of the main challenges in testing is to obtain an appropriate test suite. Mutation testing is a widely used technique aimed at generating high-quality test suites. However, executing this technique incurs a high computational cost. In order to alleviate this issue we propose OUTRIDER, an HPC-based optimization that contributes to bridging the gap between the high computational cost of mutation testing and the parallel infrastructures of HPC systems. This optimization is based on our previous work, EMINENT, an algorithm focused on parallelizing the mutation testing process using MPI.
Molecular dynamics of the Ca2+ signalization in neuronal potassium channels
Ion channels are protein molecules that span across the cell membrane, allowing the passage of ions from one side of the membrane to the other. In combination with the current state-of-the-art computational algorithms and high-performance computing facilities, molecular dynamics simulations have become a prominent tool to investigate membrane protein organisation and dynamics.
The availability of structural information at atomic resolution for the Kv7 family has improved significantly in recent years. We have discovered the critical requirement of the interaction with the Ca2+ sensor calmodulin (CaM) for the functioning of a voltage-dependent potassium channel, Kv7.2, which is essential for the control of neuronal excitability. Recently, we have generated by NMR an atomic model of the complex, one of the largest solved using this technique, which provides a remarkable substrate for the proposed molecular dynamics studies. Using FRET and NMR, we have seen that Ca2+ causes a global structural change of the intracellular C-terminal domain of Kv7.2. This information, together with the recently solved near-complete structure of the homologous Kv7.1 channel by cryo-EM in a membrane, will help render a full description of the signalling events.
We use the NAMD molecular dynamics software to simulate and analyse the molecular movements caused by Ca2+ signaling, and VMD for visualization.
IoT-Hub: New IoT Data-Platform for Virtual Research Environments
Rosa Filgueira Vicente (EPCC)
Wednesday July 11th, 2pm, James Clerk Maxwell Building, room 4325A
This work presents IoT-Hub, a new scalable, elastic, efficient, and portable Internet of Things (IoT) data-platform based on microservices for monitoring and analysing large-scale sensor data in real-time. IoT-Hub allows us to collect, process, and store large amounts of data from multiple sensors in distributed locations, and could be deployed as a backend for Virtual Research Environments (VREs) or Science Gateways. In the proposed data-platform, all required software, which involves a variety of state-of-the-art open-source middleware, is packed into containers and deployed in a cloud environment. As a result, the engineering and computational time and costs for deployment and execution are significantly reduced.
Turbo-charge your browser with WebAssembly
James Perry (EPCC)
Wednesday June 27th, 2pm, James Clerk Maxwell Building, room 4325A
WebAssembly is a new browser technology with the potential to enable more performant and efficient client-side web applications. Most major browsers are now shipping with WebAssembly support. In this seminar I will give a brief introduction to WebAssembly and walk through how to get started with it. I will also be demonstrating a practical application that we have ported to WebAssembly: an acoustic ray tracer developed last year as part of the A3 project, which aims to develop fast and accurate acoustic simulation tools for architects.
ARCHER CRAY XC30 Compute Node Stripdown
Martin Lafferty (CRAY)
Wednesday May 9th, 2pm, James Clerk Maxwell Building, room 4325A
Users of supercomputers are often presented with a black box into which they log in and run their applications, which are sometimes exceedingly large and complex. The ARCHER CRAY XC30 supercomputer consists of 26 cabinets, contains 4920 compute nodes and hence nearly 120,000 compute cores, connected with a large array of various interconnections.
This talk gives users a chance to see the hardware that is part of the system they use. A presentation into the physical architecture of a CRAY XC30 will be followed by a rare chance to see and touch the components involved. This will include a full strip-down of a CRAY compute module down to the individual subassemblies, CPUs, etc.
"Why is MPI so slow?"
Daniel Holmes (EPCC)
Thursday May 3rd, 2pm, James Clerk Maxwell Building, room 4325A
The MPICH team published a paper entitled “Why Is MPI So Slow?” at SC17. They describe some important optimisation work inside MPICH, but the paper also contains several flaws and misconceptions. In this seminar, I will cover the good, the bad, and the ugly aspects of this particular paper and sketch out a road-map for holistically addressing the question posed in its title.
Prediction and characterization of low-dimensional structures of antimony, indium and aluminum
Materials research is a key factor in the advancement of technology. The discovery, analysis and, finally, commercialization of new materials enable society to cope with technological challenges, economic problems, and ecological issues. Current trends in technology impose several prerequisites on the devices to be developed: small dimensions, low price, greater efficiency, and better properties.
The largest share of modern technology belongs to the fields of electronics, energy and optics, with applications derived from nanomedicine to astronautics. Increasing the number of chemical elements used increases the number of components in the devices (transistors, batteries, purifiers). Therefore, it is a natural requirement to have smaller dimensions for these components.
Recent research includes 2D materials. Today, the greatest attention is drawn to single-layer materials made of only one type of atom; to transition metal dichalcogenides, with general formula MX2, where M is a transition metal (e.g. Mo, W, Ti, Zr, Ta, Nb) and X is a chalcogen (e.g. S, Se, Te); and to carbides and/or carbonitrides of early transition metals (MXenes).
In this talk, one-layer (2D) allotropic modifications of the elements antimony, indium and aluminum will be proposed: antimonene, indiene and aluminene, respectively. The existing research and plans for future research will be shown, along with preliminary results.
In the last few years, the machine learning community has focused primarily on developing AI approaches known as deep learning. Perhaps everyone has heard about the TensorFlow framework and its application to the Google Translate service. Deep learning is a powerful tool; however, it is still a black box - explaining the models it produces and analysing the nature of the networks are tough open problems. Therefore, conventional machine learning techniques are still popular for solving particular problems where an explanation of the model is desired.
For classification challenges, Support Vector Machines (SVMs) are widely used across different scientific disciplines, namely the geo and environmental sciences, bioinformatics, and computer vision. You may come across implementations of SVM solvers designed for graphics cards, shared-memory systems, or Xeon Phis; however, no HPC solution exists. Therefore, my colleagues from the PERMON team and I decided to develop PermonSVM.
In my talk, I will introduce the early stage of PermonSVM development, lightly summarize the theoretical background of SVMs, describe approaches for solving multiclass and multilabel problems, and explain the transformation of the SVM model into probability space.
Theory and Simulation of Time-Resolved X-ray Scattering Experiments
Modern pulsed X-ray sources permit time-dependent measurements of dynamical changes in molecules via non-resonant scattering. The planning, analysis, and interpretation of such experiments, however, require a firm and elaborate theoretical framework as well as advanced numerical simulations.
We have derived appropriate expressions that describe the time-resolved X-ray scattering signal by means of quantum electrodynamics and implemented them in a simple algorithm. Their evaluation requires various inputs, most notably scattering matrix elements that we compute with our own code from wave functions obtained with commercial quantum chemistry software. Since these calculations involve the optimisation of several electronic eigenstates with high-level methods and large basis sets for hundreds of points in nuclear coordinate space, the computational costs are significant even for small systems, and HPC resources such as those provided by EPCC are necessary.
In my talk I will summarise the main aspects of our theory and simulations, highlight their challenges, and illustrate key points with results from our current research.
Performance Portability with Kokkos: An Introduction
Kevin Stratford (EPCC)
Wednesday March 28th 2018, James Clerk Maxwell Building, room 6206
The issue of performance portability - that is, being able to write code which runs effectively on different architectures - is an important one for scientific applications. In this talk I will give an introductory overview of Kokkos, a C++ library developed by Sandia National Labs in the US which addresses performance portability at the node level.
I will assume no prior knowledge of specific features of C++, and will explain what is required along the way.
I will discuss the central Kokkos idea of a parallel pattern. This is combined with an execution policy and a definition of the computational kernel to provide a level of abstraction which can be compiled to run on different architectures (typically including CPU and GPU). The common parallel patterns of "for" and "reduction" are used as examples, and compared with OpenMP. Memory abstraction and hierarchical parallelism involving thread and vector levels will also be covered.
All the material here is derived from a recent workshop. Kokkos source, tools, and tutorial material are available at: https://github.com/kokkos
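Kokkos itself is a C++ library, but its central idea - separating the parallel pattern and execution policy from the computational kernel - can be mimicked in a few lines of Python (an illustrative sketch of the abstraction only, not the Kokkos API):

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce

def parallel_for(policy, kernel):
    # Pattern "for": apply the kernel to every index in the policy's range.
    with ThreadPoolExecutor(max_workers=policy["workers"]) as pool:
        list(pool.map(kernel, range(policy["n"])))

def parallel_reduce(policy, kernel, combine, init):
    # Pattern "reduction": map the kernel over the range, then combine
    # the partial results into a single value.
    with ThreadPoolExecutor(max_workers=policy["workers"]) as pool:
        return reduce(combine, pool.map(kernel, range(policy["n"])), init)
```

A sum of squares becomes `parallel_reduce({"n": 100, "workers": 4}, lambda i: i * i, lambda a, b: a + b, 0)`. In Kokkos the same user-side kernel can be compiled against, for example, an OpenMP or CUDA back-end, which is where the performance portability comes from.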
An introduction to The Data Lab innovation centre
Brian Hills, Richard Carter, Matthew Higgs, Caterina Constantinescu (The Data Lab)
Wed January 31st 2018, James Clerk Maxwell Building, room 4325A
The Data Lab has been created to deliver economic and social impact to Scotland by catalysing data innovation across the country.
In this seminar Brian (Head of Data) will present an overview of The Data Lab’s focus and impact to date across the three pillars of collaborative innovation, skills and community. Richard, Matt and Caterina (our Data Science team) will present recent projects they have been working on.
The Data Lab will be moving into the Bayes building this year with EPCC and others. The objectives of the session will be to both share knowledge on our work and catalyse further collaboration in the future.
Progressive load balancing of asynchronous algorithms
Justs Zarins (Centre for Doctoral Training in Pervasive Parallelism, EPCC and Informatics, University of Edinburgh)
Wed November 8th, 2017, James Clerk Maxwell Building, room 4325A
Synchronisation in the presence of noise and hardware performance variability is a key challenge that prevents applications from scaling to large problems and machines. Using asynchronous or semi-synchronous algorithms can help overcome this issue, but at the cost of reduced stability or convergence rate. In this paper we propose progressive load balancing to manage progress imbalance in asynchronous algorithms dynamically. In our technique the balancing is done over time, not instantaneously.
Using Jacobi iterations as a test case, we show that, with CPU performance variability present, this approach leads to higher iteration rates and lower progress imbalance between parts of the solution space. We also show that under these conditions the balanced asynchronous method outperforms synchronous, semi-synchronous and totally asynchronous implementations in terms of time to solution.
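To fix ideas, here is what the underlying iteration looks like in both flavours (a serial toy of my own; the talk's progressive load balancing concerns how distributed workers at different progress levels are rebalanced, which a serial sketch cannot show). The first version updates every component from the previous iterate, as if behind a barrier; the second updates in place, so each component sees whatever neighbour values are freshest:

```python
def jacobi_sync(A, b, n_iters):
    # Synchronous Jacobi: every update reads the previous iterate only,
    # x_i <- (b_i - sum_{j != i} A_ij * x_j) / A_ii.
    n = len(b)
    x = [0.0] * n
    for _ in range(n_iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def jacobi_async_style(A, b, n_iters):
    # No barrier: in-place updates use the freshest available values,
    # mimicking an asynchronous (chaotic) iteration.
    n = len(b)
    x = [0.0] * n
    for _ in range(n_iters):
        for i in range(n):
            x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x
```

For the diagonally dominant system A = [[4, 1], [1, 3]], b = [1, 2], both converge to (1/11, 7/11). On a real machine, however, the asynchronous variant's progress depends on which remote updates have actually arrived, and performance variability lets parts of the domain fall behind - exactly the imbalance the talk addresses.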
Potholes in the Amazon (Cloud) - AWS Pipelines for the IoT
Alistair Grant (EPCC, University of Edinburgh)
Wed October 11th, 2017, James Clerk Maxwell Building, room 4325A
Road surface potholes can cause problems for all road users, so how do we detect them and prioritise their repair? We will take a look at some of the Amazon Web Services (AWS) technologies that we have been using as part of a data engineering project to build a prototype backend system for data collection, querying and analysis of pothole detection.
We will look at DynamoDB (a NoSQL database service), Amazon Lambda Functions, API Gateway and possibly a few others. We highlight some of the strengths and weaknesses of these technologies by examining them in the context of our example use cases.
Graph-based problems and the SpiNNaker neural HPC architecture
Dr Alan Stokes (Advanced Processor Technologies group, School of Computing Science, University of Manchester)
Wed September 6th 2017, James Clerk Maxwell Building, room 4325A
This talk highlights two of the many issues high performance computers will have to tackle to reach an exascale machine - power and data communication - and how these problems are starting to be solved. We discuss how software applications will need to be adapted for the solutions to these problems, and then describe the SpiNNaker hardware platform and its synergies with these solutions. We then walk through a simple application mapped from standard C code onto SpiNNaker, and its performance. We end with options on how to acquire access to SpiNNaker hardware and training.
SpiNNaker is a novel computer architecture inspired by the working of the human brain. A SpiNNaker machine is a massively parallel computing platform, targeted towards three main areas of research:
• Neuroscience. Understanding how the brain works is a Grand Challenge of 21st century science. We will provide the platform to help neuroscientists to unravel the mystery that is the mind. The largest SpiNNaker machine will be capable of simulating a billion simple neurons, or millions of neurons with complex structure and internal dynamics.
• Robotics. SpiNNaker is a good target for researchers in robotics, who need mobile, low power computation. A small SpiNNaker board makes it possible to simulate a network of tens of thousands of spiking neurons, process sensory input and generate motor output, all in real time and in a low power system.
• Computer Science. SpiNNaker breaks the rules followed by traditional supercomputers that rely on deterministic, repeatable communications and reliable computation. SpiNNaker nodes communicate using simple messages (spikes) that are inherently unreliable. This break with determinism offers new challenges, but also the potential to discover powerful new principles of massively parallel computation.
MONC: an LES for cloud and atmospheric modelling
Dr Nick Brown (EPCC, University of Edinburgh)
Wed August 30th 2017, James Clerk Maxwell Building, room 4325A
For the past three years I have been working with the Met Office on the Met Office NERC Cloud model (MONC). This replaces a thirty-year-old model which has been a crucial tool for the UK weather and climate communities but which exhibited significant issues around performance, scalability and the code itself. Our replacement, MONC, has been written from scratch, maintaining the science of the previous model but with modern software engineering and parallelisation techniques. The aim has been to enable scientists to study vastly larger systems, at far higher accuracy, over many cores. In addition to computation, scientists also want to perform analysis on the raw data in order to generate higher-level information. This is a challenge because the raw data is very large (many TBs), so it is not realistic to write it out to file and analyse offline. Instead the analysis is performed in-situ on the data as it is generated, which raised several challenges that we had to solve. I will talk about both these aspects of MONC, as well as some of the offshoot work we have looked at, such as porting and evaluating aspects of the model on GPUs and KNLs.
Experiences from EPCC's first MOOC: Supercomputing
Dr David Henty (EPCC, University of Edinburgh)
Wed July 26th 2017, James Clerk Maxwell Building room 4325A
As part of PRACE (Partnership for Advanced Computing in Europe), EPCC ran its first ever MOOC (Massive Open Online Course) in March this year. The 5-week course used the FutureLearn platform - www.futurelearn.com/courses/supercomputing - which hosts many other Edinburgh MOOCs including the Higgs course from SoPA. In this short informal talk I will cover the history of the course, the process of designing our first MOOC, features of the FutureLearn platform and experiences from the first run in March. I will also compare and contrast MOOCs with other online teaching such as the HPC distance-learning courses we run as part of the DSTI (Data ScienceTech Institute) MSc programme. *Note: the next run of the MOOC starts August 28th - register now!*
Solar Panel detection in Satellite Images using Deep Learning
Marc Sabate (EPCC, University of Edinburgh)
Wed July 12th 2017, James Clerk Maxwell Building room 4325A
Deep Learning models have become very popular with the release of libraries such as TensorFlow, Torch, or Theano, which make it possible to train deep networks in a reasonable amount of time. In this talk I will present how a Convolutional Neural Network can be used to detect solar panels in satellite images.
This talk will start with a brief overview of binary classification problems using Logistic Regression. We will see how Logistic Regression models are built under the assumption that classes are linearly separable, and how Neural Networks can overcome this limitation. I will provide a definition of Convolutional Neural Networks, a particular type of Neural Network specifically designed for image processing problems, and I will finally present a network that successfully detects solar panels in satellite images from four cities in California.
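The linear-separability limitation is easy to demonstrate with a from-scratch logistic regression (a minimal sketch of my own, not the network from the talk):

```python
import math

def train_logreg(X, y, lr=0.5, epochs=2000):
    # Plain stochastic gradient descent on the logistic loss.
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - yi                       # gradient of the loss wrt z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    # The decision boundary is the hyperplane w.x + b = 0.
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

INPUTS = [[0, 0], [0, 1], [1, 0], [1, 1]]
AND_LABELS = [0, 0, 0, 1]   # linearly separable: learnable
XOR_LABELS = [0, 1, 1, 0]   # not linearly separable: provably not
```

Logistic regression fits AND perfectly but can never reach 100% on XOR, whatever the training budget, because no single hyperplane separates the classes; a network with a hidden layer (or, for images, convolutional layers) removes that restriction.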
It is all still an ExaHyPE
Dr Tobias Weinzierl (Department of Computer Science, Durham University)
Wed June 28th 2017, James Clerk Maxwell Building room 4325A
ExaHyPE (http://www.exahype.eu) is an H2020 project in which an international consortium of scientists is writing a simulation engine for hyperbolic equation system solvers based upon the ADER-DG paradigm. Two grand challenges are tackled with this engine: long-range seismic risk assessment and the search for gravitational waves emitted by rotating binary neutron stars. The code itself is based upon a merger of flexible spacetree data structures with highly optimised compute kernels for the majority of the simulation cells. It provides a very simple and transparent domain-specific language as a front-end that allows users to rapidly set up parallel PDE solvers discretised with ADER-DG or Finite Volumes on dynamically adaptive Cartesian meshes.
This talk starts with a brief overview of ExaHyPE and demonstrates how ExaHyPE codes are programmed, before it sketches the algorithmic workflow of the underlying ADER-DG scheme. We rephrase steps of this workflow in the language of tasks.
We then focus on a few methodological questions: how can we deploy these tasks to manycores, what execution patterns do arise, and are the new OpenMP task features of any use? How can we rearrange ADER-DG's workflow such that we reduce accesses to the memory, i.e. weaken the pressure on the memory subsystem? How can we reprogram the most expensive tasks such that they exploit the wide vector registers coming along with the manycores? A brief outlook on MPI parallelisation wraps up this methodological talk.
We focus on results obtained on Intel KNL nodes provided by the RSC Group, on Intel Broadwell results from Durham's supercomputer Hamilton, and on results from the SuperMUC phase 2 supercomputer at Leibniz Supercomputing Centre.
This is joint work with groups from Frankfurt's FIAS, the University of Trento, as well as Ludwig-Maximilians-University Munich and Technical University of Munich.