Many parallel computations use a synchronous algorithm.
In such an algorithm all parallel processing elements move forward in lock-step synchrony. As the number of processing elements becomes very large (and it could be in the millions for future supercomputers), this synchronisation can become seriously inefficient.
One way to improve the situation is to move to an asynchronous algorithm: one in which different processing elements can move forward independently, without the need for synchronisation. At EPCC, we are investigating the use of asynchronous algorithms, particularly in the context of sparse linear algebra.
A second important feature of asynchronous algorithms is their potential to withstand certain types of faults (which are expected to be frequent on a very large machine). Such fault tolerance is expected to be an important property of future algorithms.