Just-in-time processing power growth predicted to handle SKA demand

If Moore's Law holds, the power available to SKA will meet the demand, says research

No one really knows for certain whether the processing power and storage will be available to cope with the huge flow of data generated by the Square Kilometre Array radio telescope if it is completed on schedule early in the next decade. Scientists involved with the international project are banking on the capacity of hardware not yet built and the cleverness of software algorithms that have yet to be developed.

However, studies carried out by Tim Cornwell, computing project lead for Australia’s SKA Pathfinder (ASKAP), suggest that if Moore’s Law holds, the power available to SKA will meet the demand.

Cornwell’s graphical presentation to the Rutherford Innovation Showcase described the task as “scaling Mount Exaflop”. An exaflop is a quintillion (1 followed by 18 zeroes) floating-point operations per second and is the order of computing power required to handle the eventual SKA demand.

Modern astronomy is driven by surveys of large regions of the sky, says Cornwell. The time required to conduct a survey depends on the breadth of the field of view, the resolution and the amount of “noise” tolerated in the signal.

Ideally, of course, the noise needs to be minimised, so that astronomers can make sure they are interpreting signals coming from actual objects, but doing this drives up the data rate required.

The flow of data to be analysed in real time by the ASKAP project amounts to a DVD-worth every two seconds, Cornwell says. This pathfinder project will test some of the techniques required for the SKA, but the SKA itself will need to cope with data flows several orders of magnitude larger.
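That figure translates into a readily checked data rate. The sketch below assumes a single-layer DVD capacity of 4.7 GB; the numbers are illustrative, not an official ASKAP specification.

```python
# Back-of-the-envelope check of the ASKAP real-time data rate:
# "a DVD-worth every two seconds" (assuming a 4.7 GB single-layer DVD).
dvd_bytes = 4.7e9        # capacity of one single-layer DVD, in bytes
seconds = 2              # interval quoted by Cornwell

askap_rate = dvd_bytes / seconds   # bytes per second
print(f"ASKAP real-time data rate: {askap_rate / 1e9:.2f} GB/s")

# "Several orders of magnitude larger" for the SKA itself: three
# orders of magnitude would already mean terabytes per second.
ska_rate = askap_rate * 1_000
print(f"Hypothetical SKA-scale rate: {ska_rate / 1e12:.2f} TB/s")
```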

ASKAP will consist of an array of 36 radio dishes, each 12 metres in diameter, and is being built at the Murchison radio astronomy site in Western Australia. It is scheduled for completion in 2013.

Crucial to growing the processing power is not only the capacity of hardware, but the ability to co-ordinate the processing of many clustered nodes each with a large number of processing cores.

The consensus view of the exaflop machine is a processor running at 1 GHz in each of a thousand cores, in each of a million nodes in a cluster. Clusters of 262,000 nodes are within the capacity of current technology, “so the meganode is not that scary,” says Cornwell, “but the kilocore is really scary.”
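The arithmetic behind that consensus design can be checked directly. The sketch below assumes one floating-point operation per clock cycle per core, a simplification; real cores retire several per cycle.

```python
# The "consensus" exaflop machine: 1 GHz per core, a thousand cores
# per node (the "kilocore"), a million nodes per cluster (the
# "meganode"). Assumes one flop per clock cycle per core.
clock_hz = 1e9            # 1 GHz
cores_per_node = 1_000    # the kilocore
nodes = 1_000_000         # the meganode

total_flops = clock_hz * cores_per_node * nodes
print(f"{total_flops:.0e} flop/s")   # 10^18 flop/s: one exaflop
```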

Various approaches to the kilocore challenge are being researched, some involving novel forms of processor and others involving processors of a similar design to those currently used in laptops and servers.

Techniques for managing very large amounts of memory at high speed are also an important part of the challenge of scaling the exaflop mountain.

If you extrapolate the Moore’s Law increase in the processing capacity of the most powerful supercomputers now available, he says, “the SKA demand will meet the processing trend line in 2022 when the SKA itself is due to be built.”
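A rough version of that extrapolation can be sketched as follows. The starting capacity and doubling period below are assumptions chosen for illustration (top supercomputers were around a couple of petaflops when ASKAP was under construction, and Moore's-Law-style growth is often taken as a doubling every 18 months or so); they are not Cornwell's actual figures.

```python
import math

# Extrapolate supercomputer peak performance under an assumed
# Moore's-Law-style doubling until it reaches one exaflop.
start_year = 2010
start_flops = 2e15        # ~2 petaflops at the start (assumed)
doubling_years = 1.5      # assumed doubling period
target_flops = 1e18       # one exaflop

doublings = math.log2(target_flops / start_flops)
year_reached = start_year + doublings * doubling_years
print(f"Exaflop trend line reached around {year_reached:.0f}")
```

Under these assumptions the trend line crosses the exaflop mark in the early 2020s, broadly consistent with the 2022 date Cornwell cites.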

The algorithms needed to interpret the raw flow of data and deduce what objects are out there are also being redeveloped continually.

ASKAP has revised its algorithms extensively once a year since 2006 and expects that rate to continue. That amounts to a rewrite rather than an enhancement, he says – “the legacy code will die”.
