A good deal of debate still surrounds the difference between grid computing and clustering. Both aim to couple the power of several computers closely, either to accomplish a large task or to provide continuous, reliable service (high availability). Grid computing is generally considered more flexible, but harder to implement, than clustering.
Various industry sources identify at least five different ways of making the distinction:
1) In a cluster, all servers perform the same task; in a grid, they may concurrently be doing related tasks or different parts of the same larger task.
2) In a cluster, all machines are of the same type and run the same operating system; a grid is more “heterogeneous” in this respect.
3) A cluster consists of a fixed population of machines, plus or minus the odd takedown or replacement for upgrade, repair or maintenance. A grid consists of a constantly shifting population of machines providing a pooled “resource”. Users don’t know which or how many machines are in the pool at any time.
4) A cluster consists of a number of servers under the “same roof” with the same owner; the servers in a grid are typically widely distributed and/or belong to different organisations or individuals.
5) Clusters chiefly aim at high availability; grids are oriented more towards raw compute power.
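The contrast in points 1 and 3 can be sketched in code. The following is an illustrative toy, not any real cluster or grid middleware: the class names (`Cluster`, `GridPool`) and methods (`submit`, `join`, `leave`) are invented for the example. It shows a cluster as a fixed set of nodes all doing the same job, and a grid as a shifting pool that hands out parts of a larger task to whatever machines happen to be available.

```python
import random

class Cluster:
    """Fixed population of like machines; every node runs the same task."""
    def __init__(self, nodes):
        # membership changes only for the odd repair or upgrade
        self.nodes = list(nodes)

    def submit(self, task):
        # distinction 1: all servers perform the same task
        return [f"{node}: {task}" for node in self.nodes]


class GridPool:
    """Shifting pool of heterogeneous machines pooled as one 'resource'."""
    def __init__(self):
        self.pool = set()

    def join(self, node):
        self.pool.add(node)       # machines join the pool over time...

    def leave(self, node):
        self.pool.discard(node)   # ...and drop out again

    def submit(self, subtasks):
        # distinction 1 and 3: different parts of a larger task are
        # farmed out to whichever machines are currently in the pool;
        # the user never sees which, or how many, machines those are
        workers = random.sample(sorted(self.pool),
                                k=min(len(subtasks), len(self.pool)))
        return dict(zip(subtasks, workers))


cluster = Cluster(["srv1", "srv2", "srv3"])
print(cluster.submit("serve-web"))

grid = GridPool()
for n in ["linux-box", "windows-pc", "mac-laptop"]:
    grid.join(n)
grid.leave("windows-pc")          # the pool shifts underneath the user
print(grid.submit(["part-a", "part-b"]))
```

The design choice mirrors distinction 4 as well: the `Cluster` is constructed once with a known roster, while the `GridPool` starts empty and only ever learns its members as they come and go.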
An Oracle comparison alludes to several of these factors:
“Grids differ from clusters because grids share resources from and among independent system owners. Grids are configured from computer systems that are individually managed and used both as independent systems and as part of a grid. Individual components are not ‘fixed’ in a grid and the overall configuration of a grid changes over time. The result is a system that assesses and optimises its utilisation of resources.”
Yet a “fact sheet” supplied by Oracle as backgrounder to the Australia-New Zealand survey describes clustering as “a popular strategy for implementing grid computing.”