The twin themes of this year's Accelerating Change conference at Stanford University last month were AI (artificial intelligence) and IA (intelligence amplification). On the AI track, people talked about making systems smarter. On the IA track, people talked about harnessing collective human intelligence. The tension between the two groups caused some sparks.
One panel, which included Peter Norvig, Google's director of search quality, was berated by an audience member who felt that the grand ambitions of AI should get more respect. Sorry, Norvig said, but machine intelligence isn't what powers Google's search engine. Instead, it relies on a set of clever and continuously improving techniques for connecting the activities of a large population of intelligent authors to the activities of an even larger population of intelligent readers.
A nice example of this approach was cited by another Google executive, Adam Bosworth, in his talk at the MySQL Users Conference in California in April. Rather than consulting a dictionary to propose alternatives to misspelled words, Google mines its own database for patterns of use. If statistics show that a query for "Boswerth" is likely to be followed by a query for "Bosworth," the search engine will make that connection for you.
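The idea can be sketched in a few lines. This is a hypothetical illustration, not Google's method: given a chronological query log, count how often one query is immediately followed by a different query from the same user, and treat frequent follow-ups as likely respellings.

```python
from collections import Counter, defaultdict

def mine_respellings(query_log, min_count=3):
    """Given a chronological list of (user_id, query) pairs, count how
    often a query is immediately followed by a different query from the
    same user; keep frequent follow-ups as likely corrections."""
    followers = defaultdict(Counter)
    last_query = {}
    for user, query in query_log:
        prev = last_query.get(user)
        if prev is not None and prev != query:
            followers[prev][query] += 1  # prev was "corrected" to query
        last_query[user] = query
    # For each query, keep its most common follow-up if seen often enough.
    return {q: c.most_common(1)[0][0]
            for q, c in followers.items()
            if c.most_common(1)[0][1] >= min_count}

log = [("u1", "boswerth"), ("u1", "bosworth"),
       ("u2", "boswerth"), ("u2", "bosworth"),
       ("u3", "boswerth"), ("u3", "bosworth")]
print(mine_respellings(log))  # {'boswerth': 'bosworth'}
```

No dictionary is consulted; the correction emerges entirely from what users actually typed.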
Discussions of software as a service tend to focus on its obvious benefits: zero-footprint deployment and seamless incremental upgrades. Less noticed, but equally valuable, is the constant flow of interaction data. The back-and-forth chatter between an application and its host environment can be a drag when connectivity is marginal, and it precludes offline use. But when this communication flows freely, it shows how individuals and groups are using the software. As they watch, developers become intimate observers of their users. They can't help but think of ways to optimise for the patterns they discover, and, as a result, the software improves gradually and continuously.
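One minimal way to capture that chatter is to instrument the application's own handlers. Here's a hedged sketch (the decorator, handler, and record fields are all invented for illustration): every call to a wrapped function appends a timestamped interaction record that a developer could later analyse.

```python
import time
from functools import wraps

def track_usage(log):
    """Decorator sketch: wrap a handler so every call appends a
    timestamped interaction record to `log`."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            log.append({"event": fn.__name__,
                        "args": repr(args),
                        "ts": time.time()})
            return fn(*args, **kwargs)
        return wrapper
    return deco

events = []

@track_usage(events)
def save_document(doc_id):
    # Stand-in for a real application operation.
    return f"saved {doc_id}"

save_document("report-42")
print(events[0]["event"])  # save_document
```

In a hosted application this stream would flow to the developers by default; in a conventional desktop application it usually never leaves the machine.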
If a partial page refresh results in a server log entry, then a log analyser can know that the focus of interaction was a particular control on the page. That's more information than a conventional full page refresh would have yielded. But if interaction with that control doesn't involve a server transaction, no log entry is recorded, and the result is less revealing than plain HTML would have been.
Relaying fine-grained event data from so-called rich internet applications, based on substrates like Java, Flash, and .Net, requires extra programming and administrative effort. By default, the most visible events may be calls to web services. This is important and interesting information, but because well-designed web services are coarse-grained, it lacks detail.
At the far end of the continuum, conventional GUI applications run autonomously and transmit little or no interaction data. That's changing slowly as they, too, begin to rely on web services.
Software delivered as a service is inherently more capable of continuous improvement. But every application can — and should — produce detailed interaction data. It's worth its weight in gold.
Udell is lead analyst at the InfoWorld Test Centre. Contact him at email@example.com