Some of the technology shown in last year's blockbuster movie Minority Report may soon be a reality and a centrepiece of the intelligence community's war on terrorism. In the futuristic thriller, Tom Cruise played the head of a police unit that uses psychic technology to arrest and convict murderers before they commit their crimes.
Research into new intelligence technology is taking place as part of a $US54 million program known as Genoa II, a follow-on to the Genoa I program, which focused on intelligence analysis.
In Genoa II, the Defense Advanced Research Projects Agency (DARPA) is studying potential IT that may not only enable new levels of collaboration among teams of intelligence analysts, policy-makers and covert operators, but could also make it possible for humans and computers to "think together" in real time to "anticipate and preempt terrorist threats", according to official program documents.
"While Genoa I focused on tools for people to use as they collaborate with other people, in Genoa II, we also are interested in collaboration between people and machines," said Tom Armour, Genoa II program manager at DARPA, speaking at last year's DARPATech 2002 conference in Anaheim, California. "We imagine software agents working with humans . . . and having different sorts of software agents also collaborating among themselves."
Genoa II may be shelved because of its central role in the controversial Terrorism Information Awareness program, but private-sector researchers say many significant advances are still possible and are, in fact, already happening.
For example, private-sector researchers are studying cognitive amplifiers that can enable software to model current situations and predict "plausible futures." Researchers are also on the verge of creating practical applications to support cognitive machine intelligence, associative memory, biologically inspired algorithms and Bayesian inference networks. The latter draw on a branch of probability theory which holds that uncertainty about the world and about outcomes of interest can be modeled by combining prior, common-sense assumptions with evidence observed in the real world.
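That updating rule is just Bayes' theorem, and a minimal sketch shows how a prior belief shifts when evidence arrives. The numbers below are hypothetical, not figures from any DARPA program:

```python
# Bayes' rule: combine a prior belief with observed evidence.
def bayes_update(prior, hit_rate, false_alarm_rate):
    """Return P(threat | alert) given P(threat), P(alert | threat),
    and P(alert | no threat)."""
    p_alert = hit_rate * prior + false_alarm_rate * (1 - prior)
    return hit_rate * prior / p_alert

# Hypothetical numbers: 1% of monitored events are threats; an alert
# fires for 90% of real threats but also for 5% of benign events.
posterior = bayes_update(prior=0.01, hit_rate=0.9, false_alarm_rate=0.05)
print(round(posterior, 3))  # 0.154: most alerts are still false alarms
```

The counterintuitive result, a 90%-accurate alert that is still wrong five times out of six, is exactly the kind of reasoning under uncertainty an inference network automates at scale.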
The goal of all of this research is to find a way to make computers do the one thing they aren't very good at: mimicking the human brain's ability to reduce complexity. Computers are good at doing things like playing chess but are incapable of "seeing" and deciphering a word within an image. Biologically inspired algorithms — the mathematical underpinnings of cognitive machine intelligence — could change that.
"One way to make computers more intelligent and lifelike is to look at living systems and imitate them," says Melanie Mitchell, an associate professor at Oregon Health & Science University's School of Science & Engineering in Portland and author of a book on genetic algorithms. "People have already done that with the brain through neural networks, which were inspired by the way the human brain works."
"In the brain, you have a huge number of simple elements — neurons — that are either on or off and are working in parallel. And in ways that are still fairly mysterious, that seems to collectively produce very sophisticated learning," says Mitchell.
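The simple on/off elements Mitchell describes can be sketched with a single artificial neuron. The perceptron below is a standard textbook example, not code from any of the projects mentioned; it learns logical AND from four labeled examples:

```python
# A single artificial neuron: weighted inputs pass through a threshold,
# echoing the on/off firing of biological neurons.
def step(x):
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule: nudge weights toward each error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in and_data])
# prints [0, 0, 0, 1]
```

Real neural networks stack many such units in parallel layers; the "fairly mysterious" collective learning Mitchell mentions emerges from millions of these tiny weight adjustments.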
But there are other examples of astounding possibilities, all of which have potential applications in the war on terrorism. For example, Mitchell points to ongoing studies in genetic algorithms, which are inspired by evolution: rather than requiring a person to engineer a solution to a problem, the computer program evolves one. Likewise, researchers are beginning to produce security applications that mimic the human immune system, she says.
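A genetic algorithm of the kind Mitchell studies can be sketched in a few lines. The toy below evolves a bit string toward an all-ones target; the fitness function and parameters are illustrative choices, not drawn from any real security application:

```python
import random

random.seed(42)  # deterministic toy run
TARGET_LEN = 20  # evolve a 20-bit string toward all ones

def fitness(bits):
    return sum(bits)  # number of correct (1) bits

def evolve(pop_size=40, generations=100, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == TARGET_LEN:
            break
        parents = pop[:pop_size // 2]            # selection: keep top half
        children = [list(pop[0]), list(pop[1])]  # elitism: carry best two over
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, TARGET_LEN)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate)  # rare bit flips
                     for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

No one engineers the answer: selection, crossover and mutation discover it, which is why the technique appeals for problems where human designers don't know the solution in advance.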
Despite formidable technological challenges, there have been successes that could become real products and applications in the next 12 to 24 months. One of those successes has been in the development of inference networks.
"Some of the core algorithms we are working with have been around for centuries," says Ron Kolb, director of technology at San Francisco-based Autonomy, a firm that makes advanced pattern-recognition and knowledge management software. "It's just now that we're finding the practical applications for them."
For example, Autonomy uses a proprietary blend of Bayesian statistics and Claude Shannon's information theory, which holds that the critical elements of information can be separated from large streams of data, audio included, to produce algorithms that make computers smarter and able to learn.
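One piece of Shannon's theory, entropy as a measure of information, is easy to illustrate. The sketch below is a generic textbook measure, not Autonomy's proprietary algorithm; it scores how much information a stream of symbols carries:

```python
import math
from collections import Counter

def shannon_entropy(stream):
    """Average information per symbol, in bits."""
    counts = Counter(stream)
    total = len(stream)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

uniform = list("abcd") * 25        # four symbols, equally likely
skewed = ["a"] * 97 + list("bcd")  # one symbol dominates the stream

print(shannon_entropy(uniform))  # 2.0 bits: maximally unpredictable
print(shannon_entropy(skewed))   # about 0.24 bits: mostly redundant
```

The skewed stream is mostly redundant, so the rare symbols in it are where the information lives, which is one principled way to separate "important patterns" from background noise.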
"We're able to produce an algorithm that says here are the patterns that exist, here are the important patterns that exist, here are the patterns that contextually surround the data, and as new data enters the stream, we're able to build associative relationships to learn more as more data is digested by the system," says Kolb.
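The associative learning Kolb describes can be caricatured with simple co-occurrence counting. The class below is a hypothetical illustration, not Autonomy's system; as each new document is digested, the associations between terms strengthen:

```python
from collections import defaultdict

class AssociativeIndex:
    """Learn term associations incrementally from streamed documents."""

    def __init__(self):
        self.cooccur = defaultdict(lambda: defaultdict(int))

    def digest(self, document):
        # Strengthen the link between every pair of co-occurring terms.
        words = set(document.lower().split())
        for w in words:
            for other in words - {w}:
                self.cooccur[w][other] += 1

    def associates(self, term, top=3):
        # Terms most strongly associated with `term` so far.
        ranked = sorted(self.cooccur[term].items(),
                        key=lambda kv: kv[1], reverse=True)
        return [word for word, _ in ranked[:top]]

index = AssociativeIndex()
index.digest("suspicious bag left at airport terminal")
index.digest("unattended bag reported at airport gate")
print(index.associates("bag"))
```

After two documents, "bag" is already most strongly tied to "airport"; each further document refines the associations, which is the sense in which the system learns more as more data is digested.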
The computers of tomorrow will also know when two or more intelligence analysts are interested in or working on the same problem and will automatically link those analysts and their data, he says.
In fact, many automotive and aerospace manufacturers have used rudimentary pieces of this type of capability and have saved millions of dollars by leveraging developmental expertise across functional areas, says Kolb. Likewise, such computers could spot a person leaving a suspicious bag at an airport and automatically alert security. "We're no longer looking for information; information is looking for us," Kolb says.
But Grant Evans, CEO of A4Vision in Cupertino, California, and an expert in cognitive machine intelligence and biologically inspired algorithms, says he thinks he has an idea of where it's all leading. "The algorithms today, particularly biometric algorithms, are very intuitive, meaning the more you use them, the more they learn," says Evans. "Now we're integrating cognitive machine intelligence in the form of video with avatars [3-D digital renderings of real people] that can see and track you. That's the computer of the future."