DARPA seeks deep-learning AI to cope with flood of information
The growing use of UAVs loitering over enemy territory and sending images and streaming video back to HQ has created a glut of data; DARPA seeks a better, deeper, and more layered artificial intelligence to help the intelligence community cope with the avalanche of incoming information
For an intelligence officer, the only thing worse than having too little information is having too much of it. With the ever-growing capability to collect raw information (just think of the sheer quantity of images and video streams sent back by satellites and by UAVs loitering in the skies), there is a need to find a way to sift through this growing information haystack in order to find the needles. DARPA (who else?) has a new plan to create powerful artificial intelligences. Lewis Page writes that the Deep Learning machines will be used to sift through petabytes of video from UAVs (we note the appeal of the animal kingdom for explanatory purposes: teachers in junior high, explaining the facts of life to their students, talk about the birds and the bees; DARPA, explaining the difference between shallow and deep learning, talks about horses, cows, sheep, and goats).
Explaining the purpose behind the Deep Learning technology, DARPA says the U.S. military and intelligence communities are drowning in surveillance and intelligence data. Hence the need for artificial intelligence to help them cope with the information flood:
A rapidly increasing volume of intelligence, surveillance, and reconnaissance (ISR) information is available to the Department of Defense (DOD) as a result of the increasing numbers, sophistication, and resolution of ISR resources and capabilities. The amount of video data produced annually by Unmanned Aerial Vehicles (UAVs) alone is in the petabyte range, and growing rapidly. Full exploitation of this information is a major challenge. Human observation and analysis of ISR assets is essential, but the training of humans is both expensive and time-consuming. Human performance also varies due to individuals’ capabilities and training, fatigue, boredom, and human attentional capacity.
One response to this situation is to employ machines …
There are already basic “shallow learning” AIs in use, including “Support Vector Machines (SVMs), two-layer Neural Networks (NNs), and Hidden Markov Models (HMMs).” These, however, are not much better than a human with poor “attentional capacity.” The trouble with the shallow learners is that they can learn only at a shallow level:
Shallow methods may be effective in creating simple internal representations … A classification task such as recognizing a horse in an image will use these simple representations in many different configurations to recognize horses in various poses, orientations and sizes. Such a task requires large amounts of labeled images of horses and non-horses. This means that if the task were to change to recognizing cows, one would have to start nearly from scratch with a new, large set of labeled data.
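To make the limitation concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of shallow, labeled-data pipeline the announcement describes: a Support Vector Machine trained on labeled examples of the target class. The random feature vectors below stand in for hand-engineered image features; they, and the toy setup, are our own illustration rather than anything specified by DARPA.

```python
# A minimal "shallow learning" sketch: an SVM classifier that must be
# retrained from scratch, with a fresh labeled dataset, for every new class.
# The random feature vectors are stand-ins for labeled image features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_labeled_set(n_per_class, n_features=256):
    """Stand-in for a labeled image dataset: 1 = target animal, 0 = anything else."""
    positives = rng.normal(loc=0.5, scale=1.0, size=(n_per_class, n_features))
    negatives = rng.normal(loc=-0.5, scale=1.0, size=(n_per_class, n_features))
    X = np.vstack([positives, negatives])
    y = np.array([1] * n_per_class + [0] * n_per_class)
    return X, y

# Train a horse / not-horse classifier on a large labeled set.
X, y = make_labeled_set(n_per_class=500)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
horse_clf = SVC(kernel="rbf").fit(X_train, y_train)
print("horse detector accuracy:", horse_clf.score(X_test, y_test))

# Switching the task to cows means starting over: a new, equally large
# labeled set, and a new classifier trained from scratch.
X_cows, y_cows = make_labeled_set(n_per_class=500)
cow_clf = SVC(kernel="rbf").fit(X_cows, y_cows)
```

Nothing the horse detector learned carries over: the cow detector starts from zero with its own large labeled set.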
Page correctly points out that a specialized horse-spotter machine unable to recognize a cow is not much use for sorting the sheep from the goats. This is why DARPA wants “deeply layered” learning machines, able to identify horses, cows, sheep, and goats.
Deeply layered methods should create richer representations that may include furry, four-legged mammals at higher levels, resulting in a head start for learning cows and thereby requiring much less labeled data when compared to a shallow method. A Deep Learning system exposed to unlabeled natural images will automatically create high-level concepts of four-legged mammals on its own, even without labels.
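How might that look in practice? Below is a minimal sketch, again in Python with scikit-learn, of one early deep-learning recipe (stacked Restricted Boltzmann Machines) in which unsupervised layers learn reusable representations from unlabeled data, and only a small labeled set is needed afterwards to attach names to what was learned. The stacked-RBM choice and the toy data are our own illustration, not DARPA's specification.

```python
# A sketch of the "deeply layered" idea: unsupervised layers learn reusable
# representations from unlabeled data; a small labeled set then names them.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Step 1: unsupervised pre-training on a large pile of *unlabeled* images,
# layer by layer, each layer learning higher-level features than the last.
unlabeled_images = rng.random((2000, 256))     # stand-in pixel data scaled to [0, 1]
layer1 = BernoulliRBM(n_components=128, n_iter=10, random_state=0)
layer2 = BernoulliRBM(n_components=64, n_iter=10, random_state=0)
layer2.fit(layer1.fit_transform(unlabeled_images))

# Step 2: a *small* labeled set is enough to attach names to the learned
# representation -- most of the learning already happened without labels.
labeled_images = rng.random((100, 256))
labels = rng.integers(0, 2, size=100)          # toy labels: 1 = horse, 0 = cow
deep_features = layer2.transform(layer1.transform(labeled_images))
classifier = LogisticRegression(max_iter=1000).fit(deep_features, labels)

# New imagery is pushed through the same layers before classification.
new_images = rng.random((5, 256))
print(classifier.predict(layer2.transform(layer1.transform(new_images))))
```

The point of the sketch is that the first step never sees a label, yet it is where the reusable "furry, four-legged mammal" style of representation would be built; the second step can then make do with far less labeled data than a shallow method would need.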