The dizzying growth of the surveillance state and the increasing sophistication of the tools used to build it are paid for in large measure by funds doled out by the Pentagon’s Defense Advanced Research Projects Agency (DARPA).
At The New American we have chronicled the various projects sponsored by the über-secret research and development arm of the military. One of the newest technologies being pursued by DARPA will not only widen the field of vision of government’s never-blinking eye, but it purports to predict the behavior of those being watched.
Forbes reports that DARPA has contracted with scientists at Carnegie Mellon University to develop “an artificial intelligence system that can watch and predict what a person will ‘likely’ do in the future using specially programmed software designed to analyze various real-time video surveillance feeds. The system can automatically identify and notify officials if it recognized that an action is not permitted, detecting what is described as anomalous behaviors.”
Deployment of the devices is anticipated at “airports and bus stations,” but there is little doubt that should these predictive monitors prove successful, they will be installed right there next to the red light cameras already mounted at nearly every intersection in America.
Forbes also reports that “Carnegie Mellon is one of 15 research teams and commercial integrators that is participating in a five-year program, started in 2010, to develop smart video software.”
Several aspects of this “Minority Report” come to life sound substantially similar to another contest of sorts being sponsored concurrently by DARPA at a secret campus near George Mason University in Virginia.
In a statement announcing the progress of the research, DARPA spokesman Mark Geertsen said the goal of the project was “to invent new approaches to the identification of people, places, things and activities from still or moving defense and open-source imagery.”
In the statement, DARPA described several concepts being worked on by six teams of researchers chosen to live and labor in the “DARPA Innovation House,” outside George Mason University.
While the descriptions of the projects provided by DARPA spokesman Mark Geertsen were brief, greater detail about the technologies was discovered by The New American.
The first of the projects reportedly being cooked up in the DARPA test kitchens is called PetaVision. The DARPA statement describes PetaVision as one of the “Multi-Modal Approaches to Real-Time Video Analysis. Biologically-inspired, hierarchical neural networks to detect objects of interest in streaming video by combining texture/color, shape and motion/depth cues.”
While that summary is admittedly vague, a website maintained by the Los Alamos National Laboratory (LANL) provides a bit more information not only on the technology, but why the federal government might find it useful in its quest to place every American under constant surveillance and to identify potential “domestic terrorists.”
“We seek to understand and implement the computational principles that enable high-level sensory processing and other forms of cognition in the human brain. To achieve these goals, we are creating synthetic cognition systems that emulate the functional architecture of the primate visual cortex. By using petascale computational resources, combined with our growing knowledge of the structure and function of biological neural systems, we can match, for the first time, the size and functional complexity necessary to reproduce the information processing capabilities of cortical circuits. The arrival of next generation supercomputers may allow us to close the performance gap between state of the art computer vision approaches by bringing these systems to the scale of the human brain.”
Admittedly, the potential uses for PetaVision are obscured behind the scientific jargon used in its description. However, empowering the federal government with any technology that can simulate the human brain’s ability to see and process information for the purpose of “detect[ing] objects of interest” in streaming video is terrifying.