By Jon Bardin
In a small, anonymous office in the Trump Building, 28 floors above Wall Street, a man sits in front of a computer screen sifting through satellite images of a foreign desert. The images depict a vast, sandy emptiness, marked every so often by dunes and hills. He is searching for man-made structures: houses, compounds, airfields, any sign of civilization that might be visible from the sky. The images flash at a rate of 20 per second, so fast that before he can truly perceive the details of each landscape, it is gone. He pushes no buttons, takes no notes. His performance is near perfect.
Or rather, his brain’s performance is near perfect. The man has a machine strapped to his head, an array of electrodes called an electroencephalograph, or EEG, which is recording his brain activity as each image skips by. It then sends the brain-activity data wirelessly to a large computer. The computer has learned what the man’s brain activity looks like when he sees one of the visual targets, and, based on that information, it quickly reshuffles the images. When the man sorts back through the hundreds of images—most without structures, but some with—almost all the ones with buildings in them pop to the front of the pack. His brain and the computer have done good work.
That display was a demonstration of a new technology being developed through a collaboration between the Defense Advanced Research Projects Agency, the military’s research arm, and a private company called Neuromatters, which was founded by a team led by the Columbia University bioengineer Paul Sajda. The hope is that, in the near future, military analysts might use the technology to eliminate worthless images in seconds, speeding up their review of satellite images by orders of magnitude. By the looks of it, it’s working.
The program, called Neurotechnology for Intelligence Analysts, or NIA, is just one of many being pursued by Darpa, as the agency is known, to translate basic neuroscience research into tools that will make the military more able and efficient. Other projects Darpa finances include one to test whether sending electricity through the brain can accelerate learning; another that seeks to use psychology and neuroscience to understand which types of communication best convince those living in occupied lands that they should yield to American forces, a sort of Propaganda 2.0; and a project aimed at developing drugs that would reduce or erase traumatic memories.
Some critics view these projects with suspicion and raise ethical objections: They see Darpa initiating a military invasion of the mind that warps the goals of basic research to fit the battlefield. “As a scientist I dislike that someone might be hurt by my work. I want to reduce suffering, to make the world a better place, but there are people in the world with different intentions, and I don’t know how to deal with that,” Vincent P. Clark, an associate professor of psychology at the University of New Mexico whose work with brain stimulation has influenced the military, told The Guardian earlier this year.
For others, however, such military projects are just another outgrowth of years of basic-science research, the natural siblings of other clinical and bioengineering applications.
Either way, the NIA project makes clear the often unpredictable routes that basic-science findings take on their way to becoming something useful in the wider world. Because of that unpredictability, support for basic biological science occasionally comes under attack for lacking clear, direct benefits to society. But in 2009, in a speech before the National Academy of Sciences, President Obama spoke about the value of such research: “The fact is, an investigation into a particular physical, chemical, or biological process might not pay off for a year, or a decade, or at all. And when it does, the rewards are often broadly shared, enjoyed by those who bore its costs but also by those who did not.” Now, in an age of increasing interest in bioengineering and, specifically, tapping into the computational power of the brain, these Darpa-financed projects are proof that basic-science discoveries in the biological sciences do lead to unexpected places, including to war.
In the 1990s, the military began to realize it had a problem: too many pictures and not enough eyeballs.
Specifically, it had a glut of satellite images, photos covering every inch of the planet, waiting to be sifted, scrutinized, and analyzed for any precious bits of intelligence. Paul Sajda, who would later found Neuromatters and develop the NIA program, learned of this problem on a visit to the National Photographic Interpretation Center, in Washington, D.C., in 1995. The center was staffed with hundreds of analysts whose job was to sort painstakingly through piles and piles of satellite images, looking for communications lines one day, rebel camps the next.
At the time, Sajda was working for the nonprofit David Sarnoff Research Center, in Princeton, N.J. Sarnoff had many contracts with the Department of Defense, including a project Sajda himself had been working on to apply the military’s computer technologies to the analysis of radiological images of potential cases of breast cancer, hoping to improve diagnostic screenings. It was that project that brought him to NPIC, to see its image-analysis process in action.
During his visit, Sajda was struck by how the analysts could tell, from only a few pixels, what they were looking at. It was analysts at the center, for example, who first discovered, in a set of grainy photos taken during flyovers of Cuba by American U-2 planes, the Russian cache of nuclear missiles that led to the Cuban missile crisis. These analysts were good.
Nevertheless, looking through images was a slow and laborious process, and while computer technology had improved the program’s results, the gains were limited. Further, as the sites of important intelligence became more widely distributed, the number of potentially significant images ballooned.
Sajda was amazed at how many gigabytes of images went unanalyzed, even unviewed. “Here was this huge pile of data, and no one could even look at it. There just wasn’t enough manpower,” he told me. In 1996, the government merged the NPIC with several related organizations to form the National Imagery and Mapping Agency, hoping to improve its success. But the problem did not go away, and in 2001 a congressionally appointed committee released a report condemning the agency for its poor performance.
For Sajda, the problem was an intriguing one, and it held his attention starting with that first visit. “I thought then,” Sajda remembers, “that there has to be a way we can speed this up.”
Though Sajda was an engineer, he had studied the human visual system as a graduate student, developing models for how the brain picks apart a scene, identifying what is important and what is not. He knew that the brain still outperformed any computer at identifying important features of images like satellite photos. Most importantly, Sajda was familiar with a long literature, dating back to the mid-1960s, that related rapid changes in brain activity to visual processing of important information.
What is most remarkable about Sajda’s attempt at solving the military’s problem is that it is based primarily on that 1960s-era research. In particular, a series of EEG studies published starting in 1964 in the journals Nature and Science demonstrated, for the first time, specific markers of cognitive processing in the brain activity of people while they viewed images.
One of those studies in particular is a clear precursor of Sajda’s work. It was carried out by a young psychologist named Robert Chapman, and it showed that brain activity was quite different while people viewed images that held important information than while they viewed images that meant nothing to them.
Chapman’s experimental design would seem primitive to psychologists today, but it worked. Subjects sat in a chair in a dimly lit room. In front of them were two illuminated boxes. In one box, a single number was shown, while in the other, a series of numbers, interspersed with plus signs, flashed in front of the subject. The numbers were selected randomly, via holes punched into a piece of paper that was fed by a motorized gear through the illuminated machine (the days of experiments presented on computer screens had not yet arrived). With each number flashed on the right, a subject had to decide whether the number on the left was smaller. Chapman then used a hulking computer, made by Packard Bell, to average all the data surrounding the different types of trials—those with numbers, and those with blanks or plus signs.
This data averaging itself was a major step forward. In the early 1960s, the use of EEG to study brain activity was about 40 years old, but the brain’s signals were still poorly understood. In the 1930s, for example, the originator of the EEG technique, Hans Berger, had shown that the squiggly lines representative of electrical brain activity changed significantly when people closed their eyes, or did math in their heads. But, because such early EEG researchers had to do all analysis by looking at the data visually and counting important events or changes, it was almost impossible to conduct and analyze complicated cognitive experiments.
With the introduction of computers, however, researchers could look not just at the continuous EEG over long periods of time but also at the changes that occurred around specific events by averaging the data from a large number of painstakingly timed trials. Most researchers began using this newfound capability to study sensory responses—placing electrodes over the visual cortex at the back of the head, for example, and analyzing how the EEG signal changed when flashes of light of different durations were presented to subjects. Chapman was one of the first to apply that approach to cognitive tasks.
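The averaging idea behind that capability is simple enough to sketch in a few lines. The numpy simulation below is only an illustration, not Chapman's actual analysis: the sampling rate, trial count, and noise level are all invented. A small evoked response, invisible in any single noisy trial, emerges when many time-locked trials are averaged.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 250                   # sampling rate in Hz (an assumed value)
n_trials = 500             # number of time-locked trials to average
t = np.arange(fs) / fs     # one second of samples per trial

# A small evoked response: a positive bump peaking ~300 ms after the stimulus.
evoked = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

# Each single trial is that response buried in much larger background noise,
# as in a raw EEG recording.
trials = evoked + rng.normal(scale=5.0, size=(n_trials, fs))

# Averaging across time-locked trials cancels the random noise and leaves
# the event-related potential.
erp = trials.mean(axis=0)

peak_ms = 1000 * t[np.argmax(erp)]
print(f"averaged response peaks near {peak_ms:.0f} ms after the stimulus")
```

Each individual trial here is dominated by noise; only the average reveals the bump at roughly 300 milliseconds, which is why the technique had to wait for computers that could store and sum many precisely timed trials.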
What Chapman found in his study immediately excited him: When subjects viewed any stimulus, there was a quick change in brain activity, the size of which depended on how bright the stimulus was. But when subjects were shown a number, crucial to performing the task before them, the EEG registered a huge spike in brain activity about 300 milliseconds after the stimulus appeared. When a plus sign was shown instead of a number, the spike was notably smaller.
That simple task had revealed something profound: a clear EEG marker of the perception and processing of information relevant to a decision. Samuel Sutton, in a series of experiments published in 1965 in the journal Science, continued to explore that class of responses, focusing specifically on the spike that occurred 300 milliseconds after the stimulus. Eventually, that spike was named the P300 response.
Since those early findings, the P300 has been used to study almost every conceivable topic in neurology and neuroscience: decision-making, consciousness, Alzheimer’s disease, schizophrenia, and, quite prominently, as a brain-computer interface to allow paralyzed people to spell using EEG.
At the time of his visit to the National Photographic Interpretation Center, Sajda was already familiar with the P300 literature, and he began to wonder if there was some way that brain activity itself could be used to speed up image analysis.
The idea was not so far-fetched. In 1996 a paper was published in Nature about a technique called rapid serial visual presentation: RSVP. Researchers demonstrated that images shown extremely rapidly could still be parsed by the visual system, that the telltale signs of visual processing in the EEG were still there. “This was a big inspiration for me,” Sajda remembers. If he could find a difference in brain activity between the rare images that had important targets and those that didn’t, he could use that signature to create a system to analyze the military’s images. And the Nature paper suggested it could be done extremely rapidly, faster than 10 images per second. What’s more, the P300 effect had been shown to be modulated by expertise: Analysts who spent all day looking through images would have particularly robust brain responses.
In 2003, at the urging of a Darpa program officer named Amy Kruse, Sajda wrote a proposal and brought the idea to the agency’s attention. First, he wrote, the system would take advantage of state-of-the-art computer vision techniques, weeding out images that could be easily analyzed without human involvement. Once the more difficult images were isolated, he would train a computer to recognize what an analyst’s brain activity looked like after viewing an image with a target, and one without a target. Then he would present images to analysts at a rapid rate, up to 20 times per second. If his algorithm worked, the computer could generate an “interest score” for each image simply by looking at how robust the P300 response was. Analysts could then spend their time studying the images that mattered, those with the highest scores. After several false starts, the project was backed by Darpa, and Sajda founded Neuromatters to do the product development and engineering. Darpa also set up and financed seven other groups to pursue the technique.
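The core of that pipeline, scoring each image by the strength of the brain's response in the P300 time window and re-sorting the stream, can be sketched in miniature. This is a hypothetical numpy illustration, not Neuromatters' algorithm: the window bounds, amplitudes, and the simple mean-amplitude score are all invented for the example (the real system trains a classifier on each analyst's data).

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 250                                   # samples per one-second epoch (assumed)
n_images = 300
t = np.arange(fs) / fs
p300_window = (t >= 0.25) & (t <= 0.45)    # assumed bounds around the P300

# Simulate one EEG epoch per image; a handful of "target" images evoke a P300 bump.
targets = rng.random(n_images) < 0.05
bump = 4.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
epochs = rng.normal(scale=2.0, size=(n_images, fs))
epochs[targets] += bump

# "Interest score": mean amplitude inside the P300 window, one number per image.
scores = epochs[:, p300_window].mean(axis=1)

# Re-sort the image stream so high-scoring (likely-target) images come first.
order = np.argsort(scores)[::-1]
n_targets = int(targets.sum())
hits_in_front = int(targets[order[:n_targets]].sum())
print(f"{hits_in_front}/{n_targets} targets ranked to the front of the pack")
```

Under these toy assumptions the targets separate cleanly from the background, which mirrors the behavior described in the opening demonstration: the images with buildings "pop to the front of the pack" without the analyst pressing a button.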
By the time I saw the project, only two groups remained in the hunt, and Sajda’s approach was in the process of being tested by government analysts. According to all of Neuromatters’ studies, the project was a huge success, ready for the field: They claim to have achieved a 300-percent increase in the speed of image analysis by peeking in on the brain. The government might, finally, be able to analyze most of those images.