In the winter of 2017, members of an FBI hostage rescue team heard an unfamiliar sound: the buzz of small drones.
Deployed by a criminal gang, a swarm of tiny, unmanned aircraft made a series of high-speed passes at the agents, the better to flush them from an elevated observation post.
And it worked. “We were then blind,” Joe Mazel, the head of the agency’s operational technology law unit, later said.
Picture a similar scenario in a future war zone—with the agents replaced by soldiers and the drones armed and fully autonomous—and you can begin to imagine how artificial intelligence (AI) promises to reshape global security.
Once fodder for science fiction, machines that can learn from data and make decisions with little human direction are becoming a crucial component of military operations in cyberspace, outer space, and all points in between. Last year, the Department of Defense established the Joint Artificial Intelligence Center (JAIC) to coordinate the Pentagon’s AI efforts and accelerate the delivery of AI-enabled capabilities. China and Russia are also moving to develop and deploy AI-based weaponry.
The stakes are high. According to Secretary of Defense Mark Esper, “whoever gets to robotics and AI first, it’ll be a game-changer on the battlefield”; more grandiosely, Russian President Vladimir Putin has proclaimed that the nation that masters AI will “become the ruler of the world.”
Is the United States ready for the age of military AI? How does it stack up against its geopolitical rivals? Who should grapple with the ethical dilemmas presented by machines that can pull the trigger without human control?
The Atlantic recently gathered lawmakers, military officials, and technology and cybersecurity experts at The Atlantic Festival in Washington, D.C., to discuss those questions. Underwritten by Booz Allen Hamilton, the conversation explored a changing security landscape in which algorithms and big data are becoming as vital as rifles and tanks.
Here are three key takeaways:
The Robots Are Here to Help
No piece of media has done more to shape public perception of how AI will affect the military than the Terminator film series, in which humans are hunted to the brink of extinction by remorseless, red-eyed death machines.
But killer robots are the stuff of Hollywood. In fact, a 2012 DOD directive (DOD Directive 3000.09, “Autonomy in Weapon Systems”) stipulates that “autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”
“I don’t know of any commander I’ve ever been associated with that wants [independent, unsupervised] systems making decisions about life or death that they don’t have some play into,” said JAIC Director Lt. Gen. John Shanahan.
Rather than replace soldiers with decision-making machines, the Pentagon is working on AI projects that help humans make better and faster decisions. For example, Project Maven uses AI to identify objects of interest in surveillance video collected by military drones—a difficult task that can severely stress the Air Force’s human analysts.
“Our problem was a basic problem,” Shanahan said. “Intel analysts are sorting through with [their] eyeballs far [more] information than they could ever deal with, 12 hours at a time.
“It’s very, very excruciating to do that. And you make mistakes. You miss things…. It’s just not conducive to good decision-making.”
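To make the analysts’ task concrete, here is a minimal, hypothetical sketch of the general technique Maven reportedly applies: run an off-the-shelf object detector over each video frame and surface only high-confidence detections for human review. The model, threshold, and function names below are illustrative assumptions, not details of the actual program.

```python
# Illustrative sketch only: a generic, COCO-pretrained detector applied to
# drone video frames, filtering out low-confidence detections so a human
# analyst reviews far fewer images. Not the actual Project Maven pipeline.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def flag_objects(frame, score_threshold=0.8):
    """Return high-confidence detections for one RGB video frame (H x W x 3)."""
    with torch.no_grad():
        pred = model([to_tensor(frame)])[0]
    keep = pred["scores"] > score_threshold
    return {
        "boxes": pred["boxes"][keep],    # bounding boxes: (x1, y1, x2, y2)
        "labels": pred["labels"][keep],  # COCO class indices
        "scores": pred["scores"][keep],  # detector confidence, 0 to 1
    }
```

In a pipeline like this, only the flagged frames reach the analyst, which is the labor-saving point Shanahan describes.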
Other areas where human operators can benefit from AI assistance include speech translation, signal analysis, and preventive maintenance of crucial equipment, said Graham Gilmer, a Booz Allen principal who works on AI for defense and intelligence clients.
“You’re able to catch something that maybe human eyes wouldn’t, and that a machine-analyzed image may reveal,” he said. “For predictive maintenance, billions of dollars are spent every year. If we could cut even a few percentage points into that, it adds up quickly.”
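As a rough illustration of the predictive-maintenance idea Gilmer describes, an anomaly detector can flag equipment whose sensor readings drift outside the normal operating envelope before a failure occurs. The telemetry, features, and model below are invented for the sketch and do not describe any actual defense program.

```python
# Illustrative sketch only: flag maintenance intervals whose sensor readings
# look anomalous, so crews can inspect equipment before it fails.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Made-up telemetry: rows are inspection intervals, columns are sensor
# features (say, vibration, temperature, and oil pressure).
healthy = rng.normal(loc=0.0, scale=1.0, size=(500, 3))
failing = rng.normal(loc=4.0, scale=1.0, size=(5, 3))  # drifting readings
telemetry = np.vstack([healthy, failing])

# Fit an unsupervised anomaly detector and flag the outlying intervals.
detector = IsolationForest(contamination=0.01, random_state=0).fit(telemetry)
flags = detector.predict(telemetry)  # -1 = anomalous, 1 = normal
print(f"{int((flags == -1).sum())} of {len(telemetry)} intervals flagged for inspection")
```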
Sen. Martin Heinrich (D-NM), the co-founder of the Senate Artificial Intelligence Caucus, said that helping-hand AI could potentially act as a battlefield guardrail, preventing humans from firing on the wrong targets or causing collateral damage to civilians.
“It’s very easy to jump immediately to the killer robot scenario, and it certainly makes for great motion pictures,” he said. “But there are also ways that we can use this technology to be more responsible, even in a war setting.”
The AI Arms Race Is Also a Race for Talent
The United States isn’t the only global power seeking to enhance its military through AI.
Two years ago, China announced its intention to become the world’s AI leader by 2030. Though much of the country’s blueprint covers commercial aspirations, it also encourages the Chinese military to work with academia and the private sector to develop military applications.
China’s arms industry reportedly is working on unmanned submarines, driverless vehicles sporting rocket launchers, and machine gun-equipped drones that can operate in offensive swarms. Chinese companies also have developed smart surveillance cameras and other AI-based technologies that have been used by state police to oppress and detain Uighur Muslims and other ethnic and religious minorities.
While some observers have cautioned against framing the United States and China as engaged in an AI “arms race,” Rep. Will Hurd (R-TX) believes that a geopolitical contest already is underway.
“It’s a competition and our adversary, our opponent, is China,” he said. “And this is the context in which we got to see this new world … I would say, best case scenario, we’re tied.”
To improve the government’s understanding of AI and accelerate its development inside and outside the Pentagon, Rep. Jerry McNerney (D-CA), the co-chair of the House Artificial Intelligence Caucus, has proposed the creation of a center of excellence that would provide AI information to all federal agencies, including the DOD.
Keeping up with China, said Booz Allen CEO Horacio Rozanski, also will require developing science, technology, engineering, and mathematics (STEM) talent.
“We talk about this as a technology race with China,” he said. “It is a talent race with China. And the country with the best talent base, with the most focused talent base on these topics, is ultimately going to win this race.”
Hurd and McNerney said that targeted education and immigration policies could help build America’s AI talent base.
“If data is a coin of the realm, then coding is a lingua franca, and we should be introducing coding at middle school,” Hurd said. “On the immigration piece, if China is going to continue to steal our technology, let’s steal their engineers.
“We should be attracting that talent. And if you’re going to Texas A&M University and getting a degree in some advanced science, and when you have your diploma, we should be slipping a visa in that tube as well to keep you here working in American companies.”
Judi Dotson, a Booz Allen executive vice president who supports a variety of defense clients, pointed out after the event that, in addition to academic programs and partnerships that train new workers, it’s critical to create opportunities for the existing workforce to update its skills for an AI-driven future.
Panelists also agreed that the United States will be at a disadvantage if the Pentagon and private sector tech firms are unable to collaborate on AI projects.
Last year, Google left Project Maven after employees objected to the company’s involvement in drone warfare. It also dropped out of bidding for the opportunity to build the DOD’s cloud-computing infrastructure, citing ethical reasons. “There are some frictions that are going to come,” Hurd said. “But I think increased transparency…can address some of those problems.”
Lawmakers Need to Engage With the Rules of AI Engagement
Though AI-enabled lethal autonomous weapons systems (LAWS) aren’t here yet, many defense analysts believe they are coming—and soon.
Earlier this year, West Point professor and combat engineer Gordon Cooke wrote that making a “cheap, fully automated system that can detect, track and engage a human with lethal fires is trivial and can be done in a home garage with hobbyist-level skill.”
Automated gun turrets used by paintball hobbyists, he wrote, can already hit more than 70 percent of moving targets; by comparison, U.S. Army soldiers need to hit only 58 percent of stationary targets to qualify as marksmen on a particular weapon.

In the future, Gilmer said, the Pentagon may have to consider developing and deploying more autonomous systems for defensive purposes, particularly when the speed or scope of attacks would overwhelm a human response.
As AI advances, should America engage the international community and join current efforts to preemptively restrict or ban autonomous weapons? Alternatively, should the Pentagon attempt to beat potential adversaries to the punch? And if the military does decide to build armed, autonomous robots, then what kind of rules should govern their use?
McNerney said that it’s not too early for lawmakers to grapple with these questions. “Do we want autonomous weapon systems killing people? And I think the answer is clearly that we don’t. So that’s the tension that we have to deal with.”
Kara Frederick, an associate fellow with the Technology and National Security Program at the Center for a New American Security, said that this presents another opportunity for the U.S. to lead. “We as Americans have a comparative advantage right now. It’s about leaning into those practices and institutions [like Congress],” she said. “Technology is going to increasingly reflect what society looks like. If we make sure that is a society of openness and transparency, we lean into a free press and an independent judiciary, we encourage an engaged citizenry, that is going to help us, especially when it comes to national security applications of AI.”