Google Wants More Work from the Defense Department
A senior vice president ruled out working directly on weapons programs, but said other areas are fair game.
If you thought that Google was getting out of the national security business, think again. The company’s senior vice president for global affairs said Tuesday that the search giant has Pentagon contracts to work on cybersecurity, business automation, and deepfake detection — and is looking for more.
“It’s been frustrating,” said Google’s Kent Walker, referring to rising public perceptions that the company is opposed to doing national security work, and to a narrative pushed by Google rivals such as Palantir’s Peter Thiel and by their allies in Congress, such as Sen. Josh Hawley, R-Missouri.
Speaking at the National Security Commission on Artificial Intelligence conference, Walker said that Google is “fully engaged in a wide variety of work with different agencies” within the Defense Department. Its work with the Joint Artificial Intelligence Center includes cybersecurity, healthcare, and business automation. It is working with DARPA to “ensure the robustness of AI” and “progress the operation of hardware” beyond the expiration of Moore’s Law.
“As we take on those kind of things, we are eager to do more, [and are] pursuing actively additional certifications” to provide cloud services for classified data and other services, he said.
Walker stopped short of saying that Google would work again on the Air Force’s Project Maven, the AI-infused intelligence program that led to employee protests and some resignations. But he did say that Google’s 2018 decision to leave the program was focused on “a discrete contract, not a broader statement about our willingness or our history of working with the Department of Defense.” Rather, he said, Google had decided “to press the reset button until an opportunity to develop our own set of AI principles, our own work on internal standards and review processes.”
JAIC leader Lt. Gen. Jack Shanahan described the public relations issues surrounding Maven as a “canary in a coal mine.” Shanahan said Google and the military lost the narrative, in part, because the company wasn’t fully transparent with its employees about its participation. “The company made a strategic decision” not to talk about what they were doing, he said at the conference. That, in part, led to confusion and a lot of faulty coverage about what Project Maven was. “It was not a weapons project. It is not a weapons project,” he emphasized.
On the subject of weapons, specifically, Walker said, “It’s a nascent technology. We want to be very, very careful about the application of AI in this area. So that’s not an area that we’re pursuing, given our background.”
Walker also put in a plug for the newly released Defense Innovation Board principles on artificial intelligence, calling it a “thoughtful document,” in line with guidelines that Google had published, but with greater emphasis on process. “The report devotes a couple of pages to the principles and a long section to implementation,” he said, calling that critical to help large organizations build safety standards and processes for AI tools.
Michael C. Horowitz, a political science professor at the University of Pennsylvania, told Defense One in a direct message, “Walker’s comments show that Google wants to turn the page after the Project Maven controversy. They signal that Google, a key leader in AI research, is willing to partner with the US national security community on applications of AI.”
Horowitz said Walker’s statements “seem to represent an evolution of thinking at Google. They signal that more cooperation between the Defense Department and Silicon Valley on AI may soon become a reality.”