From the moment the word “robot” was first uttered in a Czechoslovakian play nearly 100 years ago, man has feared his creation will someday kill the creator. It's a narrative that has stuck with us, said Patrick Tucker, Defense One’s Technology Editor, at a recent event in Washington called Genius Machines: The Next Decade of Artificial Intelligence: “The idea of artificial intelligence eventually killing us is actually borne into our first fever dreams about what it would be.” The 1920 play was R.U.R., subtitled in English as Rossum’s Universal Robots.
Now, 98 years later, robotics researchers across the globe are making inroads in AI, machine learning, and human-machine teaming. And it's all happening under the changing shadows of great-power dynamics.
Dominant players: China and the U.S., two of the three most militarily powerful countries in the world. They're also the two clear leaders in AI research and investment. Not to be left out, Russia has begun making a concerted effort in AI, advancing plans as recently as last summer. Taken together, these great-power dynamics make that old 1920 man-vs-machine death-trap play look pale and simple (maybe even desirable) by comparison. But the reality of AI is much more banal—for now.
The truth is, AI is already deeply embedded in our lives. It has very probably helped you or someone you know shop online, or use Google Translate, or even glimpse ads that have been virtually following you for months. It's a technology already in the hands of thousands of organizations, with millions of finely tuned algorithms influencing everything from advertising to state-sponsored disinformation.
Some experts in AI say the technology is as transformative as electricity. Others predict it will change warfare as much as gunpowder or nuclear weapons did.
Today, AI is already at work. It's scanning hospital radiological databases, routing taxis, and enabling real-time conversations with someone in a foreign language. Startups everywhere from Israel to Spain, China (the so-called "Saudi Arabia of data"), France, and Silicon Valley are testing more applications wherever they can find what AI feeds on: massive amounts of data.
Bulk data is not terribly hard to purchase — or steal. On Reddit's machine-learning forum, for example, you may still find an alleged batch of raw data on European soccer matches up for sale, should you happen to be running your own DIY AI system. Too soon to mention the hack of the U.S. Office of Personnel Management, in which 21.5 million government employees and applicants had their sensitive data stolen, most probably by China? More recently, the developing saga of Cambridge Analytica and Facebook has shown the world how quietly bulk data theft, or compromise, can occur.
AI applications in China:
- Online shopping, and in-person facial recognition-based purchasing (via the company Baidu, sometimes referred to as “China’s Google”);
- Cloud and quantum computing (Alibaba);
- Medical diagnostics (Tencent);
- Image and facial recognition (SenseTime);
- Autonomous cars (Baidu, again — the name for its self-driving car software: Apollo);
- Real-time language translation (iFlytek);
- Facial recognition for policing (Megvii Technology’s Face++ software);
- Swarm drone operations (entertainment and People's Liberation Army surveillance);
- Global ship tracking (PLA);
- Satellite imagery fusion and analysis (PLA).
U.S. companies/entities working in AI:
- Defense Department (sifting through drone footage);
- FBI (fingerprint database search);
- CIA (research in predictive analytics);
- Google/Alphabet (autonomous cars, cloud computing, commercial use);
- Apple (voice and image recognition);
- Facebook (image recognition);
- Uber (autonomous cars);
- WalMart (commercial);
- Amazon (cloud computing, commercial);
- OpenAI (research and robotics);
- Microsoft (image and voice recognition);
- IBM (Watson and quantum computing);
- Nvidia (chipmaker, cloud computing, autonomous cars);
- Twilio (cloud software);
- Micron Technology (chipmaker);
- and Intel (cloud computing, medical diagnostic imaging, fraud detection).
Which is to say: We have all been using, or been subject to, AI in a number of not-quite-fully-revealed — perhaps even nonconsensual — ways for quite a while now. But the way we think about AI (and the way AI will use us) is almost certain to undergo sweeping changes, very possibly before the end of the next decade — especially if China delivers on its ambitious plans.
China's goal: become "the world's primary AI innovation center" to leverage what Beijing predicts will be a $150 billion industry. To do that, President Xi Jinping said in October, he will be “promoting the deep integration of the Internet, big data, and artificial intelligence with the real economy.”
Groundbreaking: The exclamation point to six years of surging AI investment in China was the January announcement that it plans to spend $2 billion building what it calls an “AI park” in western Beijing. By 2030, China expects the lot to host some 400 companies.
But as dystopian filmmakers love to remind us, AI is a technology with at least as much peril as promise. And that, too, would seem to be the case in China today.
The view from the top: AI is seen by Chinese leaders as a tool to monitor their population and control unfavorable speech. Consider, for example, a 2017 framework published by the State Council, referred to as China's "AI road map." The framework draws attention to AI’s potential to "significantly elevate the capability and level of social governance, playing an irreplaceable role in effectively maintaining social stability."
And so it was little surprise to learn that Chinese police used AI-enabled facial recognition to track citizens during the Lunar New Year travel rush. In its first week, the program reportedly netted more than a half-dozen fugitives — and contributed to the arrest of more than two dozen others on charges of having fake identification.
The bottom line: It's almost impossible to know precisely how much money any country is investing in AI-related research, the New York Times reported in February. The technology is still evolving, as are the motives nations and non-state actors might have for remaining secretive about their development.
“An unmanned systems future, really for almost every facet of our life, is inevitable. It is not going away, so we need to deal with it head-on.”
Brig. Gen. Frank Kelley, Deputy Assistant Secretary of the Navy for Unmanned Systems
For the U.S., the AI race has focused increasingly on the national-security stakes. The U.S. is still seen as a global tech leader, largely thanks to innovations out of Silicon Valley. But in the last three to five years, that competition has tilted toward China. Setting off major alarm bells was the 2017 departure of Microsoft global executive vice president Qi Lu for China's Baidu.
The latest jab thrown in the U.S.-China tech fight — tariffs worth some $50 billion — quickly drew a round of retaliatory tariffs from China. Beijing also said it would take the matter up with the World Trade Organization.
Since he took the campaign trail, Donald Trump has criticized China for what he calls its "unfair" trade practices, including allegations of corporate espionage and attempts to route Chinese investment dollars into select U.S. tech companies. See, for example, the White House's latest report on Chinese tech ambitions from the Office of the United States Trade Representative (PDF).
Biggest tech deal ever, nixed. All of that contributed to Trump's March 12 decision to block Singapore-based chipmaker Broadcom's proposed $105 billion acquisition of wireless chip giant Qualcomm. The fear among White House officials was that if Qualcomm faltered, Chinese tech giant Huawei could leap to the top of the global 5G industry. Trump said he had seen "credible evidence" (possibly this March 5 letter from the U.S. Treasury to Broadcom) that the takeover "threatens to impair the national security of the United States."
“I just want to go faster than [Russia and China] can keep up. If there’s a bear in the woods, you just have to be faster than the slowest person.”
Dawn Meyerriecks, the CIA’s head of technology development
The military has officially begun moving AI from the laboratory to the battlefield. In April 2017, a 12-person team launched the Pentagon's secretive Project Maven to apply AI to the war on ISIS. That integration began in earnest in fall 2017.
The idea: help automate the analysis of video feeds coming from large drones. Defense One's Marcus Weisgerber has reported in more detail on how Project Maven works — and how it continues to learn as it's deployed.
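For a rough, purely illustrative sense of what automating video analysis can mean, the sketch below samples frames from a video file and runs a generic, pretrained object detector over them, logging anything it recognizes above a confidence threshold. This is not a description of Maven's actual system; the model (an off-the-shelf, COCO-trained detector from torchvision), the labels of interest, the threshold, and the file name are all assumptions made for illustration.

```python
# Hypothetical sketch: flag objects of interest in sampled video frames.
# Not Project Maven's pipeline; model, labels, threshold, and file name are assumed.
import cv2                      # OpenCV, for reading video frames
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

detector = fasterrcnn_resnet50_fpn(pretrained=True).eval()  # generic COCO-trained detector
LABELS_OF_INTEREST = {3: "car", 8: "truck"}                  # COCO class ids (assumed mapping)
CONFIDENCE_THRESHOLD = 0.6                                   # assumed cutoff

def scan_video(path: str, frame_stride: int = 30) -> None:
    """Sample every Nth frame and report detections above the confidence threshold."""
    capture = cv2.VideoCapture(path)
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_index % frame_stride == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)     # OpenCV reads BGR; model wants RGB
            with torch.no_grad():
                output = detector([to_tensor(rgb)])[0]
            for label, score in zip(output["labels"].tolist(), output["scores"].tolist()):
                if label in LABELS_OF_INTEREST and score >= CONFIDENCE_THRESHOLD:
                    print(f"frame {frame_index}: {LABELS_OF_INTEREST[label]} ({score:.2f})")
        frame_index += 1
    capture.release()

scan_video("drone_footage.mp4")  # hypothetical file name
```

Sampling every Nth frame rather than every frame is a common way to keep a pipeline like this fast enough to run close to real time.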
This is only the beginning of the “future of human-machine teaming,” according to Air Force Lt. Gen. John Shanahan, the Pentagon general overseeing Project Maven.
What the U.S. military is turning to next: a program called Data to Decision, or D2D. Its mission: "fuse text, video, and virtually every potential source of data or information together through AI," Defense One's Patrick Tucker reported in February.
The headline-grabbing goal: to get these synchronized systems to the point where they can "shoot someone in the face at 200 kilometers."
Swept up in this effort: "A wide variety of data," Tucker writes, "extending well beyond traditional aerial surveillance footage to potentially include, well, everything: social media posts, live-streaming diagnostic data off of jets, drones, and other aircraft, attainable weather data, pilot biophysical data from soldier-worn sensors, and more."
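At an illustrative level, "fusing" disparate sources usually begins with something mundane: normalizing each feed into timestamped records and merging them into a single time-ordered stream that downstream analytics can consume. D2D's actual architecture is not public; the source names, record fields, and merge strategy in this minimal sketch are assumptions.

```python
# Illustrative only: source names, record fields, and the merge strategy are assumed.
from dataclasses import dataclass
from datetime import datetime
from heapq import merge
from typing import Iterable, List

@dataclass(frozen=True)
class Observation:
    timestamp: datetime   # when the data point was produced
    source: str           # e.g. "social_media", "aircraft_telemetry" (hypothetical names)
    payload: dict         # source-specific fields, normalized upstream

def fuse(*feeds: Iterable[Observation]) -> List[Observation]:
    """Merge already time-sorted feeds into one chronological stream."""
    return list(merge(*feeds, key=lambda obs: obs.timestamp))

# Hypothetical usage with two tiny, made-up feeds:
social = [Observation(datetime(2018, 2, 1, 12, 0), "social_media", {"text": "..."})]
telemetry = [Observation(datetime(2018, 2, 1, 12, 1), "aircraft_telemetry", {"alt_m": 9100})]
for obs in fuse(social, telemetry):
    print(obs.timestamp.isoformat(), obs.source)
```

The hard part, of course, is everything this sketch waves away: turning wildly different formats into those normalized records in the first place.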
“We haven’t cracked the nut on man-machine teaming yet. I don’t believe anybody has. The closest we’ve gotten is the extremely high level of information we’ve pushed to aviators in cockpits.”
Gen. Paul Selva, Vice Chairman of the Joint Chiefs of Staff
These military AI applications — while still very early in their development, and still reliant on humans to pull any triggers — are predominantly the realm of the U.S. military.
In Russia, “The government has taken a very active role in trying to define how artificial intelligence, unmanned systems and high-tech weapons are to be used," said Samuel Bendett, an analyst of Russian unmanned systems at the Washington think tank CNA Corp.
"The Ministry of Defense is taking the lead in that," said Bendett. "It’s establishing centers; it’s establishing all kinds of organizations within the MoD structure. It is now running artificial intelligence competitions to design and develop new technologies. It’s encouraging military industrial complex to step up and develop various artificial intelligence tools as well.”
Russian military AI-enabled programs:
- Data and imagery collection and analysis from the Black Sea to Syria;
- Object avoidance for unmanned aerial and ground combat systems;
- Swarm testing with various UAS.
For the last several years, Russia has been steadily improving its ground combat robots. Just last year, Kalashnikov, the maker of the famous AK-47 rifle, announced it would build “a range of products based on neural networks,” including a “fully automated combat module” that promises to identify and shoot at targets.
According to Bendett, Russia delivered a white paper to the UN saying that, from Moscow's perspective, it would be “inadmissible” to leave UAS without any human oversight. In other words, Russia says it always wants a human in the loop, someone to push the final button before a weapon fires.
Worth noting: "A lot of these are still kind of far-out applications," Bendett said.
The same can be said for China's more military-focused applications of AI, largely in surveillance and UAV operations for the PLA, said Elsa Kania, Technology Fellow at the Center for a New American Security. Speaking beside Bendett at the Genius Machines event in March, Kania said China's military applications appear to be at a “fairly nascent stage in its development.”
That is to say: There's nothing to fear about lethal AI applications yet — unless you're an alleged terrorist in the Middle East. For the rest of us, we have our Siris, Alexas, Cortanas and more, helping us shop, search, listen to music, and tag friends in images on social media.
Until the robot uprising comes, let us hope there will always be clips of the swearing Atlas robot from Boston Dynamics available online whenever we need a laugh. It may be better to laugh before these robots start helping each other through doorways entirely independently of humans. (Too late.)