Putin Urges AI Limits — But for Thee, Not Me?
An attempt to parse the two themes sounded by Russian leaders on artificial intelligence.
The prevailing notion that Russia has no ethical framework for implementing artificial intelligence may not be quite correct — though it remains unclear just what limits Moscow will subscribe to.
Russian President Vladimir Putin and his government generally sound two themes when discussing AI: that it is absolutely essential to Russia's future, necessitating an extensive domestic research and development effort; and that it could be dangerous, requiring an international approach to limiting it. In his September speech at the United Nations, Putin offered an argument for the latter. The Russian leader reiterated that, like other innovations, digital technologies tend to spread uncontrollably and, like conventional weapons, can fall into the hands of radicals and extremists. He declared that people should retain the rights to privacy, property, and security despite rapid digital development. He called on the international community to both foster AI and restrict its use for the benefit of humanity. And he asked UN members to seek AI regulations that support military and technological security, as well as traditions, law, and morality.
Putin is not the only high-level Russian official to appeal for international AI control and oversight. In April 2019, Russian Security Council Secretary Nikolai Patrushev called on the international community to develop a comprehensive regulatory framework "as quickly as possible" to keep AI from undermining national and international security.
Putin's UN address seemed like a serious appeal for global AI ethics rules, yet it contrasts with Moscow's stance on international regulation of lethal autonomy. Russia disagrees with the UN's contention that Lethal Autonomous Weapon Systems, or LAWS, should be regulated and limited by the international community, arguing that it is simply too early to raise the point because AI is not yet ready to drive truly autonomous weapons. Moscow does, however, advocate "meaningful human control" over future LAWS, a point of agreement with the international community.
One interesting bit of rhetoric suggests that Putin will continue to argue that Russia must be free to develop AI and other technologies even as the world talks about limiting them. He calls AI crucial not just to Russia's future but to protecting what he casts as its "unique civilization," not merely another country.
For its part, the Russian Ministry of Defense routinely mentions AI as part of the future of war, and offers assurances that humans will always be in the loop. At August's international ARMY-2020 military expo, First Deputy Minister of Defense Ruslan Tsalikov said AI won't replace troops and supporting civilian personnel, but will help them obtain more information, move and process data faster and more accurately, speed up decision-making, and improve the operation of control systems. Tsalikov declared that AI should not replace humans but should become their assistant. Deputy Prime Minister for Defense Yuriy Borisov said much the same.
And Russian civilian leaders are seeking to incorporate ethics in AI development and implementation. On Aug. 19, Russian Prime Minister Mikhail Mishustin approved the "Concept for the development of regulation of relations in AI technologies and robotics until 2024." The concept seeks new regulation tools and rules for the ethical development, implementation, and application of AI and robotics. It declares that AI's ultimate goal should be the protection of human rights and freedoms, and says the country's AI developers should make a "reasonable assessment" of risk to human life and health. In a first-of-its-kind government order for the country, it directs that Russian technological and AI development be based on basic ethical standards, including the prioritization of human well-being and safety; a prohibition on harm to a person initiated by AI or a robotic system; and compliance with state law, including safety requirements.
As the international community looks to Russia's initial steps in the AI ethics debate, it is important to remember that publishing AI ethics principles and doing the work necessary to implement and abide by those principles are two very different things. When the U.S. Defense Department issued its own set of principles, for example, the defense secretary also set in motion numerous initiatives related to AI implementation and training.
It is important to ask what Russia could do, beyond making statements and issuing official directives, to lead other countries to overcome their initial skepticism. What could Moscow do to credibly back up its communications? One wonders how its appeal to humanitarian values squares with the Russian government's recent conduct in Syria and its attacks on the political opposition.
Russia's own proverb "Не словом, а делом" ("not by word, but by deed") is more than applicable here. Perhaps it is too early to evaluate Russian AI ethics. Yet it is also clear that Moscow is defining AI as an integral part of the development of national defense and civilian life. U.S. policymakers will want to keep a keen eye on its words and actions.