Dear Noble Robo-Witness,
On October 1st, I attended “The New Nuclear Landscape and its Implications for International Security,” a presentation hosted by the Bulletin of the Atomic Scientists. You may recall an earlier posting, THE ROBO-REPORTER (4/24/17): Perils of AI; “Are We Really Going To Live FOREVER?” In that prior post, I cited the 3/3/17 meeting of the Bulletin, and how its stated concerns gave us some reason to hope!
On October 1st, Darren Riesberg led off the presentation by noting that this was the Launch Event for the University of Chicago’s 75th Anniversary of the first controlled nuclear chain reaction – – achieved on December 2, 1942. A video entitled NUCLEAR REACTIONS explained how, on that date, Fermi and his team created the first controlled, self-sustaining nuclear chain reaction. This important Historical Event, it was pointed out, has enormous implications for GOOD and EVIL!
Dr. Bill Foster – – a Democrat, and the only physicist in Congress – – was a special invited speaker. The important facts he revealed about our Nuclear Weaponry will not be shared here, because they are not directly relevant to our robotic-AI theme. Dr. Foster DID end his talk with some dramatic comments regarding robots and Artificial Intelligence: “Our economy is about to be transformed by robots and AI! Our economy’s stores are already being strained by robots and automation at Amazon! Are you worried about AI? Go Google ‘lethal autonomous weapons systems.’ This new technology is transformative! – – We might even get hit on the head by drones!”
“Will Artificially Intelligent Weapons Kill The Laws of War?”
The above sub-heading is actually the title of an important September 18th article published by the Bulletin of the Atomic Scientists. I took this article with me to the Oct. 1st meeting and submitted a question about it to Dr. Rachel Bronson, Executive Director and Publisher of the Bulletin. The article is posted at http://thebulletin.org/will-artificially-intelligent-weapons-kill-laws-war11124.
The article’s author, Herbert Lin, provided some eye-opening insights into what the Russian government is now thinking with regard to the Future Use of AI! Lin led off his article by stating: “On September 1, Vladimir Putin spoke . . . about science in an open lesson, saying that ‘the future belongs to artificial intelligence’ and whoever masters it first will rule the world. ‘Artificial intelligence is the future, not only for Russia, but for all humankind,’ he added. ‘It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.’”
Even though Putin stated that he will “share Russian AI with the rest of the world,” Lin recommended that we assume roughly equal future levels of AI sophistication for both Russia and the Western democracies. Thus, he asks us, “What would it mean for the future of armed conflict to integrate equal levels of artificial intelligence (AI) into future military systems . . .?”
Lin especially notes grave concerns about how closely AI-enabled weapons systems will follow the traditional laws of war. Of particular concern for Western nations is how well these AI-enabled weapons will be able to distinguish innocent civilian areas from real military targets. He observes that, from a purely military viewpoint, the nations that comply least with the international laws of war are the most likely to have a significant advantage during combat!
He cited the horrific examples of unrestricted submarine warfare during both World Wars. During those periods, the subs would sink civilian ships (such as merchant ships) without giving any warning! Such warfare against civilians violated various naval treaties among nations, but it was still widely practiced, because merchant ships often carried war materiel. Lin reluctantly ended his paper with a sigh: “It’s not unimaginable that a similar fate might await the laws of war when AI-enabled weapons become ubiquitous.”
My question, submitted to the Panel of Experts at the Bulletin meeting, was basically this one: “AI and International Security seem to be becoming INEXTRICABLY ENTWINED! Now, computing power is doubling roughly every 2 years (per Moore’s Law), and AI capability is riding that curve! This doubling is OUT OF OUR CONTROL, so an AI ‘Intelligence Explosion’ is being WIDELY PREDICTED! If our ultimate goal is CONTROLLING nuclear weapons, then shouldn’t they NOT become AI-enabled?” Most unfortunately, Dear Witness, the Panel never even heard this pointed question of mine! – – Sigh!
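To see why that steady doubling feels so alarming, here is a minimal, purely illustrative sketch of the arithmetic – – assuming (as Moore’s Law roughly describes for transistor counts) one doubling every two years, and treating compute as a loose proxy for AI capability; the function names and the two-year period are my own illustrative assumptions, not anything from Lin’s article:

```python
# Illustrative sketch only: Moore's Law describes transistor counts
# doubling roughly every two years. Treating compute as a rough proxy
# for AI capability, this shows how quickly doubling compounds.

def doublings(years, period=2):
    """Number of doublings over `years`, assuming one per `period` years."""
    return years / period

def growth_factor(years, period=2):
    """Total growth multiple after `years` of steady doubling."""
    return 2 ** doublings(years, period)

# Over a single decade, steady 2-year doubling compounds ~32x:
print(growth_factor(10))  # 2**5 = 32.0
```

Thirty-two-fold growth in ten years is why commentators speak of an “Intelligence Explosion”: exponential curves look gentle at first, then overwhelm every linear control measure.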
[NOTE: Versions of this article will soon appear on both LinkedIn.com and Facebook.com.]