The recent Vatican document on the potential dangers and benefits of advances in ‘A.I.’ – artificial intelligence – warns that the powerful algorithms could lead to the destruction of humans, in part or in whole. A mass self-extinction event, if you will:
> Since it is a small step from machines that can kill autonomously with precision to those capable of large-scale destruction, some AI researchers have expressed concerns that such technology poses an “existential risk” by having the potential to act in ways that could threaten the survival of entire regions or even of humanity itself. (101)
Years ago, I read through Isaac Asimov’s *I, Robot* stories, and this caveat brought to mind his (fictional) ‘three laws of robotics’:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
One might think the rules foolproof. Or, more to the point, that the algorithm is airtight. But is it? You can read the stories and find out, though you can likely guess.
The problem is, there is no such algorithm written into A.I. in any of its iterations. It’s a jungle of bits of data, incomprehensible and unpredictable to any human mind, never mind to itself. For it is not a mind, only a simulacrum of one, based on human input, vast swathes of which A.I. collates and mimics. Hence, if humans would kill other humans – and we see no lack of evidence – then A.I. will as well. Not because it wants to, or thinks it should. For the inaptly named A.I. can neither want nor think. It just does what it does, inexorably, mechanically. There is no fail-safe except unplugging the machine, which is not likely to happen, barring some Carrington event.
I doubt this will lead to some dystopian Skynet, with maniacal robots taking over the world and enslaving us all. But some disaster may loom unless we, as the Magisterium warns, control what we have made, and do so soon.