An article in the Sydney Morning Herald entitled “The AI anxiety: our preoccupation with superintelligence” references one of the most spectacularly dumb ideas about how an Artificial Intelligence could be a threat to humanity. The scenario, envisaged by Nick Bostrom, a philosopher at the Future of Humanity Institute in Oxford, is quoted below:
“Bostrom’s favourite apocalyptic hypothetical involves a machine that has been programmed to make paper clips (although any mundane product will do). This machine keeps getting smarter and more powerful, but never develops human values. It achieves “superintelligence.” It begins to convert all kinds of ordinary materials into paper clips. Eventually it decides to turn everything on Earth — including the human race (!!!) — into paper clips.”
If this had been written by Douglas Adams then I would get the irony and be amused by the literary trick used to make the point that humans have built some silly machines. It is also a direct rip-off of the first Star Trek movie, Star Trek: The Motion Picture, in which a fantastically advanced alien machine-based civilisation intercepts one of the original Voyager probes and pimps it up with super artificial intelligence and the ability to copy matter into its memory (destroying said matter in the process), so that it can complete Voyager’s mission of acquiring knowledge, amped up to the nth degree.
Any first-year engineering student, SciFi enthusiast or not, will immediately see the nonsense in this scenario. Why would someone with money to invest want an intelligent paper-clip maker? What could possibly be the economic benefit? Even the most rudimentary Artificial Intelligence is expensive and is only used where investors believe a profit, or a reduction in costs, will be achieved. How does a paper-clip machine, which bends and cuts spools of wire into a specific shape, incrementally develop the capability to smelt steel from raw materials it has mined, after it has explored for iron ore deposits and managed the complex trade agreements with the owners of the mineral rights to those deposits? … I could go on like this for pages, but I won’t, as it would be as patronising to your sensibilities, dear reader, as the original scenario proposed by Bostrom.
To be fair to Bostrom, he does believe that Artificial Intelligence is part of a kind of destiny for humankind, having been quoted as saying, “I actually think it would be a huge tragedy if machine superintelligence were never developed. That would be a failure mode for our Earth-originating intelligent civilisation.”
And, to be further fair to Bostrom, this may well be a case of the SMH journalist, Joel Achenbach, misrepresenting Bostrom’s scenario.
The point is that you can’t articulate a risk associated with mice by saying, “first some magic happens and then those mice are turned into dragons, so we had better not have any mice about”.
This field needs economically rational, scientifically sound, socially realistic arguments rather than Hollywood-inspired doomsday scenarios that make good press but are frankly naive at best, and at worst malicious in their intent to destabilise informed debate.