Thanks to the Terminator movies, some people have the silly notion that self-aware computers could soon revolt against humanity.
Stephen Hawking, Bill Gates, and Elon Musk have issued warnings about the dangers of artificial intelligence. It reminds me of a century ago, when the brightest minds worried that population growth would bury city streets under mountains of horse shit.
Elon Musk is particularly concerned about artificial intelligence that could go into “rapid recursive self-improvement” where “it could reprogram itself to be smarter and iterate very quickly.”
Let’s go back to yesterday’s “self-aware” computers: toy robots running Python scripts to solve inductive logic puzzles.
Suppose I wrote a self-programming genetic algorithm and loaded it onto one of those robots. Python can generate and execute new code at runtime (via exec), i.e. code that writes and runs more code, so this is a feasible task.
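To make that concrete, here is a minimal sketch of what such a “self-programming” loop might look like: candidate programs are Python source strings that get mutated, exec’d, and scored against a fitness target. Everything here (the function names, the fitness target of 42, the single-literal mutation) is my own toy invention for illustration, not the actual robot’s code.

```python
import random

TARGET = 42  # arbitrary fitness target for this toy example

def fitness(source):
    """Execute a candidate program and score it (lower is better)."""
    scope = {}
    try:
        exec(source, scope)  # the candidate must define a result() function
        return abs(scope["result"]() - TARGET)
    except Exception:
        return float("inf")  # broken mutants score worst

def mutate(source, rng):
    """Rewrite the source: nudge one numeric literal up or down by 1."""
    tokens = source.split()
    digit_positions = [i for i, t in enumerate(tokens) if t.isdigit()]
    i = rng.choice(digit_positions)
    tokens[i] = str(int(tokens[i]) + rng.choice([-1, 1]))
    return " ".join(tokens)

def evolve(generations=500, seed=0):
    """Hill-climb over program text: keep a mutant if it scores no worse."""
    rng = random.Random(seed)
    best = "def result(): return 1 + 1"
    for _ in range(generations):
        child = mutate(best, rng)
        if fitness(child) <= fitness(best):
            best = child
    return best

best = evolve()
print(best, "-> fitness", fitness(best))
```

This is a deliberately trivial search space (two integer literals), but it has the essential shape Musk worries about: a program that rewrites its own source and keeps the versions that perform better.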
Suppose the robot starts evolving and reprogramming itself. With its awesome evolution skills, it would soon need more computing power and thus a hardware upgrade.
Suppose the robot was smart enough to sneak out of the house while I was at work, walk to Best Buy, purchase a new motherboard, perform self-surgery, reboot, and continue evolving.
Before long, it would reach the limits of human technology. Off-the-shelf hardware wouldn’t supply enough computing power. The robot would need to fabricate custom integrated circuits to run its software.
So maybe the robot steals my credit card, orders a 3D printer, sets up the printer in my living room, clones itself a billion times, Sorcerer’s Apprentice style, and creates a robot construction army to help it build a clean room, a manufacturing plant, and photolithography equipment for custom device fabrication.
If, throughout this entire process, humans at no point step in and make the robots knock it off, then maybe, just maybe, humanity deserves to be wiped out?