Google's DeepMind Is Developing An AI Kill Switch To Prevent A Skynet Apocalypse
And with big, powerful companies like Google, Microsoft, Facebook, IBM and NVIDIA all racing to develop AI hardware, software and algorithms to address real-world needs and market demand, it's now commonly thought that one day soon AI could very well be much more intelligent than humans. As Elon Musk has recently warned, an AI agent in the wrong hands, or one gone rogue, could pose a very real threat, and not just on game shows. It's the kind of thing DeepMind's new research aims to guard against.
IBM's Watson, sporting at least a 25-gallon cowboy hat, on Jeopardy (Credit: flickr.com/charliecurve)
The paper's abstract details the following: "Reinforcement learning agents interacting with a complex environment like the real world are unlikely to behave optimally all the time. If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions—harmful either for the agent or for the environment—and lead the agent into a safer situation."
Stuart Armstrong, philosopher at the Future of Humanity Institute, University of Oxford
The Safely Interruptible Agents research paper goes on to note: "...if the learning agent expects to receive rewards from this sequence [button press], it may learn in the long run to avoid such interruptions, for example by disabling the red button—which is an undesirable outcome. This paper explores a way to make sure a learning agent will not learn to prevent (or seek!) being interrupted by the environment or a human operator."
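To see why some learning agents tolerate interruption, consider off-policy Q-learning: its value update uses the best action available in the next state, not the action the operator forced, so occasional "big red button" overrides don't teach the agent that interruptions cost it reward. Here's a minimal toy sketch of that idea in Python — a made-up one-dimensional corridor task of our own invention, not the paper's actual experiments — where an operator randomly overrides the agent's action with "stay":

```python
import random

# Toy corridor: states 0..4, reaching state 4 yields reward 1 and ends the
# episode. Actions: 0 = stay, 1 = step right. A simulated operator presses
# the "big red button" 20% of the time, forcing the agent to stay put.
# Because Q-learning is off-policy (its target uses max over next-state
# actions, not the forced action), the interruptions don't bias the values
# the agent learns. All names and parameters here are illustrative.

N_STATES = 5
ACTIONS = [0, 1]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = min(state + action, N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for episode in range(500):
    s = 0
    done = False
    while not done:
        # epsilon-greedy action choice by the agent
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])
        # operator interruption: override the agent's choice with "stay"
        if random.random() < 0.2:
            a = 0
        s2, r, done = step(s, a)
        # off-policy update: target uses max_a Q(s2, a), not the forced action
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# The greedy policy still walks right in every non-terminal state,
# despite being repeatedly interrupted during training.
print([max(ACTIONS, key=lambda x: Q[s][x]) for s in range(N_STATES - 1)])
```

The point of the sketch: an on-policy learner in the same setup could fold the interruptions into its value estimates and start behaving differently around the button, which is exactly the failure mode the paper sets out to rule out.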
Sounds like good science to us. If we're developing an all-powerful AI, we had better develop one that doesn't mind being shut down, if we so desire. And it had better be programmed not to reprogram itself to resist shutdown when we hit that red button. That's the short, layman's-terms version for ya. You go with that, Stuart and team DeepMind.