Elon Musk, Google DeepMind, AI Researchers Sign Pact Not To Develop Skynet-Like Terminator Bots

Artificial intelligence is all the rage these days; it's being used in everything from our smartphones to our digital assistants to the vehicles we drive (or, increasingly, the vehicles that drive us). However, as AI grows more powerful, there are those who say the technology should have limits on where it can be applied. Most often, those limits are envisioned for robots that would be placed on the battlefield with the ability to target and potentially kill without human oversight.

A group of researchers and companies with expertise in the AI field has come together to pledge not to develop, or participate in the development of, machines that autonomously carry out lethal attacks on humans. "In this light, we the undersigned agree that the decision to take a human life should never be delegated to a machine," these individuals and organizations write in a joint statement.

"There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable."

Image: Campaign to Stop Killer Robots

The pledge, which was organized by the Future of Life Institute, has an all-star list of signatories, including Alphabet's Google DeepMind, HiBot Corp, Lucid.ai, Clearpath Robotics/OTTO Motors, and Tesla Motors/SpaceX CEO Elon Musk.

The thinking seems to be that if the industry's top talent and the world's leading AI and robotics companies refuse to participate in "killer robot" endeavors, governments will have less incentive to attempt to develop such systems. According to the Future of Life Institute, stigmatizing such weapons also has its benefits.

"We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons," the group adds. "Stigmatizing and preventing such an arms race should be a high priority for national and global security."

Elon Musk has been especially wary of AI being used for military purposes. He has warned that AI could take away human jobs, that it is more worrisome than North Korea's access to nuclear weapons, and that an AI arms race could lead to a third world war.

Bottom image courtesy Flickr/Global Panorama