'Slaughterbots' Video Depicts a Dystopian Future of Autonomous Killer Drones
A graphic new video posits a grim future in which swarms of killer microdrones are dispatched to assassinate political activists and US lawmakers. Armed with explosive charges, the palm-sized quadcopters use real-time data mining and artificial intelligence to find and kill their targets.
The makers of the seven-minute film, titled Slaughterbots, hope the startling dramatization will draw attention to what they view as a looming crisis — the development of lethal autonomous weapons that select and fire on human targets without human guidance.
The Future of Life Institute, a nonprofit organization dedicated to mitigating existential risks posed by advanced technologies, including artificial intelligence, commissioned the film. Founded by a group of scientists and business leaders, the institute is backed by AI-skeptics Elon Musk and Stephen Hawking, among others.
The institute is also behind the Campaign to Stop Killer Robots, a coalition of NGOs that have banded together to call for a preemptive ban on lethal autonomous weapons.
The timing of the video is deliberate. The film will be screened this week at the United Nations in Geneva during a meeting of the Convention on Certain Conventional Weapons. Established in 1980, the convention is a framework treaty whose protocols prohibit or restrict weapons considered to cause unnecessary or unjustifiable suffering. For example, a 1995 protocol to the convention bans laser weapons specifically designed to cause blindness.
As of 2017, 125 nations have pledged to honor the convention's provisions, including all five permanent members of the UN Security Council — China, France, Russia, the United Kingdom, and the United States.
The Campaign to Stop Killer Robots is hosting a series of meetings at this year's event to propose a worldwide ban on lethal autonomous weapons, which could potentially take the form of flying drones, self-driving tanks, or automated sentry guns. While no nation openly deploys such weaponry, it's widely assumed that militaries around the world are developing lethal weapons powered by artificial intelligence.
Advocates for a ban on lethal autonomous weapons argue there is a clear moral imperative: Machines should never decide whether a human lives or dies.
The technologies depicted in the short film are all based on systems already in use today, such as facial recognition, automated targeting, and weaponized aerial drones.
"This short film is more than just speculation," said Stuart Russell, professor of computer science at the University of California, Berkeley, and a pioneer in the field of artificial intelligence. "It shows the results of integrating and miniaturizing technologies we already have."
Representatives from more than 70 states are expected to attend the Geneva meeting on lethal autonomous weapons systems this week, according to a statement from the Campaign to Stop Killer Robots. Representatives from the scientific and technical communities will be stating their case to the assembled delegates.
"Allowing machines to choose to kill humans will be devastating to our security and our freedom," Russell says in a short commentary at the end of the video. "Thousands of my fellow researchers agree. We have an opportunity to prevent the future you just saw, but the window to act is closing fast."
Originally published on Seeker.