A beginner’s guide to the AI apocalypse: Misaligned objectives

Welcome to the first article in TNW’s guide to the AI apocalypse. In this series we’ll examine some of the most popular doomsday scenarios prognosticated by modern AI experts. 

You can’t have a discussion about killer robots without invoking the Terminator movies. The franchise’s iconic T-800 robot has become the symbol for our existential fears about today’s artificial intelligence breakthroughs. What’s often lost in the mix, however, is why the Terminator robots are so hellbent on destroying humanity: because we accidentally told them to.

This is a concept called misaligned objectives. The fictional people who made Skynet (spoiler alert if you haven’t seen this 35-year-old movie), the AI that powers the Terminator robots, programmed it to safeguard the world. When it becomes sentient and they try to shut it down, Skynet decides that humans are the biggest threat to the world and goes about destroying them for six films and a very underrated TV show.

The point is: nobody ever intends for robots that look like Arnold Schwarzenegger to murder everyone. It all starts off innocently enough – Google’s AI can now schedule your appointments over the phone – then, before you know it, we’ve accidentally created a superintelligent machine and humans are an endangered species.

Could this happen for real? A handful of world-renowned AI and computer science experts think so. Oxford philosopher Nick Bostrom’s Paperclip Maximizer thought experiment uses the arbitrary example of an AI whose sole purpose is to optimize the process of manufacturing paperclips. In its quest to optimize that process, the AI eventually turns the entire planet into a paperclip factory.
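
You can see the logic of a misaligned objective in miniature with a toy sketch (everything here – the functions, the numbers, the weighting – is invented for illustration and is not from Bostrom’s paper). An optimizer handed only a proxy objective spends every resource it can reach; one whose objective also prices in what humans value holds back:

```python
# Toy illustration, not Bostrom's actual argument: every function, number,
# and weight below is made up to show the point in miniature.

def paperclips_made(resources_spent):
    """The proxy objective the AI is told to maximize."""
    return 10 * resources_spent

def human_value(resources_left):
    """Everything else we care about, crudely priced as leftover resources."""
    return resources_left

def misaligned_policy(total_resources):
    # Maximizing paperclips alone: the best move is to spend everything.
    return max(range(total_resources + 1), key=paperclips_made)

def aligned_policy(total_resources, weight=50):
    # A hand-tuned objective that also charges the AI for what humans lose.
    return max(
        range(total_resources + 1),
        key=lambda spent: paperclips_made(spent)
        + weight * human_value(total_resources - spent),
    )

print(misaligned_policy(100))  # 100: the whole "planet" goes into paperclips
print(aligned_policy(100))     # 0: with this weighting, it spends nothing
```

The trouble, of course, is that the second policy only behaves because a human hand-picked the weighting – which is exactly the part Bostrom worries we’ll get wrong.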

Stephen King’s “Trucks” imagines a world where a mysterious comet’s passing not only gives every machine on Earth sentience, but also the ability to operate without a recognizable power source. And – spoiler alert if you haven’t read this 46-year-old short story – it ends with the machines destroying all humans and paving the entire planet’s surface.

A recent NY Times op-ed from AI expert Stuart Russell began with the following paragraph:

The arrival of superhuman machine intelligence will be the biggest event in human history. The world’s great powers are finally waking up to this fact, and the world’s largest corporations have known it for some time. But what they may not fully understand is that how A.I. evolves will determine whether this event is also our last.

Russell’s article – and his book, Human Compatible – discusses the potential for catastrophe if we don’t get ahead of the problem and ensure we develop AI with principles and motivations aligned with human objectives. He believes we need to build AI that always remains uncertain about its ultimate goals, so that it defers to humans. His fear, it seems, is that developers will keep brute-forcing learning models until a Skynet-style situation arises in which a system ‘thinks’ it knows better than its creators.
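
As a rough sketch of what that designed-in uncertainty might look like (this is a toy rendering of the idea, not Russell’s formal framework, and every goal, probability, and threshold in it is made up), an agent that isn’t sure which objective its human meant should ask rather than act:

```python
# Hypothetical rendering of Russell's proposal, not his actual machinery:
# the candidate goals, probabilities, and threshold are all invented.

# The agent's belief over what the human really wants it to do.
belief = {
    "make paperclips": 0.55,
    "preserve resources": 0.45,
}

CONFIDENCE_THRESHOLD = 0.95  # how sure the agent must be to act unilaterally

def choose_action(belief):
    goal, prob = max(belief.items(), key=lambda kv: kv[1])
    if prob >= CONFIDENCE_THRESHOLD:
        return f"act on: {goal}"
    # Below the threshold, deference is built in: the agent asks instead
    # of acting, which only works if it is designed to stay uncertain.
    return "defer: ask the human which goal they meant"

print(choose_action(belief))  # defer: ask the human which goal they meant
```

An agent hard-coded to be certain of its goal would never take the second branch – which is precisely the failure mode Russell wants to engineer out.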

On the other side of this argument are experts who believe such a situation isn’t possible, or that it’s so unlikely we may as well be discussing theoretical time-traveling robot assassins.
