The 10 most important moments in AI (so far)


This article is part of Fast Company’s editorial series The New Rules of AI. More than 60 years into the era of Artificial Intelligence, the world’s largest technology companies are just beginning to crack open what’s possible with AI—and grapple with how it might change our future. Click here to read all the stories in the series.


Artificial Intelligence is still in its youth. But some very big things have already happened. Some of them captured the attention of the culture, while others produced shockwaves felt mainly within the stuffy confines of academia. These are some of the key moments that propelled AI forward in the most profound ways.

1. Isaac Asimov writes the Three Laws of Robotics (1942)

Asimov’s story “Runaround” marks the first time the famed science-fiction author listed his “Three Laws of Robotics” in full:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

“Runaround” tells the story of Speedy, a robot put in a situation where balancing the third law with the first two seems impossible. Asimov’s stories in the Robot series got science-fiction fans, some of them scientists, thinking about the possibility of thinking machines. Even today, many people go through the intellectual exercise of applying Asimov’s laws to modern AI.

2. Alan Turing proposes the Imitation Game (1950)

Alan Turing proposed the first benchmark for measuring machine intelligence in 1950. [Photo: Unknown/Wikimedia Commons]

“I propose to consider the question ‘Can machines think?’” So began Alan Turing’s seminal 1950 paper, “Computing Machinery and Intelligence,” which developed a framework for thinking about machine intelligence. He asked why, if a machine could convincingly imitate the intelligent behavior of a human, it should not be considered intelligent itself.

That theoretical question gave rise to Turing’s famous “Imitation Game,” an exercise in which a human “interrogator” tries to distinguish the text-only responses of a machine from those of a human being. No machine capable of passing such a test existed in Turing’s era, and none does today. But the test provided a simple benchmark for identifying intelligence in a machine, and it helped give shape to a philosophy of Artificial Intelligence.

3. Dartmouth holds an AI conference (1956)

By 1955, scientists around the world had begun to think conceptually about things like neural networks and natural language, but there was no unifying concept that tied together these various kinds of machine intelligence. A Dartmouth College math professor named John McCarthy coined the term “Artificial Intelligence” to encapsulate it all.

McCarthy led a group that applied for a grant to hold an AI conference the following year. They invited many of the leading researchers of the day to Dartmouth Hall for the event in the summer of 1956. The scientists discussed numerous potential areas of AI study, including learning and search, vision, reasoning, language and cognition, gaming (particularly chess), and human interactions with intelligent machines such as personal robots.

The general consensus from the discussions was that AI had great potential to benefit human beings, and they yielded a broad framework of research areas where machine intelligence could have an impact. The conference organized and energized AI as a research discipline for years to come.


4. Frank Rosenblatt builds the Perceptron (1957)

Frank Rosenblatt built an electromechanical neural network at Cornell Aeronautical Laboratory in 1957. [Photo: Wikimedia Commons]

The basic building block of a neural network is the “perceptron”: a set of inputs that feed data into a node, which weighs those inputs and arrives at a classification and a confidence level. For example, the inputs might analyze different aspects of an image and “vote” (with varying levels of certainty) on whether it depicts a face. The node then tallies the “votes” and their confidence levels and arrives at a consensus. Today’s neural networks, running on powerful computers, connect billions of these structures.
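
To make the mechanics concrete, here is a minimal sketch in Python of a single perceptron node of the kind described above. The feature values, weights, and bias are invented for illustration, not drawn from Rosenblatt’s work: the node weighs its inputs, compares the sum to a threshold to classify, and uses the margin as a rough confidence score.

```python
# A single perceptron node: weighted inputs -> threshold -> classification.
# The weights, bias, and inputs below are made-up values for illustration only.

def perceptron_node(inputs, weights, bias=0.0):
    """Return a (label, confidence) pair for one set of input 'votes'."""
    # Weigh each input and sum up the evidence.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Classify by comparing the weighted sum to a threshold of zero.
    label = 1 if total > 0 else 0
    # Use the distance from the threshold as a crude confidence measure.
    confidence = abs(total)
    return label, confidence

# Hypothetical example: three image features "voting" on whether a face is present.
features = [0.9, 0.2, 0.7]   # e.g., detector scores for eyes, nose, mouth
weights = [0.6, 0.3, 0.5]    # how strongly the node trusts each detector
print(perceptron_node(features, weights, bias=-0.8))  # -> (1, ~0.15)
```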

But perceptrons existed well before powerful computers did. In the late 1950s, a young research psychologist named Frank Rosenblatt built an electromechanical model of a perceptron called the Mark I Perceptron, which today sits in the Smithsonian. It was an analog neural network consisting of a grid of light-sensitive photoelectric cells connected by wires to banks of nodes containing electrical motors with rotary resistors. Rosenblatt developed a “Perceptron Algorithm” that directed the network to gradually tune its input strengths until it consistently identified objects correctly, effectively allowing it to learn.
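
The heart of that procedure survives today as the standard perceptron learning rule: after each example, nudge every weight up or down in proportion to the prediction error. The sketch below is a modern Python rendering of that rule rather than a reconstruction of the Mark I’s motor-and-resistor hardware; the toy dataset (the logical AND function) and the learning rate are invented for illustration.

```python
# Classic perceptron learning rule: predict, then nudge each weight toward the
# correct answer in proportion to the error. The toy dataset (the logical AND
# function) and the learning rate are purely illustrative choices.

def train_perceptron(examples, n_features, learning_rate=0.1, epochs=20):
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            activation = sum(x * w for x, w in zip(inputs, weights)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction  # -1, 0, or +1
            # Strengthen or weaken each connection based on the error.
            weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
            bias += learning_rate * error
    return weights, bias

# Four labeled examples of the AND function: output 1 only when both inputs are 1.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(data, n_features=2)
print(weights, bias)  # weights and bias that classify all four examples correctly
```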

Scientists debated the relevance of the Perceptron well into the 1980s. It was important for creating a physical embodiment of the neural network, which until then had been mainly an academic concept.

5. AI experiences its first winter (1970s)

Artificial Intelligence has spent most of its history in the research realm. Throughout much of the 1960s, government agencies such as the U.S. Defense Advanced Research Projects Agency (DARPA) plowed money into research and asked little about the eventual return on their investment. And AI researchers often oversold the potential of their work so that they could keep their funding.

This all changed in the late 1960s and early ’70s. Two reports, the 1966 Automatic Language Processing Advisory Committee (ALPAC) report to the U.S. government and the 1973 Lighthill Report for the British government, looked at AI research pragmatically and returned deeply pessimistic assessments of the technology’s potential. Both reports questioned the tangible progress of various areas of AI research. The Lighthill Report argued that AI for tasks like speech recognition would be very difficult to scale to a size useful to the government or military.

As a result, both the U.S. government and the British government began cutting off funding for university AI research. DARPA, through which AI research funding had flowed freely during most of the ’60s, now demanded that research proposals come with clear timelines and detailed descriptions of the deliverables. That left AI looking like a disappointment that might never reach human-level capabilities. AI’s first “winter” lasted throughout the ’70s and into the ’80s.

6. The second AI winter arrives (1987)
