Photo: Facebook, via Associated Press; The New York Times; Getty Images
In 2004, Geoffrey
Hinton doubled down on his pursuit of a technological idea called a neural
network. It was a way for machines to see the world around them, recognize
sounds and even understand natural language. But scientists had spent more than
50 years working on the concept of neural networks, and machines couldn’t
really do any of that.
Backed by the Canadian
government, Dr. Hinton, a computer science professor at the University of
Toronto, organized a new research community with several academics who also
tackled the concept. They included Yann LeCun, a professor at New York
University, and Yoshua Bengio at the University of Montreal.
On Wednesday, the
Association for Computing Machinery, the world’s largest society of computing
professionals, announced that Drs. Hinton, LeCun and Bengio had won this year’s
Turing Award for their work on neural networks. The Turing Award, which was
introduced in 1966, is often called the Nobel Prize of computing, and it
includes a $1 million prize, which the three scientists will share.
Over the past decade,
the big idea nurtured by these researchers has reinvented the way technology is
built, accelerating the development of face-recognition services, talking
digital assistants, warehouse robots and self-driving cars. Dr. Hinton is now at
Google, and Dr. LeCun works for Facebook. Dr. Bengio has inked deals with IBM
and Microsoft.
“What we have seen is
nothing short of a paradigm shift in the science,” said Oren Etzioni, the chief
executive officer of the Allen Institute for Artificial Intelligence in Seattle
and a prominent voice in the A.I. community. “History turned their way, and I
am in awe.”
Loosely modeled on the
web of neurons in the human brain, a neural network is a complex mathematical
system that can learn discrete tasks by analyzing vast amounts of data. By
analyzing thousands of old phone calls, for example, it can learn to recognize
spoken words.
This allows many
artificial intelligence technologies to progress at a rate that was not
possible in the past. Rather than coding behavior into systems by hand — one
logical rule at a time — computer scientists can build technology that learns
behavior largely on its own.
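To make that contrast concrete, here is a minimal sketch, not drawn from the article, of a tiny neural network written in plain Python with NumPy. It learns the XOR function from four labeled examples rather than from hand-written rules; the network size, learning rate and number of training steps are illustrative assumptions.

```python
# Minimal sketch: a small neural network that learns XOR from examples.
# All specifics (hidden size, learning rate, step count) are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR truth table, a task no single linear rule captures.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a 2-8-1 network.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10_000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error for every weight.
    g2 = (p - y) * p * (1 - p)
    g1 = (g2 @ W2.T) * h * (1 - h)

    # Gradient descent: nudge each weight to reduce the error.
    W2 -= lr * (h.T @ g2)
    b2 -= lr * g2.sum(axis=0)
    W1 -= lr * (X.T @ g1)
    b1 -= lr * g1.sum(axis=0)

# After training, the outputs approach [0, 1, 1, 0]: the behavior was
# learned from the examples, not coded as logical rules.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))
```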
The London-born Dr.
Hinton, 71, first embraced the idea as a graduate student in the early 1970s, a
time when most artificial intelligence researchers turned against it. Even his
own Ph.D. adviser questioned the choice.
Photo: Drs. LeCun and Bengio in 2017 with Dr. Hinton, who created a research program dedicated to “neural computation and adaptive perception” in 2004. Credit: Re•Work
“We met once a week,” Dr. Hinton said in an interview, recalling those sessions with his adviser. “Sometimes it ended in a shouting match, sometimes not.”
Neural networks had a
brief revival in the late 1980s and early 1990s. After a year of postdoctoral
research with Dr. Hinton in Canada, the Paris-born Dr. LeCun moved to
AT&T’s Bell Labs in New Jersey, where he designed a neural network that
could read handwritten letters and numbers. An AT&T subsidiary sold the
system to banks, and at one point it read about 10 percent of all checks
written in the United States.
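That check-reading system was an early convolutional neural network. The article does not describe its design, but a minimal modern sketch in the same spirit, written in PyTorch with the layer sizes, the SmallConvNet name and a 28-by-28 input chosen purely for illustration, shows the general shape of such a digit reader.

```python
# Illustrative sketch of a small convolutional network for reading
# handwritten digits. This is not the architecture of the Bell Labs system.
import torch
from torch import nn

class SmallConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),   # learn local stroke detectors
            nn.ReLU(),
            nn.MaxPool2d(2),                  # shrink the map, keep strong responses
            nn.Conv2d(6, 16, kernel_size=5),  # combine strokes into larger shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),       # one score per digit class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A single 28x28 grayscale image, as in standard handwritten-digit datasets.
scores = SmallConvNet()(torch.randn(1, 1, 28, 28))
print(scores.shape)  # torch.Size([1, 10])
```

The convolutional layers apply the same small set of learned filters across the whole image, which is what makes this style of network well suited to spotting strokes and shapes wherever they appear on a check or envelope.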
Though a neural
network could read handwriting and help with some other tasks, it could not
make much headway with big A.I. tasks, like recognizing faces and objects in
photos, identifying spoken words, and understanding the natural way people
talk.
“They worked well only
when you had lots of training data, and there were few areas that had lots of
training data,” Dr. LeCun, 58, said.
But some researchers
persisted, including the Paris-born Dr. Bengio, 55, who worked alongside Dr.
LeCun at Bell Labs before taking a professorship at the University of Montreal.
In 2004, with less than
$400,000 in funding from the Canadian Institute for Advanced Research, Dr.
Hinton created a research program dedicated to what he called “neural
computation and adaptive perception.” He invited Dr. Bengio and Dr. LeCun to
join him.
By the end of the decade, the idea had begun to deliver on its potential. In 2010, Dr. Hinton and his
students helped Microsoft, IBM, and Google push the boundaries of speech
recognition. Then they did much the same with image recognition.
“He is a genius and
knows how to create one impact after another,” said Li Deng, a former speech
researcher at Microsoft who brought Dr. Hinton’s ideas into the company.
Dr. Hinton’s image
recognition breakthrough was based on an algorithm developed by Dr. LeCun. In
late 2013, Facebook hired the N.Y.U. professor to build a research lab around
the idea. Dr. Bengio resisted offers to join the tech giants, but the research he oversaw in Montreal helped drive the progress of systems that aim to understand natural language and of technology that can generate fake photos indistinguishable from the real thing.
Though these systems
have undeniably accelerated the progress of artificial intelligence, they are
still a very long way from true intelligence. But Drs. Hinton, LeCun and Bengio
believe that new ideas will come.
“We need fundamental
additions to this toolbox we have created to reach machines that operate at the
level of true human understanding,” Dr. Bengio said.