Geoffrey Hinton, the man widely considered the “godfather” of artificial intelligence, has left Google, sharing his concerns about potential dangers stemming from the same technology he helped build.
WASHINGTON (AP) — Sounding alarms about artificial intelligence has become a popular pastime in the ChatGPT era, taken up by high-profile figures as varied as industrialist Elon Musk, leftist intellectual Noam Chomsky and the 99-year-old retired statesman Henry Kissinger.
Some of the dangers of AI chatbots are "quite scary," Hinton told the BBC. “Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”
In an interview with MIT Technology Review, Hinton also pointed to “bad actors” that may use AI in ways that could have detrimental impacts on society — such as manipulating elections or instigating violence.
Hinton, 75, says he retired from Google so that he could speak openly about the potential risks as someone who no longer works for the tech giant.
“I want to talk about AI safety issues without having to worry about how it interacts with Google’s business,” he told MIT Technology Review. “As long as I’m paid by Google, I can’t do that.”
Google confirmed that Hinton had retired from his role after 10 years overseeing the Google Research team in Toronto.
Hinton did not immediately respond to a request for comment from The Associated Press.
At the heart of the debate on the state of AI is whether the primary dangers are in the future or present. On one side are hypothetical scenarios of existential risk caused by computers that supersede human intelligence. On the other are concerns about automated technology that’s already getting widely deployed by businesses and governments and can cause real-world harms.
National conversation
“For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn’t only include AI experts and developers,” said Alondra Nelson, who until February led the White House Office of Science and Technology Policy and its push to craft guidelines around the responsible use of AI tools.
“AI is no longer abstract, and we have this kind of opening, I think, to have a new conversation about what we want a democratic future and a non-exploitative future with technology to look like,” Nelson said in an interview last month.
A number of AI researchers have long expressed concerns about racial, gender and other forms of bias in AI systems, including text-based large language models that are trained on huge troves of human writing and can amplify discrimination that exists in society.
“We need to take a step back and really think about whose needs are being put front and center in the discussion about risks,” said Sarah Myers West, managing director of the nonprofit AI Now Institute. “The harms that are being enacted by AI systems today are really not evenly distributed. It’s very much exacerbating existing patterns of inequality.”
Hinton was one of three AI pioneers who in 2019 won the Turing Award, an honor that has become known as the tech industry’s version of the Nobel Prize. The other two winners, Yoshua Bengio and Yann LeCun, have also expressed concerns about the future of AI.
Bengio, an academic and AI pioneer, signed a petition in late March calling for tech companies to agree to a 6-month pause on developing powerful AI systems, while LeCun, a top AI scientist at Facebook parent Meta, has taken a more optimistic approach.