NEW YORK (AP) — Before the artificial intelligence tool ChatGPT was unleashed into the world, the novelist Robin Sloan was testing a similar AI writing assistant built by researchers at Google.
It didn’t take long for Sloan, author of the bestseller “Mr. Penumbra’s 24-Hour Bookstore,” to realize that the technology was of little use to him.
“A lot of the state-of-the-art AI right now is impressive enough to really raise your expectations and make you think, ‘Wow, I’m dealing with something really, really capable,’” Sloan said. “But then in a thousand little ways, a million little ways, it ends up kind of disappointing you and betraying the fact that it really has no idea what’s going on.”
Another company might have released the experiment into the wild anyway, as the startup OpenAI did with its ChatGPT tool late last year. But Google has been more cautious about who gets to play with its AI advancements despite growing pressure for the internet giant to compete more aggressively with rival Microsoft, which is pouring billions of dollars into OpenAI and fusing its technology into Microsoft products.
That pressure is starting to take a toll, as Google has asked one of its AI teams to “prioritize working on a response to ChatGPT,” according to an internal memo reported this week by CNBC. Google declined to confirm whether a public chatbot is in the works, but spokesperson Lily Lin said the company continues “to test our AI technology internally to make sure it’s helpful and safe, and we look forward to sharing more experiences externally soon.”
Some of the technological breakthroughs driving the red-hot field of generative AI — which can churn out paragraphs of readable text and new images as well as music and video — have been pioneered in Google's vast research arm.
“So we have an important stake in this area, but we also have an important stake in not just leading in being able to generate things, but also in dealing with information quality,” said Zoubin Ghahramani, vice president of research at Google, in a November interview with The Associated Press.
Ghahramani said the company wants to also be measured about what it releases, and how: “Do we want to make it accessible in a way that people can produce stuff en masse without any controls? The answer to that is no, not at this stage. I don’t think it would be responsible for us to be the people driving that.”
And they weren't. Four weeks after the AP interview, OpenAI released ChatGPT for free to anyone with an internet connection. Millions of people around the world have now tried it, sparking intense discussions at schools and corporate offices about the future of education and work.
OpenAI declined to comment on comparisons with Google. But in announcing their extended partnership in January, Microsoft and OpenAI said they are committed to building “AI systems and products that are trustworthy and safe.”
As a literary assistant, neither ChatGPT nor Google's creative writing version comes close to what a human can do, Sloan said.
A fictionalized Google was central to the plot of Sloan's popular 2012 novel about a mysterious San Francisco bookstore. That's likely one reason the company invited him, along with several other authors, to test its experimental Wordcraft Writers Workshop, derived from a powerful AI system known as LaMDA.
Like other large language models, including the GPT line built by OpenAI, Google's LaMDA can generate convincing passages of text and converse with humans based on what it's processed from a trove of online writings and digitized books. Facebook parent Meta and Amazon have also built their own big models, which can improve voice assistants like Alexa, predict the next sentence of an email or translate languages in real time.
When it first announced its LaMDA model in 2021, Google emphasized its versatility but also warned of the risks of harmful misuse and the possibility that it could mimic and amplify biased, hateful or misleading information.
Some of the Wordcraft writers found it useful as a research tool — like a faster and more decisive version of a Google search — as they asked for a list of “rabbit breeds and their magical qualities” or “a verb for the thing fireflies do” or to “Tell me about Venice in 1700,” according to Google’s paper on the project. But it was less effective as a writer or rewriter, turning out boring sentences riddled with clichés and showing some gender bias.
“I believe them — that they’re being thoughtful and cautious,” Sloan said of Google. “It’s just not the model of a reckless technologist who is in a hurry to get this out into the world no matter what.”
Google's development of these models hasn't been without internal acrimony. First, it ousted some prominent researchers who were examining the risks of the technology. And last year, it fired an engineer who publicly posted a conversation with LaMDA in which the model falsely claimed it had human-like consciousness, with a “range of both feelings and emotions.”
While ChatGPT and its competitors might never produce acclaimed works of literature, the expectation is they will soon begin to transform other professional tasks — from helping to debug computer code to composing marketing pitches and speeding up the production of a slide presentation.
That's key to why Microsoft, as a seller of workplace software, is eager to enhance its suite of products with the latest OpenAI tools. The benefits are less clear to Google, which largely depends on the advertising dollars it gets when people search for information online.
"If you ask the question and get the wrong answer, it’s not great for a search engine,” said Dexter Thillien, a technology analyst for the London-based Economist Intelligence Unit.
Microsoft also has a search engine — Bing — but ChatGPT's answers are too inaccurate and outdated, and its queries too costly to run, for the technology to pose a serious risk to Google's dominant search business, Thillien said.
Google has said that its earlier large language model, named BERT, is already playing a role in answering online searches. Such models can help generate the fact boxes that increasingly appear next to Google's ranked list of web links.
Asked in November about the hype around AI applications such as OpenAI's image-generator DALL-E, Ghahramani acknowledged, in a playful tone, that “it’s a little bit annoying sometimes because we know that we have developed a lot of these technologies.”
“We’re not in this to get the ‘likes’ and the clicks, right?” he said, noting that Google has been a leader in publishing AI research that others can build upon.