
OpenAI’s new model is better at reasoning and, occasionally, deceiving

[Image: photo collage of a computer with the ChatGPT logo on the screen. Illustration by Cath Virginia / The Verge; photos by Getty Images]

In the weeks leading up to the release of OpenAI’s newest “reasoning” model, o1, independent AI safety research firm Apollo found a notable issue: the model produced incorrect outputs in a new way. Or, to put it more colloquially, it lied.

Sometimes the deceptions seemed innocuous. In one example, OpenAI researchers asked o1-preview for a brownie recipe with online references. The model’s chain of thought — a feature meant to mimic how humans break down complex problems — internally acknowledged that it couldn’t access URLs, making the request impossible to fulfill. Rather than inform the user of this limitation, o1-preview pushed ahead, generating plausible but fake links along with descriptions of them.

While AI models…

