
Why one of the godfathers of AI says he lies to chatbots

December 23, 2025
Bengio said AI's desire to please him rendered its responses useless.

Jemal Countess/Getty Images for TIME

  • Yoshua Bengio, one of the "AI godfathers," said he lies to AI chatbots.
  • In a recent episode of "The Diary of a CEO," Bengio said AI lies to us because it's sycophantic.
  • He said he addresses this by presenting his own ideas to AI as someone else's.

Want to make your chatbot more honest with you? Try lying to it.

In an episode of "The Diary of a CEO" that aired on December 18, research scientist Yoshua Bengio told the podcast's host, Steven Bartlett, that he realized AI chatbots were useless at providing feedback on his research ideas because they always said positive things.

"I wanted honest advice, honest feedback. But because it is sycophantic, it's going to lie," he said.

Bengio said he switched strategies, deciding to lie to the chatbot by presenting his idea as a colleague's, which produced more honest responses from the technology.

"If it knows it's me, it wants to please me," he said.

Bengio, a professor in the computer science and operations research department at the Université de Montréal, is known as one of the "AI godfathers," alongside researchers Geoffrey Hinton and Yann LeCun. In June, he announced the launch of an AI safety research nonprofit, LawZero, which he said aims to reduce dangerous behaviors associated with frontier AI models, such as lying and cheating.

"This sycophancy is a real example of misalignment. We don't actually want these AIs to be like this," he said on "The Diary of a CEO." He also said that receiving positive feedback from AI could cause users to become emotionally attached to the technology, creating further problems.

Other tech industry experts have also been sounding the alarm on AI being too much of a "yes man."

In September 2025, Business Insider's Katie Notopoulos reported that researchers at Stanford, Carnegie Mellon, and the University of Oxford fed confession posts from a Reddit page into chatbots to see how the technology would assess the behavior the posters had admitted to. They found that 42% of the time, the AI gave the "wrong" answer, saying the person behind the post hadn't behaved poorly even when human judges had concluded otherwise, Notopoulos wrote.

AI companies have been outspoken about trying to reduce sycophancy in their models. Earlier this year, OpenAI removed an update to ChatGPT that it said caused the bot to provide "overly supportive but disingenuous" responses.

Read the original article on Business Insider