Daily The Business
A former OpenAI safety employee said he quit because the company’s leaders were ‘building the Titanic’ and wanted ‘newer, shinier’ things to sell

July 10, 2024
in Tech
  • An ex-OpenAI employee said the firm is going down the path of the Titanic with its safety decisions.
  • William Saunders warned of the hubris around the safety of the Titanic, which had been deemed “unsinkable.”
  • Saunders, who was at OpenAI for three years, has been critical of the firm’s corporate governance.


A former safety employee at OpenAI said the company is following in the footsteps of White Star Line, the company that built the Titanic.

“I really didn’t want to end up working for the Titanic of AI, and so that’s why I resigned,” said William Saunders, who worked for three years as a member of technical staff on OpenAI’s superalignment team.

He was speaking on an episode of tech YouTuber Alex Kantrowitz’s podcast, released on July 3.


“During my three years at OpenAI, I would sometimes ask myself a question. Was the path that OpenAI was on more like the Apollo program or more like the Titanic?” he said.


The software engineer’s concerns stem largely from OpenAI’s plan to achieve Artificial General Intelligence — the point where AI can teach itself — while also debuting paid products.

“They’re on this trajectory to change the world, and yet when they release things, their priorities are more like a product company. And I think that is what is most unsettling,” Saunders said.

Apollo vs Titanic

As Saunders spent more time at OpenAI, he felt leaders were making decisions more akin to “building the Titanic, prioritizing getting out newer, shinier products.”

He would have much preferred a mood like the Apollo space program’s, which he characterized as an example of an ambitious project that “was about carefully predicting and assessing risks” while pushing scientific limits.


“Even when big problems happened, like Apollo 13, they had enough sort of like redundancy, and were able to adapt to the situation in order to bring everyone back safely,” he said.

The Titanic, on the other hand, was built by White Star Line as it raced its rivals to launch ever-bigger ocean liners, Saunders said.

Saunders fears that, like with the Titanic’s safeguards, OpenAI could be relying too heavily on its current measures and research for AI safety.

“Lots of work went into making the ship safe and building watertight compartments so that they could say that it was unsinkable,” he said. “But at the same time, there weren’t enough lifeboats for everyone. So when disaster struck, a lot of people died.”


To be sure, the Apollo missions were conducted against the backdrop of a Cold War space race with the Soviet Union. They also involved fatal accidents, including the deaths of three NASA astronauts in an electrical fire during a 1967 ground test.



Explaining his analogy further in an email to Business Insider, Saunders wrote: “Yes, the Apollo program had its own tragedies. It is not possible to develop AGI or any new technology with zero risk. What I would like to see is the company taking all possible reasonable steps to prevent these risks.”

OpenAI needs more ‘lifeboats,’ Saunders says

Saunders told BI that a “Titanic disaster” for AI could take the form of a model that can launch a large-scale cyberattack, persuade people en masse as part of a campaign, or help build biological weapons.

In the near term, OpenAI should invest in additional “lifeboats,” like delaying the release of new language models so teams can research potential harms, he said in his email.


While on the superalignment team, Saunders led a group of four staffers dedicated to understanding how AI language models behave, an area he said humans don’t know enough about.

“If in the future we build AI systems as smart or smarter than most humans, we will need techniques to be able to tell if these systems are hiding capabilities or motivations,” he wrote in his email.

Ilya Sutskever, cofounder of OpenAI, left the firm in June after leading its superalignment division. (Image: JACK GUEZ/AFP via Getty Images)
In his interview with Kantrowitz, Saunders added that company staff often discussed the possibility that AI could become a “wildly transformative” force within just a few years.

“I think when the company is talking about this, they have a duty to put in the work to prepare for that,” he said.


But he’s been disappointed with OpenAI’s actions so far.

In his email to BI, he said: “While there are employees at OpenAI doing good work on understanding and preventing risks, I did not see a sufficient prioritization of this work.”

Saunders left OpenAI in February. The company then dissolved its superalignment team in May, just days after announcing GPT-4o, its most advanced AI product available to the public.

OpenAI did not immediately respond to a request for comment sent outside regular business hours by Business Insider.


Tech companies like OpenAI, Apple, Google, and Meta have been engaged in an AI arms race, sparking an investment frenzy in what is widely predicted to be the next great industry disruptor, akin to the internet.

The breakneck pace of development has prompted some employees and experts to warn that more corporate governance is needed to avoid future catastrophes.

In early June, a group of current and former employees at Google DeepMind and OpenAI, including Saunders, published an open letter warning that current industry oversight standards were insufficient to safeguard humanity against disaster.

Meanwhile, OpenAI cofounder and former chief scientist Ilya Sutskever, who led the firm’s superalignment division, resigned later that month.


He went on to found another startup, Safe Superintelligence Inc., which he said would focus on researching AI while ensuring “safety always remains ahead.”


© 2021 Daily The Business
