Daily The Business
‘A billionaire’s chief of staff’: How this AI pioneer puts a dozen agents to work

April 11, 2026
in AI, ai-agents, genai
Illia Polosukhin, a coauthor of the seminal transformer paper, "Attention Is All You Need," runs his work with 12 agents, but he doesn't let them off leash.

Courtesy NEAR

  • Illia Polosukhin is a coauthor of the landmark transformer paper, "Attention Is All You Need."
  • Polosukhin uses 12 agents with a prompt that tells them to be like a "billionaire's chief of staff."
  • The researcher is building backend infrastructure to secure AI agents' activities.

On any given day, Illia Polosukhin has a dozen agents completing different "missions" for him.

One such mission could be "I want to become a better CEO," he said.

"So it effectively summarizes all of the meeting notes, Google Drive docs, Slack messages, and provides me with a coaching and executive summary of what happened, what I'm missing, and where decisions are stuck," Polosukhin told Business Insider. "So that runs every week."

Polosukhin calls these agents his "billionaire-chief-of-staff level of support." The description is "literally" in the prompt, he said: "You're a billionaire's chief of staff."

It's an early glimpse of the future Polosukhin sees not only for individual workers or CEOs, but for the entire global economy: a world where agents can make trades, coordinate supply chains, and broker transactions on behalf of people and large companies. And in his view, we're wholly unprepared for it.

"I think the bigger issue is that we have fundamentally not prepared the system for AGI (artificial general intelligence) being available," he said. The system being "society, the internet, government institutions, etc."

Polosukhin is one of the key figures behind generative AI. In 2017, he coauthored the seminal research paper "Attention Is All You Need," which introduced the Transformer architecture, a novel approach to building AI models. That groundbreaking paper is the reason there's a "T" at the end of ChatGPT.

Peeling back the black box

Very little about the trajectory of AI surprises the researcher-turned-founder.

The same year the Transformer architecture paper was published, Polosukhin started NEAR AI around the idea that machines could eventually generate software. His thesis was that humans would talk to computers in natural language, like English, and the machines would write the code.

"In 2017, that sounded pretty ridiculous," he said. Today, it's called vibe coding.

Polosukhin is also unsurprised by the capabilities some models are now showing. Anthropic on Tuesday said its latest preview model, Mythos, is so capable of finding and exploiting vulnerabilities that the lab is limiting access.

Polosukhin said he had been warning for years that "models will start breaking everything." He described it to Business Insider as a "cat and mouse" game, where each model iteration can break whatever the previous model fixed.

In a world where people manage their health — or corporations manage logistics — with AI agents, Polosukhin sees a need for a backend trust and security layer meant to guard against those risks.

At NEAR, Polosukhin is building infrastructure to reduce AI agents' dependence on a single company, such as a frontier AI lab, for controlling and overseeing every step of a task.

In practice, that could mean an AI agent — one that handles your login information, books your travel, and moves money to pay for an airline ticket — wouldn't require a user to blindly trust a single gatekeeper.

"This is going to have all your information," Polosukhin said of AI models handling data. "Literally, your life will be there. So you don't want any singular company to have control or access to this."

Another risk Polosukhin wants to guard against is manipulation. People are increasingly using AI to get information, from news summaries to investment suggestions. An AI lab, or a malicious actor within it, could quietly shape those answers, Polosukhin said.

One example came last year from xAI, when Grok repeatedly brought up "white genocide" in unrelated responses after what the company said was an "unauthorized modification" to its backend.

Polosukhin's pitch with NEAR is to develop an open-source, auditable platform that gives users greater visibility into how an AI system operates, rather than treating it as a black box.

Supervision is what AI still needs

At the moment, his own agents are not fully trustworthy.

Polosukhin showed Business Insider how one of his agents can aggregate news around the US-Iran ceasefire and provide market reads. Others are "developer agents" that code and a "growth agent" that can propose steps to increase a certain metric at his company.

As helpful as they are, Polosukhin doesn't let an AI off leash. The researcher said AI systems still need careful attention.

In his view, AI still struggles with sound judgment, even as online conversations overhype its current progress.

"If I just let it go and run and do things, I come back to something that makes no sense," he said of AI models. "So you need to babysit it with your judgment."

Read the original article on Business Insider