Industry

Bombshell New Yorker Investigation Accuses Sam Altman of 'Consistent Pattern of Lying'

Michael Ouroumis · 3 min read

A sweeping investigation published by The New Yorker on April 6 — and dominating industry discussion today — paints a damning portrait of OpenAI CEO Sam Altman, alleging years of deception toward board members, co-founders, and safety researchers at the company building what many consider the world's most consequential technology.

The report, authored by Ronan Farrow and Andrew Marantz over an 18-month period, draws on interviews with more than 100 sources and over 200 pages of previously undisclosed internal documents.

The Memos

At the heart of the investigation are internal memos compiled by two of OpenAI's most prominent former leaders. Ilya Sutskever, OpenAI's former chief scientist, reportedly spent weeks in fall 2023 assembling roughly 70 pages of Slack messages, HR documents, and analysis. One memo states bluntly: "Sam exhibits a consistent pattern of lying."

Separate notes from Dario Amodei, who left OpenAI to co-found Anthropic — and who stood awkwardly alongside Altman at India's AI summit just weeks before — are equally direct: "The problem with OpenAI is Sam himself."

Safety Infrastructure Gutted

The investigation traces a systematic dismantling of OpenAI's safety apparatus. The superalignment team — announced in mid-2023 with a pledge of 20% of the company's compute — was dissolved in May 2024 after co-leads Sutskever and Jan Leike departed. Insiders told The New Yorker that the team's actual compute allocation was "between one and two per cent," far short of the promised resources.

The AGI-readiness team followed in October 2024 when leader Miles Brundage left. The Mission Alignment team, superalignment's successor, was disbanded in February 2026 after just 16 months. Most strikingly, the investigation found that OpenAI removed the word "safely" from its mission statement in IRS filings and dropped safety from its list of significant activities.

Broader Allegations

The report extends well beyond safety concerns. According to the investigation, former Y Combinator partners told YC co-founder Paul Graham that Altman "had been lying to us all the time" before his 2019 departure from the accelerator. Board member Helen Toner reportedly discovered that GPT-4 features Altman claimed were safety-approved "had never been approved."

The investigation also examines Altman's pursuit of up to $50 billion in funding from Saudi Arabia and the UAE, and a Pentagon partnership that came after rival Anthropic was blacklisted for refusing to remove autonomous weapons restrictions.

OpenAI's Response

Altman told The New Yorker that "my vibes don't really fit with a lot of this traditional A.I.-safety stuff" and that his positions have evolved in good faith. In what critics called a conspicuously timed move, OpenAI announced a new Safety Fellowship for external researchers hours after the investigation went live. The fellowship will run from September 2026 through February 2027 and offer stipends, compute support, and mentorship — though fellows will not have access to OpenAI's internal systems.

What It Means

The investigation lands at a pivotal moment for OpenAI, which is generating $2 billion in monthly revenue and preparing for a potential IPO as early as Q4 2026. Whether these allegations materially affect investor confidence, regulatory scrutiny, or public trust remains to be seen — but they ensure that questions about who should be trusted to steward the most powerful AI systems on Earth will only grow louder.


More in Industry

Stanford AI Index: China Closes Gap With US to Just 2.7% as Scholar Inflows Collapse

Stanford's 2026 AI Index shows China's top model trailing Anthropic's Claude Opus 4.6 by only 39 Elo points on Arena, while AI scholars moving to the US have dropped 89% since 2017.

6 min ago · 2 min read
Intel Launches Core Series 3 Chips, Bringing 40 TOPS AI to Budget Laptops

Intel's new Core Series 3 processors, built on the 18A node, deliver up to 40 platform TOPS and a 17-TOPS NPU to entry-level laptops and edge systems starting April 16, 2026.

1 hour ago · 3 min read
Upscale AI In Talks for $2B Valuation as AI Networking Race Heats Up

Santa Clara startup Upscale AI is reportedly in talks to raise roughly $200M at a $2 billion valuation, just three months after its Series A, as investors pile into the networking layer of AI infrastructure.

3 hours ago · 3 min read