A sweeping investigation published by The New Yorker on April 6 — and dominating industry discussion today — paints a damning portrait of OpenAI CEO Sam Altman, alleging years of deception toward board members, co-founders, and safety researchers at the company building what many consider the world's most consequential technology.
The report, the product of 18 months of reporting by Ronan Farrow and Andrew Marantz, draws on interviews with more than 100 sources and more than 200 pages of previously undisclosed internal documents.
The Memos
At the heart of the investigation are internal memos compiled by two of OpenAI's most prominent former leaders. Ilya Sutskever, OpenAI's former chief scientist, reportedly spent weeks in fall 2023 assembling roughly 70 pages of Slack messages, HR documents, and analysis. One memo states bluntly: "Sam exhibits a consistent pattern of lying."
Separate notes from Dario Amodei, who left OpenAI to co-found Anthropic, are equally direct: "The problem with OpenAI is Sam himself."
Safety Infrastructure Gutted
The investigation traces a systematic dismantling of OpenAI's safety apparatus. The superalignment team — announced in mid-2023 with a pledge of 20% of the company's compute — was dissolved in May 2024 after co-leads Sutskever and Jan Leike departed. Insiders told The New Yorker that the team's actual compute allocation was "between one and two per cent," far short of the promised resources.
The AGI Readiness team followed in October 2024 when its leader, Miles Brundage, left. The Mission Alignment team, superalignment's successor, was disbanded in February 2026 after just 16 months. Most strikingly, the investigation found that OpenAI removed the word "safely" from its mission statement in IRS filings and dropped safety from its list of significant activities.
Broader Allegations
The report extends well beyond safety concerns. Former Y Combinator partners reportedly told co-founder Paul Graham that Altman "had been lying to us all the time" before Altman's 2019 departure from the accelerator. Board member Helen Toner reportedly discovered that GPT-4 features Altman claimed were safety-approved "had never been approved."
The investigation also examines Altman's pursuit of up to $50 billion in funding from Saudi Arabia and the UAE, as well as a Pentagon partnership struck after rival Anthropic was blacklisted for refusing to lift its restrictions on autonomous weapons.
OpenAI's Response
Altman told The New Yorker that "my vibes don't really fit with a lot of this traditional A.I.-safety stuff" and that his positions have evolved in good faith. In what critics called a conspicuously timed move, OpenAI announced a new Safety Fellowship for external researchers hours after the investigation went live. The fellowship will run from September 2026 through February 2027 and offer stipends, compute support, and mentorship — though fellows will not have access to OpenAI's internal systems.
What It Means
The investigation lands at a pivotal moment for OpenAI, which is generating $2 billion in monthly revenue and preparing for a potential IPO as early as Q4 2026. Whether these allegations materially affect investor confidence, regulatory scrutiny, or public trust remains to be seen — but they ensure that questions about who should be trusted to steward the most powerful AI systems on Earth will only grow louder.