OpenAI has altered its mission statement, removing the word "safe" from its commitment to developing artificial general intelligence. The company previously pledged to build AGI that is "safe and beneficial to humanity." The updated language drops the safety qualifier entirely.
What Changed
The original OpenAI charter, published in 2018, centered safety as a core principle. The company was founded explicitly as a counterweight to unchecked AI development, with the stated goal of ensuring powerful AI systems would be developed responsibly.
The revised mission statement now focuses on making AGI "beneficial to humanity" without the safety modifier. While OpenAI has not issued a public statement explaining the change, it was noticed by researchers and policy advocates who monitor the company's governance documents.
The Context
The timing is significant. OpenAI is in the process of restructuring from its unusual capped-profit model into a fully for-profit corporation. This transition has been accompanied by:
- A $41 billion investment from SoftBank, one of the largest single investments in a private company
- Plans for a potential IPO at a valuation of approximately $500 billion later in 2026
- The departure of several safety-focused researchers over the past year
- Dissolution of the company's internal Superalignment team in 2025
Paradoxically, OpenAI and Microsoft simultaneously joined the UK AI Safety Institute's Alignment Project, suggesting the company's relationship with safety is more complex than the mission statement change alone implies.
Critics argue the mission change reflects a company that has systematically deprioritized safety in favor of growth and market dominance. Supporters counter that safety work continues internally regardless of the mission statement's wording.
Industry Reaction
The AI safety community responded with alarm. Multiple researchers pointed out that OpenAI's original appeal — the reason many top scientists joined the company — was its explicit commitment to cautious, safety-first development.
Several former OpenAI employees posted on social media noting the contrast between the company's founding principles and its current trajectory. One former researcher described it as "the final page turn in a story that's been unfolding for two years."
What It Means
Mission statements are symbolic, but symbols matter. OpenAI's original safety commitment served as a benchmark against which its actions could be measured. Removing that language reduces external accountability at precisely the moment the company is building its most powerful systems yet.
Whether OpenAI's actual safety practices have changed is a separate question — but the willingness to drop the word from its public-facing mission suggests where the company's priorities now lie. Meanwhile, the Pentagon has fast-tracked competitor xAI's Grok for classified systems, showing that the military establishment is not waiting for the safety debate to be resolved.