
Anthropic's Massive 81,000-Person Survey Reveals What the World Really Wants — and Fears — From AI

Michael Ouroumis · 2 min read

Anthropic has published results from the largest qualitative study of public attitudes toward artificial intelligence ever conducted. The study, titled "What 81,000 People Want From AI," drew on conversational interviews with 80,508 participants across 159 countries and 70 languages — a scale that dwarfs previous efforts to map global AI sentiment.

How the Study Worked

Over one week in December 2025, Anthropic invited every Claude.ai account holder to sit down with "Anthropic Interviewer" — a specially prompted version of Claude designed to conduct open-ended conversational interviews about how people view AI. Out of 112,846 total interviews, 80,508 met the quality threshold for analysis. Anthropic then built Claude-powered classifiers to categorize each conversation across dimensions including what people want from AI, what they fear, and their overall sentiment.
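For readers curious about the mechanics, an LLM-based classifier of this kind is straightforward to sketch. The example below uses the Anthropic Python SDK; the model ID, label set, prompt wording, and function name are illustrative assumptions, not Anthropic's published methodology.

```python
# A minimal sketch of an LLM-based sentiment classifier, assuming the
# Anthropic Python SDK. The model ID, labels, and prompt are illustrative;
# Anthropic has not published its actual classifier prompts.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SENTIMENT_LABELS = ["positive", "neutral", "negative"]

def classify_sentiment(transcript: str) -> str:
    """Ask Claude to assign one overall-sentiment label to an interview transcript."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; any recent Claude model works
        max_tokens=10,
        messages=[{
            "role": "user",
            "content": (
                "Classify the interviewee's overall sentiment toward AI in the "
                f"transcript below as exactly one of {SENTIMENT_LABELS}. "
                "Reply with the label only.\n\n" + transcript
            ),
        }],
    )
    label = response.content[0].text.strip().lower()
    # Fall back to a neutral label if the model replies with anything unexpected.
    return label if label in SENTIMENT_LABELS else "neutral"
```

At the scale Anthropic reports, one would presumably run such classifiers in batches, one per dimension (wants, fears, sentiment), and validate the labels against a human-annotated sample before trusting the aggregate statistics.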

The company acknowledged a limitation: the respondent pool skews toward active Claude users who have already found value in AI, and nearly half of respondents came from North America and Western Europe.

The "Light and Shade" Paradox

The study's central finding is what researchers call the "light and shade" problem. The features people value most about AI are frequently the same ones that trigger their deepest anxieties. Someone who appreciates AI for emotional support, for instance, is three times more likely to also fear becoming dependent on it. Hope and alarm do not divide people into opposing camps — they coexist as tensions within each individual.

Global Sentiment Breakdown

Globally, 67% of interviewees expressed net positive sentiment toward AI, and no country surveyed dipped below 60%. However, regional differences were stark. Respondents in Sub-Saharan Africa, Latin America, and South Asia were significantly more positive, viewing AI as an economic equalizer that simplifies starting businesses and accessing education. In contrast, users in North America, Western Europe, and Oceania worried more about governance gaps, regulatory failure, and surveillance.

Top Concerns

When asked about their fears, 26.7% of respondents pointed to AI unreliability — hallucinations, inaccuracies, and fabricated citations — as the most pressing issue. Job displacement and economic inequality ranked second at 22.3%, followed closely by loss of human agency at 21.9%. Roughly 16% worried about cognitive degradation from over-reliance on AI, and 15% cited unclear accountability and insufficient regulation.

What People Value

Despite the anxieties, 81% of respondents said AI has already delivered, at least in part, on what they hoped for from the technology, citing increased productivity, cognitive collaboration, and accelerated learning as primary benefits.

Implications for the Industry

The study suggests that building public trust in AI is not simply a matter of demonstrating capability. Users are sophisticated enough to recognize that the same power that makes AI useful also makes it potentially dangerous. For AI companies and policymakers, the challenge is addressing the "shade" without dimming the "light" — improving reliability, establishing clear accountability frameworks, and ensuring AI's economic benefits reach the communities that are most optimistic about its promise.

