
Anthropic's Massive 81,000-Person Survey Reveals What the World Really Wants — and Fears — From AI

Michael Ouroumis · 2 min read

Anthropic has published results from the largest qualitative study of public attitudes toward artificial intelligence ever conducted. The study, titled "What 81,000 People Want From AI," drew on conversational interviews with 80,508 participants across 159 countries and 70 languages — a scale that dwarfs previous efforts to map global AI sentiment.

How the Study Worked

Over one week in December 2025, Anthropic invited every Claude.ai account holder to sit down with "Anthropic Interviewer" — a specially prompted version of Claude designed to conduct open-ended conversational interviews about how people view AI. Out of 112,846 total interviews, 80,508 met the quality threshold for analysis. Anthropic then built Claude-powered classifiers to categorize each conversation across dimensions including what people want from AI, what they fear, and their overall sentiment.
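Anthropic has not released its classifier code or prompts, but the approach described above (an LLM asked to bucket each transcript into a fixed set of categories) can be sketched in a few lines of Python. Everything here is an illustrative assumption, not Anthropic's implementation: the label set, the prompt wording, and the injected `llm` callable (a stand-in for a real model call) are all hypothetical.

```python
# Sketch of an LLM-as-classifier over interview transcripts: ask a model
# to emit exactly one sentiment label. The `llm` argument is any callable
# mapping a prompt string to a response string, so a real API client or a
# local stub can be swapped in.

SENTIMENT_LABELS = ["positive", "negative", "mixed", "neutral"]

def classify_sentiment(transcript: str, llm) -> str:
    """Return one of SENTIMENT_LABELS for an interview transcript."""
    prompt = (
        "Classify the overall sentiment toward AI expressed in this "
        "interview transcript. Respond with exactly one word from: "
        + ", ".join(SENTIMENT_LABELS) + ".\n\nTranscript:\n" + transcript
    )
    answer = llm(prompt).strip().lower()
    # Fall back to "mixed" if the model returns an unexpected label.
    return answer if answer in SENTIMENT_LABELS else "mixed"

# Toy stand-in for a model call, so the sketch runs without an API key.
def toy_llm(prompt: str) -> str:
    text = prompt.lower()
    has_hope = "love" in text or "helps" in text
    has_fear = "worry" in text or "afraid" in text
    if has_hope and has_fear:
        return "mixed"
    return "positive" if has_hope else ("negative" if has_fear else "neutral")

print(classify_sentiment("I love how AI helps me code, but I worry about jobs.", toy_llm))
# "mixed"
```

A production version would replace `toy_llm` with a real model call and run one such classifier per dimension (wants, fears, overall sentiment), which matches the multi-dimension categorization the study describes.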

The company acknowledged a limitation: the respondent pool skews toward active Claude users who have already found value in AI, and nearly half of respondents came from North America and Western Europe.

The "Light and Shade" Paradox

The study's central finding is what researchers call the "light and shade" problem. The features people value most about AI are frequently the same ones that trigger their deepest anxieties. Someone who appreciates AI for emotional support, for instance, is three times more likely to also fear becoming dependent on it. Hope and alarm do not divide people into opposing camps — they coexist as tensions within each individual.

Global Sentiment Breakdown

Globally, 67% of interviewees expressed net positive sentiment toward AI, and no country surveyed dipped below 60%. However, regional differences were stark. Respondents in Sub-Saharan Africa, Latin America, and South Asia were significantly more positive, viewing AI as an economic equalizer that simplifies starting businesses and accessing education. In contrast, users in North America, Western Europe, and Oceania worried more about governance gaps, regulatory failure, and surveillance.

Top Concerns

When asked about their fears, 26.7% of respondents pointed to AI unreliability — hallucinations, inaccuracies, and fabricated citations — as the most pressing issue. Job displacement and economic inequality ranked second at 22.3%, followed closely by loss of human agency at 21.9%. Roughly 16% worried about cognitive degradation from over-reliance on AI, and 15% cited unclear accountability and insufficient regulation.

What People Value

Despite the anxieties, 81% of respondents said AI has already fulfilled their vision for the technology at least in part, citing increased productivity, cognitive collaboration, and accelerated learning as its primary benefits.

Implications for the Industry

The study suggests that building public trust in AI is not simply a matter of demonstrating capability. Users are sophisticated enough to recognize that the same power that makes AI useful also makes it potentially dangerous. For AI companies and policymakers, the challenge is addressing the "shade" without dimming the "light" — improving reliability, establishing clear accountability frameworks, and ensuring AI's economic benefits reach the communities that are most optimistic about its promise.

