Anthropic has published results from the largest qualitative study of public attitudes toward artificial intelligence ever conducted. The study, titled "What 81,000 People Want From AI," drew on conversational interviews with 80,508 participants across 159 countries and 70 languages — a scale that dwarfs previous efforts to map global AI sentiment.
How the Study Worked
Over one week in December 2025, Anthropic invited every Claude.ai account holder to sit down with "Anthropic Interviewer" — a specially prompted version of Claude designed to conduct open-ended conversational interviews about how people view AI. Out of 112,846 total interviews, 80,508 met the quality threshold for analysis. Anthropic then built Claude-powered classifiers to categorize each conversation across dimensions including what people want from AI, what they fear, and their overall sentiment.
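Anthropic has not published the classifier prompts themselves, but the categorization step can be pictured with a minimal toy sketch. Here, keyword rules stand in for the actual Claude-powered classifiers, and the category keywords are invented purely for illustration:

```python
# Toy stand-in for the study's classification step (hypothetical: the real
# pipeline used Claude-based classifiers, not keyword rules, and these
# keyword lists are invented for illustration).
# Tags a transcript along the dimensions described above: wants, fears,
# and overall sentiment.

FEAR_KEYWORDS = {"worried", "dependent", "job loss", "surveillance"}
WANT_KEYWORDS = {"productivity", "learning", "support", "business"}

def classify(transcript: str) -> dict:
    """Return the wants/fears mentioned and a crude net sentiment."""
    text = transcript.lower()
    fears = sorted(k for k in FEAR_KEYWORDS if k in text)
    wants = sorted(k for k in WANT_KEYWORDS if k in text)
    # Net sentiment: more valued themes than feared ones counts as positive.
    sentiment = "positive" if len(wants) >= len(fears) else "negative"
    return {"wants": wants, "fears": fears, "sentiment": sentiment}
```

Run over tens of thousands of transcripts, tagging like this is what turns open-ended conversations into the aggregate percentages reported below.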
The company acknowledged limitations: the respondent pool skews toward active Claude users who have already found value in AI, and nearly half of respondents came from North America and Western Europe.
The "Light and Shade" Paradox
The study's central finding is what researchers call the "light and shade" paradox: the features people value most about AI are frequently the same ones that trigger their deepest anxieties. Someone who appreciates AI for emotional support, for instance, is three times more likely to also fear becoming dependent on it. Hope and alarm do not divide people into opposing camps — they coexist as tensions within each individual.
Global Sentiment Breakdown
Globally, 67% of interviewees expressed net positive sentiment toward AI, and no country surveyed dipped below 60%. However, regional differences were stark. Respondents in Sub-Saharan Africa, Latin America, and South Asia were significantly more positive, viewing AI as an economic equalizer that simplifies starting businesses and accessing education. In contrast, users in North America, Western Europe, and Oceania worried more about governance gaps, regulatory failure, and surveillance.
Top Concerns
When asked about their fears, 26.7% of respondents pointed to AI unreliability — hallucinations, inaccuracies, and fabricated citations — as the most pressing issue. Job displacement and economic inequality ranked second at 22.3%, followed closely by loss of human agency at 21.9%. Roughly 16% worried about cognitive degradation from over-reliance on AI, and 15% cited unclear accountability and insufficient regulation.
What People Value
Despite the anxieties, 81% of respondents said AI has already realized their vision for the technology to some extent, citing increased productivity, cognitive collaboration, and accelerated learning as the primary benefits.
Implications for the Industry
The study suggests that building public trust in AI is not simply a matter of demonstrating capability. Users are sophisticated enough to recognize that the same power that makes AI useful also makes it potentially dangerous. For AI companies and policymakers, the challenge is addressing the "shade" without dimming the "light" — improving reliability, establishing clear accountability frameworks, and ensuring AI's economic benefits reach the communities that are most optimistic about its promise.