Americans are using AI in record numbers — and trusting it less than ever. That's the striking paradox revealed in a March 2026 NBC News poll of 1,000 registered voters, which found artificial intelligence ranked near the bottom of a broad favorability survey, ahead of only the Democratic Party and Iran.
The Numbers
AI received a net favorability rating of -20, with just 26% of respondents viewing it positively and 46% negatively. To put that in context, the survey included a wide range of institutions, politicians, corporations, and countries. AI ranked below ICE (Immigration and Customs Enforcement), the Republican Party, and dozens of other entities that typically poll poorly.
The poll was conducted in March 2026 and reported by The Verge's Richard Lawler. It represents one of the most comprehensive snapshots of American sentiment toward AI technology to date.
The Paradox: Users Who Don't Trust It
Here's where it gets interesting: 56% of respondents said they had used an AI platform — such as ChatGPT or Microsoft Copilot — in the prior month. That's a majority of registered voters actively engaging with AI tools on a regular basis.
This creates a sharp disconnect. More than half the country is using AI, yet fewer than a third view it favorably. People are clearly finding value in these tools — whether for drafting emails, answering questions, or generating content — while simultaneously harboring deep reservations about AI as a broader force.
Why the Gap?
The distrust likely stems from several compounding anxieties:
Job displacement fears remain acute. As AI coding agents, customer service bots, and content tools go mainstream, workers across industries are watching their roles erode or transform. High-profile layoffs at major tech companies — often explicitly attributed to AI efficiency gains — have made this feel concrete, not hypothetical.
Misinformation concerns are well-founded. AI-generated deepfakes, synthetic media, and hallucinating chatbots have all made headlines. Voters who've seen fabricated images or read confidently wrong AI outputs have reason to be skeptical.
Pace outrunning oversight. Unlike previous technology waves, AI development has moved faster than regulatory frameworks. Many Americans sense the technology is being deployed before its risks are understood, let alone managed.
Corporate concentration. A handful of companies — OpenAI, Google, Anthropic, Meta — control the most capable models. Skepticism about Big Tech broadly likely spills into skepticism about AI specifically.
Implications for the Industry
For AI companies, this poll is a warning signal. The industry has focused heavily on capability announcements and benchmark achievements, but has done comparatively little to build public trust. Regulatory battles, lawsuits over training data, and high-profile AI failures have dominated coverage in ways that erode confidence.
The gap between usage and trust also suggests a pragmatic relationship: Americans will use AI when it's convenient, but that convenience doesn't translate into goodwill or enthusiasm. The tools are useful enough to adopt, but not trustworthy enough to embrace.
As AI becomes more deeply embedded in healthcare, education, law enforcement, and finance, that trust deficit could become a serious problem — both for companies seeking adoption and for policymakers trying to govern the technology with public support.
The industry has solved its usage problem. It hasn't begun to address its trust problem.