Research

AI Is Now Threatening to Make Mathematicians Obsolete — Experts Say It's a Matter of Time

Michael Ouroumis · 2 min read

Not long ago, AI couldn't reliably add large numbers. Now it's writing graduate-level mathematical proofs — and some of the people who've spent their careers doing exactly that are starting to take the threat seriously.

Science Friday covered the emerging phenomenon this week, with experts warning that AI making professional mathematicians obsolete is no longer a hypothetical. The open question is only when.

The Journey From Arithmetic to Proofs

The failure of early AI on basic arithmetic was infamous. Ask GPT-3 to multiply two four-digit numbers and it would often confidently produce the wrong answer. It seemed to confirm the intuition that AI was pattern-matching on language, not actually reasoning about quantities.

That intuition has been overturned by the reasoning model era. Models trained with chain-of-thought and reinforcement learning on math problems have shown dramatic improvements in formal reasoning. More importantly, the integration of AI with formal proof systems — tools like Lean and Coq that can mechanically verify whether a proof is logically valid — has created feedback loops that push mathematical AI far beyond what pure language modeling could achieve.
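Proof assistants like Lean give these feedback loops their rigor: the checker accepts a proof only if every step is logically valid, so an AI-generated attempt cannot "sound right" and still slip through. As a minimal illustration (not from the article), here is a complete Lean 4 proof that the kernel verifies mechanically:

```lean
-- A tiny machine-checkable proof in Lean 4: addition on the
-- natural numbers is commutative. The proof term reuses the
-- standard-library lemma Nat.add_comm; Lean's kernel accepts
-- the theorem only after checking the term is logically valid.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

An AI system wired to such a checker gets a binary, unforgeable signal (accepted or rejected) for every attempt, which is what makes the training loop work.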

What 'Vibe-Proving' Looks Like

The Science Friday piece introduced the term "vibe-proving" — an analog to vibe coding in software development. The workflow: use AI to generate a proof attempt, let the formal verification system check it, iterate until it passes. The human role shifts from proving to directing and verifying.

For routine mathematical work — verifying lemmas, generating examples, exploring special cases — this workflow is already competitive with trained mathematicians. For some classes of formal proofs, AI is already faster.

Where Humans Still Have the Edge

Frontier mathematics isn't mostly about formal proof generation. It's about knowing which problems are worth solving, which approaches might work, which intuitions to trust. That kind of mathematical taste — the judgment that makes a great mathematician — hasn't been replicated in AI.

But "not yet replicated" is doing a lot of work in that sentence. The areas of mathematics most vulnerable to AI displacement are exactly the areas that employ the most professional mathematicians: formal verification, graduate-level coursework, applied mathematics in industry.

The Honest Reckoning

What Science Friday is reporting isn't panic — it's a sober acknowledgment from people in the field that the calculus has changed. The same trajectory that took AI from "can't multiply" to "writes graduate proofs" in five years is still accelerating.

Whether that means mathematicians will be obsolete in five more years, or twenty, or never — no one knows. But the experts who've spent their lives in mathematics are no longer comfortable dismissing the question.

