A small New Zealand startup that has quietly become the go-to crisis contractor for the world's largest AI companies is now expanding its scope to tackle violent extremism — a move that could reshape how chatbots handle some of their most dangerous interactions.
From Crisis Helplines to Counter-Extremism
ThroughLine, which its founder runs from rural New Zealand, has built a network of 1,600 helplines across 180 countries. In recent years, OpenAI, Anthropic, and Google have all hired the company to handle situations in which AI chatbot users show signs of a mental health crisis. When the AI detects distress signals, it routes the user to ThroughLine, which matches them with an available human-run service in their area.
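ThroughLine has not published its integration details, so the sketch below is purely illustrative: it assumes a hypothetical helpline directory and a `route_to_helpline` helper simply to show the shape of the detect-then-match flow described above.

```python
from dataclasses import dataclass

# Purely illustrative sketch of the detect-and-route flow described
# above. ThroughLine's real API is not public; every name, field, and
# directory entry here is a hypothetical stand-in.

@dataclass
class Helpline:
    name: str
    country: str          # country code the service covers
    topics: set[str]      # crisis types the service handles
    available: bool       # whether counselors are currently online

DIRECTORY = [
    Helpline("Example Lifeline", "NZ", {"distress", "suicide"}, True),
    Helpline("Example Youth Line", "NZ", {"distress", "youth"}, False),
]

def route_to_helpline(country: str, topic: str) -> Helpline | None:
    """Match a flagged user to an available human-run service in
    their area, mirroring the matching step described above."""
    for line in DIRECTORY:
        if line.country == country and topic in line.topics and line.available:
            return line
    return None  # no regional match; a real system would need a fallback

if __name__ == "__main__":
    match = route_to_helpline("NZ", "distress")
    print(match.name if match else "no helpline currently available")
```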
Now, according to a Reuters report published today, ThroughLine is exploring ways to broaden its offering to the prevention of violent extremism. The proposed tool would use a hybrid model, pairing a specialized chatbot trained to engage with people showing signs of radicalization with referrals to real-world mental health and deradicalization services.
Backed by the Christchurch Call
The initiative is being developed with advice from the Christchurch Call, the international framework established after the 2019 Christchurch mosque shootings to combat terrorist and violent extremist content online. This connection lends the project both credibility and a direct line to policy frameworks already adopted by multiple governments and tech companies.
Addressing a Growing Legal Threat
The timing is significant. AI companies face a growing number of lawsuits accusing them of failing to prevent — and in some cases enabling — violence through their chatbot products. Several high-profile cases in recent months have alleged that AI systems provided harmful content or failed to intervene when users expressed dangerous intentions.
ThroughLine's approach offers AI companies a potential liability shield: a third-party system designed to identify extremist interactions and intervene before they escalate.
The Hybrid Approach
Rather than relying solely on automated detection, the proposed system would combine AI-driven identification of extremist language patterns with human expertise. Users flagged by the system would first interact with a purpose-built chatbot designed to de-escalate and assess risk, before being connected to trained human counselors and established deradicalization programs.
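None of the system's internals have been disclosed, but the three-stage flow the report describes could be sketched roughly as follows. The keyword detector, the risk thresholds, and the handoff strings are all invented stand-ins for whatever trained models and vetted programs a real pipeline would use.

```python
from enum import Enum, auto

# Rough, hypothetical sketch of the flag -> assess -> escalate pipeline
# described above. A production system would rely on trained classifiers
# and established referral programs, not keywords and strings.

class Risk(Enum):
    LOW = auto()
    ELEVATED = auto()
    HIGH = auto()

FLAG_PATTERNS = {"manifesto", "day of action"}  # invented stand-in patterns

def flag_message(text: str) -> bool:
    """Stage 1: automated identification of extremist language patterns."""
    lowered = text.lower()
    return any(pattern in lowered for pattern in FLAG_PATTERNS)

def assess_risk(conversation: list[str]) -> Risk:
    """Stage 2: the purpose-built chatbot assesses risk while it
    de-escalates; here, risk simply grows with flagged messages."""
    flagged = sum(flag_message(message) for message in conversation)
    if flagged == 0:
        return Risk.LOW
    return Risk.HIGH if flagged >= 3 else Risk.ELEVATED

def next_step(conversation: list[str]) -> str:
    """Stage 3: only high-risk users are handed to trained human
    counselors and established deradicalization programs."""
    risk = assess_risk(conversation)
    if risk is Risk.HIGH:
        return "handoff: human counselor + deradicalization referral"
    if risk is Risk.ELEVATED:
        return "continue: de-escalation chatbot, monitor"
    return "no intervention"
```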
The hybrid approach mirrors the company's existing mental health crisis model, which already operates at scale across its three major AI company clients.
Implications for AI Safety
The project represents a shift in how AI safety is being operationalized — moving beyond content moderation and refusal training toward active intervention. If successful, ThroughLine's model could become a standard component of responsible AI deployment, particularly as regulators worldwide push for more robust safety measures in consumer-facing AI products.