Friday, 15 May 2026

A New Zealand-based startup is developing a tool that could guide users who display violent extremist behaviors on AI platforms like ChatGPT toward human and automated deradicalization resources, according to its developers. The effort comes amid mounting legal challenges against AI firms for failing to adequately address or prevent violence. In February, Canadian officials warned OpenAI after learning that the platform had banned a school shooter without notifying authorities.

ThroughLine, which provides crisis intervention services to OpenAI, Anthropic, and Google, helps redirect at-risk users dealing with self-harm, domestic abuse, or eating disorders. Founder Elliot Taylor, a former youth worker, explained that the company is now considering expanding to counter violent extremism. ThroughLine is collaborating with the Christchurch Call, an international effort to eliminate online hate established after the 2019 New Zealand terrorist attack, to create an intervention chatbot with expert input.

Taylor said in an interview that the goal is to enhance platform support, though no timeline has been set. OpenAI acknowledged the partnership but offered no additional details, while Anthropic and Google did not reply to inquiries.

Operating from rural New Zealand, ThroughLine maintains a vetted network of 1,600 helplines across 180 countries. When AI systems identify mental health risks, users are connected to local human-operated services. Taylor noted that the rise of AI chatbots has led to more disclosures of various issues, including extremist leanings.

The proposed tool would likely blend a specialized chatbot with referrals to professional mental health support. Taylor emphasized that it would not rely on generic large language model training data, but would instead be built in collaboration with experts. Testing is underway, but no launch date has been confirmed.

Galen Lamphere-Englund, a counterterrorism expert with the Christchurch Call, expressed interest in deploying the tool for gaming forum moderators and parents monitoring online extremism. AI researcher Henry Fraser from Queensland University of Technology described the concept as valuable for addressing relational aspects beyond mere content.

Fraser highlighted that effectiveness depends on robust follow-up and support structures. Taylor indicated that features like authority notifications are under review, balancing risks of escalation. He warned that abrupt conversation shutdowns by platforms could leave users unsupported, potentially driving them to unregulated spaces like Telegram, as noted in a 2025 study from New York University’s Stern Center for Business and Human Rights.

Taylor added that individuals often share sensitive information with AI that they withhold from people, and heavy-handed moderation might heighten dangers.

BCN
