Friday, 15 May 2026

AI company Anthropic is investigating allegations that unauthorized individuals gained access to its Mythos model, which the firm has flagged for potential cybersecurity risks. The announcement followed a Bloomberg report on Wednesday that a small number of people had accessed the unreleased model, which is capable of facilitating cyber-attacks. ‘We are examining a claim of improper access to Claude Mythos Preview via a third-party vendor system,’ Anthropic said.

According to Bloomberg, a few members of a private online group accessed Mythos on the day the company announced its limited release to select businesses, including Apple and Goldman Sachs, for evaluation. The report said these unidentified users gained entry through credentials belonging to one member employed at an external contractor for Anthropic, using techniques common among security researchers. Bloomberg noted that the group avoided running attack-related queries, experimenting with the technology rather than seeking to cause harm, and verified its account with screenshots and a live demonstration of the model.

Even so, the apparent security lapse is likely to alarm officials who have already voiced concerns about Mythos’s potential to cause disruption, raising questions about how hazardous technologies are safeguarded from misuse. The UK’s AI minister, Kanishka Narayan, has warned that British companies ‘should be concerned’ about the model’s ability to identify vulnerabilities in computer networks that intruders could exploit.

The model was reviewed by the UK’s AI Safety Institute (AISI), one of the world’s leading evaluators of such technology, which last week described Mythos as a significant step up in cyber risk compared with earlier versions. AISI reported that Mythos can carry out multi-step attacks and detect system flaws autonomously, operations that typically take expert humans days of effort.
Mythos was the first AI model to complete a 32-stage cyber-attack simulation designed by AISI, succeeding in three out of ten trials.

Credit:
https://www.theguardian.com/technology/2026/apr/22/anthropic-investigates-report-of-rogue-access-to-hack-enabling-mythos-ai
BCN
