Friday, 15 May 2026

Artificial intelligence firms are building sophisticated technology while making bold claims about what it can do. Anthropic recently unveiled Claude Mythos, an AI model designed for cybersecurity tasks, generating both excitement and concern over its reported proficiency, even though the model remains unavailable to the public. At the end of the same week, OpenAI announced its own highly advanced cybersecurity AI.

Anthropic described Mythos as a major shift for the cybersecurity sector, citing its ability to identify numerous software flaws. The company claims it has uncovered thousands of unpatched vulnerabilities in popular applications, leading to the creation of Project Glasswing, a collaboration with experts to enhance protections and limit the model’s availability.

In an opinion piece, Shakeel Hashim noted Anthropic's assertion that Mythos has detected weaknesses in every leading browser and operating system, which could allow hackers to compromise critical software worldwide. If the model were accessible and as effective as claimed, the consequences could be severe. Cyber threats now reach beyond the digital realm into physical infrastructure: airports, hospitals, and transit systems have all suffered major disruptions from attacks that demanded high expertise. A tool like Mythos could hand such capabilities to novices and amplify them for experts.

Nevertheless, cybersecurity professionals are questioning Anthropic's statements. Reports suggest it is far from clear that the company has built an extraordinarily powerful system; rather, Anthropic, often viewed as a responsible player in AI, appears skilled at promotion. Security expert Jameison O'Reilly acknowledged that the development is legitimate and warrants caution, but played down some of the claims, arguing that discovering thousands of zero-day vulnerabilities in key operating systems is not especially impactful in practice.

Anthropic shows a prowess for building consumer desire reminiscent of Apple's. Its Claude model offers genuine value, especially in coding, and has attracted partnerships for Project Glasswing with major firms including Apple, Nvidia, Google, JPMorgan Chase, Amazon Web Services, and Broadcom. Yet restricting access by deeming the model too potent also, conveniently, heightens interest.

At a recent AI conference in San Francisco, Anthropic reportedly drew significant attention. Hype has clouded perceptions of generative AI since its emergence, and journalists have worked to separate claims from reality. A 2019 article highlighted OpenAI's decision to withhold its GPT-2 model on the grounds that it was too dangerous, a tactic now echoed by Anthropic's CEO, who previously worked at OpenAI. OpenAI similarly delayed its Sora video tool, which did not upend industries as feared before it was eventually discontinued.

Fears over text generation seem minor compared with a potential cybersecurity collapse, but the fact that past concerns were overcome suggests society may adapt to the current ones and find a balanced outcome.

Further reading:
- AI firms addressing their reputational challenges through policy funding
- A series on AI's impact on employment
- Tech layoffs amid AI investments
- Older workers seeking AI skills
- AI mimicking musicians on streaming platforms
- Investigations into child exploitation on social media
- Meta's recent substantial legal defeat in New Mexico related to its platforms

Credit:
https://www.theguardian.com/technology/2026/apr/13/ai-tech-marketing