PANews reported on April 25 that, according to The Information, security incidents at Anthropic and OpenAI have raised concerns about the security of AI models themselves. Anthropic is investigating whether its Claude Mythos model was accessed without authorization. Around the same time, OpenAI was found to have accidentally exposed several unreleased models through its Codex application. Industry experts note that these incidents have heightened scrutiny of AI companies' security governance and show that, despite the rapid development of AI technology, security practices still lag behind.
Analysts believe that even AI model providers that tout their cybersecurity capabilities still face significant security challenges. As AI is increasingly used to defend against cyberattacks, the platform security and access controls of the AI providers themselves have become critical risk points.

