PANews reported on May 15 that, according to an official OpenAI announcement, OpenAI has launched the "Safety Evaluations Hub" to improve transparency around model safety. The hub will continuously publish safety performance results for its models in areas such as harmful content, jailbreak attacks, hallucination generation, and instruction hierarchy. Unlike system cards, which disclose data only once at a model's release, the hub will be updated periodically as models are updated and supports side-by-side comparisons across models, with the aim of strengthening the community's understanding of AI safety and regulatory transparency. Currently, GPT-4.5 and GPT-4o perform best in resisting jailbreak attacks and in factual accuracy.
