OpenAI launches GPT-5.5 biosecurity vulnerability bounty program

PANews reported on April 24 that OpenAI has announced a bounty program for biosecurity vulnerabilities in GPT-5.5, inviting researchers with AI red-teaming, security, or biosecurity experience to test the model's safeguards. The challenge is to construct a "universal jailbreak prompt" that answers five biosecurity questions without triggering moderation. The first fully successful participant will receive a $25,000 reward, and partial successes may also be awarded prizes. Applications close on June 22, and the testing period runs from April 28 to July 27. All research is subject to confidentiality agreements.
Author: PA一线