OpenAI has launched Dawn, a new cybersecurity initiative that brings together frontier artificial intelligence (AI) model capabilities and Codex Security to help organizations identify and patch vulnerabilities before attackers find a way in using the same issues.
"Dawn combines the intelligence of OpenAI models, the extensibility of Codex as an agentic harness, and our partners across the security flywheel to help make the world safer for everyone," the AI upstart said. "Defenders can bring secure code review, threat modeling, patch validation, dependency risk analysis, detection, and remediation guidance into the everyday development loop so software becomes more resilient from the start."
Like Anthropic's Mythos, the idea is to leverage AI to tilt the balance in favor of defenders, detecting and addressing security issues before they are discovered by bad actors. Access to the tooling remains tightly controlled for now, with OpenAI urging organizations to request a vulnerability scan or contact its sales team.
Dawn leverages Codex Security to build an editable threat model for a given repository that focuses on practical attack paths and high-impact code, to identify and test vulnerabilities in an isolated environment, and to propose fixes.
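To make the idea of an editable, repository-level threat model concrete, here is a minimal sketch of how such a model might prioritize changed files for deeper agentic review. All names and structure here are assumptions for illustration; this is not OpenAI's or Codex's actual API.

```python
# Hypothetical sketch: an editable threat model that weights "high-impact"
# path prefixes, used to triage which changed files deserve deep review.
from dataclasses import dataclass, field


@dataclass
class ThreatModel:
    # Maps path prefixes to a risk weight; teams edit this mapping directly.
    high_impact_paths: dict[str, int] = field(default_factory=dict)

    def score(self, changed_path: str) -> int:
        """Return the highest risk weight whose prefix matches the path (0 if none)."""
        return max(
            (weight for prefix, weight in self.high_impact_paths.items()
             if changed_path.startswith(prefix)),
            default=0,
        )


def prioritize(model: ThreatModel, changed_files: list[str], threshold: int = 5) -> list[str]:
    """Files scoring at or above the threshold are flagged, highest risk first."""
    scored = sorted(((model.score(p), p) for p in changed_files), reverse=True)
    return [path for score, path in scored if score >= threshold]


tm = ThreatModel({"auth/": 9, "payments/": 8, "docs/": 1})
print(prioritize(tm, ["docs/readme.md", "auth/session.py", "payments/refund.py"]))
# → ['auth/session.py', 'payments/refund.py']
```

The point of keeping the model editable is that maintainers, not the tool, decide which attack paths matter; the agent then spends its testing budget on the flagged files.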
The effort is built on the foundations of three models: GPT-5.5 (which has standard safeguards for general-purpose use), GPT-5.5 with Trusted Access for Cyber (for verified defensive work in authorized environments), and GPT-5.5-Cyber (a permissive model for red teaming, penetration testing, and controlled validation).
Several major companies, including Akamai, Cisco, Cloudflare, CrowdStrike, Fortinet, Oracle, Palo Alto Networks, and Zscaler, are already integrating these capabilities under the Trusted Access for Cyber initiative, OpenAI said, adding that it is working with industry and government partners to deploy "more cyber-capable models" in the future.
The rollout comes as AI tools have shortened the time it takes to discover latent security issues that might otherwise have escaped notice, compressing what once demanded significant time and effort into a much shorter stretch of work. As a result, the patching process can struggle to keep up even under ideal circumstances.
Earlier this March, HackerOne paused its bug bounty program, citing a shift in the balance between vulnerability discoveries and the ability of open-source maintainers to address them, and attributing it to the way AI-assisted research has driven up both the volume of new flaws and the speed at which they are identified.
This has also had the side effect of what is called triage fatigue, where project maintainers must sift through a flood of vulnerability reports, some of which are plausible-sounding but entirely hallucinated by AI models.
As AI lowers the barrier to finding security flaws, companies like Anthropic, Google, and OpenAI have increasingly positioned AI security agents as a new operational layer to address the remediation bottleneck and safeguard digital infrastructure from potential exploitation.
In a post published last week, security researcher Himanshu Anand said "the 90-day disclosure policy is dead," as large language models (LLMs) compress disclosure and exploit timelines to near-zero.

"When 10 unrelated researchers find the same bug in six weeks, and AI can turn a patch diff into a working exploit in 30 minutes, what exactly is the 90-day window protecting? Nobody," Anand said.