Google AI “Big Sleep” Stops Exploitation of Critical SQLite Vulnerability Before Hackers Act

Google on Tuesday revealed that its large language model (LLM)-assisted vulnerability discovery framework found a security flaw in the SQLite open-source database engine before it could be exploited in the wild.

The vulnerability, tracked as CVE-2025-6965 (CVSS score: 7.2), is a memory corruption flaw affecting all versions prior to 3.50.2. It was discovered by Big Sleep, an artificial intelligence (AI) agent that was launched by Google last year as part of a collaboration between DeepMind and Google Project Zero.

“An attacker who can inject arbitrary SQL statements into an application might be able to cause an integer overflow resulting in read off the end of an array,” SQLite project maintainers said in an advisory.
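In other words, attacker-controlled SQL can push an arithmetic computation past the limits of a fixed-width integer so that a later bounds check no longer protects the array behind it. The snippet below is a minimal, hypothetical C sketch of that general pattern; the function and constants are invented for illustration and this is not SQLite's actual code:

```c
#include <stdint.h>
#include <stdio.h>

#define BUF_SIZE 16u  /* size of the array being "protected" */

/* Broken bounds check: offset + len is computed in 32-bit arithmetic,
 * so the sum wraps around modulo 2^32. A huge attacker-chosen len can
 * therefore slip through, and the subsequent read of len bytes would
 * run far past the end of the buffer. */
static int check_passes(uint32_t offset, uint32_t len) {
    return offset + len <= BUF_SIZE;
}

int main(void) {
    uint32_t offset = 8;
    uint32_t len = 0xFFFFFFF8u;  /* 8 + len wraps to 0, which is <= 16 */
    printf("bounds check passes: %s (requested %u bytes)\n",
           check_passes(offset, len) ? "yes" : "no", len);
    return 0;
}
```

The wrapped sum satisfies the check even though the real read length remains enormous, which is how an integer overflow turns into an out-of-bounds read of the kind the advisory describes.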


The tech giant described CVE-2025-6965 as a critical security issue that was “known only to threat actors and was at risk of being exploited.” Google did not reveal who the threat actors were.

“Through the combination of threat intelligence and Big Sleep, Google was able to actually predict that a vulnerability was imminently going to be used and we were able to cut it off beforehand,” Kent Walker, President of Global Affairs at Google and Alphabet, said.


“We believe this is the first time an AI agent has been used to directly foil efforts to exploit a vulnerability in the wild.”

In October 2024, Big Sleep was behind the discovery of another flaw in SQLite, a stack buffer underflow vulnerability that could have been exploited to cause a crash or arbitrary code execution.

Coinciding with the development, Google has also published a white paper on building secure AI agents such that they have well-defined human controllers, their capabilities are carefully limited to avoid potential rogue actions and sensitive data disclosure, and their actions are observable and transparent.

“Traditional systems security approaches (such as restrictions on agent actions implemented through classical software) lack the contextual awareness needed for versatile agents and can overly restrict utility,” Google’s Santiago (Sal) Díaz, Christoph Kern, and Kara Olive said.

“Conversely, purely reasoning-based security (relying solely on the AI model’s judgment) is insufficient because current LLMs remain susceptible to manipulations like prompt injection and cannot yet offer sufficiently robust guarantees.”


To mitigate the key risks associated with agent security, the company said it has adopted a hybrid defense-in-depth approach that combines the strengths of both traditional, deterministic controls and dynamic, reasoning-based defenses.


The idea is to create robust boundaries around the agent’s operational environment so that the risk of harmful outcomes, particularly malicious actions carried out as a result of prompt injection, is significantly reduced.

“This defense-in-depth approach relies on enforced boundaries around the AI agent’s operational environment to prevent potential worst-case scenarios, acting as guardrails even if the agent’s internal reasoning process becomes compromised or misaligned by sophisticated attacks or unexpected inputs,” Google said.

“This multi-layered approach recognizes that neither purely rule-based systems nor purely AI-based judgment are sufficient on their own.”
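As a rough illustration of the deterministic half of that hybrid, the sketch below shows a classical allowlist sitting between an agent's proposed action and its execution. This is an assumed design for illustration only; the action names and dispatch function are hypothetical and do not reflect Google's actual implementation:

```c
#include <stdio.h>
#include <string.h>

/* Deterministic policy layer: the only actions the agent may perform. */
static const char *ALLOWED_ACTIONS[] = { "read_file", "run_query", NULL };

static int action_allowed(const char *action) {
    for (const char **p = ALLOWED_ACTIONS; *p != NULL; ++p) {
        if (strcmp(*p, action) == 0)
            return 1;
    }
    return 0;
}

/* Enforced boundary: the check runs in ordinary code outside the model,
 * so it holds even if the agent's reasoning is hijacked by prompt
 * injection. Logging every decision keeps the agent's actions
 * observable, per the white paper's principles. */
static void dispatch(const char *action) {
    if (!action_allowed(action)) {
        printf("audit: blocked by policy: %s\n", action);
        return;
    }
    printf("audit: executing: %s\n", action);
}

int main(void) {
    dispatch("run_query");    /* permitted by the allowlist */
    dispatch("delete_table"); /* denied regardless of the model's output */
    return 0;
}
```

Because the policy is enforced outside the model, a prompt-injected agent can argue for a forbidden action but cannot perform it; reasoning-based checks would then form the second, dynamic layer of the defense.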
