
Google's AI Tool Big Sleep Finds Zero-Day Vulnerability in SQLite Database Engine

Google said it discovered a zero-day vulnerability in the SQLite open-source database engine using its large language model (LLM)-assisted framework called Big Sleep (formerly Project Naptime).

The tech giant described the development as the "first real-world vulnerability" uncovered using the artificial intelligence (AI) agent.

"We believe this is the first public example of an AI agent finding a previously unknown exploitable memory-safety issue in widely used real-world software," the Big Sleep team said in a blog post shared with The Hacker News.


The vulnerability in question is a stack buffer underflow in SQLite, which occurs when a piece of software references a memory location prior to the beginning of the memory buffer, resulting in a crash or arbitrary code execution.

"This typically occurs when a pointer or its index is decremented to a position before the buffer, when pointer arithmetic results in a position before the beginning of the valid memory location, or when a negative index is used," according to a Common Weakness Enumeration (CWE) description of the bug class.
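As a rough, hypothetical illustration of that bug class (not the actual SQLite flaw), the C snippet below uses a negative index to write before the start of a stack buffer:

```c
#include <string.h>

/* Hypothetical sketch of a stack buffer underflow (CWE-124):
   a length of 0 turns the index into -1, so the write lands
   one byte before the buffer. Not the real SQLite bug. */
static void trim_trailing(char *buf, int len) {
    buf[len - 1] = '\0';   /* BUG: no check that len > 0 */
}

int main(void) {
    char token[8];
    memset(token, 'A', sizeof(token));
    trim_trailing(token, 0);   /* writes to token[-1]: undefined behavior */
    return 0;
}
```

Compiled with `-fsanitize=address`, the out-of-bounds write is reported as a stack-buffer-underflow; without instrumentation it may silently corrupt adjacent stack memory.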


Following responsible disclosure, the shortcoming was addressed in early October 2024. It's worth noting that the flaw was discovered in a development branch of the library, meaning it was flagged before it made it into an official release.

Project Naptime was first detailed by Google in June 2024 as a technical framework to improve automated vulnerability discovery approaches. It has since evolved into Big Sleep as part of a broader collaboration between Google Project Zero and Google DeepMind.

With Big Sleep, the idea is to leverage an AI agent to simulate human behavior when identifying and demonstrating security vulnerabilities by taking advantage of an LLM's code comprehension and reasoning abilities.


This entails using a set of specialized tools that allow the agent to navigate the target codebase, run Python scripts in a sandboxed environment to generate inputs for fuzzing, and debug the program and observe the results.

"We think that this work has tremendous defensive potential. Finding vulnerabilities in software before it's even released means that there's no scope for attackers to compete: the vulnerabilities are fixed before attackers even have a chance to use them," Google said.


The company, however, also emphasized that these are still experimental results, adding that "the position of the Big Sleep team is that at present, it's likely that a target-specific fuzzer would be at least as effective (at finding vulnerabilities)."
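For context on what a target-specific fuzzer looks like in practice, here is a minimal libFuzzer-style harness for SQLite. It is an illustrative sketch under stated assumptions (that sqlite3.c/sqlite3.h are available to build against), not Google's actual tooling:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>
#include "sqlite3.h"

/* libFuzzer entry point: called once per mutated input. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    sqlite3 *db;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK)
        return 0;

    /* NUL-terminate the raw fuzzer input so it can be passed as SQL. */
    char *sql = malloc(size + 1);
    if (sql != NULL) {
        memcpy(sql, data, size);
        sql[size] = '\0';
        /* SQL errors are expected and ignored; memory-safety bugs are
           surfaced by the sanitizers the harness is built with. */
        sqlite3_exec(db, sql, NULL, NULL, NULL);
        free(sql);
    }
    sqlite3_close(db);
    return 0;
}
```

Built with something like `clang -g -fsanitize=fuzzer,address harness.c sqlite3.c -o sqlite_fuzz`, the fuzzer mutates SQL strings while AddressSanitizer flags out-of-bounds reads and writes such as the underflow described above.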
