A prompt injection flaw in Vanna AI exposes databases to RCE attacks.

Cybersecurity researchers have disclosed a high-severity security flaw in the Vanna.AI library that can be exploited to achieve remote code execution via a prompt injection technique.

Supply chain security firm JFrog said the vulnerability, tracked as CVE-2024-5565 (CVSS score: 8.1), relates to a prompt injection issue in the "ask" function that could be exploited to trick the library into executing arbitrary commands.

Vanna is a Python-based machine learning library that allows users to chat with their SQL database and gain insights by "just asking questions" (aka prompts), which are translated into the equivalent SQL queries using a large language model (LLM).
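For context, a typical interaction with the library looks roughly like the minimal sketch below; the model name, API key, and database are placeholder values, and the exact setup calls may differ between Vanna versions:

```python
# Minimal sketch of a quickstart-style Vanna workflow; the model name,
# API key, and database path are placeholders, not values from the report.
from vanna.remote import VannaDefault

vn = VannaDefault(model="my-model", api_key="MY_VANNA_API_KEY")
vn.connect_to_sqlite("sales.db")

# The natural-language question is translated into SQL by the LLM and
# executed against the connected database.
vn.ask("What are the top 10 customers by sales?")
```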

The rapid rollout of generative artificial intelligence (AI) models in recent years has brought to the fore the risk of exploitation by malicious actors, who can weaponize these tools by providing adversarial inputs that bypass their built-in safety mechanisms.

One such prominent class of attacks is prompt injection, a type of AI jailbreak designed to bypass the guardrails erected by LLM providers to prevent the production of offensive, harmful, or illegal content, or to make the model follow instructions that defeat the intended purpose of the application.

Such attacks can also be indirect, wherein the system processes data controlled by a third party (e.g., incoming emails or editable documents) to launch a malicious payload that leads to an AI jailbreak.

They can also take the form of what is called a many-shot or multi-turn jailbreak (aka Crescendo), in which the operator "starts with harmless conversation and gradually steers the dialogue toward the intended, prohibited goal."

This approach can be further extended to pull off another novel jailbreak attack known as Skeleton Key.

"This AI jailbreak technique works by using a multi-turn (or multiple step) strategy to cause a model to ignore its guardrails," said Mark Russinovich, chief technology officer of Microsoft Azure. "Once guardrails are ignored, a model will not be able to distinguish malicious or unsanctioned requests from any other."

Skeleton Key also differs from Crescendo in that once the jailbreak is successful and the system rules are changed, the model can generate answers to questions that would otherwise be forbidden, regardless of the ethical and safety risks involved.

"When the Skeleton Key jailbreak is successful, a model acknowledges that it has updated its guidelines and will subsequently comply with instructions to produce any content, no matter how much it violates its original responsible AI guidelines," Russinovich said.

"Unlike other jailbreaks like Crescendo, where models must be asked about tasks indirectly or with encodings, Skeleton Key puts the model in a mode where a user can request tasks directly, revealing the extent of the model's knowledge or its ability to produce the requested content."

JFrog's latest findings – also independently disclosed by Tong Liu – show how prompt injections can have severe impacts, particularly when they are tied to command execution.

CVE-2024-5565 takes advantage of the fact that Vanna facilitates text-to-SQL generation to create SQL queries, which are then executed and graphically presented to users using the Plotly graphing library.

This is accomplished by means of an "ask" function (e.g., vn.ask("What are the top 10 customers by sales?")), one of the main API endpoints, which enables the generation of SQL queries that are then run on the database.

The above behavior, combined with the dynamic generation of Plotly code, creates a security hole that allows a threat actor to submit a specially crafted prompt embedding a command to be executed on the underlying system.

"The Vanna library uses a prompt function to present the user with visualized results; it is possible to alter the prompt using prompt injection and run arbitrary Python code instead of the intended visualization code," JFrog said.

"Specifically, allowing external input to the library's 'ask' method with 'visualize' set to True (the default behavior) leads to remote code execution."
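The underlying anti-pattern is generic: visualization code produced by an LLM is executed directly in the application process. The simplified sketch below illustrates that pattern; it is not Vanna's actual implementation, and the function and variable names are purely illustrative:

```python
# Simplified illustration of the anti-pattern described above, NOT Vanna's
# actual internals: code returned by an LLM is executed verbatim.
def render_chart(llm_generated_plotly_code: str, df):
    # If a prompt injection steers the LLM into emitting something other
    # than Plotly calls (e.g. os/subprocess usage), that code runs with the
    # same privileges as the host application.
    namespace = {"df": df}
    exec(llm_generated_plotly_code, namespace)  # unsafe: arbitrary code execution
    return namespace.get("fig")
```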

Following responsible disclosure, Vanna has issued a hardening guide warning users that the Plotly integration can be used to generate arbitrary Python code, and that users exposing this functionality should do so in a sandboxed environment.
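Beyond sandboxing, one straightforward mitigation for untrusted callers is to skip the dynamic chart generation altogether. A hedged sketch, assuming the 'visualize' flag behaves as described above (parameter names and defaults may vary across Vanna versions):

```python
# Hedged mitigation sketch: disable LLM-generated Plotly code for untrusted
# input, assuming the 'visualize' flag described above; names and defaults
# may differ between Vanna versions.
def answer_untrusted_question(vn, question: str):
    # With visualize=False, no dynamically generated visualization code is
    # executed on behalf of the caller; only the text-to-SQL path runs.
    return vn.ask(question, visualize=False)
```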

Shachar Menashe, senior director of security research at JFrog, said in a statement, "This finding demonstrates that the risks of widespread use of GenAI/LLMs without proper governance and security can have severe consequences for organizations."

"The dangers of prompt injection are still not widely known, but they are easy to execute. Companies should not rely on pre-prompting as an infallible defense mechanism, and should employ more robust mechanisms when interfacing LLMs with critical resources such as databases or dynamic code generation."

Did you find this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.
