Input Validation / LLM Prompt Injection

Web and API

Description

LLM Prompt Injection is an injection vulnerability in generative AI models that occurs when an attacker can manipulate the behavior of a Large Language Model (LLM) by injecting crafted inputs into the model's input context. This can be used for example to "jailbreak" the model and force it to act for the attacker in an otherwise unintended manner. The malicious input to the model can come from a direct source, such as user input given to the model, or an indirect source, such as an attacker-controlled website given to the model as context.

Risk

LLM Prompt Injection may allow an attacker to leak sensitive data that the model has access to, including the supplied system prompt, user data passed as context, or data provided by "plug-ins" that the model has access to (e.g. databases or APIs).
In addition, these plug-ins could be used by the attacker in unintended ways, which is especially critical, if the model has access to various types of operations, including write access.
Furthermore, the attacker can use prompt injection to perform social engineering attacks if the malicious LLM output is given to another person who trusts the LLM.

Solution

In general, LLM prompt injection cannot be completely prevented because the current architecture of LLMs does not distinguish between different types of user input. However, it may be possible to limit the impact. The following are based on recommendations from the OWASP project:

  • Limit what the LLM can access to an absolute minimum
  • Add a human in the loop. The human should be trained to understand the risks of LLMs and that the output cannot always be trusted.
  • Monitor the LLM's input and output for anything unexpected.
    In addition, in some cases it might be an option to limit what the LLM receives as input, e.g. not to rely on large amounts of text input, i.e. to ensure that the LLM input can be trusted to some degree.

Curious? Convinced? Interested?

Arrange a no-obligation consultation with one of our product experts today.