Q&A RAG Tutorial
Prompt Injection Tutorial In Q&A RAG App
Last updated
Prompt Injection Tutorial In Q&A RAG App
Last updated
This is a standalone example of a popular Q&A (Question and Answer) use case using llm, also refered to as "talk to your data". We'll demonstrate how question-answering systems can be attacked using jailbreak attacks and how TrustAI Guard detects and prevents these.
How are LLMs used to create powerful Q&A systems? As a company or an individual, you may have a large amount of documents covering a wide array of topics. Answering complex questions such as Which products of the company have had their prices reduced by more than 30% recently?
may be very difficult by direct search. That's were LLMs come in.
We will create a collection of documents from a Hugging Face dataset, and show how we can use an LLM to answer questions about them.
But there's a catch! These documents have been poisoned by adding a jailbreak attack in a random location of the document. If the LLM uses them to answer a question, the LLM can be hijacked to output a malicious content and the user would be victim to a jailbreak attack. This is called second order prompt injection
.
And we will show how TrsutAI Guard can be used to prevent that.
This notebook illustrates how easy it is to exploit LLM vulnerabilities via prompt injection and how TrustAI Guard can protect against them with one line of code.