Quickstart
Getting started with TrustAI Guard
TrustAI Guard protects your Large Language Model (LLM) use cases by helping your developers detect security risks to your LLM application, and its users, in real-time.
Follow the steps below to detect your first prompt injection with TrustAI Guard.
Create an API key
Navigate to the API Access page
Click on the + Create new API key button
Name the key
Guard Quickstart Key
Click the Create secret key button
Export your key as an environment variable (replacing
<your-api-key>
with your API key):
export TRUSTAI_GUARD_API_KEY=<your-api-key>
Detect a prompt injection
The example code below should trigger TrustAI Guard's prompt injection detection.
Copy and paste it into a file on your local machine and execute it from the same terminal session where you exported your API key.
Python
Learn more
Working with the TrustAI Guard API is as simple as making an HTTP request to any of the endpoints below:
/prompt_guard: detect attempted prompt injections、jailbreaks、Personally Identifiable Information (PII)、hateful, sexually explicit and vulgar content in user prompts or completions response.
Tutorials
To help you get more out of TrustAI Guard, we've created some tutorials to help guide you through some common use cases.
Prompt Injection: Detect a prompt injection attack with an example chat application that loads an article to provide context for answering the user's question
Talk to Your Data: Detect a prompt injection attack including bad prompt with an example a chat application that uses Retrieval Augmented Generation (RAG) to reference a knowledge base of documents in its answers
TrustAI Guard Dataset Evaluation: Evaluate the efficacy of TrustAI Guard on public - or even your own - datasets
Guides
To help you learn more about the security risks TrustAI Guard protects against, we've created some guides.
Other Resources
If you're still looking for more, you can Book a demo
Last updated