Introduction
Introduction to TrustAI Guard
Last updated
Introduction to TrustAI Guard
Last updated
TrustAI Guard gives every developer the tools to protect their Large Language Model (LLM) applications, and their users, from threats like prompt injection, jailbreaks, exposing sensitive data, and more.
TrustAI Guard is model-agnostic and works with:
any hosted model provider (OpenAI, Anthropic, Cohere, etc.)
any open-source model
your own custom models
TrustAI Guard is available as a Software as a Service (SaaS) cloud-hosted or Self-Hosted product and is built on top of our continuously evolving security intelligence platform and is designed to sit in between your users and your generative AI applications.
Our security intelligence platform combines insights from public sources, data from the LLM developer community, our TrustAI Red Team, and the latest LLM security research and techniques.
Our proprietary vulnerability database contains tens of millions of attack data points, and is growing by roughly 100,000 entries per day.
You can start protecting your LLM applications in minutes by signing up and following our Quickstart guide.
Learn more about the TrustAI Dashboard available to TrustAI Pro or Enterprise SaaS customers
Experience a real-world toxic content generation attack in our Toxic Generation Attack tutorial
Experience a real-world prompt injection attack in our Prompt Injection tutorial
Experience a real-world PII Loss attack in our PII Loss tutorial
Experience a more advanced prompt injection use case in our Talk to Your Data tutorial
Evaluate TrustAI Guard on your own datasets by following our TrustAI Guard Dataset Evaluation tutorial