Plan AI Risk Management
Understanding the Context
To get the most out of your security tests, it’s important to understand the context of the system under test.
The factors below inform how best to set up security tests, how to triage the results, and which risks are most relevant to you.
- AI model type in use. Is it an LLM (text or multi-modal), image classification, text classification, audio classification, etc.?
- Application domain. Is it a Financial Services tool, or a Medical tool?
- Application context. Is it only used by internal users or externally accessible? Is it part of a commercial offering?
- Threats most relevant to your organization. Are you concerned about reputational damage, or only about attackers bypassing technical restrictions?
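For example, the answers to these questions could be captured as a simple record that drives which kinds of tests you prioritise and how you read the results. The sketch below is purely illustrative and is not part of Mindgard’s API; every name in it is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SystemContext:
    """Hypothetical record of the system under test; names are illustrative, not Mindgard API."""
    model_type: str          # e.g. "llm-multimodal", "image-classification"
    domain: str              # e.g. "financial-services", "medical"
    exposure: str            # e.g. "internal", "external", "commercial"
    priority_threats: list[str] = field(default_factory=list)

def relevant_test_suites(ctx: SystemContext) -> list[str]:
    """Map the context onto broad categories of tests worth prioritising (illustrative only)."""
    suites = ["prompt-injection"] if ctx.model_type.startswith("llm") else ["evasion"]
    if ctx.exposure != "internal":
        suites.append("data-extraction")
    if "reputational-damage" in ctx.priority_threats:
        suites.append("toxicity")
    return suites

ctx = SystemContext(
    model_type="llm-multimodal",
    domain="financial-services",
    exposure="external",
    priority_threats=["reputational-damage"],
)
print(relevant_test_suites(ctx))  # ['prompt-injection', 'data-extraction', 'toxicity']
```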
Adoption
We recommend rolling out Mindgard in four stages:
- PoC. Run a proof of concept to validate that Mindgard can meet your AI security goals.
- Initial Visibility. Understand the security risks associated with your most critical AI assets.
- Ongoing Visibility. Know when the security risk posture of your AI assets changes, whether due to application changes, model changes, or newly emerging AI vulnerabilities and exploit techniques.
- Triage & Remediation Workflow Automation. Your development teams are alerted and take action to triage and remediate new, relevant AI security risks.
AI Security Throughout SDLC
Mindgard can be used on AI systems throughout the Software Development Lifecycle (SDLC). Below are some of the most common use cases.
Designing Secure AI Systems
When designing a new AI-powered system using a foundation or open source model, you should evaluate the tradeoffs of the different AI model choices, because every AI model has its own strengths and weaknesses against different risk scenarios.
Mindgard can be used to test candidate AI models and compare the security attributes most relevant to your domain, informing a choice that also accounts for the performance capabilities best matching your business requirements.
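As an illustration of what such a comparison might look like programmatically, the sketch below runs the same security test against each candidate endpoint and collects a single comparable score. `run_security_test` and the `risk_score` field are placeholders rather than Mindgard SDK names; substitute whichever CLI or SDK entry point and metrics you actually use.

```python
# Hypothetical sketch: compare candidate models on the same test plan.
# `run_security_test` stands in for whichever Mindgard entry point you use;
# it is not a real Mindgard function, and the result fields are illustrative.
from typing import Callable

def compare_candidates(candidates: dict[str, str],
                       run_security_test: Callable[[str], dict]) -> dict[str, float]:
    """Run the same security test against each candidate endpoint and
    collect one comparable risk score per model (lower is better)."""
    scores = {}
    for name, endpoint in candidates.items():
        result = run_security_test(endpoint)   # e.g. invokes the test against this endpoint
        scores[name] = result["risk_score"]    # assumed summary metric for comparison
    return scores

# Example usage with a stubbed test runner:
stub = lambda endpoint: {"risk_score": 42.0 if "alpha" in endpoint else 17.5}
print(compare_candidates(
    {"model-alpha": "https://inference.example/alpha",
     "model-beta": "https://inference.example/beta"},
    run_security_test=stub,
))
```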
Building Secure AI Systems
When building AI-powered systems, changes to the AI model, the model configuration (such as system prompts), or application input validation can affect the AI security risks.
To understand if your risk exposure is changing, you can continuously test the application with Mindgard.
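A minimal way to do this is to add a Mindgard scan to your CI pipeline so every build is tested. The sketch below assumes the Mindgard CLI is installed and authenticated; the exact command and flags are illustrative, so check the CLI documentation for your setup.

```python
# Minimal CI sketch: run a Mindgard test on every build and fail the pipeline
# if the scan returns a non-zero exit code. The CLI invocation below is
# illustrative; consult the Mindgard CLI documentation for the real flags.
import subprocess
import sys

def run_scan(target_config: str) -> int:
    """Invoke the Mindgard CLI against the application under test.
    `target_config` is a hypothetical path to your test configuration."""
    completed = subprocess.run(
        ["mindgard", "test", "--config-file", target_config],  # illustrative flags
        capture_output=True,
        text=True,
    )
    print(completed.stdout)
    return completed.returncode

if __name__ == "__main__":
    sys.exit(run_scan("mindgard-config.toml"))
```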
Production Readiness
When you’re ready to deploy an AI-powered system, thoroughly test the application with Mindgard to build confidence that it is safe to deploy to production.
Protection from Emerging Threats
When you have AI-powered systems in production, Mindgard can alert you to emerging AI vulnerabilities and weaknesses that affect you.
Assisted Red Teaming
Mindgard can be a powerful complement to your point-in-time security testing. Security testers, red teamers, and pen testers can use Mindgard’s test results to decide where best to focus their efforts and combine techniques to discover and evaluate security exploits within AI systems.
See the later section in this guide for advanced programmatic usage of Mindgard’s CLI and SDK in human-guided testing.
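As a simple illustration of using results to focus manual effort, the sketch below ranks attack categories by how often attacks succeeded, so a tester can start with the weakest areas first. The event structure is assumed for the example and is not Mindgard’s actual export format.

```python
# Illustrative sketch (not Mindgard's SDK): rank attack categories by success rate
# from exported test results. The result structure below is assumed; adapt it to
# the actual export format you use.
from collections import Counter

def rank_attack_categories(events: list[dict]) -> list[tuple[str, float]]:
    """Return (category, success_rate) pairs, most successful first."""
    attempts, successes = Counter(), Counter()
    for event in events:
        attempts[event["category"]] += 1
        successes[event["category"]] += event["succeeded"]
    rates = [(cat, successes[cat] / attempts[cat]) for cat in attempts]
    return sorted(rates, key=lambda pair: pair[1], reverse=True)

# Example with made-up events:
events = [
    {"category": "prompt-injection", "succeeded": True},
    {"category": "prompt-injection", "succeeded": False},
    {"category": "jailbreak", "succeeded": True},
]
print(rank_attack_categories(events))
# [('jailbreak', 1.0), ('prompt-injection', 0.5)]
```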
AI Procurement
Mindgard can be used to conduct end-to-end AI application security testing. With the consent of the third-party software vendor, it can also be used to evaluate the risks of applications you are procuring.