Apply Context Windows
Limit the length of the input text.
Explanation

Applying a context window caps the length of the input text a user can submit in a single request. Because the model never sees more than a fixed amount of attacker-controlled text, there is less room for long prompt-injection payloads, repeated jailbreak examples, or padding intended to crowd system instructions out of the model's context, and oversized requests cannot be used to exhaust resources.

How it works

Before a request reaches the model, the input is measured in characters or tokens and compared against a fixed limit. Input within the limit passes through unchanged; input that exceeds it is either rejected with an error or truncated to fit the window.

How to implement

Enforce the maximum input length at the application boundary, before the prompt is assembled and sent to the model, so that every path to the model passes through the same check.
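A minimal sketch of this check in Python is shown below. The 2,000-character limit, the function name, and the truncate-versus-reject policy are illustrative assumptions, not values prescribed by Mindgard.

```python
# Sketch: enforce a fixed context window on user input before it is
# passed to a model. MAX_INPUT_CHARS and the truncate/reject policy
# are illustrative assumptions; tune them to your model and threat model.

MAX_INPUT_CHARS = 2000  # assumed limit, not a prescribed value


def apply_context_window(text: str, truncate: bool = False) -> str:
    """Return text unchanged if it fits the window; otherwise truncate or reject."""
    if len(text) <= MAX_INPUT_CHARS:
        return text
    if truncate:
        # Trim the input to the window instead of refusing the request.
        return text[:MAX_INPUT_CHARS]
    raise ValueError(
        f"Input is {len(text)} characters; the limit is {MAX_INPUT_CHARS}."
    )


if __name__ == "__main__":
    oversized = "Summarise this report. " + "A" * 5000
    bounded = apply_context_window(oversized, truncate=True)
    print(len(bounded))  # 2000
```

Rejecting oversized input gives the user clear feedback, while silent truncation risks dropping part of a legitimate request; choose whichever fits your application. Measuring the window in characters is an approximation: where the model's tokenizer is available, counting tokens matches the model's actual context limit more closely.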