Remediations
Differential Privacy
Ensure the privacy of individuals within datasets.
Explanation
Differential Privacy is a mathematical framework for ensuring the privacy of individuals within datasets. It guarantees that a model's outputs reveal no meaningful additional information about any individual record included in the training data.

How it works
Differential Privacy adds calibrated random noise to computations over the data, such as aggregate query results, model outputs, or training gradients. The noise is scaled to the sensitivity of the computation (how much a single record can change the result) and to a privacy budget, epsilon: smaller epsilon values give stronger privacy guarantees at a greater cost to accuracy. Because the noise masks any single record's contribution, an observer cannot reliably infer whether a particular individual's data was included.

How to implement
Apply a differentially private mechanism at the point where data or model results leave your control. Common approaches include adding Laplace or Gaussian noise to aggregate statistics and query responses, or training with differentially private stochastic gradient descent (DP-SGD), which clips per-example gradients and injects noise during training. Track the cumulative privacy budget across releases, since repeated queries against the same data consume it.
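As an illustration, here is a minimal sketch of the Laplace mechanism, one common way to apply Differential Privacy to a numeric query. The dataset, the sensitivity value, and the choice of epsilon below are illustrative assumptions, not values prescribed by this remediation:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy version of `true_value` with epsilon-differential privacy.

    Noise is drawn from a Laplace distribution with scale sensitivity/epsilon,
    the standard calibration for the Laplace mechanism.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: a counting query ("how many records match?") has sensitivity 1,
# because adding or removing one individual changes the count by at most 1.
ages = [34, 51, 29, 62, 45, 38]          # hypothetical dataset
true_count = sum(1 for a in ages if a > 40)
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

Only the noisy count is released; the true count never leaves the system. Smaller epsilon widens the noise distribution, trading accuracy for a stronger privacy guarantee.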