Remediations
Model Hardening
Make AI models more robust to adversarial inputs.
This technique makes AI models more robust to adversarial inputs, typically through adversarial training or network distillation. Examples include (i) injecting random noise during training to improve resilience against evasion attacks, which are often triggered by subtle input perturbations, (ii) gradient masking, and (iii) feature squeezing. A minimal sketch of two of these approaches is shown below.
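The following sketch illustrates two of the techniques named above under stated assumptions: FGSM-based adversarial training (a simple form of adversarial training) and bit-depth reduction (a common feature-squeezing variant). It assumes a PyTorch image classifier with inputs scaled to [0, 1]; the function names `fgsm_perturb`, `adversarial_training_step`, and `squeeze_bit_depth` are illustrative, not part of any specific library.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """FGSM: x_adv = x + epsilon * sign(grad_x loss). Inputs assumed in [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep perturbed inputs in the valid range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03, adv_weight=0.5):
    """One training step that mixes the clean loss with the loss on adversarial examples."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    clean_loss = F.cross_entropy(model(x), y)
    adv_loss = F.cross_entropy(model(x_adv), y)
    loss = (1.0 - adv_weight) * clean_loss + adv_weight * adv_loss
    loss.backward()
    optimizer.step()
    return loss.item()

def squeeze_bit_depth(x, bits=4):
    """Feature squeezing: reduce input bit depth to smooth out small adversarial perturbations."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels
```

In practice, `adversarial_training_step` would replace the standard training step inside the existing training loop, while `squeeze_bit_depth` can be applied to inputs at inference time (optionally comparing predictions on squeezed and unsqueezed inputs to flag suspicious samples).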