Remediations
Model Hardening
Make AI models more robust to adversarial inputs.
This technique makes AI models more robust to adversarial inputs through methods such as adversarial training and network distillation. Examples include (i) injecting random noise during training to increase resilience to evasion attacks, especially those triggered by subtle perturbations, (ii) gradient masking, and (iii) feature squeezing, as illustrated in the sketch below.
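The following is a minimal feature-squeezing sketch, assuming a PyTorch classifier; the model, bit depth, and detection threshold are illustrative placeholders, not part of the original text. It compares the model's prediction on the raw input with its prediction on a bit-depth-reduced ("squeezed") copy; a large gap flags the input as possibly adversarial.

```python
import torch
import torch.nn.functional as F

def squeeze_bit_depth(x, bits=4):
    """Reduce colour bit depth (e.g. 8-bit -> 4-bit), keeping values in [0, 1]."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def feature_squeezing_score(model, x):
    """L1 distance between softmax outputs on raw and squeezed inputs."""
    model.eval()
    with torch.no_grad():
        p_raw = F.softmax(model(x), dim=-1)
        p_squeezed = F.softmax(model(squeeze_bit_depth(x)), dim=-1)
    return (p_raw - p_squeezed).abs().sum(dim=-1)  # one score per sample

def looks_adversarial(model, x, threshold=0.5):
    """Flag samples whose prediction shifts too much under squeezing (threshold is a placeholder)."""
    return feature_squeezing_score(model, x) > threshold
```

The threshold would normally be calibrated on a held-out set of clean inputs so that the false-positive rate stays acceptable.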
Explanation
Apply adversarial robustness techniques to re-train the model and re-deploy it securely.
How it works
Reduces the attacker's chance of success by making the model more resilient to adversarial inputs.
How to implement
Apply adversarial training (training on adversarial examples alongside clean data) or network distillation (training a student model on the softened outputs of a teacher model). A sketch of an adversarial-training step follows.
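Below is a minimal adversarial-training sketch using FGSM-style perturbations, assuming PyTorch; the toy model, data, epsilon, and hyperparameters are illustrative placeholders rather than a prescribed implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an FGSM adversarial example: x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step on a mix of clean and adversarial examples."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Tiny illustrative model and random data stand in for a real pipeline.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(32, 1, 28, 28)      # batch of fake images in [0, 1]
    y = torch.randint(0, 10, (32,))    # fake labels
    for step in range(3):
        print("loss:", adversarial_training_step(model, optimizer, x, y))
```

In practice the adversarial examples would be regenerated from each fresh training batch, and stronger attacks (e.g. multi-step PGD) are often substituted for the single-step FGSM perturbation shown here.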