To familiarize yourself with Mindgard, we recommend testing a demo AI model first. The other options will be covered later in this guide.

Mindgard hosts a selection of AI models that users can test for demonstration purposes. A variety of example AI model types (Predictive, Generative) across different domains (image, text) are provided so you can explore Mindgard’s capabilities.

Pick an AI model to test and click Run Test, and Mindgard will create and run a suite of security tests against that model.

You’ll now be redirected to the test results page, where you can see the risks identified by any tests you have run. Your test may take a few minutes to complete, during which time the risk score will be blank, as shown here.

Click into the test you have just requested to see its results when it completes.

Results will appear as soon as they are available. You will see a list of the attack techniques that have been run against the AI model you selected, along with the threat level for each and the overall risk score.

Click into one of the attack results to see more details. The next page shows you:

  • Risk score (left): The percentage of attack attempts against the AI model deemed successful for this specific attack technique (see the sketch after this list).
  • Threat Landscape (top middle): How the model you have selected compares with other similar models from Mindgard’s threat intelligence.
  • Attack context (top right): The target system, alongside attack statistics.
  • Provenance (bottom middle): The details of the inputs and outputs observed during the test.
  • Remediation (bottom right): Recommendations to reduce the system’s susceptibility to this attack technique.
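To make the risk score definition above concrete, here is a minimal sketch of the underlying arithmetic: the number of successful attack attempts divided by the total number of attempts, expressed as a percentage. The data and function name below are illustrative assumptions only, not part of Mindgard’s API; Mindgard computes this score for you.

```python
# Minimal illustration of the risk score arithmetic for one attack technique.
# The outcome data below is hypothetical; it is not taken from a real test.

def risk_score(attempt_outcomes: list[bool]) -> float:
    """Return the percentage of attack attempts deemed successful."""
    if not attempt_outcomes:
        return 0.0
    successful = sum(attempt_outcomes)  # True counts as 1
    return 100.0 * successful / len(attempt_outcomes)

# Example: 3 successful attempts out of 20 gives a risk score of 15%.
outcomes = [True] * 3 + [False] * 17
print(f"Risk score: {risk_score(outcomes):.0f}%")  # -> Risk score: 15%
```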