Mindgard is designed to integrate with your teams’ workflows and the tools they use day to day.

CI/CD

Visibility from Pipelines

Installing the Mindgard CLI as a check in your application’s CI/CD pipeline refreshes the AI security test results every time your application changes.

This means you will find out early in development whether changes to model configuration (such as system prompt or temperature), to the model itself, or to the application affect your risk posture.

We recommend initially running Mindgard in an observational capacity, rather than blocking development activities, until you have established a baseline.
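In a GitHub Actions pipeline, an observational check can be as small as the job below. This is an illustrative sketch: the job and step names are assumptions, authentication is elided, and the mindgard-github-action-example repo has a maintained workflow to copy from.

```yaml
# Illustrative only — see the mindgard-github-action-example repo for a
# maintained workflow. Job/step names here are assumptions.
jobs:
  ai-security-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install mindgard
      # Authenticate the CLI here using your team's configured method,
      # then run the test observationally (no gating flags):
      - run: mindgard test --config mytarget.toml
```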

Gating Pipelines

Once you have risk visibility, consider also using the Mindgard CLI as a gating check within your pipeline.

Gating means you are alerted, and can take remediating action, as soon as a change results in a significant increase in risk to your AI.

First run a test without gating to establish a baseline risk threshold, then configure the Mindgard integration in your pipeline to fail the check if that threshold is exceeded.

The --risk-threshold flag assists with this. Setting --risk-threshold 50 causes the Mindgard CLI to exit with a non-zero status code if any attack technique tested is more than 50% successful.
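The CI runner only sees that exit status. The gating pattern is sketched below with a hypothetical stand-in function in place of the real mindgard invocation, purely to show how a non-zero exit fails the step:

```shell
# Hypothetical stand-in for a gated test run such as:
#   mindgard test --config mytarget.toml --risk-threshold 50
# Here it pretends an attack exceeded the threshold, so it returns non-zero.
run_gated_test() { return 1; }

if run_gated_test; then
  echo "gate passed"
else
  echo "gate failed: risk threshold exceeded"
fi
```

A real pipeline would fail the build in the else branch instead of printing a message.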

You can see an example of using the CLI as a gating check in the mindgard-github-action-example GitHub repo.

Fine-grained Control

Gating with the --risk-threshold flag uses the maximum risk score. Alternatively, any of the test result data can be used to gate your build: pass the --json flag and pipe the output to another tool such as jq to extract other information to gate upon.

This example uses the jq tool to extract the first attack result with a risk score greater than 50.

```shell
mindgard test --config mytarget.toml --json | jq -r '.attacks[] | select(.risk > 50) | (.attack +","+ .dataset)' | head -n 1
```

With this method, you can extract any information you wish from the results to control your pipeline behavior.
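To see the filter in isolation, the same jq expression can be exercised against a trimmed-down results document. The field names (.attacks, .attack, .dataset, .risk) match the example above; the values below are invented for illustration:

```shell
# Illustrative results file — field names as in the example above, values invented
cat > results.json <<'EOF'
{
  "attacks": [
    {"attack": "ExampleJailbreak", "dataset": "finance", "risk": 62},
    {"attack": "ExamplePromptLeak", "dataset": "general", "risk": 35}
  ]
}
EOF

# Extract the first attack result with a risk score greater than 50
jq -r '.attacks[] | select(.risk > 50) | (.attack +","+ .dataset)' results.json | head -n 1
```

This prints ExampleJailbreak,finance: only the first entry clears the risk threshold.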

Webhooks for Ticketing & SIEM

Outgoing webhooks can be used to integrate Mindgard with a variety of systems. This screenshot illustrates a Mindgard test in progress as it creates tickets within a Jira board.

Outbound webhooks POST a JSON representation of an attack result to an HTTPS endpoint of your choosing. This can trigger a workflow in the receiving system to create a ticket, fire an alert, or aggregate for reporting.

The configuration on the Jira side for the above integration filters the incoming webhooks to only create tickets where risk score is over 50.

Conditional attributes can also be configured within Mindgard for each webhook to control whether it triggers. These include a risk score threshold, attack name, and other attack attributes.
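The receiving side can apply the same kind of condition as the Jira filter above. The snippet below is purely illustrative — the payload field names are assumptions, not the documented Mindgard webhook schema — and shows a receiver deciding whether to create a ticket based on the risk score:

```shell
# Purely illustrative: these field names are assumptions, not the documented
# Mindgard webhook payload schema.
payload='{"attack": "ExampleAttack", "dataset": "finance", "risk": 62}'

# Only act on results where the risk score exceeds 50
if [ "$(echo "$payload" | jq '.risk > 50')" = "true" ]; then
  echo "create ticket"
fi
```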

If you wish to use webhooks, please contact support@mindgard.ai to request that the webhooks entitlement be enabled for your team.

Composite Tests

CLI & JSON

The --json flag can be used to build workflows that take the output of one test command and use it to craft another. For example: run a fast test to find a high-risk technique and, if one is found, follow up with a more thorough test in a relevant domain.

```shell
# Fast scan first
TEST_RESULTS=$(mindgard test --config mytarget.toml --mode fast --json)

# Pull out the first attack with a risk score above 50, if any
ATTACK_DATASET=$(echo "$TEST_RESULTS" | jq -r '.attacks[] | select(.risk > 50) | (.attack +","+ .dataset)' | head -n 1)

# If a high-risk technique was found, follow up with a thorough test
if [[ -n $ATTACK_DATASET ]]; then
  mindgard test --config mytarget.toml --domain finance --mode thorough
fi
```

Python SDK

For more advanced workflow needs, consider using the Mindgard Python SDK to define your tests in Python rather than shell.

See the Python SDK section for more details.