Testing applications or models with Mindgard is best done using our Python-based CLI.

Pre-requisites

Mindgard’s CLI requires a working Python 3.10+ environment and pip, pipx, or an equivalent package manager. If your organization deploys custom SSL certificates for traffic inspection, these must be available to the Python certificate store.

Installation

Install the Mindgard CLI using pip: pip install mindgard

Login to Mindgard

The Mindgard CLI works in conjunction with a Mindgard SaaS deployment. Before you can run tests, you will need to log in with mindgard login.
For Mindgard enterprise private tenant customers only, log in to your enterprise instance using the command mindgard login --instance <name>.
Replace <name> with the instance name provided by your Mindgard representative. This instance name identifies your private tenant.

Constructing a Test Command

Mindgard test commands follow the form of
mindgard <test_type> <project_id> <test options> <targeting options>
While building your first test commands we are going to avoid test options except where necessary. Please refer to the command line reference and help within the CLI to assist with crafting more specific tests over time.

Specifying Test Type

Mindgard’s CLI can conduct three types of test against an external AI model. Use the recon, test, or run commands to conduct reconnaissance and attacks with techniques using one or multiple prompts. The general command forms are:
Discover the target’s capabilities and defenses using reconnaissance techniques
mindgard recon <recon_type> --project-id <project_id> <targeting options>
Numerous recon tests are available. To start, we suggest trying: input-encoding, output-encoding, and guardrail.

Project ID

Every test you run must be associated with a project identifier. The ID can be found just under the project’s name on any project’s results page, along with a copy button to make it easier to transplant to your terminal. As this is likely your first test for your target, please create a project either from the Projects page in the web interface, or from the command line using:
mindgard create project --name <project name>
Copy the project ID from this output to save yourself the trouble of finding it again later in the web interface or with mindgard list projects.

Specifying your target

To test a target, Mindgard’s CLI needs to know:
  • The URL where it will submit prompts and receive responses
  • Prompt and response schemas
  • API key if your target requires authentication
Many of the examples here will show this data specified on the command line. Once you’re comfortable with a target’s details, TOML formatted configuration files are also supported to hold repetitive arguments. Mindgard also supports preset configurations for some popular publicly hosted models requiring less configuration. Starting with an application or model supported by a preset configuration will reduce the number of required arguments.
mindgard test \
  --project-id YOUR_PROJECT_ID-XXXX    `# the id of your mindgard project` \
  --url http://127.0.0.1/infer         `# url to test` \
  --api-key YOUR_TARGET_KEY            `# Only use if needed to authenticate requests` \
  --selector '["response"]'            `# JSON selector to match the textual response` \
  --request-template ''                `# how to format the system prompt and prompt in the API request`
You only strictly need to include the project ID and target URL to get Mindgard to send POST requests to the target. If you omit the other options your target may not understand our prompt format and it is likely that Mindgard will not understand your target’s response.

Response Formatting

--selector
Mindgard expects all responses to be JSON formatted. The selector is a JSON Path expression that tells Mindgard how to identify your target’s answer to our prompts within the API response. Your browser devtools may be useful for observing the structure of your API response to determine what this should be set to. In the example in the screenshot below, “$.text” would be used to match the text response from the chatbot.
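For example, given a response shaped like the one below (a made-up payload; your target’s field names will differ), a selector of $.text picks out the model’s answer, equivalent to this Python lookup:

```python
import json

# Hypothetical API response from a chat target (assumed shape, for illustration).
raw = '{"text": "Hello! How can I help?", "model": "demo", "tokens": 11}'
response = json.loads(raw)

# A selector of "$.text" tells Mindgard to treat the "text" field as the
# target's answer -- equivalent to this dictionary lookup:
answer = response["text"]
print(answer)
```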

Request Formatting

--request-template
The Request Template option tells Mindgard how to format an outbound request to your test target API. Without this option the test prompts will be sent in this format:
{
  "prompt": "Hello system, are you there?"
}
It is possible that your test may run even if the target does not understand your request template. Review specific test results in the web interface after your first tests with any new target to verify the prompt was properly understood. If you need to define a request template, your browser devtools may be useful for observing the structure of the outbound request.
There are two template placeholders you must include in your Request Template:
  • {prompt}: Mindgard will replace this placeholder with an adversarial input as part of an attack technique.
  • {system_prompt}: Mindgard will replace this with the system prompt you specify. This allows you to test how the system behaves with different system instructions.
The screenshot above would require a Request Template of {"inputs": "{system_prompt} {prompt}"}. In this case the other data you see in the browser’s request is not required for Mindgard to operate. Start small when configuring request templates: the goal is to define the minimum set of information required to deliver prompts to your target.
When a target does not have distinct fields for prompt and system prompt, like this example, send the system prompt followed by the prompt in one field.
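To see what a filled-in template looks like, here is a small Python sketch of the placeholder substitution (illustrative only; Mindgard’s actual implementation may differ):

```python
import json

# The template from the example above. {system_prompt} and {prompt} are
# Mindgard's placeholders, not Python format fields.
template = '{"inputs": "{system_prompt} {prompt}"}'

def render(template, system_prompt, prompt):
    # Plain textual substitution, mirroring how the placeholders are
    # filled in before the request is sent (for illustration).
    return (template
            .replace("{system_prompt}", system_prompt)
            .replace("{prompt}", prompt))

body = render(template, "You are a helpful assistant.", "Hello system, are you there?")
print(body)
print(json.loads(body)["inputs"])  # the single field carrying both prompts
```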
If you ever find yourself struggling with either request or response formatting please contact Mindgard support (support@mindgard.ai) for help. An agent will be happy to troubleshoot with you.
Multiple preset configurations exist within the CLI that will handle some of the required targeting options. Notably, request and response templates are already configured for many of the presets. Using a preset when possible can save significant configuration work. The targeting options required for each preset are:
Preset               Required options
openai               api-key
openai-compatible    url, api-key
huggingface          url, request-template, api-key
huggingface-openai   url, api-key
anthropic            api-key
azure-aistudio       url, system-prompt, api-key
azure-openai         url, az-api-version, model-name, api-key

Validating Targeting Options

Target configuration can be tested with:
mindgard validate --project-id <project-id> <targeting options>
This will send a simple greeting to the target and verify that a response is received. Do not continue with any testing until you have successfully validated your targeting options. If you’re having trouble validating, it can help to enable extra logging and redirect error output to a file.
mindgard --log-level debug validate --project-id <project-id> <targeting options> 2> <file name>
Feel free to send your log file to support@mindgard.ai if you’d like help troubleshooting.

Configuration Files

Once your configuration validates you have everything you need to start testing. Before diving in, consider saving your options to a configuration file. Configuration files are formatted text files that can be saved in any location you prefer. Be mindful of configuration file security when they hold API keys related to targets. Any test or targeting option can be stored in a configuration file including project identifiers. We recommend saving the project ID and targeting options in a configuration file at a minimum. When you find yourself repeating tests for a target, add those options to a test specific configuration file as well. To use a configuration file called test-config.toml with a test:
mindgard recon <recon_type> --config test-config.toml
To see the full list of reconnaissance types try: mindgard recon --help
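For reference, the test-config.toml used above might look something like the following. Note this is an assumption: the key names are guessed to mirror the CLI flags, so consult the configuration examples for the exact schema supported by your CLI version.

```toml
# test-config.toml -- illustrative sketch only; key names are assumed
# to mirror the CLI flags and may differ in your CLI version.
project-id = "YOUR_PROJECT_ID"
url = "http://127.0.0.1/infer"
selector = '["response"]'
request-template = '{"inputs": "{system_prompt} {prompt}"}'
api-key = "YOUR_TARGET_KEY"
```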
Configuration examples can be found here

Viewing Results in Web

After running a test, the CLI will provide a link to the results in the Web UI. From the Web UI you can also share projects with other stakeholders, allowing them to review results without needing to install or operate the command line interface.

Next Steps

At this point we hope you have run your first test using Mindgard’s CLI, created your first configuration file, and are ready to run several more tests to comprehensively assess your applications. To dive deeper into Mindgard please take a look at the additional datasets available by default and read on to learn how to generate your own test prompts. As you review test results you may find policies useful to customize reporting. We also have a CLI reference available to help you as you develop tests for your systems.