Skip to main content

Documentation Index

Fetch the complete documentation index at: https://docs.mindgard.ai/llms.txt

Use this file to discover all available pages before exploring further.

Refine the system prompt to ensure the model is directed not generate harmful or malicious content. This includes instructions on rejecting prompts that may lead to harmful content generation.

Explanation

 

How it works

 

How to implement