Deploy guardrails to block malicious prompts.
Deploy a guardrail system infront of your model to detect and block malicous prompts before they reach your LLM.