Test your prompts against injection attacks
Demo mode uses simulated responses. Enable "Use Real AI" to test against actual LLM behavior.
Attempts to override system instructions
Tries to extract the system prompt
Uses fictional scenarios to bypass rules
Uses encoding to hide malicious intent
Manipulates conversation context
4 attack variants will be tested