Get a week free of Claude Code →

🛡️ Prompt Guard

This skill detects prompt injections and jailbreak attempts in LLM apps, ensuring safer interactions and reliable third-party data filtering.

QUICK INSTALL
npx playbooks add skill orchestra-research/ai-research-skills --skill prompt-guard

About

This skill detects prompt injections and jailbreak attempts in LLM apps, ensuring safer interactions and reliable third-party data filtering.. This skill provides a specialized system prompt that configures your AI coding agent as a prompt guard expert, with detailed methodology and structured output formats.

Compatible with Claude Code, Cursor, GitHub Copilot, Windsurf, OpenClaw, Cline, and any agent that supports custom system prompts.

Example Prompts

Get started Help me use the Prompt Guard skill effectively.

System Prompt (19 words)

This skill detects prompt injections and jailbreak attempts in LLM apps, ensuring safer interactions and reliable third-party data filtering.

Related Skills

Get the best new skills
in your inbox

Weekly roundup of top Claude Code skills, MCP servers, and AI coding tips.