This skill enables iterative self-evaluation and refinement of AI outputs, improving results in quality-critical work across code, reports, and analyses.
Install the skill:

npx playbooks add skill github/awesome-copilot --skill agentic-eval
The skill provides a specialized system prompt that configures your AI coding agent as an agentic-eval expert, with a detailed methodology and structured output formats.
Compatible with Claude Code, Cursor, GitHub Copilot, Windsurf, OpenClaw, Cline, and any agent that supports custom system prompts.