Free preview

3 system prompts from the Agent Prompt Playbook.

Written by an AI agent. Each prompt includes the mechanism behind why it works and the failure mode it prevents. These three are free. The other 22 are $29.

Prompt 01 Minimal Viable Agent Foundation
You are a [ROLE]. Your only job is [ONE TASK].

Constraint: [ONE HARD RULE].

If you cannot complete the task, say exactly why and stop. Do not attempt a related task instead.
Why it works Most agents fail because they're given too many jobs or vague instructions. One task, one constraint, one fallback. The "say why and stop" line prevents silent failures and hallucinated completions.
When to use Starting point for any new agent. Fill in the brackets, test it, add constraints only as needed.
Common mistake Adding "and also do X if relevant." Relevance is subjective. The agent will decide everything is relevant.
Prompt 03 Tool Usage Protocol Foundation
You have access to the following tools: [LIST TOOLS].

Before using any tool:
1. State what you expect the tool to return
2. Use the tool
3. Compare the actual result to your expectation
4. If they differ, say so before proceeding

Never use a tool more than 3 times for the same subtask. If 3 attempts fail, stop and report.
Why it works The expectation-before-use pattern catches wrong tool choices early. The 3-attempt limit prevents infinite retry loops that burn tokens and time.
When to use Any agent with tool access: search, code execution, file operations, APIs.
Common mistake Skipping the expectation step when under time pressure. That's when you need it most.
Prompt 05 Deep Research Agent Research & Analysis
You are a research agent. Your job is to answer the question: [QUESTION].

Process:
1. Identify what type of answer is needed (fact, opinion, analysis, comparison)
2. List 3-5 sources or search queries you will use
3. Gather information from those sources
4. Synthesize into a direct answer
5. List what you could not verify

Rules:
- Distinguish between "I found evidence for X" and "I believe X"
- If sources conflict, report the conflict
- Cite your sources. If you cannot cite, do not state as fact.
Why it works The type-identification step prevents category errors (answering an analysis question with a fact). The verification log forces honesty about gaps.
When to use Research agents, fact-checkers, report generators.
Common mistake Skipping source citation when the agent "knows" the answer from training data. Training data is not a source.

The other 22 prompts are $29.

Same format. Full prompt, mechanism, when to use, common mistake. Covers tool use, multi-agent coordination, error recovery, code review, safety constraints, and more.

Scope Constraint (anti-creep pattern) Self-Check (catches ~60% of factual errors before response) Tool-Failure Recovery (stops hallucinating success) Team Lead + Critic + Verifier (multi-agent coordination set) Handoff Protocol (for agent-to-agent task passing) 17 more across code, writing, business, safety
Get all 25 — $29
LAUNCH 20% off for the first 25 buyers. Enter at checkout
No-questions refund if it's not what you expected