The Core Problem
AI coding tools ship fast but create chaos. Without structure, they make assumptions, choose arbitrary approaches, and produce code that doesn't fit your architecture. The context window fills up, hallucinations increase, chat history is unsearchable, and every new session starts from zero.
Assumption-Driven
Guesses at requirements instead of asking clarifying questions
Arbitrary Choices
Picks random technical approaches without considering your stack
Poor Fit
Produces code that doesn't match existing patterns or conventions
No Checkpoints
Skips verification and claims completion without proof
The Solution: Context-Driven Development
Draft treats context as a managed artifact alongside code. File-based persistent memory replaces ephemeral chat. Your repository becomes the single source of truth.
When to Use
Good Fit
Design Decisions
Features requiring architecture choices, API design, or data model decisions
Team Review
Work that will be reviewed by others — specs are faster to review than code
Multi-Step Work
Complex implementations spanning multiple files, modules, or phases
5-Step Quickstart
# 1. Initialize project context
/draft:init
# 2. Create a feature track
/draft:new-track "Add user authentication"
# 3. Start implementing
/draft:implement
# 4. Verify coverage
/draft:coverage
# 5. Review the result
/draft:review
Economics: Why Specs Win
Writing specs feels slower. It isn't. Overhead is constant (~20% for simple tasks), but savings scale with complexity, team size, and criticality.
For critical product development, Draft isn't overhead — it's risk mitigation.
Why Draft
Draft is a methodology-first plugin that layers onto tools you already use. No new IDE to adopt. No subscription. No vendor lock-in.
Free & Open Source
Works With Your Existing Tools
Draft is a plugin, not a replacement. Zero switching cost — install and start using it inside your current editor.
Claude Code
Native plugin with full slash command support. Install from marketplace in 30 seconds.
GitHub Copilot
Drop-in instructions file. Works in VS Code, JetBrains, Neovim — wherever Copilot runs.
Cursor
Add from GitHub as a Cursor rule. Works alongside your existing Cursor setup.
Antigravity IDE & Gemini
Uses a lightweight `.gemini.md` bootstrap file pointing to a global or local installation.
Your specs, plans, and architecture docs are plain markdown files in your repo. Switch tools any time — your project knowledge stays with you.
Deeper Than Any AI IDE
Most AI coding tools focus on writing code faster. Draft focuses on writing correct code — with methodology depth no IDE provides.
Best Fit For
Brownfield Projects
Existing codebases where understanding the architecture before changing it prevents production incidents.
Enterprise & Regulated
Teams needing ADRs, audit trails, change management, and traceable decision-making.
Backend & Infrastructure
Microservices, APIs, data pipelines — where ACID compliance and consistency boundaries matter.
Teams Over Solo
Specs and plans go through PR review before any code is written. The entire team aligns before implementation starts.
Install
Install Draft as a Claude Code plugin, or use the integrations for Cursor, GitHub Copilot, Antigravity IDE, and Gemini.
Claude Code (Marketplace)
# Install from marketplace
/plugin marketplace add mayurpise/draft
/plugin install draft
Prerequisites: Claude Code CLI, Git, and Node.js 18+.
Cursor
Cursor > Settings > Rules, Skills, Subagents > Rules > New > Add from GitHub:
https://github.com/mayurpise/draft.git
GitHub Copilot
Download the comprehensive `copilot-instructions.md` context file, tailored to Copilot's specific requirements:
mkdir -p .github && curl -o .github/copilot-instructions.md https://raw.githubusercontent.com/mayurpise/draft/main/integrations/copilot/.github/copilot-instructions.md
Antigravity IDE
Install globally and set up your bootstrap configuration once:
# Clone skills to Antigravity global directory
mkdir -p ~/.gemini/antigravity/skills
git clone https://github.com/mayurpise/draft.git ~/.gemini/antigravity/skills/draft
# Set up the bootstrap
curl -o ~/.gemini.md https://raw.githubusercontent.com/mayurpise/draft/main/integrations/gemini/.gemini.md
Gemini
Use the lightweight bootstrap file in your local repository:
curl -o .gemini.md https://raw.githubusercontent.com/mayurpise/draft/main/integrations/gemini/.gemini.md
Workflow
The Draft Workflow
By treating context as a managed artifact alongside code, your repository becomes the single source of truth.
Step 1: Context Files
/draft:init creates persistent context files that define your project landscape.
These files live in draft/, are git-tracked, and load automatically. AI always
starts with your ground truth instead of assumptions.
Step 2: Spec & Plan
When you ask AI to "add authentication," it immediately writes code. /draft:new-track conducts a collaborative intake — structured conversation where AI acts as expert collaborator. It asks clarifying questions, contributes expertise (patterns, risks, trade-offs), and updates the spec progressively.
Step 3: Decompose (optional)
For multi-module features, /draft:decompose maps your feature into discrete modules with defined responsibilities, API surfaces, and dependency graph. This prevents tangled code and circular dependencies.
Step 4: Implement
/draft:implement executes one task at a time from the plan, follows TDD cycle (test first, then code, then refactor), runs verification gates before marking completion, and triggers three-stage review at phase boundaries.
Step 5: Verify Quality
Passing tests doesn't guarantee good code. /draft:coverage measures test completeness (95%+ target). /draft:review runs context-aware checks: architecture conformance, security scans (hardcoded secrets, SQL injection, XSS), performance anti-patterns, and compliance with the spec. /draft:bughunt performs exhaustive bug hunting across 12 dimensions. /draft:deep-review audits entire modules for ACID compliance and architectural resilience.
The Constraint Hierarchy
Each document layer narrows the solution space. By the time AI writes code, most decisions are already made.
The AI becomes an executor of pre-approved work, not an autonomous decision-maker. Explicit specs, phased plans, verification steps, and status markers keep implementation focused and accountable.
Review Before Code
This is Draft's most important feature. In traditional AI coding, you discover the AI's design decisions during code review — after it's already built the wrong thing. With Draft, the AI writes a spec first. You review the approach in a document, not a diff. Disagreements are resolved by editing a paragraph, not rewriting a module.
Collaborative Intake: AI as Expert Partner
Instead of dumping requirements at the AI and hoping for the best, /draft:new-track
conducts a structured conversation where AI acts as an expert collaborator — asking the right
questions, contributing knowledge, and building the spec progressively.
This levels the playing field. Junior engineers get senior-level guidance. Senior engineers can't shortcut rigor. Both produce well-documented specs with traceable reasoning. The discipline scales across your entire team.
Architecture Mode (Optional)
Standard Draft gives you specs and plans. Architecture Mode goes deeper — it forces the AI to design before it codes. Every module gets a dependency analysis. Every algorithm gets documented in plain language. Every function signature gets approved before implementation begins.
When to use: Multi-module features, new projects, complex algorithms, teams wanting maximum review granularity.
Overkill for: Simple features touching 1-2 files, bug fixes with clear scope, configuration changes.
TDD Workflow
AI-generated code without tests is a liability. When TDD is enabled in workflow.md,
Draft forces the AI to prove its code works at every step.
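To illustrate the cycle Draft enforces, here is a minimal RED → GREEN → REFACTOR round in Python. The `slugify` function and its test are invented for this sketch; they are not part of Draft:

```python
# RED: write the failing test first. At this point slugify does not
# exist, so running test_slugify() would raise NameError -- that is
# the "red" state.
def test_slugify():
    assert slugify("Add User Auth!") == "add-user-auth"

# GREEN: the minimal implementation that makes the test pass.
def slugify(title: str) -> str:
    # Lowercase alphanumerics, everything else becomes a separator.
    cleaned = "".join(c.lower() if c.isalnum() else " " for c in title)
    return "-".join(cleaned.split())

# REFACTOR: with the test green, restructure freely -- the test is
# the safety net. Here we simply re-run it to confirm.
test_slugify()
```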
Architecture Discovery (Brownfield)
For brownfield projects, /draft:init performs a deep six-phase codebase analysis that generates architecture.md (a comprehensive engineering reference) and derives .ai-context.md (machine-optimized AI context). These documents become the persistent context every future track references.
Phase 1: Orientation — System map with mermaid diagrams, directory hierarchy, entry points, request/response flow, tech stack inventory
Phase 2: Logic — Data lifecycle (state machines, storage topology, transformation chains), design patterns, complexity hotspots, conventions, external dependencies, critical invariants, security architecture, concurrency model, error handling, observability
Phase 3: Module Discovery — Module dependencies, module inventory, dependency order
Phase 4: Critical Path Tracing — End-to-end write/read/async paths with consistency boundaries, failure recovery matrix, commit points
Phase 5: Schema & Contract Discovery — Protobuf, OpenAPI, GraphQL, database schemas, inter-service dependencies
Phase 6: Test, Config & Extension Points — Test mapping, config discovery, extension cookbooks
Pay the analysis cost once, benefit on every track. Architecture discovery turns your codebase into a documented system that any AI assistant can understand instantly.
Revert Workflow
AI makes mistakes. When it does, you need to undo cleanly. Draft's revert understands the logical structure of your work at three granularities: task (single task's commits), phase (all commits in a phase), or track (entire track's commits). Preview, confirm, execute — git revert + Draft state update together.
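A sketch of the selection step, assuming commits carry a [track/phase/task] message prefix (a convention invented here; Draft's actual commit format may differ):

```python
def commits_to_revert(commits, track, phase=None, task=None):
    """Select commits at track/phase/task granularity.

    `commits` is a list of (sha, message) tuples, newest first. The
    "[track/phase/task]" message prefix is an assumed convention for
    this sketch, not Draft's documented format.
    """
    prefix = f"[{track}"
    if phase is not None:
        prefix += f"/{phase}"
    if task is not None:
        prefix += f"/{task}"
    # Newest-first order is already what `git revert` wants.
    return [sha for sha, msg in commits if msg.startswith(prefix)]

log = [("c3", "[auth/2/1] add login route"),
       ("c2", "[auth/1/2] hash passwords"),
       ("c1", "[auth/1/1] add user model")]
assert commits_to_revert(log, "auth", phase=1) == ["c2", "c1"]
```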
Quality Disciplines
AI's default failure mode is to guess at fixes, skip verification, and claim success. Draft embeds three quality agents directly into the workflow.
Systematic Debugging
When a task is blocked ([!]), the Debugger Agent enforces a four-phase process:
Investigate → Analyze → Hypothesize → Implement. After three failed hypothesis cycles, it escalates to
you with everything learned and eliminated. The root cause is documented in plan.md.
Three-Stage Review
At every phase boundary, the Reviewer Agent runs three sequential checks:
Stage 1: Automated Validation — Architecture conformance, dead code, circular dependencies, security anti-patterns, performance issues.
Stage 2: Spec Compliance — All functional requirements implemented? Acceptance criteria met? No scope creep or missing features?
Stage 3: Code Quality — Follows project patterns from tech-stack.md? Appropriate error handling? Tests cover real logic? Maintainability and complexity.
Critical issues must be fixed before proceeding. Important issues should be fixed. Minor issues noted but don't block.
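The gating rule can be sketched as a tiny decision function (severity labels as lowercase strings are an assumption for illustration):

```python
def phase_gate(findings):
    """Decide whether a phase-boundary review blocks progress.

    `findings` is a list of severity labels. Per Draft's rule:
    critical blocks, important should be fixed, minor is noted.
    """
    if "critical" in findings:
        return "blocked: fix critical issues before proceeding"
    if "important" in findings:
        return "proceed with follow-ups: important issues should be fixed"
    return "proceed: minor issues noted"

assert phase_gate(["minor", "critical"]).startswith("blocked")
assert phase_gate(["important"]).startswith("proceed with")
assert phase_gate([]) == "proceed: minor issues noted"
```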
Code Coverage (95%+ target)
/draft:coverage runs your project's coverage tool and classifies every uncovered
line:
Testable — Should be covered. Suggests specific tests to write.
Defensive — Error handlers for impossible states. Acceptable to leave.
Infrastructure — Framework boilerplate and entry points. Acceptable.
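A rough sketch of how such a triage could work. The keyword heuristics below are assumptions for illustration, not Draft's actual classification rules:

```python
def classify_uncovered(line: str) -> str:
    """Triage one uncovered source line into Draft's three buckets
    (heuristics assumed for illustration)."""
    stripped = line.strip()
    if stripped.startswith(("if __name__", "app.run", "main(")):
        return "infrastructure"   # entry points / framework boilerplate
    if "unreachable" in stripped.lower() or "raise AssertionError" in stripped:
        return "defensive"        # guards for impossible states
    return "testable"             # default: should be covered

assert classify_uncovered('if __name__ == "__main__":') == "infrastructure"
assert classify_uncovered("raise AssertionError('unreachable')") == "defensive"
assert classify_uncovered("total += item.price") == "testable"
```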
Module Lifecycle Audit
/draft:deep-review performs an exhaustive end-to-end lifecycle review of a service,
component, or module. It evaluates ACID compliance, architectural resilience, and
production-grade enterprise quality. Non-blocking by default. Results in
deep-review-report.md.
Exhaustive Bug Hunt (12 dimensions)
/draft:bughunt performs exhaustive defect discovery: correctness, reliability,
security, performance, UI responsiveness, concurrency, state management, API contracts,
accessibility, configuration, tests, maintainability. Findings severity-ranked
(Critical/High/Medium/Low) with file:line locations in bughunt-report.md.
Command Reference
Draft provides 16 slash commands for the full development lifecycle.
Overview and intent mapping
What it does:
- Shows available commands and guides you to the right workflow
- Maps natural language intent to commands
Usage:
/draft
Initialize project context
What it does:
- Detects brownfield (existing) vs greenfield (new) project
- Architecture discovery (brownfield): six-phase deep analysis with mermaid diagrams, data state machines, consistency boundaries
- Creates product.md, tech-stack.md, workflow.md, .ai-context.md, architecture.md
- Optionally enables Architecture Mode for module decomposition
- Creates tracks.md master registry
Usage:
/draft:init
/draft:init refresh
Monorepo federation and service aggregation
What it does:
- Scans immediate child directories for service markers
- Reads each service's draft/ context
- Synthesizes root-level documents for a system-of-systems view
- Generates service registry, dependency graph, tech matrix
- Bughunt mode: runs /draft:bughunt across subdirectories, aggregates results
Usage:
/draft:index
/draft:index --init-missing
/draft:index bughunt
/draft:index bughunt dir1 dir2
Collaborative intake for spec + plan creation
What it does:
- Creates spec-draft.md + plan-draft.md immediately
- Conducts collaborative intake — one question at a time
- AI contributes expertise: patterns, risks, trade-offs, citations
- Checkpoints between phases for refinement
- On confirmation: promotes drafts to spec.md + plan.md
Usage:
/draft:new-track "Add user authentication"
Execute tasks with TDD workflow
What it does:
- Finds next uncompleted task in active track
- Executes TDD cycle: RED → GREEN → REFACTOR
- Updates plan status markers and metadata
- At phase boundaries: runs three-stage review
Usage:
/draft:implement
Display progress overview
What it does:
- Shows all active tracks with completion percentages
- Displays current phase and task breakdown
- Highlights blocked items
Usage:
/draft:status
Git-aware rollback
What it does:
- Identifies commits by track pattern
- Shows preview with affected files and plan changes
- Requires explicit confirmation
- Executes git revert + updates plan markers
Revert levels:
- Task: Single task's commits
- Phase: All commits in a phase
- Track: Entire track's commits
Module decomposition + dependency mapping
What it does:
- Proposes modules with: name, responsibility, files, API, dependencies
- Maps dependencies, detects cycles, generates dependency diagram
- Creates architecture.md with module definitions (derives .ai-context.md)
Usage:
/draft:decompose
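Cycle detection itself is standard graph work. A minimal sketch over a {module: [dependencies]} map:

```python
def find_cycle(deps):
    """Detect a cycle in a module dependency graph given as
    {module: [dependencies]}. Returns one cycle as a list, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / in progress / done
    color = {m: WHITE for m in deps}
    stack = []

    def visit(m):
        color[m] = GRAY
        stack.append(m)
        for d in deps.get(m, ()):
            if color.get(d, WHITE) == GRAY:        # back edge -> cycle
                return stack[stack.index(d):] + [d]
            if color.get(d, WHITE) == WHITE and d in deps:
                cycle = visit(d)
                if cycle:
                    return cycle
        color[m] = BLACK
        stack.pop()
        return None

    for m in deps:
        if color[m] == WHITE:
            cycle = visit(m)
            if cycle:
                return cycle
    return None

assert find_cycle({"api": ["db"], "db": ["api"]}) == ["api", "db", "api"]
assert find_cycle({"api": ["db"], "db": []}) is None
```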
Code coverage report (target 95%+)
What it does:
- Auto-detects coverage tool (jest, vitest, pytest-cov, go test)
- Runs coverage and captures output
- Reports per-file breakdown with uncovered line ranges
- Classifies gaps: testable, defensive, infrastructure
Usage:
/draft:coverage
Module lifecycle audit
Evaluates:
- ACID compliance
- Architectural resilience
- Production-grade enterprise quality
- Structural analysis
Usage:
/draft:deep-review src/auth
Exhaustive bug hunt across 12 dimensions
12 dimensions analyzed:
- Correctness, reliability, security, performance
- UI responsiveness, concurrency, state management
- API contracts, accessibility, configuration, tests, maintainability
Usage:
/draft:bughunt
/draft:bughunt --track my-feature
Code review orchestrator
Track-level review:
- Stage 1: Automated static validation
- Stage 2: Spec compliance verification
- Stage 3: Code quality checks
- Optional: runs /draft:bughunt
Usage:
/draft:review
/draft:review --full
Discover patterns, update guardrails
What it does:
- Scans codebase for recurring coding patterns (3+ occurrences)
- Learns conventions (skip in future analysis)
- Learns anti-patterns (always flag in future)
- Updates draft/guardrails.md
Usage:
/draft:learn
/draft:learn promote
/draft:learn migrate
Generate Jira export for review
What it does:
- Generates jira-export.md from the track plan
- Maps: Track → Epic, Phase → Story, Task → Sub-task
- Auto-calculates story points from task count
Usage:
/draft:jira-preview
Push issues to Jira via MCP
What it does:
- Creates issues from jira-export.md
- Creates Epic → Stories → Sub-tasks in order
- Updates plan and export with issue keys
Requirements:
- MCP-Jira server configured in Claude Code settings
Usage:
/draft:jira-create
Architecture Decision Records
What it does:
- Documents significant technical decisions with context and rationale
- Creates structured ADR: Context, Decision, Alternatives, Consequences
- Stores ADRs at draft/adrs/NNNN-title.md
Usage:
/draft:adr
/draft:adr "Use PostgreSQL"
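A sketch of the NNNN-title.md naming scheme. The slug rule (lowercase, hyphenated) is an assumption; only the draft/adrs/NNNN-title.md layout comes from Draft's docs:

```python
from pathlib import Path

def next_adr_path(adr_dir: Path, title: str) -> Path:
    """Compute the next draft/adrs/NNNN-title.md path.

    The lowercase-hyphenated slug rule is assumed for illustration;
    only the NNNN-title.md layout is documented by Draft.
    """
    # Find the highest existing ADR number, default to 0.
    existing = sorted(adr_dir.glob("[0-9][0-9][0-9][0-9]-*.md"))
    number = int(existing[-1].name[:4]) + 1 if existing else 1
    # Slugify the title: alphanumerics kept, everything else hyphenated.
    slug = "-".join("".join(c.lower() if c.isalnum() else " "
                            for c in title).split())
    return adr_dir / f"{number:04d}-{slug}.md"

print(next_adr_path(Path("draft/adrs"), "Use PostgreSQL"))
```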
Handle mid-track requirement changes
What it does:
- Analyzes impact on all completed and pending tasks
- Flags any [x] tasks retroactively invalidated by the change
- Proposes exact amendments to spec.md and plan.md
- Applies changes only after explicit confirmation (CHECKPOINT: yes / no / edit)
- Appends a timestamped entry to ## Change Log in plan.md
Usage:
/draft:change the export format should also support JSON
/draft:change track add-export-feature add progress indicator
Team Collaboration
Draft's most powerful application is team-wide: every markdown file goes through commit → review → update → merge before a single line of code is written. By the time implementation starts, the entire team has already agreed on what to build, how to build it, and in what order.
The PR Cycle on Documents, Not Code
1. Initialize context with /draft:init. For brownfield projects, Draft performs deep six-phase architecture discovery — generating architecture.md (30-45 pages, 25 sections + appendices with Mermaid diagrams) and deriving .ai-context.md (200-400 lines, 15+ sections: invariants, interface contracts, data flows, cookbooks). The team reviews project vision, technical choices, system architecture, and workflow preferences via PR.
2. Scope each feature with /draft:new-track — a collaborative intake where AI asks structured questions one at a time, contributes expertise (patterns, risks, trade-offs), and builds the spec progressively. The team reviews requirements, acceptance criteria, and the phased task breakdown via PR.
3. Decompose with /draft:decompose. The team reviews module boundaries, API surfaces, dependency graph, and implementation order via PR. architecture.md is the 30-45 page source of truth; .ai-context.md is the machine-optimized AI context derived from it. Senior engineers validate the architecture without touching the codebase.
4. Distribute work with /draft:jira-preview and /draft:jira-create. Epics, stories, and sub-tasks are created from the approved plan. Individual team members pick up Jira stories and implement.
5. Implement against the agreed documents. Everyone knows what to build (spec.md), in what order (plan.md), and with what boundaries (.ai-context.md / architecture.md). After implementation, quality tools verify completeness.
Why This Changes How Teams Work
Team members read spec.md and plan.md to understand features.
The CLI is single-user, but the artifacts it produces are the collaboration layer. Draft handles planning and decomposition. Git handles review. Jira handles distribution. Changing a sentence in spec.md takes seconds. Changing an architectural decision after 2,000 lines of code takes days.
Jira Integration
Sync tracks to Jira with a two-step workflow. Preview before pushing to catch issues early.
1. Preview: generate jira-export.md with epic and stories using /draft:jira-preview
2. Push: create the issues in Jira using /draft:jira-create
Auto Story Points: 1-2 tasks = 1pt, 3-4 tasks = 2pts, 5-6 tasks = 3pts, 7+ tasks = 5pts
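The story-point mapping above is simple enough to state as code:

```python
def story_points(task_count: int) -> int:
    """Auto story points per Draft's documented mapping:
    1-2 tasks = 1pt, 3-4 = 2pts, 5-6 = 3pts, 7+ = 5pts."""
    if task_count <= 2:
        return 1
    if task_count <= 4:
        return 2
    if task_count <= 6:
        return 3
    return 5

assert [story_points(n) for n in (1, 3, 6, 9)] == [1, 2, 3, 5]
```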
Videos
Short videos covering Draft's methodology, agents, and workflows. View full playlist
Codebase Research
Every AI interaction starts with understanding. /draft:init performs a deep six-phase analysis of your codebase and produces architecture.md, a comprehensive engineering reference from which .ai-context.md is derived. Together they capture how your system actually works. Not how it was designed. How it works today.
Why This Exists
AI coding assistants face a fundamental problem: they don't know your system. Every session, they re-discover your architecture by reading files, guessing at patterns, and inferring relationships. This costs tokens, wastes time, and produces hallucinations when the AI fills knowledge gaps with assumptions.
What It Captures
.ai-context.md is organized around the question every engineer and AI agent needs answered: "Where is my data right now, what state is it in, and what happens if something fails here?"
Dual Output: Machine + Human
One analysis, two outputs. Each optimized for its audience.
The same analysis serves both audiences without compromise. AI agents get token-efficient tables they can parse reliably. Engineers get prose they can read over coffee. Neither format sacrifices for the other because they're generated from the same source.
How It Helps AI Assistants
How It Helps Engineers
Living Document, Not a Snapshot
Architecture documentation rots. .ai-context.md doesn't — because it's maintained by the same workflow that changes the code.
/draft:init refresh updates the documents when the codebase evolves. /draft:implement updates module status after completing tasks. /draft:decompose adds new modules with dependency graphs. Each mutation auto-refreshes the derived .ai-context.md. The documents evolve with the code.
Pay the analysis cost once, benefit on every interaction. The 10-minute init analysis saves hours of repeated context-building across every AI session, every code review, every new engineer onboarding, and every incident response. The ROI compounds with every use.
Reference
Project Structure
draft/
├── product.md # Product vision, goals, guidelines
├── tech-stack.md # Technical choices, accepted patterns
├── architecture.md # Source of truth: 30-45 pages, Mermaid diagrams, code snippets
├── .ai-context.md # Derived: 200-400 lines, machine-optimized AI context
├── workflow.md # TDD, commit, validation config
├── guardrails.md # Hard guardrails, learned conventions, anti-patterns
├── validation-report.md # Project-level quality checks (generated)
├── jira.md # Jira project config (optional)
├── tracks.md # Master track list
└── tracks/
└── <track-id>/
├── spec.md # Requirements
├── plan.md # Phased task breakdown
├── architecture.md # Track modules, data paths (optional)
├── .ai-context.md # Token-optimized, derived (optional)
├── metadata.json
├── validation-report.md # Quality checks (generated)
└── jira-export.md # Jira stories (optional)
Status Markers
Simple markers track progress throughout specs and plans. Progress is explicit, not assumed.
Evidence before claims, always. Never mark [x] without running verification, confirming output shows success, and showing evidence in the response.
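A sketch of how progress could be tallied from these markers. The checkbox list format and the [ ] pending marker are assumptions; only [x] and [!] appear in Draft's docs:

```python
import re

MARKERS = {"x": "done", "!": "blocked", " ": "pending"}

def plan_progress(plan_md: str):
    """Tally task markers in a plan.md body and compute completion %.

    The "- [x] task" checkbox format is assumed for this sketch;
    Draft documents [x] done and [!] blocked, [ ] pending is inferred.
    """
    counts = {state: 0 for state in MARKERS.values()}
    for m in re.finditer(r"^- \[(.)\] ", plan_md, re.MULTILINE):
        state = MARKERS.get(m.group(1))
        if state:
            counts[state] += 1
    total = sum(counts.values())
    pct = round(100 * counts["done"] / total) if total else 0
    return counts, pct

plan = "- [x] add model\n- [x] add route\n- [!] wire auth\n- [ ] docs\n"
counts, pct = plan_progress(plan)
assert counts == {"done": 2, "blocked": 1, "pending": 1} and pct == 50
```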
Core Principles
Constraint Mechanisms
How Draft keeps AI focused and accountable:
Industry Standards
Draft codifies the engineering culture of "Big Tech" (Google, Amazon, Stripe) into an AI-assisted workflow. The goal is to shift effort "left"—solving problems in writing before writing code.
Industry Practice Ranking
How Draft methods map to industry standards, ranked by impact on engineering quality (1 = Critical, 5 = Optimization).
| Rank | Practice | Draft Implementation | Industry Equivalent | Companies |
|---|---|---|---|---|
| 1 | Design-First Engineering | spec.md & plan.md: a detailed spec and implementation plan written before coding | Design Docs / RFCs: Amazon "PR/FAQ", Google Design Docs | Google, Amazon, Stripe, Uber |
| 1 | Monorepo / Shared Context | /draft:index: federated knowledge index and system maps | Unified Codebase: single source of truth, automated dependency graphing | Google, Meta, Twitter |
| 2 | Test-Driven Development | /draft:implement: enforced "Red-Green-Refactor" workflow | TDD / CI Gates: tests written as part of the feature | Netflix, Pivotal |
| 3 | Structured Code Review | /draft:review: three-stage review: 1) automated validation, 2) spec compliance, 3) code quality | Readability / Owners: Google's "Critique" system | Google, Meta |
| 3 | Arch. Decision Records | .ai-context.md + architecture.md: documenting why the system is built this way; machine-optimized + human-readable dual output | ADRs: immutable records of architectural choices | Spotify, AWS, GitHub |
| 4 | Bug Bashes | /draft:bughunt: systematic, categorized search for edge cases | Bug Bashes: scheduled team-wide testing sessions | Microsoft, Game Studios |
| 5 | Service Catalog | product.md: standardized metadata for every project | IDP (Internal Dev Platform): portals to manage service metadata | Spotify, Lyft |
Detailed Comparison
You cannot run /draft:implement without a spec. This forces the AI to "think" before "acting," preventing the generation of code that works but solves the wrong problem.
Google engineers write 5-20 page docs before coding ("Code is expensive, docs are cheap"). Amazon works backwards from the Press Release (PR/FAQ) to ensure customer value.
Aggregates context from multiple services (`draft/` folders) into a federated knowledge base, solving the "context window" problem for AI.
Google/Meta store code in one giant repo with massive tooling (Blaze/Buck) to manage dependencies and allow atomic refactors.
Three-stage review: Automated Validation (architecture, security, performance), Spec Compliance, and Code Quality. Stage 1 automates quality checks that would require manual review elsewhere.
Google requires 3 approvals: LGTM (Peer), Owners (gatekeeper), and Readability (Language Expert) to ensure code health.
Adopting Draft places your workflow at Maturity Level 4/5 (High). You are operating with the "Staff Engineer" model of a FAANG company: implementing from approved specs, performing systematic bug hunts, and maintaining living architecture documents.