Pattern Learning


Part IV: Quality · Chapter 15

5 min read

A new developer joins the team and notices that every database call in the codebase is wrapped in a transaction, even single reads. "Is this intentional?" she asks. Nobody remembers who started the pattern or why, but it has been that way for two years and nothing has broken. Three months later, an AI coding agent generates a database module without transaction wrappers. The code review catches it, but only because a human happened to know the convention. /draft:learn catches it automatically — because it already discovered that pattern, recorded it, and taught the AI to enforce it.

The pattern learning feedback loop: scan the codebase, detect recurring patterns, record them in guardrails.md, apply those rules during draft:implement, and feed the resulting code into the next scan.

How Pattern Learning Works

/draft:learn scans the codebase for recurring patterns — code structures that appear three or more times consistently. It distinguishes between conventions (patterns the team intentionally follows) and anti-patterns (patterns that cause bugs, security issues, or performance problems). Both are recorded in draft/guardrails.md, where they feed into every subsequent quality command.

$ /draft:learn
  Scanned: 247 source files across 18 directories

  Results:
    New conventions learned:     4
    New anti-patterns learned:   2
    Existing patterns updated:   3
    Skipped (insufficient data): 12
    Skipped (already documented): 5

  Promotion candidates (high confidence):
    2 patterns ready — run /draft:learn promote to review

  Updated: draft/guardrails.md

The Evidence Threshold

Pattern learning is conservative by design. A single occurrence is an example. Two occurrences might be coincidence. Three or more consistent occurrences are a pattern worth recording.

| Evidence | Confidence | Action |
|---|---|---|
| 1-2 occurrences | — | Insufficient data; skip |
| 3-5 occurrences, all consistent | Medium | Learn as convention or anti-pattern |
| 5+ occurrences, consistent, cross-verified | High | Learn and flag as promotion candidate |
| 5+ occurrences, inconsistent | — | Do NOT learn (investigate the inconsistency) |

The last row is critical. If a pattern appears in eight files but two of them do it differently, that is not a convention — it is a conflict that needs human attention. /draft:learn does not paper over inconsistencies; it surfaces them.
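The threshold table can be sketched as a small classifier. This is an illustrative sketch, not Draft's actual implementation; `Observation` and the returned action labels are hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One place where a candidate pattern occurs (hypothetical shape)."""
    file: str
    consistent: bool  # True if this occurrence matches the canonical form

def classify(observations: list[Observation], cross_verified: bool = False) -> str:
    """Map occurrence evidence to an action, per the evidence threshold table."""
    n = len(observations)
    consistent = all(o.consistent for o in observations)
    if n < 3:
        return "skip"                 # 1-2 occurrences: insufficient data
    if not consistent:
        return "surface-conflict"     # inconsistent: never learn, ask a human
    if n >= 5 and cross_verified:
        return "learn+promote-candidate"  # high confidence
    return "learn"                    # 3-5 consistent: medium confidence
```

Note that inconsistency is checked before anything else is learned: eight occurrences with two deviants never become a convention, no matter how large the majority.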

Evidence Over Assumptions

Pattern learning never infers from fewer than three occurrences, never auto-promotes to hard guardrails without human approval, never overwrites human-curated entries, and never mistakes framework defaults for project conventions. The rule is simple: evidence over assumption.

Two Categories

Every learned pattern falls into one of two categories, and the distinction determines how quality commands treat it.

Conventions (Skip in Future)

A convention is a pattern that is consistently applied and does not cause bugs. When a convention is recorded, future runs of /draft:review and /draft:bughunt will not flag it as unusual or suspicious. This eliminates false positives.

### Custom Error Classes for Domain Errors
- **Category:** error-handling
- **Confidence:** high
- **Evidence:** Found in 9 files — src/auth/errors.ts:5,
    src/payments/errors.ts:3, src/orders/errors.ts:7, ...
- **Discovered at:** 2026-02-15
- **Established at:** ~2025-06-20
- **Last verified:** 2026-03-28
- **Last active:** 2026-03-25
- **Discovered by:** draft:learn on 2026-02-15
- **Description:** All domain modules define custom error
    classes extending BaseAppError with error codes and
    HTTP status mappings. This is intentional for
    centralized error handling in the API middleware.

Anti-Patterns (Always Flag)

An anti-pattern is a pattern that is consistently applied but causes or risks bugs, security issues, or performance problems. When an anti-pattern is recorded, every future quality command will flag it.

### Unguarded Environment Variable Access
- **Category:** reliability
- **Severity:** high
- **Evidence:** Found in 6 files — src/config/db.ts:12,
    src/config/cache.ts:8, src/workers/email.ts:3, ...
- **Discovered at:** 2026-02-15
- **Established at:** ~2025-08-10
- **Last verified:** 2026-03-28
- **Last active:** 2026-03-20
- **Discovered by:** draft:learn on 2026-02-15
- **Description:** process.env.VAR accessed without
    undefined check or default value. Missing env vars
    cause silent undefined propagation instead of
    fail-fast at startup.
- **Suggested fix:** Validate all required env vars at
    application startup using a config validation module.

Temporal Metadata

Every learned pattern carries four timestamps that enable temporal reasoning about the codebase's evolution:

| Timestamp | Purpose | How Determined |
|---|---|---|
| Discovered at | When Draft first observed the pattern | Current date when the scan runs |
| Established at | When the pattern was introduced in the codebase | `git blame` on evidence files; oldest occurrence |
| Last verified | When the pattern was last confirmed present | Updated on each re-verification |
| Last active | When files using the pattern were last modified | `git log -1` on each evidence file; most recent date |

These timestamps answer questions that occurrence counts alone cannot. A pattern discovered recently but established two years ago is a well-entrenched convention. A pattern established last month and actively spreading is an emerging convention. A pattern where last_active is six months old and all occurrences live in legacy code is a declining pattern that the team is phasing out.
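As a rough sketch, these trajectory distinctions might look like the following. The function name and the 180-day and 60-day cutoffs are illustrative assumptions, not Draft's actual thresholds:

```python
from datetime import date

def trajectory(established_at: date, last_active: date, today: date) -> str:
    """Classify a pattern's trajectory from its temporal metadata.

    Illustrative thresholds only; the real heuristics may differ.
    """
    if (today - last_active).days > 180:
        return "declining"    # no file using the pattern touched in ~6 months
    if (today - established_at).days < 60:
        return "emerging"     # introduced recently and still spreading
    return "entrenched"       # long-established and still in active code
```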

Temporal Analysis: Declining vs. Emerging Patterns

/draft:learn uses git history to detect the trajectory of each pattern. If a pattern appears heavily in files last modified over a year ago but rarely in files modified within the past six months, it is flagged as declining: an old-to-new occurrence ratio greater than 3:1 triggers the classification.

This matters because a declining pattern should not be enforced. If the team is migrating from manual error logging to structured error middleware, learning the old pattern as a convention would create friction — flagging every new file that correctly uses the new approach as inconsistent. Instead, declining patterns are annotated but not propagated to quality commands.
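A minimal sketch of the declining check, assuming per-file last-modified dates obtained via `git log -1`. The helper and function names are hypothetical; only the one-year and six-month windows and the 3:1 ratio come from the text:

```python
import subprocess
from datetime import date, timedelta

def last_modified(path: str) -> date:
    """Most recent commit date touching the file (via `git log -1`)."""
    out = subprocess.run(
        ["git", "log", "-1", "--format=%cs", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return date.fromisoformat(out)

def is_declining(mod_dates: list[date], today: date) -> bool:
    """Flag a pattern as declining when occurrences in files last touched
    over a year ago outnumber recent ones (past six months) by more than 3:1."""
    cutoff_old = today - timedelta(days=365)
    cutoff_new = today - timedelta(days=180)
    old = sum(1 for d in mod_dates if d < cutoff_old)
    new = sum(1 for d in mod_dates if d >= cutoff_new)
    # Files modified between six months and a year ago count as neither.
    if new == 0:
        return old > 0
    return old / new > 3
```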

Patterns Have Lifecycles

A pattern emerges when a developer introduces a new approach. It becomes a convention when others adopt it. It plateaus as the standard way. It declines when a better approach arrives. /draft:learn tracks where each pattern sits in this lifecycle, so it enforces living conventions and leaves dying patterns alone.

The pattern lifecycle: a pattern emerges when a developer introduces a new approach (1-2 occurrences), gains adoption as others use it (3-5 occurrences), becomes a convention at 5+ consistent occurrences, plateaus as the established standard, and eventually declines when a better approach supersedes it (old:new ratio above 3:1). Draft tracks each pattern's position in this lifecycle to enforce living conventions and ignore dying patterns.

The Seven Dimensions Scanned

/draft:learn analyzes the codebase across seven dimensions, looking for recurring structures in each:

  1. Error handling — How errors are caught, logged, propagated. Custom error classes, retry strategies, error boundaries
  2. Naming — Variable, function, file naming conventions beyond language defaults. Module organization patterns
  3. Architecture — Import patterns, state management approaches, API call patterns, component composition
  4. Concurrency — Async/await conventions, locking approaches, queue patterns, cancellation handling
  5. Data flow — Validation placement, serialization conventions, caching strategies, transformation pipelines
  6. Testing — Test file placement, structure (arrange/act/assert vs. given/when/then), mock conventions, fixture patterns
  7. Configuration — Environment variable access patterns, feature flag patterns, config file conventions

Before recording any candidate, /draft:learn cross-references it against tech-stack.md (already documented patterns), existing guardrails.md entries (already learned), and .ai-context.md (architecture-level documentation). No duplication occurs.

How Patterns Feed Into Guardrails

draft/guardrails.md has three sections, each treated differently by quality commands:

| Section | Source | Quality Command Behavior |
|---|---|---|
| Hard Guardrails | Human-curated, checked items | Flag violations as issues |
| Learned Conventions | /draft:learn | Skip these patterns during analysis (not bugs) |
| Learned Anti-Patterns | /draft:learn | Always flag these patterns as bugs |

Hard guardrails are human-written rules that override everything. Learned entries complement them but never replace them. If a learned pattern conflicts with a hard guardrail, the hard guardrail wins.

Auto-Eviction

Each section in guardrails.md is capped at 50 learned entries. When the cap is reached, the oldest medium-confidence entry that has not been re-verified in 90+ days is evicted to make room. This keeps guardrails from growing without bound and ensures that entries reflect the current state of the codebase, not its history.
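The eviction policy can be sketched as follows. `LearnedEntry` and the function name are hypothetical; the 50-entry cap and the 90-day staleness window come from the text:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class LearnedEntry:
    name: str
    confidence: str        # "medium" or "high"
    discovered_at: date
    last_verified: date

MAX_ENTRIES = 50
STALE_AFTER = timedelta(days=90)

def evict_if_full(entries: list[LearnedEntry], today: date) -> list[LearnedEntry]:
    """At the 50-entry cap, drop the oldest medium-confidence entry
    not re-verified in 90+ days. If no entry qualifies, keep all."""
    if len(entries) < MAX_ENTRIES:
        return entries
    stale = [e for e in entries
             if e.confidence == "medium"
             and today - e.last_verified >= STALE_AFTER]
    if not stale:
        return entries
    victim = min(stale, key=lambda e: e.discovered_at)
    return [e for e in entries if e is not victim]
```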

Conflict Detection

Before saving any new pattern, /draft:learn checks for conflicts with existing entries. If a new candidate contradicts a learned convention, an existing anti-pattern, or a hard guardrail, it presents both patterns side by side and asks the developer to resolve the conflict. Options: keep both (the new pattern is a scoped exception), replace (the pattern has evolved), or discard (the existing entry is correct). No silent overwrites.
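The three resolution options might be modeled like this (a hedged sketch; the enum and function names are illustrative):

```python
from enum import Enum

class Resolution(Enum):
    KEEP_BOTH = "keep-both"   # the new pattern is a scoped exception
    REPLACE = "replace"       # the pattern has evolved
    DISCARD = "discard"       # the existing entry is correct

def resolve(existing: dict, candidate: dict, choice: Resolution) -> list[dict]:
    """Apply the developer's decision; never overwrite silently."""
    if choice is Resolution.KEEP_BOTH:
        return [existing, candidate]
    if choice is Resolution.REPLACE:
        return [candidate]
    return [existing]
```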

The Promotion Workflow

Learned patterns start as provisional entries. When a pattern reaches high confidence (5+ consistent occurrences, cross-verified), it becomes a promotion candidate. Running /draft:learn promote presents these candidates for human review:

$ /draft:learn promote

  Pattern promotion candidates:

  1. [Convention] "Centralized API client pattern"
     (high confidence, 12 files)
     Promote to: tech-stack.md Accepted Patterns? [y/n]

  2. [Anti-Pattern] "Unguarded .env access"
     (high confidence, 6 files)
     Promote to: Hard Guardrail (enforce always)? [y/n]

Promoted conventions move to tech-stack.md under Accepted Patterns. Promoted anti-patterns become hard guardrails — permanently enforced rules. In both cases, human approval is required. /draft:learn never auto-promotes.

The Learning Loop

Pattern learning creates a continuous improvement cycle that makes each successive interaction with Draft more precise:

  1. /draft:init establishes the project context and creates guardrails.md
  2. /draft:implement generates code following existing guardrails
  3. /draft:learn scans the codebase, discovers new patterns, updates guardrails
  4. Next /draft:implement is constrained by the updated guardrails — fewer convention violations, fewer false positives in review
  5. /draft:review and /draft:bughunt read the updated guardrails — skip known conventions, always flag known anti-patterns
  6. /draft:learn promote graduates stable patterns to permanent status

Pattern learning also runs as the final phase of /draft:deep-review, /draft:review, and /draft:bughunt. Every quality analysis that discovers patterns feeds them back into guardrails, so the system gets smarter with each run without requiring explicit /draft:learn invocations.

Teaching the AI Your Team's Style

Pattern learning solves a fundamental problem with AI coding assistants: they do not know your conventions. They generate code that works but does not match how your team does things. /draft:learn observes your codebase, extracts the implicit rules your team follows, and makes them explicit in a file the AI reads on every interaction. The AI stops guessing your conventions and starts enforcing them.