The CodeMySpec Method

Stop Fighting AI Drift. Control What Goes Into Context.

The 5-phase methodology I use to build CodeMySpec with AI - from stories to production code.

Phase 1

πŸ“ User Stories

Define requirements in plain English

↓
Phase 2

πŸ—ΊοΈ Context Mapping

Map stories to Phoenix contexts

↓
Phase 3

πŸ“ Design Documents

Write specifications that control AI context

↓
Phase 4

πŸ§ͺ Test Generation

Generate tests from design documents

↓
Phase 5

⚑ Code Generation

Generate code that makes tests pass

The Problem: Compounding Drift

AI drift compounds through reasoning layers. The farther the LLM's reasoning gets from your actual prompt, the more the output drifts from your intent.

Reasoning Drift

The LLM's reasoning drifts slightly from your intent → the generated code drifts further. Each inference step compounds the error.

Documentation Drift

Generate PRD β†’ generate code from PRD β†’ two layers of drift between your intent and the output.

Process Drift

Ask the LLM to both prescribe and follow the process → errors compound at every step.

Every intermediate step adds drift. Every time you ask the LLM to figure out the process, you add drift.

The Solution: Control Context

Remove drift by controlling exactly what goes into the LLM's context. No intermediate reasoning about requirements. No inventing process.

Design documents define the specifications

Tests define the validation

LLM only generates code to match both

Here's the spec, here are the tests, make them pass.

The Five Phases

Phase 1

πŸ“ User Stories

Define requirements in plain English

What You Create

`user_stories.md` with acceptance criteria

Manual

Write stories yourself or interview with AI

  • Create user_stories.md in your repo
  • Interview yourself about requirements
  • Write stories in 'As a... I want... So that...' format
  • Add specific, testable acceptance criteria (a sample story is sketched after this list)
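
A minimal sketch of what one entry in `user_stories.md` might look like; the story and acceptance criteria below are hypothetical examples, not taken from CodeMySpec:

```markdown
## Story: Archive completed stories

As a product owner
I want to archive stories that have shipped
So that the backlog only shows work in flight

### Acceptance Criteria

- Archiving a story removes it from the default backlog view
- Archived stories remain searchable and keep their version history
- Only stories with status `completed` can be archived
```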

Automated

Stories MCP Server handles interviews and generation

  • AI interviews you about requirements with structured PM persona
  • AI creates stories in database with full version history
  • AI reviews stories for quality and completeness on demand
  • Link stories to components automatically during design
  • Track status (in_progress, completed, dirty) throughout lifecycle
Example from CodeMySpec
Story Schema Implementation: Ecto schema with versioning, locking, and component association (`lib/code_my_spec/stories/story.ex`)
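
For reference, a minimal sketch of the shape such a schema might take. The field names, the status values from the Automated list above, and the `belongs_to :component` association are illustrative assumptions; the real `lib/code_my_spec/stories/story.ex` may differ:

```elixir
defmodule CodeMySpec.Stories.Story do
  use Ecto.Schema
  import Ecto.Changeset

  # Hypothetical fields; the actual schema may differ.
  schema "stories" do
    field :title, :string
    field :description, :string
    field :acceptance_criteria, {:array, :string}, default: []
    field :status, Ecto.Enum, values: [:in_progress, :completed, :dirty], default: :in_progress
    field :version, :integer, default: 1
    field :locked_at, :utc_datetime

    belongs_to :component, CodeMySpec.Components.Component

    timestamps()
  end

  @doc "Builds a changeset for creating or updating a story."
  def changeset(story, attrs) do
    story
    |> cast(attrs, [:title, :description, :acceptance_criteria, :status, :component_id])
    |> validate_required([:title, :description])
  end
end
```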
Phase 2

πŸ—ΊοΈ Context Mapping

Map stories to Phoenix contexts

What You Create

Context boundaries, entity ownership, dependency graph

Manual

Map stories to Phoenix contexts using vertical slice architecture

  • Group stories by entity ownership
  • Apply business capability grouping within entity boundaries
  • Keep contexts flat (no nested contexts)
  • Identify components within each context
  • Map dependencies between contexts (an example map is sketched after this list)
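
A sketch of what the output of this phase might look like. The context names echo the ones mentioned on this page, but the ownership and dependency edges shown here are illustrative assumptions, not the actual CodeMySpec dependency graph:

```markdown
## Context Map (illustrative)

- Stories (domain context)
  - Owns: Story
  - Depends on: Components

- Components (domain context)
  - Owns: Component
  - Depends on: (none)

- Sessions (coordination context)
  - Owns: Session
  - Depends on: Stories, Components
```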

Automated

Architect MCP Server guides context design and validates dependencies

  • AI interviews you about architecture with structured architect persona
  • AI creates domain and coordination contexts in database
  • AI validates dependencies and detects circular references
  • Link stories to contexts automatically during design
  • Track architectural health (orphaned contexts, dependency depth)
  • Review architecture quality on demand with best practices validation
Example from CodeMySpec
Stories Context Design: Entity ownership, access patterns, public API, dependencies (`docs/design/code_my_spec/stories.md`)
Phase 3

πŸ“ Design Documents

Write specifications that control AI context

What You Create

Structured markdown specs (purpose, entities, API, components, dependencies)

Manual

Write design docs following template structure

  • Create `docs/design/{app_name}/{context_name}.md`
  • Follow the template structure (purpose, entities, access patterns, etc.); a skeleton is sketched after this list
  • Be specific about behavior, edge cases, error handling
  • Include acceptance criteria from relevant user stories
  • List every component (file) that needs to be implemented
  • Paste design document into AI context before asking for code
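
A skeleton of what such a design document might contain. The section headings follow the structure named on this page (purpose, entities, access patterns, public API, components, dependencies); the placeholder content underneath is hypothetical:

```markdown
# Stories Context

## Purpose
One short paragraph on what this context owns and why it exists.

## Entities
- Story: title, description, acceptance criteria, status, version

## Access Patterns
- List stories for a project, filtered by status
- Fetch a story together with its linked component

## Public API
- create_story/1, update_story/2, list_stories/1

## Components
- lib/code_my_spec/stories.ex (public API)
- lib/code_my_spec/stories/story.ex (schema)

## Dependencies
- Components context (story-to-component links)
```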
How to Write Design Documents for AI (Coming Soon)

Automated

Context Design Sessions orchestrator generates designs from stories

Session Orchestration - Design Generation (Coming Soon)
Example from CodeMySpec
ContextDesign Schema: Embedded schema for parsing and storing design documents (`lib/code_my_spec/documents/context_design.ex`)
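
A hedged sketch of what an embedded schema for a parsed design document could look like; the field names mirror the design-doc sections above and are assumptions, not the actual contents of `lib/code_my_spec/documents/context_design.ex`:

```elixir
defmodule CodeMySpec.Documents.ContextDesign do
  use Ecto.Schema
  import Ecto.Changeset

  # Hypothetical fields mirroring the design-document sections.
  embedded_schema do
    field :purpose, :string
    field :entity_ownership, :string
    field :access_patterns, {:array, :string}, default: []
    field :public_api, {:array, :string}, default: []
    field :components, {:array, :string}, default: []
    field :dependencies, {:array, :string}, default: []
  end

  @doc "Validates a parsed design document."
  def changeset(design, attrs) do
    design
    |> cast(attrs, [:purpose, :entity_ownership, :access_patterns, :public_api, :components, :dependencies])
    |> validate_required([:purpose])
  end
end
```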
Phase 4

πŸ§ͺ Test Generation

Generate tests from design documents

What You Create

Test files, fixtures, integration tests

Manual

Write tests from design docs before implementation

  • Read design document
  • Write tests for each component's public API (a sample test is sketched after this list)
  • Include edge cases and error scenarios from acceptance criteria
  • Create fixtures for test data
  • Run tests (they should fail - no implementation yet)
  • Paste design doc + failing tests into AI context for implementation
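
A minimal sketch of a test written this way, against a hypothetical `CodeMySpec.Stories.create_story/1` function; the function name, attributes, and assertions are illustrative assumptions, not copied from `test/code_my_spec/stories_test.exs`:

```elixir
defmodule CodeMySpec.StoriesTest do
  # A real Phoenix project would typically `use` its generated DataCase
  # here to set up the database sandbox.
  use ExUnit.Case, async: true

  alias CodeMySpec.Stories

  describe "create_story/1" do
    test "creates a story from valid attributes" do
      attrs = %{title: "Archive completed stories", description: "As a product owner..."}

      assert {:ok, story} = Stories.create_story(attrs)
      assert story.title == "Archive completed stories"
      assert story.status == :in_progress
    end

    test "returns an error changeset when the title is missing" do
      assert {:error, %Ecto.Changeset{}} = Stories.create_story(%{description: "no title"})
    end
  end
end
```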
Test-First AI Code Generation (Coming Soon)

Automated

Component Test Sessions orchestrator generates and validates tests

Component Test Sessions - Automated Test Generation (Coming Soon)
Example from CodeMySpec
Stories Context Tests: Comprehensive tests for public API, edge cases, error handling (`test/code_my_spec/stories_test.exs`)
Phase 5

⚑ Code Generation

Generate code that makes tests pass

What You Create

Implementation that passes tests

Manual

Paste design + tests into LLM, iterate until green

  • Paste design document into AI context
  • Paste failing tests into AI context
  • Ask AI to implement each component following the design (an implementation sketch follows this list)
  • Run tests
  • If tests fail, paste errors back to AI and ask for fixes
  • Iterate until all tests pass
  • Commit with clear message referencing design doc and tests
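
A minimal sketch of the kind of implementation that would make the test above pass, assuming the hypothetical `create_story/1` function, the `Story` schema sketched earlier, and the application's `Repo` module; the real `lib/code_my_spec/stories.ex` may look quite different:

```elixir
defmodule CodeMySpec.Stories do
  @moduledoc "Public API for the Stories context (illustrative sketch)."

  alias CodeMySpec.Repo
  alias CodeMySpec.Stories.Story

  @doc "Creates a story from the given attributes."
  def create_story(attrs) do
    %Story{}
    |> Story.changeset(attrs)
    |> Repo.insert()
  end
end
```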
Design-Driven Code Generation with AI (Coming Soon)

Automated

Component Coding Sessions orchestrator implements and iterates until green

Component Coding Sessions - Full Automation (Coming Soon)
Example from CodeMySpec
Stories Context Implementation: Public API generated from design doc, validated by tests (`lib/code_my_spec/stories.ex`)

The Proof

I built CodeMySpec using this method. The codebase is the proof.

  • Stories context: Generated from design docs
  • Components context: Generated from design docs
  • Sessions orchestration: Generated from design docs
  • MCP servers: Generated from design docs
  • This entire application: Built with the product
Every module links back to a design document. Every design document links back to user stories.

Getting Started

Manual Process

Learn the methodology, works with any framework

2-4 hours to learn

Best for:

  • Learning the method
  • Non-Phoenix projects
  • Small projects
  • Prefer more control

Steps:

  1. Read "Managing User Stories" guide
  2. Create user_stories.md
  3. Map contexts
  4. Write one design doc
  5. Write tests
  6. Generate code with design + tests

Automated Workflow

Full automation with CodeMySpec, Phoenix/Elixir only

30 minutes setup

Best for:

  • Phoenix/Elixir projects
  • Want maximum speed
  • Trust the automated workflow
  • Medium/large projects

Steps:

  1. Install CodeMySpec
  2. Run setup session
  3. Interview generates stories
  4. Review generated designs
  5. Approve and generate code

FAQ

What if requirements change?

Update stories β†’ regenerate designs β†’ update tests β†’ regenerate code. Version control shows impact. The process supports change.

What about other frameworks?

Manual process works anywhere. Automation is Phoenix/Elixir only (for now).

Do I need MCP servers?

No. Manual process is copy-paste. MCP removes tedium.

How is this different from Cursor/Windsurf/etc?

They optimize prompts. This controls context. Different approach.

What if AI still generates wrong code?

Tests catch it. Iterate until green. If repeatedly failing, refine design. Tests are the validation gate.

Isn't this a lot of overhead?

Less overhead than refactoring AI-generated technical debt later. The manual process takes time up front; the automated workflow removes most of that overhead.
