What is nspec?
nspec is a specification-driven project management tool built for AI-native development. It structures your backlog as markdown files — every feature request (FR) gets a matching implementation spec (IMPL) — and exposes an MCP server so AI coding assistants like Claude Code can autonomously pick up work, track tasks, and advance specs through their lifecycle.
Instead of writing tickets that an AI has to interpret, you write structured specs that an AI can read, validate, execute against, and update as it works.
Install
pip install nspec[mcp]
The [mcp] extra includes the Model Context Protocol server. Other package managers work too:
pipx install nspec[mcp] # Isolated install
uv add --dev nspec[mcp] # uv
poetry add --group dev nspec[mcp] # Poetry
Verify the install:
nspec --version
Initialize Your Project
Run nspec init in your project root:
nspec init
nspec auto-detects your stack — language (Python, Node, Rust, Go), package manager (poetry, uv, pip, npm, yarn, pnpm, cargo), CI platform (GitHub Actions, GitLab CI) — and scaffolds everything:
your-project/
├── .novabuilt.dev/nspec/config.toml # Configuration file
├── .claude/commands/ # Claude Code skills
│ ├── go.md # /go — start a work session
│ ├── loop.md # /loop — autonomous execution
│ ├── backlog.md # /backlog — view priorities
│ ├── backlog-handoff.md # Session handoff summaries
│ └── verify-tests.md # Verify test coverage
├── nspec.mk # Makefile fragment
└── docs/
├── frs/active/
│ └── TEMPLATE.md # Feature request template
├── impls/active/
│ └── TEMPLATE.md # Implementation template
└── completed/
├── done/
├── superseded/
└── rejected/
You can customize paths during init, or accept the defaults and adjust .novabuilt.dev/nspec/config.toml later.
Options
nspec init --ci github # Generate GitHub Actions workflow
nspec init --ci gitlab # Generate GitLab CI config
nspec init --docs-root specs/ # Use specs/ instead of docs/
nspec init --force # Overwrite existing files
Set Up the MCP Server
The MCP server is how Claude Code (or any MCP-compatible agent) talks to your backlog. Generate the config for your stack:
nspec --mcp-config
This prints a ready-to-paste .mcp.json block tailored to your project. For a pip-installed project, it looks like:
{
"mcpServers": {
"nspec": {
"command": "nspec-mcp",
"args": [],
"env": {}
}
}
}
For managed environments, the command adapts automatically:
| Stack | Command |
|---|---|
| pip / pipx | nspec-mcp |
| poetry | poetry run nspec-mcp |
| uv | uv run nspec-mcp |
| hatch | hatch run nspec-mcp |
| Node (npm/yarn/pnpm) | npx nspec-mcp |
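As an example, the Poetry variant of the same .mcp.json block splits the wrapper into command and args (illustrative shape; run nspec --mcp-config to get the exact block for your project):

```json
{
  "mcpServers": {
    "nspec": {
      "command": "poetry",
      "args": ["run", "nspec-mcp"],
      "env": {}
    }
  }
}
```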
Add the output to .mcp.json in your project root. Claude Code picks this up automatically when you open the project.
Alternative: Claude Code settings
You can also add it to .claude/settings.json in your project or your global Claude Code settings — same JSON block.
Transport options
The default is stdio (direct pipe). For Docker or remote setups:
nspec-mcp --sse # SSE on http://localhost:8080/sse
nspec-mcp --http # Streamable HTTP on http://localhost:8080/mcp
For Docker deployments, use the included helper:
scripts/mcp-docker.sh start # Build and run
scripts/mcp-docker.sh watch # With hot-reload
Then configure Claude Code with a URL transport:
{
"mcpServers": {
"nspec": {
"type": "sse",
"url": "http://localhost:8080/sse"
}
}
}
Create Your First Spec
Create a feature request + implementation spec pair:
nspec spec create --title "User authentication" --priority P1
This creates two files with auto-assigned IDs (e.g., S001):
docs/frs/active/FR-S001-user-authentication.md
# FR-S001: User authentication
**Priority:** P1
**Status:** Proposed
deps: []
## Overview
Description of the feature request.
## Acceptance Criteria
- [ ] AC-F1: First acceptance criterion
docs/impls/active/IMPL-S001-user-authentication.md
# IMPL-S001: User authentication
**Status:** Planning
**LOE:** N/A
## Tasks
- [ ] 1. First task
Edit these files to fill in the details — acceptance criteria, task breakdown, level of effort. The validation engine ensures everything stays consistent.
Organizing with epics
Group related specs under an epic:
# Create an epic (use --type epic)
nspec spec create --title "Platform foundation" --type epic --priority P1
# This gets ID E001 (epics use E prefix)
# Or create the epic and make it the default for future specs in one step
nspec spec create --title "Platform foundation" --type epic --set-default
# Create specs under it
nspec spec create --title "Database schema" --priority P1 --epic E001
nspec spec create --title "API layer" --priority P1 --epic E001
Validate your setup
nspec validate
This runs the multi-layer validation engine, checking format, existence (FR/IMPL pairing), dependencies, business logic (priority inheritance), and ordering.
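For illustration, the existence layer's FR/IMPL pairing check boils down to comparing spec IDs across the two directories. A minimal sketch under the FR-&lt;ID&gt;-slug.md naming shown above (hypothetical helper, not nspec's actual code):

```python
def unpaired_specs(fr_names: list[str], impl_names: list[str]) -> list[str]:
    """Return spec IDs that have an FR without a matching IMPL, or vice versa.

    Assumes filenames like FR-S001-user-authentication.md and
    IMPL-S001-user-authentication.md; TEMPLATE.md is skipped.
    """
    def ids(names: list[str], prefix: str) -> set[str]:
        # "FR-S001-user-authentication.md" -> "S001"
        return {
            n.split("-")[1]
            for n in names
            if n.startswith(prefix + "-") and n != "TEMPLATE.md"
        }

    # Symmetric difference: every ID present on one side but not the other.
    return sorted(ids(fr_names, "FR") ^ ids(impl_names, "IMPL"))
```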
Working with Claude Code
With the MCP server configured, Claude Code has structured access to your entire backlog. Here’s how to use it.
Start a work session
Use the /go skill (installed by nspec init):
/go S001
This calls session_start, which lists pending tasks and blockers and suggests the first action. Claude picks up where you (or a previous session) left off.
View the backlog
/backlog
Shows all specs by priority, with completion percentages and blocked items highlighted.
Autonomous mode
/loop
Claude autonomously cycles through the backlog: picks the highest-priority unblocked spec, works on it, marks tasks complete (running tests before each checkpoint), and moves to the next spec. Each spec gets a fresh context window.
Natural interaction
You don’t need skills — Claude can call any MCP tool directly. Examples:
- “What should I work on next?” — Claude calls next_spec
- “Show me S003” — Claude calls show (supports fuzzy IDs: 3, 003, S003 all work)
- “Mark task 2.1 done on S001” — Claude calls task_complete (runs make test-quick first)
- “Create a spec for search functionality” — Claude calls create_spec
How the test gate works
When Claude marks a task or acceptance criterion complete, nspec runs make test-quick before actually checking it off. If tests fail, the task stays unchecked and Claude gets the failure output. This prevents specs from advancing when code is broken.
To skip the gate (e.g., for documentation-only tasks):
Pass run_tests=false when completing this task
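The gate reduces to: run the test command, and only flip the checkbox if it exits cleanly. A minimal sketch with the same skip switch (hypothetical function and task shape; nspec's real implementation shells out to make test-quick):

```python
import subprocess

def complete_task(task: dict, run_tests: bool = True,
                  test_cmd: tuple[str, ...] = ("make", "test-quick")) -> bool:
    """Mark a task done only if the test command passes (or the gate is skipped)."""
    if run_tests:
        result = subprocess.run(test_cmd, capture_output=True, text=True)
        if result.returncode != 0:
            # Tests failed: leave the task unchecked and surface the output.
            task["failure_output"] = result.stdout + result.stderr
            return False
    task["done"] = True
    return True
```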
Sequential task enforcement
Tasks must be completed in order — task 2 can’t be checked off before task 1, and all subtasks (2.1, 2.2) must finish before their parent (2). This keeps implementation disciplined and reviewable.
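The ordering rule can be stated as: a task may be checked only when every earlier sibling at its nesting level is done and all of its own subtasks are done. A sketch of that check over dotted task IDs (illustrative only, not nspec's code):

```python
def can_complete(task_id: str, done: set[str], all_tasks: list[str]) -> bool:
    """True if task_id may be checked off under sequential enforcement.

    Earlier siblings ("1" before "2", "2.1" before "2.2") must be done,
    and a parent ("2") requires all of its subtasks ("2.1", "2.2") first.
    """
    parts = task_id.split(".")
    parent = ".".join(parts[:-1])
    for t in all_tasks:
        tp = t.split(".")
        # Sibling at the same level under the same parent, numbered earlier.
        if len(tp) == len(parts) and ".".join(tp[:-1]) == parent:
            if int(tp[-1]) < int(parts[-1]) and t not in done:
                return False
    # Every subtask of this task must already be done.
    for t in all_tasks:
        if t.startswith(task_id + ".") and t not in done:
            return False
    return True
```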
Code review
Before a spec can be archived as complete, it must pass a code review:
Review S001
Claude calls codex_review, which reviews the diff against the FR’s acceptance criteria and the IMPL’s task list. The verdict (APPROVED or NEEDS_WORK) is written directly into the IMPL file. Only APPROVED specs can be completed.
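Since the verdict lives in the IMPL file, the "only APPROVED specs can be completed" rule is just a scan of that file before archiving. A hypothetical check (the real verdict format nspec writes may differ):

```python
def can_archive(impl_text: str) -> bool:
    """A spec may be archived only if its IMPL records an APPROVED verdict.

    NEEDS_WORK anywhere blocks archiving; no verdict at all also blocks it.
    """
    if "NEEDS_WORK" in impl_text:
        return False
    return "APPROVED" in impl_text
```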
The Spec Lifecycle
Specs progress through a defined lifecycle:
Planning → Active → Testing → Ready → Completed
Each transition is explicit:
- Planning — Spec is written, tasks defined
- Active — activate sets the spec as current work; FR moves to Active
- Testing — All tasks done, advance moves to Testing
- Ready — Tests pass, advance moves to Ready
- Completed — Code review passes, complete archives to completed/done/
Claude (or you) calls advance to move forward. The MCP server validates that the spec is ready for each transition.
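Conceptually, advance is one guarded step through a linear state machine. A sketch of that transition logic (illustrative; it mirrors the stages above but omits the per-stage readiness checks the MCP server performs):

```python
STAGES = ["Planning", "Active", "Testing", "Ready", "Completed"]

def advance(status: str) -> str:
    """Move a spec to the next lifecycle stage, refusing invalid moves."""
    if status not in STAGES:
        raise ValueError(f"unknown status: {status}")
    if status == STAGES[-1]:
        raise ValueError("spec is already Completed")
    return STAGES[STAGES.index(status) + 1]
```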
Parking and blocking
If a spec is stuck:
Park S001 — waiting on external API access
Individual tasks can be blocked too:
Block task 3 on S001 — depends on infrastructure team
The blocked_specs tool shows everything that’s stalled.
CLI Quick Reference
| What | Command |
|---|---|
| Initialize project | nspec init |
| Generate MCP config | nspec --mcp-config |
| Validate everything | nspec validate |
| Create a spec | nspec spec create --title "T" [--priority P1] [--epic E001] |
| View progress | nspec spec progress --id S001 |
| Open the TUI | nspec tui |
| Engineering metrics | nspec dashboard |
| Add a dependency | nspec dep add --to S002 --dep S001 |
| Mark task done | nspec task check --id S001 --task-id 1 |
| Advance status | nspec task next-status --id S001 |
| Session handoff | nspec session handoff --id S001 |
Configuration Reference
All config lives in .novabuilt.dev/nspec/config.toml:
[paths]
feature_requests = "frs/active"
implementation = "impls/active"
completed = "completed"
completed_done = "done"
completed_superseded = "superseded"
completed_rejected = "rejected"
[defaults]
epic = "001" # Default epic for new specs
[validation]
enforce_epic_grouping = false
[review]
model = "gpt-5.2"
reasoning_effort = "high"
timeout = 600
Configuration priority: CLI args > environment variables (NSPEC_FR_DIR, etc.) > config file > built-in defaults.
What’s Next
- Explore the TUI — nspec tui gives you a full interactive dashboard with vim keybindings, search, and live reload.
- Set up CI — nspec init --ci github adds validation to your PR pipeline.
- Add your Makefile fragment — include nspec.mk in your Makefile for convenient targets.
- Read the full docs — github.com/Novabuiltdevv/nspec