We’ve released nspec 1.1.0 with major improvements to the autonomous development workflow. This release focuses on code quality gates, better epic management, and a more disciplined task completion flow.
What’s New
Codex Code Review Integration
The headline feature: automated code review via the Codex MCP. Before a spec can be marked complete, it must pass a code review that validates the implementation against the acceptance criteria in the spec's functional requirements (FRs).
Review S001
Claude calls codex_review, which sends the git diff, acceptance criteria, and task list to GPT-5.2 with high reasoning effort. The verdict — APPROVED or NEEDS_WORK — gets written directly into the IMPL file. Only approved specs can be archived.
This closes the loop on autonomous development: Claude picks up work, implements it, and can’t mark it done until an independent reviewer confirms the code meets requirements.
Configure the review in .novabuilt.dev/nspec/config.toml:
[review]
model = "gpt-5.2"
reasoning_effort = "high"
timeout = 600
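The gate itself is simple in concept: before archiving, check that the IMPL file records an APPROVED verdict. A minimal sketch of that check (the file layout and the verdict marker string are illustrative assumptions, not nspec's actual format):

```python
from pathlib import Path

# Hypothetical marker format; nspec's real IMPL-file verdict line may differ.
APPROVED_MARKER = "Review verdict: APPROVED"

def can_archive(impl_file: Path) -> bool:
    """Allow archiving only if the IMPL file records an APPROVED review."""
    if not impl_file.exists():
        return False
    return APPROVED_MARKER in impl_file.read_text()
```

A spec whose review came back NEEDS_WORK simply never satisfies the check, so the archive step refuses it.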
CLI Epic Creation
Creating epics is now a first-class CLI operation:
# Create an epic
nspec spec create --title "Platform foundation" --type epic --priority P1
# Create and set as default for future specs
nspec spec create --title "Platform foundation" --type epic --set-default
The --set-default flag updates your config so subsequent nspec spec create calls automatically group specs under that epic.
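In effect, the flag writes a default into the project config, something like the following (the `[spec]` table and `default_epic` key name are illustrative assumptions, not nspec's documented schema):

```toml
# .novabuilt.dev/nspec/config.toml
[spec]
default_epic = "E007"  # applied to subsequent `nspec spec create` calls
```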
Sequential Task Enforcement
Tasks must now be completed in order. You can’t check off task 2 before task 1, and all subtasks (2.1, 2.2, 2.3) must be finished before their parent task (2) can be marked done.
This keeps implementation disciplined and reviewable — no more jumping around the task list. The MCP server validates sequence on every task_complete call and returns clear errors when tasks are out of order.
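The ordering rule is easy to state precisely. Here is a minimal sketch of the kind of check a `task_complete` handler could perform (task IDs and data shapes are illustrative, not nspec's internal model):

```python
def is_ancestor(a: str, b: str) -> bool:
    """True if task a is a parent of task b, e.g. "2" is an ancestor of "2.1"."""
    return b.startswith(a + ".")

def validate_complete(task_id: str, done: set[str], all_tasks: list[str]) -> None:
    """Reject completing a task out of order.

    Task IDs are dotted numbers ("2", "2.1"); all_tasks is the ordered
    task list. Raises ValueError when the sequence rule is violated.
    """
    idx = all_tasks.index(task_id)
    # Every earlier task must be done, except ancestors of this task
    # (a parent cannot be done before its own subtasks).
    pending = [t for t in all_tasks[:idx]
               if t not in done and not is_ancestor(t, task_id)]
    if pending:
        raise ValueError(f"complete {pending[0]} before {task_id}")
    # A parent task requires all of its subtasks first.
    missing = [t for t in all_tasks
               if is_ancestor(task_id, t) and t not in done]
    if missing:
        raise ValueError(f"finish subtasks {missing} before {task_id}")
```

Completing "2" with "2.2" still open, or "3" with "1" still open, raises; completing "2.1" while its parent "2" is open is allowed, since parents close last.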
Fuzzy ID Resolution
All MCP tools now accept flexible ID formats:
- Full prefixed: S022, E007
- Case-insensitive: s022, e7
- Bare numbers: 22, 7, 007
Bare numbers are resolved against known specs on disk. If both S007 and E007 exist, you’ll get an error asking for the full ID. This makes natural language interaction smoother — “show me spec 22” just works.
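The resolution logic can be sketched in a few lines. This is an illustration of the technique, not nspec's actual resolver; in particular, the zero-padded ID format is an assumption:

```python
import re

def resolve_id(raw: str, known: set[str]) -> str:
    """Resolve a fuzzy spec/epic ID against the IDs that exist on disk.

    Accepts full prefixed ("S022"), case-insensitive ("s022", "e7"),
    and bare numbers ("22"). Raises on ambiguous or unknown IDs.
    """
    m = re.fullmatch(r"([SE])?0*(\d+)", raw.strip().upper())
    if not m:
        raise ValueError(f"unrecognized ID: {raw}")
    prefix, num = m.groups()
    prefixes = [prefix] if prefix else ["S", "E"]
    # Assumes on-disk IDs are zero-padded to three digits, e.g. S022, E007.
    matches = [p + num.zfill(3) for p in prefixes if p + num.zfill(3) in known]
    if len(matches) > 1:
        raise ValueError(f"ambiguous ID {raw}: matches {matches}")
    if not matches:
        raise ValueError(f"no spec or epic matches {raw}")
    return matches[0]
```

With S007 and E007 both on disk, `resolve_id("7", ...)` raises the ambiguity error described above, while `resolve_id("s7", ...)` resolves cleanly.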
MCP Init Tool
Initialize projects directly from the MCP server:
Initialize this project with nspec
Claude calls init, which auto-detects your stack and scaffolds everything: config, templates, Claude Code skills, and Makefile fragment. No need to switch to the terminal.
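Stack auto-detection can be as simple as checking for well-known marker files. A sketch of the idea (the marker table and labels here are assumptions, not nspec's actual detection logic):

```python
from pathlib import Path

# Marker files mapped to stack labels (illustrative table).
MARKERS = {
    "pyproject.toml": "python",
    "package.json": "node",
    "Cargo.toml": "rust",
    "go.mod": "go",
}

def detect_stacks(root: Path) -> list[str]:
    """Return a sorted list of stack labels for marker files in the project root."""
    return sorted({label for marker, label in MARKERS.items()
                   if (root / marker).exists()})
```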
Dockerized MCP Server
Run the MCP server in Docker for isolated or remote deployments:
scripts/mcp-docker.sh start # Build and run
scripts/mcp-docker.sh watch # With hot-reload on code changes
Configure Claude Code to connect via URL:
{
"mcpServers": {
"nspec": {
"type": "sse",
"url": "http://localhost:8080/sse"
}
}
}
Environment variables: NSPEC_MCP_PORT (default 8080), NSPEC_MCP_TRANSPORT (sse or http).
TUI Improvements
Vim keybindings: Navigate with j/k, open command modal with :.
Validation error detail screen: When validation fails, press Enter on an error to see full context with the affected file, line number, and surrounding content. Navigate between errors with n/p.
CSS-styled borders: Replaced hand-drawn ASCII borders with proper Textual CSS styling for cleaner visuals across different terminal emulators.
Upgrading
pip install --upgrade nspec[mcp]
# or
pipx upgrade nspec
# or
poetry update nspec
If you’re upgrading an existing project, regenerate the Claude Code skills:
nspec skills sync
This pulls in the updated /loop skill that uses Codex MCP for reviews.
What’s Next
We’re working on:
- E2E lifecycle tests — Full automated testing of the /loop workflow from spec creation through completion
- Cross-session context — Better handoff when Claude’s context window fills up mid-spec
- Review feedback integration — Automatically apply NEEDS_WORK feedback as new tasks
See the roadmap on GitHub or check /backlog in any nspec project.
Install
pip install nspec[mcp]
Full docs: Getting Started with nspec
PyPI: pypi.org/project/nspec/1.1.0
GitHub: github.com/Novabuiltdevv/nspec