# Publishing Your First Spec
You’ve run a few specs. Maybe you forked one and tweaked it. Now you want to create something from scratch and share it with the world.
This guide walks through the full creator workflow: initialising a spec, writing the files that matter, validating everything locally, testing with the Ralph Loop, and publishing to the SpecMarket registry. By the end, your spec will have a public listing with metrics that update as people run it.
Time estimate: 30–60 minutes for your first spec, depending on complexity.
## Prerequisites
Before you start, make sure you have:
- Node.js 20+ and pnpm installed
- SpecMarket CLI installed: `npm install -g @specmarket/cli`
- Claude Code (or another compatible runner) for testing your spec
- A SpecMarket account: run `specmarket login` to authenticate via Clerk

Check you’re logged in:

```bash
specmarket whoami
# @yourname | creator | 3 specs published
```

If you see “Not logged in,” run `specmarket login` first.
## Step 1: Scaffold Your Spec
The init command creates a new directory with every required file templated out:
```bash
specmarket init --name invoice-generator
```

Or run `specmarket init` without flags for interactive mode, which prompts you for:
- Spec name (lowercase, hyphens only — this becomes your URL slug)
- Display name (“Invoice Generator”)
- SaaS it replaces (optional — e.g., “FreshBooks”)
- Output type: `web-app`, `cli-tool`, `api-service`, `library`, or `mobile-app`
- Primary stack: `nextjs-typescript`, `astro-typescript`, `python-fastapi`, `go`, `rust`, or `other`
You’ll get a directory like this:
```
invoice-generator/
├── spec.yaml              # Metadata and configuration
├── PROMPT.md              # The AI prompt — what the agent executes
├── SPEC.md                # Full human-readable specification
├── SUCCESS_CRITERIA.md    # Measurable pass/fail criteria
├── stdlib/
│   └── STACK.md           # Technology requirements and constraints
└── fix_plan.md            # Implementation tracking (used during runs)
```
Every file matters. Let’s walk through each one.
## Step 2: Write Your spec.yaml
This is the metadata file that the registry reads. The scaffolded version has sensible defaults, but you’ll want to customise it.
```yaml
name: invoice-generator
display_name: Invoice Generator
description: >
  Generate professional invoices from structured data. Supports
  PDF export, line items, tax calculation, and company branding.
  Replaces basic FreshBooks/Wave invoicing for teams that don't
  need recurring billing.
version: "1.0.0"
replaces_saas: "FreshBooks"
replaces_saas_price: "$15/user/month"
output_type: web-app
primary_stack: nextjs-typescript
runner: claude-code
min_model: claude-sonnet-4-5
tags:
  - invoicing
  - pdf
  - finance
  - small-business
estimated_tokens: 150000
estimated_cost_usd: 12.00
estimated_time_minutes: 25
infrastructure:
  default_provider: vercel
  monthly_cost_free_tier: 0
  monthly_cost_production: 5.00
  setup_time_minutes: 10
  deployment_targets:
    - vercel
    - docker
  services:
    - category: storage
      provider: vercel-blob
      purpose: "PDF storage"
      free_tier: true
user_provided:
  - "Company logo (PNG/SVG)"
  - "Tax rates for your jurisdiction"
```

Key fields to get right:
- `estimated_cost_usd` — Be honest. Overestimate slightly rather than under. Users see this before they run your spec.
- `replaces_saas` and `replaces_saas_price` — These power the cost comparison on your spec’s listing page. Only fill them in if the comparison is genuine.
- `infrastructure` — If your spec produces something that needs hosting, document the costs. Include both free tier and production pricing.
- `tags` — These drive search. Use specific, lowercase tags. Avoid generic tags like “app” or “tool.”
## Step 3: Write SPEC.md
This is the full specification — the document that defines exactly what the spec builds. Write it for a human reader first. An AI will execute it, but a human needs to evaluate whether the output matches the spec.
Good SPEC.md files have:
- Clear scope: What the spec builds, and equally important, what it doesn’t build.
- Feature list: Every feature, described precisely. “Users can create invoices” is vague. “Users can create invoices with up to 50 line items, each with description, quantity, unit price, and tax rate” is useful.
- Data model: What entities exist, what fields they have, how they relate.
- UI requirements (for web-apps): Page list, key interactions, responsive requirements.
- Edge cases: What happens on error? What about empty states?
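For a web-app spec, a precise data model section translates almost directly into types. A minimal TypeScript sketch for the invoice example (the names and fields here are illustrative, not required by the spec format):

```typescript
// Illustrative data model for an invoice spec. A precise SPEC.md
// lets the agent derive types like these without guessing.
interface LineItem {
  description: string;
  quantity: number;   // must be positive
  unitPrice: number;  // USD
  taxRate: number;    // percentage, e.g. 8.5
}

interface Invoice {
  number: string;     // auto-incremented, e.g. "INV-0001"
  company: { name: string; address: string; contact: string };
  client: { name: string; address: string; contact: string };
  items: LineItem[];  // the spec caps this at 50
}
```

If you can write this sketch from your own SPEC.md without inventing anything, the data model section is specific enough.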
```markdown
# Invoice Generator

## Overview

A web application that generates professional PDF invoices
from structured input. Designed for freelancers and small
teams who need clean invoices without a SaaS subscription.

## Features

### Invoice Creation

- Create invoices with up to 50 line items
- Each line item: description (text), quantity (number),
  unit price (USD), tax rate (percentage)
- Subtotal, tax, and total calculated automatically
- Add company logo, name, address, and contact info
- Add client name, address, and contact info
- Invoice numbering: auto-increment from INV-0001

### PDF Export

- Generate PDF with professional layout
- A4 and US Letter paper sizes
- Download or share via unique URL (24-hour expiry)

## What This Spec Does NOT Build

- Recurring invoices or subscriptions
- Payment processing (no Stripe integration)
- Client management or CRM features
- Multi-currency support (USD only in v1)
```

## Step 4: Write SUCCESS_CRITERIA.md
This is how SpecMarket determines whether a run succeeded. Each criterion is a checkbox that the runner evaluates after execution. Be specific and testable.
```markdown
# Success Criteria

## Core Functionality

- [ ] Application starts without errors on `npm run dev`
- [ ] User can create a new invoice with at least 3 line items
- [ ] Subtotal correctly sums all (quantity × unit price) values
- [ ] Tax is calculated correctly per line item
- [ ] Total equals subtotal plus total tax
- [ ] Generated PDF contains all invoice data
- [ ] PDF renders correctly in Chrome, Firefox, and Safari

## Data Validation

- [ ] Empty invoice (0 line items) shows validation error
- [ ] Negative quantities are rejected
- [ ] Invoice number auto-increments from INV-0001

## UI/UX

- [ ] Responsive layout works on mobile (375px width)
- [ ] Company logo uploads and displays in PDF
- [ ] Loading state shown during PDF generation
```

Rules for good criteria:
- Each criterion must be independently verifiable — no “and” joining unrelated checks.
- Use concrete values: “at least 3 line items” not “multiple items.”
- Include both happy path and error cases.
- The runner checks these using a combination of automated tests and visual inspection. Make them checkable.
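As a concrete illustration of “checkable”: two of the Data Validation criteria above reduce to a small pure function that an automated test can exercise. A sketch (the function name and error strings are hypothetical):

```typescript
// Hypothetical validator covering two criteria:
// "Empty invoice (0 line items) shows validation error" and
// "Negative quantities are rejected".
function validateLineItems(items: { quantity: number }[]): string[] {
  const errors: string[] = [];
  if (items.length === 0) {
    errors.push("Invoice must contain at least one line item");
  }
  for (const item of items) {
    if (item.quantity <= 0) {
      errors.push("Quantity must be a positive number");
    }
  }
  return errors;
}
```

A criterion that can be restated as a function plus an assertion is testable; one that cannot will produce flaky pass/fail results.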
## Step 5: Write PROMPT.md
This is what the AI agent actually executes. Think of it as the instruction set for the Ralph Loop. It should reference SPEC.md and SUCCESS_CRITERIA.md rather than duplicating their content.
```markdown
# Invoice Generator — Build Prompt

## Context

You are building an invoice generator web application.
Read SPEC.md for the full specification and
SUCCESS_CRITERIA.md for pass/fail requirements.

## Technical Requirements

Read stdlib/STACK.md for technology constraints.

## Build Order

1. Set up Next.js project with TypeScript
2. Implement data model (Invoice, LineItem types)
3. Build invoice creation form with line item management
4. Implement calculation logic (subtotal, tax, total)
5. Build PDF generation using @react-pdf/renderer
6. Add company branding (logo upload, details form)
7. Style with Tailwind CSS, ensure mobile responsive
8. Test against all success criteria

## Important

- Do NOT install unnecessary dependencies
- Do NOT add features beyond what SPEC.md defines
- Run the dev server and verify each feature works
  before moving to the next
```

Keep PROMPT.md focused on execution order and constraints. The spec itself lives in SPEC.md.
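Build step 4 in the order above, for instance, comes down to a small pure function. A sketch of what the agent is expected to produce (the name `computeTotals` is illustrative):

```typescript
// Sketch of the calculation logic from build step 4. Subtotal sums
// quantity × unit price; tax is computed per line item; total is
// subtotal plus tax, matching the success criteria.
function computeTotals(
  items: { quantity: number; unitPrice: number; taxRate: number }[],
) {
  const subtotal = items.reduce((s, it) => s + it.quantity * it.unitPrice, 0);
  const tax = items.reduce(
    (s, it) => s + it.quantity * it.unitPrice * (it.taxRate / 100),
    0,
  );
  return { subtotal, tax, total: subtotal + tax };
}
```

Because SPEC.md pins the calculation rules down this precisely, criteria like “Total equals subtotal plus total tax” can be verified mechanically.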
## Step 6: Write stdlib/STACK.md
This locks down the technology choices so the AI doesn’t make incompatible decisions.
```markdown
# Technology Stack

## Framework

- Next.js 15 with App Router
- TypeScript (strict mode)
- React 19

## Styling

- Tailwind CSS 4
- No component libraries (build from scratch)

## PDF Generation

- @react-pdf/renderer

## Testing

- Vitest for unit tests
- Playwright for e2e (if time permits)

## Deployment

- Vercel (primary)
- Docker support via Dockerfile

## Constraints

- No external databases — use local state or file storage
- No authentication required
- Must work offline after initial load
```

## Step 7: Validate Locally
Before publishing, validate your spec:
```bash
specmarket validate ./invoice-generator
```

This checks:

- `spec.yaml` schema compliance
- All required files exist and are non-empty
- SUCCESS_CRITERIA.md has at least one criterion
- Infrastructure block is internally consistent
- Estimates are reasonable (warnings for extreme values)
Fix any errors. Warnings are informational — address them if they’re legitimate concerns.
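The file-existence portion of these checks is easy to approximate yourself. A rough TypeScript sketch, assuming the required-file set matches the scaffold from Step 1 (the real validator also checks the spec.yaml schema and estimate ranges):

```typescript
import * as fs from "fs";
import * as path from "path";

// Required files, inferred from the `specmarket init` scaffold.
const REQUIRED_FILES = [
  "spec.yaml",
  "PROMPT.md",
  "SPEC.md",
  "SUCCESS_CRITERIA.md",
  "stdlib/STACK.md",
];

// Returns the required files that are missing or empty in specDir.
function missingOrEmpty(specDir: string): string[] {
  return REQUIRED_FILES.filter((file) => {
    const full = path.join(specDir, file);
    return !fs.existsSync(full) || fs.statSync(full).size === 0;
  });
}
```

Running a check like this in CI keeps a spec repo publishable as you iterate.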
## Step 8: Test With a Run
Run your spec locally to verify it works:
```bash
specmarket run ./invoice-generator --max-loops 50
```

Watch the output. When the run completes, you’ll see which success criteria passed and which failed. If it stalls or fails, check:
- Is PROMPT.md clear enough? Ambiguous instructions cause stalls.
- Are SUCCESS_CRITERIA.md criteria testable? Vague criteria cause false failures.
- Is stdlib/STACK.md specific enough? If the AI picks incompatible libraries, constrain the choices.
Iterate on your spec files until you get a clean run. This is normal — most specs need 2–3 revision rounds before they’re reliable.
Check the run report:
```bash
specmarket report <run-id>
```

Note the cost, time, and token count. These become your `estimated_*` values in spec.yaml. Update them to match reality:
```yaml
# After a successful run that cost $11.40 and took 28 minutes,
# update spec.yaml:
estimated_cost_usd: 12.00     # Round up slightly
estimated_time_minutes: 30    # Round up slightly
estimated_tokens: 160000      # From run report
```

## Step 9: Publish
When your spec passes validation and produces a clean run, publish it:
```bash
specmarket publish ./invoice-generator
```

The CLI will:

- Validate your spec one more time
- Create a zip archive (excluding `node_modules`, `dist`, and `.git`)
- Upload the archive to Convex file storage
- Create your registry listing
On success:
```
✓ Validated
✓ Archived (38 KB)
✓ Published @yourname/invoice-generator v1.0.0

Run it: specmarket run @yourname/invoice-generator
```
Your spec is now live. Anyone can find it with `specmarket search "invoice"` and run it with `specmarket run @yourname/invoice-generator`.
Rate limit: You can publish up to 5 specs per day.
## What Happens After Publishing
Your spec listing appears on the registry immediately with:
- 0 runs, 0% success rate — this is normal, metrics build as people use it
- Your estimated cost from spec.yaml
- Security score — an automated scan runs on publish (scores 0–100)
- Community rating — users rate specs 1–5 stars after running them
As people run your spec, the platform tracks:
- Success rate across all runs
- Average cost (real, from telemetry)
- Average time
- Most used model
- Cost distribution (min, median, max)
These metrics update automatically. You don’t need to do anything.
## Updating Your Spec
To publish a new version, update `version` in spec.yaml and run `specmarket publish` again:

```yaml
version: "1.1.0"
```

```bash
specmarket publish ./invoice-generator
# ✓ Published @yourname/invoice-generator v1.1.0
```

Version history is preserved. Users can run any published version with `specmarket run @yourname/invoice-generator@1.0.0`.
## Tips From Spec Creators
**Start small.** A spec that reliably builds a focused tool (invoice generator, markdown editor, URL shortener) is more valuable than a spec that attempts a full CRM and fails 60% of the time. High success rate > broad scope.

**Be specific about what you don’t build.** “This spec does NOT build recurring billing” saves runners from wasting tokens discovering that limitation on their own.

**Test on multiple models.** If your spec works on `claude-sonnet-4-5`, try it on `claude-opus-4-5` too. Note the model you tested with in spec.yaml’s `min_model` field.

**Watch your metrics.** After publishing, check your spec’s success rate. If it drops below 80%, investigate which criteria are failing and tighten your PROMPT.md or SPEC.md.

**Respond to community feedback.** Ratings include review text. If runners consistently hit the same issue, fix it in a new version.
## Related
- Spec Format Reference — Complete spec.yaml schema and file structure
- Getting Started — Running your first spec as a user
- CLI Command Reference — All 12 commands documented
- Security & Trust — How published specs are scanned