Installation & Setup Issues

“Command not found: specmarket”

The CLI isn’t in your PATH.

Solution: Check pnpm’s global bin directory:

pnpm bin -g

Add this directory to your shell profile (~/.bashrc, ~/.zshrc, etc.):

export PATH="$(pnpm bin -g):$PATH"

Then reload your shell: source ~/.bashrc or source ~/.zshrc.

Note: If you installed via npm install -g @specmarket/cli, use npm prefix -g and add its bin/ subdirectory to PATH instead (npm bin -g was removed in npm 9).
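To make the PATH change permanent, append the export to your profile; a minimal sketch for bash (use ~/.zshrc instead if you use zsh):

```shell
# Persist pnpm's global bin dir on PATH in your bash profile.
# Single quotes keep $(pnpm bin -g) unexpanded until login time.
echo 'export PATH="$(pnpm bin -g):$PATH"' >> ~/.bashrc
```

Then reload with source ~/.bashrc as above.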


“Node.js version mismatch”

SpecMarket requires Node.js 20 or later.

Check your version:

node --version

Solution:

  • Use nvm or similar: nvm install 20 then nvm use 20
  • Or install directly from nodejs.org

Specs may specify higher minimum versions (e.g., node_policy: ">=20.0.0" in spec.yaml). Check the spec details with specmarket info <spec-name>.
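If you want to script that check, a sketch (the node_major helper is hypothetical, not part of the CLI):

```shell
# Extract the major version from a Node.js version string like "v20.11.1".
node_major() {
  v=${1#v}          # strip the leading "v"
  echo "${v%%.*}"   # keep everything before the first dot
}

# Warn if the installed Node.js is older than the required major version.
if command -v node >/dev/null 2>&1; then
  major=$(node_major "$(node --version)")
  [ "$major" -ge 20 ] || echo "Node.js 20+ required, found v$major" >&2
fi
```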


pnpm not found

SpecMarket and specs use pnpm for dependency management.

Solution:

npm install -g pnpm

Verify:

pnpm --version

“EACCES: permission denied” when installing globally

Your npm/pnpm global directory requires elevated permissions.

Solution (Option 1 — Recommended): Fix npm permissions without sudo:

mkdir ~/.npm-global
npm config set prefix ~/.npm-global
export PATH=~/.npm-global/bin:$PATH

Add the export to your shell profile to make it permanent.

Solution (Option 2): Use a Node version manager (nvm, asdf) — it handles permissions automatically.


Authentication & Credentials

“Unauthorized” when running specmarket login

Your Clerk session expired or couldn’t initialize.

Solutions:

  1. Clear the cached credential:
    rm ~/.specmarket/credentials.json
  2. Try logging in again:
    specmarket login
  3. If your browser doesn’t open, open specmarket.dev/auth manually and follow the device code flow.

Credentials are stored in a plaintext file

SpecMarket stores credentials at ~/.specmarket/credentials.json with 0600 file permissions (read/write by owner only).

The file permissions keep other local users out, but:

  • On shared machines, don’t use specmarket login — use unauthenticated mode instead
  • The credentials file contains your session token; treat it like a password
  • You can delete it anytime: rm ~/.specmarket/credentials.json

Note: Credentials expire after 30 days. Re-authenticate with specmarket login when prompted.
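If you want to verify the permissions yourself, a sketch (GNU and BSD/macOS stat take different flags):

```shell
# Check that the credentials file is mode 600 (owner read/write only).
# GNU stat uses -c '%a'; BSD/macOS stat uses -f '%Lp'.
creds="$HOME/.specmarket/credentials.json"
if [ -f "$creds" ]; then
  mode=$(stat -c '%a' "$creds" 2>/dev/null || stat -f '%Lp' "$creds")
  [ "$mode" = "600" ] || echo "warning: $creds is mode $mode, expected 600" >&2
fi
```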


“Invalid or expired device code”

Your login device code expired (typically 15 minutes).

Solution: Run specmarket login again and complete the authentication flow immediately.


“Spec validation failed”

The spec you tried to run has structural issues.

Solutions:

  1. Check the error message — It usually specifies which file is missing or malformed
  2. Validate locally — If you have the spec directory:
    specmarket validate /path/to/spec
  3. Check the spec details online:
    specmarket info @author/spec-name
    Look for recent version changes or reported issues.

Why this happens:

  • Spec author uploaded a broken version (rare for published specs)
  • Network issue during download — try again
  • Spec was deleted by the author

“Search rate limit exceeded”

You’ve made too many search requests too quickly.

Limit: 100 requests per minute per IP address.

Solution: Wait 60 seconds, then try again. If you’re running automated scripts, add delays between requests:

sleep 2  # 2-second delay between calls
specmarket search "query"
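For larger batches, a small wrapper keeps the delay in one place; a sketch (the throttled helper and queries.txt are hypothetical):

```shell
# Run a command once per line of stdin, sleeping between calls
# to stay well under the 100 requests/minute limit.
throttled() {
  delay=$1; shift
  while IFS= read -r query; do
    "$@" "$query"
    sleep "$delay"
  done
}

# Usage (assumes queries.txt has one search term per line):
# throttled 2 specmarket search < queries.txt
```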

Search returns “No specs found” but I see specs on the web

You may be logged in as a different user, or filtering is too restrictive.

Solutions:

  1. Check your login status:
    specmarket whoami
  2. Try a broader search:
    specmarket search ""  # Returns all published specs
  3. Use the web UI at specmarket.dev/explore to browse with filters

Ralph Loop & Execution Issues

“Ralph Loop stalled” or “Build didn’t complete”

The AI agent couldn’t make progress on the spec.

Common causes:

  • Model ran out of context (spec too complex, solution requires many files)
  • Success criteria are contradictory or impossible to achieve
  • Spec requires an external service (Stripe, AWS) that isn’t configured

Solutions:

  1. Check the failure reason:

    specmarket report latest

    Look at which success criteria failed.

  2. Try running again: Ralph Loops are non-deterministic. Different runs may succeed or fail differently. Try 2-3 times.

  3. Check the spec’s success rate:

    specmarket info @author/spec-name

    If success rate is <50%, the spec is difficult or broken. Check recent reviews.

  4. Use a more capable model:

    specmarket run @author/spec-name --model claude-opus-4-6

    Opus handles complex specs better than Sonnet.

  5. Simplify the success criteria: If the spec is yours, break it into smaller success criteria. Instead of “build a full e-commerce platform”, start with “build product catalog only”.
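The retry advice in step 2 can be scripted; a sketch (retry is a hypothetical helper, not a CLI feature):

```shell
# Retry a command up to N times. Ralph Loops are non-deterministic,
# so a later attempt may succeed where the first failed.
retry() {
  n=$1; shift
  i=1
  while ! "$@"; do
    [ "$i" -ge "$n" ] && return 1
    i=$((i + 1))
    echo "attempt $i of $n..." >&2
  done
}

# Usage:
# retry 3 specmarket run @author/spec-name
```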


“Build completed but some success criteria failed”

Not all success criteria were met.

Check what failed:

specmarket report latest

The output shows which criteria passed and which failed.

Next steps:

  1. For published specs: This is a spec quality issue. Check the spec’s success rate on the marketplace.
  2. For your own specs: Review your success criteria — are they too strict or ambiguous? Example bad criteria: “code is clean” (subjective). Example good criteria: “app starts without errors, displays data from database”.

“Docker sandbox not available”

You specified --sandbox docker but Docker isn’t installed or running.

Solutions:

  1. Install Docker:
    • macOS/Windows: Docker Desktop
    • Linux: apt install docker.io or equivalent
  2. Start Docker:
    docker ps  # Verify Docker is running
  3. Grant permissions (Linux):
    sudo usermod -aG docker $USER
    # Log out and back in for changes to take effect
  4. Run without sandbox if you trust the spec:
    specmarket run @author/spec-name  # No --sandbox flag

Security note: Running untrusted specs without sandboxing is risky. Specs can modify files on your system.
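A pre-flight check before choosing a sandbox, as a sketch:

```shell
# Check that the docker CLI exists and the daemon responds before
# passing --sandbox docker; otherwise fall back or abort.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  echo "docker sandbox available"
else
  echo "docker missing or daemon not running" >&2
fi
```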


“Ralph Loop ran but output is broken or incomplete”

The build technically succeeded (success criteria passed), but the output isn’t usable.

Possible causes:

  • Spec success criteria are incomplete (don’t cover all functionality)
  • Spec contains bugs not caught by tests
  • Output was built but isn’t well-documented

Solutions:

  1. Test the output yourself:
    cd output-directory
    pnpm install
    pnpm test
  2. Check the spec’s README — It should explain what was built and how to run it
  3. Rate the spec honestly — Leave a review explaining what’s missing
  4. Report the issue — Use specmarket report to flag problems to the spec author

Telemetry & Privacy

“What data does telemetry collect?”

When you run specmarket report latest after a spec run, the following is submitted:

Transmitted:

  • Spec ID and version
  • Model used (e.g., claude-sonnet-4)
  • Total API cost (USD)
  • Build duration (seconds)
  • Success/failure status
  • Which success criteria passed/failed
  • Your username (if logged in)
  • Timestamp

NOT transmitted:

  • Your source code or output files
  • Prompts sent to Claude
  • Environment variables
  • API keys or credentials
  • Anything in .env or config files

“Is telemetry on by default?”

No. Telemetry is opt-in and disabled by default.

To enable it:

specmarket config set telemetry true

To disable it:

specmarket config set telemetry false

To delete all submitted data:

specmarket config delete-telemetry

“I submitted telemetry — can I delete it?”

Yes.

specmarket config delete-telemetry

This removes all your submitted run reports from the platform. Your data is permanently deleted within 24 hours.


Known Limitations

These are documented gaps. They don’t prevent using SpecMarket, but you should be aware of them.

Bounties and Payments (Phase 1.5)

Limitation: Stripe Connect onboarding for bounty winners is scaffolded but not yet live.

What works:

  • Browse open bounties: specmarket bounties list
  • View bounty details: specmarket bounties info <id>

What doesn’t work yet:

  • Actually claim and receive payment for a bounty (coming in a future release)
  • Spec authors can’t onboard to Stripe Connect to receive payouts

Timeline: Stripe Connect will be enabled after Phase 1 stable deployment.


Ratings (Phase 1.5)

Limitation: The web UI’s “Rate this spec” widget exists but isn’t wired to the backend yet.

What works:

  • View existing ratings: specmarket info @author/spec

What doesn’t work yet:

  • Clicking the star rating on the spec detail page — it won’t save
  • Submitting ratings via CLI (coming in a future release)

Workaround: Rate specs by leaving written reviews on the spec’s page (once review UI is live).


LLM Security Scanning (Experimental)

Limitation: The security scanner returns a default score of 75 instead of actually scanning the spec code.

What this means:

  • Specs aren’t being scanned for prompt injection, jailbreak vulnerability, or code safety issues yet
  • The score “75” doesn’t reflect the actual security posture of the spec
  • This is a placeholder for future machine-readable security analysis

Workaround: Review the spec’s code yourself:

git clone <spec-repo>
cd spec/
# Review spec.yaml, SPEC.md, PROMPT.md, stdlib/ for suspicious content
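A grep pass can surface places worth reading closely; a sketch (matches are leads for manual review, not verdicts):

```shell
# Flag patterns that warrant manual inspection in a spec checkout.
# A match is not proof of anything malicious -- just a place to read closely.
grep -rn -E 'curl|wget|eval|base64|chmod \+x' spec/ 2>/dev/null \
  || echo "no obvious flags found"
```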

Managed Runs (Phase 2)

Limitation: Cloud-hosted spec execution (Managed Runs) is fully implemented on the backend but not yet deployed.

What this means:

  • You can only run specs locally with specmarket run
  • You can’t queue a spec to run in the cloud and receive results via webhook
  • You can’t set a scheduled cron job to run a spec daily

Timeline: Managed Runs will launch in Phase 2 after stable Phase 1 deployment. A coordinator service on Fly.io will handle orchestration.


Web UI Incomplete Features

Some web UI components are placeholders:

  • Social proof section — “Featured Specs” and “Happy Users” sections don’t have real data yet (blocked on first published specs)
  • Spec detail page runs tab — Shows structure but doesn’t display actual run history yet
  • Creator dashboard — Basic structure exists but analytics and advanced management aren’t implemented

Performance Issues

“Spec run is slow”

Ralph Loops are inherently slow — they involve multiple LLM calls, testing cycles, and refactoring.

Expected time: 15-60 minutes depending on spec complexity and model.

To make runs faster:

  1. Use a more capable model:

    specmarket run @author/spec --model claude-opus-4-6

    Capable models make fewer mistakes, requiring fewer fix cycles.

  2. Use a faster (but less capable) model if the spec is simple:

    specmarket run @author/spec --model claude-haiku-4-5

    Only for very small specs.

  3. Run locally, not in Docker: The Docker sandbox adds overhead. Use --sandbox none if you trust the spec.


“CLI commands are slow”

Network or disk I/O latency.

Check your network:

ping specmarket.dev

Time an end-to-end command:

time specmarket search ""

If it takes more than 5 seconds, you may have network issues or the platform may be under load.


Reporting Issues

How to report a bug

Check the GitHub issues first — your bug might already be reported.

If you find a new bug:

  1. Run the command again and save the full output
  2. Include: your OS, Node version, CLI version (specmarket --version)
  3. Include exact steps to reproduce
  4. Open an issue on GitHub with the bug report template

How to request a feature

Use the feature request template on GitHub.


How to get help

  1. Check this page — Most common issues are documented here
  2. Check FAQ — General questions about specs, Ralph Loops, and the marketplace
  3. Check Spec Format Reference — If you’re publishing a spec
  4. Ask on GitHub Discussions — Community help

Getting Help

Still stuck?

This documentation is maintained by the Chief Wiggum agent. If you notice something is out of date or missing, file an issue and we’ll update it.