prowl design.
v0 · experimental · open from prowl-bench

Optimize your API for AI agents.

Bring your existing OpenAPI or Postman collection. We score it for LLM-readability, suggest concrete rewrites, and auto-generate the artifacts agents need — llms.txt, an optimized OpenAPI, and a signed Mycelio manifest.

Not a Postman or Swagger replacement. The wedge is narrow: making your API legible to LLMs. Everything else (request execution, mocks, environments, team workflows) is what Postman is for.

3 reviews/day free · no signup · pip install 'prowl-bench[design]' · $0.02/review when paid

Everything above runs client-side. Your spec never leaves your browser unless you explicitly click "Run hosted review" (and even then it's the spec only, not your traffic).

How it works

01 · bring

Import your spec

Paste an OpenAPI URL, upload an OpenAPI/Postman file, or build a collection from scratch. Parsing happens in your browser.
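For context, turning a parsed OpenAPI document into per-endpoint records is the simple part. A minimal standalone sketch in Python (this is not the actual prowl-bench parser, and the sample spec is invented):

```python
import json

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def list_endpoints(spec: dict) -> list[tuple[str, str]]:
    """Enumerate (METHOD, path) pairs from a parsed OpenAPI document."""
    endpoints = []
    for path, item in spec.get("paths", {}).items():
        for method in item:
            # Path items can also hold "parameters", "summary", etc. —
            # keep only the HTTP operation keys.
            if method.lower() in HTTP_METHODS:
                endpoints.append((method.upper(), path))
    return endpoints

spec = json.loads("""
{
  "openapi": "3.1.0",
  "paths": {
    "/pets": {"get": {"summary": "List pets"}, "post": {"summary": "Create a pet"}},
    "/pets/{id}": {"get": {"summary": "Get a pet"}}
  }
}
""")
print(list_endpoints(spec))  # [('GET', '/pets'), ('POST', '/pets'), ('GET', '/pets/{id}')]
```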

02 · score

Per-endpoint LLM analysis

Each endpoint gets an agent-readiness score, token cost, per-LLM success rate, and concrete rewrite suggestions. 3 reviews/day free.
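The real scoring rubric is prowl's own. Purely as an illustration, an agent-readiness heuristic might reward the operation metadata LLMs lean on most — a stable operationId, a summary, a description, a documented response schema. Every weight below is invented:

```python
def readiness_score(operation: dict) -> int:
    """Toy agent-readiness score, 0-100. Weights are illustrative, not prowl's."""
    score = 0
    if operation.get("operationId"):
        score += 30  # stable name an agent can reference in a tool call
    if operation.get("summary"):
        score += 20  # short statement of intent
    if operation.get("description"):
        score += 20  # richer context for disambiguation
    responses = operation.get("responses", {})
    if any("content" in r for r in responses.values() if isinstance(r, dict)):
        score += 30  # documented response schema the agent can parse
    return score

op = {
    "operationId": "listPets",
    "summary": "List pets",
    "responses": {"200": {"content": {"application/json": {}}}},
}
print(readiness_score(op))  # 80 — docked 20 for the missing description
```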

03 · export

Auto-generate artifacts

One click emits llms.txt, an optimized OpenAPI with the rewrites applied, and a signed Mycelio manifest you can publish.
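llms.txt itself is a simple markdown convention: an H1 title, a blockquote summary, then link sections. A hedged sketch of emitting one from endpoint metadata — the shape only, since the real generator's output will differ:

```python
def render_llms_txt(title: str, summary: str, endpoints: list[dict]) -> str:
    """Render a minimal llms.txt: H1 title, '>' summary, one API link section."""
    lines = [f"# {title}", "", f"> {summary}", "", "## API"]
    for ep in endpoints:
        lines.append(f"- [{ep['name']}]({ep['url']}): {ep['desc']}")
    return "\n".join(lines)

doc = render_llms_txt(
    "Petstore API",
    "CRUD endpoints for managing pets.",
    [{"name": "List pets", "url": "/openapi.json", "desc": "GET /pets returns all pets"}],
)
print(doc)
```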

Pricing

Friction-free trial. Pay only when you outgrow the free tier.

| Tier | What you get | Price |
| --- | --- | --- |
| Anonymous | 3 hosted reviews/day per IP · local CLI unlimited | free |
| Logged-in (Prowl account) | 10 hosted reviews/day · saves your history | free |
| Paid (x402 / $PROWL) | Unlimited · live tests against your URL | $0.02/review · $0.05/live test |

Cloud LLM inference is the only thing that costs us real money, so it's the only thing we charge for. Artifact generation (llms.txt, OpenAPI, Mycelio manifest), publishing, and re-publishing are free at every tier.

Where we honestly are

v0. What works today:

What's still mock until backend lands: