Score, analyze, and standardize prompts before you send them to any LLM — like ESLint, but for AI instructions.
Works locally. No API keys. No pipeline required.
Most LLM cost and quality problems trace back to the prompt. Fix the input, fix the output.
Verbose, unfocused prompts burn tokens on filler. You pay for confusion, not intelligence.
Same prompt, different results. Each retry costs tokens and time. The prompt itself is the problem.
Vague instructions give the LLM room to guess. Guessing means hallucinating.
There's no linter for prompts. No scoring, no structure check. Prompts go to production unreviewed.
An open-source prompt linter that runs entirely on your machine.
A 0–100 quality score across 5 dimensions tells you exactly where a prompt is weak — before you spend a single token. Stop guessing, start measuring.
Structured compilation adds role, goal, constraints, and output format to every prompt. Clearer instructions mean the LLM gets it right the first time. Less back-and-forth, fewer wasted calls.
Runs 100% locally. Zero LLM calls inside. No API keys needed for the optimizer itself. Your prompts never leave your machine. Works offline, works air-gapped, works with any LLM.
Prompt linting is the practice of systematically scoring, restructuring, and standardizing natural-language instructions before sending them to a large language model. The goal is to reduce token cost, enforce consistent structure, and eliminate ambiguity — turning ad-hoc prompting into a repeatable, measurable process.
Five steps from rough prompt to production-ready. All analysis is deterministic — no AI calls inside.
Get a quality score (0–100) across 5 dimensions: clarity, specificity, completeness, constraints, and efficiency. See exactly where it's weak.
10 deterministic rules catch scope explosion, missing constraints, and vague instructions — before you waste a single API call.
The optimizer adds role, goal, constraints, workflow, and output format. Choose your target: Claude XML, OpenAI system/user, or Markdown.
Estimates token count and cost across 8 models from Anthropic, OpenAI, and Google. Recommends the best model for your task type.
Answer blocking questions if your prompt is ambiguous, review the compiled result, then approve. Nothing executes without your sign-off — the gate is enforced in code.
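As an illustration of what deterministic analysis means in practice, a lint pass can be written as a pure function over the prompt text. This is a hypothetical sketch only; the rule names, vague-term list, and regex here are invented for illustration and are not the package's actual internals.

```typescript
// Hypothetical sketch of a deterministic prompt lint pass.
// Rule names, term lists, and patterns are illustrative, not the
// optimizer's real implementation.
interface LintIssue {
  rule: string;
  message: string;
}

const VAGUE_TERMS = ["stuff", "things", "somehow", "maybe"];

function lintPrompt(prompt: string): LintIssue[] {
  const issues: LintIssue[] = [];
  const lower = prompt.toLowerCase();

  // Vague-instruction rule: filler words give the LLM room to guess.
  for (const term of VAGUE_TERMS) {
    if (lower.includes(term)) {
      issues.push({ rule: "vague-term", message: `Vague term "${term}" found` });
    }
  }

  // Missing-constraints rule: no explicit limits or format requirements.
  if (!/\b(must|only|exactly|format|limit)\b/.test(lower)) {
    issues.push({ rule: "missing-constraints", message: "No explicit constraints" });
  }

  return issues;
}

console.log(lintPrompt("fix stuff in the code")); // flags vague term + missing constraints
```

Because the checks are plain string and regex operations, the same prompt always produces the same issues and score, with no model call and no network access.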
Different teams, same problem: inconsistent prompts waste tokens and produce unpredictable results.
Enforce consistent structure across every prompt. Same quality bar, every time, across every team member.
Compare pricing across 8 models before every call. Compress context to strip irrelevant input. Cut spend across your org without cutting quality.
Add prompt-lint to your GitHub workflow. Set a quality threshold. CI fails if any prompt scores below it.
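A CI gate can be a short Node script that scores each prompt file and exits non-zero below the threshold. The sketch below is self-contained for illustration: in a real setup you would call the package's documented `optimize` export instead of the stub scorer, and the file names, threshold, and scoring heuristic here are all invented.

```typescript
// CI gate sketch: fail the build if any prompt scores below a threshold.
// `scorePrompt` is a stand-in stub; a real script would use the package's
// `optimize(prompt).quality.total` score instead.
function scorePrompt(prompt: string): number {
  // Stub heuristic (illustrative only): reward length and explicit constraints.
  let score = Math.min(50, prompt.length);
  if (/\b(must|only|format)\b/i.test(prompt)) score += 30;
  return score;
}

const THRESHOLD = 70; // minimum acceptable quality score (0–100)

// Example prompt files, inlined here so the sketch runs as-is.
const prompts: Record<string, string> = {
  "summarize.txt": "Summarize the report. Output format: 3 bullet points only.",
  "vague.txt": "fix it",
};

let failed = false;
for (const [name, prompt] of Object.entries(prompts)) {
  const score = scorePrompt(prompt);
  if (score < THRESHOLD) {
    console.error(`${name}: score ${score} < ${THRESHOLD}`);
    failed = true;
  }
}
// In a real CI script: process.exit(failed ? 1 : 0)
console.log(failed ? "FAIL" : "PASS");
```

Wiring this into a GitHub workflow is then just a `node` step whose exit code fails the job.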
Drop into Claude Code, Cursor, or Windsurf in 10 seconds. Works with any MCP-compatible client. No configuration required.
There are several ways to improve prompt quality. Here's how they compare.
| Method | Pros | Cons |
|---|---|---|
| Manual prompt rewriting | Flexible, no tooling needed | Inconsistent, slow, no scoring |
| Fine-tuning models | Powerful for specific tasks | Expensive, complex, requires data |
| Trial-and-error iteration | Easy to start | Token waste, no quality measurement |
| Prompt Optimizer MCP | Structured, measurable, runs locally, free tier included | Works via MCP client or Node.js |
Everything runs locally. Two runtime dependencies. No API keys needed for the optimizer itself.
Stop guessing if a prompt is good enough. Get a 0–100 score across clarity, specificity, completeness, constraints, and efficiency.
Catch problems before they waste API calls. 10 deterministic rules detect scope explosion, missing constraints, and vague instructions.
Let the LLM focus on answering, not interpreting. Outputs prompts with role, goal, constraints, and workflow — compiled for Claude, OpenAI, or Markdown.
Send less, get more. Strips irrelevant context sections and reports exact token savings. Smaller input = lower cost + better focus.
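One way to picture context compression is a keyword-overlap filter: keep only the context sections that share terms with the task. This is a hypothetical sketch, not the actual `compress_context` algorithm; the function name and heuristic are invented for illustration.

```typescript
// Hypothetical context-compression sketch: keep only sections that share
// keywords with the task, and report how much was pruned. Illustrative
// only; the real compress_context tool may use a different heuristic.
function compressContext(
  task: string,
  sections: string[]
): { kept: string[]; savedChars: number } {
  // Keywords from the task (short stop-words filtered out by length).
  const taskWords = new Set(
    task.toLowerCase().split(/\W+/).filter((w) => w.length > 3)
  );
  // Keep a section only if it mentions at least one task keyword.
  const kept = sections.filter((s) =>
    s.toLowerCase().split(/\W+/).some((w) => taskWords.has(w))
  );
  const before = sections.join("").length;
  const after = kept.join("").length;
  return { kept, savedChars: before - after };
}

const result = compressContext("fix the login bug", [
  "The login handler throws on empty passwords.",
  "Marketing copy for the landing page redesign.",
]);
console.log(result.kept.length); // 1 section kept
console.log(result.savedChars);  // characters pruned from the context
```

The token savings reported by the tool follow the same idea: fewer irrelevant characters in, fewer tokens billed.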
Know what you'll pay before you run. Token and cost estimates across 8 models from Anthropic, OpenAI, and Google.
No accounts, no network calls, no PII collected. Ed25519-signed keys verified locally. Works air-gapped.
Use as a Node.js library — no MCP server needed. `import { optimize } from 'claude-prompt-optimizer-mcp'`. Pure, synchronous, zero side effects.
Skip the MCP server. Import the optimizer directly into your Node.js code — same pipeline, pure functions, zero side effects.
```shell
npm install claude-prompt-optimizer-mcp
```

```javascript
import { optimize } from 'claude-prompt-optimizer-mcp';

const result = optimize('fix the login bug in src/auth.ts');
console.log(result.quality.total); // 63 (raw prompt score)
console.log(result.compiled);      // full XML-compiled prompt
console.log(result.cost);          // token + cost estimates (8 models)

// Other compile targets:
const oai = optimize('write a REST API', undefined, 'openai');
// oai.compiled → [SYSTEM]...[USER]...

const md = optimize('write a REST API', undefined, 'generic');
// md.compiled → ## Role ... ## Goal ...
```
ESM only · Node 18+ · `import` works, `require()` does not
Full API reference →
Metered tools count against your usage quota. Free tools are always unlimited.
| Tool | Tier | Purpose |
|---|---|---|
| optimize_prompt | Metered | Analyze, score, compile, estimate cost → PreviewPack |
| refine_prompt | Metered | Answer questions, add edits → updated PreviewPack |
| approve_prompt | Free | Sign-off gate → final compiled prompt |
| estimate_cost | Free | Multi-provider token + cost estimator |
| compress_context | Free | Prune irrelevant context, report savings |
| check_prompt | Free | Quick pass/fail + score + top issues |
| configure_optimizer | Free | Set mode, threshold, strictness, target |
| get_usage | Free | Usage count, limits, remaining quota |
| prompt_stats | Free | Aggregated stats: avg score, top tasks, savings |
| set_license | Free | Activate Pro or Power license key |
| license_status | Free | Check license, tier, and expiry |
Compilation targets include Claude XML (`<role>`, `<goal>`, `<constraints>`), the OpenAI system/user split, and generic Markdown. The cost estimator covers models from Anthropic, OpenAI, and Google. The free tier includes 10 optimizations, unlimited scoring, and all 11 tools. No credit card required.
Have a promo code? Enter it at checkout for a discount.
Add to any MCP-compatible client (Claude Code, Cursor, Windsurf) in 10 seconds.
```json
{
  "mcpServers": {
    "prompt-optimizer": {
      "command": "npx",
      "args": ["-y", "claude-prompt-optimizer-mcp"]
    }
  }
}
```
```shell
# Global install via npm
npm install -g claude-prompt-optimizer-mcp

# Or via the install script
curl -fsSL https://rishiatlan.github.io/Prompt-Optimizer-MCP/install.sh | bash
```
After purchase, you'll receive a license key. Paste it into Claude Code:
Activate my Prompt Optimizer Pro license: po_pro_eyJ...
Check my prompt optimizer license status
That's it. No account, no login, no API key. The license key is verified offline using Ed25519 signatures.