Open Source · The prompt linter for LLM applications · Zero LLM Calls Inside

The Prompt Linter
for LLM Applications

Score, analyze, and standardize prompts before you send them to any LLM — like ESLint, but for AI instructions.

Works locally. No API keys. No pipeline required.

Your prompt
> Write me something about our product launch
Score: 31/100
⚠ vague_objective ⚠ missing_audience ⚠ no_constraints
Compiled output
<role>Product marketing writer</role>
<goal>Write a 200-word launch announcement</goal>
<constraints>
- Tone: professional, concise
- Audience: existing customers
</constraints>
Score: 91/100

Why Prompt Quality Matters

Most LLM cost and quality problems trace back to the prompt. Fix the input, fix the output.

💸

Poor prompts increase token cost

Verbose, unfocused prompts burn tokens on filler. You pay for confusion, not intelligence.

🔄

Inconsistent outputs require rework

Same prompt, different results. Each retry costs tokens and time. The prompt itself is the problem.

⚠️

Hidden ambiguity increases hallucination risk

Vague instructions give the LLM room to guess. Guessing means hallucinating.

🤷

Teams lack objective quality scoring

There's no linter for prompts: no scoring, no structure check. Prompts ship straight to production unreviewed.

Prompt Optimizer MCP

An open-source prompt linter that runs entirely on your machine.

01

Every prompt scored before you run it

A 0–100 quality score across 5 dimensions tells you exactly where a prompt is weak — before you spend a single token. Stop guessing, start measuring.

02

Standardized structure, fewer retries

Structured compilation adds role, goal, constraints, and output format to every prompt. Clearer instructions mean the LLM gets it right the first time. Less back-and-forth, fewer wasted calls.

03

Works everywhere, calls nothing

Runs 100% locally. Zero LLM calls inside. No API keys needed for the optimizer itself. Your prompts never leave your machine. Works offline, works air-gapped, works with any LLM.

What Is AI Prompt Linting?

Prompt linting is the practice of systematically scoring, restructuring, and standardizing natural-language instructions before sending them to a large language model. The goal is to reduce token cost, enforce consistent structure, and eliminate ambiguity — turning ad-hoc prompting into a repeatable, measurable process.

  • Scoring clarity, specificity, and structure
  • Detecting ambiguity and missing constraints
  • Compiling structured instructions with role, goal, and boundaries
  • Estimating token cost across multiple providers
  • Standardizing prompt frameworks across teams
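To make the idea concrete, a single deterministic lint rule can be sketched as a pure function over the prompt text. The rule id, trigger pattern, and penalty below are illustrative assumptions, not the optimizer's actual rule set:

```typescript
// Sketch of one deterministic lint rule: flag a verb with no measurable
// object ("write something", "fix stuff"). Rule id, regex, and penalty
// are illustrative — not the optimizer's real rules.
interface LintFinding {
  rule: string;
  message: string;
  penalty: number; // points deducted from a 0–100 score
}

const VAGUE_OBJECTIVE = /\b(write|make|do|fix)\b[^.]{0,20}\b(something|anything|stuff)\b/i;

function lintVagueObjective(prompt: string): LintFinding[] {
  const findings: LintFinding[] = [];
  if (VAGUE_OBJECTIVE.test(prompt)) {
    findings.push({
      rule: "vague_objective",
      message: "Objective has no measurable target — say what, for whom, and how long.",
      penalty: 15,
    });
  }
  return findings;
}
```

Because rules like this are plain pattern matching, the same input always produces the same findings — no model call, no nondeterminism.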

How It Works

Five steps from rough prompt to production-ready. All analysis is deterministic — no AI calls inside.

1

Score your prompt

Get a quality score (0–100) across 5 dimensions: clarity, specificity, completeness, constraints, and efficiency. See exactly where it's weak.

2

Detect ambiguities

10 deterministic rules catch scope explosion, missing constraints, and vague instructions — before you waste a single API call.

3

Compile a structured version

The optimizer adds role, goal, constraints, workflow, and output format. Choose your target: Claude XML, OpenAI system/user, or Markdown.

4

Know the cost before you run

Estimates token count and cost across 8 models from Anthropic, OpenAI, and Google. Recommends the best model for your task type.

5

Review & approve

Answer blocking questions if your prompt is ambiguous, review the compiled result, then approve. Nothing executes without your sign-off — the gate is enforced in code.
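The five-dimension score from step 1 can be sketched as a simple aggregation. The dimension names match the tool's docs; the equal weighting below is an illustrative assumption, not the optimizer's real scoring formula:

```typescript
// Sketch: combine five 0–100 dimension scores into one total.
// Dimension names come from the docs; equal 20% weights are an
// illustrative assumption.
interface QualityScore {
  clarity: number;
  specificity: number;
  completeness: number;
  constraints: number;
  efficiency: number;
}

function totalScore(q: QualityScore): number {
  const dims = [q.clarity, q.specificity, q.completeness, q.constraints, q.efficiency];
  const avg = dims.reduce((sum, d) => sum + d, 0) / dims.length;
  return Math.round(avg);
}
```

A prompt weak on clarity and specificity but middling elsewhere lands in the low 30s — the kind of score that tells you exactly which dimension to fix first.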

Use Cases

Different teams, same problem: inconsistent prompts waste tokens and produce unpredictable results.

For developers building AI features

Enforce consistent structure across every prompt. Same quality bar, every time, across every team member.

For engineering teams managing LLM spend

Compare pricing across 8 models before every call. Compress context to strip irrelevant input. Cut spend across your org without cutting quality.

For CI/CD pipelines

Add prompt-lint to your GitHub workflow. Set a quality threshold. CI fails if any prompt scores below it.
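A CI gate might look like the workflow below. The job layout and the `prompts/` directory are assumptions for illustration — only `optimize()` and `result.quality.total` come from the documented programmatic API:

```yaml
# Sketch of a prompt-lint gate; job shape and prompts/ path are assumptions.
name: prompt-lint
on: [pull_request]
jobs:
  lint-prompts:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm install claude-prompt-optimizer-mcp
      - run: |
          node --input-type=module -e "
          import { optimize } from 'claude-prompt-optimizer-mcp';
          import { readFileSync, readdirSync } from 'node:fs';
          const THRESHOLD = 70; // fail CI below this score
          for (const f of readdirSync('prompts')) {
            const score = optimize(readFileSync('prompts/' + f, 'utf8')).quality.total;
            if (score < THRESHOLD) { console.error(f, score); process.exit(1); }
          }"
```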

For MCP-powered workflows

Drop into Claude Code, Cursor, or Windsurf in 10 seconds. Works with any MCP-compatible client. No configuration required.

How Teams Check Prompt Quality Today

There are several ways to improve prompt quality. Here's how they compare.

Method | Pros | Cons
Manual prompt rewriting | Flexible, no tooling needed | Inconsistent, slow, no scoring
Fine-tuning models | Powerful for specific tasks | Expensive, complex, requires data
Trial-and-error iteration | Easy to start | Token waste, no quality measurement
Prompt Optimizer MCP | Structured, measurable, runs locally, free tier included | Works via MCP client or Node.js

Built for people who take prompts seriously

Everything runs locally. Two runtime dependencies. No API keys needed for the optimizer itself.

🔍

Quality Scoring

Stop guessing if a prompt is good enough. Get a 0–100 score across clarity, specificity, completeness, constraints, and efficiency.

🛡️

Ambiguity Detection

Catch problems before they waste API calls. 10 deterministic rules detect scope explosion, missing constraints, and vague instructions.

🎯

Structured Compilation

Let the LLM focus on answering, not interpreting. Outputs prompts with role, goal, constraints, and workflow — compiled for Claude, OpenAI, or Markdown.

Context Compression

Send less, get more. Strips irrelevant context sections and reports exact token savings. Smaller input = lower cost + better focus.
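One way to picture the idea: drop context sections that share no keywords with the task. This naive overlap heuristic is an illustration only, not the optimizer's actual pruning algorithm:

```typescript
// Naive context-compression sketch: keep only sections sharing a keyword
// with the task. Heuristic is illustrative, not the tool's real algorithm.
function compressContext(
  task: string,
  sections: string[]
): { kept: string[]; savedChars: number } {
  const keywords = new Set(
    task.toLowerCase().split(/\W+/).filter((w) => w.length > 3)
  );
  const kept = sections.filter((s) =>
    s.toLowerCase().split(/\W+/).some((w) => keywords.has(w))
  );
  const before = sections.join("").length;
  const after = kept.join("").length;
  return { kept, savedChars: before - after };
}
```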

💰

Multi-Provider Cost

Know what you'll pay before you run. Token and cost estimates across 8 models from Anthropic, OpenAI, and Google.
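A back-of-envelope version of this estimate multiplies an approximate token count by each model's input price. The chars/4 rule of thumb and the model names and prices below are illustrative assumptions — real tokenizers and current provider pricing differ:

```typescript
// Rough token + cost estimate. chars/4 and the prices below are
// illustrative assumptions, not real tokenizer output or live pricing.
const PRICE_PER_MTOK_INPUT: Record<string, number> = {
  "model-a": 3.0, // hypothetical $/1M input tokens
  "model-b": 0.25,
};

function estimateCost(
  prompt: string
): Record<string, { tokens: number; usd: number }> {
  const tokens = Math.ceil(prompt.length / 4); // rough heuristic
  const out: Record<string, { tokens: number; usd: number }> = {};
  for (const [model, price] of Object.entries(PRICE_PER_MTOK_INPUT)) {
    out[model] = { tokens, usd: (tokens / 1_000_000) * price };
  }
  return out;
}
```

Even a crude estimate like this makes the cheap-vs-capable tradeoff visible before a single call is made.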

🔒

Offline License

No accounts, no network calls, no PII collected. Ed25519-signed keys verified locally. Works air-gapped.

📦

Programmatic API

Use as a Node.js library — no MCP server needed. import { optimize } from 'claude-prompt-optimizer-mcp'. Pure, synchronous, zero side effects.

Use it as a library

Skip the MCP server. Import the optimizer directly into your Node.js code — same pipeline, pure functions, zero side effects.

npm install
npm install claude-prompt-optimizer-mcp
Usage — full analysis pipeline in 3 lines
import { optimize } from 'claude-prompt-optimizer-mcp';

const result = optimize('fix the login bug in src/auth.ts');
console.log(result.quality.total); // 63 (raw prompt score)
console.log(result.compiled);      // Full XML-compiled prompt
console.log(result.cost);          // Token + cost estimates (8 models)
Target any LLM — Claude XML, OpenAI, or Markdown
const oai = optimize('write a REST API', undefined, 'openai');
// oai.compiled → [SYSTEM]...[USER]...

const md = optimize('write a REST API', undefined, 'generic');
// md.compiled → ## Role ... ## Goal ...

ESM only · Node 18+ · import works, require() does not · Full API reference →

All 11 tools

Metered tools count against your usage quota. Free tools are always unlimited.

Tool | Tier | Purpose
optimize_prompt | Metered | Analyze, score, compile, estimate cost → PreviewPack
refine_prompt | Metered | Answer questions, add edits → updated PreviewPack
approve_prompt | Free | Sign-off gate → final compiled prompt
estimate_cost | Free | Multi-provider token + cost estimator
compress_context | Free | Prune irrelevant context, report savings
check_prompt | Free | Quick pass/fail + score + top issues
configure_optimizer | Free | Set mode, threshold, strictness, target
get_usage | Free | Usage count, limits, remaining quota
prompt_stats | Free | Aggregated stats: avg score, top tasks, savings
set_license | Free | Activate Pro or Power license key
license_status | Free | Check license, tier, and expiry

FAQ

How do I measure prompt quality?
Prompt Optimizer scores every prompt 0–100 across five dimensions: clarity, specificity, completeness, constraints, and efficiency. Each point deducted has a specific, transparent reason. The before/after delta shows exactly what improved.
How can I reduce LLM token cost?
Two ways: use the context compressor to strip irrelevant input (reports exact token savings), and use the cost estimator to compare pricing across 8 models before you run. Structured prompts also reduce retry cycles — clearer instructions mean fewer wasted calls. Cost estimates are approximate — validate for billing-critical workflows.
Does this call external APIs?
No. Zero LLM calls inside. All analysis is deterministic — regex, heuristics, and rule engines. Your prompts never leave your machine. The host Claude (or any LLM client) provides all intelligence; the optimizer provides structure.
What is MCP?
Model Context Protocol — an open standard that lets AI assistants like Claude, Cursor, and Windsurf use external tools. Think of it as plugins for AI. Not using MCP? The programmatic API works as a standalone Node.js library.
Does it work with OpenAI and Claude?
Yes. Prompts compile to three formats: Claude XML (<role>, <goal>, <constraints>), OpenAI system/user split, and generic Markdown. The cost estimator covers models from Anthropic, OpenAI, and Google.
Is it free?
The free tier includes 10 optimizations, unlimited scoring and checking, and all 11 tools. Pro ($4.99/month) adds 100 optimizations per month. Power ($9.99/month) is unlimited. No credit card required to start.

Get started in under 2 minutes

Free tier includes 10 optimizations, unlimited scoring, and all 11 tools. No credit card required.

Free
$0
forever
  • 10 optimizations total
  • Unlimited scoring & checking
  • 5 per minute rate limit
  • All 11 tools
  • All 3 output formats
Get Started
Power
$9.99
per month
  • Unlimited optimizations
  • Unlimited scoring & checking
  • 60 per minute rate limit
  • Always-on mode
  • Priority support
Get Power

Have a promo code? Enter it at checkout for a discount.

Install

Add to any MCP-compatible client (Claude Code, Cursor, Windsurf) in 10 seconds.

MCP config (recommended)
{
  "mcpServers": {
    "prompt-optimizer": {
      "command": "npx",
      "args": ["-y", "claude-prompt-optimizer-mcp"]
    }
  }
}
Or install globally via npm
npm install -g claude-prompt-optimizer-mcp
Or one-line curl install
curl -fsSL https://rishiatlan.github.io/Prompt-Optimizer-MCP/install.sh | bash

Activate your license

After purchase, you'll receive a license key. Paste it into Claude Code:

Step 1 — Tell Claude to activate
Activate my Prompt Optimizer Pro license: po_pro_eyJ...
Step 2 — Verify
Check my prompt optimizer license status

That's it. No account, no login, no API key. The license key is verified offline using Ed25519 signatures.