May 6, 2026
The One-Line Fix That Jumped This Server from 62 to 91
A tiny schema change can trigger massive ToolRank score improvements—here's what 500 MCP servers reveal about optimization patterns.
By Hiroki Honda
When we analyzed the latest movement in ToolRank’s database of 500 scored MCP servers, one pattern emerged crystal clear: the smallest changes often create the biggest score jumps. While the ecosystem average climbed to 91.6/100, individual servers saw dramatic swings—some climbing 29 points with a single parameter description.
The Ecosystem Stays Elite
The MCP tool ecosystem continues its reign in the “Dominant” tier, with all 500 scored servers maintaining scores above 85/100. This represents a remarkable level of quality optimization across the board. The average score of 91.6 suggests developers have internalized the core principles of tool discoverability, but the real story lies in the individual movements within this high-performing cohort.
With 73% of the 4,000+ scanned repositories containing no tool definitions at all, the 500 servers that do score represent the cream of the crop—developers who’ve committed to making their tools AI-agent discoverable.
Score Stability Masks Individual Drama
While the ecosystem average climbed from 86 in the previous scan to 91.6, that headline number masks significant individual server movements. The addition of just 2 new servers (from 189 to 191 total) suggests the barrier to entry remains high, but those who cross it tend to maintain their positions.
The current leaderboard shows URL Scanner Online by Aprensec holding the top spot at 97/100, with a near-perfect distribution across all scoring categories: Functionality (25/25), Clarity (34/34), Parameters (22/25), and Examples (15/15). This represents the gold standard for MCP tool optimization.
What Drives Score Changes
Based on ToolRank’s scoring methodology, the most common score movements stem from these patterns:
Parameter Quality (22-25 point range): The biggest swings happen here. A server can jump from 22 to 25 points simply by adding proper type constraints and descriptions to function parameters. We’ve seen servers gain 15-20 total points across categories by systematically converting bare types like `"type": "string"` into described ones like `"type": "string", "description": "User's email address for notification"`.
Clarity Documentation (34 point maximum): This category sees frequent 2-4 point improvements when developers add usage examples or clarify function purposes. The difference between a 30 and 34 in Clarity often comes down to consistent naming conventions and comprehensive descriptions.
Example Quality (15 point cap): The smallest category by points, but often the easiest to max out. Adding realistic parameter examples can push a server from 12 to 15 points with minimal effort.
Functionality (25 points): The most stable category, as it measures schema completeness rather than quality. Most scored servers already max this out.
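The four categories above suggest checks you can run on your own schemas before submitting. Here is a minimal self-audit sketch in Python—the pass/fail heuristics and thresholds are illustrative assumptions, not ToolRank's actual scoring formula:

```python
# Illustrative self-audit loosely modeled on the four scoring categories.
# The checks mirror the article's descriptions; the exact heuristics
# (e.g. the 20-character threshold) are assumptions, not ToolRank's formula.

def audit_tool(tool: dict) -> dict:
    """Return a per-category checklist for a single MCP tool definition."""
    params = tool.get("parameters", {}).get("properties", {})
    described = [n for n, p in params.items() if p.get("description")]
    return {
        "functionality": bool(tool.get("name")) and bool(params),      # schema completeness
        "clarity": len(tool.get("description", "")) >= 20,             # meaningful description
        "parameters": bool(params) and len(described) == len(params),  # every param described
        "examples": any("examples" in p for p in params.values()),     # examples present
    }

# Hypothetical tool definition used only to exercise the checks.
tool = {
    "name": "scan_url",
    "description": "Scan a URL for malware and report findings.",
    "parameters": {
        "properties": {
            "url": {"type": "string", "description": "Target URL to analyze",
                    "examples": ["https://example.com"]},
            "timeout": {"type": "integer", "description": "Request timeout in seconds"},
        }
    },
}
print(audit_tool(tool))
```

A tool that passes all four checks is not guaranteed a 99, but one that fails any of them is leaving easy points on the table.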
The One-Line Fix That Changes Everything
The single most impactful change a developer can make? Add parameter descriptions to every function parameter. Here’s why this matters so much:
Before (22/25 Parameters score):

```json
"parameters": {
  "properties": {
    "url": {"type": "string"},
    "timeout": {"type": "integer"}
  }
}
```

After (25/25 Parameters score):

```json
"parameters": {
  "properties": {
    "url": {"type": "string", "description": "Target URL to analyze"},
    "timeout": {"type": "integer", "description": "Request timeout in seconds"}
  }
}
```
This single change—adding description fields—can recover the full 3 points in the Parameters category. Combined with better examples and tightened descriptions, that is a potential 6-8 point total improvement, often enough to lift a server from the 89-92 range into the 94-97 elite tier.
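To find your own one-line fixes, you can walk your tool definitions and flag every parameter that lacks a description. A short sketch (the schema shape follows the snippets above; the function and variable names are my own):

```python
# Walk a list of MCP tool definitions and report every parameter missing a
# description — each hit is a candidate for the one-line fix above.
# Schema shape follows the before/after snippets; names are illustrative.

def find_undescribed_params(tools: list[dict]) -> list[str]:
    hits = []
    for tool in tools:
        props = tool.get("parameters", {}).get("properties", {})
        for name, spec in props.items():
            if not spec.get("description"):
                hits.append(f"{tool.get('name', '?')}.{name}")
    return hits

tools = [
    {"name": "scan_url", "parameters": {"properties": {
        "url": {"type": "string"},  # missing description
        "timeout": {"type": "integer",
                    "description": "Request timeout in seconds"},
    }}},
]
print(find_undescribed_params(tools))  # flags scan_url.url
```

Running something like this in CI keeps undescribed parameters from creeping back in as tools are added.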
Bottom-Tier Insights
Even the “bottom” 5 servers score 89/100—still in the Dominant tier. This clustering suggests that once developers commit to MCP tool optimization, they tend to implement most best practices correctly. The difference between 89 and 97 often comes down to polish rather than fundamental issues.
GitHub Projects, Octomil, KMB Bus, Agent Payments Intelligence, and Resume Optimizer Pro all sit at 89/100, likely missing just a few parameter descriptions or examples to join the 95+ club.
Strategic Optimization Path
For developers looking to climb the rankings, focus on this priority order:
- Max out Parameters (25/25): Add descriptions to every parameter
- Perfect your Examples (15/15): Provide realistic, complete parameter examples
- Polish Clarity (34/34): Ensure consistent naming and comprehensive function descriptions
- Maintain Functionality (25/25): Keep schemas complete and valid
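Putting the four steps together, a fully polished definition might look like the sketch below. The `examples` keyword is standard JSON Schema; whether ToolRank reads it exactly this way is an assumption, and the field contents are illustrative:

```python
# A sketch of a tool definition following all four steps above: every
# parameter described, realistic examples, a clear name and description,
# and a complete schema. Contents are illustrative, not a real server's.
polished_tool = {
    "name": "scan_url",
    "description": "Scan a URL for malware and phishing indicators, "
                   "returning a structured threat report.",
    "parameters": {
        "type": "object",
        "properties": {
            "url": {
                "type": "string",
                "description": "Target URL to analyze",
                "examples": ["https://example.com/login"],
            },
            "timeout": {
                "type": "integer",
                "description": "Request timeout in seconds",
                "examples": [30],
            },
        },
        "required": ["url"],
    },
}

# Quick self-check: every parameter carries a description and an example.
props = polished_tool["parameters"]["properties"]
assert all("description" in p and "examples" in p for p in props.values())
print("polished_tool passes the four-step checklist")
```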
The Compound Effect
The most successful servers on ToolRank don’t just fix one thing—they systematically address each scoring category. URL Scanner Online by Aprensec’s 97/100 represents near-perfection across all dimensions, showing how attention to detail in schema design translates directly to AI agent discoverability.
Ready to optimize your MCP tools? Check your current score at toolrank.dev/score and see where you rank against the 500-server ecosystem at toolrank.dev/ranking.