Agent Readiness Framework (ARF)
Version 1.0.0 — Published March 16, 2026
Abstract
The Agent Readiness Framework (ARF) defines a standardized methodology for measuring how prepared a website is for interaction with AI agents. As AI-powered search engines, assistants, and autonomous agents become primary channels for content discovery, websites must be optimized not only for human visitors but for machine comprehension. ARF provides a quantitative scoring system (0–100), eight weighted evaluation factors, a letter-grade scale, protocol compliance requirements, and a tiered certification program. This specification is published under the CC-BY-4.0 license to encourage open adoption and interoperability.
1. Introduction
The web is undergoing a fundamental shift. AI engines such as ChatGPT, Perplexity, Google AI Overviews, and Claude are becoming primary interfaces through which users discover and consume information. When a user asks an AI assistant a question, the assistant crawls, parses, and synthesizes web content to generate an answer — and decides which sources to cite.
This creates a new competitive dimension: AI readiness. A website that is invisible, poorly structured, or inaccessible to AI crawlers will be excluded from AI-generated answers, losing a rapidly growing channel of traffic and influence.
The Agent Readiness Framework (ARF) was created to give website owners, developers, and SEO professionals a clear, measurable, and actionable standard for AI readiness. It answers the question: “How well can AI agents find, understand, and cite my website?”
1.1 Design Goals
- Quantitative — Produce a single numeric score (0–100) that is easy to communicate and track over time.
- Actionable — Each factor maps to specific, implementable changes.
- Transparent — All weights, thresholds, and scoring criteria are published openly.
- Extensible — The framework is versioned and designed for evolution as AI protocols mature.
- Machine-readable — The specification is available as both a human-readable document and a JSON schema.
2. Terminology
- AI Readiness Score
- A composite numeric score from 0 to 100 representing the overall readiness of a website for AI agent interaction. Computed as a weighted sum of eight factor scores plus applicable bonuses.
- Factor
- One of eight scoring dimensions that contribute to the overall AI Readiness Score. Each factor has a defined weight and is scored independently on a 0–100 scale.
- Weight
- The proportional contribution of a factor to the overall score. Weights sum to 1.00 (100%).
- Grade
- A letter classification (A+ through F) derived from the numeric AI Readiness Score. Provides a quick qualitative assessment.
- Protocol
- A technical standard or file format that websites can implement to improve AI agent interaction (e.g., llms.txt, MCP, NLWeb, robots.txt).
- AI Agent
- Any software system that autonomously crawls, parses, or interacts with web content on behalf of an AI model. Includes search crawlers (GPTBot, ClaudeBot), AI assistants, and autonomous agents.
- Bonus
- Additional points awarded to the overall score when exceptional conditions are met (e.g., all AI bots allowed, 100% schema coverage, all protocols implemented).
- Certification Tier
- A level of formal recognition (Bronze, Silver, Gold, Platinum) awarded to sites that meet specific score and protocol requirements.
3. Scoring Methodology
The AI Readiness Score is computed as a weighted sum of eight independently scored factors, plus bonus points for exceptional implementations. The final score is capped at 100.
3.1 Formula
Score = min(100, content × 0.20 + botAccess × 0.20 + schema × 0.18 + entities × 0.10 + protocols × 0.10 + authority × 0.10 + crawlEfficiency × 0.07 + speed × 0.05 + bonuses )
Each factor score ranges from 0 to 100. Factor scores are calculated based on specific criteria described in Section 4.
3.2 Bonus Points
Bonus points are awarded for exceptional implementations and added to the weighted sum before capping at 100:
| Bonus | Points | Condition |
|---|---|---|
| All Bots Allowed | +5 | Googlebot, ChatGPT-User, PerplexityBot, and ClaudeBot all allowed in robots.txt |
| Full Schema Coverage | +5 | 100% of crawled pages contain valid JSON-LD structured data |
| All AI Protocols | +10 | llms.txt, NLWeb, and MCP all detected |
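The formula in Section 3.1 together with the bonus rules above can be sketched in JavaScript. The weights and the 100-point cap are taken directly from this specification; the function name and input shape are illustrative, not part of the spec:

```javascript
// Factor weights from Section 3.1 (they sum to 1.00).
const ARF_WEIGHTS = {
  content: 0.20,
  botAccess: 0.20,
  schema: 0.18,
  entities: 0.10,
  protocols: 0.10,
  authority: 0.10,
  crawlEfficiency: 0.07,
  speed: 0.05,
};

// factors: object mapping factor name -> 0..100 score.
// bonuses: total bonus points per Section 3.2.
// Missing factors default to 0; the final score is capped at 100.
function computeScore(factors, bonuses = 0) {
  const weighted = Object.entries(ARF_WEIGHTS)
    .reduce((sum, [name, w]) => sum + (factors[name] ?? 0) * w, 0);
  return Math.min(100, weighted + bonuses);
}
```

A perfect site with all three bonuses would reach 100 + 20 before capping, so the published ceiling remains 100.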
3.3 Score Floors
All factor score floors are currently set to 0. No minimum score is enforced per category in Algorithm v2.0. This may change in future versions to prevent zero-scoring in any single area.
4. Factor Definitions
Each factor is scored independently on a 0–100 scale. The following sections describe what each factor measures, its weight, and its scoring criteria.
4.1 Content Quality
Weight: 20%
Evaluates how well content is structured for AI consumption. AI engines prefer content that leads with answers, uses clear heading hierarchies, and signals freshness.
Scoring Criteria:
- Answer-first content in opening 200 words (up to +30 points)
- FAQ sections detected (+15 points)
- Clean H1 > H2 > H3 heading hierarchy (+15 points)
- Readability score (up to +20 points)
- Visible “last updated” date (+10 points)
- Original content signals detected (+10 points)
4.2 Bot Access
Weight: 20%
Measures whether AI crawlers can access site content. A site that blocks AI bots in robots.txt is invisible to AI engines regardless of content quality.
Scoring Criteria:
- Googlebot allowed (+20 points)
- ChatGPT-User allowed (+20 points)
- PerplexityBot allowed (+20 points)
- ClaudeBot allowed (+20 points)
- XML sitemap present (+10 points)
- XML sitemap valid (+10 points)
4.3 Schema Markup
Weight: 18%
Measures the presence, coverage, and quality of JSON-LD structured data. AI engines use schema markup to understand content type, relationships, and context.
Scoring Criteria:
- Any schema markup detected (+20 points)
- Coverage percentage across pages (up to +30 points)
- Unique schema type diversity (up to +25 points, 5 per type)
- Schema validation pass rate (up to +25 points)
4.4 Topic Clarity (Entities)
Weight: 10%
Assesses the density and quality of named entities in content. AI engines prefer entity-rich content with clear author attribution and organizational identity (E-E-A-T signals).
Scoring Criteria:
- Entity density: 5+ per 100 words (+80), 3–4.9 (+60), 1–2.9 (+30)
- Author entities detected (+10 points)
- Organization entity detected (+10 points)
4.5 AI Protocols
Weight: 10%
Checks for implementation of AI-specific protocols that enable direct machine interaction beyond simple crawling.
Scoring Criteria:
- llms.txt file present (+25 points)
- NLWeb endpoint detected (+30 points)
- MCP configuration found (+25 points)
- AI sitemap present (+10 points)
- Structured API endpoint (+10 points)
4.6 Authority & Trust
Weight: 10%
Scores credibility signals that AI engines use to determine source trustworthiness. Uses entity authorship, organizational presence, schema richness, and freshness as proxies for authority.
Scoring Criteria:
- Author entities present (+25 points)
- Organization entity present (+20 points)
- Organization schema type detected (+15 points)
- Person schema type detected (+10 points)
- Schema validation rate ≥ 90% (+15 points)
- Content freshness signals (last updated date) (+15 points)
4.7 Crawl Health
Weight: 7%
Evaluates how efficiently AI bots can navigate and discover content across the site. Considers sitemap quality, bot blocking behavior, and structural clarity.
Scoring Criteria:
- XML sitemap present (+25 points)
- XML sitemap valid (+15 points)
- Schema coverage: 80%+ (+20), 50–79% (+10)
- No blocked bots (+20), 1–2 blocked (+10)
- Clean heading structure (+10 points)
- AI sitemap present (+10 points)
4.8 Speed & Performance
Weight: 5%
Measures page load performance using Core Web Vitals. Faster sites are more likely to be fully crawled and cited by AI engines.
Scoring Criteria:
- FCP: <0.4s (+40), <1.0s (+25), <2.0s (+10)
- LCP: <2.5s (+30), <4.0s (+15)
- CLS: <0.1 (+15), <0.25 (+8)
- Mobile score > 80 (+15 points)
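The threshold bands above can be sketched as a small scoring function. Thresholds and point values are copied from the criteria list; the function name and metric input shape (FCP/LCP in seconds, CLS as a unitless ratio) are illustrative assumptions:

```javascript
// Speed & Performance factor per Section 4.8.
// Each metric contributes points from its best matching band;
// the bands are mutually exclusive, so the maximum is 40+30+15+15 = 100.
function speedScore({ fcp, lcp, cls, mobileScore }) {
  let pts = 0;
  if (fcp < 0.4) pts += 40;       // First Contentful Paint
  else if (fcp < 1.0) pts += 25;
  else if (fcp < 2.0) pts += 10;
  if (lcp < 2.5) pts += 30;       // Largest Contentful Paint
  else if (lcp < 4.0) pts += 15;
  if (cls < 0.1) pts += 15;       // Cumulative Layout Shift
  else if (cls < 0.25) pts += 8;
  if (mobileScore > 80) pts += 15;
  return Math.min(100, pts);
}
```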
5. Grade Scale
The numeric AI Readiness Score maps to a letter grade for quick qualitative assessment. Grade thresholds are defined in the getGrade() function of Algorithm v2.0.
| Grade | Score Range | Classification |
|---|---|---|
| A+ | 90 – 100 | Exceptional — Fully optimized for AI discovery and citation |
| A | 80 – 89 | Excellent — Strong AI readiness with minor improvements possible |
| B+ | 70 – 79 | Good — Solid foundation, some factors need attention |
| B | 65 – 69 | Above Average — Noticeable gaps in AI readiness |
| C+ | 55 – 64 | Average — Several areas need improvement |
| C | 45 – 54 | Below Average — Significant gaps in AI readiness |
| D | 35 – 44 | Poor — Major issues preventing AI discovery |
| E | 20 – 34 | Very Poor — Critical deficiencies across multiple factors |
| F | 0 – 19 | Failing — Site is largely invisible to AI engines |
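The table above can be expressed as a threshold lookup. The spec attributes grade mapping to a getGrade() function in Algorithm v2.0; the thresholds below are copied from the table, while this particular implementation is an illustrative sketch:

```javascript
// Map a numeric AI Readiness Score (0-100) to its letter grade
// per the Section 5 grade scale.
function getGrade(score) {
  if (score >= 90) return 'A+';
  if (score >= 80) return 'A';
  if (score >= 70) return 'B+';
  if (score >= 65) return 'B';
  if (score >= 55) return 'C+';
  if (score >= 45) return 'C';
  if (score >= 35) return 'D';
  if (score >= 20) return 'E';
  return 'F';
}
```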
6. Protocol Requirements
ARF evaluates implementation of specific web protocols that facilitate AI agent interaction. Protocols are categorized into three tiers based on their importance and complexity.
6.1 Required Protocols
These protocols are foundational. Without them, a site cannot achieve meaningful AI readiness.
robots.txt
Must be present at the site root. Must allow access for AI crawlers including GPTBot, Google-Extended, ClaudeBot, and PerplexityBot. Should reference the XML sitemap via a Sitemap: directive.
Spec: robotstxt.org
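A minimal robots.txt meeting these requirements might look like the following (the domain is a placeholder):

```text
User-agent: GPTBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

Sitemap: https://example.com/sitemap.xml
```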
XML Sitemap
A valid sitemap.xml listing all indexable pages with lastmod dates. Must be well-formed XML conforming to the Sitemaps 0.9 protocol.
Spec: sitemaps.org
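A well-formed single-entry sitemap conforming to the Sitemaps 0.9 protocol looks like this (URL and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2026-03-01</lastmod>
  </url>
</urlset>
```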
6.2 Recommended Protocols
These protocols significantly enhance AI readiness and are recommended for all sites seeking Silver certification or above.
llms.txt
A plain-text file at /llms.txt providing AI models with structured guidance about the site's purpose, content organization, key pages, and preferred citation format.
Spec: llmstxt.org
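A minimal llms.txt following the markdown-based layout described at llmstxt.org might look like this (the site name, summary, and links are placeholders):

```text
# Example Site

> One-sentence summary of what the site offers and who it serves.

## Key Pages

- [Documentation](https://example.com/docs): Product and API documentation
- [About](https://example.com/about): Company background and contact details
```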
JSON-LD Structured Data
Schema.org-compliant JSON-LD markup embedded on all pages. Should include type-appropriate schemas (Organization, Article, Product, FAQ, Person, etc.) with high coverage and validation rates.
Spec: schema.org
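For example, a page might embed an Organization schema like the following (names and URLs are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png"
}
</script>
```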
ai.txt
A configuration file declaring AI agent permissions, supported capabilities, preferred interaction methods, and contact information for AI-related inquiries.
6.3 Advanced Protocols
These protocols enable deep AI agent interaction and are required for Gold and Platinum certification.
Model Context Protocol (MCP)
Enables AI assistants to directly interact with site data through tool-based APIs. MCP servers expose structured tools that AI agents can invoke programmatically.
Spec: modelcontextprotocol.io
NLWeb
Natural language query endpoint allowing AI agents to ask questions about site content in plain language and receive structured responses.
OpenAPI Specification
Machine-readable API documentation enabling AI agents to discover, understand, and call site APIs without human intervention.
Spec: openapis.org
Agent Discovery
A JSON manifest at /.well-known/agent-discovery.json declaring AI agent capabilities, supported protocols, and interaction endpoints.
7. Certification Criteria
The AgentReady Certification Program recognizes websites that meet defined AI readiness standards. Each tier requires both a minimum score and implementation of specific protocols.
| Tier | Min Score | Required Protocols |
|---|---|---|
| Bronze | 40 | robots.txt, XML Sitemap |
| Silver | 60 | robots.txt, XML Sitemap, llms.txt, Structured Data |
| Gold | 80 | robots.txt, XML Sitemap, llms.txt, Structured Data, ai.txt, MCP |
| Platinum | 95 | All protocols |
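The tier rules above can be sketched as a lookup that returns the highest tier a site qualifies for. Tier names, minimum scores, and protocol lists are from the table ("All protocols" is expanded to the nine protocols in Section 6); the data shapes and function are illustrative:

```javascript
// Certification tiers per Section 7, ordered highest first so the
// first match is the best tier the site qualifies for.
const TIERS = [
  { name: 'Platinum', minScore: 95, protocols: ['robots.txt', 'XML Sitemap', 'llms.txt', 'Structured Data', 'ai.txt', 'MCP', 'NLWeb', 'OpenAPI', 'Agent Discovery'] },
  { name: 'Gold', minScore: 80, protocols: ['robots.txt', 'XML Sitemap', 'llms.txt', 'Structured Data', 'ai.txt', 'MCP'] },
  { name: 'Silver', minScore: 60, protocols: ['robots.txt', 'XML Sitemap', 'llms.txt', 'Structured Data'] },
  { name: 'Bronze', minScore: 40, protocols: ['robots.txt', 'XML Sitemap'] },
];

// Returns the tier name, or null if no tier's score and protocol
// requirements are both met.
function certificationTier(score, implementedProtocols) {
  const have = new Set(implementedProtocols);
  const tier = TIERS.find(
    t => score >= t.minScore && t.protocols.every(p => have.has(p))
  );
  return tier ? tier.name : null;
}
```

Note that both conditions are required: a score of 85 with only Silver-level protocols still certifies as Silver, not Gold.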
Certification is awarded based on the most recent scan results. Sites must maintain their score and protocol compliance to retain certification. Scores are re-evaluated on each rescan.
8. Machine-Readable Format
The complete ARF v1.0 specification is available as a machine-readable JSON document for programmatic consumption by AI agents, scoring tools, and compliance checkers.
8.1 Usage Example
```shell
# Fetch the ARF specification
curl https://agentready.site/arf-v1.json
```

```javascript
// Parse in JavaScript
const spec = await fetch('https://agentready.site/arf-v1.json')
  .then(r => r.json());

console.log(spec.arf_version);     // "1.0.0"
console.log(spec.scoring.factors); // Array of 8 factors
console.log(spec.scoring.grades);  // Grade thresholds
console.log(spec.certification);   // Tier requirements
```

8.2 Content Type
The JSON specification is served with Content-Type: application/json. Consumers should parse it as standard JSON. The schema follows a flat structure with top-level keys for scoring, protocols, and certification.
9. Changelog
v1.0.0 (Initial Release)
- Defined 8 scoring factors with weights based on Algorithm v2.0
- Established grade scale (A+ through F) with 9 tiers
- Published protocol requirements (Required, Recommended, Advanced)
- Defined certification tiers (Bronze, Silver, Gold, Platinum)
- Published machine-readable JSON specification
- Licensed under CC-BY-4.0
10. References
- llms.txt Specification — Standard for providing AI models with site guidance
- Model Context Protocol (MCP) — Anthropic's protocol for AI agent tool interaction
- Schema.org — Collaborative vocabulary for structured data on the web
- The Robots Exclusion Protocol — Standard for web crawler access control
- Sitemaps XML Protocol — Standard for listing indexable URLs
- OpenAPI Specification — Standard for describing REST APIs
- Core Web Vitals — Google's user-centric performance metrics
- AgentReady Scoring Methodology v2.0 — Implementation details of the scoring algorithm
- AgentReady Certification Program — Detailed certification requirements and badge program
This specification is published under the Creative Commons Attribution 4.0 International License. You are free to share and adapt this material for any purpose, including commercially, provided you give appropriate credit to AgentReady.