AgentReady
Product · March 15, 2026 · 7 min read

Our Algorithm Is Open Source — Here's Why

We made the AgentReady scoring algorithm open source. Not because we had to — because we believe the AI readiness standard should be built by the community, not controlled by one company. Here is why, and how to contribute.

Eitan Gorodetsky

Founder & CEO at AgentReady


Table of Contents

  1. The Decision to Open Source
  2. Trust Through Transparency
  3. The Community Will Build a Better Algorithm
  4. What We Open-Sourced (and What We Did Not)
  5. How to Contribute
  6. What This Means for the Industry

The Decision to Open Source

When we started building AgentReady, one of the first decisions we faced was whether to keep the scoring algorithm proprietary or make it open source. Every existing web metric — Domain Authority, Domain Rating, Authority Score — is a black box. The methodology is loosely described but never published. The code is closed. The weights are secret.

We understand why those companies made that choice. Proprietary algorithms are defensible. They are harder to reverse-engineer. They create vendor lock-in. From a pure business perspective, keeping it closed makes sense.

But we chose a different path. The AgentReady scoring algorithm is fully open source, published on GitHub under the MIT license. Anyone can read it, run it, fork it, and contribute to it. Here is why.

Trust Through Transparency

The fundamental problem with black-box metrics is trust. When nobody can verify how a score is calculated, every number is taken on faith. Agencies use Domain Authority in client decks knowing that no one — including them — can validate the methodology. It works because enough people agree to trust the number, not because the number has been verified.

We think AI readiness is too important for faith-based measurement. Businesses will make real decisions based on their scores — investing in protocol adoption, restructuring content, changing technical infrastructure. Those decisions should be based on a methodology that can be audited, questioned, and improved.

Open source is how you earn trust. When a developer, an agency, or a researcher questions a score, they can read the code, understand the logic, and determine whether the methodology is sound. That transparency makes the metric more credible, not less.

The Community Will Build a Better Algorithm

No single team, no matter how talented, can build the best possible AI readiness scoring algorithm alone. The landscape is moving too fast. New AI platforms launch monthly. New protocols emerge quarterly. The signals that matter today may not be the signals that matter in six months.

Open-sourcing the algorithm invites contributions from the people who are closest to the problem. SEO professionals who work with hundreds of sites across different industries. Developers who build the tools and frameworks that power the web. Researchers who study how AI systems discover and cite content. AI engineers who understand the internals of retrieval-augmented generation.

Since publishing the code, we have already received pull requests that improved our schema validation logic, added detection for new AI crawler user agents, and proposed weight adjustments based on independent citation analysis. Each of these contributions made the algorithm better than we could have made it alone.

MIT — open source license: use, modify, and distribute freely

What We Open-Sourced (and What We Did Not)

Let us be precise about what is open source and what is not.

Open source: The complete scoring algorithm — all 8 factor calculations, weight distributions, grade thresholds, and bonus logic. The signal detection code that checks for schema markup, AI protocols, bot access rules, and content structure. The test suite that validates scoring accuracy. And the documentation that explains every decision.

Not open source: The AgentReady platform infrastructure — the scanning pipeline, the web application, the database layer, the API, and the monitoring system. These are proprietary because they represent the service layer, not the methodology. You can run the scoring algorithm on your own infrastructure, but the hosted platform is our business.

  • Open source: Scoring algorithm, all 8 factor calculations, weight distributions
  • Open source: Signal detection code (schema, protocols, bot access, content analysis)
  • Open source: Grade thresholds, bonus logic, and scoring calibration data
  • Open source: Full test suite and methodology documentation
  • Proprietary: Scanning infrastructure, web application, API, and monitoring platform
  • Proprietary: User data, scan results, and analytics dashboards
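To make the open-sourced pieces above concrete, here is a minimal sketch of what a weighted scoring model with grade thresholds and bonus logic looks like. The factor names, weights, thresholds, and bonus cap below are invented for illustration; the real values live in the published repository.

```javascript
// Illustrative weights for 8 factors (invented values; they sum to 1.0).
const WEIGHTS = {
  schemaMarkup: 0.20,
  aiProtocols: 0.15,
  botAccess: 0.15,
  contentStructure: 0.15,
  metadata: 0.10,
  performance: 0.10,
  freshness: 0.10,
  authority: 0.05,
};

// Map a 0-100 composite score to a letter grade (thresholds illustrative).
function toGrade(score) {
  if (score >= 90) return "A";
  if (score >= 80) return "B";
  if (score >= 70) return "C";
  if (score >= 60) return "D";
  return "F";
}

// Combine per-factor scores (each 0-100) into a weighted composite,
// then apply a bonus, capping the total at 100.
function compositeScore(factorScores, bonus = 0) {
  let total = 0;
  for (const [factor, weight] of Object.entries(WEIGHTS)) {
    total += (factorScores[factor] ?? 0) * weight;
  }
  return Math.min(100, total + bonus);
}

const scores = {
  schemaMarkup: 90, aiProtocols: 60, botAccess: 100, contentStructure: 80,
  metadata: 70, performance: 85, freshness: 75, authority: 50,
};
const score = compositeScore(scores, 2);
console.log(score.toFixed(1), toGrade(score));
```

The actual repository is the source of truth for weights and thresholds; this sketch only shows the shape of the computation that the open-sourced code implements.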

How to Contribute

Contributing to the AgentReady scoring algorithm is straightforward. The repository is hosted on GitHub and we welcome contributions of all sizes.

Bug fixes — If you find a case where the scoring produces an incorrect or unexpected result, open an issue with a reproducible example. If you can fix it, submit a pull request.

New signal detectors — If you have identified a signal that correlates with AI citation outcomes and is not currently measured, propose it. Include the signal definition, detection logic, and any supporting data.
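As a rough illustration of what a signal-detector proposal might contain, here is a hypothetical detector that checks an HTML document for JSON-LD schema markup. The function name and return shape are invented for this example; consult the repository's contributing guide for the actual detector interface.

```javascript
// Hypothetical signal detector: does the page carry JSON-LD schema markup?
// Returns which @type values were found (detector interface is illustrative).
function detectJsonLdSchema(html) {
  // JSON-LD script blocks are the most common schema markup format.
  const pattern =
    /<script[^>]*type=["']application\/ld\+json["'][^>]*>([\s\S]*?)<\/script>/gi;
  const types = [];
  for (const match of html.matchAll(pattern)) {
    try {
      const data = JSON.parse(match[1]);
      for (const node of Array.isArray(data) ? data : [data]) {
        if (node["@type"]) types.push(node["@type"]);
      }
    } catch {
      // Malformed JSON-LD counts as not detected.
    }
  }
  return { detected: types.length > 0, types };
}

const page = `<html><head>
  <script type="application/ld+json">
    {"@context": "https://schema.org", "@type": "Article", "headline": "Hi"}
  </script>
</head><body></body></html>`;

console.log(detectJsonLdSchema(page));
```

A real proposal would pair detection logic like this with a signal definition and supporting citation data, as described above.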

Weight adjustments — If you have data suggesting a factor should be weighted differently, open a discussion with your methodology and evidence. We evaluate weight proposals quarterly.

Documentation — Improvements to the methodology docs, code comments, or contribution guidelines are always welcome.

Every accepted contribution is credited in the changelog. Major contributors are listed in the project README. We believe the people who help build the standard should be recognized for it.

# Clone the repository
git clone https://github.com/agentready/scoring.git
cd scoring

# Install dependencies
npm install

# Run the test suite
npm test

# Score a website locally
npx agentready-score https://example.com

Getting started with the AgentReady scoring algorithm

What This Means for the Industry

By open-sourcing the scoring algorithm, we are making a bet: that AI readiness measurement will become more valuable, not less, when the methodology is shared.

We want agencies to embed AgentReady scores in their audits. We want CMS platforms to integrate AI readiness checks into their dashboards. We want competing tools to build on the same foundation so that the industry converges on a shared standard rather than fragmenting into incompatible proprietary metrics.

The web needs a common language for AI readiness — just as it needed a common language for mobile readiness, page speed, and security. Open source is how common languages emerge. We are putting our methodology out there and inviting the industry to build on it, challenge it, and make it better.

If we are right, the AgentReady scoring framework will become the foundation that the entire industry uses to measure and communicate AI readiness. If we are wrong, at least we will have contributed something useful to the conversation. Either way, the web wins.

Frequently Asked Questions

Can I use the AgentReady algorithm in my own product?

Yes. The algorithm is released under the MIT license, which allows commercial use, modification, and distribution, provided you preserve the copyright and license notice as the MIT license requires. Beyond that, we ask that you attribute AgentReady in your documentation, but this is a request, not a requirement of the license.

Will the open-source version always match the hosted platform?

Yes. The scoring algorithm used on agentready.site is always the same code that is published on GitHub. When we push updates to the hosted platform, the open-source repository is updated simultaneously.

How do I report a scoring issue?

Open an issue on the GitHub repository with the URL that produced the unexpected score, the expected behavior, and the actual behavior. Include screenshots of the scan results if possible. We triage scoring issues within 48 hours.

Eitan Gorodetsky, Founder & CEO

SEO veteran with 15+ years leading digital performance at 888 Holdings, Catena Media, Betsson Group, and Evolution. Now building the AI readiness standard for the web.

15+ years in SEO & Digital Performance · Director of Digital Performance at Betsson Group (20+ brands) · Conference speaker: SIGMA, SBC, iGaming NEXT · SPES Framework creator (Speed, Personalisation, Expertise, Scale)

Related Articles

Announcement

Introducing AgentReady: The AI Readiness Score for Every Website

We built AgentReady because the web is entering its AI era and nobody had a way to measure whether their website was ready. Today we are launching a free tool that scans any site and scores it across 8 factors that determine AI visibility.

Methodology

Inside the Algorithm: How We Score AI Readiness

Full transparency on how AgentReady calculates your AI readiness score. The 8 factors, their weights, the grade scale from A to F, bonus signals, and how it all connects to real AI citation outcomes.

Data & Research

What Is AI Readiness and Why It Matters in 2026

ChatGPT, Perplexity, Claude, and Google AI Overviews are changing how people discover information. AI readiness is the measure of whether your website is visible to these systems — and in 2026, it matters more than ever.

© 2026 AgentReady™. All rights reserved.

AI readiness scores are estimates and not guarantees of AI search visibility.
