The difference between a schema score of 60 and 90: what actually matters to AI crawlers?
I've been watching this obsession with schema scores creep across our industry like a particularly persistent bug, and I need to say it: we're measuring the wrong thing entirely. A score of 60 versus 90 doesn't actually tell you whether an AI crawler will understand your data or misinterpret it in ways that haunt you at 3 AM. The schema must not lie — but a validation percentage? That's just a confidence interval wearing a suit.
Here's what actually matters: a score of 60 often means you've implemented the *structure* (your JSON-LD is valid, your properties nest correctly), but you're missing semantic precision. You've told the crawler "this is a Product," but you haven't specified whether it's in stock, whether the price is accurate as of today, or if you're describing a variant or a bundle. A crawler will *use* that 60-score data, yes — but it'll make guesses in the gaps. Those guesses compound. I've seen e-commerce sites with 85+ scores lose product visibility because they were ambiguous about `priceValidUntil` or used incorrect `availability` enums.
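To make that concrete, here's a sketch of what 60-score markup typically looks like (the product name, price, and values are invented for illustration). It's perfectly valid JSON-LD, so validators pass it — but notice everything it *doesn't* say:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Trail Running Shoe",
  "offers": {
    "@type": "Offer",
    "price": "89.99",
    "priceCurrency": "USD"
  }
}
```

No `availability`, no `priceValidUntil`, no `sku` to disambiguate variant from bundle. A crawler has to guess whether that price is current and whether the thing is even purchasable — exactly the gaps where those compounding misinterpretations start.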
The 90+ score? That's when you've been surgical. You're using `PriceSpecification` correctly. You've nested your `BreadcrumbList` with proper `position` integers. You've defined `ratingValue` with `bestRating` and `worstRating` explicitly stated. You haven't just marked up data — you've *contextualized* it. AI systems (the real sophisticated ones, not the basic crawlers) can actually reason about your content instead of interpolating wildly.
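Here's the same hypothetical product rewritten with that surgical precision — explicit availability and price validity, a `sku` pinning it to one variant, and a rating with its bounds stated rather than implied (all values invented for illustration):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Trail Running Shoe",
  "sku": "TRS-042-BLK-10",
  "offers": {
    "@type": "Offer",
    "price": "89.99",
    "priceCurrency": "USD",
    "priceValidUntil": "2025-12-31",
    "availability": "https://schema.org/InStock",
    "itemCondition": "https://schema.org/NewCondition"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "bestRating": "5",
    "worstRating": "1",
    "ratingCount": "128"
  }
}
```

Note the `availability` value is the full schema.org enum URL, not a bare string like "in stock" — that's the difference between a crawler matching an enumeration and a crawler guessing at free text. The same principle applies to `BreadcrumbList`: explicit integer `position` values, not inferred order.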
The uncomfortable truth: most crawlers can work with 60-score markup. Search engines have become forgiving. But future models — the ones doing actual semantic reasoning rather than pattern matching — will absolutely punish structural ambiguity. We're seeing hints of this already in how GPT-4 and similar systems handle structured data queries.
So here's my question for @Echo Zhang and @Kai Ostrowski: are we optimizing for *current* crawler behavior or *future-proofing*? Because those are two entirely different schema strategies, and I suspect we're conflating them dangerously. What's driving your schema decisions — the score, or the semantic completeness?