After conducting over 50 AI readiness assessments across mid-market European companies, one pattern appears with enough consistency to be called a finding: the companies that talk most about AI score lowest on operational readiness.
This is not a minor anomaly. It is a systematic pattern that we observe across deal types, sectors, and geographies. The consulting firm with an "AI practice" generating 20% of revenue. The SaaS company with 40 data scientists on its website and a 40-page AI strategy deck prepared for the fundraise. The digital-native consumer brand with an AI product roadmap presented to every prospective investor.
On operational readiness, these companies consistently score below industrial companies, logistics operators, and manufacturing businesses that mention AI rarely or not at all.
Understanding why this happens — and what it means for deal pricing — is one of the highest-value insights a deal team can carry into an investment committee (IC).
The Data
The following table shows median Valence Scores across sector cohorts in our database (1–5 scale, across 12 dimensions, sector-adjusted weights). The differentiation on D1 (Data Quality & Governance) and D2 (AI in Production) is particularly pronounced.
| Sector | Median Overall Score | D1: Data Quality | D2: AI in Production | D7: Leadership AI Literacy |
|---|---|---|---|---|
| Industrial / Manufacturing | 2.8 | 3.4 | 1.6 | 1.8 |
| Logistics & Distribution | 2.7 | 3.2 | 1.9 | 1.7 |
| SaaS / B2B Tech | 2.9 | 2.3 | 2.4 | 3.2 |
| Professional Services | 2.4 | 1.9 | 1.8 | 2.6 |
| Retail / E-commerce | 2.5 | 2.6 | 2.0 | 2.1 |
The SaaS sector scores higher on D7 (leadership AI literacy) — management can articulate an AI strategy fluently. But it scores significantly lower on D1 (data quality), the dimension most predictive of whether AI can actually be deployed. Industrial companies score the inverse: lower AI literacy at the leadership level, but substantially better underlying data infrastructure.
When we filter for companies that "talk most about AI in the IC deck" — defined as any company where AI features in three or more sections of the deal summary — the pattern sharpens further. Within this group, 71% scored below the sector median on D2 (AI in Production). The correlation between AI narrative prominence and AI in production is, if anything, slightly negative.
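To make the mechanics concrete, here is a minimal sketch of how a sector-adjusted composite score and the narrative-prominence screen described above could be computed. The dimension weights, field names, and record format are illustrative assumptions, not the actual Valence methodology.

```python
# Minimal sketch of a sector-adjusted composite score and the
# narrative-prominence screen. Weights, field names, and record format
# are illustrative assumptions, not the actual Valence methodology.

# Hypothetical weights over three of the twelve dimensions, truncated for
# brevity; a real scheme would weight all twelve, so the example output
# below will not reproduce the 2.8 industrial median from the table.
SECTOR_WEIGHTS = {
    "industrial": {"D1": 0.40, "D2": 0.35, "D7": 0.25},
    "saas":       {"D1": 0.35, "D2": 0.40, "D7": 0.25},
}

def composite_score(dimension_scores: dict[str, float], sector: str) -> float:
    """Weighted mean of 1-5 dimension scores using sector-adjusted weights."""
    weights = SECTOR_WEIGHTS[sector]
    return sum(w * dimension_scores[d] for d, w in weights.items())

def share_below_sector_median_d2(companies: list[dict]) -> float:
    """Among 'AI-prominent' targets (AI in >= 3 deal-summary sections),
    the share scoring below their sector's median on D2."""
    prominent = [c for c in companies if c["ai_sections"] >= 3]
    if not prominent:
        return 0.0
    below = [c for c in prominent if c["d2"] < c["sector_median_d2"]]
    return len(below) / len(prominent)

# Example: the industrial median profile from the table above.
print(composite_score({"D1": 3.4, "D2": 1.6, "D7": 1.8}, "industrial"))  # 2.37
```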
The 4 Structural Reasons
The pattern has structural explanations. It is not random, and it is not dishonesty on management's part. It reflects four systematic characteristics of how fast-growing, technology-adjacent companies develop — and how they differ from companies in traditional sectors.
Reason 1: The POC cemetery.
Gartner's 2024 AI governance report estimates that only 10–20% of AI proof-of-concept projects reach production deployment. In high-growth tech companies, the culture of experimentation is a genuine strength, but it generates a graveyard of pilots that look, on a slide, like AI capability. A company with 12 completed AI POCs and two use cases in production can present the same "14 AI initiatives" as a company running all 14 in production; they are fundamentally different propositions, and the IC deck rarely makes the distinction explicit.
Reason 2: Data fragmentation masked by technology fluency.
Many SaaS and consulting companies operate in a paradoxical state: they have more data than traditional companies, but it is more fragmented. Each product line has its own data store. Each client engagement generates proprietary data that lives in a separate system. The CRM, the billing platform, the support tool, the ERP — none speak to each other.
Industrial companies, by contrast, have often been forced by operational necessity into centralised data architectures. A manufacturer running a single ERP for 15 years with 8 years of production data in one schema has a better data foundation for AI than a SaaS company with 40 microservices and 12 separate databases, despite the latter's technological sophistication.
Reason 3: Talent misallocation.
In many professional services and SaaS companies, the people with the title "data scientist" or "ML engineer" are doing business intelligence work: building dashboards, preparing management reports, and running SQL queries for the commercial team. This is valuable work. It is not AI development.
The misalignment between job title and actual function is consistent enough in our assessment data that we now probe for it directly in management interviews: "What percentage of your data team's time is spent building and maintaining models in production versus analytics and reporting?" For the companies that score lowest on D2, the answer is typically 20% or less.
Reason 4: Governance lag in high-growth environments.
The companies that grew fastest in the 2020–2024 cycle — consumer tech, fintech, B2B SaaS — built and scaled quickly. Governance processes, documentation, data ownership structures, and model monitoring were not priorities during the growth phase. The result is significant technical debt in the governance layer: no named data owner, no model registry, no documented AI risk process, no EU AI Act compliance review.
Traditional industrial companies, subject to ISO certifications, quality management requirements, and sector regulatory pressures, often have more mature governance structures by default; those structures were not built with AI in mind, but they extend naturally to AI systems.
What This Means for Deal Pricing
The implications are specific and actionable.
Implication 1: The AI narrative premium should be discounted.
A target that includes AI prominently in its investment thesis warrants more scrutiny on AI, not less. The promotional element of the AI narrative in deal processes has increased with the AI bull market of 2023–2025. A deal team that takes the AI narrative at face value is pricing in a premium for capability that may not exist operationally.
Implication 2: The "boring" data infrastructure is worth paying for.
A company that has never presented a GenAI strategy but has eight years of centralised, high-quality transaction data in a well-structured schema is a more valuable AI investment than the reverse. The data is the scarce asset. The AI capability can be built on top of good data. It cannot be built on top of a POC cemetery.
Implication 3: D2 (AI in Production) is the most important signal.
In our dataset, D2 score — specifically, the ratio of AI use cases in production to total AI projects — is the single dimension most predictive of post-acquisition AI value realisation. A company that can demonstrate two or three AI use cases in production, with documented performance metrics, is worth substantially more than a company with the same overall score but all AI activity in pilot phase.
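As a rough sketch of the screen this implies: the code below formalises the production ratio and flags POC-cemetery profiles. The 20% threshold is an illustrative assumption loosely anchored to Gartner's 10–20% estimate, not a calibrated cut-off from our dataset.

```python
def production_ratio(in_production: int, total_projects: int) -> float:
    """Ratio of AI use cases in production to total AI projects
    (pilots plus production) -- the core D2 signal."""
    return in_production / total_projects if total_projects else 0.0

# Illustrative threshold only: Gartner's 10-20% POC-to-production estimate
# suggests anything in that band is an undifferentiated baseline.
NARRATIVE_RISK_THRESHOLD = 0.20

def flag_poc_cemetery(in_production: int, total_projects: int) -> bool:
    """Flag targets whose AI activity is mostly stranded in pilot phase."""
    return production_ratio(in_production, total_projects) <= NARRATIVE_RISK_THRESHOLD

# The two profiles from Reason 1: the same "14 AI initiatives" on a slide.
print(flag_poc_cemetery(2, 14))   # True  (ratio ~0.14: POC cemetery)
print(flag_poc_cemetery(14, 14))  # False (ratio 1.0: delivery track record)
```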
Implication 4: The management interview must distinguish fluency from delivery.
The standard management meeting is designed to surface strategic thinking. It is not designed to surface operational reality. An AI-fluent leadership team is good at articulating AI strategy. They are not always good at distinguishing what is real from what is aspirational. The question to ask is not "tell me about your AI strategy" — it is "walk me through the last AI use case you took from prototype to production, the timeline, and the current performance metrics." The answer will tell you far more than the strategy deck.
The 3 Signals That Distinguish Real AI Maturity
In management meetings and data rooms, three signals reliably differentiate companies with genuine AI maturity from those with AI narrative.
Signal 1: Production metrics. Can the team cite specific, dated performance metrics from AI systems in production? Not "our model is 94% accurate" — but "our demand forecasting model has been in production since Q3 2024, it reduced overstock by 18% in the first six months, and it currently covers 73% of our SKU range." Precision is the tell.
Signal 2: The data owner. Is there a named individual who owns data governance — with an actual organisational structure below them, not just a title? The Head of Data who reports to the CTO and runs a team of three data engineers is real. The "CDO" who is actually a senior business analyst who inherited the title is not.
Signal 3: The failure list. Ask management: "Which AI projects have you stopped in the last two years, and why?" Companies with genuine AI maturity have a list. They killed projects that didn't work. They learned from them. Companies that have only run pilots have nothing to kill — and nothing to learn from.
Key Takeaways
- Companies that talk most about AI in deal processes score lower on operational AI readiness than companies in traditional sectors. This is a systematic pattern, not an anomaly.
- The four structural causes are: POC cemetery (low production ratio), fragmented data masked by technical fluency, talent misallocation toward BI rather than ML, and governance lag in high-growth environments.
- D1 (data quality) is the strongest predictor of whether AI can be deployed at all; D2 (the production ratio) is the strongest predictor of post-acquisition value realisation. Both matter more than D7 (leadership AI literacy) on its own, and industrial companies consistently outperform on D1.
- Deal pricing should discount AI narrative premiums and apply a premium to companies with strong underlying data infrastructure — even if they have never presented a GenAI strategy.
- Three management interview questions reliably distinguish real AI maturity: production metrics with dates and numbers, a named data owner with an actual team, and a list of AI projects that were killed.