In most PE deal processes today, AI evaluation follows a consistent and dangerously informal pattern. The management team presents a slide showing GenAI experiments, a Copilot subscription, or a product roadmap that mentions "AI-native." The deal team nods. The IC memo gets a one-paragraph technology section. The deal closes.

Six months post-acquisition, the portfolio operations team discovers that 80% of the company's data lives in spreadsheets, the CTO just left, and the "AI roadmap" was a deck built for the fundraise.

The problem is not that deal teams are unsophisticated. The problem is that there is no standard framework for AI readiness assessment — the way there is for financial, legal, or commercial due diligence. AI is still evaluated subjectively, at a high level, by generalists. This produces systematic mispricing.

This article sets out the framework for structured AI readiness due diligence in mid-market deals.

Why AI Readiness Is Materially Relevant to Valuation

The claim that AI readiness affects deal pricing is not hypothetical. It operates through three concrete value levers.

Operational savings potential. The McKinsey Global Institute (2024) estimates that AI automation can reduce operational costs by 10–30% in functions like finance, customer service, and operations. For a company with a 15% EBITDA margin, even a 10% cost reduction on a cost base equal to 30% of revenue improves EBITDA margin by 300 basis points — a meaningful move on any multiple.
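
A minimal sketch of that arithmetic, with revenue normalised to 100 (all figures are illustrative):

```python
# Illustrative arithmetic for the EBITDA margin uplift described above.
# All figures are hypothetical; revenue is normalised to 100.

revenue = 100.0
ebitda_margin = 0.15
addressable_costs = 0.30 * revenue    # cost base in scope for AI automation
savings = 0.10 * addressable_costs    # 10% reduction on that base

new_ebitda_margin = ebitda_margin + savings / revenue
print(f"EBITDA margin: {ebitda_margin:.1%} -> {new_ebitda_margin:.1%}")
# EBITDA margin: 15.0% -> 18.0%  (+300 basis points)
```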

Revenue upside. Pricing optimization, lead scoring, and churn prediction each carry measurable uplift. These are not speculative: the gains are documented in post-deployment case studies across comparable companies. The issue is whether a target has the data infrastructure to capture them.

Risk and remediation cost. A target with significant tech debt, fragmented data architecture, or EU AI Act compliance gaps carries quantifiable remediation costs. These should appear in your model — not as a qualitative note, but as a line item.

| Value Lever | Typical Impact Range | Data Source |
|---|---|---|
| Operations automation (Finance, CS, Ops) | 10–30% cost reduction | McKinsey GI, 2024 |
| AI-driven pricing optimization | +2–5% revenue | McKinsey, 2023 |
| Lead scoring improvement | +15–20% conversion | Salesforce State of Sales, 2025 |
| Churn prediction reduction | −20–25% churn | Gainsight, 2024 |
| Tech debt remediation (if needed) | −5–15% of deal EV | Valence benchmark, 2026 |

The 12 Dimensions of AI Readiness

Evaluating AI readiness as a monolithic judgment ("good AI culture" or "not AI-native") produces noise, not signal. A structured assessment requires decomposing readiness into its constituent parts.

The Valence framework assesses 12 dimensions grouped into four pillars.

Pillar 1: Data & Technology Infrastructure

  • D1: Data Quality & Governance
  • D2: AI/ML Capabilities in Production
  • D3: Technology Stack & Technical Debt

Pillar 2: Talent & Organisation

  • D7: Leadership AI Literacy
  • D8: AI Talent Density
  • D9: Change Management Readiness

Pillar 3: Strategic Alignment

  • D10: Competitive AI Positioning
  • D11: Strategic AI Ambition
  • D12: AI Investment Roadmap

Pillar 4: Governance & Risk

  • D4: Regulatory & Compliance Posture
  • D5: Cybersecurity Posture
  • D6: Data Privacy & Ethics

Each dimension is scored 1–5 against a calibrated rubric, with sector-adjusted weights. The composite score is a weighted geometric mean — chosen specifically because it penalises extreme weaknesses rather than averaging them away.

A company that scores 5/5 on every dimension except data quality (1/5) is not a 4.7/5 company, which is what an equally weighted arithmetic mean would report. The geometric mean of the same scores is roughly 4.4, and it falls further as data quality carries more weight, correctly reflecting that a broken data foundation undermines everything built on it.
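
A minimal sketch of the scoring logic, assuming equal weights for illustration (actual sector-adjusted weights would be calibrated per mandate):

```python
import math

def composite_score(scores, weights):
    """Weighted geometric mean: exp(sum(w_i * ln(s_i)) / sum(w_i)).

    A single very low score drags the composite down sharply;
    an arithmetic mean would average it away.
    """
    log_sum = sum(w * math.log(s) for s, w in zip(scores, weights))
    return math.exp(log_sum / sum(weights))

# Eleven dimensions at 5/5, data quality (D1) at 1/5, equal weights.
scores = [1.0] + [5.0] * 11
weights = [1.0] * 12

arithmetic = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
print(f"arithmetic mean:    {arithmetic:.2f}")                        # 4.67
print(f"geometric mean:     {composite_score(scores, weights):.2f}")  # 4.37

# Tripling the weight on the weak dimension widens the gap further.
weights[0] = 3.0
print(f"weighted geometric: {composite_score(scores, weights):.2f}")  # 3.54
```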

What to Look For: Red Flags and Green Flags

Not every AI readiness gap is equal. Some are fixable in 12 months with budget. Others are structural and should change deal pricing or trigger conditions precedent.

Hard red flags — consider deal-blocking:

  • Active cybersecurity breach or unresolved data incident in the last 18 months
  • EU AI Act prohibited system in production (e.g., real-time biometric surveillance, social scoring)
  • Tech debt whose remediation cost exceeds 20% of deal enterprise value
  • Evidence of data fabrication in management reporting
  • Regulatory enforcement action pending on AI systems

Soft red flags — price in or condition:

  • No named data owner or CDO-level role
  • Data across 5+ disconnected systems with no centralisation plan
  • AI roadmap that references only pilots — nothing in production
  • Leadership team cannot articulate AI strategy in deal-specific terms
  • GDPR compliance gaps related to third-party LLM data transmission

Green flags — value-creating signals:

  • Proprietary, high-volume, structured datasets built over 3+ years
  • AI use cases in production with documented performance metrics
  • CTO or CDO with a verifiable AI delivery track record
  • EU AI Act self-assessment already completed
  • Clear board-level sponsorship of AI transformation

The Data Room Checklist: What to Request

A standard AI readiness data room request should include the following documents. Note that management's ability to produce these quickly is itself a signal.

| Document | What It Reveals | If Missing |
|---|---|---|
| Data architecture diagram | Centralisation, schema quality | Amber flag |
| AI/ML model inventory | Production vs. POC ratio | Red flag if empty |
| Tech debt register | Estimated remediation cost | Red flag if none |
| GDPR data flow map | LLM exposure, consent gaps | Red flag |
| IT security audit (last 12 months) | Breach history, posture | Red flag if >18 months old |
| AI roadmap with KPIs | Budget, commitment level | Amber flag |
| Head of Tech / CTO CV and tenure | Execution credibility | Amber flag if <12 months |

How to Weight AI Readiness in the Investment Thesis

The weight you assign to AI readiness should vary by the role AI plays in the value creation thesis.

If AI is central to the thesis — you are buying the company to accelerate an AI transformation, or you are paying a premium for AI capabilities — then AI readiness deserves the same rigour as financial DD. A full structured assessment is warranted.

If AI is a value-creation lever — you plan to improve EBITDA through AI post-close — then you need a reliable estimate of the savings potential and the cost to realise it. A targeted assessment focused on D1 (data quality) and D3 (tech stack) is the minimum viable scope.

If AI appears only in the growth story — the target's market is AI-exposed but the company itself is not deploying AI — then the relevant question is competitive positioning and disruption risk. D10 (competitive AI positioning) and D4 (regulatory posture) are the priority dimensions.

Integrating AI Readiness into the IC Process

A well-structured AI readiness assessment produces two outputs that belong in every IC memo:

The AI readiness score — a single comparable number (0–10 scale) with sector percentile. This allows the IC to benchmark the target against deals it has already seen and against market data.
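
A sketch of the percentile benchmarking, assuming the 1–5 composite has already been rescaled to the 0–10 reporting scale and using hypothetical sector benchmark scores:

```python
from bisect import bisect_left

def sector_percentile(score: float, sector_scores: list[float]) -> float:
    """Percentile rank of a target's readiness score against sector benchmarks."""
    ranked = sorted(sector_scores)
    return 100.0 * bisect_left(ranked, score) / len(ranked)

# Hypothetical composite scores from previously assessed targets in the sector.
benchmark = [3.1, 4.0, 4.4, 5.2, 5.8, 6.1, 6.5, 7.0, 7.6, 8.2]
print(f"{sector_percentile(6.3, benchmark):.0f}th percentile")  # 60th percentile
```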

The AI value bridge — a structured view of EBITDA impact: current EBITDA, plus identified AI savings, plus AI revenue uplift, minus remediation costs, minus required AI investment, equals potential exit EBITDA. This is the same logic as an operational improvement bridge, applied to AI.
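
A minimal sketch of the bridge logic, with hypothetical figures in EUR millions (the function and inputs are illustrative placeholders, not a benchmark model):

```python
def ai_value_bridge(
    current_ebitda: float,
    ai_savings: float,         # identified operational savings
    ai_revenue_uplift: float,  # EBITDA from pricing, churn, and lead-scoring gains
    remediation_cost: float,   # annualised tech debt / compliance remediation
    ai_investment: float,      # required run-rate AI spend
) -> float:
    """Potential exit EBITDA under the AI value-creation thesis."""
    return (current_ebitda + ai_savings + ai_revenue_uplift
            - remediation_cost - ai_investment)

# Example: a EUR 10m EBITDA target.
exit_ebitda = ai_value_bridge(
    current_ebitda=10.0,
    ai_savings=1.5,
    ai_revenue_uplift=0.8,
    remediation_cost=0.6,
    ai_investment=0.7,
)
print(f"Potential exit EBITDA: EUR {exit_ebitda:.1f}m")  # EUR 11.0m
```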

Neither output requires speculation. Both can be derived from structured assessment of the 12 dimensions, comparable transaction data, and sector benchmarks. The work is disciplined, not visionary.

Key Takeaways

  • AI readiness affects valuation through three concrete levers: operational savings, revenue upside, and remediation cost. All three can be quantified.
  • A 12-dimension structured assessment is more reliable than management conversation or slide review for evaluating AI readiness.
  • The weighted geometric mean scoring approach penalises extreme weaknesses — reflecting how a single structural failure (e.g., broken data foundation) undermines all other AI initiatives.
  • The data room checklist should be a standard component of any deal process where AI features in the thesis.
  • The IC memo should include an AI readiness score and an AI value bridge alongside the standard financial analysis.
  • AI DD is not a speculative exercise. It is a structured, benchmarkable, repeatable process — when approached with the right framework.