This checklist is the most requested tool from deal teams integrating AI readiness into their process. It is designed to be used across three phases of due diligence — document review, management meetings, and technical assessment — and produces a structured output that can be referenced in the IC memo.

The scoring system is simple: Green (confirmed, no action required), Amber (requires monitoring or clarification), Red (deal-blocker or mandatory condition precedent). If a target accumulates 5 or more Red flags across the 40 questions, there is a material AI readiness risk that should be quantified in the deal model and addressed in the conditions precedent.
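For teams tracking answers in a spreadsheet export, the aggregation rule can be sketched in a few lines. This is an illustrative sketch, not part of the checklist itself: the question IDs and the `screen` function are hypothetical, and only the "5 or more Red flags" threshold from the Key Takeaways comes from the text.

```python
from collections import Counter

RED_THRESHOLD = 5  # "5 or more Red flags" triggers the material-risk finding

def screen(answers: dict[str, str]) -> dict:
    """answers maps question ID (e.g. "1.1") to "green", "amber", or "red"."""
    counts = Counter(answers.values())
    return {
        "green": counts["green"],
        "amber": counts["amber"],
        "red": counts["red"],
        "material_risk": counts["red"] >= RED_THRESHOLD,
    }

# Example: five Reds spread across sections -> material AI readiness risk
answers = {"1.1": "red", "1.3": "red", "2.2": "red",
           "4.4": "red", "5.1": "red", "1.2": "amber", "6.1": "green"}
print(screen(answers)["material_risk"])  # True
```

The output feeds the IC memo as a structured count per colour rather than a prose summary.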

This checklist is not a substitute for a full AI readiness assessment. It is the structured initial screen that determines whether a deeper assessment is warranted — and what dimensions to prioritise in it.

How to Use This Checklist

Phase 1 — Document Review (before management meetings). Questions are answered from the data room: data architecture diagrams, IT audit reports, AI system inventories, GDPR data flow maps, and technology debt registers. If these documents do not exist, that is itself a Red flag on the relevant dimension.

Phase 2 — Management Meeting. Follow-up questions from document review gaps, plus direct probing on leadership AI literacy (Section 3) and strategic roadmap (Section 6).

Phase 3 — Technical Assessment. Detailed review with CTO/Head of Technology on Sections 1, 4, and 5 if initial screen reveals Amber or Red flags.

Section 1 — Data Quality & Governance (8 Questions)

Data quality is the single most predictive dimension of AI value realisation potential. A company with excellent AI ambition but poor data infrastructure cannot execute. Questions 1.1–1.8 assess whether the foundation is in place.

| # | Question | Green | Amber | Red |
|---|----------|-------|-------|-----|
| 1.1 | Is all core operational data centralised in one or two systems? | Single system or well-integrated ERP | 3–5 systems with partial integration | 6+ disconnected systems, no integration plan |
| 1.2 | What percentage of customer/transactional data is structured and queryable? | >85% | 60–85% | <60% |
| 1.3 | Is there a named data owner with governance authority? | Named CDO or Head of Data with team | Informal owner, part-time responsibility | No named owner |
| 1.4 | How much continuous historical operational data does the company maintain? | 3+ years, continuous | 18 months–3 years, or gaps | <18 months or major gaps |
| 1.5 | Are data definitions standardised across business units? | Documented, version-controlled | Informal alignment | Conflicting definitions across BUs |
| 1.6 | Does the company own proprietary datasets not accessible to competitors? | Yes, material and growing | Partial: some proprietary, mostly commodity | No proprietary datasets |
| 1.7 | Is there a documented data quality measurement process? | Regular audits, known error rate | Ad hoc, no benchmarks | Never measured |
| 1.8 | Has a GDPR data flow map been completed in the last 18 months? | Yes, current | Exists but >18 months old | Does not exist |

Section 1 interpretation: Questions 1.1 and 1.3 are the most critical. A company that cannot pass both is unlikely to deploy AI at scale regardless of other strengths.

Section 2 — AI/ML Capabilities (7 Questions)

This section distinguishes companies with AI in production from companies with AI in presentation decks.

| # | Question | Green | Amber | Red |
|---|----------|-------|-------|-----|
| 2.1 | How many AI or ML use cases are currently in production? | 3 or more | 1–2 | Zero (POC only) |
| 2.2 | What is the ratio of AI projects in production vs. total AI projects initiated? | >50% reach production | 20–50% | <20% (POC cemetery) |
| 2.3 | Are there documented performance metrics for AI systems in production? | Yes, with dates and baselines | Informal tracking | No documented metrics |
| 2.4 | Does the company use an MLOps or model management platform? | Dedicated platform (MLflow, Vertex, etc.) | Basic versioning in code repo | No model management |
| 2.5 | Has the company deployed GenAI in any customer-facing product? | Yes, with EU AI Act classification reviewed | Yes, without compliance review | No, but planned in roadmap with no governance plan |
| 2.6 | Are AI model outputs logged and monitored for drift? | Automated monitoring with alerts | Manual monitoring | No monitoring |
| 2.7 | Does the company have a documented model risk management process? | Formal process, board-approved | Informal process | No process |

Section 3 — Leadership & Talent (7 Questions)

AI transformation fails at the leadership level as often as at the technical level. Questions 3.1–3.7 assess whether the organisation has the human capital to execute.

| # | Question | Green | Amber | Red |
|---|----------|-------|-------|-----|
| 3.1 | Can the CEO articulate the company's AI strategy in specific, deal-relevant terms? | Yes: production use cases, KPIs, investment | Narrative-level, no operational specificity | Cannot articulate, defers to CTO |
| 3.2 | Is there board-level AI expertise (a director with an AI/data background)? | Yes, named board member | Advisory board or observer with AI background | No board-level AI expertise |
| 3.3 | Is the CTO's or CDO's track record in AI delivery verifiable? | Prior AI deployment in production at previous company | General technology background, AI aspirational | No relevant AI delivery experience |
| 3.4 | What is the CTO/Head of Technology's tenure? | >2 years | 12–24 months | <12 months or recently departed |
| 3.5 | Has there been significant tech team attrition in the last 12 months? | <15% annual attrition | 15–25% | >25% or key AI/data team departures |
| 3.6 | What is the ratio of data/AI engineers to total tech headcount? | >20% | 10–20% | <10% |
| 3.7 | Does the company have a documented AI upskilling programme for non-technical staff? | Active programme with completion metrics | Planned or ad hoc training | None |

Section 4 — Technology & Technical Debt (8 Questions)

Technology infrastructure determines whether AI can be deployed efficiently or requires substantial remediation investment before any value can be realised.

| # | Question | Green | Amber | Red |
|---|----------|-------|-------|-----|
| 4.1 | How old is the core technology stack (primary application/ERP)? | <5 years or continuously modernised | 5–10 years with known migration plan | >10 years, no modernisation roadmap |
| 4.2 | Is there a continuous integration / continuous deployment (CI/CD) pipeline? | Automated CI/CD with tests | Manual deployment with basic version control | No CI/CD |
| 4.3 | What is the deployment frequency for production changes? | Weekly or more frequent | Monthly | Quarterly or less |
| 4.4 | What percentage of the codebase is estimated to be technical debt? | <15% | 15–30% | >30% |
| 4.5 | Is the application infrastructure cloud-hosted or cloud-ready? | Fully cloud-hosted (AWS, Azure, GCP) | Hybrid with cloud migration in progress | Fully on-premise, no migration plan |
| 4.6 | Has a third-party technical audit been completed in the last 24 months? | Yes, current remediation plan exists | >24 months or partial audit | Never |
| 4.7 | Are API interfaces documented and maintained? | Full API documentation, version-controlled | Partial documentation | Undocumented legacy integrations |
| 4.8 | Is the company's infrastructure SLA >99.5% uptime? | Yes, measured and reported | 99–99.5%, measured | <99% or not measured |

Section 5 — EU AI Act & Compliance (6 Questions)

Given enforcement beginning August 2, 2026, this section is now a mandatory component of any deal process involving EU-operating targets.

| # | Question | Green | Amber | Red |
|---|----------|-------|-------|-----|
| 5.1 | Has the company completed an EU AI Act system classification assessment? | Yes, documented, within last 12 months | In progress or >12 months old | Not started |
| 5.2 | Does the company operate any high-risk AI systems (per Annex III)? | No, or yes with conformity assessment completed | Yes, assessment in progress | Yes, no action taken |
| 5.3 | Does the company operate any AI practices that may be prohibited under Article 5? | No, after assessment | Unknown, not reviewed | Potentially yes |
| 5.4 | Is customer or employee data transmitted to third-party LLMs (OpenAI, Anthropic, etc.)? | No, or yes with DPAs and GDPR review | Yes, DPAs in place but incomplete | Yes, no GDPR review |
| 5.5 | Is there a named EU AI Act compliance owner? | Yes, named with legal counsel engaged | Informally assigned | No owner |
| 5.6 | Has GDPR compliance been formally assessed in the last 18 months? | Yes, current, no material findings | Yes, with open items | >18 months or never |

Section 5 note: Questions 5.2 and 5.3 are potential hard deal-blockers. If a target operates a prohibited system and refuses to decommission it, the deal should not proceed. If a high-risk system is in production without conformity assessment, this should be a mandatory condition precedent with a firm deadline.
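This escalation rule can be sketched separately from the overall Red-flag count: a Red on 5.2 or 5.3 is surfaced on its own, regardless of the total score. The question IDs follow the tables above; the function name and data structure are illustrative.

```python
HARD_BLOCKERS = {"5.2", "5.3"}  # unassessed high-risk or potentially prohibited AI systems

def escalations(answers: dict[str, str]) -> list[str]:
    """Return question IDs that are potential deal-blockers in their own right.

    answers maps question ID (e.g. "5.2") to "green", "amber", or "red".
    """
    return sorted(q for q, score in answers.items()
                  if score == "red" and q in HARD_BLOCKERS)

# A Red on 5.2 escalates even though only two Reds exist overall
print(escalations({"5.2": "red", "5.3": "amber", "1.1": "red"}))  # ['5.2']
```

A non-empty result maps to a condition precedent (conformity assessment with a firm deadline) or, for a prohibited system the target will not decommission, a recommendation not to proceed.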

Section 6 — Strategic AI Roadmap (4 Questions)

A credible AI roadmap is the bridge between current state and the value creation thesis. Four questions are sufficient to assess its quality.

| # | Question | Green | Amber | Red |
|---|----------|-------|-------|-----|
| 6.1 | Is there a documented AI roadmap with milestones and owners? | Yes, with quarterly milestones and named owners | High-level roadmap, no milestones | No roadmap or slide-deck level only |
| 6.2 | Is there a dedicated AI/data budget line in the P&L? | Yes, defined and defended in management accounts | Ad hoc from IT budget | Not allocated |
| 6.3 | Are AI KPIs tracked and reported to the board? | Yes, regular board reporting on AI metrics | Tracked informally | Not tracked |
| 6.4 | Is the AI roadmap explicitly linked to the investment thesis? | Yes: specific use cases tied to identified EBITDA improvement | Partial alignment | No link to financial KPIs |

Key Takeaways

  • 5 or more Red flags across the 40 questions indicates a material AI readiness risk. This should be quantified in the deal model and addressed through conditions precedent, not noted as a qualitative observation.
  • Section 1 (Data Quality) and Section 5 (EU AI Act) carry the highest probability of deal-altering findings. Prioritise these if time is limited.
  • The ability to produce Section 1 and Section 4 documents quickly is itself a signal. Companies that cannot locate their data architecture diagram or last technical audit within 48 hours of the data room request have operational challenges independent of AI.
  • Section 3 (leadership) cannot be assessed from documents alone. The management meeting questions in 3.1–3.4 are specifically designed to distinguish AI fluency from AI delivery history.
  • This checklist is the foundation of the Valence intake process. A full assessment adds sector benchmarks, weighted geometric mean scoring, and a complete AI value bridge across all 12 dimensions.
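The weighted geometric mean mentioned above rewards balanced capability: because scores multiply rather than add, one very weak dimension drags the composite down far more sharply than an arithmetic average would, which matches the checklist's logic that a poor data foundation gates everything else. A minimal sketch, with hypothetical dimension names and weights (the full assessment defines its own 12 dimensions and weightings):

```python
import math

def weighted_geometric_mean(scores: dict[str, float],
                            weights: dict[str, float]) -> float:
    """scores: dimension -> score on a positive scale (e.g. 1-5).
    weights: dimension -> relative weight. Computed in log space for stability."""
    total_w = sum(weights[d] for d in scores)
    log_sum = sum(weights[d] * math.log(scores[d]) for d in scores)
    return math.exp(log_sum / total_w)

# A single weak dimension (data = 1.0) pulls the composite well below
# the arithmetic mean (3.0) of the same scores.
scores = {"data": 1.0, "capabilities": 4.0, "leadership": 4.0}
weights = {"data": 2.0, "capabilities": 1.0, "leadership": 1.0}
print(round(weighted_geometric_mean(scores, weights), 2))  # 2.0
```

Note that a zero score is undefined under a geometric mean; in practice the scale starts at 1, so a total absence of capability still produces a finite (and heavily penalised) composite.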