
PE Technology Diligence: A Practical Guide

How to structure a technology diligence read so it actually shapes the first 100 days, rather than sitting in a folder. Companion to the one-page checklist.

Most PE technology diligence reports are not actually useful. They catalog the stack. They produce a red-yellow-green heatmap. They sit in a folder after close. The operating partner who paid for the report reads the executive summary, notes a handful of risks, and moves on to the operational issues the CEO is pushing. The diligence was thorough. It did not shape the first 100 days.

The report that would have shaped the first 100 days is a different document. It is shorter. It is scoped against the investment thesis. It reads the team as hard as the stack. And it produces a first-100-day decision inventory that the incoming CEO or CTO can actually work from.

This guide is the long-form companion to the one-page PE Diligence Checklist. The checklist is the operational tool. This guide is the explanation of how to use each section, when it applies, and what separates a useful diligence from a slide-reading exercise.

Why most technology diligence reports are not useful

Three failure patterns repeat across almost every diligence I have seen from the post-close side of the table.

Stack inventory that names platforms without reading the architecture. The report lists the ERP, the commerce platform, the data warehouse, and the security tooling. It rarely answers the architecture questions that decide whether the stack can support the thesis: how the systems of record are delineated, how data flows between them, where the points of brittleness are, what the next integration will cost. Stack lists are necessary. They are not sufficient.

Team capability reads based on org charts, not on operational evidence. The report describes the CTO, the head of engineering, the size of the team, and the structure of the reporting hierarchy. It rarely reads the team against operational evidence: release cadence, incident postmortem quality, architectural decision memos, how senior engineers describe the systems they own. Org charts show who reports to whom. Operational evidence shows whether the team ships.

Risk registers that flag everything and prioritize nothing. The diligence reports risk in the categories the framework demands (security, compliance, vendor, data, integration, continuity). Every category gets populated. The weighting is procedural. What gets lost is the small number of risks that actually threaten the thesis, which usually live in one or two categories and are invisible unless the reader already knew to look for them.

Pre-close: what actually matters

Diligence before close runs in two stages: pre-LOI and pre-close. They serve different purposes and require different depth.

Pre-LOI. The question is whether the investment thesis is plausible against the technology organization as it exists. The diligence is fast (3 to 5 working days), shallow on the stack, and sharp on two things: whether the team has the capability to execute the thesis, and whether there are category-killer risks that make the deal impossible at the proposed price. Most deals pass this stage. A useful pre-LOI diligence kills the deals that should not proceed, early.

Pre-close. The question is depth. The stack inventory gets filled in. The team capability read gets operational. The integration and data debt markers get specific. The deliverable is a document that the operating partner can use in the IC memo and that the incoming CEO can read on day one.

The pre-close report should answer four specific questions. Is the technology spend aligned with the investment thesis? Does the team have the capability to execute? Where are the hidden integration and data debts? What technology decisions must be made in the first 100 days? Reports that do not answer those four questions directly are reports that did not know which questions mattered.

Post-close: the first 100 days

The first 100 days is where most technology value creation either takes shape or quietly dies. The organization is open to change. The thesis is fresh. Decisions that would be politically impossible in year two are straightforward in month one.

Five decisions recur across almost every post-close engagement I run. Each has a default answer that is usually right and a specific condition under which the default is wrong.

Consolidate or defer on multiple ERPs. Default: defer consolidation past day 365 unless the thesis explicitly requires it sooner. Override: if the investment thesis depends on cost-out at the G&A line, and the ERP plurality is a primary driver of that cost, act earlier. Consolidating ERPs in the first year is expensive, disruptive, and rarely produces the value creation the thesis predicted on the timeline predicted.

Retain or replace the senior technology leader. Default: retain for the first 90 days and assess against delivery. Override: if diligence already identified specific disqualifiers, act earlier. The instinct to bring in a new leader on day one is usually wrong. The incumbent has context, and the thesis is fresh. Give the incumbent 90 days to demonstrate capability against the thesis before making the retention decision.

Pause or continue in-flight technology programs. Default: pause all non-critical programs pending thesis alignment. Override: do not pause programs that unblock regulatory, security, or safety risk. Most portfolio companies are carrying two or three in-flight programs at close. Some fit the thesis. Most do not. The default pause is a cheap information-gathering mechanism.

Invest in or starve the BI and data layer. Default: invest early. Most thesis metrics require reporting that the stack cannot produce today. Override: if the thesis is purely operational cost-out and the reporting needs are modest, the investment can wait. But for any thesis that depends on customer behavior, margin attribution, pricing, or cross-channel reporting, the BI and data work has to happen before the thesis is measurable.

Build or buy the next major capability. Default: buy unless the capability is a genuine differentiator. Override: if the market offerings cannot support the operating model the thesis requires. Most PE-backed companies are too small to justify building what they can buy, and the build decisions rarely survive contact with the hold-period timeline.

Portfolio monitoring cadence

For PE firms with multiple portfolio companies, a portfolio-level technology review cadence creates visibility that most operating partners currently lack. Quarterly is the right rhythm for most portfolios. Monthly is overkill except in active-intervention situations. Annual is too infrequent to catch drift.

A useful quarterly technology review looks different from a slide-reading session. The difference is in the questions on the agenda.

A useful review asks: what is behind schedule, what decision is stalled, and what would unblock it. It opens with the one or two technology decisions the operating partner needs to be aware of, not with a status update on every initiative. It closes with a specific next-quarter commitment, owned by a named executive, on a specific clock. The CFO and the CEO are in the room. The CIO is not the only one talking.

A slide-reading quarterly looks different. It opens with a category dashboard. It walks through initiative status slide by slide. It closes with a general commitment to keep the operating partner informed. The CIO is talking most of the time. Nobody leaves with a decision they did not have coming in.

The distinction matters. Useful quarterlies compound into real portfolio visibility across the hold period. Slide-reading quarterlies compound into nothing, and the operating partner loses the cadence by year two.

How to use the one-page checklist

The one-page checklist covers five sections. Each section has a specific purpose and a specific moment in the deal cycle where it matters most.

Section 1: Stack inventory. Use pre-close. The six categories (ERP and finance, commerce and channels, data and BI, infrastructure, security and identity, vendor concentration) are the highest-order buckets. Fill in what is known. The “what is unknown” column is often more valuable than the “what is in place” column. Unknowns are the risk surface for the first 100 days.

Section 2: Team capability read. Use pre-close, and revisit on day 60. Five signals and three disqualifiers. Each of the signals is operational: release cadence, incident postmortem quality, architecture ownership, alignment of roadmap to business metrics, senior engineering attrition. Any single disqualifier present means the team cannot execute the thesis without a leadership change. Two or three signals missing is a yellow flag, not a red one.
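The Section 2 decision rule reduces to a small amount of logic. This sketch is illustrative only: the five-signal and three-disqualifier counts come from the checklist, but the function name and the red/yellow/green encoding are assumptions made here, not part of the checklist itself.

```python
# Hypothetical encoding of the Section 2 team capability read.
# Rule from the guide: any single disqualifier present -> leadership change
# required; two or three missing signals -> yellow flag, not red.

TOTAL_SIGNALS = 5  # release cadence, postmortem quality, architecture
                   # ownership, roadmap-to-metrics alignment, senior attrition

def team_read(signals_present: int, disqualifiers_present: int) -> str:
    """Return a rough red/yellow/green read from signal and disqualifier counts."""
    if disqualifiers_present > 0:
        return "red: leadership change required to execute the thesis"
    missing = TOTAL_SIGNALS - signals_present
    if missing >= 2:
        return "yellow: revisit at day 60"
    return "green"

print(team_read(signals_present=3, disqualifiers_present=0))  # yellow
```

The point of the encoding is the asymmetry: disqualifiers are gating, while missing signals merely degrade the read.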

Section 3: Integration and data debt markers. Use in the first CTO or head of engineering conversation, pre-close if possible. Five things to look for. The number of systems of record for the same entity (customer, product, inventory) is often the sharpest single signal. Two systems of record for customer is normal. Four is a problem. The shape of the last major migration tells you whether the organization can ship a large program, which is often the most important single data point in the entire diligence.

Section 4: First 100 days decision template. Use post-close. Five decisions, each with a default and an override. Work through them in the first two weeks after close. Document the decision rationale even when the default applies. Most of the technology value creation in the first 100 days comes from making these decisions deliberately, not from the decisions themselves being novel.

Section 5: Investment thesis alignment scorecard. Use pre-close for deal validation and post-close for execution planning. Six rows map common PE theses (geographic expansion, roll-up, margin improvement, digital channel, pricing power, operational turnaround) to the capabilities the technology stack must support. Score each row in-place, partial, or absent. Any thesis with more than one absent in its required capability row is a thesis the technology stack cannot support without investment in the hold period. That investment belongs in the value creation plan, not in post-close surprises.
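The Section 5 scoring rule can be sketched in a few lines. The capability names and the example row below are invented for illustration; only the three-level scoring scale and the more-than-one-absent threshold come from the checklist.

```python
# Hypothetical sketch of the Section 5 scoring rule. Each thesis row lists
# the capabilities the stack must support, each scored "in-place",
# "partial", or "absent". More than one "absent" flags the thesis as
# unsupportable without hold-period investment.

VALID_SCORES = {"in-place", "partial", "absent"}

def thesis_supported(capability_scores: dict[str, str]) -> bool:
    """Return False when more than one required capability is absent."""
    if not all(s in VALID_SCORES for s in capability_scores.values()):
        raise ValueError("scores must be in-place, partial, or absent")
    absent = sum(1 for s in capability_scores.values() if s == "absent")
    return absent <= 1

# Invented example: a roll-up thesis row with two absent capabilities.
rollup_row = {
    "multi-entity consolidation": "partial",
    "shared data warehouse": "absent",
    "common identity layer": "absent",
}
print(thesis_supported(rollup_row))  # False: belongs in the value creation plan
```

A failing row is not a dead thesis; it is a required investment that must be priced into the value creation plan rather than discovered post-close.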

What I do not do

An honest scope note: the value of this framework comes from being specific about what it covers and what it does not.

I am not a CFO. I do not build financial models, run QofE, or sign audit opinions. If the engagement needs financial diligence, that work goes to a different provider.

I am not a legal or compliance diligence resource. Data privacy regulatory read, licensing audit, contract assignability review, and similar work is not what I deliver. A proper legal and compliance diligence alongside this technology read is the right structure for almost any deal.

I do not deliver a Big Four IT controls audit. The SOC 2 read, the internal controls sufficiency review, the licensing true-up: all of that work is valuable and all of it comes from specialist providers. My value shows up in the operating questions (team capability, integration architecture, thesis alignment) that Big Four reports typically cover at a higher altitude.

The value of the framework is specifically the technology stack read and the team capability read, and the value shows up when those two reads contradict what the CEO is pitching or what the CIM is promising. When that contradiction is surfaced pre-close, the IC memo reflects it and the deal gets priced correctly. When it is surfaced post-close, it shapes the first 100 days. When it is not surfaced at all, it shows up in year two as performance that missed plan, and the operating partner is the one who has to explain it.

Frequently asked questions

How does this work alongside a Big Four IT diligence?

A Big Four IT diligence is typically broader (controls, compliance, licensing, SOC posture) and shallower on the operating questions that determine whether the technology team can execute the thesis. This guide and the companion checklist cover those operating questions specifically. The two diligences are companions, not substitutes. In practice I am usually engaged separately from, or in addition to, a Big Four scope.

What deal sizes does this framework fit?

Mid-market, roughly $50M to $1B in enterprise value. Below that the diligence scope tends to be too compressed to benefit from the team capability read. Above that the governance of the acquired business is usually mature enough that the value of an external read diminishes. Retail, consumer, healthcare, manufacturing, and distribution are where the operational grounding is deepest.

Can the checklist be used by a strategic acquirer, not just PE?

Yes, with one adjustment. Strategic acquirers usually have a stronger view on technology integration because they know their own stack. The checklist helps strategic acquirers when the target is being bought for its technology or when the integration surface is large enough to warrant a structured read. For smaller bolt-ons into a known stack, the checklist is overkill; the stack decisions are already made.

The difference between a diligence report that shapes the first 100 days and one that sits in a folder is not the depth of the stack analysis. It is whether the person who wrote it has sat in the chair the new CTO is about to sit in. Stack catalogs are commodity. Team reads grounded in operational evidence are not. A diligence that contains both, at the right depth for the thesis, is the one the operating partner will actually use.