Most healthcare technology programs fail at the governance layer before they fail at the implementation layer. A clinical system chosen by a steering committee that could not reach consensus on how it should be used in the first place does not get rescued by a better vendor. A capital program that did not agree on the decision cadence up front does not get unstuck by a better project plan.
I directed 50+ concurrent ICT capital projects across a provincial health system. The portfolio was wide enough that every failure mode in healthcare technology showed up somewhere, usually inside the same quarter. The pattern that repeated across programs was not technical. It was decisional. Programs that had a clear governance cadence shipped on budget. Programs that did not, did not.
This is not a healthcare problem specifically. It is a governance problem that happens to present most sharply in healthcare because the clinical, administrative, research, and finance stakeholders each have a legitimate claim on the decision, and the path of least resistance is always to defer.
Where healthcare technology programs actually fail
Committee-driven paralysis. Every technology decision in a health system touches clinical operations, administrative workflows, research protocols, and finance. That is reality. The failure mode is letting that reality structure every decision as a consensus exercise. Capital gets allocated, time burns, scope freezes, and the decision nobody made sits on a desk for eighteen months. The technology did not fail. The governance did.
Vendor lock-in by RFP. Healthcare procurement rewards vendors who can respond to complex RFPs with polished boilerplate. That filters the market toward the largest incumbents and away from the smaller platforms that might actually fit the operational need. The incumbents win, the organization gets a system that looks like every other deployment in its peer group, and the operational model stays the same as it was a decade ago.
Clinical and administrative IT running as parallel architectures. Two budgets, two teams, two integration strategies, and one patient whose care touches both. The cost of that split shows up in data governance, not in the original procurement. When leadership asks a question that spans the two, the answer is usually that a working group is looking at it.
Security posture inherited from the vendor. When the answer to “how is this system secured” is “whatever the vendor ships,” the attack surface grows every quarter. Healthcare has the same identity and access problems as retail or manufacturing, with HIPAA or provincial privacy exposure on top and a vendor ecosystem that has historically underinvested in security. The boundary between what the vendor owns and what the health system owns needs to be drawn explicitly, in writing, before the contract is signed.
AI pilots with no data governance foundation. Enthusiasm for AI in healthcare has outrun the organizational capacity to actually provision clinically validated data for it. Every AI pilot that stalls has the same root cause: the data pipeline is not governed, and the clinical risk office will not sign off on a deployment that depends on data it cannot attest to. This is a governance problem presenting as a technology problem, and it will not be solved by a better model.
How I work with health systems
Directing 50+ concurrent ICT capital projects across a provincial health system is unusual exposure. Most technology advisors have not stood in the middle of a portfolio that large, with that procurement complexity, and that many stakeholders with legitimate veto. The work that made the portfolio deliverable was rarely technical. It was governance cadence, decision rights, vendor accountability, and the discipline to draw an explicit boundary between what the health system owns and what the vendor owns on every program.
The engagements I take in healthcare today are advisory. I am not a clinical informaticist, and the people who configure EHRs and implement clinical workflows are exactly the people you want doing that work. I sit one level up. Capital program oversight, integration architecture, governance cadence, AI data readiness, and the conversations with vendors that the steering committee is not structured to have.
Healthcare-specific capabilities
Capital project oversight across concurrent workstreams. Portfolio-level cadence, decision rights that respect stakeholder legitimacy without letting every decision become consensus theatre, and executive reporting that surfaces the one or two stalled decisions that will otherwise sink the quarter.
Vendor consolidation and rationalization. Most health systems accumulate a long tail of vendors acquired across years of procurement cycles. Rationalizing that tail without breaking the clinical workflows it supports is a specific advisory skill, and it starts with an honest read of which vendors are actually operationally critical versus historically entrenched.
Clinical and administrative system integration architecture. The integration boundary between clinical systems (EHR, LIS, RIS, pharmacy) and administrative systems (ERP, finance, supply chain, workforce) is where most health systems carry their deepest technical debt. Designing that boundary so it can evolve as both sides modernize is harder than it sounds and rarely gets the architectural attention it deserves.
Data governance as a foundation for AI readiness. AI readiness in healthcare is 90% governance and 10% model selection. Which data sets are clinically validated. Who owns data quality for the pipelines feeding any production AI. How consent and access propagate. How the clinical risk office reviews the pipeline, not just the model. This is slow work. It is the work that determines whether the AI program ships or stays a pilot.
Cybersecurity posture review, especially identity and access. Most healthcare breaches in the last five years have touched identity and access rather than network perimeter. The review I run focuses on privileged access, identity lifecycle for vendors and contractors, scope creep in shared service accounts, and the degree to which vendor-managed systems have inherited access they no longer need.
AI governance alignment with clinical risk frameworks. Clinical risk offices have existed for decades and have their own evaluation frameworks for new technologies that touch care. AI governance that does not align with those frameworks will get rejected. The useful advisory work is translating between the AI governance frameworks (NIST AI RMF, ISO 42001, the organization’s own policy) and the clinical risk evaluation the chief medical officer will actually run.
Engagement patterns
Three shapes cover almost every healthcare engagement I run.
Assessment. 4 to 6 weeks. Scoped to a specific capital program, portfolio slice, or governance question. Delivered as one document leadership can use with the board, the clinical executive committee, and the CFO, not three different documents for three different audiences.
Program oversight retainer. Ongoing, usually a monthly rhythm with a named capital initiative or set of initiatives as the focus. Used by health system CIOs who want independent advisory on a program that is too important to get wrong and too political to govern internally alone.
Fractional technology leadership. Department- or service-line-level fractional CTO work for situations where a permanent leader is not yet in seat or where the scope does not justify a full-time role. Scoped with clear authority boundaries so it does not collide with permanent leadership.
Frequently asked questions
Is this public healthcare, private healthcare, or both?
The primary proof point is public-system work, directing 50+ concurrent ICT capital projects across a provincial health system. Private healthcare operators are welcome but are not the anchor of the positioning. If a private engagement is the right fit, I will say so; if it is not, I will also say so.
Is this US or Canadian healthcare?
The public health system experience is Canadian. The principles around capital program oversight, governance cadence, clinical and administrative integration, and AI data readiness travel across jurisdictions. The regulatory specifics do not. I scope every engagement against the specific regulatory context rather than assuming my experience transfers unmodified.
How do you handle clinical systems you have not personally implemented?
Capital project oversight is a different skill set from clinical system implementation, and the former is what I bring. I work alongside clinical informaticists, not in place of them. The value I add is governance, sequencing, vendor accountability, and the integration architecture that sits between clinical systems and the rest of the estate. The clinical configuration remains with the teams who actually run the workflow.
The question is not whether your health system will deploy more technology in the next five years. It is whether the governance cadence, the vendor boundaries, and the data foundation are set up so that the deployments produce clinical and operational outcomes, or another capital cycle of programs that ship on time and change nothing.