AI & Automation · May 2026 · 7 min read

The AI Labs Just Admitted the Real Cost of Enterprise AI

Last updated: May 12, 2026

By Morris Stern · Stern Technology Advisory

The pitch for two years was that AI would compress professional services. That a model and an API would replace what consultants, integrators, and systems engineers used to do. In the first week of May 2026, two of the three largest AI labs contradicted that pitch with capital.

OpenAI acquired Tomoro, an applied AI consulting firm, and folded its 150 engineers into a new business unit called the OpenAI Development Company. The unit launched as a joint venture with Bain, Goldman Sachs, SoftBank, and sixteen other investment and consultancy firms. The same week, Anthropic launched a $1.5 billion firm with Goldman Sachs and Blackstone, aimed at accelerating AI adoption across hundreds of companies. Both labs described the people inside these new ventures with the same job title. Forward-deployed engineers.

The term is borrowed from Palantir, which spent fifteen years proving that selling a platform without sending engineers to live inside the customer is not a business. The frontier AI labs have now reached the same conclusion. The cost of enterprise deployment is not the model. It is the labor.

That is a useful admission for anyone budgeting AI investment in 2026.

Why API access was never the bottleneck

Most enterprises do not fail at AI because they cannot get to a model. They fail because the model lands in an environment where nothing is ready to receive it. The data sits in eleven systems with conflicting customer keys. The workflows were designed for humans who read screens and tolerate ambiguity. The governance committees meet quarterly. The integration runtime cannot pass an output back into the system of record without a person clicking a button.

A 2026 report from Coastal and Oxford Economics surveyed 800 U.S. business and technology leaders running AI in production. 74 percent are increasing AI investment. 46 percent report that their initiatives have fallen short of expectations. Only a small minority say AI is delivering measurable business value. This is not a model problem. The same models that produced compelling demos are the ones producing disappointing results in production.

The labs know this. They have watched their best logos sign large contracts, run pilots that look promising, and stall at the same gates every quarter. Forward-deployed engineering is the response. Not better models. People who can stand inside the customer's environment and rebuild the receiving end of the integration.

What forward-deployed engineering reveals about cost

When OpenAI describes the role, the language is precise. The engineer sits with the user, understands the workflow, connects the model to the back office, and helps the customer build the integration into a production capability. That is the description of a systems integrator with a model attached. It is professional services priced as platform.

The economics of enterprise AI are now visible in the open. The license is a small share of the total. The integration, change management, data preparation, governance setup, and ongoing operation are the rest. This is the same shape as ERP, payments, and PIM deployments. Anyone who has run one of those programs already knows that the software is the cheap part of the budget.
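To make that cost shape concrete, here is a minimal sketch of the arithmetic. The license figure and the deployment multiplier are hypothetical assumptions for illustration, not vendor data; the point is only that when non-license work runs at a few multiples of the license, the license share of the total collapses.

```python
# Illustrative only: hypothetical cost shares for an enterprise AI program.
# The license figure and the multiplier below are assumptions, not vendor data.

def all_in_cost(license_cost: float, deployment_multiplier: float = 4.0) -> dict:
    """Estimate total program cost from an annual license cost.

    deployment_multiplier is a hypothetical ratio of non-license spend
    (integration, data preparation, change management, governance setup,
    ongoing operation) to license spend -- the ERP-like shape described above.
    """
    deployment = license_cost * deployment_multiplier
    total = license_cost + deployment
    return {
        "license": license_cost,
        "deployment": deployment,
        "total": total,
        "license_share": license_cost / total,
    }

# Hypothetical $500k annual license at a 4x deployment multiplier:
breakdown = all_in_cost(500_000)
print(breakdown)  # license is 20% of a $2.5M all-in program
```

Change the multiplier to match your own program's scope; the conclusion in the text holds whenever it is materially above 1.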

Two implications for buyers in 2026.

First, vendor pricing models will shift. The labs will price the forward-deployed work separately. Expect day rates, fixed-fee implementation packages, and managed service tiers attached to the model contract. Some of this will be cleanly disclosed. Some of it will get bundled into seat licenses and enterprise-tier upcharges. The buyer's job is to force the work into a separate line item before signing.

Second, implementation labor is the binding constraint. The OpenAI Development Company starts with about 150 forward-deployed engineers. Anthropic's new firm is staffing up. Even at full hiring velocity, the labs cannot field enough people to cover the demand. The customers who get the labs' own engineers will be the largest contracts, the most strategic logos, and the ones with the loudest reference value. Everyone else gets routed to a named partner. The partner quality will be uneven for the next 18 months.

How to read your vendor relationships now

Three questions are worth asking out loud in the next AI vendor meeting.

How many customers in my segment have your engineers embedded? If the answer is "we work through partners," ask which partners and how those partners are compensated. A partner being paid by the lab to push the lab's product has a different incentive than a partner being paid by the customer to solve the customer's problem. Both have a place. They are not the same thing.

What does your version of forward-deployed engineering actually cost? The published license rate is rarely the total. Get the engineering hours, the duration, the milestones, and the success criteria into the contract before signing. Treat the implementation scope with the same discipline you treat an ERP statement of work.

What happens when the engagement ends? Forward-deployed engineering builds operational dependency. The lab has access to your workflows, your data shapes, and your integration topology. That is fine if you intend to keep buying. It is a lock-in cost if you ever want to switch models or move to an open alternative. The exit clause is worth more than most buyers think.

A version of this conversation is already happening inside private equity portfolios. Operating partners are looking at AI deployment line items and asking the same question they ask about any integration partner. Are we buying capability we can run, or capacity we will need to keep renting?

Where this breaks down

The forward-deployed model has limits worth naming plainly.

It does not scale to the middle market. A retailer with 80 stores or a manufacturer with $300 million in revenue is not getting a Tomoro engineer in their building. Those accounts will be routed to partners, and the experience will depend almost entirely on the partner.

It creates a two-tier market. The Fortune 500 customer with embedded lab engineers will get better outcomes than the customer with the same license and a generalist systems integrator. This is already visible in the cloud transformation market and will be more pronounced in AI.

It does not solve the governance gap. Forward-deployed engineering builds the integration. It does not, by itself, build the identity controls, the access scoping, the audit trail, or the runtime guardrails that an enterprise AI program needs. Those still have to be designed and operated by the customer, or by an advisor whose incentives are not tied to consumption of the model.

The honest framing for a buyer is that forward-deployed engineering is a useful capability to rent for a defined scope of work. It is not a replacement for owning the AI operating model.

What this signals

Two years ago, the AI labs sold a future where the model replaced the workforce. This year they are hiring the workforce that makes the model usable.

That is not a failure of the technology. It is a maturation of the market. The labs are now where Oracle, SAP, and Salesforce arrived a decade after their first enterprise contracts. The product is the model. The business is the deployment.

Anyone budgeting an AI program in 2026 should price the deployment first and the license second. The lab's own capital allocation just confirmed which side of that equation is the bigger number.

If you are running an AI initiative right now, what share of your 2026 budget is the model, and what share is the work to operate it?

At Stern Technology Advisory, I advise mid-market and PE-backed companies on enterprise AI programs — the operating model, the vendor contracts, and the deployment scope that determine whether the budget delivers. If your team is pricing an AI initiative for 2026 and the gap between the license rate and the all-in cost is starting to surface, happy to compare notes.
