
The exclusive compute and revenue-sharing arrangement that has defined Microsoft’s AI positioning since 2023 is over. Microsoft’s announcement[1] on April 27, 2026 confirms that OpenAI can now route workloads to any cloud provider, that its IP license to Microsoft is no longer exclusive, and that the AGI provisions that gave Microsoft a governance hedge are gone. Azure remains OpenAI’s primary cloud partner. It is no longer the only one.

What Changed on April 27

Microsoft’s blog post[1] outlines four concrete changes.

License exclusivity removed. Microsoft’s license to OpenAI’s IP, previously exclusive, now runs as a non-exclusive arrangement through 2032 and covers both models and products. OpenAI can now license the same IP to other parties.

First-refusal, not lock-in. OpenAI products will continue to ship first on Azure, with a carve-out: if Microsoft cannot support the necessary capabilities, OpenAI can go elsewhere without penalty. That clause matters. It converts what was a contractual guarantee into a performance obligation Azure has to earn every release cycle.

Revenue share restructured. Microsoft will no longer pay a revenue share to OpenAI. OpenAI’s revenue share payments to Microsoft continue through 2030 at the same percentage but are now subject to a total cap, and are no longer tied to OpenAI’s technology progress milestones. The revenue flows in the same direction; the ceiling and the trigger conditions changed.

AGI provisions deleted. The original agreement included clauses that would have given Microsoft governance standing if OpenAI declared AGI achieved. Those provisions are now removed[2].

None of this constitutes a breakup. Microsoft retains approximately a 27% stake in OpenAI, reportedly worth around $135 billion[3] against a $730 billion company valuation following a $110 billion funding round in February 2026. The two companies still intend to build products together. What changed is the structure of compulsion: OpenAI is no longer obligated to route its cloud spend through Azure, and Microsoft is no longer the sole entity licensed to ship OpenAI models at scale.

Azure’s AI Moat: How Exclusivity Built Enterprise Lock-In

Between 2023 and early 2026, Azure’s pitch to enterprise AI buyers had concrete backing that AWS and Google Cloud could not replicate: if you wanted GPT-4 class models at production scale with SLA guarantees, you were buying Azure. That exclusivity drove procurement decisions, shaped multi-year enterprise agreements, and influenced the architecture of applications that tied identity, storage, and inference together inside Azure’s stack.

The moat worked because it was structural. A company building a RAG pipeline on Azure OpenAI Service was binding to Azure’s networking, managed identity, private endpoints, and VNet integrations alongside the model itself. Model exclusivity opened the door; the surrounding infrastructure services provided the lock. Microsoft’s stock price reflects how the market valued that arrangement: shares dropped as much as 5% intraday[3] on April 27 before recovering, closing around $423 per share. The exclusivity premium got written down in real time.
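That binding is visible at the wire level. As a minimal sketch (the resource, deployment, and API-version strings below are hypothetical), the same chat-completion call is addressed quite differently on Azure OpenAI than on the provider-neutral API:

```python
# Sketch: the same chat-completion request, expressed two ways.
# Resource, deployment, and api-version values are hypothetical.

def azure_openai_url(resource: str, deployment: str, api_version: str) -> str:
    # Azure OpenAI routes requests through a per-resource endpoint and a
    # customer-named *deployment*. Both are Azure-side objects tied to the
    # subscription, region, and networking configuration.
    return (f"https://{resource}.openai.azure.com/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")

def openai_url() -> str:
    # The provider-neutral endpoint names the model in the request body
    # instead; nothing in the URL binds to a specific cloud estate.
    return "https://api.openai.com/v1/chat/completions"

print(azure_openai_url("contoso-prod", "gpt4-east", "2024-06-01"))
print(openai_url())
```

The deployment-shaped URL is why lifting model exclusivity alone does not lift the lock: application code, identity, and private networking all reference the Azure-side object, not the model.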

The Procurement Recalculation

The question enterprise buyers now face is a boring but expensive one: was the Azure commitment driven by model access, or by the infrastructure?

For buyers who chose Azure primarily because GPT-4o and its successors were only available there, the calculus is now open. OpenAI’s products will be available on AWS, Google Cloud, and Oracle. If pricing or performance terms are better elsewhere, there is no longer a contractual reason to stay.

For buyers who chose Azure because of Active Directory integration, compliance certifications, data residency requirements, or proximity to their existing Azure data estate, the announcement changes little. Those dependencies are sticky in ways model access never was.

The segment that should be actively repricing is the middle group: organizations that over-weighted AI model exclusivity in their procurement analysis, signed Azure EA extensions on the assumption of continued lock-in, or architected inference pipelines around Azure OpenAI Service specifically to be first in line for new model releases. The first-mover carve-out (“ships first on Azure unless Microsoft cannot support the necessary capabilities”) matters less if AWS and Google are on-boarding the same models within weeks at competitive pricing.

There is also a subtler repricing question for companies that built internal AI platforms on Azure OpenAI Service under the assumption the service would maintain a generational lead on model access. Any Azure AI Platform team that justified its build-versus-buy decision partly by citing exclusive access to frontier models should revisit that framing before the next budget cycle.

The Numbers: Revenue Flows and the 2030 Cap

Microsoft reported $7.6 billion from OpenAI[2] in its most recent quarter (January 2026 reporting), though the precise accounting is reported at medium confidence and should be treated as directional. The revenue share from OpenAI to Microsoft continues through 2030 at the existing percentage, now capped in total.

The cap changes the financial character of the arrangement. The original deal’s revenue share functioned partly as a return mechanism on Microsoft’s investment: OpenAI grows, Microsoft’s share grows proportionally. With a ceiling on total payments and Microsoft’s own share obligation dropped, the structure looks less like a guaranteed return on capital and more like a time-limited licensing arrangement with a fixed maximum. That distinction matters for OpenAI’s ongoing for-profit restructuring: potential investors evaluating OpenAI’s books can now model the Microsoft payment as a finite liability with a known ceiling, rather than an open-ended obligation tied to model performance milestones.
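The difference between an open-ended claim and a finite liability can be illustrated with a toy model; every figure below (share rate, cap, revenue path) is invented for the sketch, since the actual terms are not public:

```python
# Hypothetical illustration of a capped revenue share.
# All numbers (rate, cap, revenue path) are invented, not the deal's terms.

def capped_payments(revenues: list[float], rate: float, cap: float) -> list[float]:
    """Per-period payments under a percentage share with a cumulative ceiling."""
    paid = 0.0
    out = []
    for r in revenues:
        pay = min(r * rate, cap - paid)  # payment stops once the cap is hit
        paid += pay
        out.append(pay)
    return out

# Growing revenue: uncapped, the share would pay 20 + 40 + 80.
# The cap truncates the final payment, so the maximum exposure is known.
print(capped_payments([100, 200, 400], rate=0.2, cap=100))
```

This is what "finite liability with a known ceiling" means for an investor model: the payment stream can be bounded in advance regardless of how fast revenue grows.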

The AGI clause removal is also financially relevant. That provision created a scenario in which Microsoft’s rights and governance standing could have been affected by an internal determination that AGI had been achieved. Removing it simplifies the cap table and eliminates a structural ambiguity that any S-1 process would have had to explain at length. Whether the removal benefited Microsoft or OpenAI more depends on how each side valued the clause, but its presence in a prospectus would have been awkward for both parties.

Competitive Landscape: AWS, Google Cloud, Oracle, and CoreWeave

OpenAI had already been building multi-cloud infrastructure independent of the partnership terms. According to Yahoo Finance reporting[2], OpenAI has a $50 billion infrastructure deal with Amazon and is routing $12 billion into CoreWeave. The April 27 agreement makes these relationships commercially legitimate under the partnership terms rather than exceptions carved out around them.

For Google Cloud, the situation is structurally odd. Its own Gemini models will now compete on the same platform against officially available OpenAI models. Historically, enterprise buyers on Google Cloud who wanted OpenAI models had limited options; that changes. Whether Google treats this as an opportunity to attract OpenAI workloads or as an implicit acknowledgment that its own model suite does not fully satisfy the market depends on execution over the next several quarters.

For AWS, the OpenAI relationship fits into Bedrock’s stated positioning as a cloud for any foundation model. AWS has been explicit that it does not want to be a single-model vendor. Gaining official OpenAI workloads validates that argument.

For Oracle and CoreWeave, both already had commercial relationships with OpenAI before April 27. The amendment consolidates what was already happening operationally.

What Happens to Azure OpenAI Service

The product continues. Azure OpenAI Service is an Azure-native managed service with its own SLAs, private endpoint support, RBAC, and compliance certifications. None of that disappears because the partnership exclusivity changed at the contractual level.

What changes is the competitive frame. Azure OpenAI Service’s pricing has historically been set in a market where it held a monopoly on production-scale GPT deployments. With AWS and Google Cloud now able to offer the same underlying models, Azure will need to compete on infrastructure merit: latency, data residency, integrated tooling, enterprise support. Not on model access alone.

The pitch shifts from “only place to get it” to “best place to run it.” That is a harder sell when AWS carries a $50 billion OpenAI infrastructure commitment and Google Cloud brings both its own competitive model stack and now official OpenAI access. It is not an impossible sell; Microsoft’s Azure AI infrastructure is deep and its enterprise relationships are real. But the differentiation argument now requires evidence, not contract language.

The frontier-model premium that Azure priced into its AI services was always partly a function of scarcity. That scarcity is now negotiated, not guaranteed. How much of Azure’s AI revenue was paying for exclusive access versus paying for actual infrastructure quality will become visible over the next several quarters as enterprise buyers test alternatives they previously did not have.

Frequently Asked Questions

How does this April 2026 amendment differ from the 2025 reports about Microsoft losing exclusive cloud provider status?

The 2025 reports were leaks and speculation about strategic tension, whereas the April 27 announcement is a formal, negotiated amendment with specific contractual boundaries: a non-exclusive IP license through 2032, a revenue-share cap through 2030, and the explicit deletion of AGI provisions. Earlier coverage lacked these precise terms and dates.

What should enterprises already mid-migration to Azure for OpenAI access do differently?

Finish the migration if the driver was genuine infrastructure fit—Active Directory, compliance, or data residency—but immediately decouple inference architecture from Azure-exclusive dependencies like Azure OpenAI Service-specific SDK bindings and private endpoints. OpenAI’s $50 billion Amazon infrastructure deal and $12 billion CoreWeave commitment mean multi-cloud inference is already operationally real, so cloud-agnostic architecture preserves optionality if Azure’s pricing premium erodes.
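A minimal sketch of that decoupling, assuming each provider can be wrapped in a chat-style adapter (the class, backend names, and payload shapes here are illustrative, not any real SDK):

```python
# Sketch of provider-agnostic inference routing. Names and payload
# shapes are illustrative; real adapters would wrap each vendor's SDK.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ChatRequest:
    """Provider-neutral request the application codes against."""
    model: str
    prompt: str

def azure_adapter(req: ChatRequest) -> dict:
    # Azure OpenAI addresses a customer-named *deployment*, so the model
    # name becomes a lookup into deployment config, not a request field.
    return {"backend": "azure", "deployment": f"dep-{req.model}", "input": req.prompt}

def bedrock_adapter(req: ChatRequest) -> dict:
    # Bedrock-style backends take the model identifier in the payload.
    return {"backend": "bedrock", "modelId": req.model, "input": req.prompt}

ADAPTERS: Dict[str, Callable[[ChatRequest], dict]] = {
    "azure": azure_adapter,
    "bedrock": bedrock_adapter,
}

def route(provider: str, req: ChatRequest) -> dict:
    """Swapping providers becomes a registry change, not an app rewrite."""
    return ADAPTERS[provider](req)

print(route("azure", ChatRequest("gpt-4o", "hello")))
```

The point of the indirection is optionality: application code depends only on the neutral request type, so pricing-driven moves between clouds touch one adapter rather than every call site.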

What governance rights does Microsoft still hold after the AGI provisions were removed?

Microsoft retains standard major-shareholder protections through its roughly 27% stake, reportedly worth about $135 billion, which typically includes board representation and protective provisions over major transactions. What it lost is the unique contractual trigger that would have automatically escalated Microsoft’s governance standing if OpenAI internally declared AGI achieved, a contingency that would have created serious ambiguity during any IPO or restructuring process.

Could the 2030 revenue-share cap backfire on Microsoft?

Yes. Once OpenAI’s cumulative payments hit the cap, likely accelerated by the revenue growth implied by its $730 billion valuation, Microsoft loses its percentage claim on additional revenue while still bearing the operational cost of remaining OpenAI’s primary cloud partner. After 2030, OpenAI keeps every marginal dollar, giving it a stronger incentive to route new workloads to AWS or CoreWeave than to expand Azure consumption.

Sources

  1. The Next Phase of the Microsoft-OpenAI Partnership (vendor), accessed 2026-04-28
  2. Microsoft, OpenAI Rework AI Deal (analysis), accessed 2026-04-28
  3. Microsoft Stock Price Drops Sharply (analysis), accessed 2026-04-28
