LLMs & Generative AI

Microsoft gains OpenAI deal allowing independent AGI pursuit or partnerships

2 min read

When Microsoft signed the latest amendment with OpenAI, it cleared a snag that had kept the two companies locked into each other's roadmaps. Unlike the old terms, the new language lets Microsoft plot its own AGI path, whether it goes it alone or teams up with someone else. The partnership isn't gone; the link remains, but Microsoft can now launch its own projects without asking OpenAI for permission at every turn.

That matters because it sharpens who actually owns the underlying IP and who can claim a breakthrough. It also hints that the pace of building more capable systems could pick up, though it's still unclear how quickly that will happen. Observers, including my colleague Hayden Field, have weighed in on the deal's legal footing and Microsoft's right to tap OpenAI's assets.

Under the new deal, Microsoft can now “independently pursue AGI alone or in partnership with third parties.” And, as my colleague Hayden Field notes, “Microsoft is perfectly within its legal rights to use OpenAI’s IP to develop its own AGI and attempt to win the race.”

But Microsoft AI CEO Mustafa Suleyman has a vision for "humanist" superintelligence with three main applications: serving as an AI companion that will help people "learn, act, be productive, and feel supported," offering assistance in the healthcare industry, and creating "new scientific breakthroughs" in clean energy.

Related Topics: #Microsoft #OpenAI #AGI #AI #IP #HaydenField #SuperintelligentAI #HumanistSuperintelligence #Partnership

Microsoft’s new “humanist superintelligence” team says it wants AI that “serves humanity” and keeps us “at the top of the food chain.” The promise that a super-intelligent system won’t hurt people seems to rely mostly on internal design goals rather than any proven safeguards. Under the fresh OpenAI agreement, Microsoft can go it alone on AGI or bring in partners, and it’s allowed to tap OpenAI’s intellectual property. That freedom might speed things up, but it also leaves us wondering how independent oversight will work when the same firm holds the code and sets the strategy.

Suleyman's blog post talks up a human-centric mission, yet the actual mechanisms to enforce that mission remain vague. Success will likely require more than bold slogans: transparent testing, outside validation, and clear lines of accountability, none of which were spelled out in the announcement. It's still unclear whether Microsoft can actually deliver a superintelligence that truly "won't be terrible for humanity," and the wider implications of using OpenAI's IP in this way haven't been fully explored.

Common Questions Answered

What new freedom does the Microsoft‑OpenAI deal give Microsoft regarding AGI development?

The revised contract allows Microsoft to independently pursue artificial‑general intelligence, either on its own or in partnership with third parties. It also grants Microsoft the legal right to use OpenAI's intellectual property when building its own AGI solutions.

How does the new agreement change Microsoft's relationship to OpenAI's roadmap?

Previously, Microsoft was tied to OpenAI's development timeline, but the new language removes that hurdle, letting Microsoft chart its own AGI course. While the partnership remains, Microsoft is no longer obligated to follow OpenAI's roadmap exclusively.

What is meant by Microsoft’s “humanist superintelligence” team, as described in the article?

The team is tasked with creating a superintelligent AI that serves humanity, keeping people "at the top of the food chain" by focusing on learning, productivity, and well‑being. Its design goals emphasize human‑centric outcomes, though the article notes these safeguards are still largely internal concepts.

What potential risks does the article highlight about Microsoft using OpenAI's IP for AGI projects?

The article warns that while leveraging OpenAI's IP could accelerate progress, it also raises questions about safety and control, since the promised safeguards rest on internal design goals rather than proven mechanisms. This double-edged dynamic could yield faster breakthroughs but also heightened concerns over unintended harms.