OpenAI's Misalignment and Microsoft's Gain

This article discusses the leadership upheaval at OpenAI — Sam Altman's firing, Emmett Shear's appointment, and Altman and his team joining Microsoft — and analyzes OpenAI's non-profit model, the impact of ChatGPT, the Microsoft–OpenAI relationship, and shifts in the industry landscape. Key points:
1. OpenAI leadership changes: On Friday, then-CEO Sam Altman was fired by OpenAI's board; rumors of his return circulated over the weekend; OpenAI ultimately hired Emmett Shear as CEO, and on Sunday night Altman and his team announced they were joining Microsoft.
2. OpenAI's non-profit model: Founded in 2015 as a non-profit, the organization later created OpenAI Global with Microsoft as a minority owner; it may generate profit but must serve the non-profit's mission.
3. ChatGPT's impact: Released at the end of November 2022, it has over 100 million weekly users and over $1 billion in revenue; it transformed the conversation about AI but also deepened ideological divisions inside OpenAI.
4. Microsoft and the board: Microsoft invested heavily in its partnership with OpenAI, while OpenAI's board, given its non-profit charter, prioritized the mission in its decisions, leading to Altman's ouster.
5. Questions about Altman: The board said he was not consistently candid with it and had lost its trust; his move to Microsoft has prompted speculation about his motives.
6. Industry shifts: Microsoft's position in AI has strengthened, Google may need to change course, and Anthropic faces challenges as an independent entity.
7. The nature of the AI industry: In the short term AI is a sustaining innovation whose primary beneficiaries are large incumbents; the key for big companies is to leverage their scale to acquire or to fast-follow.
I have, as you might expect, authored several versions of this Article, both in my head and on the page, as the most extraordinary weekend of my career has unfolded. To briefly summarize:
On Friday, then-CEO Sam Altman was fired from OpenAI by the board that governs the non-profit; then-President Greg Brockman was removed from the board and subsequently resigned.
Over the weekend rumors surged that Altman was negotiating his return, only for OpenAI to hire former Twitch CEO Emmett Shear as CEO.
Finally, late Sunday night, Satya Nadella announced via tweet that Altman and Brockman, “together with colleagues”, would be joining Microsoft.
This is, quite obviously, a phenomenal outcome for Microsoft. The company already has a perpetual license to all OpenAI IP (short of artificial general intelligence), including source code and model weights; the question was whether it would have the talent to exploit that IP if OpenAI suffered the sort of talent drain that was threatened upon Altman and Brockman’s removal. Indeed they will, as a good portion of that talent seems likely to flow to Microsoft; you can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit.
Microsoft’s gain, meanwhile, is OpenAI’s loss, which is dependent on the Redmond-based company for both money and compute: the work its employees will do on AI will either be Microsoft’s by virtue of that perpetual license, or Microsoft’s directly because said employees joined Altman’s team. OpenAI’s trump card is ChatGPT, which is well on its way to achieving the holy grail of tech — an at-scale consumer platform — but if the reporting this weekend is to be believed, OpenAI’s board may have already had second thoughts about the incentives ChatGPT placed on the company (more on this below).
The biggest loss of all, though, is a necessary one: the myth that anything but a for-profit corporation is the right way to organize a company.
OpenAI’s Non-Profit Model
OpenAI was founded in 2015 as a “non-profit intelligence research company.” From the initial blog post:
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.
I was pretty cynical about the motivations of OpenAI’s founders, at least Altman and Elon Musk; I wrote in a Daily Update:
Elon Musk and Sam Altman, who head organizations (Tesla and YCombinator, respectively) that look a lot like the two examples I just described of companies threatened by Google and Facebook’s data advantage, have done exactly that with OpenAI, with the added incentive of making the entire thing a non-profit; I say “incentive” because being a non-profit is almost certainly a lot less about being altruistic and a lot more about the line I highlighted at the beginning: “We hope this is what matters most to the best in the field.” In other words, OpenAI may not have the best data, but at least it has a mission structure that may help idealist researchers sleep better at night. That OpenAI may help balance the playing field for Tesla and YCombinator is, I guess we’re supposed to believe, a happy coincidence.
Whatever Altman and Musk’s motivations, the decision to make OpenAI a non-profit wasn’t just talk: the company is a 501(c)(3); you can view its annual IRS filings here. The first question on Form 990 asks the organization to “Briefly describe the organization’s mission or most significant activities”; the first filing in 2016 stated:
OpenAI’s goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. We think that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI’s benefits are as widely and evenly distributed as possible. We’re trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way.
Two years later, and the commitment to “openly share our plans and capabilities along the way” was gone; three years after that and the goal of “advanc[ing] digital intelligence” was replaced by “build[ing] general-purpose artificial intelligence”.
In 2018 Musk, according to a Semafor report earlier this year, attempted to take over the company, but was rebuffed; he left the board and, more critically, stopped paying for OpenAI’s operations. That led to the second critical piece of background: faced with the need to pay for massive amounts of compute power, Altman, now firmly in charge of OpenAI, created OpenAI Global, LLC, a capped profit company with Microsoft as minority owner. This image of OpenAI’s current structure is from their website:
OpenAI Global could raise money and, critically to its investors, make it, but it still operated under the auspices of the non-profit and its mission; OpenAI Global’s operating agreement states:
The Company exists to advance OpenAI, Inc.’s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company’s duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit. The Company may never make a profit, and the Company is under no obligation to do so. The Company is free to re-invest any or all of the Company’s cash flow into research and development activities and/or related expenses without any obligation to the Members.
Microsoft, despite this constraint on OpenAI Global, was not only an investor, but also a customer, incorporating OpenAI into all of its products.
ChatGPT Tribes
The third critical piece of background is the most well-known, and what has driven those ambitions to new heights: ChatGPT was released at the end of November 2022, and it has taken the world by storm. Today ChatGPT has over 100 million weekly users and over $1 billion in revenue; it has also fundamentally altered the conversation about AI for nearly every major company and government.
What was most compelling to me, though, was the possibility I noted above, in which ChatGPT becomes the foundation of a new major consumer tech company, the most valuable and most difficult kind of company to build. I wrote earlier this year in The Accidental Consumer Tech Company :
When it comes to meaningful consumer tech companies, the product is actually the most important. The key to consumer products is efficient customer acquisition, which means word-of-mouth and/or network effects; ChatGPT doesn’t really have the latter (yes, it gets feedback), but it has an astronomical amount of the former. Indeed, the product that ChatGPT’s emergence most reminds me of is Google: it simply was better than anything else on the market, which meant it didn’t matter that it came from a couple of university students (the origin stories are not dissimilar!). Moreover, just like Google — and in opposition to Zuckerberg’s obsession with hardware — ChatGPT is so good people find a way to use it. There isn’t even an app! And yet there is now, a mere four months in, a platform.
The platform I was referring to was ChatGPT plugins; it’s a compelling concept with a UI that didn’t quite work, and it was only eight months later at OpenAI’s first developer day that the company announced GPTs, their second take at being a platform. Meanwhile, Altman was reportedly exploring new companies outside of the OpenAI purview to build chips and hardware, apparently without the board’s knowledge. Some combination of these factors, or perhaps something else not yet reported, was the final straw for the board, which, led by Chief Scientist Ilya Sutskever, deposed Altman over the weekend. The Atlantic reported:
Altman’s dismissal by OpenAI’s board on Friday was the culmination of a power struggle between the company’s two ideological extremes — one group born from Silicon Valley techno-optimism, energized by rapid commercialization; the other steeped in fears that AI represents an existential risk to humanity and must be controlled with extreme caution. For years, the two sides managed to coexist, with some bumps along the way.
This tenuous equilibrium broke one year ago almost to the day, according to current and former employees, thanks to the release of the very thing that brought OpenAI to global prominence: ChatGPT. From the outside, ChatGPT looked like one of the most successful product launches of all time. It grew faster than any other consumer app in history, and it seemed to single-handedly redefine how millions of people understood the threat — and promise — of automation. But it sent OpenAI in polar-opposite directions, widening and worsening the already present ideological rifts. ChatGPT supercharged the race to create products for profit as it simultaneously heaped unprecedented pressure on the company’s infrastructure and on the employees focused on assessing and mitigating the technology’s risks. This strained the already tense relationship between OpenAI’s factions — which Altman referred to, in a 2019 staff email, as “tribes.”
Altman’s tribe — the one that was making OpenAI into much more of a traditional tech company — is certainly the one that is more familiar to people in tech, including myself. I even had a paragraph in my Article about the developer day keynote that remarked on OpenAI’s transition, which I unfortunately edited out. Here is what I wrote:
It was around this time that I started to, once again, bemoan OpenAI’s bizarre corporate structure. As a long-time Silicon Valley observer it is enjoyable watching OpenAI follow the traditional startup path: the company is clearly in the rapid expansion stage where product managers are suddenly considered useful, as they occupy that sweet spot of finding and delivering low-hanging fruit for an entity that doesn’t yet have the time or moat to tolerate kingdom building and feature creep.
What gives me pause is that the goal is not an IPO, retiring to a yacht, and giving money to causes that do a better job of soothing the guilt of being fabulously rich than actually making the world a better place. There is something about making money and answering to shareholders that holds the more messianic impulses in check; when I hear that Altman doesn’t own any equity in OpenAI that makes me more nervous than relieved. Or maybe I’m just biased because I won’t have S-1s or 10-Ks to analyze.
Obviously I regret the edit, but then again, I didn’t realize how prescient my underlying nervousness about OpenAI’s structure would prove to be, largely because I clearly wasn’t worried enough.
Microsoft vs. the Board
Much of the discussion on tech Twitter over the weekend has been shock that a board would incinerate so much value. First off, Altman is one of the Valley’s most-connected executives, and a prolific fund-raiser and dealmaker; second is the fact that several OpenAI employees already resigned, and more are expected to follow in the coming days. OpenAI may have had two tribes previously; it’s reasonable to assume that going forward it will only have one, led by a new CEO in Shear who puts the probability of AI doom at between 5 and 50 percent and has advocated a significant slowdown in development.
Here’s the reality of the matter, though: whether or not you agree with the Sutskever/Shear tribe, the board’s charter and responsibility is not to make money. This is not a for-profit corporation with a fiduciary duty to its shareholders; indeed, as I laid out above, OpenAI’s charter specifically states that it is “unconstrained by a need to generate financial return”. From that perspective the board is in fact doing its job, as counterintuitive as that may seem: to the extent the board believes that Altman and his tribe were not “build[ing] general-purpose artificial intelligence that benefits humanity” it is empowered to fire him; they do, and so they did.