One of the world's most important companies seems set to implode and lose its entire team, after a bizarre chain of events beginning with the unexplained sacking of CEO Sam Altman. The future of AI – and maybe humanity itself – hangs in the balance.
We tend to focus on technology here at New Atlas, rather than boardroom drama or corporate shenanigans. But the absolute dumpster fire currently in progress at OpenAI – the world leader in Large Language Models (LLMs) and the company behind GPT-4, ChatGPT, DALL-E and many other transformational AI advances – is a matter of global significance.
Because AI is not an ordinary industry. It seeks to recreate, and then to surpass, the very thing which has gifted humanity its dominion over the world. And OpenAI is not an ordinary company.
What is OpenAI?
The way Elon Musk tells it in Walter Isaacson's recent biography, OpenAI was originally founded as a hedge against the extinction-level danger of superintelligent AI being created and controlled by ruthless, profit-driven capitalists – specifically Google co-founder Larry Page, formerly a close friend of Musk's.
Page, says Musk, displayed a "cavalier" attitude to the potential threats of Artificial General Intelligence (AGI), going so far as to call Musk a "speciesist" for letting concerns about the fate of humanity slow down the advancement of a next-level intelligence. And when Musk couldn't dissuade DeepMind founder and CEO Demis Hassabis from selling his company – the leading AI outfit at the time – to Google, he got together with then-Y Combinator president Sam Altman to form a competing entity.
OpenAI was founded in 2015, primarily on donations from Musk (although details quickly get slippery), as a non-profit entity whose stated goal was "to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact."
Elon Musk: "I am the reason @OpenAI exists"

@elonmusk invested 50 million dollars in OpenAI when it was founded. pic.twitter.com/ZsuXWROwLD
— Charly Wargnier (@DataChaz) May 19, 2023
One of its first hires was Chief Scientist Ilya Sutskever, a star AI researcher whom Musk says he managed to wrestle away from Google in what he described as "one of the toughest recruiting battles I've ever had, but ... the lynchpin for OpenAI being successful."
Musk left the board of OpenAI in 2018, partly due to a potential conflict of interest with Tesla's development of its own AI technology, and partly out of frustration with OpenAI's slow progress after the board rejected his proposal to take sole control of the company himself.
Then, in 2019, OpenAI set up the unusual structure it now operates under: OpenAI, Inc. remains a non-profit research organization, and the sole controlling shareholder of another entity, OpenAI Global LLC. The latter is described as a "capped" for-profit company. This setup allowed OpenAI to attract the investment it needed to accelerate the expensive, compute-intensive research it was doing, and to do things like grant stock options to attract and retain employees. Investors could theoretically make back many times what they put in, but the returns would be capped at a certain level – reportedly 100 times the original investment for first-round backers, with anything beyond that flowing back to the non-profit.
The non-profit would remain in charge, with the majority of the board barred from owning shares in the for-profit LLC, and its mission to work in the best interests of humanity intact. The commercial subsidiary of the business could use the power of capitalism to make sure the company wasn't left behind.
While Musk was aghast at the change, the results were spectacular. Investors, including Microsoft to the tune of US$11 billion, stepped up and kicked off an incredible run of progress, including the commercial release of ChatGPT, which quickly became the fastest-growing app in history and an uncanny insight into where this massively disruptive technology is headed.
What's the big deal about AI safety?
We've covered the existential risks of AGI at length before, but in brief: while today's LLMs, like ChatGPT, are already remarkably competent at writing, programming, understanding context, planning, communicating and even understanding the world visually through photos and videos, they're still janky and unreliable – and absolute toddlers compared to what's coming.
But even now, they operate at spine-chilling digital speed. GPT might not (I hope) write as well as I do, but given the right prompting, it can churn out technology articles in seconds that would take me hours.
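To make that concrete, here's a minimal sketch of what that kind of prompting looks like in practice. It assumes the OpenAI Python SDK (v1 or later) with an API key in the OPENAI_API_KEY environment variable; the model name and the prompt itself are purely illustrative:

```python
# A minimal sketch of prompting GPT-4 to draft an article via the OpenAI API.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
# The prompt and topic are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a technology journalist writing in a clear, factual style."},
        {"role": "user",
         "content": "Write a 300-word news brief on a new record in solar cell efficiency."},
    ],
)

print(response.choices[0].message.content)  # a serviceable draft, back in seconds
```

A few seconds and a few cents of API fees later, you have a draft that would have taken a human writer hours – which is precisely the economics the rest of this section is worried about.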
So once we reach the point of AGI – where a model like this can theoretically learn to do any task as well as a human – it'll do that task much faster than we can, and massively cheaper. The value of human intelligence will drop towards zero. There's barely a job or industry on Earth that doesn't stand to be completely upended.
Eventually, they're expected to achieve superintelligence – being able to do any task better than any human, including building and developing their own AI systems at a speed that's expected to accelerate so quickly it'll lead to a "singularity" in which technological progress goes more or less vertically upward on a chart.
By this point, you're talking about an unfathomable artificial mind, smarter than any human and with all the knowledge of humanity at its fingertips, thinking and acting millions of times faster than we can. We won't be chimpanzees next to a superintelligent AI; we'll be plants, powerless to stop a sufficiently advanced system or even figure out what it's doing until it's done.
Most researchers acknowledge that there's currently no way to make sure a superintelligent AI is "aligned" with human interests. And there's certainly no way of aligning human intelligence so that people don't simply take these tools, train them up and use them to do the worst things imaginable.
However silly these ideas might sound when you're trying for the sixth time to get ChatGPT to write something useful, these are the risks OpenAI was founded to protect humanity against. In the right hands, under the right incentive structures, AI could lead us to a post-scarcity promised land and solve all our problems. In the wrong hands, or under the wrong structures ... well, nigh-on anything – up to and far exceeding the worst fears of sci-fi writers – could happen.
So what the hell is going on?
On Friday, the non-profit OpenAI board – Chief Scientist Ilya Sutskever, Quora CEO Adam D'Angelo, entrepreneur Tasha McCauley and Helen Toner, strategy director for Georgetown University's Center for Security and Emerging Technology – fired co-founder Sam Altman as CEO over a Google Meet call. Chairman and President Greg Brockman was also removed from the board, upon which he quit the company.
In a "leadership transition" blog post, the board didn't elaborate on why it fired Altman, other than to say, "Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI."
Sam and I are shocked and saddened by what the board did today.
— Greg Brockman (@gdb) November 18, 2023
Let us first say thank you to all the incredible people who we have worked with at OpenAI, our customers, our investors, and all of those who have been reaching out.
We too are still trying to figure out exactly…
Investors, including Microsoft, Tiger Global Management and Sequoia Capital, were furious, and quickly attempted to facilitate Altman's reinstatement. Employees, too, including interim CEO Mira Murati, began strongly expressing their support for Altman's return.
The board appeared to relent under pressure from Microsoft CEO Satya Nadella, and Altman and Brockman returned to the OpenAI offices on Saturday to negotiate their return, contingent upon the removal of the board members. The board reportedly agreed, but didn't follow through by the agreed deadline.
Employees began to revolt, publishing an open letter demanding the directors' resignation. The letter, which threatens a mass resignation, has reportedly now been signed by more than 735 of the company's roughly 770-strong workforce.
Breaking: 505 of 700 employees @OpenAI tell the board to resign. pic.twitter.com/M4D0RX3Q7a
— Kara Swisher (@karaswisher) November 20, 2023
You may notice one particular sentence there: "You also informed the leadership team that allowing the company to be destroyed 'would be consistent with the mission.'" You may also notice the 12th signatory to that letter: Ilya Sutskever himself, who, upon reflection, had now also joined Team Altman.
I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
— Ilya Sutskever (@ilyasut) November 20, 2023
Under all this pressure, the board doubled down: it removed Mira Murati as interim CEO and, instead of reinstating Altman, threw a curveball, putting ex-Twitch CEO Emmett Shear in the job. Here's Shear, on a "recent podcast," saying that AI is "like someone invented a way to make, like, 10x more powerful fusion bombs out of sand and bleach, that, like, anyone could do at home. It's terrifying ... When I first realized, it was f*cking heart-stopping."
Emmett Shear, the new CEO of OpenAI on a recent podcast.pic.twitter.com/6LGEOinn7r
— Rowan Cheung (@rowancheung) November 20, 2023
So one interpretation of events is this: the OpenAI board has decided there's enormous risk in the way Altman was moving to accelerate and commercialize development of GPT and the company's other models. A risk to humanity itself, that the board sought to temper by putting somebody in charge who would pump the brakes, not the gas. And indeed, under their charter, that's exactly what the company structure was set up to allow.
But the decision, if this very charitable interpretation is indeed the case, could prove incredibly short-sighted. Altman and Brockman immediately agreed to go and work for Microsoft, where they'll have virtually unlimited resources and money to pursue AGI without a non-profit board to restrain them.
We remain committed to our partnership with OpenAI and have confidence in our product roadmap, our ability to continue to innovate with everything we announced at Microsoft Ignite, and in continuing to support our customers and partners. We look forward to getting to know Emmett…
— Satya Nadella (@satyanadella) November 20, 2023
Satya Nadella promised the new Microsoft AI division would set "a new pace for innovation," calling out none-too-subtly to the "go faster" people at OpenAI who might want to jump ship given the new "go slower" CEO. Nearly the entire OpenAI team looks like it may go with Altman and Brockman, and Microsoft is waiting with job offers for every single team member, as the rest of Silicon Valley licks its chops at the chance to poach bulk talent from the leading company in AI. The OpenAI board may soon find itself the board of nothing.
From many perspectives, this is an unmitigated disaster. If OpenAI was supposed to be the "good guys" guiding a potentially very dangerous technology from the front, well, you can't do that without a team and the lights on. By failing to fully explain its decision, the board has demonstrably done a terrible job convincing the team it's doing the right thing.
Then there are those who have come to rely on the company. Developers and business owners who have staked everything on GPT-based products have been utterly blindsided, and will have no idea what to expect going forward.
Here is a small piece of evidence for the “Ilya Pushed The AGI Panic Button” Theory:
— AI Notkilleveryoneism Memes ⏸️ (@AISafetyMemes) November 18, 2023
At APEC, one day before @Sama was fired:
Interviewer: What is the most remarkable surprise you expect to happen in 2024?
Sam Altman: "The model capability will have taken such a leap forward… https://t.co/9QBqnpyuNE pic.twitter.com/xT6oatP4MM
And while Microsoft might look like it's coming up roses, gaining unfettered access to some of the best minds in AI, the picture's not that simple. If the new AI division under Altman and Brockman goes ahead – and many believe it won't go far – it'll take time to get a product out that competes with GPT-4. As anyone who's followed AI lately knows, weeks can feel like years in this space.
Plus, a little separation can be nice; Microsoft hasn't had to claim responsibility when users find ways to get its GPT-based tools to say something racist or offensive. And let's not forget the US$11 billion the company has already pledged to shovel into OpenAI – an investment in what's rapidly starting to look like a complete dumpster fire. If it continues to burn, Microsoft will be tied to a dead or dying GPT until such time as it gets its own models up.
There are reports that Altman and Brockman could still be returned to their posts, with only two board members needing to flip to make it happen, and others that the pair might skip Microsoft and start up a new venture; there certainly won't be a problem raising capital.
Whatever transpires, it does feel like there's a before and an after to the events of this weekend. We may never get the real explanation for Altman's firing – although if the future of humanity is at stake, you'd think a press release or interview would probably be courteous.
Source: OpenAI