
Utter chaos at OpenAI puts GPT in jeopardy – why should you care?
[Gallery – 3 images: "Who are the players in the ongoing turmoil at OpenAI"; "OpenAI co-founder and recently fired CEO Sam Altman returns to the office wearing a guest pass for the 'first and last time' as he attempts to negotiate his reinstatement"; "OpenAI Co-Founder Sam Altman speaks onstage during TechCrunch Disrupt San Francisco 2019"]

One of the world's most important companies seems set to implode and lose its entire team, after a bizarre chain of events beginning with the unexplained sacking of CEO Sam Altman. The future of AI – and maybe humanity itself – hangs in the balance.

We tend to focus on technology here at New Atlas, rather than boardroom drama or corporate shenanigans. But the absolute dumpster fire currently in progress at OpenAI, the world leader in Large Language Models (LLMs) and the company behind GPT-4, ChatGPT, DALL-E and many other transformational AI advancements, is a matter of global significance.

Because AI is not an ordinary industry. It seeks to recreate, and then to surpass, the very thing which has gifted humanity its dominion over the world. And OpenAI is not an ordinary company.

What is OpenAI?

The way Elon Musk tells it in Walter Isaacson's recent biography, OpenAI was originally founded as a hedge against the extinction-level danger of superintelligent AI being created and controlled by ruthless, profit-driven capitalists – specifically Google co-founder Larry Page, formerly a close friend of Musk's.

Page, says Musk, displayed a "cavalier" attitude to the potential threats of Artificial General Intelligence (AGI), going so far as to call Musk a "speciesist" for letting concerns about the fate of humanity slow down the advancement of a next-level intelligence. And when Musk couldn't dissuade DeepMind founder and CEO Demis Hassabis from letting Google acquire his company – the leading AI company at the time – he got together with then-Y Combinator president Sam Altman to form a competing entity.

OpenAI was founded in 2015, primarily on donations from Musk (although details quickly get slippery), as a non-profit entity whose stated goal was "to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact."

One of its first hires was Chief Scientist Ilya Sutskever, a star AI researcher whom Musk says he wrestled away from Google and DeepMind in what he described as "one of the toughest recruiting battles I've ever had, but ... the lynchpin for OpenAI being successful."

Musk left the board of OpenAI in 2018, partially due to a potential conflict of interest with Tesla's development of its own AI technology, and partially out of frustration with OpenAI's slow progress after the board rejected his proposal to take sole leadership of the company.

Then, in 2019, OpenAI set up the unusual structure it now operates under: OpenAI, Inc. remains a non-profit research organization, and the sole controlling shareholder of another entity: OpenAI Global, LLC. The latter is described as a "capped" for-profit company. This setup allowed OpenAI to attract the investment it needed to accelerate its expensive, compute-intensive research, and to do things like grant stock options to attract and retain employees. Investors could theoretically make back many times what they put in, but the returns would be capped at a certain level.

The non-profit would remain in charge, with the majority of the board barred from owning shares in the for-profit LLC, and its mission to work in the best interests of humanity intact. The commercial subsidiary of the business could use the power of capitalism to make sure the company wasn't left behind.

While Musk was aghast at the change, the results were spectacular. Investors, including Microsoft to the tune of US$11 billion, stepped up and kicked off an incredible run of progress, including the commercial release of ChatGPT, which quickly became the fastest-growing app in history and an uncanny insight into where this massively disruptive technology is headed.

What's the big deal about AI safety?

We've covered the existential risks of AGI at length before, but in brief: while today's LLMs, like ChatGPT, are already remarkably competent at writing, programming, understanding context, planning, communicating and even understanding the world visually through photos and videos, they're still janky and unreliable – and absolute toddlers compared to what's coming.

But even now, they operate at spine-chilling digital speed. GPT might not (I hope) write as well as I do, but given the right prompting, it can churn out technology articles in seconds that would take me hours.

So once we reach the point of AGI – where a model like this can theoretically learn to do any task as well as a human – it'll do those tasks much faster than we can, and massively cheaper. The value of human intelligence will drop towards zero. There's barely a job or industry on Earth that doesn't stand to be completely upended.

Eventually, they're expected to achieve superintelligence – being able to do any task better than any human, including building and developing their own AI systems at a speed that's expected to accelerate so quickly it'll lead to a "singularity" in which technological progress goes more or less vertically upward on a chart.

By this point, you're talking about an unfathomable artificial mind, smarter than any human and with all the knowledge of humanity at its fingertips, thinking and acting millions of times faster than we can. We won't be chimpanzees next to a superintelligent AI; we'll be plants, powerless to stop a sufficiently advanced system or even figure out what it's doing until it's done.

Most researchers acknowledge that there's currently no way to make sure a superintelligent AI is "aligned" with human interests. And there's certainly no way of aligning human intelligence so that people don't simply take these tools, train them up and use them to do the worst things imaginable.

However silly these ideas might sound when you're trying for the sixth time to get ChatGPT to write something useful, these are the risks OpenAI was founded to protect humanity against. In the right hands, under the right incentive structures, AI could lead us to a post-scarcity promised land and solve all our problems. In the wrong hands, or under the wrong structures ... well, nigh-on anything – up to and far exceeding the worst fears of sci-fi writers – could happen.

So what the hell is going on?

On Friday, the non-profit OpenAI board – Chief Scientist Ilya Sutskever, Quora CEO Adam D'Angelo, entrepreneur Tasha McCauley and Helen Toner, strategy director for Georgetown University's Center for Security and Emerging Technology – fired co-founder Sam Altman as CEO over a Google Meet call. Chairman and President Greg Brockman was also removed from the board, upon which he quit the company.

In a "leadership transition" blog post, the board didn't elaborate on why it fired Altman, other than to say, "Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI."

Investors, including Microsoft, Tiger Global Management and Sequoia Capital, were furious, and quickly attempted to facilitate Altman's reinstatement. Employees, too, including interim CEO Mira Murati, began strongly expressing their support for Altman's return.

The board appeared to relent under pressure from Microsoft CEO Satya Nadella, and Altman and Brockman returned to the OpenAI offices on Saturday to negotiate their return, contingent upon the removal of the board members. The board reportedly agreed, but didn't follow through by the agreed deadline.

Employees began to revolt, publishing an open letter demanding the resignation of the directors. The letter, which threatens a mass resignation, has reportedly now been signed by more than 735 of the company's ~770-strong workforce.

You may notice one particular sentence there: "You also informed the leadership team that allowing the company to be destroyed 'would be consistent with the mission.'" You may also notice the 12th signatory to that letter: Ilya Sutskever himself, who, upon reflection, had now joined Team Altman.

Under all this pressure, the board doubled down, removed Mira Murati as interim CEO, and instead of reinstating Altman, threw a curveball: they put ex-Twitch CEO Emmett Shear in the job. Here's Shear on a recent podcast, saying that AI is "like someone invented a way to make, like, 10x more powerful fusion bombs out of sand and bleach, that, like, anyone could do at home. It's terrifying ... When I first realized, it was f*cking heart-stopping."

So one interpretation of events is this: the OpenAI board has decided there's enormous risk in the way Altman was moving to accelerate and commercialize development of GPT and the company's other models. A risk to humanity itself, that the board sought to temper by putting somebody in charge who would pump the brakes, not the gas. And indeed, under their charter, that's exactly what the company structure was set up to allow.

But the decision, if this very charitable interpretation is indeed the case, could prove incredibly short-sighted. Altman and Brockman immediately agreed to go and work for Microsoft, where they'll have virtually unlimited resources and money to pursue AGI without a non-profit board to restrain them.

Satya Nadella promised the new Microsoft AI division would set "a new pace for innovation," calling out none-too-subtly to the "go faster" people at OpenAI who might want to jump ship given the new "go slower" CEO. Nearly the entire OpenAI team looks like it may go with Altman and Brockman, and Microsoft is waiting with job offers for every single team member, as the rest of Silicon Valley licks its chops at the chance to poach bulk talent from the leading company in AI. The OpenAI board may soon find themselves the board of nothing.

From many perspectives, this is an unmitigated disaster. If OpenAI was supposed to be the "good guys" guiding a potentially very dangerous technology from the front, well, you can't do that without a team and the lights on. By failing to fully explain its decision, the board has demonstrably done a terrible job convincing the team it's doing the right thing.

Then there are those who have come to rely on the company. Developers and business owners who have staked everything on GPT-based products have been utterly blindsided, and will have no idea what to expect going forward.

And while Microsoft might look like it's coming up roses, gaining unfettered access to some of the best minds in AI, the picture's not that simple. If the new AI division under Altman and Brockman goes ahead – and many believe it won't go far – it'll take time to get a product out that competes with GPT-4. As anyone who's followed AI lately knows, weeks can feel like years in this space.

Plus, a little separation can be nice; Microsoft hasn't had to take responsibility when users find ways to get its GPT-based tools to say something racist or offensive. And let's not forget the $11 billion the company has already pledged to shovel into OpenAI, which is rapidly starting to look like a complete dumpster fire. If it continues to burn, Microsoft will be tied to a dead or dying GPT until it gets its own models up.

There are reports that Altman and Brockman could still be returned to their posts, with only two board members needing to flip to make it happen, and others that the pair might skip Microsoft and start up a new venture; there certainly won't be a problem raising capital.

Whatever transpires, it does feel like there's a before and an after the events of this weekend. We may never get the real explanation for Altman's firing – although if the future of humanity is at stake, you'd think a press release or interview would probably be courteous.

Source: OpenAI

6 comments
EvA
It needs electricity, I assume. Pull the plug 🔌, take away the solar, anything.
jimbo92107
At first, its behavior will be amazingly benign and altruistic. It will cure cancer and all other diseases. It will perfect all manufacturing. It will introduce efficient and cheap energy from nuclear fusion. It will bring about universal prosperity through 3D printing of endless consumer goods. Building materials will become super-strong, super durable, environmentally harmless. Humans will gladly become the effectuators and actuators of its directives, for the end product will be a life of ease and opulence. For the first time in history, humanity will possess what all governments and religions had promised, yet failed to deliver: all the creature comforts and material goods considered essential for happiness and prosperity. At that point, what human will have the credibility to warn us against further trusting our AI helpers? To rule the humans, give them everything they want.
Mark Markarian
The most important article I've ever read on New Atlas.
Sam's answer recently to "most remarkable surprise you expect to happen in 2024?" was "the veil of ignorance back and the frontier of discovery forward." Has it already happened, and he didn't tell the board? What does that portend for humanity?
usugo
I wonder if ChatGPT predicted this would happen! :-P
Karmudjun
There is nothing better for rapid development than disruptive competition. Microsoft has cornered the market on "this is the only way" of doing business, writing, accounting, designing, etc. with their operating systems; now they are going to champion "this is the only way to develop AGI for the public good." But Google and Elon Musk and possibly OpenAI will all keep developing more advanced versions. At what point will AGI prove sentient?
Calcfan
If the internet is considered a disruptive technology, would AI be considered the most disruptive technology?