Accusations fly as time bomb ticks down at OpenAI
As the wild standoff between three of OpenAI's four board members and 95% of the company's workforce drags on, messy allegations are starting to fly in both directions. So who are these directors, and why did they fire CEO Sam Altman?
With the world's leading AI company and its phenomenal GPT language model still in turmoil, new information is coming to light on what's been going on behind closed doors at OpenAI. Much of it remains speculative, since the board has yet to give a full explanation of its bombshell actions – either to the public, or to the more than 95% of OpenAI employees who have threatened to quit if they don't get their old leadership team back.
Indeed, according to Bloomberg, new interim CEO Emmett Shear is himself in the dark about why Altman was fired, and is telling people he'll walk as well if no explanation is forthcoming soon.
It still appears possible that the existing board will step down, allowing for the return of Altman and former board Chairman and President Greg Brockman, but for the time being, both have agreed to join Microsoft and pursue the project of achieving AGI under a vastly different business model than the one in place at OpenAI.
As we discussed yesterday, the company is set up this way so that it can leverage the power of capital investment, but still attempt to stay true to its overall mission of making sure that potentially highly dangerous advanced AI is developed in a way that's most likely to benefit humanity. The structure is designed to make sure that the interests of humanity take precedence over any profit motives; the board has the power it wielded on Friday literally so it can save the world if it needs to.
Who are the board, and what might their motivations be?
There are only four remaining board members at the non-profit company that wields final control over the for-profit arm of OpenAI. At the start of the year, there were nine, including Altman and Brockman. Resignations from Republican House representative Will Hurd, Neuralink CEO Shivon Zillis and LinkedIn founder Reid Hoffman earlier this year left seats open, and the board couldn't agree on how to fill them.
One of the remaining four is Chief Scientist Ilya Sutskever, who has now joined team Altman, saying he "deeply regret[s his] participation in the board's actions," so we'll focus on the others.
Much of the early speculation fell on Adam D'Angelo, who in his primary job as CEO of question-and-answer site Quora appears to have a possible conflict of interest. On October 26, D'Angelo announced an upgrade to Quora's Poe system, allowing creators to build, then monetize AI-powered bots created on the site. Just weeks later, Altman announced "GPTs" – effectively, a product that appears to compete directly with Quora's offering.
Since some people *still* don’t understand what’s happening, let me introduce you to the “lead” member of the board - Adam D’Angelo.
Adam has a company Poe which was made obsolete by the GPTs OpenAI announced on dev day.
Adam was furious the board didn’t give him advance notice… https://t.co/KxJIutnD9n
— LeGate (@williamlegate) November 20, 2023
But now, reports are emerging that center on 30-something Australian Helen Toner, Strategy Director for Georgetown University's Center for Security and Emerging Technology, and her links to the Effective Altruism movement.
According to the New York Times, Toner and Altman clashed in October, when Altman took Toner to task over a research paper she co-wrote at Georgetown that criticized OpenAI's approach to AI safety signaling around the release of GPT-4 – while appearing to praise key competitor Anthropic's approach. Things got heated enough that discussions were reportedly held around whether Toner should be removed from the board over the incident.
It was Toner, according to the Times, who told employees that destroying OpenAI would be consistent with the board's mission. And according to The Information, the OpenAI board went so far as to approach Anthropic and propose a potential merger after it fired Altman.
Anthropic was founded in 2021 by a group of former OpenAI senior team members who disagreed with OpenAI's decision to partner with Microsoft, among other things. Now partnered with Amazon, Anthropic has long been connected to the Effective Altruism movement, and was described by Times columnist Kevin Roose as "the white-hot center of A.I. Doomerism" in July.
Most ppl are missing the key point regarding OpenAI
This is not standard startup drama
This is literally a fight for the survival of humanity, linked to the original purpose of OpenAI: saving us from the end of the world:
— Tomas Pueyo (@tomaspueyo) November 21, 2023
Effective Altruism is a philosophical movement of sorts, based around the idea of "using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis," according to the Centre for Effective Altruism. Through an associated funding organization called Open Philanthropy, the movement has a team focused entirely on the "global catastrophic risk" of advanced AI. It has come under intense scrutiny in recent months since one of its major funders, FTX founder Sam Bankman-Fried, went on trial.
Toner spent three years working at Open Philanthropy as a research analyst, and the organization also gave research grants to her group at Georgetown.
The movement is also linked to the fourth remaining board member, Tasha McCauley, an adjunct senior management scientist at RAND Corporation, co-founder of Fellow Robots and Geosim Systems, and wife of actor Joseph Gordon-Levitt. McCauley sits on the UK board of the Effective Ventures Foundation, the parent organization of the Centre for Effective Altruism.
So some are positioning the board's firing of Altman as a safety-focused Effective Altruism takeover, and questioning whether there are conflicts of interest given the group's ties to Anthropic.
Anonymous allegations against Altman and Brockman
On the other side of the ledger – suggesting that Altman was fired for good cause – comes a letter claiming to be written by anonymous ex-OpenAI employees. It accuses both Altman and Brockman of deceitful and manipulative behavior, and claims that Altman's push toward commercialization of the company's AI tech resulted in about half the company's employees leaving or being pushed out over the last couple of years.
The letter, tweeted out by Elon Musk but subsequently deleted, urges the remaining OpenAI board members to fully investigate Altman's recent conduct.
As the United States prepares for the Thanksgiving weekend, there are hopes, but perhaps not high hopes, that the situation will be resolved before the holiday break. The company's ChatGPT service is still functioning, but with more than 95% of the OpenAI workforce saying they're prepared to quit, it remains to be seen how long this will stay the case.