The power struggle at OpenAI that gripped the tech world after co-founder Sam Altman was fired has finally reached its end – at least for now. But what to make of it?
It feels almost as if some eulogizing is called for – as though OpenAI died and a new, but not necessarily improved, startup now stands in its place. Former Y Combinator president Altman is back at the helm, but is his return justified? OpenAI’s new board of directors is getting off to a less diverse start (i.e. it’s entirely white and male), and the company’s founding philanthropic aims risk being co-opted by more capitalist interests.
That’s not to suggest that the old OpenAI was perfect in any respect.
As of Friday morning, OpenAI had a six-person board – Altman, OpenAI chief scientist Ilya Sutskever, OpenAI president Greg Brockman, tech entrepreneur Tasha McCauley, Quora CEO Adam D’Angelo and Helen Toner, a director at Georgetown’s Center for Security and Emerging Technology. The board was technically tied to a nonprofit with a majority stake in OpenAI’s for-profit side, with full decision-making power over OpenAI’s activities, investments and overall direction.
OpenAI’s unusual structure was devised by the company’s co-founders, including Altman, with the best of intentions. The nonprofit’s unusually brief (roughly 500-word) charter stipulates that the board make decisions ensuring “that artificial general intelligence benefits all humanity,” leaving it to the board’s members to decide how best to interpret that. Neither “profit” nor “revenue” is mentioned anywhere in this North Star document; Toner reportedly once told Altman’s executive team that triggering OpenAI’s collapse “would actually be consistent with [the nonprofit’s] mission.”
Perhaps the arrangement would have worked in some parallel universe; for years, it worked well enough at OpenAI. But once investors and powerful partners got involved, things became… complicated.
Altman’s firing brought together Microsoft and OpenAI’s employees
After the board abruptly canned Altman on Friday without notifying anyone, including most of OpenAI’s 770-person workforce, the startup’s supporters began expressing their discontent privately and publicly.
Satya Nadella, the CEO of Microsoft, a major OpenAI collaborator, was reportedly “angry” to learn of Altman’s departure. Vinod Khosla, founder of Khosla Ventures, another OpenAI backer, said on X (formerly Twitter) that the fund wanted Altman back. Meanwhile, Thrive Capital, the aforementioned Khosla Ventures, Tiger Global Management and Sequoia Capital were said to be considering legal action against the board if weekend negotiations to reinstate Altman failed.
Now, OpenAI’s employees have no obvious connection to these investors – at least from outside appearances. On the contrary, nearly all of them – including Sutskever, in an apparent change of heart – signed a letter threatening the board with mass resignation if it refused to reverse course. But consider that these employees had a lot to lose if OpenAI fell apart – standing job offers from Microsoft and Salesforce aside.
OpenAI was in discussions, led by Thrive, to potentially sell employee shares in a move that would boost the company’s valuation from $29 billion to somewhere between $80 billion and $90 billion. Altman’s sudden exit — and OpenAI’s rotating cast of questionable interim CEOs — gave Thrive cold feet, putting the sale in jeopardy.
Altman won the five-day battle, but at what cost?
Now, after several breathless, hair-pulling days, some form of resolution has been reached. Altman — along with Brockman, who resigned Friday in protest of the board’s decision — is back, albeit subject to a background investigation into the concerns that precipitated his ouster. OpenAI has a new transitional board, satisfying one of Altman’s demands. And OpenAI will reportedly retain its structure, with investors’ profits capped and the board free to make decisions that aren’t revenue-driven.
Salesforce’s Marc Benioff posted on X that the “good guys” won. But it may be premature to say that.
Sure, Altman “won,” besting a board that accused him of “not [being] consistently candid” with its members and, according to some reports, of prioritizing growth over mission. In one example of this alleged mischief, Altman reportedly criticized Toner over a paper she co-authored that cast OpenAI’s approach to safety in a critical light — to the point of attempting to push her off the board. In another, Altman angered Sutskever by rushing the launch of AI-powered features at OpenAI’s first developer conference.
The board didn’t explain itself even after repeated requests, citing possible legal challenges. And it’s safe to say it dismissed Altman in an unnecessarily histrionic manner. But it can’t be denied that the directors might have had valid reasons for letting Altman go, at least depending on how they interpreted their humanitarian directive.
The new board seems likely to interpret that directive differently.
Currently, OpenAI’s board consists of former Salesforce co-CEO Bret Taylor, D’Angelo (the only holdover from the original board) and Larry Summers, the economist and former Harvard president. Taylor is a serial entrepreneur, having co-founded several companies, including FriendFeed (acquired by Facebook) and Quip (acquired by Salesforce). Summers, meanwhile, has deep business and government connections — an asset to OpenAI, the thinking behind his selection likely goes, at a time when regulatory scrutiny of AI is intensifying.
The picks don’t seem like an outright “win” to this reporter, though — not if diverse perspectives are the goal. While six more seats have yet to be filled, the first three set a fairly homogeneous tone; such a board would actually be illegal in Europe, where companies are mandated to reserve at least 40% of their board seats for women.
Why some AI experts are worried about the new OpenAI board
I’m not the only one who’s disappointed. Several AI academics took to X to vent their frustrations earlier today.
Noah Giansiracusa, a mathematics professor at Bentley University and the author of a book on social media recommendation algorithms, took issue both with the board’s all-male makeup and with the nomination of Summers, who he noted has a history of making disparaging remarks about women.
“Whatever one makes of these individual incidents, the optics are not good, to say the least — especially for a company that is leading the way in AI development and reshaping the world we live in,” Giansiracusa said via text. “What I find particularly troubling is that OpenAI’s primary goal is to develop artificial general intelligence that ‘benefits all of humanity.’ Since half of humanity are women, recent events don’t give me a ton of confidence in that regard. Toner most directly represented the safety side of AI, and this is the position women have so often been placed in, throughout history but especially in tech: protecting society from harm while the men get the credit for innovating and ruling the world.”
Christopher Manning, the director of Stanford’s AI Lab, was slightly more charitable than – but in agreement with – Giansiracusa in his assessment:
“The newly formed OpenAI board is probably not complete yet,” he told TechCrunch. “However, the current membership of the board, which lacks anyone with deep knowledge about the responsible use of AI in human society and consists only of white men, is not a good start for an important and influential AI company.”
Inequity plagues the AI industry, from the annotators who label the data used to train generative AI models to the harmful biases that often emerge in those trained models, including OpenAI’s. Summers, to be fair, has expressed concern over the potentially harmful consequences of AI — at least as they relate to livelihoods. But the critics I spoke to found it hard to believe that a board like OpenAI’s current one would ever prioritize these challenges, at least not the way a more diverse board would.
This raises the question: Why didn’t OpenAI try to recruit a well-known AI ethicist like Timnit Gebru or Margaret Mitchell for the initial board? Were they unavailable? Did they decline? Or did OpenAI not make the effort in the first place? We may never know.
OpenAI has a chance to prove itself smarter and more worldly in choosing the five remaining board seats — or three, if Altman and a Microsoft executive each take one (as has been rumored). If it doesn’t choose differently, what Daniel Colson, the director of the think tank AI Policy Institute, said on X may well be true: a handful of people, or a single lab, can’t be trusted to ensure AI is developed responsibly.