Ex-OpenAI Board Members: More Guardrails Needed for Responsible AI
Former OpenAI board member Helen Toner broke her silence on why she and other directors briefly fired CEO Sam Altman in November.
![Former OpenAI board member Helen Toner questioned Sam Altman's prioritization of responsible AI](https://eu-images.contentstack.com/v3/assets/blt10e444bce2d36aa8/blt59eedbc69dee40d0/655bbfc6ba0bf5040a4c178e/1_-_Sam_Altman.jpg?width=700&auto=webp&quality=80&disable=upscale)
jamesonwu1972/Shutterstock
Toner, who directs strategy for Georgetown's Center for Security and Emerging Technology (CSET), joined the OpenAI board of directors in 2021. Altman said at the time that her AI research expertise brought an "emphasis on safety."
Toner and Altman clashed last October over a paper she co-wrote for CSET titled "Decoding Intentions." The report touched on "safety and ethics issues" that ChatGPT and GPT-4 faced around copyright, data annotators' labor conditions and susceptibility to "jailbreaking." The paper also said that OpenAI kicked off "a sense of urgency" among other tech companies with its release of the original ChatGPT.
In the next paragraph, the authors said rival AI developer Anthropic's commitment to safe AI went "beyond words" with its decision to delay releases of its Claude chatbot "to avoid stoking the flames of AI hype."
The New York Times reported that Altman attempted to push Toner out, upset at her perceived criticism of his company and praise of a rival. Other media reports said Altman pitched board members on removing Toner.
The suddenness of Altman's ouster came as a shock to outsiders, but Toner said in the podcast interview that the board needed to act in secret for fear of retaliation.
"Once we reached that conclusion, it was very clear to all of us that as soon as Sam had any inkling that we might do something against him, that he [would] pull out all the stops [and] do everything in his power to undermine the board, to prevent us from even getting to the point of being able to fire him," she said. "We were very careful, very deliberate about who we told, which was essentially almost no one, in advance, other than obviously for our legal team."
However, a letter with the signatures of 745 OpenAI staffers spurred the board to reach an agreement with Altman for his return.
Toner lamented "misreporting" by media outlets about Altman's firing that led to employees signing the letter.
"Pretty early on, the way the situation was being portrayed to people inside the company was, 'You have two options: Either Sam comes back immediately with no accountability (totally new board of his choosing), or the company will be destroyed.' Those weren't actually the only two options, and the outcome that we eventually landed on was neither," Toner said.
Toner said some employees were worried about maximizing the value of the upcoming Thrive Capital-led tender offer. Others simply loved their jobs and didn't want the company to crumble.
The Wall Street Journal reported that when Altman's former executive team warned that the company would collapse without Altman, Toner famously replied, “That would actually be consistent with the mission."
Toner also attributed the mass letter to "how scared people are to go against Sam."
"They experienced him retaliating against people, retaliating against them for past instances of being critical," she said. "They were really afraid of what might happen to them."
Toner said Altman has demonstrated negative behavior in the workplace throughout his career. She noted that Altman was fired from his previous job at Y Combinator, although that news was "hushed up." She added that the management team at Loopt, Altman's first company, twice asked the board to fire him.
"If you actually looked at his track record, he doesn't exactly have a glowing trail of references. This wasn't a problem specific to the personalities on the board, as much as he would love to kind of portray it that way," Toner said.
The tension between Altman and members of the board seems to reflect the dual nature of OpenAI, which in 2019 became a profit-capped company that the nonprofit board of directors still controlled. Being profit-capped meant that OpenAI's original investors and shareholders would earn only up to 100 times their investment, with returns beyond that cap going back into the company. A $1 million investment, for example, could return at most $100 million.
The board remained very much a nonprofit body, with its members barred from owning equity in the company. But OpenAI's relationship with for-profit Microsoft (starting in 2019 and expanding in 2023) and the departure from an open-source model have made the company look much different than it did in 2015.
Toner in her article said the company was using for-profit mechanisms to raise the capital needed for R&D while relying on a nonprofit board to maintain the original mission: to establish artificial general intelligence systems for the benefit of all humanity. She concluded in the article that without government regulation, profit incentives will ultimately outweigh concerns about the public good.
"Certainly, there are numerous genuine efforts in the private sector to guide the development of this technology responsibly, and we applaud those efforts," she and McCauley wrote. "But even with the best of intentions, without external oversight, this kind of self-regulation will end up unenforceable, especially under the pressure of immense profit incentives. Governments must play an active role."
![RISE's Eric Ludwig](https://eu-images.contentstack.com/v3/assets/blt10e444bce2d36aa8/blt4c3b8f5e0503e73a/6523f938a1862c7dab376d23/Ludwig-Eric_Rise-Technology.jpg?width=700&auto=webp&quality=80&disable=upscale)
RISE's Eric Ludwig
Eric Ludwig's tech advisory firm, RISE Technology Advisors, provides consulting for customer experience (CX) and contact-center-as-a-service (CCaaS) platforms that leverage conversational and generative AI. Ludwig said partners, suppliers and customers need to conduct a "deeper discussion around potential guardrails and transparency" in the AI sector.
"... the goal of most AI companies is to turn a profit. Fortunately, most of the firms in the space we operate use their technology for the betterment of their clients, across customer experience, security and infrastructure specifically. There is a way to be mindful, responsible and profitable," Ludwig told Channel Futures. "... We need to keep telling these stories to bring some clarity to our broader understanding of the technology and people, risks and rewards."
The month of May saw a flurry of changes and questions regarding the way OpenAI examines risks and guardrails for its technology.
News surfaced two weeks ago that OpenAI's Superalignment team was disbanding. Many members have reportedly been reassigned to other parts of the company, while some executives are leaving. The group was originally slated to receive 20% of OpenAI's computing power to research the risks of AGI over four years, starting in the summer of 2023.
Superalignment team co-leads Jan Leike and Ilya Sutskever, OpenAI's co-founder and chief scientist, are both leaving the company. Sutskever lost his board seat last year alongside Toner after participating in Altman's firing, though he later expressed remorse.
Leike, who is joining OpenAI rival Anthropic, wrote in an X/Twitter post that he wished OpenAI and its employees well, but pointed to differences of opinion between him and OpenAI leadership when it came to responsible AI. He said the Superalignment team was "sailing against the wind" in its efforts.
“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity," he wrote. "But over the past years, safety culture and processes have taken a backseat to shiny products. We are long overdue in getting incredibly serious about the implications of AGI. We must prioritize preparing for them as best we can. Only then can we ensure AGI benefits all of humanity. OpenAI must become a safety-first AGI company.”
OpenAI announced on Wednesday that it has launched a Safety and Security Committee, which will examine the company's “processes and safeguards” over the course of 90 days and then give recommendations to the board.
Its leaders? Altman and three other board members.
John Triano, a conversational AI expert, said it seemed like an "inadequate" step to put Altman in such a group.
"It's a challenge to have someone as the CEO leading the safety committee, especially if they have been cited previously by their own board members for being less than truthful," Triano told Channel Futures. "Will safety be important to him when there is a looming financial issue or product deadline? Perhaps they should look to others outside the organization to provide the oversight."
The generative AI capabilities of products like ChatGPT have captured the world's attention over the last two years, but AGI, the acronym for artificial general intelligence, a broader and more advanced form of AI, is picking up steam in tech media.
Some say the actual arrival of AGI at OpenAI has spurred the latest flurry of actions at the company. Elon Musk in particular claimed in a lawsuit that OpenAI is developing an algorithm called Q* with far more advanced qualities than what is currently available to the public. But prior to that lawsuit, Reuters reported on an internal letter at OpenAI that acknowledged such a project exists. The letter circulated in the days leading up to Altman's firing. Reuters reported that its source said Q* "could threaten humanity."
Blair Pleasant, who studies unified communications and contact centers for COMMfusion, said we don't need to look far into the future to grasp how irresponsible AI can cause harm.
![COMMfusion's Blair Pleasant](https://eu-images.contentstack.com/v3/assets/blt10e444bce2d36aa8/bltad15fd7f20e82286/65260d731c7e3a55b15d29bf/Pleasant-Blair_Commfusion.jpg?width=700&auto=webp&quality=80&disable=upscale)
COMMfusion's Blair Pleasant
"Basically, I’m not as worried about AGI (which I don’t expect to see for a while) as much as I’m worried about AI today and the damage that it can cause to people who are unaware of its limitations. OpenAI brought generative AI to the masses, which is awesome, but it also comes with awesome responsibility that the company must take seriously," Pleasant told Channel Futures. "As an end user of these tools, I’ve experienced generative AI making up information and even providing fake links and resources. If people aren’t aware of this, it can create dangerous consequences. OpenAI and other AI companies need to do more to ensure that the technology they’ve introduced does more harm than good, and I don’t see that happening right now."
Toner in her podcast interview with the Ted AI Show touched on two particular harms AI can cause: discrimination and privacy violation.
She noted that because federal discrimination laws are already on the books, AI-fueled discriminatory actions may not require new legislation.
Privacy, however, is another case, Toner said.
"The U.S. has no federal privacy laws. There are no rules on the books for how companies can use data. The U.S. is pretty unique in terms of how few protections there are of what kinds of personal data are protected and what ways," she said. "Efforts to make laws have just failed over and over again, but there's now this sudden, stealthy new effort that people think might actually have a chance. So who knows? Maybe this problem is on the way to getting solve. But right now it's a big hole for sure."
Multiple former board members of artificial intelligence behemoth OpenAI – including those who temporarily ousted CEO Sam Altman – have publicly raised concerns about the company's commitment to ensuring responsible AI.
Former OpenAI board members Helen Toner and Tasha McCauley penned an article for the Economist on May 26 expressing concern about how profit incentives are shaping AI companies' behavior. The authors, who both resigned in late 2023, argued that OpenAI's "experiment in self-governance" cannot align the company with the public good absent proper regulatory frameworks.
"OpenAI was founded as a bold experiment to develop increasingly capable AI while prioritizing the public good over profits. Our experience is that even with every advantage, self-governance mechanisms like those employed by OpenAI will not suffice," Toner and McCauley wrote.
Toner served on the OpenAI board of directors from 2021 until her resignation in late 2023. She was part of the nucleus of directors who moved to fire Altman in November. After backlash from Microsoft, which holds a 49% stake in OpenAI's for-profit arm, and a letter from hundreds of OpenAI employees, Altman returned as CEO after four days, and Toner, McCauley and Ilya Sutskever left the board.
While the board's initial explanation for ousting Altman was that he had not been "consistently candid" in his communications with them, Toner in a recent podcast interview on The Ted AI Show revealed further reasons related to Altman's character and behavior.
Toner said Altman's inconsistent communication included not telling the board that OpenAI was about to release ChatGPT in November 2022; she and fellow board members learned about the launch on Twitter, she said. Altman also did not disclose to the board that he owned the OpenAI startup fund for venture capital, according to Toner (OpenAI revoked Altman's ownership of the fund in April).
A breakdown in trust had already occurred leading up to Altman's firing, she said.
"That's a completely unworkable place to be in as a board, especially a board that is supposed to be providing independent oversight over the company, not just helping a CEO to raise more money," Toner said in the podcast. "Not trusting the word of the CEO who is your main conduit to the company – your main source of information about the company – is totally impossible."
Furthermore, Toner said different OpenAI executives came forward to report negative behavior they had experienced from Altman. Many had been afraid to come forward with their stories, Toner said.
"Telling us how they couldn't trust him, [telling us] about the toxic atmosphere he was creating," Toner said. "They used the phrase 'psychological abuse.'"
![OpenAI's Sam Altman](https://eu-images.contentstack.com/v3/assets/blt10e444bce2d36aa8/blt9d908617acd8a048/6557e42dc3f8c4040a3f8dbb/Altman_Sam_OpenAI_135x180_2023.jpg?width=700&auto=webp&quality=80&disable=upscale)
OpenAI's Sam Altman
Toner said board members kept their plans closely concealed for fear of Altman. That's a key reason why Altman's firing appeared to be so sudden, she said.
All of this came as OpenAI was inching nearer and nearer to achieving artificial general intelligence (AGI), defined by OpenAI as "a highly autonomous system that outperforms humans at most economically valuable work."
Six months later, OpenAI is moving forward at a fast clip under Altman's leadership. The company on Wednesday made news for its deal with Apple to embed technology like ChatGPT into Apple products. At the same time, OpenAI has seen an exodus of executives and researchers who were working to understand the risks of such a phenomenon. News reports surfaced on May 17 that OpenAI's Superalignment team, formed in 2023 to investigate the long-term implications of AI, has dissolved.
Does the drama at OpenAI have implications for business customers and the channel partners that serve them? Sources speaking to Channel Futures think so.
"A company and its leadership's commitment to ethical AI development influences and impacts the trust and reliability end users place in that company's products, and I think OpenAI is no different," said John Triano, a conversational AI expert who has worked for 8x8, Five9 and now Auraya Systems.
Triano said the disbanding of the OpenAI Superalignment team reflects challenges felt across the AI market.
"The industry is moving at an unprecedented rate, as are the people working in and leading the industry. The demands of the market to create compelling cutting edge AI technology is forcing companies to get out ahead of their skis in my opinion," Triano told Channel Futures. "Seems products being 'beneficial' may be outweighing 'safe.'"
Bret Taylor and Larry Summers, who joined the OpenAI board after Toner and McCauley's departures, published a rebuttal in the Economist on Thursday. They cited a review by the law firm WilmerHale, which reportedly "rejected the idea that any kind of AI safety concern necessitated Mr. Altman's replacement."
In the slideshow above, Channel Futures recaps Toner's allegations against Altman and recent developments in OpenAI's AI guardrails, and discusses the trickle-down effect on the channel.