After the weekend's shock ousting of Sam Altman, CEO of the world's leading artificial intelligence (AI) firm OpenAI, all eyes turned to the handful of board members who made one of the most consequential decisions in recent technology history.
OpenAI — which became one of the world’s most successful start-ups with a valuation of A$120 billion after its launch of ChatGPT late last year — is a non-profit that until late last week was run by six board members.
Late on Friday afternoon US time, the board removed Altman as CEO and from the board, citing the Silicon Valley titan's failure to be "consistently candid" with the board. His ally Greg Brockman stepped down as chairman but remained with the company.
Since then, the board has comprised four people: OpenAI chief scientist Ilya Sutskever, founder and CEO of Q&A website Quora Adam D’Angelo, scientist and tech entrepreneur Tasha McCauley, and Helen Toner, a 30-something University of Melbourne alumna whose background in AI and effective altruism has placed her in the middle of a fight over AI safety that is threatening to tear Silicon Valley apart.
"I feel really bad," one person who knew Toner in Australia said of her role in the controversy.
Long before she was appointed to OpenAI's board in 2021, Toner attended the expensive Anglican girls school Melbourne Girls Grammar. In 2009, Toner told The Age that the year before she'd received a 45 in final-year German, placing her in the top 2% of students, and that she was taking Latin in her final year: "Latin grammar is almost identical to German grammar, except the German's a bit simpler."
One person who attended a neighbouring high school and knew Toner told Crikey she had stood out for her academic performance: “The main thing I remember was that she was a really high-achieving student.”
After finishing her VCE, Toner achieved the top possible university admission score of 99.95 and was recognised with a scholarship to the University of Melbourne. One activity she was involved in during her time at university was United Nations Youth Australia (UNYA). A fellow participant remembers Toner as being “nice, sweet and smart”. Another described her to Crikey as highly intelligent and ambitious — and very aware of it: “Anyone who was at UNYA is de facto ambitious. She knew quite early she was headed for the big leagues.”
It was during her time there that Toner was introduced to effective altruism, a 21st-century philosophical movement that became influential (and increasingly controversial) in tech circles for its evidence-based approach to philanthropy.
“I was actually planning a career in aid and development and then I came across effective altruism,” she told a 2017 effective altruism conference.
Toner met two organisers of Effective Altruism Melbourne who first convinced her not only of the merits of the movement but also of the potential risks of AI. At first she was convinced they were “philosophically confused and … over-enthusiastic science fiction nerds”, but she soon came around to their view, eventually leading the Melbourne group in 2014.
Among a handful of obsessions, effective altruists are keenly interested in AI safety because they view the future development of artificial intelligence, and in particular the as-yet-unachieved "artificial general intelligence" (AGI), best understood as an AI that becomes smarter and more capable than humans, as a major risk to the human race. Specifically, they fear that an AGI could bring about an apocalypse that destroys humanity. As a result, effective altruists have preached caution in the development of artificial intelligence and encouraged the building of safeguards.
After a brief stint as an analyst with Melbourne-based bespoke investment firm Vesparum Capital, Toner went on to dedicate her working life to effective altruism and AI safety. According to her LinkedIn, Toner worked at effective altruism not-for-profit GiveWell before joining the Open Philanthropy Project where she had her first interaction with OpenAI.
In that 2017 conference talk, Toner mentioned that Open Philanthropy had partnered with OpenAI to "shape their work on safety and governance".
Since then, she has taken on a number of AI-specific roles: first as a research affiliate at the University of Oxford's Centre for the Governance of AI, then as director of strategy and foundational research grants at Georgetown's Center for Security and Emerging Technology. It was her expertise in AI policy and her work on AI strategy research that Altman and Brockman heralded when she was appointed to OpenAI's board.
“Helen brings an understanding of the global AI landscape with an emphasis on safety, which is critical for our efforts and mission,” Altman said at the time.
Although the board has been mum about what exactly Altman had misled it about, reporting suggests a schism had emerged over the safety of the company’s developments as its for-profit arm pushed to commercialise its technology. On Tuesday, US tech journalist Kara Swisher posted on X, formerly Twitter, that sources told her the “key tension was between @sama [Sam Altman] and Helen Toner, who might have been pressed to leaving the board before”.
Some have pointed to a quote given to the Financial Times by Toner about companies self-regulating AI safety as foreshadowing what was to come: “They’re the ones who potentially stand to profit from them. So I think it’s really important to make sure that there is outside oversight. Even if their hearts are in the right place we shouldn’t rely on that as our primary way of ensuring they do the right thing.”
Crikey approached Toner for comment but she did not respond in time for publication.