Sam Altman is out as CEO of OpenAI after a “boardroom coup” Friday that shook the technology industry. Some compare his ouster to Steve Jobs’s dismissal from Apple, a sign of the magnitude of this upheaval amid an AI boom that has rejuvenated Silicon Valley.
Altman, of course, had a lot to do with this boom, sparked by OpenAI’s release of ChatGPT to the public late last year. Since then, he has traveled the world speaking with world leaders about the promise and dangers of artificial intelligence. Indeed, for many, he has become the face of AI.
The exact direction of things remains unclear. In the latest twists and turns, some reports suggest that Altman could return to OpenAI, while others suggest he is already considering a new startup.
Either way, his ouster seems momentous, and given that, his latest appearance as CEO of OpenAI is worth paying attention to. It came on Thursday at the APEC CEO Summit in San Francisco. The beleaguered city, where OpenAI is based, hosted the Asia-Pacific Economic Cooperation summit this week, after first clearing embarrassing homeless encampments (though it still suffered embarrassment when thieves stole equipment from a Czech press crew).
Altman answered questions on stage from, somewhat ironically, moderator Laurene Powell Jobs, the billionaire widow of Apple’s late co-founder. She asked Altman how policymakers can strike the right balance between regulating AI companies and remaining open to change as the technology itself evolves.
Altman began by noting that he had dinner this summer with historian and author Yuval Noah Harari, who has issued stark warnings about the dangers of artificial intelligence for democracies, even suggesting that tech executives should face 20 years in prison for letting AI bots sneakily pass themselves off as humans.
The Sapiens author, Altman said, “was very concerned, and I understand that. I really understand why if you haven’t been following the field closely it feels like things have gone vertical… I think a lot of the world has collectively gone through a lurch this year to catch up.”
He stressed that people can now talk to ChatGPT, saying it is “like the Star Trek computer I was always promised.” The first time people use such products, he said, “it feels much more like a creature than a tool,” but eventually they get used to it and see its limitations (as some embarrassed lawyers have).
He said that while AI has the potential to do wonderful things, like cure diseases, on the other hand the question is: “How can we make sure that it is a tool equipped with appropriate safeguards as it becomes truly powerful?”
Today’s AI tools, he said, are “not that powerful,” but “people are smart and see where this is going.” And while we can’t really understand exponentials as a species, we can tell when something is going to keep going, and this is going to keep going.
The questions, he said, are what limits will be placed on the technology, who will decide on them and how they will be enforced internationally.
Answering these questions has taken up “a lot of my time over the last year,” he noted, adding: “I really think the world is going to rise to the occasion and everyone wants to do the right thing.”
According to him, current technology does not need heavy regulation. “But at some point – when the model can produce the equivalent of the output of an entire company, then an entire country, then the entire world – perhaps we will want collective global oversight of this and collective decision-making.”
For now, Altman says, it’s difficult to “get this message across” and not suggest that policymakers ignore current harms. He also doesn’t want to suggest that regulators should go after AI startups or open source models, or bless AI leaders like OpenAI with “regulatory capture.”
“We say, you know, ‘Trust us, this is going to get really powerful and really scary. It will have to be regulated later’” – a very difficult needle to thread through all of this.