Who is Ilya Sutskever, man at center of OpenAI shakeup?

As speculation swirls around the leadership shakeup at OpenAI announced Friday, much of the focus turns to one man at the center of it all: Ilya Sutskever. The company’s chief scientist, Sutskever also sits on the OpenAI board that ousted CEO Sam Altman yesterday, claiming somewhat cryptically that Altman had not been “consistently candid” in his communications with the board.
Last month, Sutskever, who generally stays out of the media spotlight, gave a long interview to MIT Technology Review. The Israeli-Canadian told the magazine that his new priority is figuring out how to prevent an artificial superintelligence — one that can outperform humans but, as far as we know, doesn’t yet exist — from going rogue.
Sutskever was born in Soviet Russia but grew up in Jerusalem from the age of five. He then studied at the University of Toronto with Geoffrey Hinton, an artificial intelligence pioneer sometimes called the “godfather of AI.”
Earlier this year, Hinton left Google and warned that AI companies were rushing toward danger by aggressively building generative AI tools like OpenAI’s ChatGPT. “It’s hard to see how you can prevent the bad actors from using it for bad things,” he told The New York Times.
Hinton and two of his graduate students, including Sutskever, developed a neural network in 2012 that they trained to identify objects in photos. Called AlexNet, the project showed that neural networks were far better at pattern recognition than commonly thought.
Impressed, Google bought Hinton’s DNNresearch spinoff and hired Sutskever. At the tech giant, Sutskever helped show that the same type of pattern recognition displayed by AlexNet for images could also work for words and sentences.
But Sutskever soon attracted the attention of another powerful player in artificial intelligence: Tesla CEO Elon Musk. The unpredictable billionaire has long warned of the potential dangers AI poses to humanity. Years ago, he told the Lex Fridman Podcast this month, he was alarmed both that Google co-founder Larry Page didn’t care about AI safety and by the concentration of AI talent at Google, especially after its acquisition of DeepMind in 2014.
At Musk’s urging, Sutskever left Google in 2015 to become co-founder and chief scientist of OpenAI, then a nonprofit that Musk envisioned as a counterweight to Google in the AI space. (Musk later fell out with OpenAI, which decided not to remain a nonprofit and took billions in investment from Microsoft; he now has a ChatGPT competitor called Grok.)
“This has been one of the toughest recruiting battles I’ve ever had, but it’s truly the key to OpenAI’s success,” Musk said, adding that Sutskever, in addition to being intelligent, was a “good human” with a “good heart.”
At OpenAI, Sutskever played a key role in the development of large language models, including GPT-2 and GPT-3, as well as the DALL-E text-to-image model.
Then came the release of ChatGPT late last year, which gained 100 million users in less than two months and sparked the current AI boom. Sutskever told Technology Review that the AI chatbot gave people a glimpse of what was possible, even if it later disappointed them by returning incorrect results. (Lawyers embarrassed after relying too heavily on ChatGPT are among the disappointed.)
But more recently, Sutskever has focused on the potential dangers of AI, particularly once an AI superintelligence capable of surpassing humans arrives, which he says could happen within 10 years. (He distinguishes superintelligence from artificial general intelligence, or AGI, which can merely match humans.)
At the heart of Friday’s leadership shakeup at OpenAI was the question of AI safety, according to anonymous sources who spoke to Bloomberg, with Sutskever disagreeing with Altman over how quickly to bring generative AI products to market and over the steps needed to reduce potential harm to the public.
“It’s obviously important that any superintelligence that someone builds doesn’t go rogue,” Sutskever told Technology Review.
With this in mind, he has turned his attention to alignment, the effort to steer AI systems toward people’s goals or ethical principles rather than letting them pursue unintended ones, and to how the concept might apply to a superintelligent AI.
In July, Sutskever and his colleague Jan Leike published an OpenAI announcement of a project on superintelligence alignment, or “superalignment.” They warned that while superintelligence could help “solve many of the world’s most important problems,” it could also “be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”