Yesterday, 28 countries, including the United States, EU members, and China, signed a declaration warning that artificial intelligence is advancing with such speed and uncertainty that it could cause “serious, even catastrophic, harm.”
The statement, announced at the AI Safety Summit organized by the British government and held at Bletchley Park, the historic Second World War codebreaking site, also calls for international collaboration to define and explore the risks of developing more powerful AI models, including large language models such as those that power chatbots like ChatGPT.
“This is a historic achievement that sees the world’s leading AI powers agree on the urgency of understanding the risks of AI, to help secure the long-term future of our children and grandchildren,” British Prime Minister Rishi Sunak said in a statement.
The Summit venue paid tribute to Alan Turing, the British mathematician who did seminal work on computing and AI, and who helped the Allies break Nazi codes during World War II by developing the first computing devices. (A previous British government apologized in 2009 for the manner in which Turing was prosecuted for homosexuality in 1952.)
The AI hype train, however, has a way of turning even close allies into competitors. The ink on the forward-looking statement was barely dry before the United States asserted its leadership role in the development and direction of AI. Vice President Kamala Harris gave a speech warning that the dangers of AI, including deepfakes and biased algorithms, are already here. Earlier this week, the White House announced a sweeping executive order aimed at defining rules to govern and regulate AI, and yesterday it outlined new rules to prevent government algorithms from causing harm.
“When a senior is excluded from their health plan because of a faulty AI algorithm, isn’t that existential for them?” Harris said. “When a woman is threatened by a violent partner with explicit and doctored photographs, isn’t that existential for her?”
The cocktail of collaboration and competition swirling in the UK stems from the remarkable, surprising, and slightly frightening abilities that leading language models have demonstrated over the past year. AI has proven capable of doing things that many experts thought would remain impossible for years to come. This suggests to some researchers that systems capable of replicating something resembling the kind of general intelligence humans take for granted might suddenly be much closer.
Leading AI experts were in London this week ahead of the summit. The people I met included Yoshua Bengio, a pioneer of deep learning who says his mission is to alert governments to the risks of more advanced AI; Percy Liang, who directs the Stanford Center for Research on Foundation Models; Rumman Chowdhury, an expert in “red teaming” AI systems for vulnerabilities, who told me it is still a nascent discipline; and Demis Hassabis, who, as CEO of Google DeepMind, leads the search giant’s AI projects. He believes humanity has only a limited time to ensure that AI reflects our best interests rather than our worst behaviors.