The world is engaged in a race for AI dominance, but today a few of the key players came together to say they would rather collaborate when it comes to mitigating risks.
Speaking at the AI Safety Summit at Bletchley Park in England, UK Technology Secretary Michelle Donelan announced a new policy document, the Bletchley Declaration, which aims to build global consensus on how to address the risks that AI poses now and may pose in the future as it develops. She also said the summit would become a regular, recurring event: another gathering is planned in Korea in six months, she said, and another in France six months after that.
Much like the tone of the conference itself, the document released today is relatively high-level.
“To achieve this, we affirm that, for the good of all, AI should be designed, developed, deployed and used in a manner that is safe, human-centric, trustworthy and responsible,” the document notes. It also draws attention specifically to the kind of large language models being developed by companies like OpenAI, Meta, and Google, and the specific threats they could pose if misused.
“Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks – as well as relevant specific narrow AI that could exhibit capabilities that cause harm – which match or exceed the capabilities present in today’s most advanced models,” the document notes.
There were concrete developments, too.
Gina Raimondo, the US Secretary of Commerce, announced the creation of a new AI Safety Institute, to be housed within the Department of Commerce, specifically under the National Institute of Standards and Technology (NIST).
The goal, she said, would be for this organization to work closely with AI safety groups set up by other governments, pointing to the Safety Institute that the United Kingdom also plans to establish.
“We have to get to work and between our institutes, we have to get to work to [achieve] policy alignment across the world,” Raimondo said.
Political leaders present at today’s opening plenary session included not only representatives of the world’s largest economies, but also a number of representatives from developing countries, collectively referred to as the Global South.
Participants included Wu Zhaohui, Chinese Vice Minister of Science and Technology; Vera Jourova, Vice-President of the European Commission responsible for values and transparency; Rajeev Chandrasekhar, Indian Minister of State for Electronics and Information Technology; Omar Sultan al Olama, UAE Minister of State for Artificial Intelligence; and Bosun Tijani, Minister of Technology of Nigeria. Collectively, they spoke of inclusiveness and accountability, but with so many question marks hanging over how this will be implemented, the proof of that commitment remains to be seen.
“I fear that a race to create powerful machines will outpace our ability to protect society,” said Ian Hogarth, a founder, investor and engineer who currently chairs the UK government’s Frontier AI Taskforce and who had a big hand in preparing this conference. “No one in this room knows for sure how or if these next advances in computing power will result in benefits or harms. We tried to ground [concerns of risks] in empiricism and rigor [but] our current lack of understanding… is quite striking.
“History will judge our ability to meet this challenge. It will judge us by what we do and say over the next two days.”