IBM and Salesforce commit to White House list of eight AI security assurances

Assurances include watermarking, capability and risk reporting, investing in safeguards against bias and more.

Some of the largest generative AI companies operating in the United States plan to watermark their content, a White House fact sheet revealed on Friday, July 21. Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI agreed to eight voluntary commitments around the use and monitoring of generative AI, including watermarking. In September, eight more companies accepted the voluntary standards: Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI and Stability AI.
This follows a March statement regarding the White House's concerns about the misuse of AI. The agreement comes at a time when regulators are establishing procedures to manage the effects of generative artificial intelligence on technology and on the way people interact with it, a conversation that has intensified since ChatGPT brought AI content into the public eye in November 2022.
What are the eight AI security commitments?
The eight AI security commitments include:
- Internal and external security testing of AI systems before release.
- Share information within the industry and with governments, civil society and academia on managing AI risks.
- Invest in cybersecurity and insider threat protections, particularly to protect model weights, which shape the biases and concept associations an AI model learns.
- Encourage third parties to discover and report vulnerabilities in their AI systems.
- Publicly report the capabilities, limitations, and areas of appropriate and inappropriate use of all AI systems.
- Prioritize research on bias and privacy.
- Develop and deploy AI for beneficial purposes such as cancer research.
- Develop robust technical mechanisms for watermarking AI-generated content.
The watermarking commitment involves generative AI companies developing a way to mark text, audio or visual content as machine-generated; it will apply to any publicly available generative AI content created after the watermarking system is locked in. Since the watermarking system has not yet been created, it will be some time before a standard way of knowing whether content is AI-generated becomes publicly available.
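To make the idea concrete, here is a minimal, purely illustrative sketch (in Python) of one way machine-generated text could carry a verifiable provenance mark. The key value, model label and HMAC-based approach are assumptions chosen for illustration, not the scheme any of these companies has announced; real-world text watermarking research often instead biases the model's token choices so the mark survives copy-and-paste, rather than attaching metadata.

```python
import hmac
import hashlib

# Hypothetical provider-side secret used to sign generated content.
# Placeholder value for illustration only.
SECRET_KEY = b"provider-secret-key"

def tag_as_machine_generated(text: str) -> dict:
    """Attach an HMAC tag marking the text as machine-generated."""
    tag = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return {"content": text, "generator": "example-model", "watermark": tag}

def verify_watermark(record: dict) -> bool:
    """Recompute the tag and check it still matches the content."""
    expected = hmac.new(SECRET_KEY, record["content"].encode("utf-8"),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["watermark"])

record = tag_as_machine_generated("This paragraph was produced by a language model.")
print(verify_watermark(record))  # True while the content and tag are unchanged
```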
SEE: Hiring Kit: Prompt Engineer (TechRepublic Premium)
Government regulation of AI could deter bad actors
Moe Tanabian, former global vice president of Microsoft Azure and current chief product officer of Cognite, supports government regulation of generative AI. In a conversation with TechRepublic, he compared the current era of generative AI to the rise of social media, including its possible downsides such as the Cambridge Analytica data privacy scandal and other misinformation during the 2016 election.
“There are many opportunities for bad actors to take advantage of (generative AI), use it and misuse it, and they do. So I think governments need to have a watermark, some element of root of trust that they need to instantiate and define,” Tanabian said.
“For example, phones should be able to detect whether bad actors are using AI-generated voices to leave fraudulent voicemails,” he said.
“Technologically, we are not disadvantaged. We know how to (detect AI-generated content),” Tanabian said. “Requiring the industry to do so and putting these regulations in place so that there is a root of trust in our ability to authenticate this AI-generated content is key.”