Last month, a 120-page U.S. executive order laid out the Biden administration’s plans to oversee companies developing artificial intelligence technologies and guidance on how the federal government should expand its adoption of AI. At its core, though, the document focused heavily on AI-related security issues, both finding and fixing vulnerabilities in AI products and developing defenses against potential AI-powered cybersecurity attacks. As with any executive order, the challenge lies in how a sprawling, abstract document will be turned into concrete action. Today, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) will announce an “Artificial Intelligence Roadmap” that outlines its plan for implementing the order.
CISA, which is housed within the U.S. Department of Homeland Security (DHS), divides its plans for addressing AI in cybersecurity and critical infrastructure into five categories. Two relate to promoting communication, collaboration, and workforce expertise through public and private partnerships, and three deal more concretely with implementing specific components of the executive order.
“It’s important to be able to put this out there and hold us, frankly, accountable both for the big things that we need to do for our mission and for what was in the executive order,” CISA Director Jen Easterly told WIRED ahead of the roadmap’s publication. “AI as software will clearly have a phenomenal impact on society, but just as it will make our lives better and easier, it could very well do the same for our adversaries, large and small. We are therefore focusing on how we can ensure the safe and secure development and implementation of these systems.”
CISA’s plan focuses on promoting the responsible use of AI, but also on its aggressive use in U.S. digital defense. Easterly points out that while the agency is “focusing on security over speed” in developing AI-based defense capabilities, the fact is that attackers will exploit these tools, and in some cases already are, so it is both necessary and urgent that the U.S. government use them as well.
With this in mind, CISA’s approach to promoting the use of AI in digital defense will center on established ideas that the public and private sectors can borrow from traditional cybersecurity. As Easterly puts it, “AI is a form of software, and we can’t treat it as some sort of exotic thing to which new rules must apply.” AI systems should be “secure by design,” meaning they are developed with constraints and security in mind from the start, rather than having protections retroactively added to a completed platform afterwards. CISA also intends to promote the use of “software bills of materials” and other measures to keep AI systems open to scrutiny and supply chain audits.
“AI manufacturers [need to] take responsibility for security outcomes – that’s the whole idea of shifting the burden onto the companies that can bear it the most,” Easterly says. “They’re the ones building and designing these technologies, and it’s about the importance of embracing radical transparency, and making sure we know what’s in that software so we can ensure it’s protected.”