Generative AI lures DevOps and SecOps into risky territory

Application security managers are more optimistic than developer managers about generative AI, although both agree it will lead to more widespread security vulnerabilities in software development, according to Sonatype.
Of the DevOps and SecOps leaders surveyed, 97% use the technology today, and 74% say they feel pressure to use it despite identified security risks.
In fact, most respondents agree that security risks are their biggest concern with the technology, highlighting the critical need for responsible AI adoption that improves both software quality and security.
SecOps teams save more time
Although DevOps and SecOps respondents largely share similar perspectives on generative AI, there are notable differences in adoption and productivity.
45% of SecOps leaders have already implemented generative AI in the software development process, compared to 31% for DevOps. SecOps leaders see greater time savings than their DevOps counterparts, with 57% saying generative AI saves them at least 6 hours per week, compared to just 31% of DevOps respondents.
When asked about the most positive impacts of this technology, DevOps respondents report faster software development (16%) and more secure software (15%). SecOps leaders cite increased productivity (21%) and faster problem identification/resolution (16%) as top benefits.
More than three-quarters of DevOps leaders say using generative AI will lead to more vulnerabilities in open source code. Surprisingly, SecOps leaders are less concerned (58%). Additionally, 42% of DevOps respondents and 40% of SecOps leaders say the lack of regulation could deter developers from contributing to open source projects.
DevOps and SecOps leaders both want more regulation
When asked who they think should be responsible for regulating the use of generative AI, 59% of DevOps and 78% of SecOps respondents say both government and individual companies should share that responsibility.
“The AI era feels like the beginnings of open source, like we’re building the plane as we fly it in terms of security, policy and regulation,” said Brian Fox, CTO at Sonatype. “Adoption has become widespread across the board, and the software development lifecycle is no exception. While the productivity dividends are clear, our data also reveals a worrying reality: the security threats posed by this still-nascent technology. With every cycle of innovation comes new risks, and it is critical that developers and application security leaders approach AI adoption with safety and security in mind.”
Licensing and compensation were also priorities for both groups; without clear rules, developers could find themselves in legal limbo over plagiarism claims against LLMs. Notably, rulings against copyright protection for AI-generated art have already sparked debate over how much human intervention is necessary to meet what current law defines as true authorship.
In the absence of applicable copyright law, 40% of respondents agreed that creators should own the copyright to AI-generated output, and both groups largely agreed that developers should be paid for the code they wrote if it is used in open source artifacts in LLMs (DevOps 93% vs. SecOps 88%).