
Sonatype Reveals DevOps and SecOps Leaders’ Perspectives on Generative AI

Although the technology community remains divided on the potential of generative AI tools, there is consensus that their impact on the industry is comparable to the adoption of cloud technology.

Software engineers leverage generative AI to explore libraries, create new code, and improve their development process, while application security professionals use it for code analysis and security testing.

A recent survey conducted by Sonatype in the United States highlights how generative AI is influencing software engineers and the software development lifecycle.

The survey covered 400 DevOps and 400 SecOps leaders, revealing massive adoption of generative AI. However, some respondents expressed concerns about security risks and potential job losses due to AI.

Role-based perspectives

DevOps and SecOps leaders share concerns about security (52%) and job loss (49%) as their top apprehensions about generative AI. These concerns reflect the ongoing debate about AI’s potential to replace technical roles.

Additionally, a significant portion (74%) of respondents feel pressured to adopt generative AI, with DevOps leaders feeling this pressure more intensely than their counterparts.

Both groups agree that, in the absence of specific copyright law, creators should own the copyright to AI-generated output (40%), and both overwhelmingly support compensating developers whose code is used (90%).

Most respondents (97%) currently use generative AI to some extent in their work fields, with 84% using it regularly and 41% using it daily.

Most popular generative AI tools

Generative AI tools are growing in popularity, with engineers and security teams widely employing them for various tasks.

ChatGPT is the preferred tool of 86% of respondents, followed by GitHub Copilot at 70%.

Differences in feelings

DevOps and SecOps leaders differ in how they feel about generative AI.

While 61% of DevOps leaders believe the technology is overrated, 90% of SecOps leaders believe its impact on the industry will be similar to that of cloud technology.

SecOps leaders also report greater time savings (57% save at least 6 hours per week vs. 47% for DevOps) and a higher rate of complete implementation (45% vs. 31%).

Security issues

Not surprisingly, security concerns predominate in both groups: 77% of DevOps leaders and 71% of SecOps leaders feel compelled to use generative AI despite those concerns.

DevOps leaders are more pessimistic about the technology’s potential to lead to more security vulnerabilities (77%), especially in open source code (58%). They also anticipate it will make threat detection more complex (55%).

“The AI era feels like the early days of open source, like we’re building the plane while we’re flying it in terms of security, policy and regulation,” comments Brian Fox, co-founder and CTO at Sonatype.

“Adoption has become widespread across the board, and the software development lifecycle is no exception. While the productivity dividends are clear, our data also reveals a worrying reality: the security threats posed by this still-nascent technology.

“Every cycle of innovation brings new risks, and it is critical that developers and application security leaders approach AI adoption with safety and security in mind.”

Responsible use and regulation

Organizations are responding to concerns through generative AI policies and looking forward to regulation in a space lacking overarching governance.

Notably, 71% of respondents say their organization has established policies for the use of generative AI, while 20% are in the process of developing them.

In terms of regulation, 15% of DevOps leaders believe it should be left to government alone, a view shared by just 6% of SecOps leaders. A majority (78% of SecOps and 59% of DevOps) say both government and individual companies should play a role in regulation.

Copyright and compensation

Respondents agree (90%) that developers should be compensated when their open source code is used in LLMs. Additionally, 67% believe the organisation that uses the code in its software should be the one to pay the developers.

As generative AI continues to evolve, it will be critical to balance its potential benefits with the need for responsible and ethical implementation.

You can find a full copy of Sonatype’s report here.

(Photo by Olloweb Agency on Unsplash)

See also: Niantic 8th Wall improves WebAR with powerful GenAI modules

Want to learn more about AI and Big Data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Check out more upcoming enterprise technology events and webinars from TechForge here.

  • Ryan Davies

    Ryan is a senior editor at TechForge Media with over a decade of experience covering the latest technology and interviewing leading industry figures. He can often be spotted at tech conferences with a strong coffee in one hand and a laptop in the other. If it’s geeky, he’s probably into it. Find him on Twitter (@Gadget_Ry) or Mastodon (@[email protected])



