Bridging Security Gaps: DevSecOps in the age of large language models and AI

As AI and large language models reshape the tech landscape, they also introduce new security risks. How is DevSecOps tackling these challenges?

In today’s fast-evolving technological landscape, security has become one of the most critical aspects of software development. Organizations adopt DevOps practices to streamline operations, which makes integrating security into every phase of the development lifecycle more important than ever. With the advent of Artificial Intelligence (AI) and Large Language Models (LLMs), security risks have grown more complex and pervasive, requiring teams to adopt more sophisticated and proactive measures.

This article explores the importance of security in DevOps, focusing on how AI and LLMs heighten the need for robust security strategies. 

The rising threat landscape 

AI is now embedded in a wide variety of applications, ranging from customer service chatbots and recommendation engines to fraud detection systems and autonomous vehicles. These AI-powered applications depend on complex algorithms, machine learning models, and vast datasets. While AI brings significant benefits, it also introduces new threats in the form of vulnerabilities that malicious actors can exploit. 

Blending software development (Dev) with IT operations (Ops) has revolutionized how organizations deliver software. By promoting continuous integration and continuous delivery (CI/CD), DevOps accelerates the release of software updates, reduces time-to-market, and improves overall productivity. However, the speed and automation brought by DevOps also introduce potential security vulnerabilities. This gave rise to DevSecOps, where security is natively integrated with Dev and Ops rather than being a mere afterthought. 

AI models like LLMs are being used to enhance various stages of the DevOps lifecycle, from automated code generation and testing to anomaly detection and infrastructure management. These models can be powerful tools, but they also introduce novel security challenges: 

  1. Data exposure: LLMs are trained on massive datasets, which can unintentionally leak sensitive data or inadvertently provide access to proprietary information, raising, among other things, serious privacy concerns. Attackers may analyze the output of these models to infer confidential data, potentially exposing organizations to legal and financial risks. 
  2. Ethical concerns and bias: We’ve known for a long time that AI systems are only as good as the data they are trained on. When the training data contains biases, AI models can unintentionally perpetuate or even amplify these biases, leading to unfair or discriminatory outcomes. This issue raises ethical concerns and poses potential legal and reputational risks for organizations.
  3. Risks related to automated code generation: AI-powered tools like Copilot or Codeium are now widely used to generate code. The models behind these tools are trained on publicly available code from sources like GitHub and Stack Overflow, which means that any problems commonly found in that code will be reflected in the generated code as well. This echoes an old and well-known problem in computer science: “garbage in, garbage out”. As a result, AI-generated code often lacks proper input validation or does not follow secure coding practices, leading to exploitable software (see the sketch after this list).
  4. Dependency on third-party components: Like any other application, AI-powered applications depend on third-party APIs, libraries, and pre-trained models. Although these components can speed up development and reduce time to market, they bring additional security risks. A vulnerability in a third-party AI library can potentially compromise the entire application. 
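
To make the risk of point 3 tangible, here is a deliberately simplified Python sketch; the table layout and function names are invented for this illustration. The first function mirrors the kind of string-concatenated SQL query an AI assistant may produce when the prompt never mentions security, while the second is the parameterized version a secure coding review would insist on.

    import sqlite3

    # What an assistant might plausibly generate: the user-supplied value is
    # concatenated straight into the SQL statement, so an input such as
    # "alice' OR '1'='1" returns every row (SQL injection).
    def find_user_unsafe(conn, username):
        query = "SELECT id, username FROM users WHERE username = '" + username + "'"
        return conn.execute(query).fetchall()

    # The reviewed version: a parameterized query lets the database driver
    # handle escaping, so the input is treated as data, never as SQL.
    def find_user_safe(conn, username):
        query = "SELECT id, username FROM users WHERE username = ?"
        return conn.execute(query, (username,)).fetchall()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
        conn.executemany("INSERT INTO users (username) VALUES (?)",
                         [("alice",), ("bob",)])
        malicious = "alice' OR '1'='1"
        print(find_user_unsafe(conn, malicious))  # leaks both rows
        print(find_user_safe(conn, malicious))    # returns nothing

Running the script shows the unsafe variant returning every user for the crafted input, while the safe variant returns nothing: exactly the difference a security review of generated code needs to catch.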

DevSecOps in AI: addressing emerging security challenges 

Given the rise of security concerns in the AI-driven DevOps environment, the industry has moved towards DevSecOps, where security is no longer an afterthought but an integral part of the entire software lifecycle. DevSecOps embeds security practices directly into the DevOps pipeline, ensuring that security is addressed at every stage — from planning and development to deployment and monitoring. 

Some of the possible security measures addressing the above threats and ensuring secure integration of AI into DevOps include: 

  1. Security of the models and the training process: Training AI models, especially large-scale LLMs, often requires vast amounts of data. Organizations must ensure that sensitive data used for training is properly anonymized. Additionally, access control should be in place to prevent unauthorized access to model outputs during inference. DevSecOps should provide the framework for integrating security controls into the AI workflows. Furthermore, to address the ethical and bias challenges, security practices should ensure that the data used to train AI models is not only anonymized, but also unbiased.
  2. Securing code generation: From a security point of view, automatically generated code is no different from code written by hand: it can contain vulnerabilities. We therefore need output filters that analyze the generated code specifically from a security perspective, but full automation of this is yet to come. Until then, any automatically generated code must be reviewed continuously, and for this, human experience and expertise in secure coding remain crucial (a minimal sketch of such a filter follows this list). Remember, you are still the pilot in charge; the tool is just a copilot! 
  3. Securing the supply chain: DevSecOps encourages the use of automated tools to scan for vulnerabilities in dependencies, ensuring that the AI models and their underlying libraries are secure. In line with this, we should apply all the well-known practices of software development and secure coding: vetting third-party components, updating them regularly, and using sandboxing techniques to isolate untrusted components from critical parts of the system.
  4. Shift-left approach: In DevOps, security is a shared responsibility and a collaboration among developers, security teams, and operations. In DevSecOps, security should be interwoven into all phases of the software development lifecycle. Considering the threats presented above, this is especially true for AI. “Shifting left” ensures that problems are identified at the earliest possible moment, simply because fixing them early is much cheaper. 
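
As a rough illustration of the “output filter” idea from point 2, the following Python sketch uses the standard-library ast module to flag a few constructs in generated code that always warrant human review (dangerous calls, possible hard-coded secrets). It is only a toy: in practice you would plug a full static analysis (SAST) tool into the CI/CD pipeline, and the function and variable names here are invented for the example.

    import ast

    # Call names that almost always deserve a human security review before
    # AI-generated code is merged (hypothetical, deliberately short list).
    SUSPICIOUS_CALLS = {"eval", "exec", "system", "popen"}

    def review_generated_code(source: str) -> list[str]:
        """Return a list of warnings for risky constructs in generated code."""
        findings = []
        tree = ast.parse(source)
        for node in ast.walk(tree):
            # Calls to dangerous functions, e.g. eval(...) or os.system(...)
            if isinstance(node, ast.Call):
                func = node.func
                name = getattr(func, "id", None) or getattr(func, "attr", None)
                if name in SUSPICIOUS_CALLS:
                    findings.append(f"line {node.lineno}: call to '{name}' needs review")
            # String constants assigned to names like 'password' or 'token'
            if isinstance(node, ast.Assign) and isinstance(node.value, ast.Constant):
                for target in node.targets:
                    if isinstance(target, ast.Name) and any(
                        hint in target.id.lower() for hint in ("password", "secret", "token")
                    ):
                        findings.append(
                            f"line {node.lineno}: possible hard-coded secret '{target.id}'"
                        )
        return findings

    if __name__ == "__main__":
        generated = (
            "import os\n"
            "password = 'hunter2'\n"
            "os.system('rm -rf ' + user_input)\n"
        )
        for warning in review_generated_code(generated):
            print(warning)

Running it on the small generated snippet at the bottom prints a warning for the hard-coded password and for the call to os.system; a human reviewer then triages these findings rather than blindly accepting the suggestion.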

AI in DevSecOps: securing the pipeline 

While AI and LLMs introduce new risks, they can also play a crucial role in strengthening the security of the development process and the resulting product. Here are some examples of how AI can boost security in a DevSecOps environment: 

  1. Securing the supply chain with AI: Beyond the need to secure the AI supply chain itself, it is worth noting that AI can also aid in vulnerability management by automatically detecting potential weaknesses in code, dependencies, and infrastructure.
  2. Monitoring and threat detection: Continuous monitoring of the CI/CD pipeline and the production environment is inherently part of DevSecOps. By analyzing logs, network traffic, and user behavior, AI-powered tools can identify unusual patterns more effectively, helping security teams respond faster to events that traditional tools may miss (a minimal sketch of such anomaly detection follows this list).
  3. Automated security testing: Automation is a key component of DevSecOps, and secure coding practices facilitate the integration of automated security testing tools into the CI/CD pipeline. AI, in turn, is increasingly used as part of functional and security testing to detect bugs and vulnerabilities more effectively. 
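
To illustrate the anomaly detection mentioned in point 2, here is a minimal sketch assuming scikit-learn is available; the features (requests per minute, failed logins, bytes transferred) and all numbers are invented for the example and would in reality come from your pipeline and production logs.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Toy feature matrix: one row per minute of pipeline/production activity,
    # columns are [requests per minute, failed logins, bytes transferred].
    rng = np.random.default_rng(42)
    normal = rng.normal(loc=[100, 2, 5_000], scale=[10, 1, 500], size=(500, 3))
    # A couple of suspicious minutes: traffic spike plus a burst of failed logins.
    suspicious = np.array([[400, 50, 20_000], [350, 40, 18_000]])
    features = np.vstack([normal, suspicious])

    # Train an unsupervised anomaly detector; 'contamination' is a rough guess
    # of how much of the data is anomalous.
    detector = IsolationForest(contamination=0.01, random_state=0)
    labels = detector.fit_predict(features)   # -1 = anomaly, 1 = normal

    for idx in np.where(labels == -1)[0]:
        print(f"minute {idx}: flagged as anomalous -> {features[idx].round(1)}")

The detector flags the injected “suspicious minutes” because they lie far outside the learned normal behavior; in a real pipeline such flags would feed an alerting or triage workflow rather than a print statement.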

The future of AI and DevOps 

As AI continues to shape the future of technology, it also calls for novel controls and protection techniques to mitigate a new set of as-yet-unseen threats. The need for robust security practices has never been more urgent. In this new era of AI-enhanced DevSecOps, engineering teams must not only prioritize security; they can (and should) also leverage AI to enhance security. All this requires a new set of skills – not only in DevOps, but also in security and AI. 

At Cydrill we recognize the unique mixture of these challenges. We are committed to helping DevOps teams develop the skills and knowledge needed to build novel AI-powered applications in a secure way. Our secure coding training programs are tailored to the needs of modern DevSecOps engineers, equipping them with the best practices for safeguarding the next generation of AI-driven innovations.