Vibe coding isn’t just a new way to write software; it’s a new way to think about the entire process of software development. But it also opens the door to new cyber threats, says Matt Moore of Chainguard
The way we write software is being transformed faster than many of us anticipated. At the heart of this shift is “vibe coding,” where developers use generative AI to deliver code more efficiently. Gartner predicts that 40% of new business software will be AI-assisted within the next three years, with AI bots translating plain-English prompts into usable code.
From GitHub Copilot to Claude-powered IDEs and bespoke internal models, developers are using AI-assisted programming tools to ship code faster. According to the 2024 Stack Overflow developer survey, nearly 82% of AI adopters use these tools specifically to write code. This is not a fad or fringe movement but a foundational shift in how code is written, tested, and sometimes deployed.
Yet the practice has already created a new cybersecurity challenge: vibe hacking – the flip side of vibe coding. As more people interact with code through natural language prompts, developers are likely to become further removed from directly authoring source code. This opens up avenues for malicious code to be injected into software stacks.
The promise and the problems
Vibe coding unlocks new potential for developers. It accelerates the rapid prototyping phase of development to new levels, and routine tasks like generating boilerplate code and stitching together APIs are done much more quickly – freeing up time to concentrate on architecture, logic, and delivering product-specific value.
In the hands of an experienced developer, its speed also changes the economics of iteration, enabling developers to strive for higher-quality code because the time investment is significantly reduced. While vibe coding doesn’t inherently make developers better, it encourages those aiming for improved code to engage with the model through feedback, a process that can arguably better prepare them for success in leadership positions.
At the same time, vibe coding increases the surface area of the codebase in subtle but critical ways, opening it up to mistakes, misuse, or even malicious injection. Even if the individual quality of AI-generated code were to improve – which isn’t universally guaranteed – the sheer increase in the volume of code produced will likely lead to a higher overall rate of bugs. Ultimately, the responsibility to review, understand, and thoroughly test all code, whether written by a human or an AI, remains paramount to prevent bugs and insecure patterns from slipping through.
AI is becoming a tireless, no-ego junior engineer that is highly capable but also needs constant oversight and guardrails. With the right processes, the output can be game-changing. Without them, we risk introducing vulnerabilities by omission.
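As one concrete illustration of such a guardrail, consider a minimal pre-merge check that flags obviously risky patterns in changed files before a human signs off. This is a hypothetical sketch – the deny-list and file handling are illustrative, not a complete security policy, and real guardrails would also layer in static analysis, secret scanning, and mandatory review:

```python
# guardrail.py - a minimal, illustrative pre-merge check for risky patterns.
# A sketch, not a complete policy: real guardrails combine static analysis,
# secret scanning, and mandatory human review.
import re
import sys
from pathlib import Path

# Hypothetical deny-list of patterns that warrant extra human scrutiny.
RISKY_PATTERNS = {
    r"\beval\(": "dynamic code execution",
    r"\bsubprocess\..*shell=True": "shell injection risk",
    r"(?i)(api[_-]?key|password)\s*=\s*['\"]": "possible hardcoded secret",
    r"verify\s*=\s*False": "TLS verification disabled",
}

def scan(path: Path) -> list[str]:
    """Return a list of warnings for one file."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern, reason in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append(f"{path}:{lineno}: {reason}: {line.strip()}")
    return findings

if __name__ == "__main__":
    # Usage: python guardrail.py <changed files...>
    all_findings = [f for arg in sys.argv[1:] for f in scan(Path(arg))]
    for finding in all_findings:
        print(finding)
    # A non-zero exit code blocks the merge until a human reviews the flags.
    sys.exit(1 if all_findings else 0)
```

Wired into CI, a check like this doesn’t replace review – it simply makes the AI’s output queue for the same scrutiny a junior engineer’s code would receive.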
Vibe hacking: Exploitation at the speed of AI
The same generative AI tools that empower developers can be used by malicious actors. In the past, hackers relied on zero-day vulnerabilities for sophisticated attacks. The term “zero day” refers to a flaw that defenders have had zero days to patch before having to defend against it – which effectively required an attacker to know about a vulnerability before the software vendor did.
Today, vibe hacking means attackers no longer need to find or purchase expensive zero-days to launch an attack. Generative AI can help turn a freshly disclosed CVE into a working exploit far faster than before, while upstream distributions often take weeks or months to patch CVEs and users take even longer to apply those patches. That speed gives attackers a first-mover advantage on known vulnerabilities: they can exploit them before most organisations can patch. Defenders now have a dramatically shorter window to react. To keep up, organisations need distributions that can patch and deploy fixes at the speed of the threat.
In a vibe-hacked world, security must be continuous, proactive, and fully integrated into the development lifecycle. The need to secure the entire software supply chain and ship daily patches for vulnerabilities has never been greater. Modernising software defence is both an opportunity and a necessity for organisations, ensuring that AI-enhanced development prioritises quality of code over sheer quantity.
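To make “continuous” concrete, the sketch below checks a single pinned dependency against the public OSV.dev vulnerability database; running a check like this on every build is one way to shrink the window between disclosure and patch. It assumes the OSV query API and uses only the Python standard library; a real pipeline would walk the full lockfile and fail the build on any unpatched finding:

```python
# check_vuln.py - query the public OSV.dev API for known vulnerabilities
# affecting a pinned dependency. A minimal sketch: a real pipeline would
# check every entry in the lockfile, not one hardcoded package.
import json
import urllib.request

OSV_API = "https://api.osv.dev/v1/query"

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[dict]:
    """Return OSV records affecting this exact package version."""
    query = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        OSV_API, data=query, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    # Example: an old library version with published advisories.
    for vuln in known_vulns("jinja2", "2.4.1"):
        print(vuln["id"], "-", vuln.get("summary", "no summary"))
```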
“The same generative AI tools that empower developers can be used by malicious actors”
A lesson from open source
In many ways, the rise of vibe coding mirrors the rise of open source software. Open source makes up 90% of the code in the modern application stacks we use today, and it allows engineers to spend their time delivering product-specific value rather than building foundational tools from scratch. With open source software, organisations had to learn an important lesson: even if you didn’t build every component, you’re still accountable for what runs in your environments.
With vibe coding, the same principle applies. Whether code is written by a staff engineer or suggested by an AI model, it will still run in your environment. And that accountability demands rigorous code review, security testing, and governance.
Oversight, provenance, and securing the future by default
AI doesn’t eliminate the need for craftsmanship; it redefines where that craftsmanship is applied. To thrive in this new world of AI-generated code, developers must evolve how they approach quality, review, and ownership. Be wary of accepting auto-generated suggestions without review, and of unintentionally becoming a conduit for vulnerabilities.
Vibe coding isn’t just a new way to write software; it’s a new way to think about the entire process of software development. As engineering leaders, we need to foster environments where AI is used transparently and safely. A GitHub study shows that while 97% of developers say they have used AI coding tools at work, company-level support varies widely, from 59% to 88%. Bridging that gap is a key step in empowering teams. As Farhan Thawar, VP and Head of Engineering at Shopify, recently put it, closing this gap and truly empowering development teams with AI tools “starts at the top”.
We need to invest in building systems that embrace supply chain provenance, so that if AI is the delivery vehicle for an exploit, we can trace it, understand it, and respond quickly. The future of software is faster, smarter, and more collaborative than ever. To ensure it is also secure, we need to make the safe path the easiest one, especially in an AI-powered world.
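In practice, provenance can start as simply as refusing to deploy an artifact whose cryptographic digest doesn’t match a pinned record of what was built. The sketch below illustrates that principle under stated assumptions: the file names and manifest format are hypothetical, and a production setup would verify a signed attestation (for example via Sigstore or SLSA tooling) rather than a local manifest:

```python
# verify_artifact.py - refuse to deploy anything whose digest doesn't match
# the record pinned at build time. A minimal illustration of provenance;
# the manifest format and file names here are hypothetical.
import hashlib
import json
import sys
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream the file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # Usage: python verify_artifact.py <artifact> <manifest.json>
    # manifest.json maps artifact names to digests recorded at build time,
    # e.g. {"app.tar.gz": "4b1c..."}.
    artifact, manifest = Path(sys.argv[1]), Path(sys.argv[2])
    expected = json.loads(manifest.read_text())[artifact.name]
    actual = sha256(artifact)
    if actual != expected:
        sys.exit(f"REFUSING DEPLOY: digest mismatch for {artifact.name}\n"
                 f"  expected {expected}\n  actual   {actual}")
    print(f"OK: {artifact.name} matches recorded provenance")
```

When a check like this is the default path to production, tracing a compromised component back to its origin becomes a lookup rather than an investigation.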
About the author
Matt Moore is co-founder and CTO of Chainguard.

Further reading
This article was first published in issue 2 of Business 4.0.

