Vibe Coding: Can It Self-Secure?

Vibe coding is a hot new trend in software development that prioritizes AI-assisted, high-speed coding using large language models (LLMs). Instead of manually coding line by line, developers work with agentic AI models that understand intent, generate full implementations, and refine through iterative feedback. This allows for rapid development, but it also introduces considerable challenges around security, debugging, and architectural oversight.

One of the key features of vibe coding is its ability to offload significant portions of development to AI, allowing human engineers to focus on higher-level decision-making. This shift in workflow has sparked discussions about the role of software architects and the necessity of maintaining strong security protocols despite the automation of large portions of code creation. I recently sat down with Steve Ellis, a Resonance Security advisor and proponent of agentic AI coding, to learn more about the trend, its promises and pitfalls, and how to make sure security doesn’t fall by the wayside when using it.

MCP Servers = Additional Functionality

One of the biggest tools in Steve’s agentic AI toolbox is Model Context Protocol (MCP) servers. MCP servers provide AI coding agents with specialized abilities, extending the functionality of AI much as APIs extend the functionality of web applications. Increased AI memory, emotional tracking, and decision retention are the key capabilities Steve gets from his personal MCP server. These extended abilities allow his AI to retain past security decisions, maintain consistency across development cycles, reduce repetitive mistakes, and learn from past feedback.
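To make the concept concrete, here’s a minimal sketch of what such a memory-extending MCP server could look like, built with the official MCP Python SDK. This is an illustration, not Steve’s actual server: the `record_decision` and `recall_decisions` tools are hypothetical stand-ins for the kind of decision-retention features he describes.

```python
# Minimal sketch of an MCP server exposing "memory" tools to a coding agent.
# Hypothetical illustration -- not Steve's actual server. Requires the official
# MCP Python SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("security-memory")

# In-memory store of past security decisions (a real server would persist
# these to disk or a database so they survive restarts).
decisions: list[dict[str, str]] = []

@mcp.tool()
def record_decision(topic: str, decision: str) -> str:
    """Store a security decision so the agent can stay consistent later."""
    decisions.append({"topic": topic, "decision": decision})
    return f"Recorded decision on '{topic}'."

@mcp.tool()
def recall_decisions(topic: str) -> list[str]:
    """Return past decisions matching a topic, for reuse in new code."""
    return [d["decision"] for d in decisions if topic.lower() in d["topic"].lower()]

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio for the AI agent to call
```

Once the server is registered in the agent’s configuration, the AI can call these tools itself, which is what lets a decision like “always parameterize SQL queries” carry over from one session to the next instead of being relearned each time.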

I found the emotional dashboard of his server especially intriguing. While AI does not have true emotions, this system uses Steve’s feedback to track the AI’s confidence, happiness, and frustration, allowing him to gauge how well the AI understands a given project. For example, if Steve frequently corrects the AI’s logic in a particular area, the AI may adjust its approach and become more cautious when generating similar code in the future. When the AI’s confidence is high, it is more likely to generate reliable code, whereas fluctuations in confidence may indicate uncertainty and call for human guidance.
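Steve’s dashboard is his own, but purely as an illustration, here’s a rough sketch of how such feedback-driven signal tracking might be structured. Every name, threshold, and score here is hypothetical:

```python
# Rough, hypothetical sketch of an "emotional" signal tracker. The AI has no
# real emotions; these are just running scores derived from developer feedback,
# used to decide when a human should step in.
from dataclasses import dataclass, field

@dataclass
class EmotionTracker:
    confidence: float = 0.5   # 0.0 = very unsure, 1.0 = very sure
    frustration: float = 0.0  # rises when corrections pile up
    history: list = field(default_factory=list)

    def register_feedback(self, area: str, corrected: bool) -> None:
        """Update scores: corrections lower confidence and raise frustration."""
        self.confidence = min(1.0, max(0.0, self.confidence + (-0.1 if corrected else 0.05)))
        self.frustration = min(1.0, max(0.0, self.frustration + (0.1 if corrected else -0.05)))
        self.history.append((area, corrected))

    def needs_human_review(self) -> bool:
        """Low confidence or high frustration signals the developer to intervene."""
        return self.confidence < 0.3 or self.frustration > 0.7

tracker = EmotionTracker()
tracker.register_feedback("auth logic", corrected=True)  # developer fixes the AI's logic
print(tracker.confidence, tracker.needs_human_review())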

Pros vs Security Risks

Promises and Pitfalls

Vibe coding offers several advantages that, at first glance, make it an attractive development approach. Speed is perhaps the most significant benefit: rapid generation reduces manual effort and lets developers focus on refining logic rather than writing boilerplate code. Steve even believes it has the potential to eliminate SaaS models by enabling localized, AI-powered development that does not rely on third-party services.

However, vibe coding, as it stands today, is not without its challenges, particularly when it comes to debugging, security risks, and designing advanced architecture. One of the main issues developers face is debugging complexity, compounded by the AI’s generic way of reasoning. AI models, despite their impressive capabilities, are far from perfect. They sometimes produce code that is syntactically correct but logically flawed, because the AI has no inherent understanding of business logic or use-case requirements. What appears to be a valid solution might miss key edge cases, handle errors incorrectly, or fail to account for all functional aspects of the system. The AI may also hallucinate, generating misleading or entirely incorrect outputs that seem plausible but do not align with the actual requirements.
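Here’s a hypothetical example of what that looks like in practice: code that runs fine and reads plausibly, but silently violates the business rules.

```python
# Hypothetical example of AI-generated code that is syntactically correct
# but logically flawed: it applies a percentage discount without guarding
# against edge cases the business rules require.
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price."""
    return price - price * percent / 100

# Looks fine, but misses key edge cases:
#   apply_discount(100, 150) -> -50.0   (negative price: percent was never capped)
#   apply_discount(100, -10) -> 110.0   (a "discount" that raises the price)

# A correct version encodes the business rules explicitly:
def apply_discount_safe(price: float, percent: float) -> float:
    if price < 0:
        raise ValueError("price must be non-negative")
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price - price * percent / 100
```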

These logical errors can be particularly challenging to identify and fix because they are often subtle and not immediately obvious in the code's structure. Developers must stay on the lookout, constantly reviewing the code to catch these issues early. Even though the AI may generate code faster than a human developer, it often requires extensive manual debugging and validation to ensure that the logic it produces actually works within the intended system. This introduces additional development overhead despite the promise of increased speed. 

AI-generated code often introduces security risks because of its rapid development pace and the model’s lack of deep contextual understanding of secure coding practices. Without the scrutiny of a traditional review process, AI output can miss critical security considerations, leading to issues such as insecure data storage, weak authentication, and unnecessary API exposure. Since large language models do not inherently prioritize security, developers must implement continuous security checks and integrate tools like static code analyzers or AI security scanners. Without proper human oversight, AI-generated code can introduce critical weaknesses that leave systems vulnerable to attack.
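As one illustrative sketch (not tied to any specific model’s output), here’s the kind of weak-authentication pattern an AI assistant can produce, alongside a safer alternative using only Python’s standard library:

```python
# Illustrative sketch: a weak-authentication pattern AI assistants often
# produce, and a safer standard-library alternative.
import hashlib
import hmac
import os

# Insecure: plaintext storage and a timing-unsafe comparison.
def check_password_insecure(stored_plaintext: str, attempt: str) -> bool:
    return stored_plaintext == attempt  # plaintext at rest; leaks timing info

# Safer: salted key derivation (PBKDF2) plus a constant-time comparison.
def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def check_password(salt: bytes, digest: bytes, attempt: str) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

Both versions “work” in a demo, which is exactly why the insecure one slips through when nobody is reviewing with a security lens.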

Manually Coding for Automatic Security

So we’ve learned that vibe coding has many security implications, but is there a way to still leverage this incredibly fast and intuitive technology while staying secure? Yes, but it’s going to take some human work.

Vibe coding begins in an Integrated Development Environment (IDE). Each IDE has rules that the developer can use to govern the project they’re working on. Since Steve uses Cursor, we’ll stick to talking about Cursor Rules here. Cursor Rules can be used to enforce coding style, reduce hallucinations, implement security best practices, and much more. They are mandatory instructions given to the AI by the developer. You can find the full database of Cursor Rules here.

Cursor Rules can be a powerful tool for enforcing cybersecurity measures. They can be applied for general best practices, such as:

“You maintain a continuous and proactive focus on security, approaching it with a defense-in-depth mindset that considers multiple layers of protection. At every step—whether modifying code, designing systems, or reviewing architecture—you critically assess the potential for introducing vulnerabilities or insecure patterns.”

Or they can be more granular, targeting issues such as SQL injection or improper access control, an extremely important safeguard that AI code agents very often overlook. Such a rule can help the AI automatically sanitize inputs, apply the Principle of Least Privilege, enforce RBAC policies, and more. It may look like this:

“When implementing or reviewing web application code, pay close attention to risks such as input injection (XSS, SQLi, command injection), broken authentication or session management, improper access control, insecure deserialization, SSRF, and leakage of sensitive data via verbose errors or misconfigured headers.”
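To see what that rule is steering the AI away from, here’s a minimal before-and-after sketch of the SQL injection case, using Python’s built-in sqlite3 module:

```python
# Minimal before/after sketch of the SQL injection risk the rule targets,
# using Python's built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable: string interpolation lets the input rewrite the query.
rows = conn.execute(
    f"SELECT role FROM users WHERE name = '{user_input}'"
).fetchall()
print(rows)  # [('admin',)] -- the OR clause matched every row

# Safe: a parameterized query treats the input purely as data.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- no user is literally named "alice' OR '1'='1"
```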

Another critical security measure involves ensuring that sensitive information, such as API keys, is not hardcoded into the source code. To address this, a Cursor Rule can be implemented to detect any appearance of API keys, passwords, or other sensitive tokens. Instead of just flagging the issue, the AI will automatically extract the hardcoded key and move it to a secure environment variable. That rule might look like this:

“If a hardcoded key is found, extract it and replace it with a reference to an environment variable. Ensure that the .env file (or equivalent configuration) includes the key securely.”
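In practice, the refactor that rule describes might look like this sketch. The key name and value are placeholders, and the python-dotenv package shown here is just one common way to load a .env file:

```python
# Sketch of the refactor the rule describes: a hardcoded secret moved out of
# source code and into an environment variable.
#
# Before (what the AI might generate first):
#     API_KEY = "sk-live-abc123"            # hardcoded secret -- flagged
#
# After: read the key from the environment instead.
import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # loads variables from a local .env file into os.environ

API_KEY = os.environ["API_KEY"]  # fails fast if the key is missing

# .env (kept out of version control via .gitignore):
#     API_KEY=sk-live-abc123
```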

Overall, IDE rules can help enforce security practices and keep the AI on track, but this is not automatic. The developer must manually set these rules and continue to oversee the AI’s work to ensure these rules are being followed. 

Looking to the Future

While vibe coding does present an exciting future, dramatically increasing coding speed and allowing developers to reduce research time by instantly creating common structures, it is not a silver bullet. The speed and ease of AI-assisted coding come with inherent risks, particularly around debugging and security. Large language models, despite their sophistication, lack deep contextual understanding of business logic and cybersecurity threats. This means that human oversight remains essential. Developers must proactively set rules, review AI outputs, and implement their own security measures to mitigate risks.

Developers can enhance security by using MCP Servers to maintain consistency and IDE rules to guide AI on secure coding practices. They should also monitor AI hallucinations and logic gaps, as AI may produce flawed code that appears functional, but contains hidden vulnerabilities. Continuous testing and debugging are essential to catching these issues early. Ultimately, vibe coding should be seen as a tool to support developers rather than replace them. While AI can automate mundane coding tasks, it lacks a true understanding of the code it generates and does not prioritize security by default. Developers remain responsible for debugging, architecture, and security oversight. At that point, some may find it easier to just write the code themselves instead of using vibe coding at all. 

If done correctly, vibe coding can be an incredible partner in building full-scale applications, as shown by Steve’s projects, Nuclearclarity.com and crabapi.com, both written by agentic AI (with heavy Steve oversight, of course). Whether micromanaging the AI’s every move is worth it is up to you; given the technology’s potential, though, Resonance is keeping a close eye on this space, and we may just have something in the works that can help enable secure vibe coding very soon. Stay tuned!
