The rise of "vibe coding," a term coined by OpenAI co-founder Andrej Karpathy in February 2025, has grown from a curious experiment into a widespread movement in just 18 months. According to Stack Overflow data, 65% of developers now use AI coding tools at least weekly. Collins even named the term its word of the year, and in Y Combinator's Winter 2025 cohort, a quarter of startups reported that 95% or more of their codebase was AI-generated. This surge in adoption, however, has a downside: vulnerabilities, data leaks, and broken applications, and the scale of these problems is growing.
Vibe coding is an approach in which a developer describes the desired outcome in natural language and lets a language model generate working code. Unlike traditional AI-assisted development, the output often goes unverified. Programmer Simon Willison has stressed that if a developer reviews, tests, and can explain the AI-generated code, it is not vibe coding at all but ordinary collaboration with a language model. The problem is that many who adopt the trend skip verification entirely. AI optimizes for functionality, not security, so vulnerabilities can stay hidden from anyone without the expertise to spot them.
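A minimal sketch of the kind of flaw this describes: code that "works" for honest input but is trivially exploitable. The names, schema, and payload below are hypothetical, not taken from any real AI output; the pattern, building a SQL query by string interpolation instead of using parameters, is a classic example of functional-but-insecure code.

```python
import sqlite3

# In-memory demo database with one user (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_naive(name: str, password: str) -> bool:
    # Functional but insecure: user input is spliced into the SQL string.
    query = f"SELECT 1 FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name: str, password: str) -> bool:
    # Same behavior for honest input, but immune to injection:
    # the driver passes values separately from the SQL text.
    query = "SELECT 1 FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone() is not None

# A classic injection payload bypasses the naive check entirely.
payload = "' OR '1'='1"
print(login_naive("alice", payload))  # True: authentication bypassed
print(login_safe("alice", payload))   # False: payload treated as a literal string
```

Both functions pass a casual "does login work?" test, which is exactly why such flaws survive when nobody reviews the generated code.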
A randomized controlled trial by the nonprofit METR in July 2025 produced unexpected results: experienced developers took 19% longer to complete tasks when working with AI tools than without them. Before the experiment, the same developers had predicted a 24% speedup; afterward, they still believed they had worked 20% faster, a 39-percentage-point gap between perception and reality. Participants accepted fewer than 44% of AI-generated suggestions, spending considerable time verifying and adjusting the output and ultimately rejecting much of it. Even so, 69% of participants kept using AI tools after the study ended.
Supporting this result, developer Mike Judge of Substantial ran his own experiment, flipping a coin to decide whether to use AI on each task. He measured a 21% slowdown, closely matching METR's findings. Analyses of publicly available metrics, including new app launches and GitHub projects, likewise found no evidence of a dramatic productivity boost among developers.
Additionally, a December 2025 study by CodeRabbit, which analyzed 470 pull requests on GitHub, found that AI-generated code contained 1.7 times more serious issues and 2.74 times more security vulnerabilities than manually written code. In May 2025, Georgia Tech launched the Vibe Security Radar project to track CVE records linked to AI-generated code; by March 2026 it had documented 74 confirmed vulnerabilities caused by AI, with a sharp spike in recorded incidents in recent months.
The consequences of these vulnerabilities were starkly illustrated by a major incident at the AI-driven social network Moltbook. Within days of its January 2026 launch, researchers discovered that a Supabase API key exposed in the platform's client-side JavaScript allowed unauthorized access to its database, compromising 1.5 million authentication tokens and thousands of private messages.
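The class of mistake behind this kind of leak is easy to reproduce in miniature: any credential embedded in client-side code ships to every visitor and can be pulled out of the downloaded bundle with a trivial scan. The bundle text and key format below are invented for illustration, not Moltbook's actual code.

```python
import re

# Hypothetical client-side bundle, exactly as a browser would download it.
bundle_js = """
const client = createClient(
  "https://example-project.supabase.co",  // project URL (public by design)
  "service_role_9f8e7d6c5b4a"             // hypothetical privileged key: must never ship to clients
);
"""

# Anyone who receives the JavaScript can scan it for secret-looking tokens.
SECRET_PATTERN = re.compile(r"service_role_[0-9a-f]+")

def extract_secrets(source: str) -> list[str]:
    """Return every privileged-looking token found in client code."""
    return SECRET_PATTERN.findall(source)

leaked = extract_secrets(bundle_js)
print(leaked)  # ['service_role_9f8e7d6c5b4a']
```

Real secret scanners work on the same principle at scale, combining known key-format patterns with entropy checks; the defense is to keep privileged keys server-side and expose clients only to narrowly scoped credentials.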
This incident was not isolated. In May 2025, security researcher Matt Palmer found over 300 unsecured endpoints across 1,645 applications built on the Lovable platform, exposing sensitive user information. Another startup, Enrichlead, which claimed its product was built entirely by AI, collapsed shortly after launch when severe security flaws let users bypass its payment system with ease.
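A payment bypass of the kind described usually comes down to one missing check: the server trusts state supplied by the client instead of consulting its own records. A minimal sketch with invented names (not Enrichlead's actual code):

```python
# Server-side source of truth for subscription status (hypothetical).
PLANS = {"user_1": "free"}

def grant_access_naive(user_id: str, request_body: dict) -> bool:
    # Insecure: the client simply claims it has paid, and the
    # server believes it. Any user can forge this request body.
    return request_body.get("paid") is True

def grant_access_safe(user_id: str, request_body: dict) -> bool:
    # Secure: ignore client claims and consult server-side records.
    return PLANS.get(user_id) == "paid"

forged = {"paid": True}
print(grant_access_naive("user_1", forged))  # True: payment bypassed
print(grant_access_safe("user_1", forged))   # False: server record says 'free'
```

The rule it illustrates is general: authorization decisions belong on the server, and anything arriving in a request body is attacker-controlled input.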
Vulnerabilities have also surfaced in the AI tools themselves: Kaspersky Lab has reported critical security issues that could allow data to be stolen from developers' computers. The concern, then, extends beyond AI-generated code to the tools that produce it.
As the market continues to embrace AI in software development, these findings underscore the urgent need for stronger security measures and thorough verification. Companies building on AI-generated code must adapt quickly to close these gaps if they want to maintain trust in their products.