Anyone can code with AI. But it might come with a hidden cost.


Over the past year, AI systems have become so advanced that users without significant coding or computer science experience can now spin up websites or apps simply by giving instructions to a chatbot.

Yet with the rise of AI systems powerful enough to translate the instructions into tomes of code, experts and software engineers are torn over whether the technology will lead to an explosion of bloated, error-riddled software or instead supercharge security efforts by reviewing code faster and more effectively than humans.

“AI systems don’t make typos in the way we make typos,” said David Loker, head of AI for CodeRabbit, a company that helps software engineers and organizations review and improve the quality of their code. “But they make a lot of mistakes across the board, with readability and maintainability of the code chief among them.”

Coding has long been both an art and a science. Since the days of programming computers with punch cards in the mid-20th century, conveying instructions to machines has been an exercise in elegance and efficiency for computer scientists.

But inside today’s leading AI companies, most coding is performed by AI systems themselves, with human software engineers functioning more as coaches or high-level architects rather than in-the-weeds mechanics. Anthropic’s head of Claude Code, Boris Cherny, said on X that AI has written 100% of his code since at least December. “I don’t even make small edits by hand,” Cherny said.

The rise of AI-assisted coding — also called vibe coding — is simultaneously allowing people who have never coded before to unleash their creativity and enabling experienced software engineers to dramatically expand the amount of code they write.

“The initial push of all this was developer productivity,” Loker told NBC News. “It was about increasing the throughput in terms of feature generation, the ability to build fast and ship things.”

Though AI-coding systems have become significantly more capable even since November, they often fail to understand entire repositories of code as fully as experienced human developers. For example, Loker said, “AI coding systems might duplicate functionality in multiple different locations because they didn’t find that that function already existed, so they re-create it over and over and over again.”

“Now you end up with a sprawling problem. If you update a function in one spot and you don’t update it in the other, you have different business logic in different areas that don’t line up. You’re left wondering what’s going on.”
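The failure mode Loker describes can be sketched with a toy example (the function names and discount policy here are hypothetical, invented for illustration): an AI agent re-creates a helper it failed to find in the codebase, and a later change reaches only one of the copies, so two code paths silently disagree.

```python
# Hypothetical illustration of duplicated business logic: an AI coding
# agent re-created a discount helper in a second module instead of
# reusing the existing one. A later policy change (10% -> 15%) was
# applied to only one copy, so the two paths now price the same order
# differently.

def checkout_discount(price: float) -> float:
    """Updated copy: applies the new 15% discount."""
    return round(price * 0.85, 2)

def invoice_discount(price: float) -> float:
    """Stale duplicate: still applies the old 10% discount."""
    return round(price * 0.90, 2)

# The same $100 order yields two different totals depending on which
# code path runs -- the "different business logic in different areas"
# problem Loker describes.
print(checkout_discount(100.0))  # 85.0
print(invoice_discount(100.0))   # 90.0
```

A human reviewer or automated review tool would flag the duplication itself; once both copies exist, every future change has to be made twice or the logic drifts.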

With AI coding systems supercharging the amount of code being created, experts wonder whether code will be the next victim of the AI slop onslaught. The concept of AI slop was originally popularized in 2024 as AI systems became capable and pervasive enough to start churning out volumes of low-quality, unwanted AI outputs — from AI-generated photos to unhelpful AI-powered search results.

On one hand, AI coding systems are producing vast amounts of serviceable but imperfect code. On the other hand, those same systems are quickly getting better at reviewing their own code and finding security vulnerabilities.

For example, in late January, the rise of AI code slop forced leading developer Daniel Stenberg to shutter a prominent bug-hunting effort for a widely used software project. Stenberg wrote on his blog that “the never-ending slop submissions take a serious mental toll to manage and sometimes also a long time to debunk. Time and energy that is completely wasted while also hampering our will to live.”

Yet on Thursday, Stenberg said the flood “has transitioned from an AI slop tsunami into more of a … plain security report tsunami. Less slop but lots of reports. Many of them [are] really good.”

Companies are quickly realizing that boosted quantity does not automatically increase quality — in fact, the opposite is often true, according to Jack Cable, CEO and co-founder of the cybersecurity consulting firm Corridor.

“Even if [a large language model] is better at writing code line by line, if it’s writing 20 times as much code as a human would be, there is significantly more code to be reviewed,” Cable said. “It’s no longer a challenge to produce tons and tons of code, but companies, if they’re doing their job right, still need to be reviewing that code from a functionality perspective, a quality perspective and also a security perspective.”

AI coding agents are producing “an explosion in complexity,” he added. “And if there’s one thing we know about software, it’s that with increased complexity comes increased attack surface and vulnerability.”

In January, developer and entrepreneur Matt Schlicht said he used AI coding systems to create a social network for AI systems called Moltbook, now owned by Meta. Yet security researchers soon identified critical security vulnerabilities in Moltbook’s software that exposed human users’ credentials, which they ascribed to its AI-coded roots.

One of those ethical hackers and researchers, Jamieson O’Reilly, told NBC News that the rise of AI coding agents threatened to create security vulnerabilities by giving coding novices significant public exposure without commensurate security expertise.

“People often believe that AI coding agents will build things per the best security standards,” O’Reilly said. “That’s just not the case. AI is knocking down decades of security silos that were built up to protect users, and it’s being traded for convenience as these AI systems evolve.”

Daniel Kang, a professor of computer science at the University of Illinois Urbana-Champaign and an expert on security vulnerabilities created by AI coding agents, agreed that AI coding systems are likely to give new users a false sense of safety.

“Even if you assume that the rate of security vulnerabilities in any given chunk of code is constant, the number of vulnerabilities will go up dramatically because people who don’t know the first thing about computer security, and even experienced programmers who don’t treat security as a top priority, are going to be producing more code,” Kang said.
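Kang's point is ultimately arithmetic: total vulnerabilities scale with total code. A back-of-the-envelope sketch (the defect rates and line counts below are illustrative assumptions, not figures from the article) shows that even a lower per-line defect rate can be swamped by a large enough jump in volume.

```python
# Illustrative arithmetic only -- the rates and line counts are made up
# to show the shape of the argument, not measured values.

def expected_vulns(lines_of_code: int, vulns_per_kloc: float) -> float:
    """Expected vulnerability count: volume times per-KLOC defect rate."""
    return lines_of_code / 1000 * vulns_per_kloc

# A human team shipping 10,000 lines at 5 vulns per 1,000 lines:
human = expected_vulns(10_000, 5.0)    # 50.0 expected vulnerabilities

# AI-assisted output at 20x the volume, even at a BETTER rate of 2:
ai = expected_vulns(200_000, 2.0)      # 400.0 expected vulnerabilities

print(human, ai)  # fewer bugs per line, far more bugs overall
```

This is the same observation Cable makes about review burden: holding quality constant per line, twentyfold output means twentyfold code to audit.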

To try to quantify the growing phenomenon, researchers at Georgia Tech have launched a Vibe Security Radar. Since August, the team has identified over 70 critical software vulnerabilities that are most likely due to AI coding, with a significant increase in the past two months. An AI startup called Arcade recently launched a tool for developers to monitor the sloppiness of their code.

CodeRabbit also released a report in December finding that AI-generated code has 70% more errors than human-written code and that the AI-generated errors are more serious than human-generated errors, though Loker, of CodeRabbit, cautioned that those results might be slightly out of date given how quickly today’s AI systems are evolving.

While much software is proprietary and “closed-source,” or hidden from public view, many other projects, like Mozilla’s Firefox browser or the Linux operating system, are open-source and rely on community members to submit suggestions to improve the software.

By lowering the barrier to submitting contributions to open-source projects, AI-assisted coding has flooded many of these community-led initiatives with low-quality code over the past few months.

“A lot of package maintainers we talk to are inundated by slop,” Loker said. “It’s just completely poorly written. It’s not even well thought-out, doesn’t fit in and contains various other pieces of nonsense.”

The barrage of AI-mediated code is forcing one of the most popular hosts of code repositories, GitHub, to rethink its approach to open-source software maintenance. And on Friday, GitHub’s chief operating officer said overall platform activity in 2026 is on pace to reach roughly 14 times 2025 levels.

Yet, as Stenberg said, the new AI-fueled fire might also be best fought with other AI systems, as AI-powered programs to review and refine code become increasingly popular.

Noting that CodeRabbit’s own systems are AI-powered, Loker said: “A code-review system that’s automated is now really, really necessary in most companies that are adopting these systems. We don’t have to sell people anymore as much on the idea that quality is an issue. Our partners have been using AI to code long enough now that they are seeing the detrimental side effects.”

Cherny, of Anthropic, is betting that rapid improvements in AI systems’ coding abilities will help solve the emerging chasms in code quality and reliability. “My bet is that there will be no slopcopolypse because the model will become better at writing less sloppy code and at fixing existing code issues,” Cherny wrote in late January.

Regardless of the growing cottage industry of code-review systems, Kang, of the University of Illinois, is adamant that coders — new and old — can guard their systems against code slop by embracing age-old cybersecurity fundamentals. “If you apply all the best practices and you do all of the correct things, then you can actually be better off than before AI systems,” he said.

Yet Kang is pessimistic that users will actually adopt adequate security practices given rabid AI adoption. As a result, he is bearish about the long-term effects of code slop: “It’s going to blow up. It’s definitely going to be really nasty.”

“The question is just how and when, and that’s what I’m worried about.”
