
More Code Is Not Better Code

The quiet crisis of AI slop, and why shipping 37,000 lines in a day should be a warning sign, not a badge of honour.

There is a strange new status symbol in tech circles. It is not the elegance of a well-crafted system or the discipline of a clean codebase. It is volume. Raw, unreviewed, AI-generated volume. And it is producing some of the messiest software the web has ever seen.

We are living through a moment that will look embarrassing in hindsight. Developers and founders are competing to brag about how many lines of code their AI tools have generated, as though quantity were a proxy for quality. It is not. It has never been. And the gap between those two things is getting dangerously wide.

What is filling that gap has a name: AI slop. Code that compiles. Code that ships. Code that is, underneath the surface, a disaster.

37,000 Lines in a Day Is Not a Flex

Garry Tan, CEO of Y Combinator, made waves recently when he claimed to have written 37,000 lines of code in a single day across five different projects. He did this using GStack, a toolset built on markdown files that uses AI to generate websites. The announcement was celebrated in some corners. In others, it was read as a confession.

Here is the simple truth: no human being can read and understand 37,000 lines of code in 24 hours. Not even close. And if you cannot read the code, you do not know what is in it. You do not know what it is doing to your users, to your data, to your infrastructure. You are not an engineer at that point. You are a bystander watching an AI cannon fire into production.

Real quality is not about how many lines you write. It is about how well those lines work together.

This is the kind of metric that sounds impressive to people who do not build software for a living. To anyone who does, it is a red flag. Lines of code are not a measure of value. They are often a measure of waste.

What Happens When Nobody Checks the Work

A developer named Gregorian decided to look under the hood of Garry's site, a project representing 78,000 lines of AI-generated code built over a 72-day shipping streak. The findings were not subtle.

- 28 test files shipped to every visitor.
- 4 MB of uncompressed image weight on a single page.
- Full page content rendered twice, once per device type.
- 47 images with no alt text for screen readers.

The site was bundling 28 test files and sending them to every visitor. These are files that exist only to check whether the code works during development. They have no business being on a live website. Their presence adds roughly 300 kilobytes of dead weight to every page load. More importantly, it reveals that no one reviewed what was going into the build.
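A check like this is trivial to automate. Here is a minimal sketch in Python, assuming a build output directory named "dist" and a few common test-file naming patterns; both the directory name and the patterns are illustrative, not universal.

```python
from pathlib import Path

# Illustrative patterns for files that belong in development,
# not in a production bundle. Adjust to your test framework.
TEST_PATTERNS = ("*.test.*", "*.spec.*", "*_test.*")

def find_leaked_test_files(build_dir: str) -> list[str]:
    """Return paths of test files that leaked into the build output."""
    root = Path(build_dir)
    # Deduplicate in case a file matches more than one pattern.
    leaked = {str(p) for pattern in TEST_PATTERNS for p in root.rglob(pattern)}
    return sorted(leaked)
```

Wired into a CI step, a non-empty result should fail the deploy. The point is not the specific patterns; it is that someone decided what must never reach users, and a machine now enforces it.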

The site was also loading scaffolding from the Rails framework — specifically the “Hello World” starting template that developers are supposed to discard as soon as they begin building. Leaving it in a finished product is not a minor oversight. It is a signal that the construction was so automated, so unexamined, that no human walked the site before it went live.

In 2026, there is no excuse for serving uncompressed images. Every modern build tool handles this automatically. Yet this site was loading a 2-megabyte PNG and another image approaching the same size. These images make pages slow, waste mobile data, and punish users with worse connections. The AI generated the code. Nobody set expectations about what the output should look like.
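Catching a 2-megabyte PNG before it ships does not require sophisticated tooling either. A hedged sketch of a per-image size budget check, where the 300 KB budget and the "dist" directory name are assumptions to adapt, not hard rules:

```python
from pathlib import Path

# Image extensions worth auditing; extend as needed.
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".webp"}
# Illustrative per-file budget; pick one that fits your pages.
BUDGET_BYTES = 300 * 1024

def oversized_images(build_dir: str, budget: int = BUDGET_BYTES) -> list[tuple[str, int]]:
    """Return (path, size) pairs for images over the budget, largest first."""
    hits = [
        (str(p), p.stat().st_size)
        for p in Path(build_dir).rglob("*")
        if p.is_file()
        and p.suffix.lower() in IMAGE_EXTS
        and p.stat().st_size > budget
    ]
    return sorted(hits, key=lambda pair: -pair[1])
```

Run against the build output, this surfaces exactly the kind of 2 MB file described above before any visitor pays for it. Setting the expectation is the human's job; checking it every build is the script's.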

The DOM, the structural layer of the web page, was rendering the full content twice — once for mobile users, once for desktop. This is an old pattern that the industry abandoned for good reason. It doubles page weight and complexity for no user benefit. A scoring tool gave the site an 80 out of 100. That score hides the 47 missing image descriptions, the inaccessible interface elements, the bloated bundles. An 80 is not a passing grade for a site serving real people.

You Cannot Outsource the Standard

Some will argue that none of this matters. The site loads. People can use it. The AI will get better. Future models will catch these mistakes automatically. This argument is seductive and almost entirely wrong.

The issue is not the AI. The issue is the assumption that the AI is responsible for quality. It is not. The person who clicks publish is responsible. If your site sends test files to users, that is your failure. If your site is invisible to someone using a screen reader, that is your failure. You cannot reassign that to the model that generated the code.

There is also a competence problem buried here. If you are telling an AI to “make it good” and “make no bugs,” and you cannot tell the difference between a good result and a bad one, you are not using a tool. You are hoping. And hope is not an engineering practice.

Accessibility is not a bonus feature. Fast load times are not a luxury. These things matter to real people trying to use your product. Building with AI does not exempt you from caring about them. It makes caring about them more important, because the AI will not do it for you.

Slow Down to Build Better

Mario Zechner, developer of Pi, has articulated something important here. He loves AI tools. He uses them constantly. But he believes, correctly, that the right posture when working with AI is engaged and critical, not passive and credulous.

Five principles for AI-assisted development that does not embarrass you.

1. Let AI handle the dull work. Use it for repetitive, mechanical tasks that teach you nothing new and carry low risk if they go wrong.

2. Set strict output limits. Only ask the AI to write as much as you can actually read in one sitting. If you cannot review it, you cannot ship it.

3. Read every line before you commit it. Not skim. Read. Understand what each block is doing and why it is there.

4. Run analysis tools on every build. Dead code, duplicate logic, bloated bundles, missing accessibility attributes. These tools exist. Use them every time.

5. Fix mistakes yourself. Do not loop the AI on the same error indefinitely. Sometimes the fastest path is to write the correct line yourself. This also keeps your skills sharp.
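The fourth principle does not require expensive tooling. As one illustration, a missing-alt-text audit fits in a few lines of standard-library Python; the class and function names here are mine, not from any real tool.

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collect <img> tags that have no alt attribute at all.

    An empty alt="" is deliberately left alone: it is the correct
    markup for purely decorative images.
    """
    def __init__(self) -> None:
        super().__init__()
        self.missing: list[str] = []

    def handle_starttag(self, tag: str, attrs) -> None:
        attr_map = dict(attrs)
        if tag == "img" and "alt" not in attr_map:
            self.missing.append(attr_map.get("src", "<no src>"))

def images_missing_alt(html: str) -> list[str]:
    """Return the src of every <img> in the document lacking an alt attribute."""
    auditor = AltTextAuditor()
    auditor.feed(html)
    return auditor.missing
```

A real audit would use a dedicated accessibility checker, but even a sketch like this, run on every build, would have caught the 47 undescribed images mentioned earlier.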

These principles are not anti-AI. They are pro-quality. There is a meaningful difference between using AI to accelerate good work and using AI to automate indifference. The first is a genuine advantage. The second is what we now call a slop cannon.

The fundamentals of the web have not changed because AI tools exist. Images still need to be compressed. Pages still need to work on slow connections. Disabled users still deserve access to the content you publish. Knowing how these things work is not optional knowledge that the AI makes redundant. It is the knowledge that lets you tell the AI what to do and recognise when it has failed.

The Bottom Line

The rise of AI slop is not inevitable. It is a choice.

Every time someone ships 37,000 lines without reading them, they are making that choice. Every time someone runs an accessibility audit and skips the alt text, they are making that choice. The internet is a shared space and what you put into it has consequences for real people.

Use the tools. Move fast. But read the code. Check the images. Test the accessibility. That is not slowness. That is the job.