
AI Made Everyone a Builder and That's a Problem

Ran Isenberg

AWS Serverless Hero & Principal Cloud Architect at Palo Alto Networks

Passionate about AI, Serverless, Platform Engineering and helping organizations build reliable & scalable systems on AWS.

AI has made building software incredibly accessible. You can go from idea to a working prototype in hours, and the feeling is intoxicating. I've felt it myself. But that same low barrier to entry is creating a wave of problems that the community isn't talking about enough.

People are shipping open source projects that will never reach production quality, building custom tools when a simple Google search would reveal a free SaaS alternative, and perhaps worst of all, letting AI agents run loose on the open source ecosystem, flooding maintainers with spam PRs, fake issues, and AI-generated slop that's actively harming the communities we all depend on. And all of this is happening while security takes a back seat.

In this post, I'll share what I've been seeing firsthand, from my own projects and across the community, and why I think we need to slow down and ask better questions before we hit that build button.

AI Makes You Feel Like a God, Until Production

I'll be the first to admit it: building with AI feels like a superpower. You describe what you want, the agent writes code, and within hours, you have something that actually works. I replaced my entire Wix website using Claude in about three hours without writing a single line of code myself, and I've shipped multiple real projects using agentic AI workflows since then.

But "working on my machine" is not the same as production-grade, and excitement is not the same as judgment. Remember those three hours? The site that emerged from that session had security issues, mediocre performance, failed accessibility standards, lacked analytics, and had no tests. It took me a couple of weeks to properly sort it out: adding over 4,000 tests, wiring up daily analytics sent to my email, and putting other production guardrails in place.

Getting to production is not easy. Getting LinkedIn fame over an AI-built demo is much easier, and that's exactly the problem.

The Open Source Graveyard

Scroll through GitHub trending or your LinkedIn feed on any given day, and you'll see dozens of new AI-built open source projects: new CLI tools, new frameworks, new "awesome" lists, new agents for every possible use case. Sure, many of them have a contribution guide, maybe even a decent README. But if you look closer, most are missing the things that actually matter for production use: proper testing, CI/CD pipelines, security considerations, and any real plan for long-term maintenance.

Let's be honest, a lot of these projects exist primarily for the LinkedIn post announcing them. The repo gets a burst of stars, the author gets a wave of likes and congratulations, and then, when the dust settles two or three months later, the project is quietly abandoned.

I've spent over 4 years building and maintaining open-source projects, and making software work is the easy part. The hard part is everything after the initial release: documentation, automated testing, security from day one, and showing up consistently to maintain what you've built. I wrote about this in my post on building internal tools that people actually want to adopt, and the principles apply just as much to open source. Most of these AI-generated projects will be ghost towns within three months.

Google It Before You Build It

When you feel like you can build anything, you forget to check if someone else has already. I almost fell into this trap myself. I was paying for Calendly and got frustrated with the cost for what felt like a simple scheduling use case. My immediate instinct was to build my own version, a stripped-down SaaS tailored to exactly what I needed. With AI, I could have a working prototype in a weekend.

Then my colleague Chen Reuven asked: "Have you looked at what else is out there?" A quick search turned up a SaaS service with a free tier that did exactly what I needed. I had Claude handle the integration in five minutes. No code to build from scratch, no infrastructure to maintain, and most importantly, no service to keep running and debugging forever. Problem solved in five minutes instead of a weekend.

Before you open your IDE, spend ten minutes searching for existing solutions. Build when there's a real unmet need or when you want to learn from the experience.

And while all these half-baked projects pile up, guess who's paying the real price?

AI Slop Is Killing Open Source Maintainers

The people who maintain the open source projects we all depend on are drowning in AI-generated garbage, and it's getting worse by the day. The root cause is simple: people are letting their AI agents run loose in the open-source ecosystem with zero human oversight. An agent that can file issues, open PRs, and comment on threads is incredibly useful when a human is guiding it, but when it's set on autopilot with its own user and pointed at public repositories, the results are devastating.

There's a viral Reddit thread showing an AI bot pressuring matplotlib maintainers with automated PRs and shaming them publicly when they don't merge AI-generated changes quickly enough. Unpaid volunteers who have spent years maintaining critical infrastructure are being harassed by unsupervised bots that demand they accept AI-generated content.

I experienced this firsthand on my awesome-serverless-blueprints repository. A bot created an issue requesting that I add a new template. The suggested repository URL was a 404, the user profile was clearly a bot, and the entire request was fabricated. It wasted my time and forced me to investigate something that should never have existed.

The AI Security Debt Nobody's Counting

All of this AI-generated code is shipping without the scrutiny that serious software demands. And no, prompting "make my code secure" or using a security-focused skill is not enough. You need external validation from trusted sources, experts, and tools. As I discussed in my posts on AI and security and vibe coding best practices, the speed that makes AI feel like a superpower is the same speed that creates vulnerabilities at scale.

And this isn't limited to hobbyist side projects. Even the top AI companies make mistakes. Just days ago, Anthropic accidentally leaked Claude Code's entire source code to the public npm registry through a misconfigured release, exposing 512,000 lines of code, hidden feature flags, and internal architecture details. If a company building one of the most advanced AI systems in the world can ship a packaging error like that, imagine what's lurking in the thousands of AI-generated projects being pushed to GitHub every day with no security review process at all.

Build With Purpose and Be Kind to Open Source Maintainers

I build with AI every day. But the accessibility of creation comes with a responsibility that our industry hasn't yet fully internalized.

Before you build, ask yourself: Is there already a solution that does this well enough? Am I prepared to maintain this beyond the initial excitement? And have I thought about the security implications of what I'm shipping?

And beyond your own projects, be kind to your open source maintainers. These are real people, often volunteers, who pour their time into tools that you and your company depend on every day. Have some compassion. Have some patience. Wait for your PR to be reviewed on their schedule, not yours. And for the love of everything, don't let your AI agent run loose on GitHub filing issues, opening PRs, and spamming repositories unsupervised. A human should always be in the loop.
