Open source has an AI pollution problem
By Oleg Sidorkin, CTO of Cinevva
Rémi Verschelde is one of the people who keeps Godot running. Not as a side project or a hobby. As a life's work. He's been maintaining the engine since before most people had heard of it, reviewing contributions, merging patches, and making sure the thing millions of developers depend on actually works.
Last month, he described what's happening to Godot's contribution pipeline as "draining and demoralizing."
The cause: AI-generated pull requests. Lots of them.
"We now have to second-guess nearly every pull request from new contributors."
— Rémi Verschelde, via Game Developer
Godot has 4,681 open pull requests on GitHub right now. A growing percentage of new submissions are generated by people who typed a prompt, got some code, and submitted it without understanding what it does. The code often looks plausible at first glance. It compiles. The variable names make sense. Then a maintainer spends twenty minutes figuring out that it introduces a subtle bug, breaks an edge case, or solves a problem that doesn't exist.
The time spent rejecting bad PRs is time not spent reviewing good ones.
The irony writes itself
AI tools are supposed to make developers more productive. That's the pitch. That's why companies are raising billions to build them. And at the individual level, they do. I use AI tools every day. Our entire platform uses AI for game creation, music generation, 3D models, and more. I'm not anti-AI.
But there's a system-level effect that nobody talks about in the investor decks. When AI makes generating a contribution trivially easy, the cost of submitting drops to near zero while the cost of reviewing stays exactly where it was. That asymmetry is the pollution problem.
The people submitting these PRs aren't malicious. Most of them genuinely want to contribute. They've been told that AI tools let them contribute to open source without deep expertise. And the tools do let them generate something that looks like a contribution. It just isn't one.
Verschelde acknowledged that using AI to detect AI-generated PRs would be "horribly ironic." He's right. Fighting AI output with AI detection is an arms race nobody wins.
What this really tells us
Making something is now cheap. Making something good still costs the same.
That's the lesson showing up everywhere, not just in open-source code review. It shows up in game development, in music production, in content creation. AI dropped the floor. The minimum viable contribution, the minimum viable game, the minimum viable blog post can now be generated in seconds. But the ceiling didn't move.
The people who were already good at their craft are now faster. The gap between "made a thing" and "made a thing worth someone's time" is actually wider than it used to be, because the volume of mediocre output has exploded while the number of people who can evaluate quality hasn't changed.
Godot's contribution guidelines already require disclosure of AI assistance. People ignore them. You could make the rules stricter, but enforcing them would consume the same human review time the rules are meant to protect.
The real solution is boring
Verschelde's primary ask is funding. Hire more maintainers. More humans reviewing the work. That's not a technical solution. It's an organizational one. And it's probably the only one that works.
Open-source projects are getting the same lesson the rest of us are learning: AI doesn't eliminate the need for human judgment. It increases it. The more AI-generated content flows into any system, the more you need people who can tell the difference between something that looks right and something that is right.
We think about this constantly when building Cinevva's tools. The goal was never to remove human judgment from game creation. It's to let creative people focus their judgment on what matters: does this feel right, does this work, would someone enjoy this? The grunt work gets handled. The taste doesn't get automated.
Godot will figure this out. The engine is too important and the community too strong for it not to. But the pattern they're dealing with isn't going away. Every open-source project, every creative platform, every system that accepts contributions from the public is going to face this same question: how do you handle a world where producing something is nearly free but evaluating it isn't?
Related:
- Agentic AI code tools — responsible use of AI coding tools
- Frontier Open-Source Gen AI Models — the open-source models driving this shift
- AI controversy, trust, and the post-AI economy — the broader trust question around AI in creative work
- Web Game Engines Comparison — Godot and other engines affected by this trend