What a terrible article, putting the blame and responsibility on maintainers and pretending that adding a few files will make everything fine…
> Open source maintainers don’t want to hear this, but this is the way people code now, and you need to do your part to prepare your repo for AI coding assistants.
- Plenty of people still code without LLMs
- If someone can’t submit a PR without it being obvious they used an LLM (regardless of whether they actually did), they shouldn’t be submitting a PR. This is a quality issue, not prejudice against tools.
- This bubble is going to pop; the SaaS-based assistants will either be gone or significantly more expensive. It will be interesting to see whether these problematic “contributors” stick around.
- How fucking dare this person insist open source maintainers do all this work they’re not interested in, just to cater to low-quality “contributors”.
You forgot to mention that this person is also recommending OSS maintainers pony up for AI to literally fight AI spam. Huh??? So now OSS maintainers, who already don’t get enough donations, have to spend what little they get on AI for contributors whose code is so bad it’s recognizably AI-generated? It’d be better to have no contributions at all!
Even if the bubble pops, the existing large language models will remain, as will AI-assisted coding.
The models will remain, but the companies that own the hardware they run on may not. Of course you can still run your own, but the model size will be much, much smaller unless you have a significant setup.
> What AI is good for (boilerplate, tests, docs, refactoring) and what it’s not (security-critical code, architectural changes, code you don’t understand)
Incorrect. AI is only passable at generating boilerplate. Letting it write tests will give you broken and incorrect tests. Letting it write docs will give you incorrect docs. Letting it refactor will give you bugs.
Well, it’s also good at writing code to use as the “Incorrect” part of a Correct/Incorrect example.
I asked Gemini to write just the most basic use case for my tokenizer library the other day (checking whether a search query is found in a set of already-computed tokens), and it couldn’t even get that right, but boy, was it absolutely certain that it had. Pathetic. If it were an unpaid intern, it would be fired.
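For scale, the task was roughly this small. A minimal sketch in Python, with a hypothetical stand-in `tokenize()` helper (the actual library’s API differs):

```python
def tokenize(text: str) -> list[str]:
    # Stand-in tokenizer: lowercase and split on whitespace.
    # The real library's tokenizer is more involved.
    return text.lower().split()

def query_matches(query: str, doc_tokens: set[str]) -> bool:
    # True if every token of the search query appears in the
    # precomputed token set.
    return all(tok in doc_tokens for tok in tokenize(query))

doc_tokens = set(tokenize("The quick brown fox jumps over the lazy dog"))
print(query_matches("lazy fox", doc_tokens))  # True
print(query_matches("lazy cat", doc_tokens))  # False
```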
Here’s the thing… code is much closer to doing group therapy than it is to flipping burgers. The final produced artifact is somewhat less important than converging on the tiny itty-bitty details of how to collaborate on the journey of getting there. This is especially true for open source projects.
Trying to understand the perspective of a chatbot-managed contributor is a waste of time. You’re not going to build a community around a good developer experience for doing XYZ if half the people weighing in on “how to do that” are not developers directly experiencing the thing they’re poking at.
Edit: Also, on testing specifically… this is something I see teams get wrong all the time. The real value of a test suite isn’t making sure your system works correctly; it’s making sure it’s easy to inspect how your system works. If it’s hard to write or modify tests, that’s an indication that you’ve got some unwieldy abstractions floating around. LLMs don’t care whether the friction of test-writing is increasing.
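To make that friction signal concrete, here’s a hedged illustration (all names hypothetical): when exercising a single pricing rule means mocking half the system, the friction is telling you the abstraction is leaking.

```python
from unittest.mock import MagicMock

# Hard to test: the pricing rule is buried in a class wired to a DB,
# a mailer, an auth service, and a logger.
class Checkout:
    def __init__(self, db, mailer, auth, logger):
        self.db, self.mailer, self.auth, self.logger = db, mailer, auth, logger

    def total_cents(self, user_id: int) -> int:
        items = self.db.items_for(user_id)
        subtotal = sum(item["price_cents"] for item in items)
        if self.auth.is_member(user_id):
            subtotal = subtotal * 9 // 10  # 10% member discount
        return subtotal

# Easy to test: the same rule extracted as a pure function.
def discounted_total_cents(prices: list[int], is_member: bool) -> int:
    subtotal = sum(prices)
    return subtotal * 9 // 10 if is_member else subtotal

# The pure version needs no mocks at all:
assert discounted_total_cents([1000, 2000], is_member=True) == 2700
assert discounted_total_cents([1000, 2000], is_member=False) == 3000

# Reaching the same branch through the class takes ceremony; rising
# ceremony like this is the signal described above.
db, auth = MagicMock(), MagicMock()
db.items_for.return_value = [{"price_cents": 1000}, {"price_cents": 2000}]
auth.is_member.return_value = True
assert Checkout(db, MagicMock(), auth, MagicMock()).total_cents(1) == 2700
```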
Man, I understand that it’s trying to give tips, but this really comes off as condescending. “Just create these three pieces of complex, non-obvious documentation and ensure you have highly automated specification and code quality checks.”
I also have to say: if you expect maintainers to be experts in prompting LLMs correctly, and expect them to be eager to review and rewrite generated code, then they might as well prompt the LLMs themselves.
Sure, there may be extra effort involved for outside contributors – may, because these tools do attract folks who have no interest in putting in any effort – but is that really worth the overhead of having to communicate with the LLM through a middleman?

Agree. Continuous Delivery was the right path before AI and has become even more important.