I've been seeing more and more open source maintainers throwing up their hands over AI-generated pull requests, some going so far as to stop accepting PRs from external contributors at all.
This week we're going to begin automatically closing pull requests from external contributors. I hate this, sorry.
— tldraw (@tldraw) January 15,
What AI is good for (boilerplate, tests, docs, refactoring) and what it's not (security-critical code, architectural changes, code you don't understand)
Incorrect. AI is only good for boilerplate. Letting it write tests will give you broken and incorrect tests. Letting it write docs will give you incorrect docs. Letting it refactor will give you bugs. AI is passable at generating boilerplate.
Well, it’s also good at writing code to use as the “Incorrect” part of a Correct/Incorrect example.
I asked Gemini to write just the most basic use case for my tokenizer library the other day (checking to see if a search query is found in a set of already computed tokens), and it couldn’t even get that right, but boy was it absolutely certain that it did. Pathetic. If it were an unpaid intern it would be fired.
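For reference, the use case in question is about as simple as search gets: tokenize the query and check whether its tokens appear in a set you've already computed. Here's a minimal sketch of that shape; the actual tokenizer library isn't named above, so the whitespace tokenizer and both function names here are stand-ins, not its real API.

```python
# Hypothetical sketch of the "most basic use case" described above.
# The real tokenizer library isn't named, so tokenize() is a trivial
# lowercase-and-split stand-in for whatever it actually does.
def tokenize(text: str) -> set[str]:
    """Lowercase and split on whitespace (stand-in for a real tokenizer)."""
    return set(text.lower().split())

def query_matches(query: str, computed_tokens: set[str]) -> bool:
    """True if every token of the query appears in the precomputed token set."""
    return tokenize(query) <= computed_tokens

# Tokens computed once, up front, from the document:
doc_tokens = tokenize("The quick brown fox jumps over the lazy dog")

print(query_matches("quick fox", doc_tokens))  # every query token present -> True
print(query_matches("quick cat", doc_tokens))  # 'cat' is missing -> False
```

That's the entire task: a set-membership check. The point of the anecdote is that even something this small came back wrong, delivered with complete confidence.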