AI coding assistants promise speed, but do they deliver? Explore data, developer insights, and security risks showing why AI feels faster but often slows production. Learn where tools like Cursor and Claude Code help, and where they fail.
I keep seeing the “it’s good for prototyping” argument, both posted here and in real life.
For non-coders it holds up, if you ignore the security risk of someone running what is effectively random code with no idea what it does.
But coming from developers, it smells of bullshit. What they show is always the same: a week of vibing gave them something I could hack up in a weekend. And they could too, if they invested a few days in learning, e.g., HTML5 and basic CSS, and read the HTTP fetch docs. The learning cost is a one-time cost: later prototypes they can just bang out. And then they also have the understanding needed to turn the prototype into a proper product if it pans out.
I say this as someone who’s not particularly a fan of AI and tries to use it very sparingly.
For me AI is not so much about productivity gains. Where I find it useful instead is to push me past the initial block of starting something from scratch. It’s that initial dopamine rush that the article mentions, from seeing an idea starting to take shape.
In that sense, if I compare projects by time spent on them with or without AI after they are completed, I too would probably find there were no productivity gains. But some of these things I would never get started at all by myself.
If you are a senior developer in a corporation, you know what you have to do, you are an expert in your domain, you rarely start something really new (and when you do, it is only after endless discussions and studies on tools, language, tech stack, architecture). AI is probably not a great help for you.
But even in corporate life, there are a lot of things that are important but that you constantly set aside: from planning your career, to honing your communication skills, or whatever it is that you could certainly learn to do (with time and dedication) but for some reason keep postponing, because you are not already an expert at it and it takes motivation to learn. That’s where AI found its niche in my life.
Where I find it useful instead is to push me past the initial block of starting something from scratch

I think this is one of the highly understated benefits. I have to work in legacy codebases in programming languages I hate, and it used to be like pulling teeth to get myself motivated. I’d spend half the day procrastinating and then finally write some code. Then I’d pull my hair out writing tests, only for CI to tell me I don’t have enough test coverage and there are 30 lint issues to fix. At that point, there would be yelling at the screen, followed by more procrastination.
With AI, though, I just write a detailed prompt, go get some coffee, and come back to a pile of drivel that is probably like 70% of the way there. I look it over, suggest some refactoring, additional tests, etc., manually test it and have it fix any bugs. If CI reports any lint issues or test failures, I just copy and paste for AI to fix it.
Yes, in an ideal world if I didn’t have ADHD and could just motivate myself to do whatever my company needs me to do and not procrastinate, I could write better quality code faster than AI. When I’m working on something I’m excited about, AI just gets in the way. The reality being what it is, though, AI is unequivocally a huge productivity boost for anything I’d rather not be working on.
I would agree with that.
In particular, being “70% of the way there” does not mean you will get a working product at all. If the fundamental understanding is not there, you will not get a working product without fundamental rewrites.
I have seen code from such bullshit developers myself. Vibe-coded device drivers by people who do not understand the fundamentals of multi-threading, or why and when you need locks in C++. No clear API descriptions. Messaging architectures that look like a rat’s nest. A wild mix of synchronous and async code. Insistence that their code is self-documenting and needs neither comments nor docs. And aggression when confronted with all of that. Because the bullshit taints any working relationship.
It can pop out a POJO based on a copy-paste of an API document faster than I can.
I wouldn’t trust it for logic though. That’s just asking for trouble.
There are real cases where bugs aren’t a huge deal.
Take shell scripts. Bash is designed to make it really fast to write throwaway, often one-line software that can accomplish a lot with minimal time.
Bash is not, as a programming language, very optimized for catching corner cases, or writing highly-secure code, or highly-maintainable code. The great majority of bash code that I have written is throwaway code, stuff that I will use once and not even bother to save. It doesn’t have to handle all situations or be hardened. It just has to fill that niche of code that can be written really quickly. But that doesn’t mean that it’s not valuable. I can imagine generated code with some bugs not being such a huge problem there. If it runs once and appears to work for the inputs in that particular scenario, that may be totally fine.
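To make that concrete, here is a hypothetical one-off in that spirit (my own sketch, not anyone’s production code): it does the job for the files in front of me today, and I simply don’t care that the unquoted ls would break on names with spaces or newlines.

    # one-off: rename this directory's .txt files to .bak
    # fine for today's filenames; $(ls ...) word-splits on spaces/newlines
    for f in $(ls *.txt); do mv "$f" "${f%.txt}.bak"; done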
Or, take test code. I’m not going to spend a lot of time making test code perfect. If it fails, it’s probably not the end of the world. There are invariably cases that I won’t have written test code for. “Good enough” is often just fine there.
And down the line, instead of (or in addition to) human-written commit messages, it might be possible to generate descriptions of commits for someone browsing the code.
I still feel like I’m stretching, though. Like…I feel like what people are envisioning is some kind of self-improving AI software package, or just letting an LLM go and having it pump out a new version of Microsoft Office. And I’m deeply skeptical that we’re going to get there just on the back of LLMs. I think that we’re going to need more-sophisticated AI systems.
I remember working on one large, multithreaded codebase where a developer who wasn’t familiar with, or wasn’t following, the thread-safety constraints would create an absolute maintenance nightmare for others: you’d spend way more time tracking down and fixing the breakage they induced than they saved by not coming up to speed on the constraints their code needed to conform to. And the existing code-generation systems just aren’t in a great position to come up to speed on those constraints. Part of what a programmer does when writing code is to look at the human-language requirements, notice that there are undefined cases, and either go back and clarify the requirement with the user or use real-world knowledge to make reasonable calls. Training an LLM to map from an English-language description to code creates a system that just doesn’t have the capability to do that sort of thing.
But, hey, we’ll see.
I am sorry, but I am not sure what tells you how Bash “was designed” or not. Perhaps you haven’t yet written anything serious in Bash…
Have you at least checked out Bash Pitfalls at Wooledge?
Bash, like most shells (POSIX sh included), and even Perl, are among the easiest languages out there to make a mistake in… there is no compiler to protect you, and even the legendary readline can send the whole terminal flying, depending on the terminal/terminfo in play…
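To pick one illustrative entry in the spirit of that pitfalls list (a minimal sketch, with an invented filename):

    # unquoted expansion word-splits and globs, and nothing warns you
    file="my document.txt"
    rm $file      # actually tries to remove "my" and "document.txt"
    rm "$file"    # what was meant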
No, sorry. I absolutely disagree with your stance that, in “real cases”, shell bugs “aren’t a huge deal”.
The point I’m making is that bash is optimized for quickly writing throwaway code. It doesn’t matter if that code blows up in some case other than the one you’re using it for. You don’t need to handle edge cases that don’t apply to the one time you will run the code. I write lots of bash code that doesn’t handle a bunch of edge cases, because for my one-off use, those edge cases don’t arise. Similarly, if an LLM generates code that misses some edge case, and it’s a situation that will never arise, that may not be a problem.
EDIT: I think maybe that you’re misunderstanding me as saying “all bash code is throwaway”, which isn’t true. I’m just using it as an example where throwaway code is a very common, substantial use case.
I still don’t get what you mean, sorry. And why Bash and not another shell?
Why not Korn, Ash, Dash, Zsh, Fish, or anything with a REPL, including PHP, Perl, Node, Python, etc.?
Should we consider “throwaway” anything that supports the interactive mode of whatever daily-driver shell you chose for your default terminal prompt?
What does “throwaway” code mean in the first place?
I… think they might be misusing the word “bash”? Maybe?
Yeah, I went back through this reply chain and I couldn’t find any explicit evidence that they’re talking about shell scripting at all, and perhaps think that the “bash programming language” refers to a general style, i.e. “to bash stuff together until it works”.
I chose it for my example because I happen to use it. You could use another shell, sure.
Interactive mode is a good case for throwaway code, but one-off scripts would also work.
If you want to actually realize the amount of possible misunderstanding in the current conversation, and of what shell scripting is, please do consider joining #bash at Libera IRC. Please do also mention the word “throwaway” in the channel! Since there’s literally no understanding of what you mean still, sorry. It does not feel like you have a significant enough understanding of the subjects raised.

For a very simple example, there is literally no documentation regarding certain cases you’ll encounter even in Bash’s built-ins, unless you actually hit them or learn from Bash’s very source code, like the read built-in. Not to mention the shenanigans in shell logic around inter-process communication (IPC), file descriptors, environment variables like PWD, exported functions’ BASH_FUNC_, pipes, etc.
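As a small sketch of the kind of read gotcha I mean (well-known behavior, shown here with toy input):

    # without -r, read eats backslashes: this prints "atb", not "a\tb"
    printf 'a\\tb\n' | { read line; echo "$line"; }

    # a final line with no trailing newline still fills the variable, but
    # read returns nonzero, so a plain while-read loop silently drops "two"
    printf 'one\ntwo' | while read -r line; do echo "$line"; done

    # the usual workaround
    printf 'one\ntwo' | while read -r line || [ -n "$line" ]; do echo "$line"; done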