The biggest reason I’m able to use LLMs efficiently and safely is all my prior experience. I’m able to write up all the project guard rails and the expected architecture, call out gotchas, etc. These are the things that actually keep the output in spec (usually).
If a junior hasn’t already built up that knowledge and experience manually, much of the code they’re going to produce with AI is gonna be crap, with varying levels of deviation from the intended design.
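To make that concrete, here’s a rough sketch of the kind of write-up I mean (every detail in it is a made-up example, not from any real project); I keep it as a constant I can paste at the top of a prompt before the actual task:

    // Hypothetical guard-rail preamble, pasted above the real ask in a prompt.
    const projectGuardrails: string = `
    Architecture: UI components never talk to the database directly; all I/O goes through the existing api/ client.
    Guard rails: no new dependencies without review; reuse the existing error-handling wrappers.
    Gotchas: timestamps are UTC ISO strings; don't parse them with the local Date constructor.
    `;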
They use it with heavy oversight from the senior devs. We discourage its use and teach them the very basic errors it always produces as a warning not to trust it.
E.g., ChatGPT will always dump all of the event handlers for a form into one massive method.
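Roughly the shape I mean, sketched in TypeScript (the control names are invented for illustration):

    // What it tends to generate: one catch-all handler for every control on the form...
    function handleFormEvent(e: Event): void {
      const target = e.target as HTMLElement;
      if (target.id === "save-btn") { /* save logic */ }
      else if (target.id === "cancel-btn") { /* cancel logic */ }
      else if (target.id === "email-input") { /* validation logic */ }
      // ...and so on for everything else on the form
    }

    // ...instead of one small, named handler per control:
    function onSaveClick(_e: MouseEvent): void { /* save logic */ }
    function onCancelClick(_e: MouseEvent): void { /* cancel logic */ }
    function onEmailChange(_e: Event): void { /* validation logic */ }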
We use it within the scope of things we already know about.
Except it becomes more dangerous for a novice to use an LLM.
It will introduce vulnerabilities and issues that the novice will overlook.
This is extremely valid.
The way I guide the juniors under me is to have it generate single methods that accomplish specific tasks, but not entire classes/files.
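For example, the scope I’d hand them looks something like this (the function and types here are hypothetical, just to show the size of the ask):

    // One focused method the junior can read and verify line by line,
    // generated against an interface we already defined ourselves.
    interface LineItem { quantity: number; unitPrice: number; }

    function totalLineItems(items: LineItem[]): number {
      const sum = items.reduce((acc, item) => acc + item.quantity * item.unitPrice, 0);
      return Math.round(sum * 100) / 100; // round to 2 decimal places
    }

The point is that it’s small enough to review completely, and it plugs into a structure we designed, not one the LLM invented.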
You know it’s crazy, someone just told me that’s more dangerous than having them do nothing.