LLMs have flat-out made up functions that don't exist when I've used them for coding help. Not useful, didn't point me in a good direction, and wasted my time.
Sure, they certainly can hallucinate things. But some models are way better than others at a given task, so it’s important to find a good fit and to learn to use the tool effectively.
We have three different models at work, and they behave very differently and are good at different things.
You need to actively get the relevant code into the model's context. I use it to make sense of shitty undocumented libraries: my local models can explain the code well enough to stand in for actual documentation.
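The workflow above can be sketched roughly like this: paste the actual library source into the prompt, then ask the question. This is a minimal sketch, assuming a local Ollama-style server at the default port; the endpoint, model name, and `frobnicate` snippet are all placeholders, not anything from the thread.

```python
# Sketch: putting undocumented library code directly into a local
# model's context so it can explain the code in lieu of docs.
# The URL and model name are assumptions (default local Ollama setup).
import json
import urllib.request

def build_prompt(source: str, question: str) -> str:
    # Include the real source in the context, then ask about it.
    return (
        "Here is the source of a library function:\n\n"
        f"```\n{source}\n```\n\n"
        f"{question}"
    )

def explain(source: str, question: str,
            url: str = "http://localhost:11434/api/generate",
            model: str = "llama3") -> str:
    # Requires a local model server running at `url`; not called below.
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(source, question),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Hypothetical undocumented function we want explained.
snippet = "def frobnicate(x, flags=0):\n    return (x << 2) ^ flags"
print(build_prompt(snippet, "What does this function do?"))
```

The point is only the prompt construction: with the source in context the model is describing code it can actually see, instead of hallucinating an API from the function's name.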