Four months ago I asked if and how people used AI here in this community (https://lemmy.world/post/37760851).
Many people said they didn’t use it, or only used it occasionally for consulting.
But AIs have evolved a lot in those 4 months, so I wonder: are there people who still don’t use AI daily for programming?


More like a manual. Google has become really shitty for complex queries; LLMs can find relevant keywords and documents much more reliably. Granted, if you ask questions about niche libraries it hallucinates functions quite often, so I never ask it to write full pieces of code and just use it more like a stepping stone.
I find it amusing how shamelessly it lies about its hallucinations though. When I point out that a certain function it made up does not exist, the answer is always something of the form “Sorry, you are right, that function existed before version X / that function existed in some of the online documentation” etc lol. It is like a halluception. If you ask it to find some links to those old versions or that documentation, they also somehow don’t exist anymore.
I wonder if you need to explicitly prompt it to check whether a function really exists before suggesting it? Think about how a human brain works: we are constantly evaluating whether things are really true based on info in our heads, but we are not telling the models to do the same thing, so they just yolo some confidently-wrong shit (not unlike many humans, admittedly).
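For Python at least, you don’t even need the model to self-check: a tiny sanity check with stdlib introspection catches made-up names before you waste time on them. A minimal sketch, assuming a Python codebase; `dumps_pretty` below is just an invented name standing in for a typical hallucination.

```python
import importlib

def function_exists(module_name: str, func_name: str) -> bool:
    """Return True if module_name can be imported and exports a callable func_name."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    # getattr with a default avoids AttributeError for hallucinated names
    return callable(getattr(module, func_name, None))

# A real function passes the check...
print(function_exists("json", "dumps"))         # True
# ...while a hallucinated one fails.
print(function_exists("json", "dumps_pretty"))  # False
```

Obviously this only tells you the name exists, not that the model described its behaviour correctly, but it filters out the most blatant halluceptions cheaply.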