Doesn’t surprise me. The internet is filled with outdated code and personal blogs detailing stuff you should never put in production code.
AI is just going to regurgitate those instead of the actual optimal answer, so unless you steer it in the right direction, it spits out whatever it sees most in its training data.
One of these days an LLM is going to `sudo rm -rf /*` itself, and I'll need to buy an appropriate alcohol.
No, dead ass. All its training data becomes useless the moment you make a breaking change to an API.
Didn’t the training data become useless the moment AI code ended up in it?