My biggest objection is unit tests. LLMs can actually be a useful tool for populating out unit tests. But of you let them run amuck, you get vast quantities of tests that add no value but now you have to maintain in perpetuity
This one junior developer didn’t notice the ai brought in a whole new mocking tool for a few tests and didn’t understand my objection.
LLMs can actually be a useful tool for populating out unit tests.
My experience with this is the LLM commenting out the existing logic and just returning true, or putting in a skeleton unit test with a comment that says “we’ll populate the code for this unit test later”.
My biggest objection is unit tests. LLMs can actually be a useful tool for populating out unit tests. But of you let them run amuck, you get vast quantities of tests that add no value but now you have to maintain in perpetuity
This one junior developer didn’t notice the ai brought in a whole new mocking tool for a few tests and didn’t understand my objection.
Relying on a chance machine to thoroughly test your code sounds like a recipe for disaster
My experience with this is the LLM commenting out the existing logic and just returning true, or putting in a skeleton unit test with a comment that says “we’ll populate the code for this unit test later”.
I had a dev add a load of unit tests that mocked values and then tested for the mocked values. I mean… They passed…