• 14th_cylon@lemmy.zip · 25 points · 12 hours ago

    I read it all hoping to find out what he was talking about… instead, the blog post just ended 🤷‍♂️

    • gtrcoi@programming.dev · 3 points · 4 hours ago

      I’m guessing he’s alluding to a bunch of asserts, data sanitization, and granular error reporting. But yeah, who knows.
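
      Roughly this kind of thing, if so (a made-up Python sketch, not anything from the post):

      ```python
      def apply_discount(price: float, percent: float) -> float:
          # Sanitize inputs instead of trusting the caller, and report
          # errors granularly so failures point at the actual problem.
          if price < 0:
              raise ValueError(f"price must be non-negative, got {price!r}")
          if not 0 <= percent <= 100:
              raise ValueError(f"percent must be in [0, 100], got {percent!r}")

          result = price * (1 - percent / 100)

          # Debug-time sanity check; stripped when run with `python -O`.
          assert 0 <= result <= price, "discounted price out of range"
          return result
      ```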

  • FishFace@piefed.social · 19 points · 13 hours ago

    The word you are looking for is “robust”.

    Debugging isn’t the worst thing in programming. The worst thing is having a task you need to do and a solution already written, but not knowing how to use the solution to solve the task.

  • spireghost@lemmy.zip · +4 / -10 · 12 hours ago

    > Large language models can generate defensive code, but if you’ve never written defensively yourself and you learn to program primarily with AI assistance, your software will probably remain fragile.

    This is the thesis of the argument, and it’s completely unfounded. “AI can’t create antifragile code”? Why not? Effective tests and debug-time checks, at this point, come straight from Claude without me even prompting for them. Even if you are rolling the code yourself, you can use AI to throw a hundred prompts at it, asking “does this make sense? Are there any flaws here? What remains untested or out of scope that I’m not considering?”, like a juiced-up static analyzer.
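
    As a sketch of what I mean, assuming the Anthropic Python SDK (the model id and prompts here are placeholders, not a recommendation):

    ```python
    import anthropic

    # Reads ANTHROPIC_API_KEY from the environment.
    client = anthropic.Anthropic()

    REVIEW_PROMPTS = [
        "Does this make sense?",
        "Are there any flaws here?",
        "What remains untested or out of scope that I'm not considering?",
    ]

    def review(source_code: str) -> list[str]:
        """Run each review question against the code, static-analyzer style."""
        answers = []
        for question in REVIEW_PROMPTS:
            message = client.messages.create(
                model="claude-sonnet-4-20250514",  # placeholder model id
                max_tokens=1024,
                messages=[{"role": "user",
                           "content": f"{question}\n\n{source_code}"}],
            )
            answers.append(message.content[0].text)
        return answers
    ```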

    • TehPers@beehaw.org · 7 points · 10 hours ago

      > Why not?

      Are you asking the author or people in general? If the author didn’t answer “why not” for you, then I can.

      Yes, I’ve used Claude. Let’s skip that part.

      If you don’t know how to write or identify defensive code, you can’t know whether the LLM generated defensive code. So for an LLM to be trusted to generate defensive code, it needs to do so 100% of the time, or very close to it.

      You seem to be under the impression that Claude does, but you can presumably tell whether code is written with sufficient guards and tests. You know to ask the LLM to evaluate and revise the code. Someone without that experience won’t know to ask.

      Speaking now from my own experience: after using Claude at work to write tests, I came out of that project with no additional experience writing tests. I had to do a separate personal project afterward just to learn the testing library we had used. Had the work project given me enough time to actually do the work, I’d have learned it then; unfortunately, that wasn’t the case.

      The tests Claude generated were too rigid. They didn’t cover important functionality of the software. They asserted exact inputs and outputs using localized output values, meaning a localization change was potentially enough to break them. They covered cases that didn’t need testing, like whether certain dependency calls happened in a specific order (those calls ran in parallel anyway). It wrote some good tests, but also a lot of unneeded ones, and it skipped some that were needed.
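
      To illustrate the localization problem (an invented example, not the actual project code):

      ```python
      TRANSLATIONS = {"en": "Hello, {}!", "de": "Hallo, {}!"}

      def render_greeting(name: str, locale: str = "en") -> str:
          return TRANSLATIONS[locale].format(name)

      # Brittle: pins the exact localized string, so editing a translation
      # breaks the test even though the behavior is unchanged.
      def test_greeting_exact_string():
          assert render_greeting("Ada", locale="de") == "Hallo, Ada!"

      # More robust: asserts the behavior that matters, independent of wording.
      def test_greeting_contains_name():
          for locale in TRANSLATIONS:
              assert "Ada" in render_greeting("Ada", locale=locale)
      ```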

      As a tool to help someone who already knows what they’re doing, it can be useful. It’s not a good tool for people who don’t know what they’re doing.