Just to clarify: this is not my Substack; I’m sharing it because I found it insightful.

The author describes himself as a “fractional CTO” (no clue what that means, don’t ask me) and advisor. His clients asked him how they could leverage AI. He decided to experience it for himself. From the author (emphasis mine):

> I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.
>
> I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.
>
> Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

  • InvalidName2@lemmy.zip · 13 hours ago

    And then there are actual good developers who could or would tell you that LLMs can be useful for coding, in the right context and if used intelligently. No harm, for example, in having an LLM build out some of your more mundane code like unit/integration tests, help you update your deployment pipeline, generate boilerplate that’s not already covered by your framework, etc. That it can’t write 100% of your codebase perfectly from the get-go doesn’t mean it’s entirely useless.
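    For instance, the “mundane tests” case might look like the sketch below (`slugify` and its import path are hypothetical stand-ins for whatever utility you’d actually be covering):

    ```python
    # test_slugify.py: the kind of boilerplate an LLM drafts quickly and
    # a human can verify at a glance. `slugify` and its import path are
    # hypothetical stand-ins for your own utility code.
    import unittest

    from myapp.text import slugify  # hypothetical module


    class TestSlugify(unittest.TestCase):
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_strips_punctuation(self):
            self.assertEqual(slugify("Hello, World!"), "hello-world")

        def test_collapses_whitespace(self):
            self.assertEqual(slugify("a   b"), "a-b")

        def test_empty_input(self):
            self.assertEqual(slugify(""), "")


    if __name__ == "__main__":
        unittest.main()
    ```

    Each case is trivial to eyeball, which is the point: verifying the output stays cheaper than writing it yourself.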

    • Soggy@lemmy.world · 12 hours ago

      Except that it’s work junior coders could be doing, which is how you develop the next generation of actual good developers.

      • SreudianFlip@sh.itjust.works · 11 hours ago (edited)

        Yes, and that’s exactly what everyone forgets about automating cognitive work. Knowledge or skill needs to be intergenerational or we lose it.

        If you have no junior developers, who will turn into senior developers later on?

        • pinball_wizard@lemmy.zip · 11 hours ago

          > If you have no junior developers, who will turn into senior developers later on?

          At least it isn’t my problem. As long as I have CrowdStrike, Cloudflare, Windows 11, AWS us-east-1, and log4j… I can just keep enjoying today’s version of the Internet, unchanged.

    • raspberriesareyummy@lemmy.world · 8 hours ago

      > And then there are actual good developers who could or would tell you that LLMs can be useful for coding

      The only people who believe that are managers and bad developers.

      • keegomatic@lemmy.world · 7 hours ago

        You’re wrong, whether you figure that out now or later. Using an LLM where you gatekeep every write is something that good developers have started doing. The most senior engineers I work with are the ones who have adopted the most AI into their workflow, and with the most care. There’s a difference between vibe coding and responsible use.
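        In concrete terms, “gatekeeping every write” just means generated code never touches disk without a human pass. A minimal stdlib-only sketch of that loop (how the proposed text arrives, whether API or paste, is left out on purpose):

        ```python
        # review_gate.py: a sketch of human-gated writes for LLM output.
        # Show the full diff, apply only on explicit approval.
        import difflib
        import pathlib


        def apply_with_review(path: str, proposed: str) -> bool:
            """Diff the proposed content against the file on disk and
            write it only if the reviewer accepts. Returns True if written."""
            target = pathlib.Path(path)
            current = target.read_text() if target.exists() else ""
            diff = difflib.unified_diff(
                current.splitlines(keepends=True),
                proposed.splitlines(keepends=True),
                fromfile=f"a/{path}",
                tofile=f"b/{path}",
            )
            print("".join(diff) or "(no changes)")
            if input("Apply this change? [y/N] ").strip().lower() == "y":
                target.write_text(proposed)
                return True
            return False
        ```

        The explicit approval step is the whole design: the human stays the author of record for every line that lands.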

        • raspberriesareyummy@lemmy.world · 3 hours ago

          > There’s a difference between vibe coding and responsible use.

          There’s also a difference between the occasional evening getting drunk and alcoholism. That doesn’t make an occasional event healthy, nor does it mean you are qualified to drive a car in that state.

          People who use LLMs in production code are - by definition - not “good developers”. Because:

          • a good developer has a clear grasp of every single instruction in the code - and critically reviewing code generated by someone else is more effort than writing it yourself
          • pushing code to production without critical review is grossly negligent and compromises data & security

          That alone means the net gain from using LLMs is negative. Can you use them to quickly push out some production code & impress your manager? Possibly. Will it be efficient? It might be. Will it be bug-free and secure? You’ll never know until shit hits the fan.

          Also: a dev using LLMs to generate code will likely be violating open-source copyrights left and right, effectively copy-pasting licensed code from other people without attributing authorship, i.e. exhibiting parasitic behavior & outright violating the law. Furthermore, everything that applies to LLM users in general applies here too:

          • they contribute to the hype, fucking up our planet, causing brain rot and skill loss on average, and pumping hardware prices to insane heights.