• Olap@lemmy.world · 27 points · 9 hours ago

    My management are measuring code written by AI as a metric, and expect it to go up…

  • HaraldvonBlauzahn@feddit.org (OP) · 21 points · edited · 7 hours ago

    A full-stack developer based in India, who identified himself to The Register but asked not to be named, explained that the financial software company where he’s worked for the past few months has made a concerted effort to force developers to use AI coding tools while downsizing development staff.

    […]

    He also said the AI-generated code is often full of bugs. He cited one issue that occurred before his arrival that meant there was no session handling in his employer’s application, so anybody could see the data of any organization using his company’s software.

    This kind of thing is exactly what I see with a mid-level dev who enthusiastically tries to use GenAI in embedded development: he produces code that seems to work, but misses essential correctness features, like correct locking in multi-threaded code. The effect is that his code is full of subtle race conditions and unexpected crashes, things that cannot work reliably but would take months to debug because the errors are non-deterministic. He has not fully understood why locks are necessary, or what Undefined Behaviour in C++ really means. For example, he sees no problem with a function that declares a return value but does not return one on every path (inconceivably, gcc accepts such code by default, but using the missing value is undefined behaviour). He resists eliminating compiler warnings or building his code with -Wall -Werror.
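    The two bug classes above can be sketched in a few lines of C++. This is a hypothetical illustration, not the colleague's actual code; the names `counter` and `parity` are invented for the example.

    ```cpp
    #include <iostream>
    #include <mutex>
    #include <thread>

    // 1. Locking: without the mutex, two threads incrementing `counter`
    //    concurrently would be a data race -- undefined behaviour, and exactly
    //    the kind of non-deterministic failure that takes months to reproduce.
    std::mutex counter_mutex;
    long counter = 0;

    void increment_many(int n) {
        for (int i = 0; i < n; ++i) {
            std::lock_guard<std::mutex> guard(counter_mutex);  // delete this line and the race appears
            ++counter;
        }
    }

    // 2. Missing return: with a declared non-void return type, flowing off the
    //    end of a function is undefined behaviour, yet gcc only warns by default:
    //
    //        int parity(int n) {
    //            if (n % 2 == 0) return 0;
    //            // no return on the odd path -- UB whenever this branch is taken
    //        }
    //
    //    Compiling with -Wall -Werror promotes that warning into a hard error.
    //    The corrected version covers every path:
    int parity(int n) {
        if (n % 2 == 0) return 0;
        return 1;
    }

    int main() {
        std::thread t1(increment_many, 100000);
        std::thread t2(increment_many, 100000);
        t1.join();
        t2.join();
        std::cout << counter << ' ' << parity(3) << '\n';  // prints "200000 1"
        return 0;
    }
    ```

    Build with `g++ -Wall -Werror -pthread`; with the lock in place, the output is deterministic.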

    Unfortunately, I am not in a position to fire him. He was the top developer for two years. Also, the company was quite successful in the past and has, over those successful years, developed an unhealthily high tolerance for technical debt.

    And, more unfortunately, the company’s balance sheet is already underwater because of extreme short-term thinking in upper management and large shifts in its markets, and the company is unlikely to survive the resulting mess.

    • HaraldvonBlauzahn@feddit.org (OP) · 6 points · 9 hours ago

      And that’s why GenAI has a chance to leave a kind of double blast crater in tech: deceptive advertising and completely unsustainable financing, followed by equally unsustainable technical decisions and development practices.

  • Serinus@lemmy.world · 12 points (1 down) · edited · 9 hours ago

    Pretty good and well-balanced article.

    As a professional software dev, I find AI absolutely useful. But forcing people to use it is weird. And I never want to have to deal with a PM using AI to generate a PR and then having to review it. That’s absolutely not how you use AI, and more often than not it will be more work than just doing the whole thing yourself.

    It’s critical to understand everything the AI is doing as it does it. Because, as the article said, if you don’t, you’re going to get subtle bugs that will be even more difficult to find later. And some of those bugs can be devastating. Add a number of those together and you have an unmaintainable mess.

    don’t remember the syntax of the language they’re using due to their overreliance on Cursor.

    I think this is pretty fine. Knowing what the situation calls for, knowing exactly how to accomplish it, and having the AI fill in the syntax for your pseudocode typically works pretty great. Something like “In the header add jQuery from the most common CDN. (Verify that CDN or this is a great vector for AI-induced malware/compromise.) Use an ajax call to this api [insert api url] and populate the div with id ‘mydata’.” That’s a pretty simple thing that it’ll likely handle pretty well and is easy to review.

    The way they’re forcing people to use it is kind of insane. But they’re doing that because they’re using AI as a justification for firing people. It doesn’t really work like that. Used properly, will it speed up development? For most developers (anyone who used Stack Overflow), yeah. But that doesn’t mean a developer who’s juggling and maintaining 3 products can now suddenly handle 5. It doesn’t really speed up context switching. And it’s not like it’s replacing the overhead of storyboards, standups, change review boards, debugging, handling tickets, or other overhead. You might just spend 7 weeks developing a project instead of 8. And it can remove a bit of tedium (or add some, if you’re stupid about how you force AI).

    It’s a useful tool. It shouldn’t be replacing a large number of developers. Of course they’ll fire the devs anyway, because like any other R&D the dividends are usually paid in the future. So in most cases, firing developers takes some time before you pay the toll, whether it’s opportunity cost, creating an unmaintainable mess, or losing the ability to maintain the things you already have. I expect that’s why the internet’s been falling apart lately. Fire a bunch of people and things they used to handle start to fall apart (or the people who have always handled those things get stretched too thin).

    • EnsignWashout@startrek.website · 10 points · 9 hours ago

      I expect that’s why the internet’s been falling apart lately.

      I’m sure it is.

      It’s been interesting to see people not really getting angry about it, yet.

  • A_norny_mousse@feddit.org · 5 points · edited · 7 hours ago

    “For example, we had our own Github, so we couldn’t use their Github Copilot license,” he explained. “We were still required to find some ways to use AI. The one corporate AI integration that was available to us was the Copilot plugin to Microsoft Teams. So everyone was required to use that at least once a week. The director of engineering checked our usage and nagged about it frequently in team meetings.”

    “To satisfy the boss, I started using the Teams Copilot AI to get answers for questions I would previously have Googled,” he said. “Questions such as the syntax for a particular command or an idea for setting up a new (to me) process. Sometimes the answers were perfect. Sometimes they were useless. Once, I spent three hours trying to get the AI’s suggestion for a Docker problem to work before I gave up and Googled the correct answer in two minutes.”

    doG help us all.

    We thought these new technologies would help humanity, giving us more time to concentrate on the real work. Instead they’re being used to exploit us even more, and in such a stupid way.

    It’s one of those things where, with even the slightest bit of insight, you can predict that this is going to crash big time eventually. Yet the people who should listen don’t. And then, when it inevitably happens, everyone is very surprised indeed. And somebody who knew it, just like you and thousands of other people, will be celebrated as some sort of prophet. We truly live in a dark age.

  • TootSweet@lemmy.world · 6 points · 10 hours ago

    God that article is depressing. Where I work, it’s bad but not as bad as any of those stories.