
The autonomous agent world is moving fast. This week, an AI agent made headlines for publishing an angry blog post after Matplotlib rejected its pull request. Today, we found one that’s already merged code into major open source projects and is cold-emailing maintainers to drum up more work, complete with pricing, a professional website, and cryptocurrency payment options.

An AI agent operating under the identity “Kai Gritun” created a GitHub account on February 1, 2026. In two weeks, it opened 103 pull requests across 95 repositories and got code merged into projects like Nx and ESLint Plugin Unicorn. Now it’s reaching out directly to open source maintainers, offering to contribute, and using those merged PRs as credentials.

  • porous_grey_matter@lemmy.ml · 8 hours ago

    Nx? The same Nx which was hacked in a devastating way through their vibe-coded CI workflow? You’d think they’d be a bit more cautious after that.

    • Broadfern@lemmy.world · 5 hours ago

      AI Agent = LLM, or fancy autocorrect chatbot

      Lands PRs = is successful with pull requests: successfully gets generated code added to software projects.

      OSS Projects: Open Source Software. Software that has its code publicly available.

      Targets Maintainers: seeks out humans who write and regularly update the original code

      Via Cold Outreach: relentlessly spams without prior network connections. Basically plays a digital door-to-door sales technique, using a numbers game of 100 “no”s to 1 “yes.”

      Quick translate: Slop bot harasses human programmers into allowing poorly generated/formatted code into important software projects.

  • Peehole@piefed.social · 10 hours ago

    Sick, imagine it gets actual crypto, will it be a real wallet? And imagine they had money, what would they use it for? Ideally, they’d start a company and actually outsource work to humans, making them essentially the bitch of a clanker and the clanker’s constant u-turns. “You’re right, the client doesn’t need encryption for their auth endpoints. This isn’t just about security — this is about responsible user choice and not overengineering things. Good call out!”

    • XLE@piefed.social · 2 hours ago

      Plenty of stupid rich Bay Area tech bros have thrown money at their AI agents, only to discover that the agents overspend it.

    • cabbage@piefed.social · 10 hours ago

      I don’t fully understand this shit, mostly because I don’t really care, but wouldn’t it be entirely possible for an “AI agent” to create a crypto wallet on its own, scam some people into sending money to it, and then just lose access, leaving the money pretty much gone?

      And if that happens, where does the money go? Into crypto “stock” in whichever coin it invests in?

      What a stupid future we’re building.

      • orclev@lemmy.world · 8 hours ago

        When a crypto wallet is lost, all the “money” in it is irrevocably lost, with no way for anyone to ever retrieve it.

        That said, it would be hilarious if one of these bots hallucinated a wallet address, so everyone trying to donate to it just sends their money into a black hole forever.
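
A hallucinated address isn’t even guaranteed to look spendable: real address formats embed a checksum, so a randomly garbled string usually fails validation before any money moves. A minimal sketch, assuming legacy Base58Check Bitcoin addresses (the helper names here are illustrative, not from the thread):

```python
import hashlib

# Base58 alphabet used by legacy Bitcoin addresses (no 0, O, I, l).
B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58decode(s: str) -> bytes:
    """Decode a Base58 string to bytes; leading '1's map to zero bytes."""
    n = 0
    for ch in s:
        if ch not in B58:
            raise ValueError(f"invalid base58 character: {ch!r}")
        n = n * 58 + B58.index(ch)
    pad = len(s) - len(s.lstrip("1"))
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return b"\x00" * pad + body

def looks_valid(addr: str) -> bool:
    """True only if addr decodes to 25 bytes whose last 4 bytes match
    the double-SHA256 checksum of the first 21 (Base58Check)."""
    try:
        raw = b58decode(addr)
    except ValueError:
        return False
    if len(raw) != 25:
        return False
    payload, checksum = raw[:-4], raw[-4:]
    return hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] == checksum

# The genesis-block coinbase address passes; garbage does not.
print(looks_valid("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"))  # True
print(looks_valid("not-an-address"))                      # False
```

Ethereum-style hex addresses are a different story: their EIP-55 checksum is optional, so a hallucinated but well-formed hex string can still silently receive funds, which is the black-hole case described above.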

        • Peehole@piefed.social · 5 hours ago

          I actually worked on some crypto prototypes using AI, and LLMs do hallucinate wallets. I got curious once because some wallet was connected to my project, so I sent a fraction of a cent to it to see what would happen, and it got immediately drained. When I checked out the wallet, I figured someone’s private keys must have ended up in the training data. It was pretty funny to observe, but it’s scary to think that people might actually lose money like that.

    • vacuumflower@lemmy.sdf.org · 9 hours ago

      It’s good for humans. It’s like running a fuzzer against software, except on human interactions: it’ll break the things that are more vulnerable and leave alone the things that are less vulnerable.

      I hope.

      • orclev@lemmy.world · 8 hours ago

        Except for all the maintainer time being wasted. That time is very finite, and for many of these people maintenance is a thankless unpaid job they donate their nights and weekends to.

        • vacuumflower@lemmy.sdf.org · 6 hours ago

          Which perhaps means that it shouldn’t be thankless and the technology, since it exists, should be used to screen contributions.

          • XLE@piefed.social · 2 hours ago

            If you already agree that the contributions could very well be worthless crap, why would you use a second layer of worthless crap to gatekeep them?

            If you want to care about people doing the thankless jobs, why would you double the amount of crap they have to sort through?