AI-generated code is shipping to production without security review. The tools that generate the code don’t audit it. The developers using the tools often lack the security knowledge to catch what the models miss. This is a growing blind spot in the software supply chain.

  • dan@upvote.au · 1 day ago

    The article says:

    None of the tools produced exploitable SQL injection or cross-site scripting

    but I’ve seen exactly this. After years of not seeing any SQL injection vulnerabilities (thanks to the big rise in ORM usage, plus the fact that pretty much every query library now supports or defaults to prepared statements), I caught one while reviewing vibe-coded code written by someone else.
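    (A minimal sketch of the contrast, using a made-up `users` table in an in-memory database: string concatenation lets the input rewrite the SQL, while a prepared statement binds it as plain data.)

```python
import sqlite3

# Throwaway in-memory DB with a hypothetical users table, just to
# illustrate why prepared statements stop SQL injection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "s3cret"), ("bob", "hunter2")])

malicious = "alice' OR '1'='1"

# Vulnerable: concatenation lets the input become part of the SQL,
# so the injected OR clause matches every row.
rows_concat = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safe: the driver binds the whole string as a single value; no row
# has that literal name, so nothing matches.
rows_param = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()

print(len(rows_concat), len(rows_param))  # 2 0
```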

    • Not a newt@piefed.ca · 1 day ago

      Forget SQL injection and XSS: LLMs are bringing back unsanitised inputs as a whole, including reintroducing previously removed vulnerabilities. You can casually browse GitHub for submissions by the Claude bot and find …/… vulns all over.
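      (If that’s the path-traversal family of bug, the usual guard looks like this sketch, assuming a hypothetical uploads directory: resolve the user-supplied path and reject anything that escapes the base directory.)

```python
from pathlib import Path

# Hypothetical base directory; the point is the containment check.
UPLOAD_DIR = Path("/srv/uploads").resolve()

def safe_path(user_supplied: str) -> Path:
    # Resolve symlinks and any ../ segments, then verify the result is
    # still inside UPLOAD_DIR (is_relative_to needs Python 3.9+).
    candidate = (UPLOAD_DIR / user_supplied).resolve()
    if not candidate.is_relative_to(UPLOAD_DIR):
        raise ValueError("path traversal attempt: " + user_supplied)
    return candidate

safe_path("report.txt")        # ok: stays inside /srv/uploads
# safe_path("../../etc/passwd")  # raises ValueError
```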

      • pinball_wizard@lemmy.zip · 14 hours ago

        Yes. And let’s not forget bringing back the classic “forgot to even put a password on the sensitive files”.

    • ugo@feddit.it · 2 days ago

      vibe-coded code written by someone else

      “Someone else” “writes” vibe-coded code in the same way that someone buying a meal at a restaurant cooks said meal.

      • dan@upvote.au · 1 day ago

        Haha good point - maybe “generated by” is a better description?