Well, I hope you don’t have any important, sensitive personal information in the cloud.

  • tal@lemmy.today · 24 points · 3 days ago

    These weren’t obscure, edge-case vulnerabilities, either. In fact, one of the most frequent issues was Cross-Site Scripting (CWE-80): AI tools failed to defend against it in 86% of relevant code samples.

    So, I will readily believe that LLM-generated code has additional security issues, but given that the models are trained on human-written code, this does raise the obvious question of what percentage of human-written code properly defends against cross-site scripting attacks, a topic that the article doesn’t address.
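
    For a concrete sense of what “failing to defend” means here, a minimal sketch with hypothetical function names, using Python’s standard html module (Python chosen purely for brevity):

    ```python
    import html

    def render_comment_unsafe(comment: str) -> str:
        # Vulnerable: user input is interpolated into HTML verbatim, so a
        # payload like "<script>...</script>" runs in the viewer's browser.
        return f"<div class='comment'>{comment}</div>"

    def render_comment_safe(comment: str) -> str:
        # Defended: html.escape() neutralizes <, >, & and quotes, so the
        # same payload renders as inert text instead of executing.
        return f"<div class='comment'>{html.escape(comment)}</div>"

    payload = "<script>alert(1)</script>"
    print(render_comment_unsafe(payload))  # ships the live script tag
    print(render_comment_safe(payload))    # ships &lt;script&gt;... instead
    ```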

    • HaraldvonBlauzahn@feddit.org (OP) · 12 points · 3 days ago

      There are a few things LLMs are simply not capable of, and one of them is understanding and observing implicit invariants (see the sketch below).

      (That’s going to get funny once the tech has been used for a while on larger, complex, multi-threaded C++ code bases. Given that C++ already appears less popular with experienced developers than with juniors, I am very doubtful whether C++ will survive that clash.)
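
      As a concrete illustration of an implicit invariant, a minimal hypothetical sketch (invented names; Python rather than C++ purely for brevity). Nothing in the code states that the list must stay sorted, yet the lookup silently depends on it:

      ```python
      import bisect

      class SortedNames:
          # Implicit invariant: self._names is always sorted, because
          # contains() relies on binary search. No type, assertion, or
          # comment enforces this; it holds only because every writer
          # so far has used insort().
          def __init__(self) -> None:
              self._names: list[str] = []

          def add(self, name: str) -> None:
              bisect.insort(self._names, name)  # preserves sortedness

          def contains(self, name: str) -> bool:
              i = bisect.bisect_left(self._names, name)
              return i < len(self._names) and self._names[i] == name

      # A locally plausible "simplification" that breaks the unstated rule:
      def add_fast(s: SortedNames, name: str) -> None:
          s._names.append(name)  # list may no longer be sorted

      s = SortedNames()
      s.add("carol"); s.add("alice")
      add_fast(s, "bob")
      print(s.contains("bob"))  # False: binary search misses the entry
      ```

      Every line looks reasonable in isolation; the bug exists only at the level of the unstated whole-class contract, which is exactly what a model sees no trace of in the local context.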

    • anton@lemmy.blahaj.zone · 7 points · 2 days ago

      If a system was built to show blog posts written by the author, and an LLM repurposes it to show untrusted user content, the same code becomes unsafe.
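
      A minimal sketch of that trust shift, with hypothetical names; note that the rendering code itself is identical in both calls:

      ```python
      def render_post(body_html: str) -> str:
          # Raw interpolation: harmless while body_html comes from the
          # trusted site owner, who won't attack their own readers.
          return f"<article>{body_html}</article>"

      # Original context: author-controlled content. Safe in practice.
      print(render_post("<p>My <em>own</em> post</p>"))

      # Repurposed context: the same function now emits attacker-controlled
      # markup, and the latent XSS hole becomes reachable.
      untrusted = '<img src=x onerror="alert(document.cookie)">'
      print(render_post(untrusted))
      ```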