• 0 Posts
  • 841 Comments
Joined 2 years ago
Cake day: June 21st, 2023


  • I think their point was that CSR-only sites would be unaffected, which should be true. Exploiting it on a static site, for example, couldn’t be RCE because the untrusted code is only being executed on the client side (and therefore is not remote).

    Now, most people use, or are at least recommended to use, SSR/RSC these days, and many frameworks enable SSR by default. But using raw React with no Next.js, react-router, etc. to create a client-side-only site does likely protect you from this vulnerability.
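
    To illustrate the client/server split that matters here: during SSR, component code runs in a Node process, which is what makes an exploit "remote"; in a CSR-only app, everything runs in the user's own browser. A common way code tells the two environments apart is checking for browser globals (the `isClientSide` helper below is made up for illustration):

```javascript
// Hypothetical helper: detects whether code is running in a browser.
// During SSR, React components execute in Node, where `window` and
// `document` don't exist -- that server execution context is what
// makes the vulnerability "remote" in the first place.
function isClientSide() {
  return typeof window !== 'undefined' && typeof document !== 'undefined';
}

// In a CSR-only bundle, this is true everywhere the code runs: the
// user's browser. Malicious input never executes on a server.
```

    Run under Node (i.e. in the environment an SSR exploit would target), the helper returns `false`, since no browser globals are defined there.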


  • I think it also doesn’t help that only 4XX (client error) and 5XX (server error) are defined as error status codes, and 4XX errors don’t even necessarily mean something went wrong (you need to reauth, you need to wait a bit, the post no longer exists, etc).

    Trying to think of what 6XX would stand for, and we already have “Service Unavailable” and “Bad Gateway”/“Gateway Timeout”, so I guess 6XX would be “incompetence errors”. 600 is “Bad Implementation”, 601 is “Service Hosted On Azure”, 602 is “Inference Failure” (for AI stuff), and I guess 666 is “Cloudflare Outage”.
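
    For reference, the 4XX/5XX split above really is the only error signaling HTTP defines; a sketch of how a client might bucket codes by their class digit (the `statusClass` helper is made up for illustration, classes per RFC 9110):

```javascript
// Hypothetical helper: buckets an HTTP status code by its class,
// following the five classes defined in RFC 9110.
function statusClass(code) {
  if (code >= 100 && code <= 199) return 'informational';
  if (code >= 200 && code <= 299) return 'successful';
  if (code >= 300 && code <= 399) return 'redirection';
  // "client error" doesn't always mean something broke: 401, 404, 429...
  if (code >= 400 && code <= 499) return 'client error';
  if (code >= 500 && code <= 599) return 'server error';
  return 'unknown'; // 6XX and beyond are not defined by the spec
}
```

    Anything in the 600s falls straight through to `'unknown'`, which is about as official as "incompetence errors" will ever get.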







  • 30 is assuming you write code for all 30 days. In practice, it’s closer to 20, so 75 tests per day. It’s doable on some days for sure (if we include parameterized tests), but I don’t strictly write code every day either.

    Still, I agree with them that you generally want to write a lot of tests, but volume is less important than quality and thoroughness. Using volume alone as a meaningful metric, as the author does, is nonsense.





  • Quoting Kohler:

    We encrypt data end-to-end in transit, as it travels between users’ devices and our systems, where it is decrypted and processed to provide and improve our service.

    I guess Kohler recently learned about TLS? IBM’s response, which is a bit random in my opinion, addresses the idiocy of the E2EE claim lol.

    I’d hope they encrypt data in transit? Not doing so would be an incredible, though unsurprising, show of incompetence. Setting up TLS and getting certs is easy these days with Let’s Encrypt, and a company like Kohler could even get certs through AWS or Azure or something if they wanted.

    I can’t imagine why I’d ever spend money on a camera for my toilet, especially if it includes a subscription fee. That’s a new level of stupid.



  • TehPers@beehaw.org to Programming@programming.dev · Cloudflare goes again (edited 2 days ago)

    This is more likely the actual incident report:

    A change made to how Cloudflare’s Web Application Firewall parses requests caused Cloudflare’s network to be unavailable for several minutes this morning. This was not an attack; the change was deployed by our team to help mitigate the industry-wide vulnerability disclosed this week in React Server Components. We will share more information as we have it today.

    Edit: If you like reading


  • As someone who stopped all contact with my dad at one point (while still a child, but continuing as an adult), I can say that there were a few specific memorable issues, but that they were by no means isolated.

    The impression I get from reading is that it’s an anecdote indicative of a larger, more regular series of incidents.


  • 1500 tests is a lot, but that doesn’t mean anything if the tests aren’t testing the right thing.

    My experience was that it generates tests for the sake of generating them. Some are good. Many are useless. Without a good understanding of what it’s generating, you have no way of knowing which are good and which are useless.

    It ended up being faster for me to just learn the testing libraries and write my own tests. That way I was sure every test served a purpose and tested the right thing.


  • I am interested to see whether these tools can be used to tackle tech debt, as the argument for not addressing tech debt is often a lack of time, or whether they would just contribute to it, even with thorough instructions and guardrails.

    From my experience working with people who use them heavily, they introduce new ways of accumulating tech debt. Those projects usually end up having essays of feature spec docs, prompts, state files (all in prose of course), etc. Those files are anywhere from hundreds to thousands of lines long, and there’s a lot of them. There’s no way anybody is spending hours reading through enough markdown to fill twenty encyclopedia-sized books just to make sure it’s all up-to-date. At least, I can promise that I won’t be doing it, nor will anyone I know (including those using AI this way).


  • And it often generates a bunch of markdown docs which are plain drivel, luckily most devs just delete those before I see them.

    My favorite is when it generates a tree of the files in a directory in a README and a description for each file. How the fuck is this useful? Files will be added and removed, so there’s now an additional task to update these docs whenever that happens. Nobody will remember to do so because no tool is going to enforce that and it’s stupid anyway.

    Sure, document high level directories. But do you really need that all in the top level README?

    But for real if anyone in management is listening, take it from an old asshole who has done this job since the 80s: AI fucking sucks!

    Nothing to add. Just quoting this section because it needs to be highlighted lol.