• boonhet@sopuli.xyz · 1 day ago

    For shits and giggles I tried to see if I could make AI this funny by asking it to write a recipe that reads like AI hallucinations. Nope, it goes way over the top, even when I tell it to tone it down.

    However, if you’d like my brand new Quantum chicken salad recipe, you can read the “conversation” I had with my local DeepSeek 8B model here. It was funny in its own way, but it really couldn’t do subtle.

    I liked this bit though:

    1. Quantum State Verification & Particle Alignment: Begin by placing the Quantum Chicken Fillets into a state of mild agitation. Subject them to a low-frequency oscillation (5 Hz +/- 0.5 Hz) for exactly 47 seconds. This primes the protein lattice for optimal batter adhesion. Verify via palpation (a light, non-intrusive touch).
    • Brave Little Hitachi Wand@feddit.uk · 1 day ago

      See, I think the real reason an LLM is so unfunny is structural. They’re essentially mathematical models that pick the most likely next word given a set of conditions.

      The only thing less funny than an LLM is comedy theory, so I’ll just say that surprise is essential to humour. You’d never laugh after hearing the most likely next word, would you? Knowing how to surprise people takes guile, ingenuity, and trauma.
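      The "most likely next word" point can be sketched in a few lines. This is a toy illustration, not a real language model: the words and probabilities are made up, and real LLMs work over tokens, not whole words. Greedy decoding always picks the argmax, so it can never surprise you; temperature sampling flattens the distribution and occasionally lets the weird word through.

```python
import random

# Hypothetical next-word distribution after "the chicken crossed the ..."
# (illustrative numbers only, not from any real model)
next_word_probs = {"road": 0.70, "street": 0.20, "event horizon": 0.10}

def greedy_pick(probs):
    """Greedy decoding: always return the single most likely next word."""
    return max(probs, key=probs.get)

def sample_pick(probs, temperature=1.0, rng=random):
    """Temperature sampling: raise probabilities to 1/temperature,
    then draw one word. Higher temperature flattens the distribution,
    so low-probability (surprising) words get picked more often."""
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return rng.choices(words, weights=weights, k=1)[0]

print(greedy_pick(next_word_probs))  # always "road" - no surprise, no joke
print(sample_pick(next_word_probs, temperature=2.0))  # sometimes the odd one
```

      With greedy decoding the punchline is the same every time; cranking the temperature buys randomness, but random is not the same thing as deliberately surprising.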

      • anomnom@sh.itjust.works · 1 day ago

        Except accidental comedy (or comedy of errors).

        But LLMs only do it unintentionally, and usually at unwanted times.