For shits and giggles I tried to see if I could make AI this funny by asking it to make a recipe that reads like AI hallucinations. Nope, goes way over the top, even when I tell it to tone it down.
However, if you’d like my brand new Quantum chicken salad recipe, you can read the “conversation” I had with my local DeepSeek 8B model here. It was funny in its own way, but it really couldn’t do subtle.
I liked this bit though:

Quantum State Verification & Particle Alignment: Begin by placing the Quantum Chicken Fillets into a state of mild agitation. Subject them to a low-frequency oscillation (5 Hz +/- 0.5 Hz) for exactly 47 seconds. This primes the protein lattice for optimal batter adhesion. Verify via palpation (a light, non-intrusive touch).
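If you fancy poking at this yourself, something along these lines would run the same experiment against a local model. It's only a sketch under assumptions: the post doesn't say how the chat was run, so the Ollama Python client, the deepseek-r1:8b tag, the prompt wording, and the temperature are illustrative guesses, not a record of the actual conversation.

```python
# Rough sketch only: model tag, prompt, and settings are assumptions,
# not the author's actual setup.
import ollama  # assumes the Ollama Python client and a pulled DeepSeek 8B distill

prompt = (
    "Write a chicken salad recipe that reads like subtle AI hallucinations. "
    "Keep it understated; don't go over the top."
)

response = ollama.chat(
    model="deepseek-r1:8b",  # hypothetical tag for a local DeepSeek 8B model
    messages=[{"role": "user", "content": prompt}],
    options={"temperature": 0.8},  # a little randomness, in the hope of "subtle"
)

print(response["message"]["content"])
```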
See, I think the real reason an LLM is so unfunny is structural. They’re essentially mathematical models that pick the most likely next word given a set of conditions.
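To make that concrete, here's a toy sketch of what "pick the most likely next word" means. The words and numbers are invented for illustration (this is nothing like DeepSeek's real decoder); the point is just the difference between greedy decoding, which always takes the top word, and sampling, which occasionally lets a surprise through.

```python
import math
import random

# Invented next-word scores for a prompt like "the chicken crossed the ..."
# (illustrative numbers only, not real model output).
logits = {"road": 6.2, "street": 3.1, "kitchen": 1.4, "event horizon": -2.0}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(logits)

# Greedy decoding: always take the single most likely word (zero surprise).
greedy_word = max(probs, key=probs.get)

# Sampling: draw from the distribution, so a low-probability
# (occasionally funnier) word can slip through now and then.
sampled_word = random.choices(list(probs), weights=list(probs.values()))[0]

print("greedy:", greedy_word)
print("sampled:", sampled_word)
```

Temperature and top-k settings just tune how far that sampling is allowed to stray from the greedy choice.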
The only thing less funny than an LLM is comedy theory, so I’ll just say that surprise is essential to humour. You’d never laugh after hearing the most likely next word, would you? Knowing how to surprise people takes guile, ingenuity, and trauma.
Except accidental comedy (or comedy of errors).
But LLMs only do it unintentionally, and usually at the wrong times.