• 0 Posts
  • 148 Comments
Joined 2 years ago
Cake day: June 4th, 2023

  • You absolutely can’t use LLMs for anything big unless you learn to code.

    Think of an LLM as a particularly shit builder. You give them a small job and maybe 70% of the time they’ll give you something that works. But it’s often not up to spec, so even if it kinda works you’ll have to tell them to correct it or fix it yourself.

    The bigger and more complex the job, the more ways they have to fuck it up. This means that to use them you have to break the problem down into small subtasks, and check that the code is good enough for each one (roughly the loop sketched at the end of this comment).

    Can they be useful? Sometimes, yes: it’s often quicker to have an AI write code than to do it yourself, and if you want something very standard it will probably get it right or almost right.

    But you can’t just say ‘write me an app’ and expect it to be usable.
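
    To make that concrete, here’s a rough sketch of the “small subtasks, check each one” loop. ask_llm, passes_checks and the slugify subtask are all made up for the example (ask_llm is a stand-in for whatever model or API you actually use); the point is that every generated snippet has to pass checks you wrote yourself before it gets accepted.

```python
# Rough sketch of the "small subtask, check it" loop described above.
# ask_llm() is a placeholder for whatever model/API you actually use;
# the checks are yours, not the model's.

def ask_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM of choice, return the code it wrote."""
    raise NotImplementedError("wire this up to your model of choice")

def passes_checks(code: str, func_name: str, tests: list[tuple[tuple, object]]) -> bool:
    """Run the generated code and compare it against known input/output pairs."""
    namespace: dict = {}
    try:
        exec(code, namespace)  # only ever run code you've actually read
        func = namespace[func_name]
        return all(func(*args) == expected for args, expected in tests)
    except Exception:
        return False

# One small, well-specified subtask at a time, each with its own spot checks.
subtasks = [
    ("Write a Python function slugify(title) that lowercases the string, "
     "replaces runs of non-alphanumeric characters with '-', and strips "
     "leading and trailing '-'.",
     "slugify",
     [(("Hello, World!",), "hello-world"), (("  --  ",), "")]),
]

for prompt, func_name, tests in subtasks:
    for attempt in range(3):  # a couple of retries, then give up and do it yourself
        code = ask_llm(prompt)
        if passes_checks(code, func_name, tests):
            print(f"{func_name}: accepted on attempt {attempt + 1}")
            break
        print(f"{func_name}: attempt {attempt + 1} failed the checks, re-prompting")
    else:
        print(f"{func_name}: giving up, writing it by hand")
```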


  • The reason all the weird “rationalists” are worried about Roko’s basilisk is that it basically turns all of their dumb arguments for friendly AI against them.

    So in the course of realizing why it’s wrong, a bunch of cultists either end up deprogramming themselves, or they just have a breakdown and go murder their landlord.


  • Because if a website doesn’t work in your browser, but it works in everyone else’s, no one will say “oh, that website’s badly written”; instead they’ll say “what a shitty browser”.

    So you have a huge web standard you have to respect, and then all the websites with non-standard code that you have to make work anyway (see the small sketch below).
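
    A tiny illustration of that second half, using Python’s stdlib html.parser purely as a stand-in (an assumption made for the example, not how any real browser engine works): the markup below is broken in several ways, and the parser is still expected to pull something usable out of it rather than give up.

```python
# The markup is invalid: unclosed <b> and <p>, an unquoted attribute,
# and a stray </div>. A browser-style parser has to recover and keep
# going rather than reject the page.
from html.parser import HTMLParser

BROKEN_PAGE = """
<p>Welcome to my <b>homepage
<img src=photo.jpg>
<p>Another paragraph</div>
"""

class TolerantParser(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("start:", tag, attrs)

    def handle_endtag(self, tag):
        print("end:  ", tag)

    def handle_data(self, data):
        if data.strip():
            print("text: ", data.strip())

# No exception is raised; the parser reports whatever structure it can salvage.
TolerantParser().feed(BROKEN_PAGE)
```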