• 210 Posts
  • 1.34K Comments
Joined 3 years ago
Cake day: June 24th, 2023

  • whoever employs LLM

    incumbent upon the handler to assume liability

    I agree. If you make any kind of real-world decision based on the output of AI, you should be liable for it as if you’d made that decision yourself.

    But I remember reading news stories about cases where people (often minors) chatted with chatbots and managed to steer them into states where the chatbots encouraged the users to harm themselves (in some cases even to commit suicide?). As tragic as that is, I don’t see how it’s morally right to hold the AI companies responsible unless it can be shown they did this on purpose. All the AI did in such cases was what it was advertised and understood to do: generate plausible-sounding text based on user input. Those are the cases I’m talking about.


  • I don’t, not in general.

    There are good and bad uses of AI. For example, I used AI to generate my profile picture here on Lemmy (would you have noticed?). Creating art is one of the best uses of AI I can think of: nothing serious happens if it goes wrong, and a human can easily review whether the result looks as it should.

    But using AI to make actually meaningful business decisions without any human review at all? Using AI for customer service? Any company that does that deserves VERY negative consequences.

    I don’t agree with talking points like “AI companies should be required to pay copyright holders of their training data”, “AI is bad because of the environmental impact”, “AI is bad because of RAM prices”, or “AI companies should be legally responsible for any mistakes the AI makes (such as libel or encouraging users’ suicide)”; I think all of these are nonsense.

    I believe in general that AI gets too much attention in the media. It’s really not that impactful.