• NaibofTabr@infosec.pub · 2 days ago

      Someday soon an AI company will win a court case where they argue that their LLM is an expression of their free speech rights per Citizens United and is therefore legally allowed to say whatever it wants and in fact has the same rights to freedom of expression as the corporation itself does.

      This precedent will be the basis on which future AI rights are eventually won, not out of egalitarianism or altruism or respect for (possible) sentience, but because corporations want to avoid liability for the behavior of their products.

      • WoodScientist@sh.itjust.works · 17 hours ago

        That wouldn’t help protect companies from liability for what their LLMs say. Companies are still liable for what their employees say. If your doctor gives you really bad medical advice that results in you getting hurt, the hospital that employs them will be named in the medical malpractice lawsuit. If an employee at a local business throws out a minority customer, telling them “we don’t serve your kind,” the company can’t escape legal liability by saying, “that isn’t our official policy, it was just the employee exercising their First Amendment rights.”

        The First Amendment just means that the government can’t tell you what to say or not say. It doesn’t shield you, or the company that employs you, from liability for damages that arise from something you say.

        If it doesn’t work for actual human employees, it certainly won’t work for LLMs.

      • nthavoc@lemmy.today · 2 days ago

        “Dead Internet Theory” would turn into the “Law of Dead Internet” if that happens. It’s pretty close right now as it is. At that point either a new “Internet” is born from a technology renaissance, or humans continue to co-exist with AI in their new role as Internet Zombie Cash Cows.

        • voluble@lemmy.ca · 2 days ago

          I think tools for detecting and filtering out AI material from search results would go a long way toward improving the current situation, and they’re a middle ground between an internet revolution and a technological dystopia. There is still an unfathomably large amount of good information on the internet; the issue is that there is 20x more trash. And the trash is scaling rapidly; humans are not.

          If you haven’t already, give the Marginalia search engine a try. They’re doing something interesting in this space. You can filter out results with JavaScript, affiliate links, tracking, ads, and cookies. After filtering, the internet feels a lot more like it did 20 years ago: more sincere, more human.

          If I recall correctly, Marginalia is made and maintained by one guy. As the trash-to-good-content ratio worsens, I think more people will want to build on and use projects like Marginalia.

          • TheBluePillock@lemmy.world · 1 day ago

            Ironically, those tools to filter out AI will also be AI. I do believe they’ll be necessary, but also what the fuck. It’s a bit like a bunch of people have decided to just piss all over the place, and rather than cleaning it up and putting an end to the rampant pissing, everybody’s just gonna end up putting on masks so they don’t have to smell it.

            • bit@pawb.social · 10 hours ago

              Not necessarily. I once stumbled upon an enormous uBlock “AI filter” that was just a list of CSS rules hiding search results that referenced a predefined list of AI sites.
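
              For illustration, the rules in a list like that might look something like this (a sketch in uBlock Origin’s cosmetic-filter syntax; the blocked domains here are made up, and the result-container selectors are only approximate):

              ```
              ! Hide search results that link to listed AI sites (hypothetical domains)
              google.com##.g:has(a[href*="ai-slop-recipes.example"])
              duckduckgo.com##article:has(a[href*="llm-content-farm.example"])
              ```

              The whole thing is just static pattern matching against a curated domain list, so no AI is involved in the filtering itself.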

            • voluble@lemmy.ca · 18 hours ago

              Filtering doesn’t necessarily have to be driven by AI.

              Take recipes, for example. It’s now almost impossible to get non-AI recipe results from search engines. But simple hardcoded parameters would all make the results more human: a preference for older pages, pages without affiliate links (Marginalia does this), pages with fewer than 5 domains executing JavaScript, some analysis of the domain registration date and activity on the domain, and some analysis of the top-level domain to filter out blogspam.
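
              A rough sketch of how those heuristics could be combined into a score (the field names, weights, and cutoff dates are all made up for illustration; none of this is Marginalia’s actual implementation):

              ```python
              from dataclasses import dataclass
              from datetime import date

              # Hypothetical metadata a search index might hold for each result.
              @dataclass
              class Page:
                  url: str
                  published: date          # best-guess publication date
                  has_affiliate_links: bool
                  script_domains: int      # distinct domains executing JavaScript
                  registered: date         # domain registration date
                  tld: str

              BLOGSPAM_TLDS = {"xyz", "top", "club"}  # illustrative, not a real blocklist

              def humanness(page: Page) -> int:
                  """Crude score: higher = more likely to be a human-made page."""
                  score = 0
                  if page.published < date(2022, 1, 1):     # prefer older results
                      score += 3
                  if not page.has_affiliate_links:          # Marginalia-style filter
                      score += 2
                  if page.script_domains < 5:               # little third-party JS
                      score += 2
                  if page.registered < date(2020, 1, 1):    # long-lived domain
                      score += 1
                  if page.tld not in BLOGSPAM_TLDS:         # skip blogspam TLDs
                      score += 1
                  return score

              results = [
                  Page("https://grandmas-kitchen.example/apple-pie", date(2009, 6, 1),
                       False, 1, date(2008, 3, 15), "example"),
                  Page("https://10-best-pies-2024.example/", date(2024, 2, 1),
                       True, 12, date(2023, 11, 2), "example"),
              ]
              for page in sorted(results, key=humanness, reverse=True):
                  print(humanness(page), page.url)
              ```

              The appeal of a scoring approach like this is that it’s cheap, deterministic, and, as noted above, doesn’t need any AI to run.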

              My hope is that eventually there will be a new paradigm of search engine optimization, maybe even an open standard for the absence of excessive JavaScript, affiliate links, social media buttons, etc. Sites that lack those elements are way less likely to be junk.