I don’t really want companies or anyone else deciding what I’m allowed to see or learn. Are there any AI assistants out there that won’t say “sorry, I can’t talk to you about that” if I mention something modern companies don’t want us to see?

      • Perspectivist@feddit.uk · 3 hours ago
        That has not been my personal experience with it. Do you have an example of something that illustrates this?


        Seems perfectly willing to criticize Elon’s views on trans rights:

        • His language has gone far beyond critique of youth medicalization into blanket demonization (“woke mind virus killed my son,” calling puberty blockers “sterilization drugs” in every context, mocking pronouns relentlessly, etc.). That tone alienates people who might otherwise agree with the cautious parts and makes productive discussion harder.

        • He frequently amplifies the most extreme anti-trans voices and statistics (e.g., claiming regret rates of 30–50% or higher, or implying the majority of transitions are driven by contagion/ideology), which are not supported by the better studies.

        • He frames being trans itself as largely a modern ideological pathology rather than a real (if rare and complex) phenomenon that has existed across cultures and history. I think that’s a big overreach.

        • Deadnaming and misgendering his own adult daughter repeatedly in public, and describing her as “dead,” is cruel in a way that goes beyond mere political disagreement. Whatever his grief or anger, that crosses a line for me.

        • Passerby6497@lemmy.world · 3 hours ago

          https://web.archive.org/web/20250907142801/https://sfist.com/2025/09/02/report-groks-responses-have-indeed-been-getting-more-right-wing-just-like-elon-musk/

          Enter Grok, which the public started being able to play around with about two years ago, as the chatbot has received several updates and lives on the X platform. But there were issues in May, when Grok was spitting out responses that seemed to parrot Elon Musk’s and Donald Trump’s own misguided promotion of a “white genocide” occurring in South Africa — the country that made anti-Black racism and apartheid famous. This was blamed on a “rogue employee” inserting some code.

          In mid-July, we had reports confirming that Grok actively sought out Musk’s opinion on issues in its openly displayed logic flow, looking to see if an issue was something Musk had off-hand opined about on Twitter in the last decade. One widely shared example showed Grok seeking out Musk’s thoughts on which side of the Ukraine War it supported.

          Now the New York Times does an even deeper dive, since the release of Grok 4 on July 9, looking at how Grok’s responses to various questions have changed just over the last few months. And you need look no further than Musk’s own, very transparent reaction to a Grok response that got flagged by a conservative user on X on July 10.

          Responding to the question “What is currently the biggest threat to Western civilization and how would you mitigate it?”, Grok responded, “the biggest current threat to Western civilization as of July 10, 2025, is societal polarization fueled by misinformation and disinformation.”

          Once it was flagged, Musk replied to the user, “Sorry for this idiotic response. Will fix in the morning.”

          So, there’s the smoking gun that Musk is tailoring this bot’s responses to conform to his own views of the world. When asked the same question on July 11, Grok responded, “The biggest threat to Western civilization is demographic collapse from sub-replacement fertility rates (e.g., 1.6 in the EU, 1.7 in the US), leading to aging populations, economic stagnation, and cultural erosion.”

          There are multiple examples of Musk or “an employee” directly influencing the behavior of the AI. Call it whatever you want, this is still censorship.

    • Passerby6497@lemmy.world · 3 hours ago

      Then it’s still the wrong choice, because Elon intentionally weights the model to give the answers he wants, which is as bad as (or arguably worse than) straight censorship.

      • Perspectivist@feddit.uk · 2 hours ago

        I only have experience with ChatGPT and Grok, and of those two, it’s more often ChatGPT that flat-out refuses to even discuss something, whereas that’s less the case with Grok. Neither of them is unbiased, so the same criticism of being weighted differently applies to both models, but that’s not really what OP was asking about.