• floofloof@lemmy.ca · 2 days ago

      Watch all the AI companies scramble to comply in a quest for government contracts. This will affect everyone who uses American LLMs and generative AI.

      It should also open an opportunity for international competition from less censored models.

      • bigfondue@lemmy.world · 2 days ago
        And this is one of the best arguments against depending on LLMs. People are outsourcing their thinking to linear algebra machines owned by the wealthy. LLMs are a tool of social control.

      • Tony Bark@pawb.social · 2 days ago

        Considering how regularly they bleed cash, I can see them jumping on the government contract bandwagon quickly.

      • leftytighty@slrpnk.net · 2 days ago
        To be fair to the executive order (ugh), many of the examples cited are due to well-intentioned system prompts that encourage the LLM to actively be diverse.

        The example of a female pope or whatever (read about this earlier) is a case of that.

        Generally speaking, the LLMs have a left bias because they’re trained on information, unlike conservatives, but the companies aren’t necessarily asking the models to be censored.

    • mic_check_one_two@lemmy.dbzer0.com · 2 days ago
      Because Executive Orders aren’t laws. They’re just guidelines for the executive branch of the federal government, which the POTUS is in charge of. An EO can’t affect private entities like AI businesses, because that would require an actual act of Congress.

      Notably, this could determine what kinds of contracts the executive branch is able to make. For instance, maybe the government wants to contract out an LLM instead of building its own. This EO could affect which companies are able to bid on that contract, by adding these same restrictions to any LLM they provide. But on its own, the EO is just that: an order to the executive branch of the federal government.

    • panda_abyss@lemmy.ca · 2 days ago
      But for anything the US feds contract them for, like building data centres, they have to comply or face penalties and pay all the costs back.

      Ten days ago, a week before this was announced, the feds awarded $200M contracts each to Anthropic, OpenAI, Google, and xAI.

      This doesn’t doom the public versions, but the companies now have a pretty strong incentive to save money and make those versions comply with the US government’s new definition of truth.

    • forrgott@lemmy.sdf.org · 2 days ago
      Well, in practice, no.

      Do you think any corporation is going to bother making a separate model for government contracts versus any other use? I mean, why would they? So unless you can pony up enough cash to compete with a lucrative government contract (and the fact that none of us can is, in fact, the whole point), the end result will be these requirements being adopted by the overwhelming majority of generative AI available on the market.

      So in reality, no, this absolutely will not be limited to models purchased by the feds. Frankly, I believe choosing to think otherwise to be dangerously naive.

      • MrMcGasion@lemmy.world · 2 days ago
        Based on the attempts we’ve seen at censoring AI output so far, there doesn’t seem to me to be a way to actually do this without building a new model with pre-censored training data.

        Sure, they can tune models, but even “MechaHitler” Grok was still giving some “woke” answers on occasion. I don’t see how this doesn’t either destroy AI’s “usefulness” (not that there’s any usefulness there to begin with) or cost so much to implement that investors pull out. None of the AI companies are profitable, and throwing billions more at sifting through and filtering the training data pushes profitability even further away (if censoring all the training data is even possible at all).

      • itsame@lemmy.world · 2 days ago
        No. You would use a base model (GPT-4o) to get a reliable language model, then add a set of rules that the chatbot follows. Every company has its own rules, and the approach is already widely used to add data like company-specific manuals and support documents. Not rocket science at all.
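
        For illustration only, here is a minimal sketch of that pattern using the OpenAI Python SDK. The company name, rules, and document text are made up, and in a real setup the documents would come from a retrieval system rather than being pasted inline:

            from openai import OpenAI

            client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

            # Hypothetical company-specific rules layered on top of the base model.
            SYSTEM_RULES = (
                "You are the support assistant for ExampleCorp. "
                "Answer only from the reference documents provided. "
                "If the answer is not in the documents, say you don't know."
            )

            # Placeholder company data (manuals, support docs); a real deployment
            # would retrieve relevant passages from a document store.
            reference_docs = "ExampleCorp warranty: hardware is covered for 24 months."

            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {"role": "system", "content": SYSTEM_RULES},
                    {"role": "system", "content": "Reference documents:\n" + reference_docs},
                    {"role": "user", "content": "How long is my warranty?"},
                ],
            )
            print(response.choices[0].message.content)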

        • forrgott@lemmy.sdf.org · 2 days ago
          There are so many examples of this method failing that I don’t even know where to start. Most visible, of course, was how that approach failed to stop Grok from “being woke” for, like, a year or more.

          Frankly, you sound like you’re talking straight out of your ass.

          • itsame@lemmy.world · 2 days ago
            Sure, it can go wrong; it is not foolproof. Just like building a new model can cause unwanted surprises.

            BTW, there are many theories about Grok’s unethical behavior, but this one is new to me. The explanations I was familiar with are: unfiltered training data, no ethical output restrictions, programming errors or incorrect system maintenance, strategic errors (Elon!), and publishing before proper testing.

    • dontmindmehere@lemmy.world · 2 days ago
      Honestly, this order seems empty. Does the government even have a need for general LLMs? Why would it need an AI to answer simple questions?

      As much as I dislike Trump, this shouldn’t impact any AI available to the general public.