The Trump administration recently published “America’s AI Action Plan”. One of the first policy actions in the document is to eliminate references to misinformation, diversity, equity, inclusion, and climate change from NIST’s AI Risk Management Framework.

Without any sense of irony, the very next point states that LLM developers should ensure their systems are “objective and free from top-down ideological bias”.

Par for the course for Trump and his cronies, but the world should know what kind of AI the US wants to build.

  • brianpeiris@lemmy.caOP · 3 days ago

    I was giving an example of regulation that has an effect on open source AI

    • Tony Bark@pawb.social · 3 days ago

      Fair enough. That being said, deepfake services don’t need to be open source. Anything presented to the masses is obviously going to face enforcement, but that doesn’t necessarily translate back to the open source supply chain.

      • brianpeiris@lemmy.caOP · 3 days ago

        I’m not sure if this has happened yet, but in theory the TAKE IT DOWN Act could be used to shut down an open source deepfake code or model repository. In that case you’re right that copies would spring up, but I think it is significant that popular projects could be taken down like that.