Just the other day I asked an AI to create an image showing a diverse group of people, and it didn’t include a single black person. I asked it to rectify that, and it couldn’t. This went on a number of times before I gave up. There’s still a long way to go.

  • CerebralHawks@lemmy.dbzer0.com · 3 days ago

    TL;DR: A woman missing an arm couldn’t get AI image generators to generate an amputee; they apparently didn’t know how. Now they do, and the woman says the representation is important.

    I guess it couldn’t find enough art of amputees to steal to form enough of a basis to draw them? And so in reaction to the backlash (such as it was), they gave it more data?

    • GenderNeutralBro@lemmy.sdf.org · 3 days ago

      Representation…in AI image generation?

      The idea that this is something anyone should want is hard to wrap my head around.

      If I could opt out of being deepfake-able, I would.

    • CanadaPlus@lemmy.sdf.org · 3 days ago (edited)

      Image generation often happens in a kind of region-by-region way, too, so just not continuing the arm might be hard.
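
      A toy sketch of that intuition (purely illustrative, not how any real model works): if each region is filled in mainly by continuing its already-generated neighbours, there is no single place to enforce a global constraint like “the arm ends here.”

      ```python
      # Hypothetical patch-wise generator -- a cartoon of local continuation,
      # not any real architecture. Each patch mostly copies whatever its
      # neighbours already contain, so "stop the arm here" has no single
      # enforcement point.
      import random

      GRID = 4  # 4x4 grid of patches, generated left-to-right, top-to-bottom

      def generate_patch(left, above):
          neighbours = [p for p in (left, above) if p is not None]
          if neighbours and random.random() < 0.9:
              return random.choice(neighbours)  # continue neighbouring content
          return random.choice(["arm", "background"])

      canvas = [[None] * GRID for _ in range(GRID)]
      for r in range(GRID):
          for c in range(GRID):
              left = canvas[r][c - 1] if c > 0 else None
              above = canvas[r - 1][c] if r > 0 else None
              canvas[r][c] = generate_patch(left, above)

      for row in canvas:
          print(" ".join(f"{p:>10}" for p in row))
      ```

      Once a few patches contain “arm”, their neighbours tend to keep extending it; that local-continuation pressure is what makes a truncated limb an unlikely output.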

      It’s annoying that she asked ChatGPT why it was doing that and the article reported the answer uncritically.

  • Jul (they/she)@piefed.blahaj.zone · 3 days ago

    Despite living with one arm, Jess doesn’t see herself as disabled, saying the barriers she faces are societal.

    Actually, this is what disability is all about. It’s not that people can’t complete tasks or take care of themselves; it’s that society doesn’t provide disabled people the same tools it provides so-called “able-bodied” people to complete those tasks.

    It’s the trope of the single grocery store that everyone goes to, which a person in a wheelchair, though otherwise able, can’t use because there’s a curb. So, suddenly they can’t feed themselves. It’s not that they are unable to feed themselves; it’s that they can’t access the food without assistance and thus are “disabled”. As soon as a ramp is installed they are no longer “disabled”, just differently abled.

    • SSUPII@sopuli.xyz · 3 days ago (edited)

      I’m thinking it will instead not be the case? Bigger models will be able to store more of the less common realities.

      • Eq0@literature.cafe · 3 days ago

        They will, at best, replicate their training data. They will learn racial discrimination and propagate it.

        If you have a deterministic system to rate a CV, for example, you can ensure that no obvious negative racial bias is included. If instead you have an LLM (or other AI), there is no supervision over which data elements are used and how. The only thing we can check is whether the predictions match the (potentially racist) data.
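
        A minimal sketch of that contrast (field names and weights are made up for illustration): a deterministic scorer can exclude protected attributes by construction, so you can audit exactly which inputs ever touch the result.

        ```python
        # Hypothetical deterministic CV scorer. Only whitelisted fields can
        # ever affect the score; name, photo, etc. are excluded by construction.
        ALLOWED_FIELDS = {"years_experience", "relevant_skills", "certifications"}

        def score_cv(cv: dict) -> float:
            used = {k: v for k, v in cv.items() if k in ALLOWED_FIELDS}
            return (
                2.0 * used.get("years_experience", 0)
                + 1.5 * len(used.get("relevant_skills", []))
                + 1.0 * len(used.get("certifications", []))
            )

        cv = {
            "name": "A. Candidate",   # ignored by construction
            "photo": "...",           # ignored by construction
            "years_experience": 4,
            "relevant_skills": ["python", "sql"],
            "certifications": ["aws"],
        }
        print(score_cv(cv))  # 2.0*4 + 1.5*2 + 1.0*1 = 12.0
        ```

        With an LLM reading the CV as free text there is no such whitelist: the name, the school, even the phrasing can all shift the output, and bias can only be detected after the fact by auditing outcomes statistically.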

      • luxyr42@lemmy.dormedas.com · 2 days ago

        You may be able to prompt for the less common realities, but the default of the model is still going to see “doctor” as a white man.