Clarification:

Given that AI progress will not stop and AI will eventually be able to pass fiction off as truth, it will become impossible to trust articles, or at least most of them, and we won’t even be able to tell exactly what’s true and what’s fiction.

So, what if people from different countries and regions exchanged contacts here and talked about what’s really happening in their countries, what laws are being passed, etc., and also shared their well-thought-out theories and thoughts?

If my idea works, why not wake as many people as possible up to the fact that, in the future, only methods like this will be able to distinguish reality from falsehood?

I’m also interested in your ideas, as I’m not much of an expert.

  • CanadaPlus@lemmy.sdf.org

    Relevant XKCD.

    Basically, we’re going to go back to the days of assessing likely truth by how much you trust the source. Unfakeable photographic evidence was the irreplaceable thing, and it was nice to have for a while, but it’s definitely not the first time we’ve been without it. AI slop isn’t fundamentally different from human-made tall tales.

    To a degree I’ve already made this shift, and I tend to click more on websites I’m familiar with, where I used to just go by relevance.

    • howrar@lemmy.ca

      Even then, you can probably trust a source to not lie, but you can’t trust them to never get fooled.

  • silasmariner@programming.dev

    Same way we always have - trust. Reputation has always been a thing; there was a brief window where photographic or video evidence was enough, but it didn’t last all that long, and tbh it’s always had its flaws.

  • Alsjemenou@lemy.nl

    Like it has always been done. This question is such a weird one to me. The problem isn’t that AI is making shit up; people make shit up all the time. They lie, cheat, make mistakes, are dumb, etc. The problem is that we don’t know if we can still trust what we trusted before. But the solution is always simply trusting certain people to tell you the truth: scientists, journalists, teachers, publishers, etc. This has never not been the case. We can’t trust AI not to dream up answers. That’s just how it is, and that’s not a problem that needs solving; it’s a fundamental part of current LLM technology.

    Maybe in the future we can find something that makes LLMs more trustworthy, but as of yet, that’s simply not the case. So I don’t see a problem here. If you want to know the truth about something, you’re going to have to look at sources and do some digging until you find something you trust; then that’s what’s real to you.

    And unless you want some deep philosophical discussion on the nature of Truth and how to arrive at Reality, this is how it works and how it has always worked.

  • DigitalDilemma@lemmy.ml

    “That’s a great question!” </ai>

    The truth is, we don’t need AI to have misinformation, and AI is not the biggest problem in the current post-truth society. A global war on truth has been under way for a long time. The old saying that “the first casualty in war is truth” no longer applies, because truth is no longer even relevant, and lies are weaponised like never before in history. People don’t want to be certain of something; their first reaction to news is to respond at a deep, emotional level, and the science of misinformation is highly refined and successful at making most people react in a certain way. It takes effort and training not to do that, and most of us can’t.

    Journalists have been warning us about this for decades but integrity costs money, and that funding has been under attack too. It’s pretty depressing whichever way you look at it.

  • lowrads@lemmy.ml

    It’s best to hew to long-form content. It’s harder for content bots to rattle on for long without becoming incoherent. It helps that they don’t know anything, including what they don’t know, so they aren’t going to fool someone who’s already familiar with a subject. The problem emerges for novices, who often turn to chatbots to get an overview of a subject.

  • brucethemoose@lemmy.world

    The real problem is social media, and how feeds are structured now.

    The ‘few trustworthy institutions’ model has been utterly obliterated because a few tech companies figured out a sea of influencers is more profitable/exploitable. Not to minimize some of the great creators out there, but one’s daily news shouldn’t come from Joe Rogan + your Facebook uncle’s reshares.

  • Naich@lemmings.world

    That is already the case with written news. How can you trust any of it, when anyone can make up anything and present it as fact? Just as you have to rely on the source and provenance of a news story, the same is now true of photos and videos. Photo altering has been going on for over 100 years.

      • SkyNTP@lemmy.ca

        The tools to manufacture content are more accessible, sure. But again, information has always been easy to manufacture. Consider a simple headline:

        [Group A] kills 5 [Group B] people in terrorist plot.

        I used no AI tools to generate it, yet I was able to create it with minimal effort nonetheless. You would be right to question its veracity unless you recognized my authority.

        The content is not important. The person speaking it, and your relationship of trust with them, is. Evidence is only as good as the chain of custody leading back to its origin.

        Not only that, but a lot of people already avoid hard truths and seek to affirm their own belief system. It is soothing to believe the headline if you identify as a member of Group B, and painful if you identify as a member of Group A. That phenomenon does not change with AI.

        Our relationship with the truth is already extremely flawed. It has always been a giant mistake to treat information as the truth because it looks a certain way. Maybe a saturation of misinformation is the inoculation we need to finally break that habit and force ourselves to peg information to a verifiable origin (the reality we can experience personally, as we do with simple critical thinking skills). Or maybe nothing will change because people don’t actually want the truth, they just want to soothe themselves. I guess my point is we are already in a very bad place with the truth, and it seems like there isn’t much room for it to get any worse.

  • FriendOfDeSoto@startrek.website

    At different points in the past, we thought novels, newspapers, radio, television, and the internet would be the end of truth. Truth is still around. We develop systems to sift through the bullshit. In terms of slop, I don’t think anyone can say for sure how we will deal with it. But if past experience is anything to go by, it will rely on reputation. You trust a certain news source because they have been reliable, so they have a reputation they don’t want to lose, and that keeps them honest, or you move on. We will find a way to deal with slop as well, one based on reputation, in addition to laws and regulations that are yet to be written.

    Whether it’s newsrooms or TikTubers or something completely new that will gain this reputation, eff knows. But we will get there.

  • sunbeam60@lemmy.one

    I think authentication will be huge.

    Lots of commercial ventures are starting to bet on this future (e.g. https://swear.com/), where content gets fingerprinted and the fingerprint gets signed and put on an authentication blockchain (something blockchains are actually useful for).

    I imagine a future where this gets built into browsers (i.e. “this picture was verified by the NY Times, and it went through this editing chain, starting from a Canon camera which recorded it on this date”) and you can switch over to have unauthenticated assets highlighted.
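
    To make that concrete, here’s a minimal sketch of the fingerprint-and-sign step in Python. It assumes SHA-256 for the fingerprint and Ed25519 for the signature; swear.com’s actual scheme isn’t described in this thread, and the file name and key handling are made up for illustration.

    ```python
    # Sketch only: assumed scheme (SHA-256 + Ed25519), not swear.com's
    # documented design. Requires the "cryptography" package.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def fingerprint(path: str) -> bytes:
        """Hash the raw file bytes; any edit changes the fingerprint."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).digest()

    # The publisher (e.g. a newsroom) signs the fingerprint with its key.
    publisher_key = Ed25519PrivateKey.generate()
    signature = publisher_key.sign(fingerprint("photo.jpg"))
    # The (fingerprint, signature) pair is what would be published to an
    # append-only ledger; an "editing chain" is a series of such records.

    # A browser-side verifier recomputes the fingerprint and checks the
    # signature against the publisher's known public key.
    try:
        publisher_key.public_key().verify(signature, fingerprint("photo.jpg"))
        print("verified: file matches the signed fingerprint")
    except InvalidSignature:
        print("not verified: file altered or signed by someone else")
    ```

    The ledger itself only needs to be append-only; all the browser needs locally is the publisher’s public key.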

    • BCsven@lemmy.ca

      Red Star OS has a metadata capture and re-encode feature so the North Korean authorities can check who created, accessed, or shared a picture.

      We need that kind of image locking, without the fascism part.
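
      A rough sketch of what the non-fascist version might look like: the provenance claim travels inside the image as signed metadata that anyone can check, and nothing is reported back to an authority. This assumes Pillow and PNG text chunks; the chunk names, date, and identity are invented for the example.

      ```python
      # Sketch only: embed a signed provenance note in PNG text chunks.
      # Requires Pillow and the "cryptography" package; chunk names,
      # identity, and date below are hypothetical.
      import hashlib
      from PIL import Image
      from PIL.PngImagePlugin import PngInfo
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      creator_key = Ed25519PrivateKey.generate()

      img = Image.open("photo.png")
      # Sign a hash of the pixel data, not the file bytes, so the
      # signature stays valid when the metadata itself is rewritten.
      pixel_hash = hashlib.sha256(img.tobytes()).digest()

      meta = PngInfo()
      meta.add_text("provenance", "created 2025-01-01 by alice@example.org")
      meta.add_text("provenance-sig", creator_key.sign(pixel_hash).hex())
      img.save("photo_tagged.png", pnginfo=meta)

      # Anyone can later recompute the pixel hash and verify the signature
      # against Alice's public key; no central authority is involved.
      ```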

  • Blue_Morpho@lemmy.world

    So, what if people from different countries and regions exchanged contacts here and talked about what’s really happening in their countries

    Bots present themselves as real people with opinions. I know this is off topic, but have you ever tried Cool Ranch Doritos?

    • dreadknight11@lemmy.worldOP

      Yes, this is a serious problem. Before AI became such an issue it was still possible to find real people; now it’s risky, but still possible. If we wait another year, though, finding people will become completely impossible.

  • locuester@lemmy.zip

    talked about what’s really happening

    If you ask me about my country, the information and opinions you get will be very different from what you’d get by asking someone else. It doesn’t reveal a single truth.