Meta has been listening to some concerns after all, especially now that it's under some pressure.

These changes could very well help parents moderate their teens' accounts. In an NPR interview, Meta's head of product said the changes address three particular concerns.

Will this be the end of the complaints and concerns aimed at Instagram? Probably not.

  • Dizzy Devil Ducky@lemm.ee · +37 · 2 days ago

    I’m personally on the fence about this type of stuff. On one hand, yes, I 100% agree with actually keeping kids safer online (not like the politicians’ “Think of the kids!” type of “safety”). On the other, I don’t want anyone to have to give up privacy by having to confirm their age with some form of verification, whether that’s a picture/video of an ID with a birth date on it, or having an AI that will inevitably produce false positives judge you, just to access a service online.

    • sugar_in_your_tea@sh.itjust.works · +18 · 2 days ago

      I’m 100% in the second camp. Facebook having my ID is a much bigger issue than having my kids’ profile be public. I as a parent can ensure my kids’ profiles are acceptable, or mark them as private myself. I can’t ensure Facebook deletes my ID after verifying my identity.

      Yes, kids should be safer online, and that starts at home. Educate parents and kids about how to stay safe, that’s as far as it should go.

      • domdanial@reddthat.com · +4 · 1 day ago

        I’m also in the second camp. Plus, censoring the bad words on specific users is a few too many steps closer to don’t say gay on the internet. Is ass ok but not fuck? Is sex talk forbidden? All mention of anatomy, including general questions about health? How about they ban anti-capitalist language too? The tiktok language phenomenon shows that users will absolutely just make do getting around communication bans, “unalive” and “le$beans” being the most popular. This type of censorship has already happened on other platforms, and it’s all bullshit and useless.

        • sugar_in_your_tea@sh.itjust.works · +2 · edited · 7 hours ago

          I completely agree. I’m reading a book related to 1984, and all of the thought crime and whatnot it talks about is scarily on-point when it comes to social media censorship. For example, “sex crime” is strictly controlled, and in the same chapter that someone gets taken away for getting pregnant, the MC talks about sexual relationships she has and plans to have. Nobody can talk about love or relationships, yet everyone seems to engage in them, or at least one-night stands. In fact, the word used for “abortion” in that book is “unbirth,” which is right there with the term “unalived.”

          Blocking out a huge part of human culture doesn’t help anyone, and it doesn’t actually work, because people will find a way. What can work is giving users the tools to hide stuff they don’t want to see.

      • dustyData@lemmy.world · +4 · 1 day ago

        The obvious answer is that Facebook should not be used by anyone, ever. The model is cancer, whatever FB does of value for the user can be accomplished without a social media platform.

      • el_bhm@lemm.ee · +2 · edited · 1 day ago

        The choice becomes much, much harder once you listen to accounts about CSAM. Darknet Diaries has a few episodes on this. Some of the accounts are stomach-churning. You can see the reasoning of the people pushing for these laws.

        And I agree. Education would go a long way. Much further than some ID verification.

        But, see, education makes people smarter. What if people see through the lies of politicians?!

        Both politicians and agencies are drooling at the thought of such laws. Because no one answers one simple aspect the people want answered. Who watches the watchers? Who are they accountable to?

        • sugar_in_your_tea@sh.itjust.works · +1 · 7 hours ago

          Exactly.

          People like easy solutions to complex problems. If you don’t see the problems, it’s easy to assume they don’t exist, but what actually happens is that banning things just pushes them underground, where they fester. Alcohol prohibition created the mafia, which caused so many more problems than alcohol ever did, and it’s still around today. Banning drugs seems to have created, or at least strengthened, the drug cartels. I wouldn’t be surprised if strict controls around CSAM actually end up harming more kids, as people who would otherwise be casual observers get caught up in the worst of it and end up actually harming children. I’m not saying CSAM should be legal or anything like that, I’m just saying that strict censorship of anything close to it is more likely to push someone who is casually interested to go and find it. The more strictly something is controlled, the more valuable it is for the person who controls it.

          In other words, it’s the Streisand Effect, but for crime.

          No, what we need is better education and better (not more) policing.

    • Rob200@lemmy.autism.place (OP) · +5 · 2 days ago

      Anything to prevent getting my ID in a database. I would actually be OK with using an AI to verify my age by my appearance if it really came down to it and I legally had to choose some form of age verification.

    • Clbull@lemmy.world · +1/−1 · edited · 1 day ago

      I’m in the first camp. Instagram is flooded with spam accounts posting links to illicit Telegram channels where actual CSAM is being distributed. The owner of Telegram was also arrested recently for failing to safeguard his platform from such highly illegal activity. Children having easy and often unrestricted access to social media is probably the reason why things have gotten so bad.

      Every major social network should be asking for ID verification, but there should be strict safeguards on how that information is used and stored, with hefty fines for failures to safeguard.

    • rottingleaf@lemmy.world · +1/−1 · 1 day ago

      All this is creeping surveillance, and the end goal is not commercial, it’s political.

      One commandment parents of many people of my age (28) have failed to imprint is - you shall say “nay” and you shall tell jerks to eat shit and die.

      There are many distractions, somehow the computer program processing your unencrypted communications being called “AI” becomes important, somehow the difference between that program and the people controlling it becomes important, somehow them being able to censor you becomes important, and somehow requirements to confirm identity become normal.

      I felt hot all-encompassing shame many times in my childhood for not remembering things which were unimportant, but people around would remember those. Only now I understand that something in my childhood was a gift.

      Seeing what is happening by most general and vague descriptions might help to judge things more soberly.

  • antmzo220@lemmy.ml · +144/−2 · edited · 2 days ago

    How many users does IG have that are registered as under 18?

    I’m 25 now, but I still always say I was born in the 80s out of habit…

    It’s a good step, but it won’t fix things.

    • ArbitraryValue@sh.itjust.works · +38/−1 · 2 days ago

      Nothing can fix things because teenagers will not cooperate. If Instagram could identify all its teenage users, those users would move to a platform that couldn’t. The only thing the restrictions achieve is a reduction in the market share of the platform with the restrictions.

      • Thurstylark@lemm.ee · +14 · 2 days ago

        I think it would be naive to think that they don’t know this already. Not to say that I think you’re making that argument, but that I think the losses are calculated against the benefit of the appearance of care that this move affords them. Sure, these new restrictions and tooling means that some parents will be more willing to allow their teens to engage with the platform, but there’s no way that will outweigh the active user reduction in the targeted age range.

        The real benefit is looking like they’re doing stuff in a positive direction in the context of minors. I’m definitely expecting them to point at this move (and its voluntary nature) as an argument against future regulation proposals. Especially the part where they’re ostensibly putting that control in parents’ hands.

    • 14th_cylon@lemm.ee · +29 · 2 days ago

      I still always say I was born in the 80s out of habit…

      I always say 1900 out of habit… I was at least once rejected as too old :D

    • GamingChairModel@lemmy.world · +5/−1 · 2 days ago

      If you’re 25 now, you were 15 during the early wild west days of smartphone adoption, while we as a society were just figuring that stuff out.

      Since that time, the major tech companies that control a big chunk of our digital identities have made pretty big moves at recording family relationships between accounts. I’m a parent in a mixed Android/iOS family, and it’s pretty clear that Apple and Google have it figured out pretty well: child accounts linked to dates of birth that automatically change permissions and parental controls over time, based on age (including severing the parental controls when they turn 18). Some of it is obvious, like billing controls (nobody wants their teen running up hundreds of dollars in microtransactions), app controls, screen time/app time monitoring, location sharing, password resets, etc. Some of it is convenience factor, like shared media accounts/subscriptions by household (different Apple TV+ profiles but all on the same paid subscription), etc.

      I haven’t made child accounts for my kids on Meta. But I probably will whenever they’re old enough to use chat (and they’ll want WhatsApp accounts). Still, looking over the parent/child settings on Facebook accounts, it’ll probably be pretty straightforward to create accounts for them, link a parent/child relationship, and then have another dashboard to manage as a parent. Especially if something like Oculus takes off and that’s yet another account to deal with paid apps or subscriptions.

      There might even be network effects, where people who have child accounts are limited in the adult accounts they can interact with, and the social circle’s equilibrium naturally tends towards all child accounts (or the opposite, where everyone gets themselves an adult account).

      The fact is, many of the digital natives of Gen Alpha aren’t actually going to be as tech savvy as their parents as they dip their toes into the world of the internet. Because they won’t need to figure stuff out on their own to the same degree.
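The age-tiered child-account model described above (permissions that automatically relax as the stored date of birth ages, with the parental link severed at 18) can be sketched roughly like this. The tiers, thresholds, and permission names are illustrative assumptions, not Apple's or Google's actual policy:

```python
from datetime import date

def age_on(birth_date: date, today: date) -> int:
    """Full years elapsed between birth_date and today."""
    return today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )

def permissions_for(birth_date: date, today: date) -> dict:
    """Hypothetical age-tiered parental controls; tiers are illustrative only."""
    age = age_on(birth_date, today)
    if age < 13:
        # Youngest tier: everything gated behind a parent.
        return {"purchases": "parent_approval", "app_installs": "parent_approval",
                "location_sharing": True, "parental_controls": True}
    if age < 18:
        # Teen tier: some autonomy, but spending still needs approval.
        return {"purchases": "parent_approval", "app_installs": "self",
                "location_sharing": True, "parental_controls": True}
    # At 18 the parent/child link is severed automatically.
    return {"purchases": "self", "app_installs": "self",
            "location_sharing": False, "parental_controls": False}
```

Because the tier is derived from the stored birth date each time it's checked, the controls change over time without anyone flipping a switch, which matches the behavior the comment describes.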

      • cm0002@lemmy.world · +9 · 2 days ago

        It works well…when a parent makes an account for the express purpose of parental controls. The “issue” is the fake accounts (i.e. “finstas”) that kids make themselves, in which they lie about their age.

        Also, side note, Google's child accounts work OK, but I would not say they've got it on lock. Did you know that if you get your kids a debit card and they're under 13, Google will NOT allow them to add it as their own payment method, no matter what consent I'm willing to give?

        Yeah, I had to do a parent-sanctioned age-lie to Google, so now Google thinks my kids are all 13+, just so I could do the extreme thing of teaching them money responsibilities in an age of digital transactions. SMDH

        Because they won’t need to figure stuff out on their own to the same degree.

        Lol they will the second they get hit with that “you need to get parental consent” screen, that’s how it happened to us all.

        • GamingChairModel@lemmy.world · +1 · 1 day ago

          Lol they will the second they get hit with that “you need to get parental consent” screen, that’s how it happened to us all.

          The normie services are increasingly tied to real world identities, through verification methods that involve phone numbers and often government-issued IDs. As the regulatory requirements tighten on these services, it’ll be increasingly more difficult to create anonymous/alt accounts. Just because it was easy to anonymously create a new Gmail or a new Instagram account 10 years ago doesn’t mean it’s easy today. It’s a common complaint that things like an Oculus requires a Meta account that requires some tedious verification.

          I don’t think it’ll ever be perfect, but it will probably be enough for the network effects of these types of services to be severely dampened (and then a feedback loop where the difficult-to-use-as-a-teen services have too much friction and aren’t being used, so nobody else feels it is worth the effort to set up). Especially if teens’ parent-supervised accounts are locked to their devices, in an increasingly cloud-reliant hardware world.

    • SagXD@lemm.ee · +1/−1 · 2 days ago

      More than you can imagine. I haven’t found a student in my high school without IG.

  • hdnsmbt@lemmy.world · +17 · edited · 2 days ago

    Thank god they’re filtering out the bad no-no words! Finally teens won’t be using naughty and scary words any longer because forbidding words that make us sad and upset is a sensible and smart thing to do! Fuck these shitty networks policing every aspect of speech with a humongous camel dick!

    Also, if everything is highlighted, nothing is highlighted. Be more reasonable with your highlights.

  • garretble@lemmy.world · +97 · 2 days ago

    I’m glad nearly every word in this image is highlighted so I’d know what to read.

    (I’m just joshin’)

      • Rob200@lemmy.autism.place (OP) · +3/−2 · 2 days ago

        It’s not an AI summary; if it were, the wording would have been different from the article. The content featured in the screenshot is from the article, and I manually draw attention to the parts I am interested in, also to narrow things down. I started highlighting instead of redacting just so people wouldn’t say I’m censoring.

        For those who think it’s an ai summary idk what to tell you.

        • lunarul@lemmy.world · +2 · 2 days ago

          If you read the whole text and interpret the highlights as emphasis then it’s just annoying and hard to read (sort of like those people who add random commas everywhere). If you read just the highlighted text then it sounds like a summary, but there are mistakes in it, which is why I assumed AI.

          • Rob200@lemmy.autism.place (OP) · +2/−1 · edited · 2 days ago

            See, the screenshot isn’t intended to be a summary but a selected portion I’m reacting to in a given post. If someone wants to read the full story, it’s linked.

            I (or, if it’s not a post I created, the OP) usually provide the link to the article, and if anyone were to ask me, I would always tell them to read the article for full context.

    • helpImTrappedOnline@lemmy.world · +2 · 2 days ago

      Nah, that’s a totally valid point. In school, you could tell that anyone whose notebook was a giant yellow soggy mess was not going to adjust well to adult life.

  • yamanii@lemmy.world · +12 · 2 days ago

    They’ve known their network is harmful to teens for years now. I wonder why NOW they’re finally doing something about it?

    • Tarquinn2049@lemmy.world · +10 · 2 days ago

      Yeah, I’m not sure. People are calling it highlighting, but it doesn’t fit any reasonable pattern to have been manually highlighted. Is there some sort of bad automated highlighting? Or just someone still learning what highlighting is even used for. Or is it just some sort of style thing?

      • Rob200@lemmy.autism.place (OP) · +1/−4 · 2 days ago

        It’s highlighting, what’s wrong with that? I thought it was an improvement from my earlier posts where I was blocking out filler to narrow down the article.

        I highlight the parts I want to read if I were to revisit the article, to narrow it down and save time. Could be useful for users too who just want to get the story and not read a lot of filler.

        • paraphrand@lemmy.world · +5 · 2 days ago

          Yeah, you don’t need to put this much effort in. Bold/highlight one key thing for emphasis at most. Maybe two.

          • Rob200@lemmy.autism.place (OP) · +1/−3 · 2 days ago

            It doesn’t take much time or effort to do. If I highlight only one or two things, then when I read it again, I’ll have to, gasp, read through a good portion of the article again.

            Sometimes there are more than just two things to highlight.

  • SagXD@lemm.ee · +14/−2 · 2 days ago

    Wait, there are teens who don’t make their accounts private? That’s weird.

  • roofuskit@lemmy.world · +10 · 2 days ago

    This has all happened before and it will all happen again. This is what it looks like when a social media company tries to head off an incoming regulatory push.

    • Ilandar@aussie.zone · +4 · 2 days ago

      Meta said it was fully expecting many teenagers would try to evade the new measures.

      “The more restrictive the experience is, the stronger the theoretical incentive for a teen to try and work around the restriction,” Mr Mosseri said.

      In response, the company is launching and developing new tools to catch them out.

      Instagram already asks for proof of age from teenage users trying to change their listed date of birth to an adult one, and has done since 2022.

      Now, as a new measure, if an underage user tries to set up a new Instagram account with an adult date of birth on the same device, the platform will notice and force them to verify their age.

      In a statement, the company said it was not sharing all the tools it was using, “because we don’t want to give teens an instruction manual”.

      “So we are working on all these tools, some of them already exist … we need to improve [them] and figure out how to provide protections for those we think are lying about their age,” Mr Mosseri said.

      The most stubborn category of “age-liars” are underage users who lied about their age at the outset.

      But Meta said it was developing AI tools to proactively detect those people by analysing user behaviour, networks and the way they interact with content.

      Source.
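The same-device check the quoted article describes could work something like the sketch below. The in-memory set and function names are assumptions for illustration only, since Meta hasn't published how its detection actually works:

```python
from datetime import date

# Devices that have previously hosted an account registered as under 18.
# A real system would persist and expire this; a set stands in for illustration.
known_teen_devices: set[str] = set()

def age_on(birth_date: date, today: date) -> int:
    """Full years elapsed between birth_date and today."""
    return today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )

def register_account(device_id: str, claimed_birth_date: date, today: date) -> str:
    """Return the signup outcome for a new account created on this device."""
    if age_on(claimed_birth_date, today) < 18:
        known_teen_devices.add(device_id)
        return "teen_account"
    if device_id in known_teen_devices:
        # Adult date of birth claimed on a device that recently held a
        # teen account: force age verification, as the article describes.
        return "verification_required"
    return "adult_account"
```

So a teen who makes an account and then tries again on the same phone with an adult birth date would get routed to verification instead of straight through.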

      • umami_wasabi@lemmy.ml · +1 · edited · 1 day ago

        Now, as a new measure, if an underage user tries to set up a new Instagram account with an adult date of birth on the same device, the platform will notice and force them to verify their age.

        So, another reason to force users to hand over PII for “age verification” if they “suspect” (with AI, of course) that a new user is underage. Nice.

      • hdnsmbt@lemmy.world · +1 · 2 days ago

        developing AI tools

        We tried nothing and we’re all out of ideas. AI to the rescue!

  • Optional@lemmy.world · +4 · 2 days ago

    “If everyone jumped off a bridge, would you jump off too?”

    Glad we found the answer to that parental koan.

  • MsPenguinette@lemmy.world · +4 · 2 days ago

    At least it’s a step in the right direction. Especially since they’ve been extremely evil when it comes to teens. Tho I’m sure they’ll figure out how to continue to be evil with these restrictions/guidelines in place.