When I was young and starting out with computers, programming, BBSes, and later the early internet, technology was something that expanded my mind, helped me research, learn new skills, meet people, and have interesting conversations. It was decentralized and put power into the hands of the little guy, who could start his own business venture with his PC or expand his skillset.

Where we are now with AI, the opposite seems to be happening. We are asking AI to do things for us rather than learning how to do things ourselves. We are losing our research skills. Many people are talking to AIs about their problems instead of to other people. And AI will take away our jobs and centralize all power into a handful of billionaire sociopaths with robot armies to carry out whatever nefarious deeds they want.

I hope we somehow make it through this part of history with some semblance of freedom and autonomy intact, but I’m having a hard time seeing how.

  • tomiant@piefed.social · 7 days ago

    Any good thing will inevitably be corrupted by capitalism, because that is what capitalism does. It is a cancer, and it will consume everything and us all in the process.

    I don’t know if it was in Stross’s “Accelerando” that humanity told an AI to solve some complex problem at any cost, and the AI promptly turned all the matter in the solar system into a supercomputer capable of solving it.

    That’s capitalism in a nutshell: “do profit” is the only imperative, and it will destroy everything, just like a cancer is predicated upon “do growth”, forever, at any cost, regardless of whether the host organism dies.

  • chunkystyles@sopuli.xyz · 7 days ago

    AI isn’t the only thing you can use a computer for now. If you ignore AI and corporate software, there’s loads of mind expanding activities in computing.

    Take a look at what you can self host with commodity hardware (barring the insane RAM prices right now).

    • realitista@lemmus.orgOP · 6 days ago

      I do lots of self hosting. But the issue is not what I will do but what the world will do and what we will be forced to do by our employers and pressure to work at an efficiency only possible with ai doing a lot of the work.

  • solomonschuler@lemmy.zip · 5 days ago

    As far as I’m concerned, the generative AI we see in chatbots has no goal associated with it: it just exists for no purpose at all. In contrast, Google Translate and other translation apps (which, BTW, still use machine-learning algorithms) have a far more practical use: translating other languages in real time. I don’t care what companies call it (a tool or not); at the moment it’s a big fucking turd that AI companies are trying to force-feed down our fucking throats.

    You also see this tech slop happening historically in the evolution of search engines, way before we had recommendation algorithms in most modern ones. A search engine was basically a database where the user had to thoughtfully word their queries to get good search results. Then came the recommendation algorithm, and I can only imagine that no one, literally no one, cared about it, since we could already do the things this algorithm offered to solve. Still, it was pushed, and sooner rather than later it was integrated into most popular search engines. Now you see the same thing happening with generative AI…

    The purpose of generative AI, much like the recommendation algorithm, is to solve nothing; hence the analogy “it’s just a big fucking turd” I’m trying to get across: we could already do the things it offered to solve. If you can see the pattern, it’s just a downward-spiraling effect. It appeals to anti-intellectuals (which is most of the US at this point), and Google and other major companies are making record profit by selling user data to brokers: it’s a win for both parties.

    • TubularTittyFrog@lemmy.world · 5 days ago

      it creates perceived shareholder value in an emerging market. that is its purpose.

      its utility is not for the end-user. it’s something for shareholders to invest in, and for companies to push in an attempt to generate shareholder interest. It’s to raise the stock price.

      And like all speculative assets… nobody will care about the returns on it, until they do. And once those returns don’t materialize… poof goes the market.

      Just like they did with all the speculative investment bubbles based on insane theories.

    • realitista@lemmus.orgOP · 5 days ago

      This is how I felt about it a year ago. But it has gotten so much better since then. It automates a lot of time consuming tasks for me now. I mean I’ve probably only saved 100 hours using it this year but the number is going up rapidly. It’s 100 more than it saved me last year.

  • _cnt0@sh.itjust.works · 7 days ago

    With AI, now it does the thinking for you […]

    No, it doesn’t. It’s just mimicry. Autocomplete on steroids.

    • Xella@lemmy.world · 5 days ago

      My father is convinced that humans and dinosaurs coexisted and told me that ai proved that to him. So… people do let it think for them.

    • realitista@lemmus.orgOP · 7 days ago

      This was true last year. But they are cranking along the ARC-AGI benchmarks designed specifically to test the kind of things that cannot be done by just regurgitating training data.

      On GPT 3 I was getting a lot of hallucinations and wrong answers. On the current version of Gemini, I really haven’t been able to detect any errors in things I’ve asked it. They are doing math correctly now, researching things well and putting together thoughts correctly. Even photos that I couldn’t get old models to generate now are coming back pretty much exactly as I ask.

      I was sort of holding out hope that LLMs would peak somewhere just below being really useful. But with RAG and agentic approaches, it seems they will sidestep the vast majority of problems that LLMs have on their own and be able to put together something that is better than even very good humans at most tasks.

      I hope I’m wrong, but it’s getting pretty hard to keep banking on the old narrative that they’re just fancy autocomplete that can’t think.

          • pinball_wizard@lemmy.zip · 7 days ago

            was dotcom this annoying too?

            Surprisingly, it was not this annoying.

            It was very annoying, but at least there was an end in sight, and some of it was useful.

            We all knew that http://www.only-socks-and-only-for-cats.com/ was going away, but eBay was still pretty great.

            In contrast, we’re all standing around today looking at many times the world’s GDP being bet on a pretty good autocomplete algorithm waking up and becoming fully sentient.

            It feels like a different level of irrational.

          • hitmyspot@aussie.zone · 7 days ago

            The dot-com bubble was optimistic; the AI bubble is pessimistic. People thought their lives would improve due to improved communication and efficiency. The internet was seen as a positive thing. The dot-com bubble was more about monetizing it, but that wasn’t the zeitgeist. With AI, people don’t see many benefits and are aware its purpose is to take their jobs.

            With the dot com bubble, it was mainly mom and pop investors that were worst off, but many companies died. With AI bubble it seems like it’s the companies that will do worst when it crashes. Obviously, it affects everyone, but this skews more to the 1%. So hopefully it’s a lesson on greed. Unlikely though.

      • Cevilia (she/they/…)@lemmy.blahaj.zone · 7 days ago

        I’m pleased to inform you that you are wrong.

        A large language model works by predicting the statistically-likely next token in a string of tokens, and repeating until it’s statistically-likely that its response has finished.

        You can think of a token as a word but in reality tokens can be individual characters, parts of words, whole words, or multiple words in sequence.

        The only addition these “agentic” models have is special purpose tokens. One that means “launch program”, for example.

        That’s literally how it works.

        AI. Cannot. Think.
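
        The loop described above can be sketched in a few lines of Python. This is a toy illustration, not a real model: a hard-coded bigram table (`BIGRAM_PROBS`, a made-up name) stands in for the neural network, and greedy decoding stands in for sampling.

        ```python
        # Toy sketch of next-token prediction: pick the most likely
        # successor token, append it, repeat until an end token appears.
        # A real LLM replaces this lookup table with a network over a
        # vocabulary of ~100k tokens.
        BIGRAM_PROBS = {
            "<start>": {"the": 0.6, "a": 0.4},
            "the": {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
            "cat": {"<end>": 1.0},
            "dog": {"<end>": 1.0},
        }

        def next_token(prev: str) -> str:
            """Return the statistically most likely next token (greedy)."""
            probs = BIGRAM_PROBS[prev]
            return max(probs, key=probs.get)

        def generate() -> list[str]:
            """Repeat next-token prediction until the end token is produced."""
            tokens = ["<start>"]
            while tokens[-1] != "<end>":
                tokens.append(next_token(tokens[-1]))
            return tokens[1:-1]

        print(generate())  # → ['the', 'cat']
        ```

        An “agentic” special-purpose token would just be one more vocabulary entry that the surrounding harness intercepts and acts on.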

        • realitista@lemmus.orgOP · 6 days ago

          …And what about non-LLM models like diffusion models, VL-JEPA, SSMs, VLAs, SNNs? Just because you are ignorant of what’s happening in the industry and repeating a narrative that worked two years ago doesn’t make it true.

          And even with LLMs, if they aren’t “thinking” but produce results as good as or better than real human “thinking” in major domains, does it even matter? The fact is that there will be many types of models working in very different ways, and together they will beat humans at tasks that are uniquely human.

          Go learn about ARC-AGI and see the progress being made there. Yes, it will take a few more iterations of the benchmark to really challenge humans at the most human tasks, but at the rate they are going that’s only a few years.

          Or just stay ignorant and keep repeating your little mantra so that you feel okay. It won’t change what actually happens.

          • lad@programming.dev · 5 days ago

            Yeah those also can’t think, and it will not change soon

            The real problem though is not if LLM can think or not, it’s that people will interact with it as if it can, and will let it do the decision making even if it’s not far from throwing dice

            • realitista@lemmus.orgOP · 5 days ago

              We don’t even know what “thinking” really is so that is just semantics. If it performs as well or better than humans at certain tasks, it really doesn’t matter if it’s “thinking” or not.

              I don’t think people primarily want to use it for decision making anyway. For me it just turbocharges research: it compiles stuff quickly from many sources, writes code for small modules quite well, generates images for presentations, does more complex data munging from spreadsheets, and even saved me a bunch of time by near-perfectly converting a 50-page handwritten ledger to Excel…

              None of that requires decision making, but it saves a bunch of time. Honestly I’ve never asked it to make a decision so I have no idea how it would perform… I suspect it would more describe the pros and cons than actually try to decide something.

    • realitista@lemmus.orgOP · 5 days ago

      I’ve been using Linux steadily for the last 30 years, and yes, it’s still great. But it doesn’t really fill the niche that AI does.

    • realitista@lemmus.orgOP · 7 days ago

      It’s definitely taking some jobs. Not a huge amount yet, but it’s unfortunately still getting better at a pretty good clip.

        • realitista@lemmus.orgOP · 7 days ago

          Graphic artists, translators, and copywriters are losing jobs in droves. It’s expanding. I sell contact center software and it’s just kicking off in my industry, but it’s picking up.

          • AmbiguousProps@lemmy.today · 7 days ago

            Yeah, I can see it happening there, especially for graphic artists (though actual graphic design is much better than anything a model can currently spit out). Translation is surprising to me, because in my experience LLMs are actually kind of bad at translation, especially at sounding natural in the local dialect. So I might consider that one a case of dumb bosses who don’t know any better.

            I’m a DevOps engineer, and dumb bosses are absolutely firing people in my industry. However, our products have suffered the consequences, and they continue to get worse and less maintainable.

            • realitista@lemmus.orgOP · 6 days ago

              As someone who uses machine translation on a daily basis, I’ve watched it go from barely usable to as good as human translation for most tasks. It’s really uncommon that I find issues with it anymore. And even if there is one issue per 1000 words or whatever, you can just have a human proofread it instead of translating the whole thing, which will reduce your headcount by 90%. But I think for most things no one calls translators anymore; they just go to Google Translate. Translators now mostly do real-time voice translation, not documents, which used to be most of their work.

              These things creep up on you. They start out not very good, and you get comfortable that they don’t work that well; then over time they start working as well as or better than humans, and suddenly there’s really no reason to employ someone for it.

    • solomonschuler@lemmy.zip · 5 days ago

      No, it’s part of these companies’ business strategy. These tech companies fire an unprecedented number of employees (primarily from the mass hiring during 2020), make a post saying they fired those employees because of AI improvements, see their stock price rise (ultimately inflating it and creating an economic bubble), and rinse and repeat with the next wave of potential hires who are sucking their employer’s dick a little too hard.

      It’s unethical, it violates any and all job security, and I don’t want to be a part of that toxic workplace. It’s ironic I’m saying this, because a few years ago if I’d gotten a job at Google I would have said “fuck yeah motherfucker, count me in,” and now I just don’t want to work for them. There are far better companies doing interesting and valuable work to benefit society than these hipster douchebags.

  • whotookkarl@lemmy.dbzer0.com · 5 days ago

    Computers can and still do all that, you just need some mental discipline to avoid the cognitive equivalent of fast food being forced into your attention via AI slop and social media demagogues over corporate owned messaging systems.

    • burned_das_brot@feddit.org · 5 days ago

      But how many people are actually doing that? I reckon most people (myself included) don’t realise the extent of the influence social media and other media outlets have on them, let alone act on that knowledge.

    • realitista@lemmus.orgOP · 7 days ago

      It’s a real dilemma unfortunately. On one hand if you don’t get used to using it you will be at a massive disadvantage in whatever’s left of a job market in the future. On the other hand if you do get used to using it you will likely be atrophying parts of your brain and giving money to exactly the machine that will destroy us.

  • barryamelton@lemmy.world · 7 days ago

    Unless you were a hard GNU fan when you were a kid, it was the same process of giving power to billionaires. Just that now it sits on 50 years of wins for the billionaires’ side. So it’s closer to the endgame.

    • realitista@lemmus.orgOP · 7 days ago

      I’ve been a GNU fan since 1995. And yes, while buying software did make some billionaires, I never felt like it was taking away my abilities or autonomy or freedom until now. Back then I felt like it was giving me more of those things.

        • realitista@lemmus.orgOP · 5 days ago

          I don’t know. Looking back, I don’t think I gave up my abilities or allowed billionaires to replace me by using tech until LLMs came along.

  • Apytele@sh.itjust.works · 7 days ago

    If AI can even half-ass your job, you barely had one to begin with. All of us healthcare workers and the tradies are still making a half-decent wage for real work, just like we always have. And the food service and sanitation workers still aren’t doing the absolute best, but they’re not hurting for work either. I’m not going to tell you I like the way my work is valued under capitalism, but at least I’m tangibly benefitting other humans.

    • realitista@lemmus.orgOP · 7 days ago

      I don’t think it’s fair to say that just because you were a commercial graphic designer or translator or copywriter that you were doing bullshit work that was barely worth being called work.

      Yes, healthcare is a very commendable line of work, no doubt, but we will see radiologists out of work fairly soon IMO, as well as anyone who interprets lab results, and very likely those who make diagnoses of all types. These are all things that AI will likely be doing better if they aren’t already.

      Physical care will take longer and won’t be replaced until we have AI robots, but the gains there are happening fast too. We may only have another decade or so until we see a lot of that stuff being automated. It’s really hard to tell how fast this will all happen. Things do tend to happen slower than the hype around them, but the progress that’s happening every year is pretty staggering if you are really tracking it. I’d love to think that my job which requires mostly creative ways of dealing with people and negotiation is safe for some time, but I’m really doubting that I can make it the next 12 years I need to until retirement without some disruption.

        • realitista@lemmus.orgOP · 5 days ago

          Maybe you don’t, but I have a father in assisted living and know for a fact that there are an awful lot of nursing jobs that don’t look particularly different than this. AI will start with the hardest diagnosis tasks first, and at some point start doing the easiest physical ones. Then it will eat away the stuff in the middle gradually. This is one of the most needed areas for non human labor so it will be one of the most heavily focused on.

  • Clent@lemmy.dbzer0.com · 5 days ago

    The LLM is absolutely not doing any thinking for you. It can, at best, surface someone else’s thinking based on a prompt.

    Anyone that confuses what these things do with thinking is on a path towards psychosis.

    Every 4 hours spent talking to one of these things is indistinguishable from talking to oneself for 40 hours. It amplifies one’s inner thoughts in ways that previously only a schizophrenic could enjoy.

    • realitista@lemmus.orgOP · 5 days ago

      It absolutely can replace hours of research or programming or drawing with a quick prompt. It does this for me often, and as of the latest Gemini it’s pretty much always right, too.

      • Clent@lemmy.dbzer0.com · 4 days ago

        None of that is it doing the thinking for you.

        LLMs can be used as a research tool, but they require a human to apply critical thinking to the output to be useful.

        • realitista@lemmus.orgOP · 4 days ago

          It definitely replaced a lot of thinking with vibe coding. And research also requires thinking. Maybe not super intense thought, but it’s thought all the same. Artists would also be pretty annoyed to hear that they are doing a brainless activity.

          • Clent@lemmy.dbzer0.com · 4 days ago

            Literally not thinking.

            Artists would also be pretty annoyed to hear that they are doing a brainless activity.

            I see you lack critical thinking skills so I understand the confusion. Unfortunately, I can’t fix stupid.

              • Clent@lemmy.dbzer0.com · 4 days ago

                You failed to prove your assertions.

                Ask an LLM: it will tell you it’s not capable of thinking, only of approximating thinking.

  • ji59@hilariouschaos.com · 7 days ago

    I have to disagree. The only reason the computer expanded your mind is that you were curious about it. And that is still true even with AI. For example, people don’t have to learn to solve derivatives or complex equations; Wolfram Alpha can do that for them. Also, learning grammar isn’t that important with spell-checkers. Or instead of learning foreign languages, you can just use automatic translators. Just like computers or the internet, AI makes things easier for people who don’t want to learn. But it also makes learning easier. Instead of going through blog posts, you have the information summarized in one place (although maybe incorrectly). And you can even ask the AI questions to better understand or debate the topic, instantly and without being ridiculed by other people for stupid questions.

    And just to annoy some people: I am a programmer, but I like the theory much more than the coding. So, for example, I refuse to memorize the whole numpy library. But with AI I don’t have to; it just recommends the right weird function that does the same as my own ugly code. Of course I check the code and understand every line so I can do it myself next time.
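
    As a hypothetical example of the kind of substitution meant here (both function names and the task are made up for illustration), an assistant might point from a hand-rolled loop to `np.convolve`:

    ```python
    import numpy as np

    def moving_sum_ugly(xs, window):
        # The hand-rolled version: slide a window and sum each slice.
        out = []
        for i in range(len(xs) - window + 1):
            out.append(sum(xs[i:i + window]))
        return out

    def moving_sum_numpy(xs, window):
        # The "weird function" version: convolving with a ones-kernel
        # yields the same sliding-window sums in one call.
        return np.convolve(xs, np.ones(window, dtype=int), mode="valid").tolist()

    data = [1, 2, 3, 4, 5]
    print(moving_sum_ugly(data, 2))   # → [3, 5, 7, 9]
    print(moving_sum_numpy(data, 2))  # → [3, 5, 7, 9]
    ```

    Both give the same answer; the numpy one is just the version you’d never have found without knowing the library (or asking something that does).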

    • realitista@lemmus.orgOP · 7 days ago

      Yes, I remember the day I quit the football team and started hanging out with the nerds. I lost a lot of friends and coolness points, but I was so much happier sitting in the library for lunch playing with computers.

      • YeahIgotskills2@lemmy.world · 7 days ago

        Don’t get me wrong - I spent many a lunchtime in the library and the computer lab. Loved it. But by 16 I had to repress it and get into drinking and music (which, honestly, wasn’t hard), just to fit in and meet girls.

        The taboo of IT stayed with me, so I never openly discussed my interest in it.

        Happily, online life has been normalised and teens and adults game all the time without it being seen as odd.

        Ironically, despite being into 16-bit games in my teens, I never really allowed myself to get into gaming in the succeeding years.

        I regret that now, as I reckon I missed out on a Golden age of gaming that I would have enjoyed had I just been born a decade or so later and been less uptight about what people think.

        • realitista@lemmus.orgOP · 7 days ago

          Yeah, I also started partying in my teens and met lots of girls… and also kept my IT hobby mostly to myself. But it gave me a great career, and as you say, it’s fully normalized now, so there’s no need to hide it, though outside of forums like Lemmy there aren’t a huge number of 50-year-olds who are into gaming and home automation like I am.

  • zombiebot@piefed.social · 7 days ago

    Librarian here, can confirm.

    I started my Master’s in Library and Information Science in 2010. We were told not to worry about the internet making us obsolete because we would be needed to teach information literacy.

    Information literacy turned out to be something people didn’t want. They wanted to be told what to think, not taught skills to think for themselves.

    It’s been the single greatest and most expensive disappointment of my life.

      • TubularTittyFrog@lemmy.world · 5 days ago

        classes in philosophy, literature, politics, and digital media. typically.

        you know, those evil humanities that are destroying society… because they don’t produce ‘value’.

    • Strider@lemmy.world · 5 days ago

      They wanted to be told what to think, not taught skills to think for themselves.

      This must be one of the wisest statements I ever read on the internet.

    • jimmy90@lemmy.world · 6 days ago

      if people don’t want to use computers to expand their minds and empower themselves and others, then obviously they won’t get those benefits

      you can still use computers to do those things