It always feels like some form of VR tech comes out with some sort of fanfare and with a promise it will take over the world, but it never does.

  • kurmudgeon@lemmy.world
    link
    fedilink
    English
    arrow-up
    2
    ·
    46 minutes ago

    Twitter/X. It is not a free speech platform. Give it up and move on to something else. Stop supporting these billionaires and stop giving them your time.

  • unknown1234_5@kbin.earth
    link
    fedilink
    arrow-up
    1
    ·
    1 hour ago

    vr is useful but its too wrapped up in corporate bs to really take off for now. its dominated by companies obsessed with ai and by pathetic startups that never finish a product. it just needs meta to be less dominant.

    • lucullus@discuss.tchncs.de
      link
      fedilink
      arrow-up
      2
      ·
      51 minutes ago

      I sometimes wonder what would happen to VR, if it would get the same situation as 3D printing. That took of, because some patents where expiring and it was then easy to build up your own version. We had/have many open source/FOSS printers and nearly all the companies currently in this space wouldn’t exist, if it werent for the many open source developments and the extention of the market, that they created. I know this is highly unprobable for VR, but one should be allowed to dream

  • early_riser@lemmy.world
    link
    fedilink
    arrow-up
    9
    arrow-down
    3
    ·
    3 hours ago

    I’m going to get downvoted for this

    Open source has its place, but the FOSS community needs to wake up to the fact that documentation, UX, ergonomics, and (especially) accessibility aren’t just nice-to-haves. Every year has been “The Year of the Linux Desktop™” but it never takes off, and it never will until more people who aren’t developers get involved.

    • itflows@feddit.org
      link
      fedilink
      arrow-up
      1
      ·
      8 minutes ago

      Funny you make “missing documentation” an argument against open source and for closed source, as if the average Windows user reads any documentation or even the error messages properly.

      your comment is a joke.

    • VitoRobles@lemmy.today
      link
      fedilink
      English
      arrow-up
      2
      ·
      edit-2
      29 minutes ago

      Not here to downvote. But I will say there is some good changes as of the past five years.

      From a personal perspective: there’s a lot of GOOD open-source software that has great user experiences. VLC. Bitwarden. OBS. Joplin. Jitsi.

      Even WordPress (the new Blocks editor not the ugly classic stuff) in the past decade has a lot of thought and design for end users.

      For all the GIMP/Libre office software that just has backwards ass choices for UX, or those random terminal apps that require understanding the command line – they seem to be the ones everyone complains about and imprinted as “the face of open-source”. Which is a shame.

      There’s so much good open-source projects that really do focus on the casual non technical end user.

    • lucullus@discuss.tchncs.de
      link
      fedilink
      arrow-up
      1
      ·
      40 minutes ago

      While you generally have a point, the year of the linux desktop is not hindered by that. Distributions like Linux Mint, Ubuntu and the like are just as easy to install as Windows, the desktop environments preinstalled on them work very good and the software is more than sufficient for like 70% to 80% of people (not counting anything, that you cannot install with a single click from the app store/software center of the distribution.

      Though Linux is not the default. Windows is paying big time money to be the default. So why would “normal people” switch? Hell, most people will just stop messaging people instead of installing a different messenger on their phone. Installing a different OS on your PC/Notebook is a way bigger step than that.

      So probably we won’t get the “Year of the Linux Desktop”, unless someone outpays Microsoft for quite some time, or unless microsoft and Windows implode by themselves (not likely either)

    • Stegget@lemmy.world
      link
      fedilink
      arrow-up
      1
      ·
      23 minutes ago

      We have flying cars. They’re called planes, you can get a license for them and everything.

  • chicken@lemmy.dbzer0.com
    link
    fedilink
    arrow-up
    29
    ·
    6 hours ago

    “Smart” TVs. Somehow they have replaced normal televisions despite being barely usable, laggy, DRM infested garbage.

    • VitoRobles@lemmy.today
      link
      fedilink
      English
      arrow-up
      1
      ·
      28 minutes ago

      You’re not kidding. It’s pretty difficult to not buy them.

      It’s a $250 smart TV vs a $2000 non-infested TV.

    • Professorozone@lemmy.world
      link
      fedilink
      arrow-up
      4
      ·
      3 hours ago

      Man, I haven’t really faced this yet. My flat screen is a really old Panasonic plasma and it is"barely" smart. It came with a few apps on it. I ignore them and use it as a dumb monitor, running everything through my receiver instead. When it dies, I don’t know what I’ll do.

      • swab148@lemmy.dbzer0.com
        link
        fedilink
        arrow-up
        1
        ·
        35 minutes ago

        They’re more expensive, but check out commercial displays. They’re basically just big “dumb” TVs for businesses to display menus and whatnot, usually with a single HDMI and no sound, but those limitations can easily be bypassed with a stereo receiver.

    • early_riser@lemmy.world
      link
      fedilink
      arrow-up
      4
      ·
      3 hours ago

      The concept confuses and infuriates me. I’m just going to stick a game console or Blu-ray player on it, but you can’t buy a TV these days that doesn’t have a bloated “smart” interface. The solution, for me at least, is a computer monitor. I don’t need or want a very large screen, and a monitor does exactly one thing, and that’s show me what I’ve plugged into it.

    • RedGreenBlue@lemmy.zip
      link
      fedilink
      arrow-up
      16
      ·
      6 hours ago

      They are surveilance- and ad delivery platorms. The user experience is as bad as the consumer can tolerate. They work as intended.

      • chicken@lemmy.dbzer0.com
        link
        fedilink
        arrow-up
        5
        ·
        6 hours ago

        I don’t buy it, they would be better at whatever nefarious crap if they didn’t take a full second to navigate between menu options, or had a UI designed by someone competent. Even people who have subscriptions to the services the TV is a gateway to have a hard time figuring out how to use them. These things aren’t even good at exploitation, they are decaying technology.

  • Ash@piefed.social
    link
    fedilink
    English
    arrow-up
    34
    ·
    7 hours ago

    So I have a contentious one. Quantum computers. (I am actually a physicist, and specialised in qunatum back in uni days, but now work mainly in in medical and nuclear physics.)
    Most of the “working”: quantum computers are experiments where the outcome has already been decided and the factoring they do can be performed on 8 bit computers or even a dog.
    https://eprint.iacr.org/2025/1237.pdf “Replication of Quantum Factorisation Records with an
    8-bit Home Computer, an Abacus, and a Dog”
    This paper is a hilarious explanation of the tricks being pulled to get published. But then again, it is a nascent technology, and like fusion, I believe it will one day be a world changing technology, but in it’s current state is a failure on account of the bullshittery being published. Then again such publications are still useful in the grand scheme of developing the technology, hence why the article I cited is good humoured but still making the point that we need to improve our standards. Plus who doesnt like it when an article includes dogs.
    Anyway, my point is, some technologies will be constant failures, but that doesn’t mean we should stop.
    A cure for cancer is a perfect example. Research has been going on for a century and cumulatively amassed 100s of billions of dollars of funding. It has failed constantly to find a cure, but our understanding of the disease, treatment, how to conduct research, and prevention have all massively increased.

    • Ediacarium@feddit.org
      link
      fedilink
      arrow-up
      12
      ·
      6 hours ago

      They didn’t thank Scribble (the dog) in their acknowledgements section. 1/10 paper, would only look at the contained dog picture

    • _cnt0@sh.itjust.works
      link
      fedilink
      arrow-up
      10
      ·
      6 hours ago

      A cure for cancer is a perfect example. Research has been going on for a century and cumulatively amassed 100s of billions of dollars of funding. It has failed constantly to find a cure, but our understanding of the disease, treatment, how to conduct research, and prevention have all massively increased.

      Cancer != cancer. There are hundreds of types of cancer. Many types meant certain death 50 years ago and can be treated and cured now with high reliability. “The” cure for cancer likely doesn’t exist because “the” cancer is not a singular thing, but a categorization for a type of diseases.

      • HubertManne@piefed.social
        link
        fedilink
        English
        arrow-up
        2
        ·
        3 hours ago

        yeah it is like saying a cure for virus or a cure for bacteria. Its like why we don’t have a cold vaccine and flue ones have to be redone every year.

      • baggachipz@sh.itjust.works
        link
        fedilink
        arrow-up
        3
        ·
        3 hours ago

        Thank you for helping educate on this. I live in the best time in history to have the cancer I have. I’ll be able to live a pretty full life with what would have been a steady decline into an immobile death, were this 30 years ago.

      • Ash@piefed.social
        link
        fedilink
        English
        arrow-up
        4
        ·
        5 hours ago

        Yes of course. There are also many types of quantum computer and applications, multiple types of fusion, and cancers.

      • Tar_Alcaran@sh.itjust.works
        link
        fedilink
        arrow-up
        7
        ·
        5 hours ago

        Exactly, a “cure for cancer” is like “stopping accidents”.

        There’s still cancer, and there are still accidents. But on both fields it’s much better to be alive in 2026 than in 1926

    • tal@lemmy.today
      link
      fedilink
      English
      arrow-up
      4
      ·
      7 hours ago

      We have also produced treatments that work to some extent for some forms of cancer.

      We don’t have a 100% reliable silver bullet that deals with everything with a simple five minute shot, but…

      • Tar_Alcaran@sh.itjust.works
        link
        fedilink
        arrow-up
        7
        ·
        6 hours ago

        AI is great, LLMs are useless.

        They’re massively expensive, yet nobody is willing to pay for it, so it’s a gigantic money burning machine.

        They create inconsistent results by their very nature, so you can, definitionally, never rely on them.

        It’s an inherent safety nightmare because it can’t, by its nature, distinguish between instructions and data.

        None of the company desperately trying to sell LLMs have even an idea of how to ever make a profit off of these things.

        • Perspectivist@feddit.uk
          link
          fedilink
          arrow-up
          6
          arrow-down
          6
          ·
          6 hours ago

          LLMs are AI. ChatGPT alone has over 800 million weekly users. If just one percent of them are paying, that’s 8 million paying customers. That’s not “nobody.”

          That sheer volume of weekly users also shows the demand is clearly there, so I don’t get where the “useless” claim comes from. I use one to correct my writing all the time - including this very post - and it does a pretty damn good job at it.

          Relying on an LLM for factual answers is a user error, not a failure of the underlying technology. An LLM is a chatbot that generates natural-sounding language. It was never designed to spit out facts. The fact that it often does anyway is honestly kind of amazing - but that’s a happy accident, not an intentional design choice.

          • Tar_Alcaran@sh.itjust.works
            link
            fedilink
            arrow-up
            7
            ·
            edit-2
            5 hours ago

            ChatGPT alone has over 800 million weekly users. If just one percent of them are paying, that’s 8 million paying customers. That’s not “nobody.”

            Yes, it is. A 1% conversion rate is utterly pathetic and OpenAI should be covering its face in embarrassment if that’s. I think WinRAR might have a worse conversion rate, but I can’t think of any legitimate company that bad. 5% would be a reason to cry openly and beg for more people.

            Edit: it seems like reality is closer to 2%, or 4% if you include the legacy 1 dollar subscribers.

            That sheer volume of weekly users also shows the demand is clearly there,

            Demand is based on cost. OpenAI is losing money on even its most expensive subscriptions, including the 230 euro pro subscription. Would you use it if you had to pay 10 bucks per day? Would anyone else?

            If they handed out free overcooked rice delivered to your door, there would be a massive demand for overcooked rice. If they charged you a hundred bucks per month, demand would plummet.

            Relying on an LLM for factual answers is a user error, not a failure of the underlying technology.

            That’s literally what it’s being marketed as. It’s on literally every single page openAI and its competitors publish. It’s the only remotely marketable usecase they have, because these things are insanely expensive to run, and they’re only getting MORE expensive.

      • Rothe@piefed.social
        link
        fedilink
        English
        arrow-up
        12
        ·
        7 hours ago

        It can’t really reliably do any of the stuff which it is marketed as being able to do, and it is a huge security risk. Not to mention the huge climate issues for something with so little gain.

      • Goldholz @lemmy.blahaj.zone
        link
        fedilink
        arrow-up
        13
        ·
        8 hours ago

        The cost to maintain it? The enviormental impact? The impact its enormouse energie consumption on everyday people (rising costs imensly)?

      • Chais@sh.itjust.works
        link
        fedilink
        arrow-up
        5
        ·
        7 hours ago

        It’s quite bad at what we’re told it’s supposed to do (producing reliably correct responses), hallucinating up to 40% of the time.
        It’s also quite bad at not doing what it’s not supposed to. Meaning the “guardrails” that are supposed to prevent it from giving harmful information can usually be circumvented by rephrasing the prompt or some form of “social” engineering.
        And on top of all that we don’t actually understand how they work in a fundamental level. We don’t know how LLMs “reason” and there’s every reason to assume they don’t actually understand what they’re saying. Any attempt to have the LLM explain its reasoning is of course for naught, as the same logic applies. It just makes up something that approximately sounds like a suitable line of reasoning.
        Even for comparatively trivial networks, like the ones used for written number recognition, that we can visualise entirely, it’s difficult to tell how the conclusion is reached. Some neurons seem to detect certain patterns, others seem to be just noise.

        • Perspectivist@feddit.uk
          link
          fedilink
          arrow-up
          4
          arrow-down
          3
          ·
          6 hours ago

          You seem to be focusing on LLMs specifically, which are just one subcategory of AI. Those terms aren’t synonymous.

          The main issue here seems to be mostly a failure to meet user expectations rather than the underlying technology failing at what it’s actually designed for. LLM stands for Large Language Model. It generates natural-sounding responses to prompts - and it does this exceptionally well.

          If people treat it like AGI - which it’s not - then of course it’ll let them down. That’s like cursing cruise control for driving you into a ditch. It’s actually kind of amazing that an LLM gets any answers right at all. That’s just a side effect of being trained on a ton of correct information - not what it’s designed to do. So it’s like cruise control that’s also a somewhat decent driver, people forget what it really is, start relying on it for steering, and then complain their “autopilot” failed when all they ever had was cruise control.

          I don’t follow AI company claims super closely so I can’t comment much on that. All I know is plenty of them have said reaching AGI is their end goal, but I haven’t heard anyone actually claim their LLM is generally intelligent.

          • Chais@sh.itjust.works
            link
            fedilink
            arrow-up
            4
            ·
            6 hours ago

            I know they’re not synonymous. But at some point someone left the marketing monkeys in charge of communication.
            My point is that our current “AI” is inadequate at what we’re told is its purpose and should it ever become adequate (which the current architecture shows no sign of being capable) we’re in a lot of trouble because then we’ll have no way to control an intelligence vastly superior to our own.

            So our current position on that journey is bad and the stated destination is undesirable, so it would be in our net interest to stop walking.

          • Tar_Alcaran@sh.itjust.works
            link
            fedilink
            arrow-up
            4
            ·
            edit-2
            6 hours ago

            If people treat it like AGI - which it’s not - then of course it’ll let them down.

            People treat it like the thing it’s being sold as. The LLM boosters are desperately trying to sell LLMs as coworkers and assistants and problemsolvers.

    • gabelstapler@feddit.org
      link
      fedilink
      arrow-up
      10
      arrow-down
      2
      ·
      7 hours ago

      Please don’t mix AI with LLMs. LLMs are surely overhyped and I guess they will never reach the quality they promise. AI on the other hand is used in many aspects, successfully. For many years protein folding was extremely difficult. Google threw AI at it, now it can be regarded as a solved problem.

      • Captain Aggravated@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        7
        ·
        6 hours ago

        I think there’s a pretty major failure in overall AI/Machine learning as well. Elsewhere on Lemmy I saw someone talking about how the neural processor built into their laptop essentially doesn’t function. Like there isn’t and will never be any software that runs on it because it’s badly implemented and badly or not at all documented.

      • Perspectivist@feddit.uk
        link
        fedilink
        arrow-up
        4
        ·
        6 hours ago

        LLM is an AI, but the terms aren’t synonymous. AI describes a broad field - LLM is only one subcategory of it.

        • Captain Aggravated@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          5
          ·
          4 hours ago

          AI is essentially a useless term because it has been used to describe everything from an LLM to a single if statement in the code of a video game.

          • Perspectivist@feddit.uk
            link
            fedilink
            arrow-up
            2
            arrow-down
            1
            ·
            4 hours ago

            “Plant” can describe anything from grass to giant redwoods, but it’s not a useless term. We have more specific names for all the subspecies of plants - and the same goes for AI.

  • The shit with VR, specifically, is baffling to me. We have pretty good tech for it and yet nobody seems to know what to actually do with them. Hardware is at a good starting point, but the software is mostly bullshit.

    • 87Six@lemmy.zip
      link
      fedilink
      arrow-up
      3
      ·
      3 hours ago

      I shit you not I know a University where they have AR (with haptic gloves, not just VR) headsets worth around 70k euro and nobody uses them.

      A student watched porn in class on one of them though, so I guess the taxpayer investment was worth it…

      • Barbecue Cowboy@lemmy.dbzer0.com
        cake
        link
        fedilink
        English
        arrow-up
        3
        ·
        edit-2
        2 hours ago

        Honestly, immersive video is hands down the best use of VR today in my opinion and like no one outside of porn is doing anything interesting with it.

        Dude had the right priorities.

    • FellowEnt@sh.itjust.works
      link
      fedilink
      arrow-up
      2
      ·
      4 hours ago

      Gaming. It has completely replaced flatscreen gaming for me. Playing FPS on a screen feels so ridiculous to me now.

    • Zarxrax@lemmy.world
      link
      fedilink
      arrow-up
      2
      ·
      5 hours ago

      VR has given us an incredible amount of funny videos where the user will decide to run full speed into a wall or something. That and “hoverboards” have kept America’s Funniest Home Videos on the air.

    • Techlos@lemmy.dbzer0.com
      link
      fedilink
      arrow-up
      6
      ·
      8 hours ago

      For sim racing games, it’s actually pretty helpful for being able to look around, check mirrors etc.

      Outside of that it feels pretty gimmicky

    • chicken@lemmy.dbzer0.com
      link
      fedilink
      arrow-up
      2
      ·
      7 hours ago

      Beat Saber and VRChat are genuinely cool at least. I think what it comes down to is that the illusion of physically being somewhere is cool and can be useful for some stuff but not as cool as everyone used to assume. More abstract entertainment and art works basically just as good or better most of the time because it doesn’t require specialized hardware, acclimating to nausea, and learning novel control schemes.

    • YiddishMcSquidish@lemmy.today
      link
      fedilink
      English
      arrow-up
      2
      ·
      8 hours ago

      I remember setting up a VR og Skyrim with an old android and some infrared LEDs. It was pretty cool at first, but I couldn’t play more than an hour without some kinda discomfort. I’ve tried some of the newer guys, and aside from being much higher def, it’s largely the same thing. I don’t want or need it in my life.

    • Korhaka@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      12
      ·
      9 hours ago

      I think there is an open source printer being created. Potentially has the chance at being the only printer that isn’t a pile of shit.

      • Barbecue Cowboy@lemmy.dbzer0.com
        cake
        link
        fedilink
        English
        arrow-up
        3
        ·
        edit-2
        2 hours ago

        I’ve seen that project. Complete radio silence since the announcement and zero path to releasing anything.

        It really sucks.

        • Passerby6497@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          ·
          edit-2
          3 hours ago

          Keyword in that is ‘old’ - new printers are shit, and Brother printers have been pretty bad in my experience as well. Most any new printers I’ve touched is just terrible.

          But then again, I stan my old HP color LJ that I got for free from the early 2010s that I got when my employer went to a printer contract service and just dumped all the printers they currently had. That fucker runs like a champ and has let me put in after market toner carts without much complaint and 0 printing issues. Modern HP printers only belong on fire.

    • tal@lemmy.today
      link
      fedilink
      English
      arrow-up
      6
      ·
      edit-2
      9 hours ago

      I have a black and white laser printer — a Brother, FWIW — that works great. It sits there and when I print the occasional document, flips on and quietly and quickly does its thing. I remember printers in past decades. Paper jams. Continuous-tractor feed paper having the tractor feeds rip free in the printer. Slow printing. Loud printing. Prints that smeared. Clogging ink nozzles on inkjets.

      It replaced a previous Apple black-and-white laser printer from…probably the early 1990s that I initially got used which also worked fine and worked until the day I threw it out — I just wanted more resolution, which current laser printers could do.

      The only thing that I can really beat the Brother up for is maybe that, like many laser printers, to cut costs on the power supply, it has a huge power spike in what it consumes when it initially comes on; I’d rather just pay for a better power supply. But it’s not enough for me to care that much about it, and if I really want to, I can plug it into power regulation hardware.

      It’s not a photo printer, and so if someone wants to print photos, I can appreciate that a laser printer isn’t ideal for that, but…I also never print photos, and if I did at some point, I’d probably just hit a print shop.

  • BlameThePeacock@lemmy.ca
    link
    fedilink
    English
    arrow-up
    33
    ·
    11 hours ago

    The big one would be viable nuclear fusion, we’ve been trying to figure it out and spending money on it for like 80 years now.

    That being said, there’s actually a lot of verified progress on it lately by reputable organizations and international teams.

  • tal@lemmy.today
    link
    fedilink
    English
    arrow-up
    16
    ·
    11 hours ago

    Flying cars. The idea has intuitive appeal — just drive like normal, but most congestion problems go away!

    https://en.wikipedia.org/wiki/Flying_car

    We’ve made them, but the tradeoffs that you have to make to get a good road vehicle that is also a good aircraft are very large. The benefits of having a dual-mode vehicle are comparatively limited. I think that absent some kind of dramatic technological revolution, like, I don’t know, making the things out of nanites, we’ll just always be better off with dedicated vehicles of the first sort or the second.

    Maybe we could have call-on-demand aircraft that could air-ferry ground vehicles, but I think that with something on the order of current technology, that’s probably as close as we’ll get.

    • bob_lemon@feddit.org
      link
      fedilink
      arrow-up
      14
      ·
      9 hours ago

      Flying cars lose al appeal the moment you encounter other drivers on the road. Just imagine that, but flying.

    • Ziggurat@jlai.lu
      link
      fedilink
      arrow-up
      7
      ·
      9 hours ago

      The thing is how is a flying car different from a aircraft?

      We have (ultra) light aircraft not much more expensive than a good car, we have helicopter with vtol abilities. Licencing isn’t that more complex than for a car.

      The problem are the maintenance cost as a failure would be dramatic, and the noise meaning people don’t want them over their bacxyard/balcony

      • ageedizzle@piefed.ca
        link
        fedilink
        English
        arrow-up
        2
        ·
        4 hours ago

        how is a flying car different from a aircraft?

        If we had as many aircraft in the sky as we do cars on the road, the changes of deadly collisions in the air would go up by a lot. It would become crowded up there, with a lot of potential obstacles to bump into.

    • flabbergast@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      8 hours ago

      I don’t think any government will ever allow flying cars.
      Too prone for accidents, and way too much freedom.

    • Limerance@piefed.social
      link
      fedilink
      English
      arrow-up
      2
      ·
      8 hours ago

      There are many models of flying cars. They usually are bad cars and bad planes and very expensive. Only good for niche wealthy enthusiasts.

  • Ziggurat@jlai.lu
    link
    fedilink
    arrow-up
    3
    ·
    7 hours ago

    Nobody said supra conductors ?

    While being used in some niche field (mostly MRI) the ultra-low temperature needed means there is only very few case where the price of cryo-cooler offset the elecricity costs