Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 36 Posts
  • 4.54K Comments
Joined 2 years ago
Cake day: October 4th, 2023

  • I just am not sold that there’s enough of a market, not with the current games and current prices.

    There are several different types of HMDs out there. I haven’t seen anyone really break them up into classes, but if I were to take a stab at it:

    • VR gaming goggles. These focus on providing an expansive image that fills the peripheral vision, and cut one off from the world. The Valve Index would be an example.

    • AR goggles. I personally don’t like the term. It’s not that augmented reality isn’t a real thing, but that we don’t really have the software out there to do AR things, and so while theoretically these could be used for augmented reality, that’s not their actual, 2026 use case. But, since the industry uses it, I will. These tend to display an image covering part of one’s visual field which one can see around and maybe through. Xreal’s offerings are an example.

    • HUD glasses. These have a much more limited display, or maybe none at all. These are aimed at letting one record what one is looking at less-obtrusively, maybe throw up notifications from a phone silently, things like the Ray-Ban Meta.

    • Movie-viewers. These things are designed around isolation, but don’t need head-tracking. They may be fine with relatively-low resolution or sharpness. A Royole Moon, for example.

    For me, the most-exciting prospect for HMDs is the idea of a monitor replacement. That is, I’d be most-interested in something that does basically what my existing displays do, but in a lower-power, more-portable, more-private form. If it can also do VR, that’d be frosting on the cake, but I’m really principally interested in something that can be a traditional monitor, but better.

    For me, at least, none of the use cases for the above classes of HMDs are super-compelling.

    For movie-viewing, it just isn’t that often that I feel that I need more isolation than I can already get to watch movies. A computer monitor in a dark room is just fine. I can also put things on a TV screen or a projector that I already have sitting around and generally don’t bother to turn on. If I want to block out more outside sound, I might put on headphones, but I just don’t need more than that. Maybe it’d matter for someone who is required to be in noisy, bright environments, but it just isn’t a real need for me.

    For HUD glasses, I don’t really have a need for more notifications in my field of vision — I don’t need to give my phone a HUD.

    AR could be interesting if the augmented reality software library actually existed, but in 2026, it really doesn’t. Today, AR glasses are mostly used, as best I can tell, as an attempt at a monitor replacement, but the angular pixel density on them is poor compared to conventional displays. Like, in terms of the actual data that I can shove into my eyeballs in the center of my visual field, which is what matters for things like text, I’m better off with conventional monitors in 2026.

    VR gaming could be interesting, but the benefits just aren’t that massive for the games that I play. You get a wider field of view than a traditional display offers, and the ability to use your head as an input for camera control. There are some genres that I think it works well with today, like flight sims. If you were a really serious flight-simmer, I could see it making sense. But most genres just don’t benefit that much from it. Yeah, okay, you can play Tetris Effect: Connected in VR, but it doesn’t really change the game all that much.

    A lot of the VR-enabled titles out there are (understandably, given the size of the market) not really principally aimed at taking advantage of the goggles. You’re basically getting a port of a game probably designed for a keyboard and mouse, with some tradeoffs.

    And for VR, one has to deal with more setup time, software and hardware issues, and the cost. I’m not terribly price sensitive on gaming compared to most, but if I’m getting a peripheral for, oh, say, $1k, I have to ask how seriously I’m going to play any of the games that I’m buying this hardware for. I have a HOTAS system with flight pedals; it mostly just gathers dust, because I don’t play many WW2 flight sims these days, and the flight sims out there today are mostly designed around thumbsticks. I don’t need to accumulate more dust-collectors like that. And with VR the hardware ages out pretty quickly. I can buy a conventional monitor today and it’ll still be more-or-less competitive for most uses probably ten or twenty years down the line. VR goggles? Not so much.

    At least for me, the main things that I think I’d actually get some good out of VR goggles for:

    • Vertical-orientation games. My current monitors are landscape aspect ratio and don’t support rotating (though I imagine that someone makes a rotating VESA mount pivot, and I could probably use wlr-randr to make Wayland change the display orientation manually). Some older arcade games had something like a 3:4 portrait aspect ratio, and if you’re playing one of those, you could maybe get some extra vertical space. But unless I need the resolution or portability, I can likely achieve something like that by just moving my monitor closer while playing such a game.

    • Pinball sims, for the same reason.

    • There are a couple of VR-only games that I’d probably like to play (none very new).

    • Flight sims. I’m not really a super-hardcore flight simmer. But, sure, for WW2 flight sims or something like Elite: Dangerous, it’s probably nice.

    • I’d get a little more immersiveness out of some games that are VR-optional.

    But…that’s just not that overwhelming a set of benefits to me.

    Now, I am not everyone. Maybe other people value other things. And I do think that it’s possible to have a “killer app” for VR, some new game that really takes advantage of VR and is so utterly compelling that people feel that they’d just have to get VR goggles so as to not miss out. Something like what World of Warcraft did for MMO gaming, say. But the VR gaming effort has been going on for something like a decade now, and nothing like that has really turned up.


  • Having a limited attack surface will reduce exposure.

    If, say, the only thing that you’re exposing is a Wireguard VPN, then unless there’s a misconfiguration or a remotely-exploitable bug in Wireguard, you’re fine as far as random people running exploit scanners are concerned.

    I’m not too worried about stuff like (vanilla) Apache, OpenSSH, Wireguard, stuff like that, the “big” stuff that has a lot of eyes on it. I’d be a lot more dubious about niche stuff that some guy just threw together.

    To put this in perspective, you gotta remember that most software that people run isn’t run in a sandbox and can phone home: games on Steam, a Web browser with bugs and a lot of sites that might attack it, plugins for that Web browser, some guy’s open-source project. Those are all potential vectors too. Sure, some random script kiddie running an exploit scanner is a potential risk, but my bet is that if you look at the actual number of compromises via that route, it’s probably rather lower than via plain old malware.

    It’s good to be aware of what you’re doing when you expose something to the Internet, but also to keep perspective. A lot of people out there run services exposed to the Internet every day; they need to do so to make things work.
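    To make that concrete, here is a minimal sketch in Python of the kind of probing an exploit scanner does: it just looks for anything that answers. The host and port list below are made-up placeholders, not anything from this thread. If the only thing you expose is Wireguard, there is nothing listening on TCP to answer, and Wireguard itself won’t reply to unauthenticated UDP packets, which is a big part of why it’s such a small attack surface.

    ```python
    # Rough sketch: see which common TCP ports answer on a host you control.
    # HOST and TCP_PORTS are made-up placeholders, not taken from the thread.
    # Note: Wireguard listens on UDP (51820 by default) and silently drops
    # unauthenticated packets, so a scan like this finds nothing to poke at.
    import socket

    HOST = "203.0.113.10"  # placeholder documentation address; use a machine you own
    TCP_PORTS = {22: "ssh", 80: "http", 443: "https", 8080: "alt-http"}

    def tcp_open(host: str, port: int, timeout: float = 1.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            return s.connect_ex((host, port)) == 0

    for port, name in sorted(TCP_PORTS.items()):
        state = "open" if tcp_open(HOST, port) else "closed/filtered"
        print(f"{port:>5} ({name}): {state}")
    ```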




  • tal@lemmy.today to Ask Lemmy@lemmy.world · Perchance slow render

    I don’t use Perchance, but what was the image resolution?

    I’d expect the processing time to grow linearly in the image area, and I imagine that you could probably find an image that, if the thing has sufficient memory available, will take quite some time to process.

    If it is particularly high resolution and you just want to see whether you’re having some other problem, you might try using a low-resolution image as a test.
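    As a rough illustration of why the resolution matters (the numbers below are made-up examples, not anything Perchance actually reports): if processing time is linear in image area, it grows with the square of each dimension, so a 4096×4096 image has 16× the pixels of a 1024×1024 one and would be expected to take roughly 16× as long.

    ```python
    # Back-of-the-envelope: if processing time is roughly linear in image area,
    # the slowdown factor is just the ratio of pixel counts.
    # The resolutions below are made-up example numbers.
    def area_ratio(base: tuple[int, int], big: tuple[int, int]) -> float:
        """Ratio of pixel counts between two (width, height) resolutions."""
        return (big[0] * big[1]) / (base[0] * base[1])

    base = (1024, 1024)
    for res in [(2048, 2048), (4096, 4096), (8192, 8192)]:
        r = area_ratio(base, res)
        print(f"{res[0]}x{res[1]}: ~{r:.0f}x the pixels of {base[0]}x{base[1]}, "
              f"so expect roughly {r:.0f}x the processing time")
    ```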



  • I believe that “older” mods can remove other mods, same as on Reddit, though I’ve never tried. That is, mods that show up higher on the list of mods in the right-hand sidebar in the Lemmy Web UI for the community.

    Or instance admins on the instance where the community lives. They probably won’t get involved unless the mod is violating the rules they’ve set for their instance. Your idea of what constitutes acceptable behavior for the mod and their idea may or may not be the same.

    You’d have to talk to either those “more senior” mods or the instance admins and convince one of them that the mod shouldn’t be a mod for that community.

    The only other option is going and creating an alternative community elsewhere and drawing users away.



  • Unless you have some really serious hardware, 24 billion parameters is probably the maximum that would be practical for self-hosting on a reasonable hobbyist set-up.

    Eh…I don’t know if you’d call it “really serious hardware”, but when I picked up my 128GB Framework Desktop, it was $2k (without storage), and that box is often described as being aimed at the hobbyist AI market. That’s pricier than most video cards, but an AMD Radeon RX 7900 XTX GPU was north of $1k, an NVidia RTX 4090 was about $2k, and it looks like the NVidia RTX 5090 is presently something over $3k (and rising) on eBay, well over MSRP. None of those GPUs are dedicated hardware aimed at doing AI compute, just high-end gaming cards that people have used to do AI stuff on.

    I think that the largest LLM I’ve run on the Framework Desktop was a 106 billion parameter GLM model at Q4_K_M quantization. It was certainly usable, and I wasn’t trying to squeeze as large a model as possible on the thing. I’m sure that one could run substantially-larger models.

    EDIT: Also, some of the newer LLMs are MoE-based, and for those, it’s not necessarily unreasonable to offload expert layers to main memory. If a particular expert isn’t being used, it doesn’t need to live in VRAM. That relaxes some of the hardware requirements, from needing a ton of VRAM to just needing a fair bit of VRAM plus a ton of main memory.
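    For a rough sense of scale, here’s a back-of-the-envelope sketch of why a 106-billion-parameter model at Q4_K_M fits on a 128GB box. The ~4.8 bits-per-weight figure is an approximation for that quantization, and the overhead factor is a guess on my part, not a measured number.

    ```python
    # Rough memory estimate for a quantized LLM: parameters * bits-per-weight / 8,
    # plus some slack for the KV cache, activations, and runtime overhead.
    # 4.8 bits/weight approximates Q4_K_M; the 1.2 overhead factor is a guess.
    def approx_size_gb(params_billions: float, bits_per_weight: float = 4.8,
                       overhead: float = 1.2) -> float:
        weight_bytes = params_billions * 1e9 * bits_per_weight / 8
        return weight_bytes * overhead / 1e9

    for p in (24, 106):
        print(f"{p}B parameters at ~4.8 bits/weight: roughly {approx_size_gb(p):.0f} GB")
    ```

    With an MoE model the full set of expert weights still has to live somewhere, but only the layers you keep on the GPU need VRAM; the rest can sit in main memory, which is the offloading point above.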


  • Are Motorola ok?

    Depends on what you value in a phone. Like, I like a vanilla OS, a lot of memory, large battery, and a SIM slot. I don’t care much about the camera quality and don’t care at all about size and weight (in fact, if someone made a tablet-sized phone, I’d probably switch to that). That’s almost certainly not the mix that some other people want.

    There’s some phone comparison website I was using a while back that has a big database of phones and lets you compare and search based on specification.

    goes looking

    This one:

    https://www.phonearena.com/phones