• 0 Posts
  • 154 Comments
Joined 2 years ago
Cake day: June 6th, 2023





  • Depending on your bank, you may be able to use their website.

    The “no apps” problem isn’t “that big of an issue” (at least for me): Waydroid is available, and underneath it’s just standard Linux with all the desktop apps right from Flathub. There are also plenty of webapps available.

    There are tons of other issues with Linux mobile, like general usability, battery life, responsiveness (especially when receiving calls or notifications), and hardware support.

    The biggest one I’m running into is sleep states. I can either have around 4 hours of battery life, during which notifications, alarms, and incoming calls arrive immediately, or about a day of idle battery life, but then I have to wake the phone before any of that comes through.

    There’s also the fact that I use my phone for media a lot (Jellyfin, Lemmy), and the experience isn’t great on Linux mobile. “Apps” integrate less with each other, and video playback is kind of a mess. (For example, I can’t “share” a photo from Lemmy to send it to a friend on Matrix.)


  • Movies like Terminator have “AGI”, or artificial general intelligence. We had to come up with a new term for it after LLM companies kept claiming they had “AI”. Technically speaking, large language models fall under machine learning, but they are limited to predicting language and text, and will never be able to “think” with concepts or adapt in real time to new situations.

    Take chess, for example. We have Stockfish (and other engines) that far outperform any human. Can these chess engines “think”? Can they reason? Adapt to new situations? Clearly not: adding a new piece with different rules, for example, would require Stockfish to be retrained from scratch, while humans can take their existing knowledge and adapt it to the new situation. Also look at LLMs attempting to play chess. They can “predict the next token” as they were designed to, but nothing more. They have been trained on enough chess notation that the output is likely valid notation, but they have no concept of what chess even is, so they will spit out nearly random moves, often without following the rules.

    LLMs are effectively the same concept as chess engines. We just put googly eyes on the software, and now tons of people are worried about AI taking over the world. Current LLMs and generative AI do pose risks (overwhelming amounts of slop and misinformation, which could affect human cultural development, and humans deciding to give an LLM external influence over real systems, which could have major impact), but they are nowhere near Terminator-style AGI. For that to happen, humans would have to figure out a new way of thinking about machine learning, and there would have to be several orders of magnitude more computing resources available for it.

    Since the classification for “AI” will probably include “AGI”, there will (hopefully) be legal barriers in place by the time anyone develops actual AGI. The computing resources problem is also gradual: an AGI does not simply “transfer itself onto a smartphone” in the real world (or an airplane, a car, you name it). It will exist in a massive datacenter, and its power can be shut off. If AGI does get created and causes a massive incident, it will likely be during this time. That would cause whatever real-world entity created it to realize there should be safeguards.

    So to answer your question: No, the movies did not “get it right”. They are exaggerated fantasies of what someone thinks could happen by changing some rules of our current reality. Artwork like that can pose some interesting questions, but when it tries to “predict the future”, it often gets things wrong in ways that change the answer to any questions asked about the future it predicts.









  • IRC does not have any federation, and XMPP does it in a completely different way from Matrix, with its own pros and cons.

    IRC is designed for you to connect to a specific server, with an account on that server, to talk to other people on that server. There is no federation: you cannot talk to OFTC from libera.chat. On top of that, with mobile devices being so common, you’d need to get people to host their own bouncers, or host one for nearly everyone on your network.

    XMPP federation conceptually has one major difference compared to Matrix: XMPP rooms are owned by the server that created them, whereas Matrix rooms are equally “owned” by everyone participating in it, with the only deciding factor being which users have administrator permissions.
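
    Roughly, the difference already shows up in how rooms are addressed (the domain names below are just placeholders):

    ```
    # XMPP: a room (MUC) lives on the server hosting it.
    # If muc.example.org goes down, the room goes with it.
    linux-chat@muc.example.org

    # Matrix: the server in the room ID only records where the room was created.
    # The room state is replicated to every participating homeserver,
    # and users usually join through an alias like #linux-chat:example.org.
    !someOpaqueRoomId:example.org
    ```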

    This makes for better (and easier) scaling on XMPP, so a room with 50k people isn’t that big of an issue for the users in it. However, if the server owning the room goes down, the whole room is down, and nobody can chat. See Google Talk dropping XMPP federation after making a mess of most client and server implementations.

    On Matrix, scaling is a much bigger issue, as everyone connects with everyone else. Your single-person homeserver has to talk with every other homeserver you interact with. If you join a lot of big rooms, this adds up, and takes a lot of resources. However, when a homeserver goes down, only the people on that homeserver are affected, not the rooms. Just recently, matrix.org had some trouble with their database going down. Although it was a bit quieter than usual, I only properly noticed when it was explicitly mentioned in chat by someone else. My service was not interrupted, as I host my own homeserver.

    The Matrix method of federation definitely comes with some issues, some conceptual and some from the implementation. However, a single entity cannot take down the federated Matrix network, even by taking down the most-used homeservers, whereas doing the same to XMPP effectively killed it off.



  • deadcade@lemmy.deadca.de to Memes@lemmy.ml: Let's update...

    Yes. -Syyu is for “Sync (repository action), database update (forced), upgrade packages”, in that order (though the flags don’t have to be written in that order). Doubling a lowercase flag like yy or uu forces the operation. yy in particular shouldn’t be needed, as it only overrides the “is your database recent” check. Unless you’re updating more often than every 5 minutes, a single y is perfectly fine.
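
    As a quick reference, the two standard invocations (nothing here is specific to any particular setup):

    ```
    # sync the package databases (only if out of date) and upgrade all packages
    pacman -Syu

    # force a refresh of the databases even if they appear up to date, then upgrade
    # (rarely needed, e.g. after switching mirrors)
    pacman -Syyu
    ```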




  • Being able to choose the OS and kernel is also important. I would not want my hypervisor machine to load GPU kernel modules, especially not on an older LTS kernel (which often doesn’t support the latest hardware). Passing the GPU to a VM keeps the host machine stable, with the flexibility to choose whatever kernel I need for specific hardware. That, alongside running entirely different OSes (like *BSD, Windows :(, etc.), is pretty useful for some services.
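
    A minimal sketch of what that looks like on the host, assuming a typical vfio-pci passthrough setup (the PCI IDs below are placeholders; find your own GPU and its audio function with lspci -nn):

    ```
    # /etc/modprobe.d/vfio.conf
    # Bind the GPU (and its audio function) to vfio-pci so the host never loads a GPU driver.
    options vfio-pci ids=10de:1c82,10de:0fb9

    # Kernel command line: enable the IOMMU so the device can be isolated and handed to the VM
    # (intel_iommu=on on Intel; the AMD IOMMU is usually enabled by default)
    intel_iommu=on iommu=pt
    ```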