• 20 Posts
• 450 Comments
Joined 1 year ago · Cake day: June 10th, 2023

• d3Xt3r@lemmy.nz to KDE@lemmy.kde.social · Opt out? Opt in? Opt Green! · edited 5 months ago

    I’m not moving any goalposts. You’re the one arguing about the semantics around “Plasma”, and I keep saying that’s irrelevant.

    Refer back to my original comment which was, and I quote:

    So, are there any plans to reduce the bloat in KDE, maybe even make a lightweight version (like LXQt) that’s suitable for older PCs with limited resources?

    To clarify, here I was:

    • Referring to KDE + default apps that are part of a typical KDE installation
    • Stating that a typical KDE installation is bloated compared to a typical lightweight DE like LXQt
    • Emphasizing that the “bloat” is RELATIVE, with respect to an older PC with limited resources

    The ENTIRE point of my argument was that KDE isn’t really ideal, RELATIVELY speaking, for older PCs with limited resources, and I’m using LXQt here as a reference.

    In a subsequent test, here’s a direct apples-to-apples(ish) component comparison (RAM in MB):

    Component        KDE process   RAM   LXQt process    RAM
    WM               kwin_x11       99   openbox          18
    Terminal         konsole        76   qterminal        75
    File manager     dolphin       135   pcmanfm-qt       80
    File archiver    ark           122   lxqt-archiver    73
    Text editor      kwrite        121   featherpad       73
    Image viewer     gwenview      129   lximage-qt       76
    Document viewer  okular        128   qpdfview-qt6     51
    Total                          810                   446

    plasmashell was sitting at 250MB in this instance, btw.

    The numbers speak for themselves - no one in their right mind would consider KDE (or plasmashell, since you want to be pedantic) to be “light” in RELATION to an older PC with limited resources - which, btw, was the premise of my entire argument. Of course KDE or plasmashell might be considered “light” on a modern system, but not on an old PC with 2GB RAM. Whether something is considered light or bloated is always relative, and in this instance, it’s obvious to anyone that KDE/plasmashell isn’t “light”.
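    If anyone wants to reproduce this kind of comparison, here’s a minimal Python sketch that sums the resident set size (VmRSS) of a set of process names by scanning /proc. The process lists below are just the ones from my table (not any official KDE/LXQt manifest), and note that summing RSS double-counts shared pages, so treat the totals as rough:

    ```python
    import os

    def rss_mb(names):
        """Sum VmRSS (in MB) of all running processes whose name is in `names`."""
        total_kb = 0
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/comm") as f:
                    comm = f.read().strip()
                if comm not in names:
                    continue
                with open(f"/proc/{pid}/status") as f:
                    for line in f:
                        if line.startswith("VmRSS:"):
                            total_kb += int(line.split()[1])  # value is in kB
                            break
            except (FileNotFoundError, ProcessLookupError):
                continue  # process exited while we were scanning
        return total_kb // 1024

    # Illustrative process sets, taken from the table above
    kde  = {"kwin_x11", "konsole", "dolphin", "ark", "kwrite", "gwenview", "okular"}
    lxqt = {"openbox", "qterminal", "pcmanfm-qt", "lxqt-archiver",
            "featherpad", "lximage-qt", "qpdfview-qt6"}

    print(f"KDE set:  {rss_mb(kde)} MB")
    print(f"LXQt set: {rss_mb(lxqt)} MB")
    ```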



• d3Xt3r@lemmy.nz to KDE@lemmy.kde.social · Opt out? Opt in? Opt Green! · edited 5 months ago

    You’re arguing semantics, and that’s not the point I’m trying to make here. Forget the term “Plasma”. I don’t really care what the DE is branded as or what’s in “Plasma” the software package. When I say “KDE”, I mean the desktop + all the basic default/recommended apps you’d see on a typical KDE installation, such as Dolphin, Konsole, Kate, KCalc, Spectacle etc, that are part of the KDE project. IDK whether the apps I’ve mentioned are considered part of “Plasma” or not, but again, that’s not the point - I’m saying this is what I meant when I said “KDE”, and it’s what most people would expect when they picture a “KDE” environment.

    Anyways, I tested this myself on two identical VMs with 2GB RAM, one installed with Fedora 40 KDE and the other with Fedora 40 LXQt, both set to use X11 (because LXQt isn’t Wayland-ready yet), both updated and running the latest kernel 6.8.10-300.fc40. I logged into the DEs, opened only two terminal windows and nothing else, and ran htop. The screenshot speaks for itself:

    And when I tried disabling swap on both machines, the KDE machine was practically unusable, with only 53MB RAM remaining before it completely froze on me. Meanwhile, the LXQt one was still very much usable even without swap enabled.

    I’d like to see you try running without swap and see how it fares. And if you think it’s unfair disabling swap on a 2GB machine - try installing LXQt yourself, disable swap and see for yourself how much more usable it is compared to KDE.
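    If you do repeat this swap-off test, here’s a quick way to log the numbers - a minimal sketch reading /proc/meminfo (the field names are stable across modern kernels):

    ```python
    # Minimal sketch: report total/available RAM and swap from /proc/meminfo.
    def meminfo_mb():
        fields = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                fields[key] = int(rest.strip().split()[0]) // 1024  # kB -> MB
        return fields

    m = meminfo_mb()
    print(f"MemTotal:     {m['MemTotal']} MB")
    print(f"MemAvailable: {m['MemAvailable']} MB")
    print(f"SwapTotal:    {m['SwapTotal']} MB (0 means swap is off)")
    ```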

    And this is why I say KDE is bloated and not suitable for old machines.

    Edit: Also, check out the memory consumption listed by a user in this post: https://lemmy.nz/comment/9070317

    Edit2: Here’s a screenshot of the top 30 processes on my test systems, side-by-side:

    Of the above, I calculated the usage of the top 10 processes specific to each respective DE, and you can see that KDE’s memory usage is almost double that of LXQt. Had I counted all the DE-specific processes, it’d no doubt be a lot more than double.


• d3Xt3r@lemmy.nz to KDE@lemmy.kde.social · Opt out? Opt in? Opt Green! · edited 5 months ago

    Correct me if I’m wrong, but this #OptGreen project isn’t talking specifically about Plasma, is it? They don’t mention Plasma anywhere on the page they linked.

    In any case, that’s irrelevant. Also, I don’t doubt that KDE can run under the specs you mentioned - that’s not the issue. The question is, how much free/usable RAM do you actually have on that machine - say, first with no apps open, and then again with Konsole + Dolphin + KWrite/Kate open? And for fun, fire up Konqueror as well and check again.


• d3Xt3r@lemmy.nz to KDE@lemmy.kde.social · Opt out? Opt in? Opt Green! · edited 5 months ago

    Edit: Screenshots proving that what you’re saying is not correct:

    I’m not talking specifically about Plasma, I’m talking about the “DE” part of KDE in general; and particularly in this context of repurposing and extending the life of old PCs.

    I find it a bit ironic for KDE to be pushing this message when it’s a heavy DE (relatively speaking) - it’s NOT what anyone would have in mind when selecting a DE for an old PC.

    For instance, take LXQt - run the default/recommended file browser, terminal and text editor, and compare it with KDE + equivalents - you’d see a significant difference in resource consumption. On a system with low RAM, that extra bit of free memory makes a big difference, as it could mean avoiding the performance hit of swapping, which you’d invariably run into as soon as you fire up a modern Web browser. So it’s vital that the DE use as few resources as possible on such a machine.





  • Before y’all get excited, the press release doesn’t actually mention the term “open source” anywhere.

    Winamp will open up its code for the player used on Windows, enabling the entire community to participate in its development. This is an invitation to global collaboration, where developers worldwide can contribute their expertise, ideas, and passion to help this iconic software evolve.

    This, to me, reads like it’s going to be a “source available” model, perhaps with contributions governed by some sort of Contributor License Agreement (CLA). So, best to hold off on any celebrations until we see the actual license.





• d3Xt3r@lemmy.nz to Android@lemmy.world · … · 6 months ago

    Because MIUI deviates from stock Android so much that it often causes unexpected behaviour and bugs. So it’s easier for developers to just say they don’t support it, instead of putting up with negative reviews and complaints.






Since you’re on Linux, it’s just a matter of installing the right packages from your distro’s package manager. Lots of articles on the Web, just google your app + “ROCm”. The main thing you gotta keep in mind is version dependencies: ROCm 6.0/6.1 were released recently, so some programs may not have been updated for them yet. If your distro packages the most recent version, your app might not support it yet.

    This is why many ML apps also come as a Docker image with specific versions of libraries bundled with them - so that could be an easier option for you, instead of manually hunting around for various package dependencies.

    Also, chances are that your app may not even know/care about ROCm, if it just uses a library like PyTorch / TensorFlow etc. So just check its requirements first.
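    If your app is PyTorch-based, a quick sanity check looks something like this (just a sketch - torch.version.hip is only set on ROCm builds, which reuse the torch.cuda API for HIP devices):

    ```python
    import torch

    print("PyTorch:", torch.__version__)
    print("ROCm/HIP build:", torch.version.hip)       # None on CUDA/CPU-only builds
    print("GPU visible:", torch.cuda.is_available())  # torch.cuda works for HIP too
    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))
    ```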

    As for AMD vs nVidia in general, there are a few main areas where they lag behind: ray tracing (RTX), compute and super sampling.

    • For RTX, there have been improvements in performance with the RDNA3 cards, but it still lags behind by a generation. For instance, the latest 7900 XTX’s RTX performance is equivalent to the 3080.

    • Compute is catching up as I mentioned earlier, and in some cases the performance may even match nVidia. This is very application/library specific though, so you’ll need to look it up.

    • Super sampling is a bit of a weird one. AMD has FSR, and it does a good job in general. In some cases, it may even perform better, since it uses much simpler calculations as opposed to nVidia’s deep-learning technique. And AMD’s FSR can in fact be used with any card, as long as the game supports it. Therein lies the catch: only something like a third of the games out there support it, and even fewer support the latest FSR 3. But there are mods out there which can enable FSR (check Nexus Mods) that you might be able to use. In any case, FSR/DLSS isn’t a critical thing unless you’re gaming on a 4K+ monitor.

    You can check out Tom’s Hardware GPU Hierarchy for the exact numbers - scroll down halfway to read about the RTX and FSR situation.

    So yes, AMD does lag behind nVidia, but whether this impacts you really depends on your needs and use cases. If you’re a Linux user though, getting an AMD is a no-brainer - it just works so much better, as in, no need to deal with proprietary driver headaches, no update woes, excellent Wayland support etc.



  • It’s not “optimistic”, it’s actually happening. Don’t forget that GPU compute is a pretty vast field, and not every field/application has a hard-coded dependency on CUDA/nVidia.

    For instance, both TensorFlow and PyTorch work fine with ROCm 6.0+ now, and this enables a lot of ML tasks such as running LLMs like Llama2. Stable Diffusion also works fine - I tested 2.1 a while back and performance has been great on my Arch + 7800 XT setup. There are plenty more such examples where AMD is already a viable option. And don’t forget ZLUDA, which is continuing to be improved.
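    As a quick smoke test of the sort of thing that now just works (a sketch - any recent ROCm build of PyTorch should behave the same way):

    ```python
    import torch

    # On ROCm builds of PyTorch, the "cuda" device maps to the HIP backend.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)
    c = a @ b                     # large matmul, runs on the GPU if one is visible
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the kernel to finish before reporting
    print(f"matmul OK on {device}, result shape: {tuple(c.shape)}")
    ```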

    I mean, look at this benchmark from Feb, that’s not bad at all:

    And ZLUDA has had many improvements since then, so this will only get better.

    Of course, whether all this makes an actual dent in nVidia’s compute market share is a completely different story (thanks to enterprise $$$ + existing hw that’s already out there), but the point is, at least for many people/projects, ROCm is already a viable alternative to CUDA. And this will only improve with time. Just within the last 6 months, for instance, there have been VAST improvements in both ROCm (like the 6.0 release) and compatibility with major projects (like PyTorch). 6.1 was released only a few weeks ago with improved SD performance, a new video decode component (rocDecode), much faster matrix calculations with the new EigenSolver etc. It’s a very exciting space to be in, to be honest.

    So you’d have to be blind not to notice these rapid changes that are happening. And yes, right now it’s still very, very early days for AMD and they’ve got a lot of catching up to do, and there’s a lot of scope for improvement too. But it’s happening for sure - AMD + the community isn’t sitting idle.