• 16 Posts
  • 349 Comments
Joined 3 years ago
Cake day: August 10th, 2023




  • go run works by compiling the program to a temporary executable and then executing that.

    can you guarantee that runs everywhere

    It seems to depend on glibc versions, if that’s what you are asking. You can force it to be more static by using a static musl Python build or via other tools. Of course, a Linux binary only runs on Linux, and the same goes for Windows and Mac. But yeah.

    Also, it should be noted that Go binaries that use C library dependencies are not truly standalone and often depend on glibc in similar ways. Of course, same as with pyinstaller, you can use musl to make them more static.
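
    To make the glibc point concrete, here is a minimal sketch using nothing but standard Go tooling (no project-specific code). The program uses os/user, which on Linux typically resolves users through glibc via cgo, so a plain go build produces a dynamically linked binary and ldd on the result lists libc:

        // Minimal sketch: os/user pulls in cgo by default on Linux,
        // so the resulting binary usually links against glibc.
        package main

        import (
            "fmt"
            "os/user"
        )

        func main() {
            u, err := user.Current()
            if err != nil {
                panic(err)
            }
            fmt.Println("running as:", u.Username)
        }

    Building the same program with CGO_ENABLED=0 go build swaps in the pure-Go fallbacks and yields a static binary, which is the Go-side equivalent of the musl trick mentioned above for pyinstaller.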







  • I won’t lie, I use curl | bash as well, but I do dislike it for two reasons:

    Firstly, it is usually much, much easier to compromise the website hosting the script than the binary itself. Distributed binaries are usually signed by multiple keys from multiple servers, which makes them highly resistant to tampering. Reproducible builds (two users compiling the program get the same output) make it trivial to detect tampering as well.

    Website hosting infrastructure, on the other hand, is generally nowhere near as secure. It’s typically one or two VPSes, and there is no signature or verification that the content is “official”. So even without tampering with the binary, an attacker can still tamper with the bash script to add extra goodies to it.
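
    One partial mitigation for that missing “official” signal is to pin a checksum obtained out-of-band before anything gets run. A rough sketch of the pattern; the URL and hash below are placeholders, not any real installer:

        // Sketch: fetch an install script, refuse to keep it unless its SHA-256
        // matches a hash published somewhere you trust (docs, a release page).
        package main

        import (
            "crypto/sha256"
            "encoding/hex"
            "fmt"
            "io"
            "net/http"
            "os"
        )

        // Placeholders only: substitute the real script URL and its published hash.
        const (
            scriptURL    = "https://example.com/install.sh"
            expectedHash = "0000000000000000000000000000000000000000000000000000000000000000"
        )

        func main() {
            resp, err := http.Get(scriptURL)
            if err != nil {
                panic(err)
            }
            defer resp.Body.Close()

            body, err := io.ReadAll(resp.Body)
            if err != nil {
                panic(err)
            }

            sum := sha256.Sum256(body)
            if hex.EncodeToString(sum[:]) != expectedHash {
                fmt.Fprintln(os.Stderr, "checksum mismatch: refusing to save the script")
                os.Exit(1)
            }

            // Only written to disk once it matches; actually running it is still your call.
            if err := os.WriteFile("install.sh", body, 0o755); err != nil {
                panic(err)
            }
            fmt.Println("script verified and saved as install.sh")
        }

    Of course this only helps if the published hash lives somewhere the attacker can’t also edit, which is exactly the weak link being described.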

    Secondly (though not really relevant to what OP is talking about), just because I trust someone to give me a binary in a mature programming language they have experience writing in doesn’t mean I trust them to give me a script in a language known for footguns. A bug in Steam’s bash script once deleted a user’s home directory, and there have also been AUR packages, which are basically bash scripts, that broke people’s systems. When it comes to user/community created scripts, I mostly trust them not to be malicious; I am more fearful of a bug or mistake screwing things up, and at the same time I have little confidence in my own ability to spot those bugs.

    Generally, I only make an exception for running bash installers when the program being installed is a “platform” I can use to install more software, like K3s (a Kubernetes distro) or the Nix package manager. If I can install something via Nix or Docker, it gets installed that way rather than via curl | bash. Not every developer under the sun should be given the privilege of running a bash script on my system.

    As a sidenote, Docker doesn’t recommend their install script anymore. All the instructions have been removed from the website, and they recommend adding their own repos instead. Personally, I prefer to get it from the distro’s repositories, as that’s usually the simplest and fastest way to install Docker nowadays.




  • It’s easy. Mumble. Or the thing you used probably still works.

    But you see, people never actually seek just a Discord alternative. They want a Discord alternative that includes all the features in one app, that is also federated AND end-to-end encrypted, and each of those requirements makes things vastly more technically challenging and resource intensive, let alone all of them combined.

    A little secret: Matrix is much, much easier to host if you disable encryption and federation. Federation with many servers is the main performance killer, and the “failed to decrypt message” errors all disappear if you disable encryption.


  • If your software breaks when updating between stable releases, the root cause is the vendor, rather than auto updating. Plenty of projects manage to auto update without causing problems. For example, Debian stable doesn’t ship new features or routine bugfixes; it only updates apps with security patches, for maximum compatibility.

    CrowdStrike’s auto updates also had issues on Linux, even before the big Windows BSOD incident.

    https://www.neowin.net/news/crowdstrike-broke-debian-and-rocky-linux-months-ago-but-no-one-noticed/

    It’s not the fault of the auto update process, but instead the lack of QA at CrowdStrike. And it’s the responsibility of system administrators to vet their software vendors and ensure the update models in use don’t cause issues like this. Thousands of orgs were happily using Debian/Rocky/RHEL with auto updates, because those distros ship minimal features/bugfixes and only security patches, giving no-fuss security auto updates for around a decade per stable release, on software that had already been extensively tested. Stories of those breaking are few and far between.

    I would rather pay attention to the success stories than to the failures, because in a world without automatic security updates, millions of lazy organizations would be running vulnerable software unknowingly. This already happens, since not all software auto updates. But some is better than none, and a world where all software is vulnerable by default until a human manually touches it is simply a nightmare to me.












  • Unless you are running at really large scales, or at really small scales trying to fit stuff that doesn’t quite fit, memory compression may not be a significant enough optimization to justify a lot of experimentation. But I’m bored and currently on an 8 GB device, so here are my thoughts dumped out from my recent testing:

    Zram vs Zswap (can be done at the hypervisor or inside the guest):

    • One or the other is commonly enabled on many modern distros. It is a perfectly reasonable position to simply use the distro’s defaults and not push it any further (a quick way to check which one is active is sketched after this list)
    • Zram has much, much better compression, but suffers from LRU inversion. Essentially, once zram is full, fresh pages (memory) go to the disk swap instead. Since those fresh pages will probably be needed again soon, getting them back from disk is slower than getting them from zram would have been.
    • Zswap has much, much worse compression, but cold, unused pages are moved to disk swap automatically, freeing up space
    • I am investigating ways to get around the above. See my thoughts on this and other differences here: https://github.com/moonpiedumplings/moonpiedumplings.github.io/blob/main/playground/asahi-setup/index.md#memory-optimization
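
    As referenced in the first bullet, here is a minimal sketch for checking which of the two a machine is actually using. It only reads the standard kernel interfaces (sysfs and /proc/swaps); adjust the zram0 device name if yours differs:

        // Sketch: report whether zswap is enabled and whether any zram swap
        // devices are active, by reading the usual kernel interfaces.
        package main

        import (
            "fmt"
            "os"
            "strings"
        )

        func main() {
            // zswap state lives under /sys/module/zswap/parameters/ ("Y" or "N").
            if b, err := os.ReadFile("/sys/module/zswap/parameters/enabled"); err == nil {
                fmt.Println("zswap enabled:", strings.TrimSpace(string(b)))
            } else {
                fmt.Println("zswap: could not read parameters:", err)
            }

            // zram shows up as a swap device (e.g. /dev/zram0) in /proc/swaps.
            if b, err := os.ReadFile("/proc/swaps"); err == nil {
                for _, line := range strings.Split(string(b), "\n") {
                    if strings.Contains(line, "zram") {
                        fmt.Println("zram swap device:", line)
                    }
                }
            }

            // Per-device settings live under /sys/block/zram0/ when zram is configured.
            if b, err := os.ReadFile("/sys/block/zram0/comp_algorithm"); err == nil {
                fmt.Println("zram0 compression algorithm:", strings.TrimSpace(string(b)))
            }
        }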

    Kernel same page merging (KSM) (would be done at the hypervisor level; ESXi has an equivalent feature under a different name, Transparent Page Sharing):

    • Only really effective if you run lots of similar or identical virtual machines (the sketch after this list shows how to read what KSM is actually merging)
    • Used to overcommit (promise more RAM than you physically have)
      • Dangerous, but highly cost saving. Many cheap VPS providers do this to save money. You can run four 8 GB VPSes on 24 GB of RAM and take a semi-safe bet that not all of the memory will actually be used.
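
    To make the KSM numbers tangible, the kernel exposes its merging counters under /sys/kernel/mm/ksm/. A small sketch that turns them into an estimated saving, assuming the usual 4 KiB page size:

        // Sketch: estimate memory saved by KSM on the hypervisor by reading the
        // standard counters in /sys/kernel/mm/ksm/. Assumes 4 KiB pages.
        package main

        import (
            "fmt"
            "os"
            "strconv"
            "strings"
        )

        // readCounter reads one of the KSM counters exposed by the kernel.
        func readCounter(name string) int64 {
            b, err := os.ReadFile("/sys/kernel/mm/ksm/" + name)
            if err != nil {
                return 0
            }
            n, _ := strconv.ParseInt(strings.TrimSpace(string(b)), 10, 64)
            return n
        }

        func main() {
            const pageSize = 4096 // bytes; typical page size on x86_64

            fmt.Println("ksm run state:", readCounter("run")) // 0 = off, 1 = running

            shared := readCounter("pages_shared")   // distinct shared pages kept in memory
            sharing := readCounter("pages_sharing") // extra mappings pointing at them, i.e. pages saved

            savedMiB := float64(sharing*pageSize) / (1024 * 1024)
            fmt.Printf("pages_shared=%d pages_sharing=%d, roughly %.1f MiB saved by KSM\n",
                shared, sharing, savedMiB)
        }

    On a hypervisor running several similar VMs, watching pages_sharing climb is how you judge whether the overcommit bet above is actually safe.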

    In my opinion, the best setup is to enable zram or zswap at the virtual machine level and kernel same page merging at the hypervisor level, assuming you take into account and accept the marginal security risk and slightly weaker isolation that comes with KSM. There isn’t any point running zswap at two layers, because the hypervisor would just spend a lot of time trying to compress stuff that has already been compressed. KSM then deduplicates memory across the VMs. Although you may actually see worse savings overall if zram/zswap compression is only semi-deterministic and makes deduplication harder.

    I agree with the other commenter about zram being weird with some workloads. I’ve heard of Blender (I think it was) interacting weirdly with zram: since zram is swap, it leaves less total memory available as plain RAM, whereas zswap compresses what’s already in memory. If you really need to know, you have to test it.