In his newest (and worst) How Do You Do Fellow Kids moment, Mark Zuckerberg launches the Poob service, accessible exclusively through the Metaverse. What does it do? Fucked if we know.
I take my shitposts very seriously.
`adduser` is an interactive wrapper for `useradd`. It can, for example, prompt the user to set a password rather than execute `passwd` separately. Very useful if you just want to manage a user without reading through `useradd`’s command line options, then running `usermod` because you forgot to set something.

It doesn’t excuse the bad naming; I’d rather have something like `useradd --interactive`, but it’s worth remembering.
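Rough side-by-side, if it helps (the user and group names are just placeholders):

```sh
# Non-interactive: you have to remember every flag yourself and set the
# password in a separate step (or run usermod later when you forget one).
sudo useradd --create-home --shell /bin/bash --groups sudo alice
sudo passwd alice

# Interactive: adduser creates the home directory and prompts for the
# password, full name, etc. as it goes.
sudo adduser bob
```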
I wish my one bad experience in 2015 had been the absolute last time Pulse failed for anyone ever. Alas, time doesn’t work that way, and Pulse remained failure-prone for years after my encounter with it.
Extremely frequently. Digital musical instruments generally don’t output production-quality sounds – they output MIDI data that describes what note is being played, and an audio synthesizer (hardware or software) interprets it and generates the audio data.
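If you want to see just how little is in that note data, here’s roughly what it looks like on the wire using ALSA’s `amidi` (the `hw:1,0` port is only an example – use whatever your synth shows up as):

```sh
# List raw MIDI ports, then send a note-on for middle C (0x3C) at velocity
# 100 (0x64) on channel 1, followed by a note-off. The synth on the other
# end is what turns these three-byte messages into actual audio.
amidi -l
amidi -p hw:1,0 -S '90 3C 64'
sleep 1
amidi -p hw:1,0 -S '80 3C 00'
```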
Very mature. If you’re going to act like a toddler, I’ll put you in the timeout box.
For the record, your reply to my comment had no bearing on my decision, and I’ll preserve the entire comment chain in case I find myself on YPTB.
Memes are not necessarily meant to be funny. They’re meant to represent a common experience within a group. The horrid failure rate of PulseAudio just a couple years ago fits that definition perfectly, and in fact it’s why I gave up on Ubuntu 14.04 in 2015.
It should be Neovim and Lua. Nobody should be subjected to the curse and torment of writing Elisp.
I think you’re confusing it with Manjaro, which has had several.
Proxmox is a great starting point. I use it in my home server and at work. It’s built on Debian, with a web interface to manage your virtual machines and containers, the virtual network (trivial unless you need advanced features), virtual disks, and installer images. There are advanced options like clustering and high availability, but you really don’t have to interact with those unless you need them.
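Pretty much everything in the web UI maps to a CLI call on the node too, so you can get a feel for it with something like this (the VM ID, storage names, and ISO are just examples – use whatever your node actually has):

```sh
# Create a small VM with 2 GB RAM, 2 cores, a 32 GB disk on local-lvm,
# a NIC bridged to vmbr0, and an installer ISO attached, then start it.
qm create 100 --name test-vm --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsi0 local-lvm:32 \
  --cdrom local:iso/debian-12-netinst.iso
qm start 100
```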
Not even the man RMS himself uses GNU/Hurd or Guix, which is hilarious.
The Mythbusters did it! They couldn’t even get up to speed to begin the first experiment because the traffic jam formed naturally.
You should check out the `tldr` program. It’s a community-driven quick reference tool that lists common practical examples for commands.
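For example:

```sh
# Prints a handful of annotated one-liners instead of a full man page.
tldr tar
tldr useradd
```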
Windows devs used to have fun.
New developments: just a few hours before I post this comment, The Register posted an article about AI crawler traffic. https://www.theregister.com/2025/08/21/ai_crawler_traffic/
Anubis’ developer was interviewed and they posted the responses on their website: https://xeiaso.net/notes/2025/el-reg-responses/
In particular:
> *Fastly’s claims that 80% of bot traffic is now AI crawlers*
>
> In some cases for open source projects, we’ve seen upwards of 95% of traffic being AI crawlers. For one, deploying Anubis almost instantly caused server load to crater by so much that it made them think they accidentally took their site offline. One of my customers had their power bills drop by a significant fraction after deploying Anubis. It’s nuts.
So, yeah. If we believe Xe, OOP’s article is complete hogwash.
That’s why the developer is working on a better detection mechanism. https://xeiaso.net/blog/2025/avoiding-becoming-peg-dependency/
Given how much authority you wrote with before, I thought you’d be able to grasp the concept. I’m sorry I assumed better.
THEN (and this is the part you don’t seem to understand) the client process has to either waste time solving the challenge or cancel the request. Serving the challenge is, by the way, orders of magnitude lighter on the server than serving the actual meaningful content. If a new request is sent during that time, it will still have to waste time solving the challenge. The scraper will get through eventually, but the challenge delays the response and reduces the load on the server, because while the scrapers are busy computing, it doesn’t have to serve meaningful content to them.
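If it helps, here’s a toy version of that kind of proof-of-work challenge (not Anubis’ exact scheme, just the general shape: the server hands out a random challenge string, the client grinds nonces until the hash has enough leading zeros):

```sh
#!/bin/sh
# Toy proof-of-work: find a nonce so that sha256(challenge + nonce) starts
# with "0000" (16 zero bits). Verifying the answer costs the server a single
# hash; finding it costs the client tens of thousands of hashes on average.
challenge="random-string-issued-by-the-server"
nonce=0
until printf '%s%s' "$challenge" "$nonce" | sha256sum | grep -q '^0000'; do
  nonce=$((nonce + 1))
done
echo "solved: nonce=$nonce"
```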
Linux has two different kinds of “used” memory. One is memory allocated for/by running processes that cannot be reclaimed or reallocated to another process. This memory is unavailable. The other kind is memory used for caching (ZFS, write-back cache, etc) that can be reclaimed and allocated for other things as needed. Memory that is not allocated in any way is free. Memory that is either free or allocated to cache is available.
It looks like `htop` only shows unavailable memory as “used”, while Proxmox shows the sum of unavailable and cached memory. Proxmox “uses” 11 GB, but it’s not running out of memory because most of it is “available”.
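Easiest way to see both views on the box itself:

```sh
# "free" is untouched memory, "buff/cache" is reclaimable cache, and
# "available" is roughly free plus whatever cache can be reclaimed.
free -h

# Same figures straight from the kernel:
grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached)' /proc/meminfo
```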