

I’ve been thinking about setting up Anubis to protect my blog from AI scrapers, but I’m not clear on whether this would also block search engines. It would, wouldn’t it?
I use them quite heavily in combination with Cookie AutoDelete. I create a separate container for each surveillance capitalist service I work with. So for example, here’s my list of containers:
Every time I visit one of these sites, Firefox opens it in the respective container, and the cookies it creates are isolated to that container. When I’m in the LinkedIn container, Cookie AutoDelete nukes every cookie that isn’t from LinkedIn (including Google, GitHub, etc.). When I’m not in any container, all cookies are deleted everywhere.
Basically it’s a nice way to leverage Cookie AutoDelete without having to whitelist Big Tech for all my browsing.
I had a job interview a few weeks ago where the lead developer straight-up said that he doesn’t have any tests in the codebase because “it’s just writing your code twice”. I thought he was joking. Unfortunately he was not.
I didn’t end up getting the job, perhaps because I made it clear that I thought he was very wrong. I think I dodged a bullet.
I have much the same setup:
The only difference is that I’m using a Synology 'cause I have 15TB and don’t know how to do RAID myself, let alone how to do it with an old laptop. I can’t really recommend a Synology though. It’s got too many useless add-ons and simple tools like rsync never work properly with it.
Yeah this was a deal-breaker for me too.
Unfortunately, a rather substantial portion of warfare is the economics behind it. Often, spending eye-watering amounts of money on proprietary, overpriced hardware is the point. It’s corporate welfare.
Yes. Tailscale is surprisingly simple.
# systemctl start tailscaled
# tailscale up
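Once it’s up, you can sanity-check that your machines can see each other:
# tailscale status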
Lowering the barrier to entry by moving from a technology few use (mercurial) to something popular (git) makes sense. Requiring participation on a proprietary platform owned by Microsoft instead of an open one like Codeberg or GitLab is just lazy. If someone wants to contribute to Firefox, asking them to create an account is a small ask, and I’d argue that if they’re unwilling to do even that, then their participation in the community is likely to be far from useful.
They could have opted for Codeberg for example and made a public donation to the project of a few hundred dollars a month. Instead, they opted for funnelling more power and support into a terrible company.
I had the same reaction until I read this.
TL;DR: it’s 10-50x more efficient than an actual tree at cleaning the air, and it actually generates both electricity and fertiliser.
Yes, it would be better to just get rid of all the cars generating the pollution in the first place and plant some more trees, but there are clear advantages to this.
I was one of the people who based my opinion of Proton on that tweet and swore off them until someone else shared that link with me. It’s excellent, thorough, and makes a convincing case that Yang is actually left-leaning. I can only assume that you’re getting downvotes from people who haven’t read it.
Nobody asked for this.
This is great news, and I might be tempted to use it if I had some reassurance that the mail servers (and the organisation that controls them) weren’t subject to U.S. jurisdiction.
I’m quite happy with EuroDNS. They even include free email hosting if you want it.
Fascinating! Thanks for sharing. I’m not sure I’d be happy in a fully remote role where you’ve got hundreds of employees voting on how you build stuff, but I know that there are lots of people who dig this pattern, and they’re clearly doing good work.
It’s a rather brilliant idea really, but when you consider the environmental implications of forcing every web client to perform proof-of-work computations just to load a page, this effectively burns more coal for every site that implements it.
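For a sense of what that means in practice, here’s a toy sketch of the underlying idea in shell (not Anubis’s actual scheme; the challenge string and difficulty here are made up for illustration):

nonce=0
until echo -n "challenge-$nonce" | sha256sum | grep -q '^0000'; do
  nonce=$((nonce + 1))
done
echo "solved with nonce $nonce"

Every visitor’s CPU grinds through tens of thousands of hashes like that before the page loads, and that electricity has to come from somewhere.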
Many of us have invested enormous amounts of time and energy in supporting Firefox and Mozilla. We’ve donated patches, documentation, and, more importantly, our goodwill to the project, promoting Mozilla as the defender of your rights and of the internet. We are rightfully upset to see those efforts stolen and used against us, and we won’t stop raging about it until Mozilla does right by the social contract we all signed on for at the start of this journey.
I’d forgotten about Tasker. I’ll give that a try, thanks!
This all appears to be based on the user agent, so wouldn’t that mean that bad-faith scrapers could just declare themselves to be a typical search engine’s user agent?
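For example, with curl (using example.com as a stand-in for the protected site), claiming to be Googlebot is a one-liner:

$ curl -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" https://example.com/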