If you create a good product, the market will pick it up; throwing cash at random projects and killing them when they don’t make huge profits sounds wasteful.
You mean they throw a lot of money at the wall hoping that something will stick?
We should start calling them that…
Or maybe more like:
ExploitativeAI
or ExAI
https://chatgpt.com/share/66e9426a-c178-800d-a34e-ae4883f70ca0
But you have a lot of cold air to cool it down, and as a side note it makes your room warmer, which you might want in that cold region 😅
(But the energy savings are hard to argue with)
On the server side, any web server with mmap support will probably be less CPU-intensive at sending you data than an app handling WebSockets. But yes, the details matter: when reading a lot of small files, a WebSocket could be better for CPU usage, especially if you can generate the data on the fly.
But once again, plain old HTTP lets you run the speed test against any CDN very easily, IMO.
Yes, but with WebSockets you need to run a server, and that will consume some additional CPU.
Without it, you only need some random CDN to do the download test.
My guess is that WebSockets add extra overhead both in size (framing headers) and in complexity, since the browser and server need to encode/decode frames, making it more CPU-intensive, not to mention harder to implement.
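To illustrate why plain HTTP is enough for the download side, here is a minimal Python sketch that measures throughput from any HTTP endpoint (the URL is just a placeholder; any large static file on a CDN would do):

```python
# Minimal sketch: measure download throughput against any plain HTTP
# endpoint (e.g. a large static file on a CDN). No WebSocket server needed.
import time
import urllib.request

URL = "https://example-cdn.test/100MB.bin"  # placeholder, use any big file

def download_speed(url: str, chunk_size: int = 1 << 16) -> float:
    """Return measured throughput in Mbit/s."""
    start = time.monotonic()
    total = 0
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    return (total * 8 / 1_000_000) / elapsed

if __name__ == "__main__":
    print(f"{download_speed(URL):.1f} Mbit/s")
```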
Does it support intro skip?
Is 10-50 people a normal use case?
For KeePass no, for VaultWarden yes.
I just got triggered by the comment above suggesting a solution that doesn’t work for quite a lot of deployments/users. But yes, my comment was a little out of place, since for single-user deployments KeePass is probably way simpler/better.
Totally agreed, but there are pros and cons.
File - harder to steal, but once stolen an attacker can brute-force it as much as they want. Web service - with proper rate limits (and an additional IP whitelist so you can only sync over VPN/local network) it’s harder to brute-force. (But yes, you (sometimes) also have a full copy locally in the client, but …)
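As a rough illustration of why the web-service model resists online brute-forcing, here is a minimal per-IP rate-limiter sketch in Python (a generic example, not Vaultwarden’s actual implementation; the window and attempt limit are made-up values):

```python
# Sketch: per-IP rate limiting of login attempts, the kind of server-side
# control that makes online brute-forcing slow (unlike a stolen offline file).
import time
from collections import defaultdict

WINDOW = 60        # seconds (assumed value)
MAX_ATTEMPTS = 5   # allowed login attempts per IP per window (assumed value)

attempts = defaultdict(list)  # ip -> timestamps of recent attempts

def allow_attempt(ip: str) -> bool:
    """Return True if this IP may try to log in, False if rate-limited."""
    now = time.time()
    recent = [t for t in attempts[ip] if now - t < WINDOW]
    attempts[ip] = recent
    if len(recent) >= MAX_ATTEMPTS:
        return False
    attempts[ip].append(now)
    return True

if __name__ == "__main__":
    for i in range(7):
        print(i, allow_attempt("203.0.113.7"))
    # True for the first 5 attempts, then False
```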
If it were only for me, I would probably also go with KeePass, since you won’t update the same DB at the same time, but with multiple users it gets unmanageable.
I just got triggered because those CVEs are not that bad: the app encrypts everything on the client side, so the web server is more like shared file storage, while your answer suggested switching to a solution that doesn’t work for a lot of people (we already tried that).
Explain how you can use KeePass + Syncthing with 10-50 people (possibly with different groups for different passwords) having different access levels while maintaining sane ease of use?
The passwords are encrypted in the first place, so their security lives entirely on the client side.
I call BS on those estimates, as always, but who knows.
Maybe they should focus on giving people a way to access those legally? Where does that poster campaign say where to go? And secondly… as always, they still introduce the BS regional locking!
(No internet => no download = no failure)
You can even host a repo mirror locally; that will still work without internet ;)
How do you have internet without power?
Very cool project: you can host your own stream on your own terms while publishing to an open/global directory, and it also integrates with the Fediverse <3
I move unsubscribed emails to a different folder, so the next time they send me an email I don’t feel bad in any way, as I can confirm that I did tell them not to send me emails.
I only regret that I can’t flag it as spam twice.
Gauguin - Sudoku-like game for Android (on F-Droid)
The instructions are not clear at first, so it’s better to start a new game with a lower difficulty.
Just having btrfs is not enough; you need automatic snapshots (or do them manually) before doing updates, and GRUB configured to let you roll back.
Personally, I’m too lazy to configure stuff like that; I’d rather just grab my Ventoy USB from my backpack, boot into a live image, and fix it (while learning something new/interesting) than spend time preventing something that might never happen to me :)
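If you do want the snapshot-before-update setup, here is a rough sketch of the idea in Python (assuming snapper with a “root” config and pacman; in practice you’d rather use a pacman hook such as snap-pac plus something like grub-btrfs for the bootable snapshot entries):

```python
#!/usr/bin/env python3
# Rough sketch: take a pre-update snapshot, then run the system update.
# Assumes snapper is set up with a "root" config and pacman is the package
# manager; this is an illustration, not a replacement for proper hooks.
import subprocess
import sys

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    # 1. Pre-update snapshot you can roll back to from a snapshot-aware GRUB.
    run(["snapper", "-c", "root", "create", "-d", "pre-update"])
    # 2. The actual update.
    run(["pacman", "-Syu"])

if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as e:
        sys.exit(e.returncode)
```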
It first downloads all packages from the net, then proceeds totally offline: verifying the downloaded files and signatures, extracting the new packages, and finally rebuilding the initramfs.
Because Arch replaces the kernel and initramfs in place, there is a chance it will not boot if the process is interrupted.
This issue was resolved long ago on other distros.
One way to mitigate it is to keep multiple kernels installed (like LTS or hardened) so you can always pick another one in GRUB if the main one fails.
True, if you have extra money, …
It just ‘feels’ bad/wrong, like Google now has a reputation for quickly killing any project they start.