

stares at the 300-line shell script + Ansible mess that updates/sets up Forgejo, Nextcloud, and Ghost CMS
“Yes… It’s automated”
Exactly. Unless you are actively doing maintenance, there is no need to remember what DB you are using. It took me 3 minutes just to remember my Nextcloud setup, since it's fully automated.
It’s the whole point of using tiered services. You look at stuff at the layer you are on. Do you also worry about your wifi link-level retransmissions when you are running curl?
I may be biased (PhD student here) but I don't fault them for being that way. Ethics is something that 1) requires formal training, 2) requires oversight, and 3) means something different to every person. Quite frankly, it's not part of their training, it has never been emphasized as part of their training, and it's subjective based on cultural experience.
What is considered an unreasonable risk of harm is going to be different for everybody. To me, if the entire design runs locally and does not collect data for Google's use, then it's perfectly ethical. That being said, this does not prevent someone else from adding data collection features later. I think the original designers of such a system should put a reasonable amount of effort into preventing that. But if that is done, then there's nothing else to blame them for. The moral responsibility lies with the one who pulled the trigger.
Should the original designers have anticipated this issue and thus never taken the first step? Maybe. But that depends on a lot of circumstances that we don't know, so it's hard to say anything meaningful.
As for the "more harm than good" analysis, I absolutely detest that sort of reasoning, since it attempts to quantify social utility in a purely mathematical sense. If that reasoning holds, an extreme example would be justifying harm to any minority group as long as it maximizes benefit for society as a whole. Basically Omelas. I believe a better test would be checking whether harm is introduced to ANY group of people; as long as that's the case, the whole thing should be considered unethical.
This is common for companies that like to hire PhDs.
PhDs like to work on interesting and challenging projects.
With nobody to rein them in, they do all kinds of cool stuff that makes no money (e.g. Intel Optane and transactional memory).
Designing a realtime scam analysis tool with resource constraints is interesting enough to be greenlit but makes no money.
Once released, they’ll move on to the next big challenge, and when nobody is there to maintain their work, it will be silently dropped by Google.
I’m willing to bet more than 70% of the Google graveyard comes from projects like these.
Tail latency with a swallow tail.
Why play chess with Moriarty when you can just bash him in the head with a chessboard?
I keep hearing good things; however, I have not yet seen any meaningful results for the stuff I would use such a tool for.
I've been working on network function optimization at hundreds of gigabits per second for the past couple of years. Even with MTU-sized packets you are only given approximately 200 ns of processing time per packet (that's assuming no batching). Optimizations generally involve manual prefetching and using/abusing NIC offload features to minimize atomic instructions (this also runs on ARM, where gcc compiles an atomic fetch-and-add into a function that does lw, ll, sc and takes approximately 8 times the regular memory access time for a write). Current AI-assisted agents cannot generate efficient code that runs at line rate. There are no textbooks or blogs that give a detailed explanation of how these things work, so there are no resources for them to be trained on.
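To make the kind of optimization I mean a bit more concrete, here is a rough sketch of the manual-prefetching pattern in a batched receive loop. `struct pkt` and `handle_packet()` are hypothetical stand-ins for whatever the real pipeline uses (e.g. DPDK's rte_mbuf), not code from my actual project:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical packet descriptor and per-packet work function. */
struct pkt { uint8_t *data; uint32_t len; };
void handle_packet(struct pkt *p);

/* Process a burst of packets, prefetching the next packet's header
 * while the current one is being handled, so part of the DRAM latency
 * is hidden behind useful work. */
void process_burst(struct pkt **burst, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (i + 1 < n)
            __builtin_prefetch(burst[i + 1]->data, 0 /* read */, 3 /* keep in cache */);
        handle_packet(burst[i]);
    }
}
```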
You'll find a similar problem if you try to prompt them to generate good RDMA code. At best you'll get something that barely works, and almost always the code cannot efficiently utilize the latency reduction RDMA provides over traditional transport protocols. The generated code usually looks like how a graduate CS student might imagine RDMA works, but it is usually completely unusable, either requiring additional PCIe round-trips or having severe thrashing issues with main memory.
My guess is that these tools are ridiculously good at stuff they can find examples of online. However, for stuff that has no examples, they are woefully underprepared, and you still need a programmer to manually do the work line by line.
As much as I hate the concept, it works. However:
It only works for generalized programming (e.g. write a Python script that parses CSV files). For any specialized field this does NOT work (e.g. write a DPDK program that identifies RoCEv2 packets and rewrites the IP address; see the sketch after this list).
It requires the human supervising the AI agent to know how to write the expected code themselves, so they can prompt the agent to use specific techniques (e.g. use python’s csv library instead of string.split). This is not a problem now since even programmers out of college generally know what they are doing.
If companies try to use this to avoid hiring/training skilled programmers, they will have a very bad time in the future when the skilled talent pool runs dry and nobody knows how to tell correctly from incorrectly written code.
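For reference, the "specialized" example from the first point would look roughly like this. This is a hand-written sketch against the DPDK API as I remember it (RoCEv2 rides on UDP destination port 4791); VLAN/IPv6 handling and checksum offload are left out, so treat the exact names and assumptions as illustrative, not as a drop-in implementation:

```c
#include <netinet/in.h>     /* IPPROTO_UDP */
#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_udp.h>
#include <rte_mbuf.h>

#define ROCEV2_UDP_DST_PORT 4791

/* If the mbuf carries a RoCEv2 packet (untagged IPv4/UDP, dst port 4791),
 * rewrite the IPv4 destination address and fix the header checksum.
 * Assumes all headers sit in the first mbuf segment. */
static void rewrite_rocev2_dst(struct rte_mbuf *m, uint32_t new_dst_be)
{
    struct rte_ether_hdr *eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
    if (eth->ether_type != rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4))
        return;

    struct rte_ipv4_hdr *ip = (struct rte_ipv4_hdr *)(eth + 1);
    if (ip->next_proto_id != IPPROTO_UDP)
        return;

    /* IHL is the low nibble of version_ihl, in 32-bit words. */
    struct rte_udp_hdr *udp = (struct rte_udp_hdr *)
        ((uint8_t *)ip + ((ip->version_ihl & 0x0f) * 4));
    if (udp->dst_port != rte_cpu_to_be_16(ROCEV2_UDP_DST_PORT))
        return;

    ip->dst_addr = new_dst_be;              /* already in network byte order */
    ip->hdr_checksum = 0;
    ip->hdr_checksum = rte_ipv4_cksum(ip);  /* or push this to NIC offload */
    /* RoCEv2 commonly carries a zero UDP checksum; if it isn't zero,
     * it would need updating as well. */
}
```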
There's also the change from circuit switching to packet switching, which drastically changes how the handover process works.
tl;dr: handover in 5G is buggy and barely works. The whole business of switching from one service area to another in the middle of a call is held together by hopes and dreams.
Somehow I disagree with both the premise and the conclusion here.
I dislike direct answers to things, as they discourage understanding. What is the default memory allocation mechanism in glibc malloc? I could get the answer, sbrk() and mmap(), and call it a day, but I find understanding when it uses mmap instead of sbrk (since sbrk isn't NUMA-aware but mmap is) way more useful for future questions.
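To make that concrete, here's a small glibc-specific sketch. M_MMAP_THRESHOLD defaults to roughly 128 KiB and is adjusted dynamically, so the exact cutover varies; the "expected" comments are what I'd anticipate rather than guaranteed behavior (run it under strace to see the actual brk/mmap calls):

```c
#include <malloc.h>   /* mallopt, M_MMAP_THRESHOLD (glibc-specific) */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Pin the threshold so the cutover is easy to observe under strace. */
    mallopt(M_MMAP_THRESHOLD, 64 * 1024);

    void *small = malloc(4 * 1024);     /* expected: served from the brk/sbrk-grown heap */
    void *large = malloc(256 * 1024);   /* expected: served by an anonymous mmap() */

    printf("small=%p large=%p\n", small, large);

    free(small);                        /* goes back to the heap free lists */
    free(large);                        /* mmap-backed chunk is munmap()ed */
    return 0;
}
```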
Meanwhile, Google adding a tab for AI search is helpful for people who want to use just AI search. It doesn't take much away from people doing traditional web searches. Why be mad about this instead of the other genuinely questionable decisions Google is making?
Nope. Plenty of people want this.
In the last few years I’ve seen plenty of cases where CS undergrad students get stumped if ChatGPT is unable to debug/explain a question to them. I’ve literally heard “idk because ChatGPT can’t explain this lisp code” as an excuse during office hours.
Before LLMs, there was also a significant number of people who used GitHub issues/Discord to ask simple application usage questions instead of Googling. There seems to be a significant decrease in people's willingness to search for an answer, regardless of AI tools existing.
I wonder if it has to do with weaker reading comprehension skills?
Agreed. Personally I think this whole thing is bs.
A routine that just returns “yes” will also detect all AI. It would just have an abnormally high false positive rate.
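To put made-up numbers on that (base rate chosen purely for illustration):

```c
#include <stdio.h>

int main(void)
{
    /* Pretend 5% of 1000 submissions are actually AI-written. */
    double total = 1000.0, ai = 50.0, human = total - ai;

    /* The always-"yes" detector flags everything. */
    double tp = ai, fp = human;

    printf("recall              = %.0f%%\n", 100.0 * tp / ai);          /* 100% */
    printf("false positive rate = %.0f%%\n", 100.0 * fp / human);       /* 100% */
    printf("precision           = %.1f%%\n", 100.0 * tp / (tp + fp));   /* 5.0% */
    return 0;
}
```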
systemd tries to unify a Wild West situation where everyone, their crazy uncle, and their shotgun-dual-wielding grandma has a different set of boot-time scripts. Instead of custom 200-line shell scripts, you now have a standard, simple syntax that takes 5 minutes to learn.
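For example, a custom boot script that used to be wired up by hand ends up as a small unit file roughly like this (the name and path are made up):

```ini
# /etc/systemd/system/my-startup-tweaks.service  (hypothetical)
[Unit]
Description=Apply my custom startup tweaks
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/my-startup-tweaks.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```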
The downside is that certain complicated things that used to be 1 line now need multiple files' worth of workarounds. Additionally, any custom scripts need to be rewritten as systemd services (assuming you don't use the compat mode).
People are angry that it's not the same as before and that they need to rewrite any custom tweaks they have. It's like learning to drive manual for years, wondering why the heck anyone needs an automatic, then realizing nobody produces manual cars anymore.
Iirc the specific reason behind this is that nvim doesn't give :! shell commands a tty (it runs them over pipes). As a result, sudo (without args) can't work in nvim, since there is no tty on which to prompt the user for a password. Nvim also used to do what vim does, but they found that spawning the tty was causing other issues (still present in vim), so they changed it.
:w !sudo tee %
Warning: does not work in neovim. (The trick pipes the buffer to `sudo tee`, which writes it back to the file as root, but sudo still needs a tty for the password prompt, which neovim doesn't provide.)
My personal complaints (despite enjoying the gameplay):
Input lag. It's negligible compared to other games, but compared to DDDA it feels much higher (meh vs. "oh wow, this is smooth!").
FSR. There is definitely something wrong with the FSR implementation here, because there are minor traces of ghosting that are not present in other games. Rotate your character in the character selection screen, or look at a pillar with water as the backdrop with light rays nearby. That being said, it becomes less obvious during actual gameplay. I do hope that this will be fixed though.
Been playing it since release and I have to say I quite like it. The mtx are less intrusive than Dragon Age: Origins' DLC (no mention in-game at all, versus "There's a person bleeding out on the road; if you want to help him, please go to the store page").
So far, the game is a buttery-smooth 60 fps at 4K max graphics + FSR3 without ray tracing, except inside the capital city (running a 7800X3D with a 7900 XTX). The only graphics complaint I have is that the FSR implementation is pretty bad, with small amounts of ghosting under certain lighting conditions. There's also a noticeable amount of input lag compared to the first game: not game-breaking, but if you do a side-by-side comparison it's pretty obvious.
Sure, the game has its issues, but right now it looks like something I enjoy. Games don't need to be masterworks to be fun (my favorite games are some old niche JRPGs that were absolutely demolished by reviewers at the time), and right now I think it's money well spent.
A tumbleweed rolls in the distance…
If I'm using Rocky 10 on my personal laptop, did I stray that far off the chart?
I even have the latest Firefox and emacs running on it!