• 26 Posts
  • 1.95K Comments
Joined 6 years ago
Cake day: May 31st, 2020

  • I mean, I don’t have a ton of skin in the game here, as I don’t care much for horror games either way.
    But yeah, I assume they say they’re cautious to calm the fans, but they can’t actually be cautious: they can only really delay by a whole year at a time, and if they do that, they end up with two games in the year afterwards.

    They only pre-planned a handful of years ahead, though, so maybe they can just delay each of the following games by a year, too.

    But yeah, it still sounds like the decision-making here isn’t driven by logic or by what allows publishing good games, but rather by the Mr. Krabs meme where he says “Hello, I like money!”.



  • I really hate how it will gladly generate dozens of lines of complex algorithms when it doesn’t find the obvious solution right away, particularly because you’ll readily find colleagues who just do not care.

    They probably stop reading the code in detail once it’s sufficiently long. And when you tell them that what they’ve checked in is terrible and absolutely unreadable, they don’t feel responsible for it either, because the AI generated it.
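    To make the pattern concrete, here’s a toy sketch of my own (hypothetical code, not actual AI output): both functions check a slice for duplicates, but the first buries that intent under manual index bookkeeping, which is exactly the kind of sprawl that gets waved through review unread.

    ```rust
    use std::collections::HashSet;

    // The sprawling version: correct, but the intent drowns in bookkeeping.
    fn has_duplicates_verbose(items: &[i32]) -> bool {
        let mut i = 0;
        while i < items.len() {
            let mut j = i + 1;
            while j < items.len() {
                if items[i] == items[j] {
                    return true;
                }
                j += 1;
            }
            i += 1;
        }
        false
    }

    // The obvious version: remember what we've seen, bail on the first repeat.
    fn has_duplicates(items: &[i32]) -> bool {
        let mut seen = HashSet::new();
        items.iter().any(|x| !seen.insert(x))
    }

    fn main() {
        assert!(has_duplicates_verbose(&[1, 2, 3, 2]));
        assert!(!has_duplicates(&[1, 2, 3]));
        println!("both agree");
    }
    ```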




  • Large shared codebases never reflect a single design, but are always in some intermediate state between different software designs. How the codebase will hang together after an individual change is thus way more important than what ideal “north star” you’re driving towards.

    Yeah, learned this the hard way. I came up with an architecture to strive for 1½ years ago; we shipped the last remaining refactorings two weeks ago. It has been a ride. Mostly a ride of perpetually being low-priority, because refactorings always are.

    In retrospect, it would’ve likely been better to go for a half-assed architecture that requires less of a diff, while still enabling us to ship similar features. It’s not like the new architecture is a flawless fit either, after 1½ years of evolving requirements.

    And ultimately, architecture needs to serve the team. What does not serve the team is 1½ years of architectural limbo.


  • I mean, don’t get me wrong, I also find startup time important, particularly with CLIs. But high memory usage slows down your application in other ways, too (not just other applications on the system). You’ll have more L1/L2 cache misses, and the OS is more likely to page/swap parts of your memory out to disk.

    Of course, I can’t sit in front of an application and tell that a non-local NUMA memory access caused a particular slowness either, so I can understand not really being able to care about iterative improvements. But yeah, that is also why I quite like using an efficient stack outright. It just makes computers feel as fast as they should be, without me having to worry about it.
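    As a rough sketch of the cache point (my own toy benchmark, nothing rigorous): summing a contiguous Vec<u64> reads memory linearly and keeps the prefetcher happy, while chasing one Box pointer per element scatters reads across the heap.

    ```rust
    use std::hint::black_box;
    use std::time::Instant;

    // Toy benchmark, run with `cargo run --release`. Not rigorous, but the
    // pointer-chasing variant should measurably lag behind the linear scan.
    fn main() {
        const N: u64 = 10_000_000;

        let contiguous: Vec<u64> = (0..N).collect();
        let boxed: Vec<Box<u64>> = (0..N).map(Box::new).collect();

        // Linear scan over contiguous memory: cache- and prefetcher-friendly.
        let t = Instant::now();
        let sum: u64 = black_box(&contiguous).iter().sum();
        println!("contiguous: sum {sum} in {:?}", t.elapsed());

        // One heap dereference per element: far more cache misses.
        let t = Instant::now();
        let sum: u64 = black_box(&boxed).iter().map(|b| **b).sum();
        println!("boxed:      sum {sum} in {:?}", t.elapsed());
    }
    ```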


    Side-note

    I heavily considered ending this comment with this dumbass meme:

    Rust fast (aroused unga bunga)

    Then I realized, I’m responding to someone called “Caveman”. Might’ve been subconscious influence there. 😅






  • This isn’t Reddit. You don’t need to talk in absolutes.

    Similar to WittyShizard, my experience is very different. Said Rust application uses 1200 dependencies and I think around 50 MB RAM. We had a Kotlin application beforehand, which used around 300 dependencies and 1 GB RAM, I believe. I would expect a JavaScript application of similar complexity to use a similar amount or more RAM.

    And more efficient languages do have an effect on RAM usage, for example:

    • Not using garbage collection means objects generally get cleared from RAM quicker.
    • Iterating over substrings or list elements is likely to be implemented more efficiently; for example, Rust has string slices and explicit .iter() + .collect() (see the sketch after this list).
    • People in the ecosystem will want to use the language for use-cases where efficiency is important and then help optimize libraries.
    • You’ve even got stupid shit: in garbage-collected languages, for example, it has traditionally been considered best practice that if you’re doing async, you should use immutable data types and always create a copy of them when you want to update them. That uses a ton of RAM for stupid reasons.
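
    To illustrate the string-slice point from above, a minimal sketch: splitting a Rust string yields &str slices that borrow the original buffer, so nothing gets copied until you explicitly ask for owned values.

    ```rust
    fn main() {
        let csv = String::from("alpha,beta,gamma");

        // `split` yields &str slices into csv's existing buffer:
        // no new allocations for the field contents themselves.
        let fields: Vec<&str> = csv.split(',').collect();
        println!("{fields:?}");

        // Memory is only duplicated once you explicitly opt into owned copies.
        let owned: Vec<String> = csv.split(',').map(str::to_owned).collect();
        println!("{owned:?}");
    }
    ```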

  • Yeah, gonna be interesting. Software companies working on consumer software often don’t need to care, because:

    • They don’t need to buy the RAM that they’re filling up.
    • They’re not the only culprit on your PC.
    • Consumers don’t understand how RAM works nearly as well as they understand fuel.
    • And even when consumers understand that an application is using too much, they may not be able to switch to an alternative either way, see for example the many chat applications written in Electron, none of which are interoperable.

    I can see somewhat of a shift happening for software that companies develop for themselves, though. At $DAYJOB, we have an application written in Rust and you can practically see the dollar signs lighting up in the eyes of management when you tell them “just get the cheapest device to run it on” and “it’s hardly going to incur cloud hosting costs”.
    Obviously, this alone rarely leads to management deciding to rewrite an application/service in a more efficient language, but it certainly makes them more open to devs wanting to use these languages. Well, and who knows what happens if the prices for Raspberry Pis, cloud hosting and such end up skyrocketing similarly.


  • The problem is that it sounds like a riddle. In a riddle, you’re traditionally supposed to work within the rules that you’ve been told. So, not thinking outside the box here is not an indication that the person isn’t capable of doing so.

    Of course, if I encountered this problem in real life, I’d ask Carol from accounting to check the other room, while I flip the switches. But my instinctive answer was that it is not possible, because I assumed it to be a riddle and the provided rules did not allow a solution.






  • Ah shit, here we go again.

    I almost expected someone to learn that just from me posting. 😅

    Basically, OpenOffice used to be organized by Sun Microsystems. Then Sun got bought by Oracle back in 2010.
    Oracle does not have a good reputation at all, so the OpenOffice devs from back then figured they’d need to take things into their own hands and set up The Document Foundation to organize further development. But the OpenOffice trademark was owned by Sun/Oracle, so they had to rename and get a new homepage and everything. The name they chose is LibreOffice: https://www.libreoffice.org/

    After the OpenOffice project was effectively dead, Oracle handed it and its trademark over to the Apache Foundation, where it’s seeing occasional bug fixes. But to my knowledge, they don’t even have the capacity to fix all the security problems.
    All the actual feature development happens over on the LibreOffice side.

    So, in practice, if you want OpenOffice, what you really want is LibreOffice.