The most famous quote from Knuth’s paper “Structured Programming with go to Statements” is this: “There is no doubt that the grail of efficiency leads to abuse. Programmers waste e…”
A nice post, and certainly worth a read. One thing I want to add is that some programmers - good and experienced programmers - often put too much stock in the output of profiling tools. These tools can give a lot of details, but lack a bird’s eye view.
As an example, I’ve seen programmers attempt to optimise memory allocations again and again (custom allocators etc.), or optimise a hashing function, when a broader view of the program showed that many of those allocations or hashes could be avoided entirely.
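A made-up Python sketch of that pattern (the function names and the SHA-256 use are my illustration, not from the blog): instead of tuning the hash function itself, notice that the program hashes the same inputs over and over and avoid the repeated work entirely.

```python
import hashlib
from functools import lru_cache


def fingerprint(blob: bytes) -> str:
    # The "hot" function a profiler would point at.
    return hashlib.sha256(blob).hexdigest()


# The broader fix: the program calls this with the same few blobs
# thousands of times, so cache the result instead of micro-optimizing
# the hash. The whole cost disappears from the profile.
@lru_cache(maxsize=None)
def fingerprint_cached(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()
```

The trade-off is memory for the cache, which is why the bird’s-eye view matters: only the surrounding program can tell you whether the inputs actually repeat.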
In the context of the blog: do you really need a multiset, or would a simpler collection do? Why are you even keeping the data in that set - would a different algorithm work without it?
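A hypothetical Python illustration of the multiset question (my example, not the blog’s code): if a `Counter` is only used to check whether anything appears more than once, a plain set does the job, and it can stop early.

```python
from collections import Counter


def has_duplicates_multiset(items):
    # Builds and keeps a full count for every element, even though
    # all we ask afterwards is "is any count > 1?".
    counts = Counter(items)
    return any(v > 1 for v in counts.values())


def has_duplicates_set(items):
    # A plain set is enough, and it can bail out at the first repeat
    # instead of scanning the whole input.
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Both return the same answer; the second needs less state and less work in the common case.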
When you see that some inner loop is taking a lot of your program’s time, first ask yourself: why is this loop running so many times? Only then should you start thinking about how to make a single iteration faster.
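A minimal sketch of what I mean, in Python (the scenario is invented): a profiler will flag the membership test as hot and tempt you to speed up a single check, when the real problem is how many times the loop body runs at all.

```python
def common_items_quadratic(a, b):
    # The `x in b` scan runs len(a) * len(b) times in the worst case;
    # a profiler shows it as hot, tempting a micro-optimization.
    return [x for x in a if x in b]


def common_items_linear(a, b):
    # Asking "why does this run so many times?" leads to the real fix:
    # a set makes each membership test O(1), so the total work drops
    # to roughly len(a) + len(b).
    b_set = set(b)
    return [x for x in a if x in b_set]
```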
You don’t even need to go down to a low level. Lots of programmers forget that their applications don’t run on a piece of paper - they run in a real environment.
My team at work once had an app running on Kubernetes, and it had a memory leak, so its pod would get terminated every few hours. Since there were multiple pods, this had effectively no impact on the clients.
The app in question was otherwise “done”, there were no new features needed, and we hadn’t seen another bug in years.
When we transferred ownership of the app to another team, they insisted on finding and fixing the memory leak. They spent almost a month finding the leak and refactoring the app. The practical effect was nil - in fact, due to normal pod scheduling, it didn’t even buy each individual pod much extra lifetime.
I get your point, but I do not think you should justify releasing crap code because you think it has minimal impact on the customer. A memory leak is a bug and just should not be there.
deleted by creator
… and this is how IT ends up being responsible for a breach.
It is a bug - unexpected and undefined behavior - and it shouldn’t have been there.
You are not wrong about the outcome from a capitalist view, but otherwise you are.
In project management lore there is the triple constraint: time, money, features. But there is another insidious dimension that isn’t talked about: risk.
The natural progression in a business, if there is no push back, is that management wants every feature under the sun, now, and for no money. So the project team does the only thing it can: increase risk.
The memory leak is an example of risk. It is also an example of some combination of poor project management - including insufficient push back against management insanity - and bad business management in general, which might be an even bigger problem.
My point: this is a common, natural path of things, but it does not always have to be tolerated.
Exactly.
If you exit with a memory leak, Linux has to mop up after you.