Three ways to build high-performance software

0. Do not bother about performance

This is not really a strategy. You simply ignore the performance aspects of your application for as long as you can afford to. Sooner or later your customers, if you have any, will bring it to your attention.

1. Budget performance on all levels

As your application is split into subsystems and components, you assign strict performance requirements to all of its levels. This strategy is normally used only when missing any deadline in your application causes real trouble; such software is usually called real-time software.

Essentially, performance requirements are treated just like functional requirements. Similarly to how you approach the development of regular use cases, you should write a test first and let it guide the development of production code. On each level, microbenchmark test cases should be attached to the code and continuously ensure that the time/speed budget assigned to the component is not overspent. This, by induction, guarantees that the overall performance requirements are satisfied. Knowing how much “speed budget” is consumed by individual components allows you to confidently make balancing decisions about which places have to be changed.
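
As an illustration, here is a minimal sketch of what such a budget-enforcing microbenchmark could look like in Python. The parse_request function and its 2-millisecond budget are invented for the example; any component with an assigned budget could be wrapped the same way.

    import time
    import unittest

    # Hypothetical component under test; made up for this example.
    def parse_request(payload: bytes) -> dict:
        return {"size": len(payload)}

    class ParseRequestBudgetTest(unittest.TestCase):
        BUDGET_SECONDS = 0.002  # the "speed budget" assigned to this component
        ITERATIONS = 1000       # average over many runs to reduce timer noise

        def test_budget_is_not_overspent(self):
            payload = b"x" * 4096
            start = time.perf_counter()
            for _ in range(self.ITERATIONS):
                parse_request(payload)
            per_call = (time.perf_counter() - start) / self.ITERATIONS
            self.assertLess(per_call, self.BUDGET_SECONDS,
                            f"budget overspent: {per_call:.6f} s per call")

    if __name__ == "__main__":
        unittest.main()

Such tests run alongside the regular functional suite, so a component that outgrows its budget fails the build instead of silently eating into the margins of its neighbors.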

The usual caveats apply:

  1. It is more expensive to develop software this way, simply because there are more “must-have” requirements to satisfy.
  2. A compromise between latency and bandwidth has to be made. Either the application guarantees that a fixed latency threshold is never exceeded for any transaction entering it, or it tries to crunch as many concurrent transactions as possible, without guaranteeing that all of them will finish within a predetermined amount of time.

2. Think about performance all the time

You constantly think about the performance of the code as you write it. You use the fastest, most sophisticated big-O algorithms; you inline everything. You share all the data; you pack it to consume the least amount of memory. Or the opposite: you watch the alignment of structures’ fields closely to ensure they can be accessed fast.
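
As a small illustration of that last packing-versus-alignment trade-off, here is a sketch using Python’s ctypes; the structure and field names are invented for the example.

    import ctypes

    class NaturallyAligned(ctypes.Structure):
        # Default layout: padding is inserted after `flag` so that the
        # 8-byte `value` field starts on an 8-byte boundary.
        _fields_ = [("flag", ctypes.c_uint8), ("value", ctypes.c_double)]

    class TightlyPacked(ctypes.Structure):
        # _pack_ = 1 removes the padding: the structure is smaller, but
        # `value` becomes misaligned and may be slower to access on some
        # architectures.
        _pack_ = 1
        _fields_ = [("flag", ctypes.c_uint8), ("value", ctypes.c_double)]

    print(ctypes.sizeof(NaturallyAligned))  # typically 16
    print(ctypes.sizeof(TightlyPacked))     # 9

Whether saving those bytes actually helps depends entirely on how the data is accessed, which is exactly the kind of thing that is hard to guess without measuring.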

The end result, however, is always disappointing. This is especially true if you do not measure anything as you go. You do not reach your performance goals.

At the same time, you complicate your code base with misplaced optimizations. You get less clarity, more duplication, and, in general, a slower development pace. In the majority of places, the presence of those optimizations does not make any measurable speed difference, because those pieces of code are “cold”: they are executed very infrequently.

Premature optimization is bad. There are countless examples confirming this postulate, documented throughout the decades of the software industry’s existence. Humans are bad at predicting where the time gets spent in programs; we consistently make wrong predictions. That is why you should measure it at the levels where it matters, and use a profiler to learn where the time is actually spent.
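
For instance, a few lines with Python’s standard cProfile module are enough to find out where time actually goes; the hot_loop function below is just a stand-in for real application code.

    import cProfile
    import pstats

    def hot_loop(n: int) -> int:
        # Stand-in for whatever the profiler later reveals as the real hot spot.
        return sum(i * i for i in range(n))

    profiler = cProfile.Profile()
    profiler.enable()
    hot_loop(1_000_000)
    profiler.disable()

    # Show the five most expensive entries, sorted by cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)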

3. Monitor performance; address as deemed necessary

You write your code incrementally, without much regard for the performance of newly added pieces (without making obvious blunders, of course). You do not obsessively measure the performance of your application, but you do have an understanding of its expected level. As soon as you observe an unexpected and undesirable degradation, you get your profiler and reproduce the problem with it. Then you gather enough information about the reason for the degradation, and you fix it (preferably after writing a new test for the problem).
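
A sketch of what such a test could look like, assuming the profiler pointed at a hypothetical build_report function and the acceptable time was derived from the previously observed performance level:

    import time
    import unittest

    # Hypothetical function that profiling identified as the source of the slowdown.
    def build_report(rows):
        return "\n".join(",".join(map(str, row)) for row in rows)

    class BuildReportRegressionTest(unittest.TestCase):
        # Threshold derived from the performance level observed before the
        # degradation, with headroom for slower or noisier test machines.
        ACCEPTABLE_SECONDS = 0.5

        def test_report_generation_does_not_regress(self):
            rows = [(i, 2 * i, 3 * i) for i in range(100_000)]
            start = time.perf_counter()
            build_report(rows)
            self.assertLess(time.perf_counter() - start, self.ACCEPTABLE_SECONDS)

    if __name__ == "__main__":
        unittest.main()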

Why is it good?

  • You do not waste time unproductively by optimizing the wrong things in the wrong places. Instead, you focus on keeping your architecture clean.
  • You address problems that were observed in the wild, by modifying the code that makes a difference performance-wise, based on the data collected.
  • Nothing prevents you from budgeting for performance at the levels where it makes sense.

What are the trade-offs?

  • You do not always catch performance degradations preemptively.
  • You cannot guarantee strict upper bounds on latency of transactions passing through your application.
  • You have to maintain quite a good feel for how fast your application should be, in order to catch unexpected slowdowns in a timely manner.

Conclusions

I think it should be clear by now that option 2, thinking about performance all the time, is the worst. Essentially, it is doing the wrong thing. The budgeting approach is sound, but usually overly restrictive for the majority of general-purpose software.

A well-balanced, data-driven, continuous performance optimization effort is what we should be after.


Written by Grigory Rechistov on 10.10.2022. Tags: performance

