Why time and space complexity is important for your job

Most software developers are always looking to improve their skills and experience. What is the latest cutting-edge technology I can learn? Which antipatterns should I avoid? What is new in my favorite framework? These things matter for all of us and for the companies we work for, but in this chaotic mess of libraries, conventions and patterns we can easily forget a basic principle we learned when we studied algorithm design: an algorithm should be efficient. But is it really that important? When should we worry about it?

What is time/space complexity?

Let’s first refresh these concepts:

  • Runtime complexity: the amount of time an algorithm takes to complete its execution.
  • Space complexity: the amount of memory an algorithm needs to execute.
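
To make the trade-off concrete, here is a minimal sketch of two ways to detect a duplicate in an array (the function names are just illustrative): the first keeps the extra memory constant but pays with time, the second spends memory to save time.

```typescript
// O(n²) time, O(1) extra space: compare every pair of elements.
function hasDuplicateQuadratic(values: number[]): boolean {
  for (let i = 0; i < values.length; i++) {
    for (let j = i + 1; j < values.length; j++) {
      if (values[i] === values[j]) return true;
    }
  }
  return false;
}

// O(n) time, O(n) extra space: remember the values already seen.
function hasDuplicateLinear(values: number[]): boolean {
  const seen = new Set<number>();
  for (const value of values) {
    if (seen.has(value)) return true;
    seen.add(value);
  }
  return false;
}
```

Neither version is "the right one" in general; which trade-off you want depends on how the input and the available memory scale.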

Pretty simple, right? Yes, it can be; the real problem is knowing when this becomes something to worry about. Depending on what kind of software you are developing and where that code is going to run, it may or may not have a serious impact on the program.

For example, if you are developing a JavaScript program, the speed and memory available to your software depend on the client's machine where it is executed. Just because your product runs smoothly on the octa-core computer you use to code, it doesn't mean it won't affect your customers.

The risk of inefficient algorithms

Maybe, as an employee, you don't see a real urgency in writing your code in the most efficient way possible. Nowadays there are other quality standards that companies put a lot of effort into ensuring, such as following conventions or guidelines to keep all software engineers aligned, or writing clean code so it can be maintained easily. Efficiency is compatible with all of those things, but sometimes harder to track.

Customer bugs are the worst kind of urgency you want for your products. The effects of inefficient algorithms are normally noticed when it is too late and they are already affecting customers. Even if your company has the capacity to respond quickly to these issues, it will take a lot of effort to locate the problem and to refine all the logic that can be improved in runtime or space. This can produce a snowball effect, leading to more issues as you modify many lines of code, and the relationship with your customer could end very badly.

Of course, it all depends on where your software scales: if you are developing accounting products, the risk probably resides in how many transactions they can manage at the same time; if you are developing online games, the risk could be how many players your server can hold.

How to prevent inefficient algorithms

  • Analyse your algorithms' complexity: there are several mathematical notations we can use for our measures:
    • Big Oh (O): worst-case scenario.
    • Big Omega (Ω): best-case scenario.
    • Big Theta (Θ): average-case scenario.

Notice that these notations are re-interpreted in Computer Science to describe best/average/worst cases, but they have different mathematical definitions. Big Oh is normally the notation used to measure an algorithm's complexity, and for a good reason: we are not interested in knowing how long our algorithms take in the best situations. The purpose of the analysis is to detect the computational risks that our code can introduce into the whole system.
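
For instance, a minimal sketch of a linear search makes the difference concrete: the best case tells us almost nothing about the risk, while the worst case is the bound we actually care about.

```typescript
// Linear search: in the best case the target sits at index 0 (Ω(1));
// in the worst case it is last or missing and we scan everything (O(n)).
// Big Oh describes that worst case, which is the risk we want to bound.
function linearSearch(values: number[], target: number): number {
  for (let i = 0; i < values.length; i++) {
    if (values[i] === target) return i; // best case: found immediately
  }
  return -1; // worst case: the whole array was scanned
}
```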

A good analysis of a complex program requires a lot of experience and knowledge to determine the exact complexity. If you find it really difficult to determine the complexity of your code, you can at least identify the variables your algorithm depends on. For instance, imagine you want to calculate a row total in a table: maybe your code is hard to follow and you don't know exactly whether your script performs in O(n²) or O(n³). However, you do know what n is, in this case the number of rows of the table. If the rows of the table are likely to grow a lot, you have already spotted a real risk in your product.
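
As a rough sketch of that row-total idea (assuming the table is simply an array of rows of numbers), the exact analysis matters less than knowing that the cost grows with the number of rows:

```typescript
// Summing every row of a table: with r rows and c columns this is
// O(r × c). If the column count stays fixed, the cost grows linearly
// with the rows — the variable most likely to scale in production.
function rowTotals(table: number[][]): number[] {
  return table.map(row => row.reduce((sum, cell) => sum + cell, 0));
}
```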

  • Run volume/performance tests: even if your company takes writing fast algorithms seriously, it will be hard to guarantee unless you have competent developers reviewing what you do, or good performance/volume tests running on a constant basis. The design of these tests should focus on the variables that can potentially scale and cause problems for your customers. Knowing where our limits are will also help us measure the complexity of the whole system; a minimal sketch follows below.
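
A full performance suite is beyond the scope of this post, but even a minimal sketch like this one (assuming a Node-style runtime with a global performance timer) makes scaling visible: run the same operation at growing input sizes and watch how the cost grows.

```typescript
// Time one operation at several input sizes to see how it scales.
function buildInput(size: number): number[] {
  return Array.from({ length: size }, (_, i) => (i * 7919) % size);
}

function timeIt(label: string, fn: () => void): void {
  const start = performance.now();
  fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(1)} ms`);
}

for (const size of [10_000, 100_000, 1_000_000]) {
  const input = buildInput(size);
  timeIt(`sort n=${size}`, () => [...input].sort((a, b) => a - b));
}
```

Plotting those timings against the input size is often enough to tell a linear routine from a quadratic one long before a customer does.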

When efficiency degrades readability

This situation is more common than you think. When it happens, the usual question is: so which is more important, readability or efficiency? The answer is both, so depending on the context of your project, a decision has to be made.

For example, if you are working in a small team on an interactive system, you will probably give more importance to efficiency, but that doesn't mean you are going to use bitwise operations for every problem.
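
A tiny, deliberately exaggerated sketch of that trade-off:

```typescript
// Two equivalent parity checks. The bitwise version may shave a few
// nanoseconds in some engines, but the readable one states its intent;
// unless this sits in a hot loop, the clearer version should win.
const isEvenReadable = (n: number): boolean => n % 2 === 0;
const isEvenBitwise = (n: number): boolean => (n & 1) === 0;
```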

You have to find the balance that works for your case and make sure that everyone you work with understands that balance.

Wrapping up

Efficiency, in general, has been losing importance compared with other code standards. It's not important until it is, and when that happens it's too late and customers pay the consequences. Just by tracking where your program scales and analysing the critical algorithms with some kind of measure, you can prevent a bad experience for your customers.
