Today, we live in an era of great technological advances, abundant resources and a seemingly infinite supply of data. We can learn about anything, from anywhere, at any time, at a reasonable cost. The devices we use every day are getting more powerful and cheaper day after day, which makes our problems easier and faster to solve.

Some of the most important and most frequently used operations in computer science, even today, are sorting operations. They rely on simple principles and complex algorithms to give the best result. Each of these algorithms differs in structure and execution speed. Does that speed depend solely on the complexity of the algorithm, or does it also depend on the computing power of the device we use?

To get to the answer to this popular question, I analyzed a few tests and measured the execution times of the most widely used and well-known sorting algorithms on computers with different processing power.

The execution time of an algorithm depends directly on the size of the input data and on the complexity and efficiency of the algorithm itself. If we feed it more or larger data, a sorting algorithm will take more time to produce its results. Likewise, if the algorithm is more complex, the results will be computed more slowly; if the algorithm is constructed more efficiently, we will get the results faster.
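We can observe this relationship directly. The sketch below (my own illustration, not part of the original tests) times Python's built-in sort on progressively larger random inputs; the measured time grows with the input size:

```python
import random
import time

def time_sort(n):
    """Time Python's built-in sort (Timsort) on n random integers.

    Returns the elapsed wall-clock time in seconds.
    """
    data = [random.randint(0, n) for _ in range(n)]
    start = time.perf_counter()
    data.sort()
    return time.perf_counter() - start

if __name__ == "__main__":
    # Larger inputs take measurably longer to sort.
    for n in (10_000, 100_000, 1_000_000):
        print(f"n = {n:>9}: {time_sort(n):.4f} s")
```

Exact numbers will vary between machines, but the trend of time growing with input size holds everywhere.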

The execution time of these algorithms is directly proportional to their power consumption. After running a few tests, it turned out that some algorithms run faster than others, and those with lower CPU time tend to consume less power. It also means that a linearly constructed algorithm will compute its results no faster on a state-of-the-art computing machine than on a computer with less processing power.

The reason for this lies in how the algorithm is constructed. A linear (sequential) algorithm uses a single thread of the CPU and calculates linearly: it performs its operations in order, one at a time. As a consequence, we do not need greater processing power, because the algorithm will not use it, and it will not affect the execution time.
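A classic example of such a sequential algorithm is insertion sort, sketched below for illustration. Every comparison and shift happens one after the other on a single thread, so extra cores sit idle:

```python
def insertion_sort(items):
    """Sort a list sequentially: one comparison at a time, on one thread."""
    result = list(items)  # work on a copy, leave the input untouched
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements one position right, one step at a time.
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result
```

No step here can begin before the previous one finishes, which is exactly why adding processing power does not help.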

The processors we use today are multi-core and multi-threaded, with high clock speeds and low power consumption. To use them to speed up the execution of sorting algorithms, we first have to reconstruct the algorithm from a linear into a parallel one. This speeds up execution because the parallel algorithm now uses all the threads of the multi-core processor instead of one. We could accelerate the calculations even further using the GPU's CUDA processing, but that is beyond the scope of this discussion.
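One common way to reconstruct a sort in parallel is to split the input into chunks, sort each chunk on a separate worker, and then merge the sorted runs. The `parallel_sort` helper below is a minimal sketch of that idea, assuming a machine where `multiprocessing` can spawn worker processes; it is not the specific implementation measured in the tests above:

```python
import heapq
from multiprocessing import Pool

def parallel_sort(data, workers=4):
    """Sort chunks on separate processes, then merge the sorted runs."""
    chunk = max(1, len(data) // workers)
    # Split the input into roughly equal chunks, one per worker.
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with Pool(workers) as pool:
        # Each worker sorts its chunk independently, in parallel.
        runs = pool.map(sorted, chunks)
    # Merging k sorted runs is cheap compared to the sorting itself.
    return list(heapq.merge(*runs))
```

The per-chunk sorts are independent, so they can run on different cores at the same time; only the final merge is sequential. This is where the extra processing power of a modern CPU actually pays off.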

With parallel algorithms running on today's CPUs and their enormous processing power, the execution time of sorting algorithms is far less sensitive to the size of the input data.

In conclusion, to speed up the execution of these algorithms we first need to reconstruct them as parallel ones and improve their efficiency. That way, our multi-core, multi-threaded, high-powered CPUs can do their job properly and accelerate the calculations of the chosen sorting algorithm, performing many operations at a time instead of one. If, however, we stick with linear algorithms, processing power makes no difference, since the operations are calculated in order, linearly, on a single thread of the processor.