As mentioned, in many cases there are different algorithms that accomplish the same task, and naturally we want to use the best one when writing a computer program. There are, of course, many ways to define "best". For example, one algorithm may be better than another because it is easier to understand, and hence easier to translate into a programming language such as Java or C++. Or an algorithm may be better because it uses less memory. In complexity analysis, however, we consider an algorithm better if it runs faster.
There are, of course, many factors that influence the speed of a computer program besides the underlying algorithm and the size of its input. For example, the type of computer, and in particular the speed of its processor, has a big influence on how fast a program runs; there is a somewhat corny joke among computer scientists that the best way to speed up a program is to save it to a USB drive and wait for the next generation of computers to come out.
However, since we are comparing algorithms, i.e. abstract computer programs, we can ignore most of these other factors and consider only the size of the input, expressing the running time of an algorithm as a function of that size. The notation we use to express the complexity of an algorithm is O (big-Oh). Moreover, when we give the complexity of an algorithm, we ignore all terms other than the one that most determines the growth of the running time. Thus, if we determine that an algorithm runs in n·log(n) + 73 time units, where n is the size of the input, we say that the complexity of the algorithm is O(n·log(n)). The rationale is that, as n gets sufficiently large, the contribution of the other terms to the running time becomes negligible.
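To make this concrete, here is a minimal sketch in Java (chosen only because the text mentions Java as an example language; the class name GrowthDemo and the choice of logarithm base are illustrative assumptions). It prints the ratio between n·log(n) + 73 and the dominant term n·log(n) for growing values of n; as n increases, the ratio approaches 1, illustrating why the lower-order term can be ignored.

public class GrowthDemo {
    public static void main(String[] args) {
        // Compare n*log2(n) + 73 with its dominant term n*log2(n)
        // for increasing input sizes n.
        for (int n = 10; n <= 1_000_000; n *= 10) {
            double dominant = n * (Math.log(n) / Math.log(2)); // n*log2(n)
            double full = dominant + 73;                        // n*log2(n) + 73
            System.out.printf("n = %,10d   ratio = %.6f%n", n, full / dominant);
        }
    }
}

Running this shows the ratio dropping from roughly 1.22 at n = 10 to about 1.000004 at n = 1,000,000: for sufficiently large inputs, the constant 73 contributes essentially nothing to the running time.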