One of the topics covered in the material on relations is orders. You will also recall that creating an ordered list of items is central to the term project, in which you were asked to implement two different sorting algorithms and compare their performance in a timing experiment.
You may wonder why sorting is so important. The reason is that if you can sort a set of items, and you can access any position in the set in constant time, the complexity of determining whether a given item is an element of that set drops from O(n) to O(log(n)).
Clearly, if your set is not ordered (or if you cannot access any position in the set in constant time), the only way to determine whether a given item is an element of the set is to compare it with the first element. If they are identical, you are done; if they are not, you compare the item with the next element, and so on. If the item is not an element of the set, you will have to compare it with every element, making n comparisons, where n is the number of elements in the set. If the item is an element of the set, it will on average be about halfway down the list, so you can expect to make n/2 comparisons. Since we ignore the constant factor, the complexity of this type of search, which is called "linear search", is O(n).
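As an illustration, here is a minimal sketch of linear search in Python; the function name linear_search and the example list are illustrative and do not come from the course material or the term project.

    def linear_search(items, target):
        """Return the index of target in items, or -1 if it is not present."""
        for i, element in enumerate(items):
            if element == target:
                return i      # found after i + 1 comparisons
        return -1             # every element was compared: n comparisons

    # Worst case: the target is absent, so all n elements are examined.
    print(linear_search([4, 8, 15, 16, 23, 42], 23))   # prints 4
    print(linear_search([4, 8, 15, 16, 23, 42], 99))   # prints -1

Note that this works whether or not the list is sorted, which is exactly why it cannot do better than O(n).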
However, if the set is ordered, and we can access any position in the set in constant time, we start by comparing the item with the element in the middle of the set. If the item is identical to that middle element, we are done. If it is not, and the item is smaller than the middle element, then the item, if it is in the set at all, must be in the first half of the list; if it is larger, it must be in the second half. So, if the item is smaller, we repeat the process on the lower half of the list; if it is larger, we repeat the process on the upper half. Since we keep halving the list we are searching, and a list of size n can be halved only log(n) times, the complexity of this algorithm, which is called "binary search", is O(log(n)).
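The following is a minimal sketch of binary search in Python, under the same assumptions as above (constant-time access by position, here a Python list); again, the function name and the example data are illustrative.

    def binary_search(sorted_items, target):
        """Return the index of target in sorted_items (which must be sorted),
        or -1 if it is not present."""
        low, high = 0, len(sorted_items) - 1
        while low <= high:
            mid = (low + high) // 2
            if sorted_items[mid] == target:
                return mid             # found: at most about log2(n) halvings
            elif target < sorted_items[mid]:
                high = mid - 1         # continue in the lower half
            else:
                low = mid + 1          # continue in the upper half
        return -1                      # target is not in the list

    # The list must already be sorted for binary search to be correct.
    print(binary_search([4, 8, 15, 16, 23, 42], 23))   # prints 4
    print(binary_search([4, 8, 15, 16, 23, 42], 99))   # prints -1

This is where sorting pays off: the one-time cost of sorting buys you O(log(n)) membership tests afterwards.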
For a more detailed explanation, including some examples, see http://www.youtube.com/watch?v=wNVCJj642n4.