Chapter 9: Algorithms and Big O Notation
Topics covered in this snack-sized chapter:
An algorithm is a sequence of unambiguous instructions for solving a problem.
• It is used to obtain a required output for any legitimate input in a finite number of steps.
• Algorithms can be expressed in many ways, including natural languages, pseudocode, flowcharts, and programming languages.
An algorithm is an exact specification or procedure for solving a computational problem, expressed as a finite sequence of steps:
• It must specify each and every step completely.
• A machine must be able to implement it.
• It must work for all possible inputs.
Algorithms must be:
• Correct: they produce the right output for every possible input.
• Efficient: they run quickly and use as little memory as possible.
Algorithm for calculating the average of two numbers:

Steps     Task
Start
Step 1    Input any 2 numbers
Step 2    Add both numbers
Step 3    Divide the result by 2
Step 4    Return the result of Step 3
End

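The steps in the table above can be sketched as a short Python function (the language and function name are my choice for illustration; the book does not prescribe one):

```python
def average_of_two(a, b):
    """Return the average of two numbers, following the steps above."""
    total = a + b        # Step 2: add both numbers
    result = total / 2   # Step 3: divide the result by 2
    return result        # Step 4: return the result of Step 3

print(average_of_two(4, 6))  # 5.0
```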

Designing an algorithm involves three steps:

Identify the Inputs
• What type of data is needed?
• How will the program get that data?
• What will be the format of the data?

Identify the Processes
• How will the program manipulate the data to produce meaningful results?

Identify the Outputs
• What outputs should be returned?
• What form will the outputs take?
The performance of an algorithm can be influenced by several factors:
• The speed of the machine on which it runs.
• The language in which it is implemented.
• The efficiency of the compiler that creates the program.
T(n) is the running time of a particular algorithm on an input of size n. It gives the number of times the instructions in the algorithm are executed. For example, a sorting algorithm takes longer to sort 100,000 elements than 100.
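To make T(n) concrete, a program can count its own basic operations. The sketch below (an illustrative maximum-finding loop of my own, not taken from the book) shows the count growing with the input size n:

```python
def count_comparisons(data):
    """Find the maximum of a list while counting the comparisons made."""
    comparisons = 0
    largest = data[0]
    for x in data[1:]:
        comparisons += 1      # one comparison per remaining element
        if x > largest:
            largest = x
    return largest, comparisons

# T(n) grows with the input size n:
_, small = count_comparisons(list(range(100)))     # 99 comparisons
_, big = count_comparisons(list(range(100000)))    # 99999 comparisons
```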
• How can we compare the performance of two sorting algorithms?
• Roughly, how much time will an algorithm take?
Big O notation is used to describe the performance or complexity of an algorithm.
• It describes how the running time grows with the size of the input "n".
• It provides an answer to the question "roughly how much time?"
• It is the standard way computer scientists describe algorithms.
O(1) describes an algorithm that always executes in the same time (or space) regardless of the size of the input data set. For example, if a program takes 100 seconds irrespective of input size, its time complexity is O(1).
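A minimal sketch of O(1) behaviour in Python (the function names are my own for illustration): indexing a list or looking up a key in a dictionary does the same amount of work however large the collection is:

```python
def get_first(items):
    # O(1): a single index operation, no matter how long the list is
    return items[0]

def lookup(table, key):
    # O(1) on average: a hash-table lookup does not scan the whole dict
    return table[key]
```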
O(n) describes an algorithm whose performance grows linearly, in direct proportion to the size of the input data set. For example, if a program's running time keeps increasing in proportion to the input size, its time complexity is O(n).
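A classic O(n) example is linear search, sketched here in Python (my example, not the book's): in the worst case every element is examined exactly once, so the work grows in direct proportion to the input size:

```python
def linear_search(items, target):
    """O(n): scan the list until the target is found."""
    for i, item in enumerate(items):
        if item == target:
            return i   # index of the first match
    return -1          # target not present
```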
O(n²) represents an algorithm whose performance is directly proportional to the square of the size of the input data set.
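A minimal sketch of O(n²) behaviour, using a naive duplicate check (my example, not the book's): the nested loops compare every pair of elements, so the work grows with the square of the input size:

```python
def has_duplicate(items):
    """O(n^2): compare every pair of elements in nested loops."""
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):   # each pair is checked once
            if items[i] == items[j]:
                return True
    return False
```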