Amdahl's Law

Amdahl's Law governs the speedup that can be obtained by running a program on parallel processors rather than on a single serial processor. Before we examine Amdahl's Law, we should gain a better understanding of what is meant by speedup.

Speedup:

The speed of a program is measured by the time it takes the program to execute; this could be measured in any increment of time. Speedup is defined as the time it takes a program to execute in serial (with one processor) divided by the time it takes to execute in parallel (with many processors). The formula for speedup is:

             T(1)
    S = ------------
             T(N)

where T(j) is the time it takes to execute the program when using j processors. Efficiency is the speedup divided by the number of processors used, E = S / N. This is an important factor to consider: given the cost of multiprocessor supercomputers, a company wants to get the most bang for its dollar.
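
As a small illustration, here is a minimal Python sketch of these two definitions. The timings are made-up numbers for illustration, not real measurements:

    # Speedup and efficiency, computed from (hypothetical) run times.

    def speedup(t_serial, t_parallel):
        # S = T(1) / T(N)
        return t_serial / t_parallel

    def efficiency(s, n):
        # E = S / N: the speedup delivered per processor
        return s / n

    t1 = 100.0   # seconds on 1 processor (made-up value)
    t8 = 20.0    # seconds on 8 processors (made-up value)

    s = speedup(t1, t8)
    print("speedup:", s)                    # 5.0
    print("efficiency:", efficiency(s, 8))  # 0.625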

To explore speedup further, we shall do a bit of analysis. If there are N workers working on a project, we may assume that they would be able to do the job in 1/N the time of one worker working alone. Now, if the strictly serial part of the program is performed in B*T(1) time, then the strictly parallel part is performed in ((1-B)*T(1)) / N time, so the total parallel time is T(N) = B*T(1) + ((1-B)*T(1)) / N. Substituting this into S = T(1) / T(N) and simplifying, we get the formula for speedup as:

                 N
    S = -------------------
          (B*N) + (1-B)

    N = number of processors
    B = fraction of the algorithm that is strictly serial
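
To make the formula concrete, here is a minimal Python sketch of it; the function name and the sample values (B = 0.1, N = 16) are our own, chosen only for illustration:

    def amdahl_speedup(b, n):
        # Amdahl's Law: S = N / (B*N + (1 - B))
        #   b: fraction of the algorithm that is strictly serial (0 to 1)
        #   n: number of processors
        return n / (b * n + (1.0 - b))

    # With a 10% serial fraction, 16 processors yield well under a 16-fold speedup:
    print(amdahl_speedup(0.1, 16))   # 6.4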

This formula is known as Amdahl's Law, after Gene Amdahl, who first made the argument in 1967 that the serial portion of a program places an upper limit on the speedup obtainable from parallel processors.

Speedup Curves:

Now that we have defined speedup and efficiency, let us use this information to make sense of Amdahl's Law. We will refer to a speedup curve to do this. A speedup curve is simply a graph with the number of processors on the X-axis plotted against the speedup on the Y-axis. The best speedup we could hope for, S = N, would yield a 45 degree line: if there were ten processors, we would realize a tenfold speedup. A speedup below S = 1, on the other hand, would mean that the program ran faster on a single processor than in parallel, which would not make it a good candidate for parallel computing. When B is held constant (recall B = the fraction of the algorithm that is strictly serial), Amdahl's Law yields a speedup curve which rises quickly at first, then flattens out, always remaining below the line S = N; no matter how many processors are added, the speedup can never exceed 1/B. This law shows that it is indeed the algorithm, and not the number of processors, which limits the speedup. Also note that as the curve flattens out, efficiency is drastically reduced.
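
The flattening is easy to see numerically. The short Python sketch below tabulates the curve for an arbitrary example value of B = 0.1, for which the speedup can never exceed 1/B = 10:

    # Tabulate Amdahl's speedup curve for a fixed serial fraction B.
    B = 0.1   # example value; the asymptote is 1/B = 10

    for n in (1, 2, 4, 8, 16, 32, 64, 128):
        s = n / (B * n + (1.0 - B))
        print(f"N = {n:4d}   S = {s:5.2f}   efficiency = {s / n:5.2f}")

With 128 processors the speedup is only about 9.3, and efficiency has fallen to about 7 percent.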


The author of this page, Aaron Michalove, wishes to recognize the use of the book Introduction to Parallel Computing, by Ted G. Lewis and Hesham El-Rewini, © 1992 Prentice-Hall, Inc., pp. 31-32, 38-39.