Amdahl’s law describes the theoretical speedup that can be gained by parallelizing a computation across multiple processors. It is defined by:

S = 1 / (1 - p + p/n)

Where

S = the speedup gained by running the computation in parallel, i.e. the time taken to run the computation on a single processor divided by the time taken to run it on n processors.

p = the fraction of the job that can be executed in parallel.

n = the number of processors
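The formula translates directly into code. A minimal sketch in Python (the function name `amdahl_speedup` is my own):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup S predicted by Amdahl's law for a job whose
    fraction p is parallelizable, run on n processors."""
    return 1.0 / ((1.0 - p) + p / n)
```

For example, `amdahl_speedup(0.8, 5)` gives the speedup of a job that is 80% parallelizable on 5 processors.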

The above formula is derived as follows:

Normalizing the single-processor run time to 1, the time taken to run the computation on n processors = time taken for the parallelizable part of the job + time taken for the non-parallelizable part of the job

Time taken for the parallelizable part of the job = parallelizable fraction of the job / number of processors

= p/n

The time taken for the sequential part of the job is 1 - p.

Note that the non-parallelizable, or sequential, part of the computation may include the overhead of coordinating the multiple processors doing the work.

Thus the speedup, the ratio of the single-processor run time to the n-processor run time, is given by:

S = 1 / (1 - p + p/n)
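The derivation can be checked numerically: building the n-processor run time from its two components and dividing it into the single-processor run time (normalized to 1) gives the same value as the closed-form expression. A small sketch, using illustrative values of p and n of my own choosing:

```python
p, n = 0.6, 4           # illustrative values: 60% parallelizable, 4 processors

t_parallel = p / n      # time for the parallelizable part, spread over n processors
t_sequential = 1 - p    # time for the sequential part
t_n = t_parallel + t_sequential   # total time on n processors (single-processor time = 1)

speedup = 1 / t_n                    # ratio of single-processor to n-processor time
closed_form = 1 / (1 - p + p / n)    # Amdahl's law
assert abs(speedup - closed_form) < 1e-12
```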

Examples:

For a perfectly parallelizable job running on 5 processors, the speedup is

S = 1/(1 - 1 + 1/5) = 1/0.2 = 5 times speedup (500%)

For a job that is 80% parallelizable:

S = 1/(1 - 0.8 + 0.8/5) = 1/0.36 ≈ 2.78 times speedup
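Both worked examples above can be reproduced by plugging the article's numbers into the formula:

```python
s_perfect = 1 / (1 - 1.0 + 1.0 / 5)   # perfectly parallelizable (p = 1), n = 5
s_80 = 1 / (1 - 0.8 + 0.8 / 5)        # 80% parallelizable (p = 0.8), n = 5

print(s_perfect)        # 5.0
print(round(s_80, 2))   # 2.78
```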

**References**

- Maurice Herlihy and Nir Shavit. 2008. *The Art of Multiprocessor Programming*. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
