Analysis of parallel algorithms

In computer science, analysis of parallel algorithms is the process of finding the computational complexity of algorithms executed in parallel – the amount of time, storage, or other resources needed to execute them. In many respects, analysis of parallel algorithms is similar to the analysis of sequential algorithms, but it is generally more involved because one must reason about the behavior of multiple cooperating threads of execution. One of the primary goals of parallel analysis is to understand how a parallel algorithm's use of resources (speed, space, etc.) changes with the number of processors.

Background

A so-called work-time (WT) framework (sometimes called work-depth or work-span) was originally introduced by Shiloach and Vishkin[1] for conceptualizing and describing parallel algorithms. In the WT framework, a parallel algorithm is first described in terms of parallel rounds: for each round, the operations to be performed are characterized, but several details may be suppressed. For example, the number of operations at each round need not be made explicit, processors need not be mentioned, and any information that may help with the assignment of processors to jobs need not be accounted for. Second, the suppressed information is provided; its inclusion is guided by the proof of a scheduling theorem due to Brent,[2] which is explained later in this article. The WT framework is useful because it greatly simplifies the initial description of a parallel algorithm, while inserting the details suppressed by that initial description is often not very difficult. For example, the WT framework was adopted as the basic presentation framework in the parallel algorithms books (for the parallel random-access machine PRAM model)[3][4] as well as in the class notes.[5] The overview below explains how the WT framework can be used for analyzing more general parallel algorithms, even when their description is not available within the WT framework.
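
To make the two-step style concrete, here is a minimal sketch of the first step for summing n numbers: each loop iteration stands for one parallel round of independent additions, and only the operation count per round is recorded, with no processor assignment. This is our own illustration in ordinary (sequential) Python that simulates the rounds, not the framework's formal notation.

    def wt_parallel_sum(values):
        """Summation described WT-style: each while-loop iteration is one
        parallel round in which, conceptually, all pairs are combined
        simultaneously; which processor performs which addition is
        deliberately left unspecified."""
        data = list(values)
        while len(data) > 1:
            # One parallel round: len(data) // 2 independent additions.
            paired = [data[i] + data[i + 1] for i in range(0, len(data) - 1, 2)]
            if len(data) % 2:        # an odd leftover element carries over
                paired.append(data[-1])
            data = paired
        return data[0]

    print(wt_parallel_sum(range(8)))   # 28, after rounds of 4, 2 and 1 additions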

Definitions

Suppose computations are executed on a machine that has p processors. Let T_p denote the time that elapses between the start of the computation and its end. Analysis of the computation's running time focuses on the following notions:

  • The work of a computation executed by p processors is the total number of primitive operations that the processors perform.[6] Ignoring communication overhead from synchronizing the processors, this is equal to the time used to run the computation on a single processor, denoted T_1.
  • The depth or span is the length of the longest series of operations that have to be performed sequentially due to data dependencies (the critical path). The depth may also be called the critical path length of the computation.[7] Minimizing the depth/span is important in designing parallel algorithms, because the depth/span determines the shortest possible execution time.[8] Alternatively, the span can be defined as the time T_∞ spent computing using an idealized machine with an infinite number of processors.[9] A worked example follows this list.
  • The cost of the computation is the quantity pT_p. This expresses the total time spent, by all processors, in both computing and waiting.[6]
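
As a worked example of these quantities, consider summing n numbers with the balanced binary reduction sketched above: every pairwise addition contributes to the work, and the span is the depth of the reduction tree. The helper below is our own illustrative code, counting additions only rather than measured time.

    import math

    def tree_sum_measures(n: int) -> dict:
        """Work and span of summing n numbers by balanced binary reduction,
        counting additions only (an abstract operation count, not wall time)."""
        work = n - 1                      # T_1: every pairwise addition runs once
        span = math.ceil(math.log2(n))    # T_inf: depth of the reduction tree
        return {"T1": work, "T_inf": span}

    print(tree_sum_measures(1024))        # {'T1': 1023, 'T_inf': 10}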

Several useful results follow from the definitions of work, span and cost:

  • Work law. The cost is always at least the work: pT_p ≥ T_1, since p processors can perform at most p operations in parallel at each step.
  • Span law. A finite number p of processors cannot outperform an infinite number, so T_p ≥ T_∞.
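
The two laws can serve as a quick consistency check on claimed running times: any T_p that violates either law is impossible in this model. The function below is our own illustration, not a standard API.

    def obeys_laws(T1: float, Tinf: float, p: int, Tp: float) -> bool:
        """True iff a claimed p-processor time Tp is consistent with both
        the work law (p * Tp >= T1) and the span law (Tp >= Tinf)."""
        return p * Tp >= T1 and Tp >= Tinf

    print(obeys_laws(T1=1023, Tinf=10, p=8, Tp=128))   # True
    print(obeys_laws(T1=1023, Tinf=10, p=8, Tp=100))   # False: 8 * 100 < 1023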

Using these definitions and laws, the following measures of performance can be given:

  • Speedup is the gain in speed made by parallel execution compared to sequential execution: S_p = T_1 / T_p. When the speedup grows proportionally to p, the speedup is linear, which is optimal in simple models of computation, because the work law implies that T_1 / T_p ≤ p (super-linear speedup can occur in practice due to memory hierarchy effects). The situation T_1 / T_p = p is called perfect linear speedup.
  • Efficiency is the speedup per processor, S_p / p = T_1 / (pT_p).
  • Parallelism is the ratio T_1 / T_∞. It represents the maximum possible speedup on any number of processors: by the span law, if p > T_1 / T_∞, then T_1 / T_p ≤ T_1 / T_∞ < p.
  • Slackness is T_1 / (pT_∞). A slackness less than one implies, by the span law, that perfect linear speedup is impossible on p processors.
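
The sketch below evaluates these measures from the work/span quantities; the timing values in the example are assumed purely for illustration, not taken from any benchmark.

    def performance_measures(T1: float, Tinf: float, p: int, Tp: float) -> dict:
        """Speedup, efficiency and parallelism, in the article's notation."""
        return {
            "speedup": T1 / Tp,            # S_p = T1 / Tp
            "efficiency": T1 / (p * Tp),   # S_p / p
            "parallelism": T1 / Tinf,      # ceiling on speedup for any p
        }

    # Assumed example timings: T1 = 1000, T_inf = 10, p = 16, Tp = 80
    print(performance_measures(1000, 10, 16, 80))
    # {'speedup': 12.5, 'efficiency': 0.78125, 'parallelism': 100.0}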

Execution on a limited number of processors

Analysis of parallel algorithms is usually carried out under the assumption that an unbounded number of processors is available. This is unrealistic, but not a problem, since any computation that can run in parallel on N processors can be executed on p < N processors by letting each processor execute multiple units of work. A result called Brent's law states that one can perform such a "simulation" in time T_p, bounded by[11]

$$T_p \le T_N + \frac{T_1 - T_N}{p},$$

or, less precisely,[6]

$$T_p = O\left(T_N + \frac{T_1}{p}\right).$$

An alternative statement of the law bounds T_p above and below by

$$\frac{T_1}{p} \le T_p \le \frac{T_1}{p} + T_\infty,$$

showing that the span T_∞ and the work T_1 together provide reasonable bounds on the computation time.[2]
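
As a numeric illustration of these bounds (our own example, reusing the tree-sum figures from above), with T_1 = 1023 and T_∞ = 10, Brent's law confines the eight-processor running time to a narrow window:

    def brent_bounds(T1: float, Tinf: float, p: int) -> tuple:
        """Bounds on Tp from the alternative statement of Brent's law:
        T1 / p  <=  Tp  <=  T1 / p + Tinf."""
        return T1 / p, T1 / p + Tinf

    # Tree sum of n = 1024 numbers: T1 = 1023 additions, T_inf = 10, p = 8
    lo, hi = brent_bounds(1023, 10, 8)
    print(f"{lo:.3f} <= T_8 <= {hi:.3f}")   # 127.875 <= T_8 <= 137.875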

References

