Parallel Programming in OpenMP
The rapid and widespread acceptance of shared-memory multiprocessor architectures has created a pressing demand for an efficient way to program these systems. At the same time, developers of technical and scientific applications in industry and in government laboratories find they need to parallelize huge volumes of code in a portable fashion. OpenMP, developed jointly by several parallel computing vendors to address these issues, is an industry-wide standard for programming shared-memory and distributed shared-memory multiprocessors. It consists of a set of compiler directives and library routines that extend FORTRAN, C, and C++ codes to express shared-memory parallelism.
Parallel Programming in OpenMP is the first book to teach both novice and expert parallel programmers how to program using this new standard. The authors, who helped design and implement OpenMP while at SGI, bring depth and breadth to the book as compiler writers, application developers, and performance engineers.
* Designed so that expert parallel programmers can skip the opening chapters, which introduce parallel programming to novices, and jump right into the essentials of OpenMP.
* Presents all the basic OpenMP constructs in FORTRAN, C, and C++.
* Emphasizes practical concepts to address the concerns of real application developers.
* Includes high quality example programs that illustrate concepts of parallel programming as well as all the constructs of OpenMP.
* Serves as both an effective teaching text and a compact reference.
* Includes end-of-chapter programming exercises.

The OpenMP standard allows programmers to take advantage of new shared-memory multiprocessor systems from vendors like Compaq, Sun, HP, and SGI. Aimed at the working researcher or scientific C/C++ or Fortran programmer, Parallel Programming in OpenMP both explains what this standard is and how to use it to create software that takes full advantage of parallel computing.
At its heart, OpenMP is remarkably simple. By adding a handful of compiler directives (or pragmas) in Fortran or C/C++, plus a few optional library calls, programmers can "parallelize" existing software without completely rewriting it. This book starts with simple examples of how to parallelize "loops"--iterative code that in scientific software might work with very large arrays. Sample code relies primarily on Fortran (undoubtedly the language of choice for high-end numerical software) with descriptions of the equivalent calls and strategies in C/C++. Each sample is thoroughly explained, and though the style in this book is occasionally dense, it does manage to give plenty of practical advice on how to make code run in parallel efficiently. The techniques explored include how to tweak the default parallelized directives for specific situations, how to use parallel regions (beyond simple loops), and the dos and don'ts of effective synchronization (with critical sections and barriers). The book finishes up with some excellent advice for how to cooperate with the cache mechanisms of today's OpenMP-compliant systems.
Overall, Parallel Programming in OpenMP introduces the competent research programmer to a new vocabulary of idioms and techniques for parallelizing software using OpenMP. Of course, this standard will continue to be used primarily for academic or research computing, but now that OpenMP machines by major commercial vendors are available, even business users can benefit from this technology--for high-end forecasting and modeling, for instance. This book fills a useful niche by describing this powerful new development in parallel computing. --Richard Dragan
Topics covered:
- Overview of the OpenMP programming standard for shared-memory multiprocessors
- Description of OpenMP parallel hardware
- OpenMP directives for Fortran and pragmas for C/C++
- Parallelizing simple loops
- parallel do / parallel for directives
- Shared and private scoping for thread variables
- reduction operations
- Data dependencies and how to remove them
- OpenMP performance issues (sufficient work, balancing the load in loops, scheduling options)
- Parallel regions
- How to parallelize arbitrary blocks of code (master and slave threads, threadprivate directives and the copyin clause)
- Parallel task queues
- Dividing work based on thread numbers
- Noniterative work sharing
- Restrictions on work sharing
- Orphaning
- Nested parallel regions
- Controlling parallelism in OpenMP, including controlling the number of threads, dynamic threads, and OpenMP library calls for threads
- OpenMP synchronization
- Avoiding data races
- Critical section directives (named and nested critical sections, and the atomic directive)
- Runtime OpenMP library lock routines
- Event synchronization (barrier directives and ordered sections)
- Custom synchronization, including the flush directive
- Programming tips for synchronization
- Performance issues with OpenMP
- Amdahl's Law
- Load balancing for parallelized code
- Hints for writing parallelized code that fits into processor caches
- Avoiding false sharing
- Synchronization hints
- Performance issues for bus-based and Non-Uniform Memory Access (NUMA) machines
- OpenMP quick reference
List Price: $ 60.95
Price: $ 38.94