An introduction to parallel programming / Pacheco, Peter S.
Record Type:
Language materials, printed : Monograph/item
Title/Author:
An introduction to parallel programming / Peter S. Pacheco.
Author:
Pacheco, Peter S.
Published:
Amsterdam ; Boston : Morgan Kaufmann, 2011.
Description:
xix, 370 p. : ill. ; 25 cm.
Subject:
Parallel programming (Computer science)
ISBN:
9780123742605 (hbk.)
An introduction to parallel programming / Peter S. Pacheco. - Amsterdam ; Boston : Morgan Kaufmann, 2011. - xix, 370 p. : ill. ; 25 cm.
Includes bibliographical references (p. 357-359) and index.
Machine generated contents note: 1 Why Parallel Computing 1.1 Why We Need Ever-Increasing Performance 1.2 Why We're Building Parallel Systems 1.3 Why We Need to Write Parallel Programs 1.4 How Do We Write Parallel Programs? 1.5 What We'll Be Doing 1.6 Concurrent, Parallel, Distributed 1.7 The Rest of the Book 1.8 A Word of Warning 1.9 Typographical Conventions 1.10 Summary 1.11 Exercises 2 Parallel Hardware and Parallel Software 2.1 Some Background 2.2 Modifications to the von Neumann Model 2.3 Parallel Hardware 2.4 Parallel Software 2.5 Input and Output 2.6 Performance 2.7 Parallel Program Design 2.8 Writing and Running Parallel Programs 2.9 Assumptions 2.10 Summary 2.11 Exercises 3 Distributed Memory Programming with MPI 3.1 Getting Started 3.2 The Trapezoidal Rule in MPI 3.3 Dealing with I/O 3.4 Collective Communication 3.5 MPI Derived Datatypes 3.7 A Parallel Sorting Algorithm 3.8 Summary 3.9 Exercises 3.10 Programming Assignments 4 Shared Memory Programming with Pthreads 4.1 Processes, Threads and Pthreads 4.2 Hello, World 4.3 Matrix-Vector Multiplication 4.4 Critical Sections 4.5 Busy-Waiting 4.6 Mutexes 4.7 Producer-Consumer Synchronization and Semaphores 4.8 Barriers and Condition Variables 4.9 Read-Write Locks 4.10 Caches, Cache-Coherence, and False Sharing 4.11 Thread-Safety 4.12 Summary 4.13 Exercises 4.14 Programming Assignments 5 Shared Memory Programming with OpenMP 5.1 Getting Started 5.2 The Trapezoidal Rule 5.3 Scope of Variables 5.4 The Reduction Clause 5.5 The Parallel For Directive 5.6 More About Loops in OpenMP: Sorting 5.7 Scheduling Loops 5.8 Producers and Consumers 5.9 Caches, Cache-Coherence, and False Sharing 5.10 Thread-Safety 5.11 Summary 5.12 Exercises 5.13 Programming Assignments 6 Parallel Program Development 6.1 Two N-Body Solvers 6.2 Tree Search 6.3 A Word of Caution 6.4 Which API? 6.5 Summary 6.6 Exercises 6.7 Programming Assignments 7 Where to Go from Here.
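Several of the chapters listed above (3.2, 5.2, 5.4) use numerical integration by the trapezoidal rule as a running example. As a flavour of that material, here is a minimal sketch of the trapezoidal rule parallelized with an OpenMP parallel for and a reduction clause; it is illustrative only, not code from the book, and the integrand sin(x) is just a placeholder.

    /* Illustrative sketch only, not code from the book: the trapezoidal
     * rule parallelized with OpenMP's parallel for and reduction clause. */
    #include <math.h>
    #include <stdio.h>

    /* Placeholder integrand; the book's examples use a generic f(x). */
    static double f(double x) { return sin(x); }

    /* Approximate the integral of f over [a, b] using n trapezoids. */
    static double trap(double a, double b, int n) {
        double h = (b - a) / n;
        double sum = (f(a) + f(b)) / 2.0;

        /* Each thread accumulates a private partial sum; the reduction
         * clause combines the partial sums into sum when the loop ends. */
    #pragma omp parallel for reduction(+: sum)
        for (int i = 1; i < n; i++)
            sum += f(a + i * h);

        return h * sum;
    }

    int main(void) {
        /* Integral of sin over [0, pi] is 2; a large n gets close to it. */
        printf("%f\n", trap(0.0, 3.141592653589793, 1 << 20));
        return 0;
    }

Built with a command like gcc -fopenmp trap.c -lm (file name assumed); without -fopenmp the pragma is ignored and the same code runs serially, which is the usual incremental style for OpenMP programs.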
Author Peter Pacheco uses a tutorial approach to show students how to develop effective parallel programs with MPI, Pthreads, and OpenMP. The first undergraduate text to directly address compiling and running parallel programs on the new multi-core and cluster architecture, An Introduction to Parallel Programming explains how to design, debug, and evaluate the performance of distributed and shared-memory programs. User-friendly exercises teach students how to compile, run and modify example programs.
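The MPI, Pthreads, and OpenMP programs the summary mentions are ordinary C programs. For readers unfamiliar with the style, the sketch below is a minimal MPI "hello, world" in C; it illustrates the kind of program covered in the early MPI chapter and is not an excerpt from the book.

    /* Illustrative sketch, not an excerpt from the book: a minimal MPI
     * "hello, world" in C.                                               */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime     */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process      */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut the MPI runtime down */
        return 0;
    }

With an MPI implementation such as MPICH or Open MPI installed, this would typically be compiled with mpicc and launched with something like mpiexec -n 4 ./a.out (the process count 4 is arbitrary).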
ISBN: 9780123742605 (hbk.) : NT2403
LCCN: 2010039584
Nat. Bib. No.: GBB0B0034 (bnb)
Subjects--Topical Terms: Parallel programming (Computer science)
LC Class. No.: QA76.642 .P29 2011
Dewey Class. No.: 005.2/75
LDR
:03230cam a2200253 a 4500
001
788828
003
OCoLC
005
20140924054720.0
008
141013s2011 ne a 001 0 eng
010
$a
2010039584
015
$2
bnb
$a
GBB0B0034
020
$a
9780123742605 (hbk.) :
$c
NT2403
020
$a
0123742609 (hbk.)
035
$a
(OCoLC)668986119
$z
(OCoLC)650960034
035
$a
ocn668986119
040
$a
DLC
$b
eng
$c
DLC
$d
YDX
$d
YDXCP
$d
BWX
$d
CDX
$d
UKM
$d
FXR
$d
OCLCF
$d
CHVBK
$d
OCLCO
$d
OCLCQ
$d
IOO
$d
NFU
042
$a
pcc
050
0 0
$a
QA76.642
$b
.P29 2011
082
0 0
$2
22
$a
005.2/75
100
1
$a
Pacheco, Peter S.
$3
984769
245
1 3
$a
An introduction to parallel programming /
$c
Peter S. Pacheco.
260
#
$a
Amsterdam ;
$a
Boston :
$b
Morgan Kaufmann,
$c
2011.
300
$a
xix, 370 p. :
$b
ill. ;
$c
25 cm.
504
$a
Includes bibliographical references (p. 357-359) and index.
505
8 #
$a
Machine generated contents note: 1 Why Parallel Computing 1.1 Why We Need Ever-Increasing Performance 1.2 Why We're Building Parallel Systems 1.3 Why We Need to Write Parallel Programs 1.4 How Do We Write Parallel Programs? 1.5 What We'll Be Doing 1.6 Concurrent, Parallel, Distributed 1.7 The Rest of the Book 1.8 A Word of Warning 1.9 Typographical Conventions 1.10 Summary 1.11 Exercises 2 Parallel Hardware and Parallel Software 2.1 Some Background 2.2 Modifications to the von Neumann Model 2.3 Parallel Hardware 2.4 Parallel Software 2.5 Input and Output 2.6 Performance 2.7 Parallel Program Design 2.8 Writing and Running Parallel Programs 2.9 Assumptions 2.10 Summary 2.11 Exercises 3 Distributed Memory Programming with MPI 3.1 Getting Started 3.2 The Trapezoidal Rule in MPI 3.3 Dealing with I/O 3.4 Collective Communication 3.5 MPI Derived Datatypes 3.7 A Parallel Sorting Algorithm 3.8 Summary 3.9 Exercises 3.10 Programming Assignments 4 Shared Memory Programming with Pthreads 4.1 Processes, Threads and Pthreads 4.2 Hello, World 4.3 Matrix-Vector Multiplication 4.4 Critical Sections 4.5 Busy-Waiting 4.6 Mutexes 4.7 Producer-Consumer Synchronization and Semaphores 4.8 Barriers and Condition Variables 4.9 Read-Write Locks 4.10 Caches, Cache-Coherence, and False Sharing 4.11 Thread-Safety 4.12 Summary 4.13 Exercises 4.14 Programming Assignments 5 Shared Memory Programming with OpenMP 5.1 Getting Started 5.2 The Trapezoidal Rule 5.3 Scope of Variables 5.4 The Reduction Clause 5.5 The Parallel For Directive 5.6 More About Loops in OpenMP: Sorting 5.7 Scheduling Loops 5.8 Producers and Consumers 5.9 Caches, Cache-Coherence, and False Sharing 5.10 Thread-Safety 5.11 Summary 5.12 Exercises 5.13 Programming Assignments 6 Parallel Program Development 6.1 Two N-Body Solvers 6.2 Tree Search 6.3 A Word of Caution 6.4 Which API? 6.5 Summary 6.6 Exercises 6.7 Programming Assignments 7 Where to Go from Here.
520
#
$a
Author Peter Pacheco uses a tutorial approach to show students how to develop effective parallel programs with MPI, Pthreads, and OpenMP. The first undergraduate text to directly address compiling and running parallel programs on the new multi-core and cluster architecture, An Introduction to Parallel Programming explains how to design, debug, and evaluate the performance of distributed and shared-memory programs. User-friendly exercises teach students how to compile, run and modify example programs.
650
# 0
$a
Parallel programming (Computer science)
$3
557472
Items (2 records)
Inventory Number | Location Name | Item Class | Material type | Call number | Usage Class | Loan Status | No. of reservations | Opac note | Attachments
E040472 | Library 3F Stacks (圖書館3F 書庫) | General Book (BOOK) | General Book | 005.275 P116 2011 | Normal use | On shelf | 0 | - | -
E040473 | Library 3F Stacks (圖書館3F 書庫) | General Book (BOOK) | General Book | 005.275 P116 2011 c.2 | Normal use | On shelf | 0 | - | -