Parallel Programming with the Message-Passing Interface (MPI)
- Objective:
This course offers a thorough guide to the MPI (Message-Passing
Interface) standard for writing programs for parallel computers.
Since the debut of multicore processors, parallel computing has become mainstream. Today, applications run on computers with millions of processors; multiple processors sharing memory and multicore processors with multiple hardware threads per core are common.
The course takes an informal, tutorial approach, introducing each concept through easy-to-understand examples, including actual code in C and Fortran. Topics include using MPI in simple programs, virtual topologies, MPI datatypes, parallel libraries, and a comparison of MPI with sockets.
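To give a flavor of the simple programs the course starts from, here is a minimal sketch of an MPI "hello world" in C. The file name and launch options are illustrative, not taken from the course materials:

```c
/* hello_mpi.c -- a minimal sketch of a first MPI program.
 * Compile: mpicc hello_mpi.c -o hello_mpi
 * Run:     mpirun -np 4 ./hello_mpi
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank (0..size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}
```

Each process in the launched job runs the same executable; the rank returned by MPI_Comm_rank is what lets different processes do different work.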
- Students: 24
- Software:
- FortiClient VPN for Windows 10
- PuTTY 0.76
- Textbook:
- References:
- Grading criteria:
- Participation (10%)
- Exercises (20%)
- Oral Presentation (30%)
- Term Project (40%)
Syllabus
- Introduction to Parallel Computing
- Planning for parallelization
- Performance limits and profiling
- Data design and performance models
- Parallel algorithms and patterns
- Vectorization: FLOPs for free
- OpenMP that performs
- MPI: The parallel backbone
- GPU architectures and concepts
- GPU programming model
- Directive-based GPU programming
- GPU languages: Getting down to basics
- GPU profiling and tools
- Affinity: Truce with the kernel
- Batch schedulers: Bringing order to chaos
- File operations for a parallel world
- Tools and resources for better code
Slides