Parallel Programming with the Message-Passing Interface (MPI)

  1. Objective: This course offers a thorough guide to the MPI (Message-Passing Interface) standard for writing programs for parallel computers. Since the debut of multicore processors, parallel computing has become mainstream: applications now run on computers with millions of processors, and machines combining shared-memory multiprocessors with multicore chips running multiple hardware threads per core are common. The course takes an informal, tutorial approach, introducing each concept through easy-to-understand examples, including actual code in C and Fortran (a minimal C example follows this list). Topics include using MPI in simple programs, virtual topologies, MPI datatypes, parallel libraries, and a comparison of MPI with sockets.
  2. Students: 24
  3. Software:
    1. FortiClient VPN for Windows 10
    2. PuTTY 0.76
  4. Textbook:
  5. References:
  6. Grading criteria:
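
As a taste of the "simple programs" topic mentioned in the objective above, here is a minimal sketch of an MPI "hello world" in C. All four MPI calls are part of the MPI standard; the compiler wrapper (mpicc) and launcher (mpiexec) named below are typical of common MPI installations such as MPICH and Open MPI but may differ on your system.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* initialize the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* rank of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut down the MPI runtime */
        return 0;
    }

With a typical installation this is compiled and run on four processes with: mpicc hello.c -o hello && mpiexec -n 4 ./hello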

Syllabus

  1. Introduction to Parallel Computing
  2. Planning for parallelization
  3. Performance limits and profiling
  4. Data design and performance models
  5. Parallel algorithms and patterns
  6. Vectorization: FLOPs for free
  7. OpenMP that performs
  8. MPI: The parallel backbone
  9. GPU architectures and concepts
  10. GPU programming model
  11. Directive-based GPU programming
  12. GPU languages: Getting down to basics
  13. GPU profiling and tools
  14. Affinity: Truce with the kernel
  15. Batch schedulers: Bringing order to chaos
  16. File operations for a parallel world
  17. Tools and resources for better code

Slides