Students are responsible for monitoring e-mail to their university accounts concerning this course. Announcements will be posted on the discussion group (see Resources page).
This course examines current techniques for the design and development of parallel programs targeted at platforms ranging from multicore computers to high-performance clusters, with and without shared memory. Topics include theoretical models of parallel computation, hardware effects on it, the definitions of speedup and scalability, and data- versus task-parallel approaches. The course will also examine strategies for achieving speedup based on controlling granularity, resource contention, idle time, threading overhead, work allocation, and data localization.
Today's computer science students are entering a new era in parallel computing, featuring cheap multicores and high-performance clusters, but have received traditional, largely sequential training. This paradigm shift has been called "the end of the lazy programmer era." This course is aimed at helping soon-to-graduate students (1) move into jobs using current tools for parallel programming, and (2) acquire the theoretical background needed to keep abreast of rapid industry developments and to evolve with them. The textbook will provide foundational knowledge about modern parallel processor architectures and algorithms for organizing concurrent computations. Since parallel programming is all about speed, we will learn ways to measure execution performance and speedup through parallelization.
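As a preview of the performance measurement mentioned above, the conventional definitions (standard formulations, not specific to this textbook) are: for p processors,

```latex
S(p) = \frac{T_1}{T_p},
\qquad
S(p) \;\le\; \frac{1}{(1-f) + f/p} \quad \text{(Amdahl's law)}
```

where T_1 is the best sequential execution time, T_p is the execution time on p processors, and f is the fraction of the work that can be parallelized. Amdahl's law shows why the serial fraction, not the processor count, ultimately limits speedup.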
In terms of practical skills, high-performance (non-shared-memory) cluster programming will be introduced via the University of Guelph Pilot library, which is based on MPI and utilizes message-passing. Programming for multicore shared-memory processors will utilize the popular POSIX threads (Pthreads) API and compiler-based OpenMP, supported by the latest suite of Intel tools. Heterogeneous architectures, namely GPUs (graphics processing units) and the Cell BE (Broadband Engine), will be introduced. Students will use the above tools, or others of their choice, to carry out a parallel programming project.
Principles of Parallel Programming, by Calvin Lin and Larry Snyder, Addison-Wesley, 2009. This can be purchased as a physical book or "rented" as an e-book for 180 days at about one-third of the price: [ http://www.coursesmart.com/9780321557902 ]
The first printing has numerous small bugs affecting the code samples that you should carefully correct by hand: [ errata ]. If you have the second printing, you can skip this.
The Art of Multiprocessor Programming, by Herlihy and Shavit, Morgan Kaufmann, 2008 [ online ].
There will be three programming assignments in C. Late assignments are not accepted. The term project may be carried out using any justifiable choice of parallel programming language, libraries, and platform. All projects include a proposal, a presentation, and a written report, plus software. The project report can be handed in any time up through the last day of classes (Nov. 29). Late reports will incur a deduction of 10 marks from the project grade (0-100) for each calendar day past Nov. 29.