Nearly all of us have a parallel computer at our fingertips, yet most of us use its parallel processors only indirectly, through software packages written by others. In this course we'll learn how to write parallel programs so that we can parallelize our own favorite applications.
The goal of the course is to learn how to devise and implement parallel algorithms, understand performance tradeoffs, and generalize these skills to new problems. We'll consider applications, both ordinary and extraordinary, and develop a systematic approach for building effective parallel variants.
The course will provide an overview of important topics and issues in parallel architectures, models, algorithms, and software. We will focus on multi-core processors, the latest trend in processor design, and on programming with POSIX Threads (Pthreads) and MPI, the Message Passing Interface.
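As a small taste of the kind of code we will write, here is a minimal Pthreads sketch (an illustration only, not taken from the course materials): four threads each sum a disjoint slice of an array, and the main thread joins them and combines the partial results.

/* Minimal Pthreads sketch: parallel array sum using per-thread partial sums. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 1000

static int data[N];
static long partial[NTHREADS];

/* Each worker sums its contiguous slice of the array. */
static void *worker(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NTHREADS), hi = lo + (N / NTHREADS);
    long sum = 0;
    for (long i = lo; i < hi; i++)
        sum += data[i];
    partial[id] = sum;
    return NULL;
}

int main(void) {
    pthread_t threads[NTHREADS];
    for (int i = 0; i < N; i++)
        data[i] = 1;                        /* simple test data */
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&threads[t], NULL, worker, (void *)t);
    long total = 0;
    for (long t = 0; t < NTHREADS; t++) {   /* wait for workers, then reduce */
        pthread_join(threads[t], NULL);
        total += partial[t];
    }
    printf("sum = %ld\n", total);           /* expect 1000 */
    return 0;
}

Compile with gcc -pthread. MPI programs follow a similar divide-and-combine pattern, but processes exchange explicit messages instead of sharing memory.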
CSE 160 provides a foundation for pursuing more advanced topics in parallel computation, and conducting research in this exciting field.
The course has two required textbooks.
Please see the main course web page for instructions on how to obtain these texts, as they will not be available at the bookstore.
Introduction to HPCC: parallel architecture, algorithms, software, and problem-solving techniques. Areas covered: Flynn's taxonomy, processor-memory organizations, shared and non-shared memory models; message passing and multithreading; data parallelism; speedup, efficiency, and Amdahl's law; communication and synchronization; isoefficiency and scalability. Additional topics: run-time software techniques, compilers, and grid computing. Assignments are given to provide practical experience.
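To make one of the metrics above concrete: Amdahl's law bounds the speedup attainable on p processors when only a fraction f of the running time can be parallelized. A quick worked example:

\[
  S(p) = \frac{1}{(1 - f) + f/p},
  \qquad
  S(8)\big|_{f = 0.9} = \frac{1}{0.1 + 0.9/8} \approx 4.7 .
\]

Even with 90% of the work parallelized, eight processors yield less than a 5x speedup, which is why the sequential fraction matters so much when we analyze scalability.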
The prerequisite for CSE 160 is CSE 100.