The use of Graphics Processing Units (GPUs) for rendering is well known, but their power for general-purpose parallel computation has only recently been explored. Parallel algorithms running on GPUs can often achieve speedups of up to 100x over comparable CPU implementations, with applications in physics simulation, signal processing, financial modeling, neural networks, and countless other fields.
This course will cover programming techniques for the GPU. The course will introduce NVIDIA's parallel computing platform and programming model, CUDA. Beyond covering the CUDA programming model and syntax, the course will also discuss GPU architecture, high-performance computing on GPUs, parallel algorithms, CUDA libraries, and applications of GPU computing. Problem sets will cover performance optimization and specific GPU applications in numerical mathematics, medical imaging, finance, and other fields.
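To give a taste of the CUDA programming model described above, here is a minimal vector-addition sketch (an illustrative example, not part of the course materials): a kernel runs once per thread, each thread computes one output element, and the host code manages device memory and the kernel launch.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)              // guard: the grid may be larger than n
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Allocate and initialize host arrays.
    float *hA = (float *)malloc(bytes);
    float *hB = (float *)malloc(bytes);
    float *hC = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Allocate device memory and copy inputs to the GPU.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(dA, dB, dC, n);

    // Copy the result back and spot-check one element.
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hC[0]);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}
```

Compiling with `nvcc` and running on a CUDA-capable GPU, each of the ~one million additions is performed by its own thread; the block/grid arithmetic and the bounds check in the kernel are the kind of detail the course's labs explore in depth.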
Labwork will require significant programming. A working knowledge of the C programming language will be necessary. Although CS 24 is not a prerequisite, it (or equivalent systems programming experience) is recommended.
9 units; third term.
TAs:
  Kevin Yuh - firstname.lastname@example.org
  Eric Martin - email@example.com

Supervising professor: Professor Al Barr - firstname.lastname@example.org

Time and place: MWF 3:00-3:55 PM, 104 Annenberg (instructional laboratory)

Office hours:
  Kevin Yuh - TBD
  Eric Martin - TBD
Grading policy: There will be 7 labs, each worth 100 points (10% of your
grade). At the end of the quarter, there will be one final project worth
300 points (30% of your grade), for a total of 1000 points. Extensions
may be granted at the TAs' discretion. E grades will be granted only
under extreme circumstances.