GPU Programming

CS 179

The use of Graphics Processing Units for rendering is well known, but their power for general parallel computation has only recently been explored. Parallel algorithms running on GPUs can often achieve up to 100x speedup over comparable CPU algorithms, with existing applications in physics simulation, signal processing, financial modeling, neural networks, and many other fields.

This course will cover programming techniques for the GPU. The course will introduce NVIDIA's parallel computing platform and programming model, CUDA. Beyond covering the CUDA programming model and syntax, the course will also discuss GPU architecture, high performance computing on GPUs, parallel algorithms, CUDA libraries, and applications of GPU computing.
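As a purely illustrative first look at the CUDA programming model (not part of any assignment; the kernel name and launch configuration here are just placeholders), a vector-addition kernel in CUDA C looks roughly like this:

    // Illustrative sketch: each GPU thread adds one pair of elements.
    __global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n) {
            c[i] = a[i] + b[i];
        }
    }

    // Host side: launch enough 256-thread blocks to cover all n elements.
    // vectorAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);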

Problem sets will cover performance optimization and specific GPU applications in fields such as numerical mathematics, medical imaging, and finance.

This quarter we will also cover uses of the GPU in Machine Learning.

Labwork will require significant programming. A working knowledge of the C programming language will be necessary. Although CS 24 is not a prerequisite, it (or equivalent systems programming experience) is strongly recommended.

9 units; third term.

   
Instructors/TAs: Aadyot Bhatnagar - abhatnag@caltech.edu
Tyler Port - tport@caltech.edu
Bobby Abrahamson - rabraham@caltech.edu
  • Piazza: Please ask through Piazza if you have a question or issue that likely affects other students.
  • TA email: cs179.ta@gmail.com. In general, use Piazza for questions on the assignments or the material, since these may be of interest to other students. Email the TAs if you have something that affects only you or your project group.
Supervising Professor: Professor Al Barr - barradmin@cs.caltech.edu

 
Time and place: MWF 3:00-3:55 PM
Annenberg 107
 
Office Hours: TBD; please fill out the when2meet and survey below.
Fill out this survey: https://goo.gl/forms/RZiUFBGYs2GKYEFA2
Fill out this when2meet for office hours ASAP: https://www.when2meet.com/?6806202-GXLXT

    Sunday 2pm-5pm: Tyler
    Monday 8pm-10pm: Bobby
    Tuesday 7pm-9pm: Aadyot
    Location: Annenberg 104, instructional laboratory

 
Grading policy: Here is the grading scheme for the class:
  • 6 labs (60% of grade)
  • 4-week project (40% of grade)
All labs will be scored out of 100 and are weighted equally (meaning each lab is worth 10% of your grade). The final project can be completed individually or as a pair.

Homework extensions may be granted if the TAs deem it appropriate. E grades will not be granted except under extreme circumstances.

Lectures: Week 1 (Introduction), MWF Tyler
Lecture 1 (Mon. 04/02): PPT PDF
Lecture 2 (Wed. 04/04): PPT PDF
Lecture 3 (Fri. 04/06): PPT PDF
Week 2 (Shared Memory), MWF Tyler
Lecture 4 (Mon. 04/09): PPT PDF
Lecture 5 (Wed. 04/11): PPT PDF
Lecture 6 (Fri. 04/13): PPT PDF
Week 3 (Reductions, FFT), MWF Aadyot
Lecture 7 (Mon. 04/16): PPT PDF
Lecture 8 (Wed. 04/18): PPT PDF
Lecture 9 (Fri. 04/20): PPT PDF
Week 4 (cuBLAS and Graphics), MWF Tyler
Lecture 10 (Mon. 04/23): PPT PDF Google Doc
Lecture 11 (Wed. 04/25): cuBLAS example
Lecture 12 (Fri. 04/27): PPT PDF
Week 5 (Machine Learning and cuDNN I), MWF Aadyot
Lecture 13 (Mon. 04/30): PPT PDF
Lecture 14 (Wed. 05/02): PPT PDF
Lecture 15 (Fri. 05/04): PPT PDF
Week 6 (Machine Learning and cuDNN II), MWF Aadyot
Lecture 16 (Mon. 05/07): PPT PDF
Lecture 17 (Wed. 05/09): PPT PDF
Week 7 (Projects), MW no class, F in-class office hour (TBD)
Week 8 (Projects), MW no class, F in-class office hour (TBD)
Week 9 (Projects), MW no class, F in-class office hour (TBD)
Week 10 (Projects), MW no class, F in-class office hour (TBD)

Assignments: Lab 1: assignment text UNIX files
Lab 2: assignment text UNIX files
Lab 3: assignment text UNIX files
Lab 4: assignment text UNIX files
Lab 5: assignment text UNIX files


Project INFO
Textbook: Programming Massively Parallel Processors (3rd Edition) is recommended but not required. Amazon Link.

CUDA Installation There are three GPU machines available in Annenberg 104, the CMS machine lab. You will need a CMS account to use them.
Alternatively, we can supply a bootable USB image with CUDA preinstalled if you wish.
Otherwise, you can consider this Guide (updated 2018). DANGER! Especially for non-Windows machines, make a clone of your whole computer system before attempting installation! Don't do this casually: without this type of backup, you can easily lose your ability to log in and your entire laptop/desktop environment, and the loss of a working computer environment can affect your other classes. With the clone backup, however, you should not lose too much time if there is a problem.
The CMS machines or the bootable USB image are the safer options.
For the full partition backup, a suggested cloning tool is Clonezilla; you can use these Clonezilla instructions as a reminder of the steps.
An excellent USB "burning" tool (for making a Clonezilla drive or the CUDA boot drive) is Rufus, although it requires a Windows environment to run.
Other cloning and burning tools are acceptable if you have your own favorites.
Finally, use this code to retrieve your hardware info after you set up CUDA.
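If that link is unavailable, a minimal device-query sketch along the following lines (using the standard cudaGetDeviceCount and cudaGetDeviceProperties runtime calls; the file name device_query.cu is just a placeholder, not the course's official script) reports the basics. Compile with nvcc device_query.cu -o device_query.

    // Minimal sketch, not the course's official script: prints name, compute
    // capability, global memory, and SM count for each visible GPU.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);
        if (err != cudaSuccess) {
            printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("Device %d: %s\n", i, prop.name);
            printf("  Compute capability: %d.%d\n", prop.major, prop.minor);
            printf("  Global memory: %.1f GiB\n",
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
            printf("  Multiprocessors: %d\n", prop.multiProcessorCount);
        }
        return 0;
    }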

Resources: CUDA C Programming Guide
List of NVIDIA GPUs
Mapping from GPU name to Compute Capability

Material from previous year(s): 2015
2016
2017


Less useful, but cool resources: NVIDIA's Parallel Forall Blog
Videos from the last several years of NVIDIA's conference on CUDA
How to Write Code the Compiler Can Actually Optimize (2015)
Excellent CPU optimization manuals
What Every Programmer Should Know About Memory
GPU focused systems guide to deep learning