General Purpose Computation on GPUs
Spring Semester 2014


Kick-off meeting: (tentative) Mon. April 7th, 16:00—17:00, LE 305
No weekly lecture.
Instructors: Thomas Fogal, Jens Krüger
Office: LE 305
Phone: 203 379 1314
E-Mail: thomas.fogal at uni-due dot de
Office hours: just stop by (Tom's almost always around); or by appointment (send an email).

News (newest first)

  1. [17.04.2014] As posted on moodle, milestone 1 is due May 16th!
  2. [24.01.2014] Course announced!

Course Details

NOTE: This course will be given in English!

Course Description:

Project-focused course on using accelerators to increase program performance.

There are no explicit prerequisites, but you should be comfortable with systems programming. A basic background in operating systems would be helpful, as would the Scientific Visualization course. Please talk to the instructor if you have any doubts about your readiness.

If there is enough interest, we may additionally offer a crash course in C.

If you plan to take the course, please register on the course's moodle page. The registration code is 'gpgpu' (all lowercase).

GPGPU

General Purpose computation on Graphics Processing Units refers to using a GPU to accelerate the computation of a non-graphical task. A GPU exposes a highly data-parallel model, which lets one specify the operations to perform on data at a very fine-grained level; where and how these operations execute is abstracted away by the runtime. In some ways, programming a GPU is how one would program a CPU if that CPU had 5000 cores: at that scale it no longer makes sense to manually dole out tasks to individual processing elements. Instead, we write small program fragments ('kernels', in GPU parlance) and let the runtime handle the work assignment.
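As a small illustration of this model (a sketch, not part of the course material), the following CUDA program adds two vectors. Note that the kernel describes only what happens to a single element; which core runs which element, and in what order, is left entirely to the runtime and hardware.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread handles exactly one array element.
// We state *what* to compute per element; the runtime decides
// where and when each instance executes.
__global__ void vecAdd(const float* a, const float* b, float* c, size_t n)
{
  size_t i = blockIdx.x * blockDim.x + threadIdx.x;
  if(i < n) {            // guard: the grid may be larger than n
    c[i] = a[i] + b[i];
  }
}

int main()
{
  const size_t n = 1 << 20;
  const size_t bytes = n * sizeof(float);
  float *a, *b, *c;
  // Unified memory keeps the sketch short; explicit cudaMemcpy is common too.
  cudaMallocManaged(&a, bytes);
  cudaMallocManaged(&b, bytes);
  cudaMallocManaged(&c, bytes);
  for(size_t i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

  // Launch one thread per element, grouped into blocks of 256 threads.
  const size_t blocks = (n + 255) / 256;
  vecAdd<<<blocks, 256>>>(a, b, c, n);
  cudaDeviceSynchronize();

  printf("c[0] = %f\n", c[0]);  // each element should be 3.0
  cudaFree(a); cudaFree(b); cudaFree(c);
  return 0;
}
```

The serial CPU equivalent would be a single `for` loop over all n elements; on the GPU, that loop disappears and becomes an (implicitly parallel) grid of threads.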

We should note that the term "GPU" is slightly anachronistic here. Once people realized these architectures were an excellent way to achieve high performance, companies began producing data-parallel hardware that is not actually a GPU; NVIDIA's Tesla and Intel's Many Integrated Core (MIC) cards are examples of this broadening of the architecture. This course will stick to GPUs purely for practical reasons, but the ideas carry over to other parallel environments.

Assignments and Grading

As this is a Forschungsprojekt, the entirety of the course grade will be based on your project. The final project will be due sometime after the end of lectures, and there will be approximately 3 required milestones throughout the semester.

WARNING: All groups are expected to do their own work on the programming assignments (and exams, for that matter). No cross-group collaboration is allowed. A general rule to follow is that you may discuss the programs with other groups at the concept level, but never at the coding level. If this rule is at all unclear to you, do not discuss your programs with other students at all.

Reading Materials

There are no required textbooks for this course. However, you may find Viktor Eijkhout's HPC book useful for general information on parallel scientific computing.

You may also find the vendors' GPU programming documentation useful.

Computer Accounts

We will provide remote access to a machine on which you can run your code. You may, of course, also develop locally, but your software must run on the provided platform.
Details on how to access the machine are forthcoming.

Imprint/Impressum Copyright 2014 by HPC Group - Building LE, Lotharstr. 65, 47057 Duisburg, Germany