EECS 331: Introduction to Computational Photography

Quarter Offered

Fall: 3:30-4:50 TuTh; Cossairt


PREREQUISITES: EECS 211 and/or EECS 230, or permission of the instructor. Students should have experience with C/C++ and MATLAB programming. If you are interested, please contact the instructor to discuss!


Computational photography combines plentiful low-cost computing, digital sensors, actuators, and lights to escape the limitations of traditional film-like methods. New methods offer extended dynamic range; variable focus, lighting, viewpoint, resolution, and depth of field; and hints about shape, reflectance, and location. Instead of fixed digital snapshots and video playback, computational methods promise direct interactions to explore what we photograph.

  • This course fulfills the Interfaces Breadth & Project Course requirement. 

COURSE COORDINATOR: Prof. Oliver Cossairt

CATALOG DESCRIPTION: This course is the first in a two-part series that explores the emerging new field of Computational Photography. Computational photography combines ideas in computer vision, computer graphics, and image processing to overcome limitations in image quality such as resolution, dynamic range, and defocus/motion blur. This course will first cover the fundamentals of image sensing and modern cameras. We will then use this as a basis to explore recent topics in computational photography such as motion/defocus deblurring cameras, light field cameras, and computational illumination.

This course will consist of six homework assignments and no midterm or final exam. We will provide a Nokia N900 cell phone camera for each student in the course. Students will write programs that run on the phone to capture photos. Enrollment is limited to 15 students.

REQUIRED TEXTS: Computational photography is a new and exciting field. No standard texts on this topic are available yet. Optional texts include:

• Forsyth and Ponce. Computer Vision: A Modern Approach. Pearson. 2002.

• Richard Szeliski. Computer Vision: Algorithms and Applications. Springer. 2010.

• Berthold K. P. Horn. Robot Vision. The MIT Press. 1986.

• R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press. 2000.

COURSE GOALS: To teach the fundamentals of modern camera architectures and give students hands-on experience acquiring, characterizing, and manipulating data captured using a modern camera platform. For example, students will learn how to estimate scene depth from a sequence of images captured with different focus settings.
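To give a flavor of the depth-from-focus idea mentioned above, here is a toy sketch (illustrative only, not course code; real pipelines use windowed sharpness measures and camera calibration): for each pixel, compare a local sharpness measure across the focus stack and pick the focus setting where the pixel is sharpest.

```python
# Toy depth-from-focus: for each pixel, pick the focus setting whose
# image is locally sharpest (largest absolute Laplacian response).
# Hypothetical sketch in plain Python, not course material.

def laplacian(img, x, y):
    """4-neighbor discrete Laplacian of a 2D image at pixel (x, y)."""
    return (img[y][x + 1] + img[y][x - 1] +
            img[y + 1][x] + img[y - 1][x] - 4 * img[y][x])

def depth_from_focus(stack):
    """stack: list of 2D images (lists of rows), one per focus setting.
    Returns a map of focus-setting indices (0 = first setting)."""
    h, w = len(stack[0]), len(stack[0][0])
    depth = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # The image with the strongest edge response at this pixel
            # is taken to be the one in focus there.
            depth[y][x] = max(range(len(stack)),
                              key=lambda i: abs(laplacian(stack[i], x, y)))
    return depth
```

A sharp edge produces a large Laplacian response, while the same edge blurred by defocus produces a small one, so the winning index per pixel encodes which focus setting (and hence, roughly, which depth) the pixel belongs to.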


DETAILED COURSE TOPICS:

• Image formation: pinhole camera, lens camera model, sensor noise

• Lenses: focusing, zoom, field of view, aberrations

• Color: color filter arrays, demosaicking

• Radiometry and photometry

• Metering: autofocusing, auto-exposure

• Image processing: Fourier Transform, blur, convolution

• Camera projective modeling and calibration

• Depth estimation: stereo matching, photometric stereo, depth from defocus

• Material acquisition: BRDF acquisition, multispectral/hyperspectral capture

• High dynamic range imaging

• Light field capture and rendering

• Computational illumination: image relighting, light transport

• Compressive imaging: super-resolution, compressive video/light field capture

• Novel displays: 3D displays, HDR displays
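To make the first topic, image formation, concrete: under the standard pinhole model, a scene point (X, Y, Z) in camera coordinates projects to pixel coordinates u = f·X/Z + cx and v = f·Y/Z + cy, where f is the focal length in pixels and (cx, cy) is the principal point. A minimal sketch (the numeric values below are hypothetical, not course parameters):

```python
# Minimal pinhole-camera projection (illustrative sketch).
# f is the focal length in pixels; (cx, cy) is the principal point.

def project_pinhole(point, f, cx, cy):
    """Project a 3D point (X, Y, Z) with Z > 0 to pixel coordinates (u, v)."""
    X, Y, Z = point
    if Z <= 0:
        raise ValueError("point must lie in front of the camera (Z > 0)")
    return (f * X / Z + cx, f * Y / Z + cy)

# Perspective foreshortening: doubling the depth halves the offset
# of the projection from the principal point.
u1, v1 = project_pinhole((0.5, 0.25, 1.0), f=500, cx=320, cy=240)
u2, v2 = project_pinhole((0.5, 0.25, 2.0), f=500, cx=320, cy=240)
```

The division by Z is what makes the model projective rather than affine, and it is the source of the depth cues exploited by the stereo and depth-from-defocus topics above.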

HOMEWORK ASSIGNMENTS: Homework assignments will consist of camera programming and image processing components. The camera programming will be done in C/C++, and the image processing will be done using MATLAB.
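As a flavor of the image-processing side of the assignments (the course itself uses MATLAB; this hypothetical sketch uses plain Python for illustration), here is a box blur implemented as a direct 2D convolution, one of the core operations listed under the image-processing topic:

```python
# Box blur via direct 2D convolution (illustrative sketch, not course code).
# Each output pixel is the mean of its k x k neighborhood; borders are
# handled by clamping coordinates to the image (replicate padding).

def box_blur(img, k=3):
    """img: 2D list of grayscale values; k: odd kernel size."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp row to image
                    xx = min(max(x + dx, 0), w - 1)  # clamp column to image
                    total += img[yy][xx]
            out[y][x] = total / (k * k)
    return out
```

The same operation is a one-liner with MATLAB's built-in filtering functions; writing it out explicitly shows the convolution sum that the Fourier-domain view of blur refers to.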