CSE291 J00

Topics in Image-Based Modeling and Rendering


Tuesday, Thursday 12:30-1:45

Center Hall 204


Class Mailing list: cse291-j@cs.ucsd.edu




Instructor: David Kriegman

Office: AP&M 3101

Phone: (858) 822-2424

Email: kriegman@cs.ucsd.edu

Office Hours:    Tuesdays 2:00-3:00,

                        Wednesdays 1:30-2:30



Recent movies like The Matrix, Pearl Harbor, Minority Report, AI, and Star Wars: Episode II have all used computer graphics techniques requiring complete photorealism.  Many of these effects are based on a class of techniques, drawing from the fields of computer vision and computer graphics, called image-based modeling and rendering.  These techniques take images or video as input, create an appropriate model or representation, and then render synthetic images, potentially composed of multiple real and synthetic objects, from new viewpoints and under novel lighting conditions.  The need for such techniques arises in part because, while there have been great advances in rendering, it remains a tremendous challenge for designers to model both the geometry and reflectance of extremely complex scenes.


This course will explore techniques that, like computer vision, take images and video as input, but where the goal is specifically to render images, as in computer graphics.  These techniques have only emerged within the last six years – no textbooks have been written on the subject – and so this course will be based on a structured set of readings of recent papers covering topics such as light field methods, 3-D reconstruction, reflectance modeling, lighting estimation, face modeling, and texture synthesis.


Students can take this course for 2 to 4 units.  One unit is based on class participation, including having read the papers in advance, attendance, and contributions to the discourse on a topic.  A second unit is based on presenting a lecture on a specific topic; I will work with you to make it a solid lecture.  Finally, additional units (typically two) will be awarded for a term project.  [Credit where credit is due: Much of the structure of this course is based on the highly successful structure introduced by Prof. Elkan in CSE 250A and followed by Prof. Belongie in CSE 252C.]



Prerequisites: A computer graphics course (e.g., CSE 167) or a computer vision course (e.g., CSE 252).  If you have any doubts, please don’t hesitate to ask.






(January 9, 2003)


Week 1:  Introduction and Background

Jan. 7: Welcome [Kriegman] Slides: lec1.pdf

D. Forsyth, J. Ponce, “Application: Image-Based Rendering,” Chapter 26, in Computer Vision: A Modern Approach, 2002.


H.Y. Shum, S.B. Kang, "A Review of Image-based Rendering Techniques", IEEE/SPIE Visual Communications and Image Processing (VCIP) 2000, pp. 2-13.


Jan. 9: Camera Models, Transforms, Radiometry [Kriegman] Slides: lec2.pdf


D. Forsyth, J. Ponce, “Cameras”, Chapter 1, in Computer Vision: A Modern Approach, 2002.


F.E. Nicodemus, J.C. Richmond, and J.J. Hsia, Geometrical Considerations and Nomenclature for Reflectance, Institute of Basic Standards, National Bureau of Standards, October 1977.



Week 2: Mosaics, Plenoptic and Light Field Rendering

January 14: [Kriegman] Slides: lec3.pdf


E. H. Adelson and J. R. Bergen. The plenoptic function and the elements of early vision. In M. Landy and J. A. Movshon, editors, Computational Models of Visual Processing, pages 3-20. MIT Press, Cambridge, MA, 1991. http://www-bcs.mit.edu/people/adelson/pub_pdfs/elements91.pdf


S. Chen, Quicktime VR - an image-based approach to virtual environment navigation, SIGGRAPH, pages 29-38, Los Angeles, California, August 1995.


S.J. Gortler, R. Grzeszczuk, R. Szeliski, M.F. Cohen, The Lumigraph, SIGGRAPH, pp. 43-54, 1996.


M. Levoy, P. Hanrahan, Light Field Rendering, SIGGRAPH, 1996.


 January 16: [Kriegman] Slides: lec4.pdf


D. Wood, D. Azuma, W. Aldinger, B. Curless, T. Duchamp, D. Salesin, and W. Stuetzle, Surface Light Fields for 3D Photography, SIGGRAPH, 2000.


A. Isaksen, L. McMillan, S.J. Gortler, Dynamically Reparameterized Light Fields, SIGGRAPH 2000, pp. 297-306.


L. McMillan, G. Bishop, “Plenoptic Modeling: An Image-Based Rendering System”, Proceedings of SIGGRAPH 95 (Los Angeles, CA, August 6-11, 1995), pp. 39-46.

J.-X. Chai, S.-C. Chan, H.-Y. Shum, X. Tong, Plenoptic Sampling, SIGGRAPH 2000, pp. 307-318.

C. Buehler, M. Bosse, L. McMillan, S. Gortler, M. Cohen, Unstructured Lumigraph Rendering, SIGGRAPH 2001, pp. 425-432.


Week 3: Novel Viewpoints

Jan. 21: Structure from Motion and Multiview Geometry, Slides: lec5.pdf

This lecture largely provides an overview of methods for structure from motion; such material can be found in any computer vision textbook.  The following are two excellent books that focus specifically on this subject.  If you’re working in this area, one of these books should be on your bookshelf.

Multiple View Geometry in Computer Vision, by Richard Hartley and Andrew Zisserman, Cambridge University Press, 2000.

The Geometry of Multiple Images, by Olivier Faugeras, Quang-Tuan Luong, and T. Papadopoulo, MIT Press, 2001.

 Below are links to three classic papers of the field:

C. Tomasi, T. Kanade, Shape and Motion from Image Streams: A Factorization Method, IJCV, 9(2), 1992, pp. 137-154.

H.C. Longuet-Higgins, K. Prazdny, The Interpretation of a Moving Retinal Image, Proc. R. Soc. Lond. B, 1980, pp. 385-397.

H.C. Longuet-Higgins, A Computer Algorithm for Reconstructing a Scene from Two Projections, Nature, Vol. 293, 1981, pp. 133-135.

Jan. 23: 3D Reconstruction for Rendering [Diem Vu], Slides: lec6.pdf, Discussion Board

C.J. Taylor, D. Kriegman, Structure and Motion from Line Segments in Multiple Images, IEEE Trans. PAMI, Vol. 17, No. 11, pp. 1021-1033, Nov. 1995.

P.E. Debevec, C.J. Taylor, J. Malik, Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and Image-Based Approach, SIGGRAPH 1996, pp. 11-20.


Week 4: 

Jan. 28: Image Transfer Methods [Satya Malick], Slides: lec7.pdf, Discussion Board

E. Chen, L. Williams, View Interpolation for Image Synthesis, SIGGRAPH 1993.

S. Seitz, C. Dyer, View Morphing, SIGGRAPH, pp. 21-30, 1996.

Y. Genc, J. Ponce. Image-Based Rendering Using Parameterized Image Varieties. International Journal of Computer Vision, Vol. 41, No. 3, pp. 143-170, 2001.


Jan. 30: Single View Modeling [Jin-Su Kim], Slides: lec8.pdf, Discussion Board

A. Criminisi, I. Reid and A. Zisserman. Single View Metrology, Proc. IEEE International Conference on Computer Vision, 1999 


B.M. Oh, M. Chen, J. Dorsey, F. Durand, Image-Based Modeling and Photo Editing. ACM SIGGRAPH 2001.


Week 5: Lighting and Compositing

Feb. 4: Measuring Lighting and Using Radiance Maps [Sameer Agarwal], Slides: lec9.pdf, Discussion Board

M.S. Langer, S.W. Zucker, What is a Light Source?, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) 1997, pp. 172-178.

M.D. Grossberg, S.K. Nayar, "What can be Known about the Radiometric Response Function from Images?", Proc. of European Conference on Computer Vision (ECCV) 2002.


T. Mitsunaga, S.K. Nayar, "Radiometric Self Calibration", Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1999.

Other sources of information:

P. E. Debevec, J. Malik. Recovering High Dynamic Range Radiance Maps from Photographs. In SIGGRAPH 97, August 1997.

P.E. Debevec, Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-Based Graphics with Global Illumination and High Dynamic Range Photography, SIGGRAPH 98, 1998.

Feb. 6: Compositing and Matting, [Slides: lec10-surf-lightfield.pdf, lect10-compositing.pdf], Discussion Board

J.F. Blinn, Jim Blinn's Corner: Compositing, Part 1: Theory, IEEE Computer Graphics and Applications, 14(5), Sept. 1994, pp. 83-87.

D.E. Zongker, D.M. Werner, B. Curless, and D.H. Salesin, Environment Matting and Compositing, SIGGRAPH 1999, pp. 205-214.


Y.-Y. Chuang, D.E. Zongker, J. Hindorff, B. Curless, D.H. Salesin, R. Szeliski, Environment Matting Extensions: Towards Higher Accuracy and Real-Time Capture, SIGGRAPH 2000, pp. 121-130.


Y.-Y. Chuang, A. Agarwala, B. Curless, D.H. Salesin, R. Szeliski, Video Matting of Complex Scenes, SIGGRAPH 2002.

M. Koudelka, S. Magda, P. Belhumeur, D. Kriegman, “Image-based Modeling and Rendering of Surfaces with Arbitrary BRDFs,” IEEE Conf. on Computer Vision and Pattern Recognition, 2001, pp.568-575 (primarily Section 3.3)

Week 6:


Feb. 11: BRDF Modeling from Images [slides, lec11.pdf]  Discussion Board

"Image-based BRDF Measurement Including Human Skin," by Stephen R. Marschner, Stephen H. Westin, Eric P. F. Lafortune, Kenneth E. Torrance, and Donald P. Greenberg. Eurographics Workshop on Rendering, pp. 139-152, 1999.

There is a wide literature on this topic; more references will be provided.


Feb. 13: Relighting [slides, lec12.pdf] Discussion Board

J. Nimeroff, E. Simoncelli, and J. Dorsey, Efficient Re-Rendering of Naturally Illuminated Environments, Proc. Fifth Annual Eurographics Symposium on Rendering, 1994.

M. Koudelka, S. Magda, P. Belhumeur, D. Kriegman, “Image-based Modeling and Rendering of Surfaces with Arbitrary BRDFs,” IEEE Conf. on Computer Vision and Pattern Recognition, 2001, pp.568-575.

P. Debevec, T. Hawkins, C. Tchou, H.-P. Duiker, W. Sarokin, M. Sagar, “Acquiring the Reflectance Field of a Human Face”, SIGGRAPH 2000.

T.T. Wong, P.A. Heng, S.H. Or, W.Y. Ng, Illuminating Image-based Objects, Proceedings of Pacific Graphics '97, Seoul, Korea, 1997, pp. 69-78.

Z. Lin, T.T. Wong, H.Y. Shum, Relighting with the Reflected Irradiance Field: Representation, Sampling and Reconstruction, International Journal of Computer Vision, Vol. 49, No. 2-3, September-October 2002, pp. 229-246.


Week 7:

Feb. 18: Bridging Shape and Reflectance [Jongwoo Lim], [slides, lec13.pdf] Discussion Board

Y. Sato, M.D. Wheeler, and K. Ikeuchi, Object Shape and Reflectance Modeling from Observation, SIGGRAPH '97 (1997), pp. 379-387.

P.E. Debevec, Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-Based Graphics with Global Illumination and High Dynamic Range Photography, SIGGRAPH 98, 1998.

Y. Yu, P. Debevec, J. Malik, T. Hawkins, Inverse Global Illumination: Recovering Reflectance Models of Real Scenes from Photographs, SIGGRAPH 99.

Feb. 20: Texture Mapping and Synthesis (2D) [Cindy Xin Wang], [slides, lec14.pdf] Discussion Board

P.S. Heckbert, Survey of Texture Mapping, IEEE Computer Graphics and Applications, Nov. 1986.

A.A. Efros, T.K. Leung, Texture Synthesis by Non-Parametric Sampling, ICCV 1999.

J.S. De Bonet, Multiresolution Sampling Procedure for Analysis and Synthesis of Texture Images, SIGGRAPH 1997.


Week 8:  Textures

February 25: Bidirectional Texture Functions and Their Synthesis [Peter Schwer], [slides, lec15.pdf] Discussion Board

K.J. Dana, B. van Ginneken, S.K. Nayar, and J.J. Koenderink, Reflectance and Texture of Real World Surfaces, ACM Transactions on Graphics, Volume 18, No. 1, pp. 1-34, January 1999.

X. Liu, Y. Yu, H.-Y. Shum, Synthesizing Bidirectional Texture Functions for Real-World Surfaces, SIGGRAPH 2001, pp. 97-106.

X. Tong, J. Zhang, L. Liu, X. Wang, B. Guo, H.-Y. Shum, Synthesis of Bidirectional Texture Functions on Arbitrary Surfaces, SIGGRAPH 2002, pp. 665-672.


T. Malzbender, D. Gelb, H. Wolters, Polynomial Texture Maps, SIGGRAPH 2001, pp. 519-528; see also http://www.hpl.hp.com/ptm/


February 27: Patch-Based Texture Synthesis in 2-D and 3-D [Junwen Wu], [slides, lec16.pdf] Discussion Board

A.A. Efros, W.T. Freeman, “Image Quilting for Texture Synthesis and Transfer,” SIGGRAPH 2001.

L. Liang, C. Liu, Y.-Q. Xu, B. Guo, H.-Y. Shum, Real-Time Texture Synthesis by Patch-Based Sampling, ACM Transactions on Graphics (TOG), 20(3), 2001, pp. 127-150.

Week 9: Face and Video Modeling

March 4: Face Modeling, [slides, lec17.pdf ]  Discussion Board

F. Pighin, J. Hecker, D. Lischinski, D. H. Salesin, and R. Szeliski. Synthesizing realistic facial expressions from photographs. In ACM Computer Graphics (SIGGRAPH'98) Proceedings, pages 75-84, Orlando, July 1998.

V. Blanz, T. Vetter, A Morphable Model for the Synthesis of 3D Faces, SIGGRAPH 99, pp. 187-194.

A. Georghiades, P. Belhumeur, D. Kriegman, Illumination-Based Image Synthesis: Creating Novel Images of Human Faces Under Differing Pose and Lighting,  IEEE Workshop on Multi-View Modeling and Analysis of Visual Scenes, 1999, pp. 47-54.

Z. Liu, Z. Zhang, C. Jacobs, M. Cohen, Rapid Modeling of Animated Faces From Video, Proceedings of the Third International Conference on Visual Computing (Visual 2000), pp. 58-67, September 2000, Mexico City. Also available as Technical Report MSR-TR-99-21.


March 6: Video Modeling [Ofar Achler], [slides, lec18.pdf] Discussion Board

A. Schödl, R. Szeliski, D.H. Salesin, I. Essa, "Video Textures", SIGGRAPH 2000.

C. Bregler, M. Covell, M. Slaney, “Video Rewrite: Driving Visual Speech with Audio”, SIGGRAPH 97.


Week 10


March 10 (Monday): Special Event: CS Colloquium by Steve Sullivan, Director of R&D at Industrial Light and Magic, on computer vision and special effects.


March 11: No regular class meeting; please attend the S. Sullivan lecture on March 10 instead.


March 13: Project Presentations




Notes and links


Programming languages:  For any work in this course, you can use any language.  We’ve often found it convenient to program many image operations in Matlab.  Click here for Serge Belongie’s Matlab resource links.
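To give a flavor of how compactly such image operations can be prototyped, here is a small sketch of the standard "over" compositing operation (as discussed in the Week 5 compositing readings), written in Python with NumPy rather than Matlab; the function name and example values are our own illustration, not taken from any of the papers.

```python
import numpy as np

def composite_over(fg, alpha, bg):
    """Composite a foreground over a background using a per-pixel
    alpha matte: out = alpha * fg + (1 - alpha) * bg.
    All arrays are floats in [0, 1]."""
    # Broadcast a single-channel matte across the color channels.
    if alpha.ndim == fg.ndim - 1:
        alpha = alpha[..., np.newaxis]
    return alpha * fg + (1.0 - alpha) * bg

# A 2x2 RGB example: opaque top row, fully transparent bottom row.
fg = np.ones((2, 2, 3))       # white foreground
bg = np.zeros((2, 2, 3))      # black background
alpha = np.array([[1.0, 1.0],
                  [0.0, 0.0]])
out = composite_over(fg, alpha, bg)
print(out[0, 0], out[1, 1])   # -> [1. 1. 1.] [0. 0. 0.]
```

The same few lines are essentially a one-liner in Matlab as well; the point is that these image operations are short enough to experiment with interactively.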



Access to SIGGRAPH Proceedings:

http://www.acm.org/pubs/contents/proceedings/series/siggraph/

From on campus, you can access most IEEE publications via IEEE Xplore.


Computer Vision -- A Modern Approach, Forsyth and Ponce

Introductory Techniques for 3-D Computer Vision, Trucco and Verri

Some useful links for projects and papers can be found here


Class presentations

Each student taking this course for credit will be responsible for presenting the material for one class session.  The syllabus provides the current list of papers to be covered.  These have been selected to provide a coherent set of topics for each session.  To the extent possible, you should try to integrate the ideas of the papers into a cohesive lecture, rather than just covering the papers sequentially.  As you begin to prepare your lecture, you might consider bringing in material from other sources (papers, texts, web).  You may decide to cover some papers in greater depth than others.  If you have any doubts about what to cover, don’t hesitate to contact me. Each class meeting is 75 minutes, though you should prepare to talk for about an hour or so, leaving 15 minutes for discussion and questions. Please read, reflect upon, and follow these presentation guidelines, kindly provided by Prof. Elkan. 

I will work with you to make a good presentation, and in turn that will require doing some preparation in advance.

The procedure for one presentation is as follows:

1.      You should prepare a draft set of slides about one week before your presentation.  Slides can be in PowerPoint, LaTeX, DjVu, or any other presentation package.  You can use materials from the web, etc., so long as you attribute the source of the materials.  You can copy equations, diagrams, charts, and tables as necessary from the paper for the presentation (scan, cut & paste, redo, etc.).  You may find movies and other materials by the authors of the papers, so it will be worthwhile to surf their pages.

2.      About one week before your class, you will meet with me for about half an hour to go over the slides.  Bring hardcopies of the slides, and either a laptop or CD-ROM, or leave the slides accessible on the network so we can go over them.

3.      Depending upon the changes required, we may meet a day or two before your seminar to go over the slides one more time.

4.      Day of presentation: Give a good presentation with confidence, enthusiasm, and clarity.   Try to relax.  You’ll be the expert!!

5.      Immediately afterwards: Make changes to the slides suggested by the class discussion.  Email the slides in PDF, two slides per page, to the instructor for publishing on the class web page.

Presentations will be evaluated in a friendly way, but with high standards.

For each presentation, we will have a web-based discussion area.  Each seminar participant is expected to contribute at least one message to the discussion before the presentation.  A message may ask an interesting question, point out a strength or weakness of the papers, or answer a question asked by someone else.  Messages should be thoughtful!