DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
UNIVERSITY OF CALIFORNIA, SAN DIEGO

CSE 167 Final Project


The final assignment is a project completed in groups of two. Each group needs to implement an interactive software application that uses some of the advanced rendering effects or modeling techniques discussed in class but not covered in previous assignments. Projects will be evaluated based on technical and creative merits.


Grading: The final project score consists of three parts:

Blog: 10 points
Presentation: 90 points
Extra Credit: 10 points

The final project will be presented and graded during our final exam period, Wed, Mar 21, 7:00 PM-9:59 PM. It will be presented to the entire class in CENTR 212 by means of a one-minute-long video, which will be graded by the instructor, TAs, and tutors. Following the video presentations, groups must present their projects in person for further grading in EBU3B B270.

Late submissions will not be accepted.


Blog (10 Points)

You need to create a blog to report on the progress you are making on your project. You need to make at least two blog entries to get the full score. The first is due on Monday, March 12 at 11:59 PM and the second is due on Monday, March 19 at 11:59 PM. Additionally, by 12:00 noon on March 21 you need to create a YouTube video of your application. This video will be shown during the first hour of the final presentation event, and is the primary basis for your grade.

You are free to create the blog on any web-based blog site, such as Blogger or WordPress. Once you have created your blog, please make a private post to the instructor, TAs, and tutors on Piazza, informing them of its URL. If you need to move to a different blog server, please move your entire blog content over and revise your Piazza post with the new URL.

The first blog entry needs to contain at least the following information:

In the second blog entry, you need to update all of the above items and upload at least one additional screen shot.

The video should be at least 1 minute long and at most 1 minute 20 seconds long. You should use screen recording software, such as Open Broadcaster Software, which is available free of charge for Windows and Mac. There does not need to be an audio track, but you are welcome to talk over the video, or add sound or music. Upload the video to YouTube and link to it from your blog. The use of video editing software is optional, but you need to have a title slide with the name of your project and the team members' names. This can be done within YouTube itself with text embedding.

The points are distributed as follows.

Blog entry 1: 4 points
Blog entry 2: 3 points
Video: 3 points

Presentation (90 Points)

80% of the presentation score is for the technical features and 20% is for the creative quality of your demonstration. The grading will be based solely on your presentation: what you do not show us will not score points.

To obtain the full score for the technical quality, each team must implement at least three skill points' worth of technical features per team member (i.e., 6 skill points for a team of two). In addition, each team must implement at least one medium or hard feature for each member of the team. That is, each team has the following options to cover the required 6 skill points:

If your team implements more than the required number of skill points, the additional features can make up for points you might lose for incomplete or insufficiently implemented features.

Your creativity score will be determined by averaging the instructor's, TAs', and tutors' subjective scores. We will look for a cohesive theme and story, but also for things such as aesthetic visuals (achieved through carefully selected textures, colors, and materials, and through the placement of the camera and lights), effective user interaction, and fluid rendering.

Here is the list of technical features you can choose from:

For a full score, each of these technical features must fulfill the more detailed requirements listed at the bottom of this page.

All technical features have to have a toggle switch (keyboard key) with which they can be enabled or disabled. For procedural algorithms, there additionally needs to be a key which recalculates the procedural objects with a different seed value for the random number generator and updates the geometry in real time.
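As a minimal sketch (with hypothetical feature flags and key bindings, not required ones), such a toggle could be wired up through a GLFW key callback:

    #include <GLFW/glfw3.h>

    // Hypothetical feature flags; a real project might keep these in a settings struct.
    static bool g_toonShadingEnabled = false;
    static bool g_fogEnabled = false;

    // GLFW key callback: toggles features on key press, re-seeds procedural content on R.
    static void keyCallback(GLFWwindow* window, int key, int scancode, int action, int mods)
    {
        if (action != GLFW_PRESS) return;
        switch (key) {
            case GLFW_KEY_T: g_toonShadingEnabled = !g_toonShadingEnabled; break;
            case GLFW_KEY_F: g_fogEnabled = !g_fogEnabled; break;
            case GLFW_KEY_R:
                // For procedural features: rebuild the geometry with a new random seed.
                // regenerateProceduralGeometry(std::rand());   // hypothetical helper
                break;
            default: break;
        }
    }

    // Registered once after the window has been created:
    // glfwSetKeyCallback(window, keyCallback);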

Additional technical features may be approved by the course staff upon request.

If you want to have debugging support from the TAs and tutors, you need to implement this project in C++ using OpenGL and GLFW, just like the other assignments.

Third-party graphics libraries are generally not acceptable unless cleared by the instructor. Exceptions are typically granted for libraries to load images and 3D models, libraries to support input devices, or support for audio. Pre-approved libraries are:


Extra Credit (10 Points maximum)

Bounty Points: To motivate you to choose to implement some of the hardest algorithms, you can receive up to 10 extra points for a flawless implementation of two of the following algorithms, together with adequate visual debugging aids. Note that some of these are not directly or sufficiently covered in class, so you will need to research them yourself.

For the full score of 10 points, teams of two must implement two of these algorithms; in other words, each of the algorithms below earns the team 5 of the 10 extra credit points.

Any other technical feature (even from the regular feature lists) can also be eligible for bounty points, but this requires permission from a course staff member.

As an example of what is meant by visual debugging aids, look at Oleg Utkin's CSE 167 final project video.


Tips


Technical Feature Implementation Requirements

Each technical feature is listed below together with its requirements and comments.
Bezier Patches: Need to make a surface out of at least 4 connected Bezier patches with C1 continuity between all of them (a patch evaluation sketch follows after this table).
Per-pixel illumination of texture-mapped polygons: This effect can be hard to see without careful placement of the light source(s). You need to support a keyboard key to turn the effect on or off at runtime. The best type of light source to demonstrate the effect is a spot light with a small spot width, to illuminate only part of the texture.
Toon shading: Needs to consist of both discretized colors and silhouettes (a color-banding sketch follows after this table). The use of toon shading must make sense for the project, and rim shading must not be used in the same project.
Rim shading: Add lighting along the rim of objects to further contrast them with the background. The use of rim shading must make sense for the project, and toon shading must not be used in the same project.
Bump mapping: Needs to use either a height map to derive the normals from, or a normal map directly. The map needs to be shown alongside the rendering window upon key press.
Glow or halo effect: Object glow must be obvious and clearly visible (not occluded). Implement functionality to turn off the glow upon key press.
Particle effect: Generate a lot of particles (at least 200), all of which can move and die shortly after appearing (a particle pool sketch follows after this table). The application should not experience a significant slowdown, and instancing has to be done cleanly (no memory leaks should occur when the particles die).
Linear fog: Add a fog effect to your application, similar to the now deprecated glFog function, so that objects farther away are fogged more than those that are closer (a fog shader sketch follows after this table). The fog effect intensifies linearly with distance, so make sure you have objects at various distances to show that.
Collision detection with bounding boxes: The bounding boxes must be tight enough around the colliding objects to be within 50% of the size of the objects. There must be a debug option which, upon key press, displays the bounding boxes as wireframe boxes and uses different colors for boxes which are colliding (e.g., red) vs. those that are not (e.g., white). An example of an algorithm to implement is the Sweep and Prune algorithm with Dimension Reduction, as discussed in class. (An overlap test sketch follows after this table.)
Procedurally modeled city: Must include 3 types of buildings; some large landmark features that span multiple streets, such as a park (a flat rectangular piece of land is fine), a lake, or a stadium; and roads that are more complex than a regular square grid (even Manhattan is more complex than a grid!).
Creativity points if you can input an actual city's data for the roads!
Procedurally modeled buildings: Must have 3 non-identical portions (top, middle, bottom OR 1st floor, 2nd floor, roof).
Generate at least 3 different buildings with differently shaped windows.
Procedurally generated terrain: Ability to input a height map, either real (1 point) or generated from different algorithms (2 points).
Shader that adds at least 3 different types of terrain (for example: grassland, plains, desert, snow, tundra, sea, rocks, wasteland, etc.).
Procedurally generated plants with L-systems: At least 4 language variables (X, F, +, -) and parameters (length of F, theta of + and -). A string rewriting sketch follows after this table.
To make it 3D you can also add another axis of rotation (typical variables are & and ^).
At least 3 trees that demonstrate different rules.
Pseudorandom execution will make your trees look more varied and pretty.
Water effect with waves, reflection and refraction: Add shaders to render water, reflecting and refracting the sky box textures as your cubic environment map. In addition, simulate wave animations for the water. To get the bounty points, the water must reflect/refract all 3D objects of your scene, not just the sky box (you need to place multiple 3D objects so that they reflect off the water into the camera).
Shadow mapping: Upon key press, the shadow map (as a grayscale image) must be shown alongside the rendering window. You should be able to toggle shadows with another key.
Shadow volumes: In addition to the shadow map, show the wireframe of the shadow volume that was created. Points cannot be stacked with shadow mapping; in other words, you will either get 3 points for shadow volumes or 2 points for shadow mapping, but not 5 points for both.
Procedurally modeled complex objects with shape grammar: At least 5 different shapes must be used, and there must be rules for how they can be combined. It is not acceptable to allow their combination in completely arbitrary ways. You cannot stack this feature's skill points with the procedural modeling techniques listed in the Medium difficulty section; in other words, you will either get 3 points for the shape grammar or 2 points for a medium-difficulty procedural modeling feature, but not both.
Displacement mapping: Using a texture or height map, the position of points on the geometry should be modified. The map must optionally (upon key press) be shown alongside the actual graphics window, to demonstrate that normal mapping or bump mapping was not used to achieve a displacement illusion.
Motion blur: This screen space effect must perform per-object motion blur. See screen space effects for additional requirements and comments.
Depth of field: Similar to how cameras work, focus on an object in the scene and apply a gradual decrease in sharpness to objects that are out of focus. You must be able to change the focal point at runtime, between at least 2 different points. Not to be confused with fog; fog should not be used in the same project. Make sure you have objects at various distances to show that this feature works properly.
Screen space effects: Screen space lighting effects using normal maps and depth maps, such as haze or area lights.
Screen space rendering effects such as motion blur or reflection (ability to turn on/off).
Volumetric light scattering or some kind of screen space ray tracing effect.
Screen space ambient occlusion (SSAO): Demonstrate functionality on an object with a large amount of crevices (note that the obj models given this quarter will not suffice; if you are unsure, just post several models you intend to work with on your blog for feedback). Implement functionality to turn SSAO off upon key press. To qualify for bounty points, upon key press, the SSAO map (as a grayscale image) must be shown alongside the rendering window.
Screen space directional occlusion (SSDO) with color bounce: Demonstrate functionality on multiple objects which possess large amounts of crevices and widely varying colors. Implement functionality to turn SSDO with color bounce off upon key press. To qualify for bounty points, upon key press, the SSDO map (as a colored image) must be shown alongside the rendering window.
Collision detection with arbitrary geometry: Needs to test for the intersection of faces. Performance should be smooth enough to undoubtedly show that your algorithm is working properly. Support a debug mode that highlights colliding faces (e.g., color the face red when colliding).
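For reference, here are a few illustrative sketches for some of the features above. They are not required implementations, only starting points under stated assumptions. First, one common way to build the Bezier patch surface is to evaluate points of a bicubic patch on the CPU from its 4x4 control grid (GLM is assumed for the vector type); C1 continuity between neighboring patches then has to be enforced through the choice of shared control points:

    #include <glm/glm.hpp>

    // Cubic Bernstein basis functions B0..B3 evaluated at t in [0, 1].
    static void bernstein(float t, float b[4])
    {
        float s = 1.0f - t;
        b[0] = s * s * s;
        b[1] = 3.0f * s * s * t;
        b[2] = 3.0f * s * t * t;
        b[3] = t * t * t;
    }

    // Evaluate one bicubic Bezier patch at parameters (u, v) from its 4x4 control points.
    glm::vec3 evalBezierPatch(const glm::vec3 ctrl[4][4], float u, float v)
    {
        float bu[4], bv[4];
        bernstein(u, bu);
        bernstein(v, bv);
        glm::vec3 p(0.0f);
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                p += bu[i] * bv[j] * ctrl[i][j];
        return p;
    }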
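A sketch of the color-banding half of toon shading, written as a GLSL fragment shader embedded in a C++ raw string; the varying and uniform names are placeholders, and the silhouette pass is only hinted at in the comment:

    // Quantize the diffuse term into a few discrete bands. Silhouettes would be added
    // separately, e.g. by darkening fragments where dot(normal, viewDir) is near zero.
    const char* toonFragmentShader = R"(
    #version 330 core
    in vec3 fragNormal;
    uniform vec3 lightDir;      // direction towards the light, normalized
    uniform vec3 baseColor;
    out vec4 outColor;
    void main()
    {
        float diffuse = max(dot(normalize(fragNormal), normalize(lightDir)), 0.0);
        float levels  = 4.0;                           // number of color bands
        float banded  = floor(diffuse * levels) / levels;
        outColor = vec4(baseColor * (0.2 + 0.8 * banded), 1.0);
    }
    )";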
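A sketch of a CPU-side particle pool with the required spawn/move/die behavior; std::vector owns the particles, so nothing leaks when they are removed (the gravity value and lifetime are arbitrary placeholders):

    #include <algorithm>
    #include <cstdlib>
    #include <vector>
    #include <glm/glm.hpp>

    struct Particle {
        glm::vec3 position;
        glm::vec3 velocity;
        float     life;      // remaining lifetime in seconds
    };

    std::vector<Particle> particles;

    void spawnParticle(const glm::vec3& origin)
    {
        Particle p;
        p.position = origin;
        p.velocity = glm::vec3((std::rand() % 200 - 100) / 100.0f,   // random sideways drift
                               (std::rand() % 200) / 100.0f,         // upward kick
                               (std::rand() % 200 - 100) / 100.0f);
        p.life = 1.5f;                                               // dies shortly after appearing
        particles.push_back(p);
    }

    void updateParticles(float dt)
    {
        for (Particle& p : particles) {
            p.velocity += glm::vec3(0.0f, -9.8f, 0.0f) * dt;         // simple gravity
            p.position += p.velocity * dt;
            p.life     -= dt;
        }
        // Remove dead particles; the vector reclaims their storage, so no memory leaks.
        particles.erase(std::remove_if(particles.begin(), particles.end(),
                            [](const Particle& p) { return p.life <= 0.0f; }),
                        particles.end());
    }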
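A sketch of the linear fog computation as a GLSL fragment shader (again embedded in a C++ raw string); fogStart, fogEnd, and the incoming shaded color are assumed inputs:

    // Linear fog: the fog factor falls off linearly between fogStart and fogEnd,
    // mimicking the behavior of the deprecated glFog(GL_LINEAR) fixed-function path.
    const char* fogFragmentShader = R"(
    #version 330 core
    in vec3 viewPos;            // fragment position in eye space
    in vec3 litColor;           // shaded color computed by the lighting code (placeholder)
    uniform vec3  fogColor;
    uniform float fogStart;     // distance at which fog begins
    uniform float fogEnd;       // distance at which fog fully covers the object
    out vec4 outColor;
    void main()
    {
        float dist      = length(viewPos);
        float fogFactor = clamp((fogEnd - dist) / (fogEnd - fogStart), 0.0, 1.0);
        // fogFactor == 1 -> no fog (close), fogFactor == 0 -> fully fogged (far away)
        outColor = vec4(mix(fogColor, litColor, fogFactor), 1.0);
    }
    )";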
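For the bounding box feature, the core overlap test is a per-axis interval comparison; a minimal sketch using GLM types (sweep and prune with dimension reduction would add sorting along one axis on top of this test):

    #include <glm/glm.hpp>

    // Axis-aligned bounding box given by its minimum and maximum corners in world space.
    struct AABB {
        glm::vec3 min;
        glm::vec3 max;
    };

    // Two AABBs overlap if and only if their intervals overlap on all three axes.
    bool intersects(const AABB& a, const AABB& b)
    {
        return a.min.x <= b.max.x && a.max.x >= b.min.x &&
               a.min.y <= b.max.y && a.max.y >= b.min.y &&
               a.min.z <= b.max.z && a.max.z >= b.min.z;
    }

    // For the debug view, draw each box as a wireframe and pick its color based on
    // whether intersects() returned true (e.g., red) or false (e.g., white).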
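A sketch of one rewriting step of an L-system; the production rules shown in the usage comment are hypothetical examples, not required ones:

    #include <string>
    #include <unordered_map>

    // One rewriting step: every symbol with a production rule is replaced by its
    // right-hand side; all other symbols (+, -, [, ]) are copied through unchanged.
    std::string lsystemStep(const std::string& input,
                            const std::unordered_map<char, std::string>& rules)
    {
        std::string output;
        for (char c : input) {
            auto it = rules.find(c);
            output += (it != rules.end()) ? it->second : std::string(1, c);
        }
        return output;
    }

    // Example usage: expand the axiom "X" a few times, then interpret the string with a
    // turtle (F draws a segment of fixed length, + and - rotate by theta, [ and ] push
    // and pop the turtle state):
    //
    //     std::unordered_map<char, std::string> rules = {
    //         {'X', "F[+X][-X]FX"},
    //         {'F', "FF"}
    //     };
    //     std::string s = "X";
    //     for (int i = 0; i < 4; ++i) s = lsystemStep(s, rules);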

Restrictions

A summary of the restrictions listed in the above table, along with extra explanations:


Last update: March 16, 2018. Content adapted from previous CSE courses by Jürgen Schulze.