DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
UNIVERSITY OF CALIFORNIA, SAN DIEGO

CSE 167 Final Project


Assignment 4 is a final project that must be completed in groups of two or three people. Each group needs to implement an interactive graphics software application that uses some of the advanced rendering effects or modeling techniques described in class but not covered in previous assignments. Projects will be evaluated based on technical and creative merits.

The final project score consists of four parts:

Blog: 10 points
Video: 5 points
Graphics application: 85 points
Extra Credit: 10 points

The final project will be presented and graded during our final exam period, Wednesday, March 18, 7:00 PM-9:59 PM. Each project will be presented to the entire class in the final exam room by means of a short video, which will be graded by the instructor, TAs, and tutors. Following the video presentations, groups must present their graphics applications in person for further grading in EBU3 B260 and B270.

Late submissions will not be accepted.


Blog (10 Points)

You must create a blog to report on the progress you are making on your project. You need to make three blog entries, each summarizing the previous six days of your work, to get the full score. The first is due on Wednesday, March 4, at 11:59 PM, the second is due on Tuesday, March 10, at 11:59 PM, and the third is due on Monday, March 16, at 11:59 PM.

You are free to create the blog on any web-based blog site, such as Blogger or WordPress. Once you have created your blog, please make a private post to the instructor, TAs, and tutors on Piazza, informing them of its URL. If you need to move to a different blog server, please move your entire blog content over and revise your Piazza post with the new URL.

The first blog entry (4 points) needs to contain at least the following information:

In the second blog entry (3 points) and the third blog entry (3 points), you must update all of the above items and upload at least one additional screenshot. Additional sketches of planned work are optional.


Video (5 Points)

By 12:00 noon on Wednesday, March 18, you must create a YouTube video of your graphics application. This video will be shown during the first hour of the final presentation event.

The video must be at least 1 minute long and at most 1 minute 20 seconds long. You should use screen recording software, such as Open Broadcaster Software, which is available free of charge for Windows and Mac. There does not need to be an audio track, but you are encouraged to talk over the video, or add sound or music. Upload the video to YouTube and link to it from your blog. The use of video editing software is optional, but you need to have a title slide with the name of your project and the team members' names.


Graphics Application (85 Points)

The majority of the final project score is for the graphics application your group has developed. It will be graded for both technical features and creative quality. The grading will be based on what is shown in the video and what you present in person during grading.

Technical Features (70 points)

To obtain the full score for the technical features, each team must implement at least three skill points' worth of technical features per team member (i.e., 6 skill points for teams of 2, 9 skill points for teams of 3). Further, each team must implement at least one medium or hard feature for each team member. For example, a team of 2 has the following options to cover the required 6 skill points:

If your team implements more than the required number of skill points, the additional features can fill in for points that you might lose for incomplete or insufficiently implemented features.

Following is the list of technical features you can choose from:

For a full score, each of these technical features must fulfill the more detailed requirements listed at the bottom of this page.

All technical features must have a toggle switch (keyboard key) to enable or disable the feature. Procedural algorithms must also include a keyboard key that recalculates the procedural objects with a different seed value for the random number generator and updates the geometry in real time. A minimal sketch of such a key handler is shown below.
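For illustration only (not part of the requirements), here is a minimal sketch of such a key handler using GLFW; the feature flag and regenerateTerrain() are hypothetical names chosen for this example.

```cpp
// Minimal sketch of a feature toggle and procedural re-seed with a GLFW key callback.
// gToonShadingEnabled and regenerateTerrain() are hypothetical names for illustration.
#include <GLFW/glfw3.h>
#include <random>

static bool     gToonShadingEnabled = false;   // hypothetical feature on/off flag
static unsigned gTerrainSeed        = 0;       // seed for the procedural generator

void regenerateTerrain(unsigned seed)
{
    // Placeholder: rebuild the procedural geometry with this seed and re-upload the VBOs.
    (void)seed;
}

void keyCallback(GLFWwindow* window, int key, int scancode, int action, int mods)
{
    if (action != GLFW_PRESS) return;
    if (key == GLFW_KEY_T)                     // toggle a feature on/off
        gToonShadingEnabled = !gToonShadingEnabled;
    if (key == GLFW_KEY_R)                     // re-run the procedural algorithm with a new seed
    {
        gTerrainSeed = std::random_device{}();
        regenerateTerrain(gTerrainSeed);
    }
}

// Registered once after window creation:
//   glfwSetKeyCallback(window, keyCallback);
```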

Additional technical features may be approved by the instructor upon request.

If you want to have debugging support from the TAs and tutors, you need to implement this project in C++ using OpenGL and GLFW, just like the other assignments.

Third-party graphics libraries are generally not acceptable unless cleared by the instructor. Exceptions are typically granted for libraries that load images and 3D models, libraries that support input devices, or audio support. Pre-approved libraries are:

Creativity (15 points)

We will look for a cohesive theme and story, but also for things such as aesthetic visuals, accomplished by carefully selected textures, colors, and materials, thoughtful placement of the camera and lights, correct normal vectors for all your surfaces, effective user interaction, and fluid rendering. Your score for creativity is determined by the instructor, TAs, and tutors.

Extra Credit (10 Points maximum)

You can receive up to 10 extra points for a flawless implementation of the following algorithms, which must include adequate visual debugging aids (e.g., Oleg Utkin's CSE 167 final project video).

Each of these advanced effects also counts as 3 skill points towards your technical score.

Note that some of these algorithms are not directly or sufficiently covered in class, so you will need to research them yourself. Other advanced effects may also be eligible for extra credit, but require approval from the instructor.

For a full score of 10 points, teams must implement the same number of algorithms as they have team members (i.e., 5 points per algorithm for teams of 2, or 3 1/3 points per algorithm for teams of 3).


Tips


Technical feature implementation requirements

The requirements and comments for each technical feature are listed below.
Bezier patches: Need to make a surface out of at least 2 connected Bezier patches with C1 continuity between them. Additionally, the patches must have correct normals. (A minimal evaluation sketch appears after this list.)
Toon shading: Needs to consist of both discretized colors and silhouettes. The use of toon shading must make sense for the project.
Bump mapping: Needs to use either a height map from which the normals are derived, or a normal map directly. Upon key press, the map needs to be shown alongside the rendering window.
Glow or halo effect: The glow must be obvious and clearly visible (not occluded). Implement functionality to turn the glow off upon key press.
Particle effect: Generate a large number of particles (at least 200), all of which can move and die shortly after appearing. The application must not experience a significant slowdown, and instancing has to be done cleanly (i.e., no memory leaks when particles die). (A minimal lifetime-management sketch appears after this list.)
Collision detection with bounding spheres or boxes: The bounding volumes (spheres or boxes) must be tight enough around the colliding objects to be within 50% of the objects' size. There must be a debug option that, upon key press, displays the bounding volumes as wireframe spheres or boxes and uses different colors for volumes that are colliding (e.g., red) versus those that are not (e.g., white). An example of an algorithm to implement is the sweep-and-prune algorithm with dimension reduction, as discussed in class. (A sphere-overlap sketch appears after this list.)
Procedurally modeled city: Must include 3 types of buildings and some large landmark features that span multiple blocks (e.g., a park (a flat rectangular piece of land is fine), a lake, or a stadium). There must be more than one building per block. Creativity points if you can input an actual city's data for the roads.
Procedurally modeled buildings: Must have 3 non-identical portions (top, middle, and bottom; or 1st floor, 2nd floor, and roof). Generate at least 3 different buildings with differently shaped windows.
Procedurally generated terrain: Must implement an algorithm that does not simply read from a terrain map. You can use any algorithm mentioned in class or found on the internet, such as midpoint displacement, diamond-square, etc. The generated terrain must look like something potentially habitable, and we will ask you to explain how the terrain was generated. Your terrain also needs to have correct normals. (A diamond-square sketch appears after this list.)
Procedurally generated plants with L-systems: To make the plants 3D you can add another axis of rotation (typical symbols are & and ^). Generate at least 3 trees that demonstrate different rules. Pseudorandom execution will make your trees look more varied and pretty. (A string-rewriting sketch appears after this list.)
Procedurally generated and animated clouds: Must implement an algorithm that procedurally generates clouds, textures an environment map (e.g., a skybox) with them, and animates them. The resulting clouds and animation must look realistic.
Water effect with realistic waves, reflection, and refraction: Add shaders to render water that reflects and refracts the skybox textures used as your cubic environment map. In addition, simulate realistic waves (e.g., Gerstner waves) for the water. To get the extra credit points, the water must reflect/refract all 3D objects of your scene, not just the skybox (you need to place multiple 3D objects so that they reflect off the water into the camera). (A Gerstner wave sketch appears after this list.)
Shadow mapping: Upon key press, the shadow map (as a grayscale image) must be shown alongside the rendering window. You must be able to toggle shadows with another key. (A depth-FBO setup sketch appears after this list.)
Shadow volumes: In addition to the shadows, show the wireframe of the shadow volumes that were created. Points cannot be combined with shadow mapping; in other words, you will either get 3 points for having shadow volumes or 2 points for shadow mapping, but not 5 points for both.
Displacement mapping: Using a texture or height map, the positions of points on the geometry must be modified. The map must optionally (upon key press) be shown alongside the actual graphics window to demonstrate that normal mapping or bump mapping was not used to achieve a displacement illusion.
Depth of field: Similar to how cameras work, focus on an object in the scene and apply a gradual decrease in sharpness to objects that are out of focus. You must be able to change the focal point at runtime between at least 2 different points. Not to be confused with fog; fog must not be used in the same project. Make sure you have objects at various distances to show that this feature works properly.
Motion blur: Needs to make use of per-pixel velocities (e.g., as described in this tutorial or a similar method).
Screen space effects: Implement one of the following: a screen space lighting effect using normal maps, depth maps (e.g., haze), or area lights; screen space rendering effects such as motion blur or reflection (with the ability to turn them on/off); or volumetric light scattering or some other kind of screen space ray tracing effect.
Screen space ambient occlusion (SSAO): Demonstrate functionality on an object with a large number of crevices (note that the OBJ models given this quarter will not suffice; if you are unsure, just post several models you intend to work with on your blog for feedback). Implement functionality to turn SSAO off upon key press. To qualify for extra credit, the SSAO map (as a grayscale image) must be shown alongside the rendering window upon key press.
Screen space directional occlusion (SSDO) with color bounce: Demonstrate functionality on multiple objects that possess large numbers of crevices and widely varying colors. Implement functionality to turn SSDO with color bounce off upon key press. To qualify for extra credit, the SSDO map (as a color image) must be shown alongside the rendering window upon key press.
Collision detection with arbitrary geometry: Needs to test for the intersection of geometric primitives. Performance should be smooth enough to clearly show that your algorithm is working properly. Support a debug mode that highlights colliding faces (e.g., colors a face red when it is colliding).
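The sketches below are optional illustrations, not required implementations; every struct, function, and parameter name in them was chosen for these examples. First, evaluating a point and a normal on one bicubic Bezier patch, assuming GLM for vector math. Note that C1 continuity across a shared patch boundary additionally requires the control points on either side of the boundary to be collinear and equally spaced.

```cpp
// Minimal sketch of evaluating a point and normal on one bicubic Bezier patch.
// Assumes GLM for vector math; the 4x4 control-point array is supplied by you.
#include <glm/glm.hpp>

// Cubic Bernstein basis functions B0..B3 at parameter t.
static void bernstein(float t, float b[4])
{
    float s = 1.0f - t;
    b[0] = s * s * s;
    b[1] = 3.0f * s * s * t;
    b[2] = 3.0f * s * t * t;
    b[3] = t * t * t;
}

// Derivatives of the cubic Bernstein basis functions at parameter t.
static void bernsteinDeriv(float t, float d[4])
{
    float s = 1.0f - t;
    d[0] = -3.0f * s * s;
    d[1] = 3.0f * s * s - 6.0f * s * t;
    d[2] = 6.0f * s * t - 3.0f * t * t;
    d[3] = 3.0f * t * t;
}

// Evaluate position and unit normal of a patch with control points P[i][j] at (u, v).
void evalBezierPatch(const glm::vec3 P[4][4], float u, float v,
                     glm::vec3& pos, glm::vec3& normal)
{
    float bu[4], bv[4], du[4], dv[4];
    bernstein(u, bu);      bernstein(v, bv);
    bernsteinDeriv(u, du); bernsteinDeriv(v, dv);

    pos = glm::vec3(0.0f);
    glm::vec3 tangentU(0.0f), tangentV(0.0f);
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
        {
            pos      += bu[i] * bv[j] * P[i][j];
            tangentU += du[i] * bv[j] * P[i][j];   // dP/du
            tangentV += bu[i] * dv[j] * P[i][j];   // dP/dv
        }
    normal = glm::normalize(glm::cross(tangentU, tangentV));
}
```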
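Next, a minimal sketch of particle lifetime management with a contiguous std::vector; removing dead particles with the erase-remove idiom avoids per-particle heap allocation and the leaks that come with it. The ParticleSystem class and its members are illustrative names.

```cpp
// Minimal sketch of particle lifetime management without memory leaks,
// using a contiguous std::vector and the erase-remove idiom.
#include <glm/glm.hpp>
#include <vector>
#include <algorithm>

struct Particle
{
    glm::vec3 position;
    glm::vec3 velocity;
    float     life;        // remaining lifetime in seconds
};

class ParticleSystem
{
public:
    void spawn(const glm::vec3& origin, const glm::vec3& velocity, float life)
    {
        particles.push_back({origin, velocity, life});
    }

    void update(float dt)
    {
        for (Particle& p : particles)
        {
            p.position += p.velocity * dt;
            p.life     -= dt;
        }
        // Compact the array by dropping dead particles; no per-particle allocation to leak.
        particles.erase(std::remove_if(particles.begin(), particles.end(),
                            [](const Particle& p) { return p.life <= 0.0f; }),
                        particles.end());
    }

    const std::vector<Particle>& data() const { return particles; }

private:
    std::vector<Particle> particles;   // uploaded to an instanced VBO each frame
};
```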
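A minimal sketch of the sphere-sphere overlap test behind bounding-sphere collision detection; the BoundingSphere struct and its colliding flag (which would drive the red/white debug color) are illustrative names, and the brute-force pair loop is only a placeholder for sweep and prune.

```cpp
// Minimal sketch of a sphere-sphere overlap test and the per-volume debug flag.
// BoundingSphere is a hypothetical struct; drawing the wireframe volumes is up to you.
#include <glm/glm.hpp>
#include <vector>

struct BoundingSphere
{
    glm::vec3 center;
    float     radius;
    bool      colliding = false;   // drives the debug color (red vs. white)
};

// Two spheres overlap when the squared distance between their centers is
// at most the square of the sum of their radii (avoids a sqrt).
bool intersects(const BoundingSphere& a, const BoundingSphere& b)
{
    glm::vec3 d = a.center - b.center;
    float r = a.radius + b.radius;
    return glm::dot(d, d) <= r * r;
}

// Brute-force pass for clarity; sweep and prune would sort the spheres along one
// axis first and only test pairs whose intervals overlap on that axis.
void updateCollisions(std::vector<BoundingSphere>& spheres)
{
    for (BoundingSphere& s : spheres) s.colliding = false;
    for (size_t i = 0; i < spheres.size(); ++i)
        for (size_t j = i + 1; j < spheres.size(); ++j)
            if (intersects(spheres[i], spheres[j]))
                spheres[i].colliding = spheres[j].colliding = true;
}
```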
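A minimal sketch of diamond-square heightmap generation on a (2^n + 1) by (2^n + 1) grid; the seed parameter pairs naturally with the required key for regenerating procedural geometry with a new seed, and all names are illustrative.

```cpp
// Minimal sketch of diamond-square heightmap generation.
// size must be 2^n + 1; roughness in (0, 1) controls how quickly the noise fades.
#include <vector>
#include <random>

std::vector<float> diamondSquare(int size, float roughness, unsigned seed)
{
    std::vector<float> h(size * size, 0.0f);
    auto at = [&](int x, int y) -> float& { return h[y * size + x]; };

    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> offset(-1.0f, 1.0f);
    float scale = 1.0f;

    // Corners start at random heights.
    at(0, 0) = offset(rng);          at(size - 1, 0) = offset(rng);
    at(0, size - 1) = offset(rng);   at(size - 1, size - 1) = offset(rng);

    for (int step = size - 1; step > 1; step /= 2, scale *= roughness)
    {
        int half = step / 2;

        // Diamond step: the center of each square gets the average of its 4 corners.
        for (int y = half; y < size; y += step)
            for (int x = half; x < size; x += step)
                at(x, y) = 0.25f * (at(x - half, y - half) + at(x + half, y - half) +
                                    at(x - half, y + half) + at(x + half, y + half))
                           + offset(rng) * scale;

        // Square step: each edge midpoint gets the average of its available neighbors.
        for (int y = 0; y < size; y += half)
            for (int x = (y + half) % step; x < size; x += step)
            {
                float sum = 0.0f; int n = 0;
                if (x - half >= 0)   { sum += at(x - half, y); ++n; }
                if (x + half < size) { sum += at(x + half, y); ++n; }
                if (y - half >= 0)   { sum += at(x, y - half); ++n; }
                if (y + half < size) { sum += at(x, y + half); ++n; }
                at(x, y) = sum / n + offset(rng) * scale;
            }
    }
    return h;   // per-vertex normals can then be derived from neighboring height differences
}
```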
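A minimal sketch of L-system string rewriting; interpreting the resulting string with a 3D turtle (F, +, -, &, ^, [, ]) to emit branch geometry is a separate step, and the example rule shown in the trailing comment is illustrative only.

```cpp
// Minimal sketch of L-system string rewriting; the turtle interpretation that turns
// the final string into branch geometry is a separate step.
#include <string>
#include <map>
#include <utility>

// Apply the production rules a given number of times. Symbols without a rule
// (such as +, -, &, ^, [, ]) are copied through unchanged.
std::string rewrite(std::string axiom,
                    const std::map<char, std::string>& rules,
                    int iterations)
{
    for (int i = 0; i < iterations; ++i)
    {
        std::string next;
        for (char c : axiom)
        {
            auto it = rules.find(c);
            next += (it != rules.end()) ? it->second : std::string(1, c);
        }
        axiom = std::move(next);
    }
    return axiom;
}

// Example (hypothetical) rule set for a branching tree grown from the axiom "F".
// Using several rule sets, or picking among alternative productions with a seeded
// std::mt19937, yields visually different trees:
//   std::map<char, std::string> rules = { {'F', "F[+F]F[-F][&F]"} };
//   std::string tree = rewrite("F", rules, 4);
```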
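A minimal sketch of a single Gerstner wave displacing a vertex on the CPU; in practice the same math typically runs per vertex in the water shader, and several waves with different parameters are summed. The GerstnerWave struct and its field values are illustrative.

```cpp
// Minimal sketch of one Gerstner wave displacing a grid vertex at rest position (x0, 0, z0).
#include <glm/glm.hpp>
#include <cmath>

struct GerstnerWave
{
    glm::vec2 direction;   // unit vector in the XZ plane
    float amplitude;       // wave height
    float wavelength;      // distance between crests
    float speed;           // phase speed
    float steepness;       // 0 = rolling sine wave, 1 = sharp crests
};

glm::vec3 gerstner(const GerstnerWave& w, float x0, float z0, float t)
{
    float k     = 2.0f * 3.14159265f / w.wavelength;                          // wavenumber
    float phase = k * glm::dot(w.direction, glm::vec2(x0, z0)) - k * w.speed * t;

    // Horizontal displacement toward the crest plus vertical sine motion.
    return glm::vec3(
        x0 + w.steepness * w.amplitude * w.direction.x * std::cos(phase),
        w.amplitude * std::sin(phase),
        z0 + w.steepness * w.amplitude * w.direction.y * std::cos(phase));
}

// Summing several GerstnerWave instances with different directions, amplitudes, and
// wavelengths gives a much more convincing water surface than a single wave.
```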
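Finally, a minimal sketch of creating a depth-only framebuffer for shadow mapping with plain OpenGL, assuming an extension loader such as GLEW has already been initialized; the resolution and names are illustrative.

```cpp
// Minimal sketch of a depth-only FBO for shadow mapping (assumes GLEW is initialized).
#include <GL/glew.h>

GLuint depthMapFBO = 0, depthMap = 0;
const int SHADOW_WIDTH = 1024, SHADOW_HEIGHT = 1024;   // illustrative resolution

void createShadowMap()
{
    // Depth texture that the light's render pass writes into.
    glGenTextures(1, &depthMap);
    glBindTexture(GL_TEXTURE_2D, depthMap);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, SHADOW_WIDTH, SHADOW_HEIGHT,
                 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Framebuffer with only a depth attachment; no color buffer is needed.
    glGenFramebuffers(1, &depthMapFBO);
    glBindFramebuffer(GL_FRAMEBUFFER, depthMapFBO);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthMap, 0);
    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

// Per frame: (1) bind depthMapFBO, set the viewport to the shadow map size, and draw
// the scene from the light's point of view; (2) bind the default framebuffer and draw
// the scene normally, sampling depthMap to decide whether each fragment is in shadow.
// Drawing depthMap as a grayscale quad next to the scene satisfies the requirement to
// show the shadow map alongside the rendering window.
```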

Last update: March 15, 2020. Content adapted from previous CSE courses by Jürgen Schulze.