Title: Debiasing Image-to-Image Translation Models

Speaker: Md. Mehrab Tanjim (UCSD)

Abstract: Deep generative models for image-to-image translation have shown great promise on problems such as image enhancement and generating images from sketches. However, most of these models must be trained on a large, balanced dataset in which all classes are well represented. In its absence, rare attributes present in the input are often lost in the generated output image. Real datasets, unfortunately, often contain many rare attributes, so generative models may perform poorly on them. Our experiments with Pixel2Style2Pixel reveal that this bias is prevalent for infrequent attributes in the face dataset, e.g., eyeglasses and baldness: even when the input image clearly shows eyeglasses, the model is unable to generate a face with them. To remedy this problem, we propose a general framework based on contrastive learning, resampling, and minority-category supervision to debias existing generative models for various image-to-image translation tasks such as super-resolution and sketch-to-image generation.
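The abstract does not specify how the resampling component is implemented; as one illustrative possibility, a minimal sketch of inverse-frequency resampling is shown below, where each training example is drawn with probability inversely proportional to the frequency of its attribute class, so rare attributes (e.g., eyeglasses) are seen as often as common ones during training. The function name and toy labels are hypothetical and not taken from the talk.

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Per-sample sampling probabilities proportional to 1 / class frequency.

    Rare attribute classes receive the same total sampling mass as
    common ones, counteracting dataset imbalance.
    """
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes.tolist(), counts.tolist()))
    # Each sample's weight is the reciprocal of its class count.
    weights = np.array([1.0 / freq[l] for l in labels.tolist()])
    return weights / weights.sum()  # normalize to a probability distribution

# Hypothetical imbalanced labels: 8 "no eyeglasses" (0) vs. 2 "eyeglasses" (1)
labels = [0] * 8 + [1] * 2
w = inverse_frequency_weights(labels)

# Draw a resampled training set; both classes now appear roughly equally often.
rng = np.random.default_rng(0)
resampled = rng.choice(len(labels), size=1000, p=w)
```

In a typical PyTorch training loop the same idea can be realized by passing such per-sample weights to a weighted random sampler instead of shuffling uniformly.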