Learning to See through Turbulent Water


WACV 2018

Zhengqin Li* Zak Murez* David Kriegman Ravi Ramamoorthi Manmohan Chandraker
*Equal contribution
University of California, San Diego


Imaging through dynamic refractive media, such as looking into turbulent water or through hot air, is challenging since light rays are bent by unknown amounts, leading to complex geometric distortions. Inverting these distortions and recovering high-quality images is an inherently ill-posed problem, leading previous works to require extra information such as high frame-rate video or a template image, which limits their applicability in practice. This paper proposes training a deep convolutional neural network to undistort dynamic refractive effects using only a single image. The network is able to solve this ill-posed problem by learning both image priors and distortion priors. Our network consists of two parts: a warping net to remove geometric distortion and a color predictor net to further refine the restoration. An adversarial loss is used to achieve better visual quality and to help the network hallucinate missing and blurred information. To train our network, we collect a large training set of images distorted by a turbulent water surface. Unlike prior works on water undistortion, our method is trained end-to-end, requires only a single image, and does not use a ground-truth template at test time. Experiments show that by exploiting the structure of the problem, our network outperforms state-of-the-art deep image-to-image translation methods.
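The warping net predicts a per-pixel displacement field that is applied to the distorted input before the color predictor refines the result. The paper does not publish its resampling code here, but the core operation is a standard backward warp with bilinear interpolation; the sketch below (plain NumPy, function names and array conventions are our own assumptions, not the authors' implementation) illustrates how a predicted flow field would be applied to an image:

```python
import numpy as np

def bilinear_warp(image, flow):
    """Backward-warp `image` by a per-pixel displacement field.

    image: (H, W, C) float array, the distorted observation.
    flow:  (H, W, 2) array of (dy, dx) displacements, standing in for
           the output of a warping net (hypothetical interface).
    Returns the resampled (H, W, C) image.
    """
    H, W, _ = image.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Source coordinates = target grid + predicted displacement,
    # clamped to the image bounds.
    sy = np.clip(ys + flow[..., 0], 0, H - 1)
    sx = np.clip(xs + flow[..., 1], 0, W - 1)
    # Integer corners surrounding each source coordinate.
    y0 = np.floor(sy).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    x0 = np.floor(sx).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    # Fractional parts give the bilinear weights.
    wy = (sy - y0)[..., None]
    wx = (sx - x0)[..., None]
    return (image[y0, x0] * (1 - wy) * (1 - wx)
          + image[y0, x1] * (1 - wy) * wx
          + image[y1, x0] * wy * (1 - wx)
          + image[y1, x1] * wy * wx)
```

With a zero flow field the output equals the input; a constant flow shifts the image, which is the degenerate case of the spatially varying distortions induced by a wavy water surface.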



Paper (7.0 MB)

Training set (8.4 GB)

Validation set (131 MB)

Test set (251 MB)

Source code

Pretrained Models (427 MB)


@inproceedings{li2018learning,
    title={Learning to See Through Turbulent Water},
    author={Li, Zhengqin and Murez, Zak and Kriegman, David and Ramamoorthi, Ravi and Chandraker, Manmohan},
    booktitle={Applications of Computer Vision (WACV), 2018 IEEE Winter Conference on},
    year={2018}
}