Abstract
Underwater images often suffer from blurriness, low contrast, and low saturation due to the physics of light propagation, absorption, and scattering in seawater. To improve the visual quality of underwater images, many image processing methods based on different approaches have been proposed. We adopt a generative adversarial network (GAN)-based solution that produces high-quality counterparts of given raw underwater images by training the network to learn the differences between high-quality and raw underwater images. In our proposed method, called dilated GAN (DGAN), we introduce an additional loss term based on structural similarity. Moreover, the discriminator not only judges the realness of the entire image but also classifies each constituent pixel. Finally, using two different datasets, we compare the proposed model with other enhancement methods. Through several comparisons, we demonstrate via full-reference and no-reference metrics that the proposed approach simultaneously improves clarity, corrects color, and restores the visual quality of images acquired in typical underwater scenarios.
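To make the structural-similarity loss term concrete, the sketch below shows one hypothetical way a generator objective could combine an adversarial term from a pixel-level (patch-style) discriminator with a 1 − SSIM term, as the abstract describes. The function names, the uniform-window SSIM approximation, and the weight lambda_ssim are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: adversarial loss + SSIM-based loss for a GAN generator.
# All names (ssim, generator_loss, lambda_ssim) are placeholders, not DGAN's code.
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01**2, c2=0.03**2, window=11):
    # Simplified SSIM using a uniform averaging window (a Gaussian window is common).
    mu_x = F.avg_pool2d(x, window, 1, window // 2)
    mu_y = F.avg_pool2d(y, window, 1, window // 2)
    sigma_x = F.avg_pool2d(x * x, window, 1, window // 2) - mu_x ** 2
    sigma_y = F.avg_pool2d(y * y, window, 1, window // 2) - mu_y ** 2
    sigma_xy = F.avg_pool2d(x * y, window, 1, window // 2) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * sigma_xy + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (sigma_x + sigma_y + c2)
    return (num / den).mean()

def generator_loss(fake_patch_logits, enhanced, reference, lambda_ssim=1.0):
    # Adversarial term: the pixel/patch-level discriminator should score the
    # enhanced image as real at every spatial location.
    adv = F.binary_cross_entropy_with_logits(
        fake_patch_logits, torch.ones_like(fake_patch_logits))
    # Structural-similarity term: 1 - SSIM, so higher similarity lowers the loss.
    return adv + lambda_ssim * (1.0 - ssim(enhanced, reference))
```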