Abstract
To address the color shift and low contrast in underwater images caused by the wavelength- and distance-dependent attenuation and scattering of light propagating in water, we propose an underwater image enhancement method based on an attention mechanism and an adversarial autoencoder. First, pixel and channel attention mechanisms are used to extract rich, discriminative image information from multiple color spaces. Second, this information is fused with the reverse medium transmittance map of the original image by a feature fusion module, strengthening the network's response to regions of image quality degradation. Finally, the adversarial mechanism of the adversarial autoencoder guides the encoder so that the latent space of the autoencoder progressively approaches that of the pre-trained model. Experiments on images acquired at the HYSY-163 platform in the Beihai Bay area of China show that, compared with the unprocessed real underwater images, the average Natural Image Quality Evaluator score is reduced by 27.8%, the average Underwater Color Image Quality Evaluation score is improved by 28.8%, and the average Structural Similarity and Peak Signal-to-Noise Ratio values are improved by 35.7% and 42.8%, respectively; the enhanced underwater images are clearer and have more realistic colors. In summary, our network effectively improves the visibility of underwater targets, especially in images of submarine pipelines and marine organisms, and is expected to be deployed on underwater robots for cleaning sea life from the pile legs of offshore wellhead platforms and the bottoms of large ships. A minimal sketch of the attention-and-fusion step is given below.
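To make the attention-and-fusion step concrete, the following is a minimal PyTorch sketch of channel and pixel attention combined with a fusion block that incorporates the reverse medium transmittance map. The module names, channel counts, and fusion scheme are illustrative assumptions, not the exact architecture described above.

```python
# Sketch only: channel/pixel attention plus fusion with a reverse medium
# transmittance map. Shapes and layer sizes are assumptions for illustration.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))


class PixelAttention(nn.Module):
    """Per-pixel (spatial) attention map."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.conv(x)


class FusionBlock(nn.Module):
    """Fuse attended features with the reverse medium transmittance map."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.pa = PixelAttention(channels)
        # The 1-channel transmittance map is concatenated and projected back.
        self.fuse = nn.Conv2d(channels + 1, channels, 3, padding=1)

    def forward(self, feats, reverse_transmittance):
        attended = self.pa(self.ca(feats))
        return self.fuse(torch.cat([attended, reverse_transmittance], dim=1))


if __name__ == "__main__":
    # RGB, HSV, and Lab renderings stacked as a 9-channel input (assumption).
    x = torch.randn(1, 9, 256, 256)
    rmt = torch.rand(1, 1, 256, 256)          # reverse medium transmittance map
    feats = nn.Conv2d(9, 64, 3, padding=1)(x)  # toy feature extractor
    out = FusionBlock(64)(feats, rmt)
    print(out.shape)                           # torch.Size([1, 64, 256, 256])
```

In the full method, the fused features would feed the encoder of the adversarial autoencoder, whose latent space is pushed toward that of the pre-trained model by the adversarial loss; that training loop is omitted here for brevity.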