Abstract
According to the World Health Organization's 2010 global data on visual impairment, an estimated 285 million people were visually impaired, 39 million of whom were blind. These individuals rely on non-contact methods such as voice commands and hand gestures to interact with user interfaces. Recognizing the significance of hand gesture recognition for this vulnerable population, and aiming to improve usability, this study proposes HaCk, which couples a Generative Adversarial Network (GAN) with Convolutional Neural Network (CNN) techniques to generate a diverse set of hand gestures. Hand gesture recognition with HaCk follows a two-step approach: first, the GAN is trained to generate synthetic hand gesture images; then, a separate CNN is employed to classify gestures in real-world data. HaCk is evaluated through a comparative analysis using Leave-One-Out Cross-Validation (LOO CV) and Holdout Cross-Validation (Holdout CV) tests, which are crucial for assessing the model's generalization, robustness, and suitability for practical applications. The experimental results show that HaCk outperforms the compared ML/DL models, namely CNN, FTCNN, CDCGAN, GestureGAN, GGAN, MHG-CAN, and ASL, in the LOO CV Test, with improvements of 17.03%, 20.27%, 15.76%, 13.76%, 10.16%, 5.90%, and 15.90%, respectively. Similarly, in the Holdout CV Test, HaCk outperforms the HU, ZM, GB, GB-ZM, GB-HU, CDCGAN, GestureGAN, GGAN, MHG-CAN, and ASL models, with improvements of 56.87%, 15.91%, 13.97%, 24.81%, 23.52%, 17.72%, 15.72%, 12.12%, 7.94%, and 17.94%, respectively.
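For illustration only, the following Python (PyTorch) sketch outlines a pipeline of the kind summarized above: a GAN is first trained on gesture images, and its synthetic samples then augment the training data of a CNN classifier. All module names, network sizes, hyperparameters, and the random stand-in data are assumptions made for this sketch and do not reflect the authors' implementation of HaCk.

# Hypothetical two-step sketch: (1) train a GAN to synthesize gesture images,
# (2) train a CNN classifier on real + synthetic data. Illustrative only.
import torch
import torch.nn as nn

LATENT, IMG, N_CLASSES = 64, 32, 10  # assumed latent size, image size, class count

class Generator(nn.Module):
    """Maps a latent vector to a 1 x IMG x IMG synthetic gesture image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 256), nn.ReLU(),
            nn.Linear(256, IMG * IMG), nn.Tanh())
    def forward(self, z):
        return self.net(z).view(-1, 1, IMG, IMG)

class Discriminator(nn.Module):
    """Scores whether an image looks like a real gesture sample (logit output)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(IMG * IMG, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)

class GestureCNN(nn.Module):
    """Small CNN classifier trained on real + GAN-generated gesture images."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * (IMG // 4) ** 2, N_CLASSES))
    def forward(self, x):
        return self.net(x)

def train_pipeline(real_imgs, labels, epochs=5):
    G, D, C = Generator(), Discriminator(), GestureCNN()
    bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    opt_c = torch.optim.Adam(C.parameters(), lr=1e-3)

    # Step 1: adversarial training of the generator on real gesture images.
    n = real_imgs.size(0)
    for _ in range(epochs):
        fake = G(torch.randn(n, LATENT))
        # Discriminator update: push real toward 1 and fake toward 0.
        d_loss = bce(D(real_imgs), torch.ones(n, 1)) + \
                 bce(D(fake.detach()), torch.zeros(n, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator update: try to make fakes look real to the discriminator.
        g_loss = bce(D(fake), torch.ones(n, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # Step 2: train the CNN classifier on real plus synthetic images; here the
    # synthetic images simply reuse the real labels as a stand-in for a proper
    # labelling or class-conditioning scheme.
    synth = G(torch.randn(n, LATENT)).detach()
    x, y = torch.cat([real_imgs, synth]), torch.cat([labels, labels])
    for _ in range(epochs):
        c_loss = ce(C(x), y)
        opt_c.zero_grad(); c_loss.backward(); opt_c.step()
    return C

if __name__ == "__main__":
    # Random stand-in data; a real run would load labelled gesture images.
    imgs = torch.rand(64, 1, IMG, IMG) * 2 - 1
    labels = torch.randint(0, N_CLASSES, (64,))
    classifier = train_pipeline(imgs, labels)
    print(classifier(imgs[:4]).argmax(dim=1))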