Abstract
In today's society, where people spend more than 90% of their time indoors, indoor air quality (IAQ) is crucial for human health. However, because everyday indoor activities such as cooking generate diverse pollutants, IAQ has emerged as a serious concern. Previous studies have employed methods such as CO2 sensors, smart floor systems, and video-based pattern recognition to distinguish occupants' activities; however, each of these methods has limitations. This study investigates the classification of occupants' cooking activities using sound recognition technology. Four deep learning-based sound recognition models capable of recognizing and classifying sounds generated during cooking were developed and analyzed. Experiments were conducted using sound data collected from real kitchen environments and from online data-sharing websites, and changes in performance with the amount of collected data were also examined. Among the developed models, the convolutional neural network proved the most effective: it was relatively insensitive to fluctuations in the amount of sound data and consistently delivered excellent performance, whereas the other models tended to degrade as the amount of sound data decreased. Consequently, the results of this study offer insights into the classification of cooking activities from sound data and underscore the research potential of sound-based models for recognizing occupant behavior.
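The abstract describes a convolutional neural network for cooking-sound classification without architectural detail. The sketch below is a minimal, hypothetical illustration of such a classifier operating on log-mel spectrogram inputs; the four-class output, class labels, and input dimensions are assumptions for illustration only and are not taken from the study.

```python
import torch
import torch.nn as nn

class CookingSoundCNN(nn.Module):
    """Small 2-D CNN over log-mel spectrograms (illustrative sketch, not the paper's architecture)."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling keeps the model input-size agnostic
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mels, time_frames), e.g. log-mel spectrogram patches
        z = self.features(x).flatten(1)
        return self.classifier(z)

if __name__ == "__main__":
    model = CookingSoundCNN(n_classes=4)   # hypothetical classes, e.g. boiling, frying, chopping, background
    dummy = torch.randn(8, 1, 64, 128)     # batch of 64-mel, 128-frame spectrograms
    print(model(dummy).shape)              # torch.Size([8, 4])
```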