Suryakant Tyagi and Sándor Szénási
Machine learning and speech emotion recognition are rapidly evolving fields, significantly impacting human-centered computing. Machine learning enables computers to learn from data and make predictions, while speech emotion recognition allows computers t...
Yuan Luo, Changbo Wu and Caiyun Lv
The proposed method can improve emotion recognition accuracy in human-computer interactions.
Willams Costa, Estefanía Talavera, Renato Oliveira, Lucas Figueiredo, João Marcelo Teixeira, João Paulo Lima and Veronica Teichrieb
Emotion recognition is the task of identifying and understanding human emotions from data. In the field of computer vision, there is a growing interest due to the wide range of possible applications in smart cities, health, marketing, and surveillance, a...
Zhipeng Zhang and Liyi Zhang
Electroencephalography (EEG)-based emotion recognition technologies can effectively help robots to perceive human behavior, which have attracted extensive attention in human-machine interaction (HMI). Due to the complexity of EEG data, current researcher...
Omar Adel, Karma M. Fathalla and Ahmed Abo ElFarag
Emotion recognition is crucial in artificial intelligence, particularly in the domain of human-computer interaction. The ability to accurately discern and interpret emotions plays a critical role in helping machines to effectively decipher users' underly...
Matthieu Saumard
Speech Emotion Recognition (SER) has gained significant attention in the fields of human-computer interaction and speech processing. In this article, we present a novel approach to improve SER performance by interpreting the Mel Frequency Cepstral Coeff...
Aryan Yousefi and Kalpdrum Passi
Image captioning is the multi-modal task of automatically describing a digital image based on its contents and their semantic relationship. This research area has gained increasing popularity over the past few years; however, most of the previous studies...
Lihong Zhang, Chaolong Liu and Nan Jia
Multimodal emotion classification (MEC) has been extensively studied in human-computer interaction, healthcare, and other domains. Previous MEC research has utilized identical multimodal annotations (IMAs) to train unimodal models, hindering the learning...
Kuntao Hu, Ziqi Xu, Xiufang Wang, Yingyu Wang, Haoran Li and Yibing Zhang
The color of urban streets plays a crucial role in shaping a city's image, enhancing street appeal, and optimizing the experience of citizens. Nevertheless, the relationship between street color environment and residents' perceptions has rarely been deep...
Zhichao Peng, Wenhua He, Yongwei Li, Yegang Du and Jianwu Dang
Speech emotion recognition is a critical component for achieving natural human-robot interaction. The modulation-filtered cochleagram is a feature based on auditory modulation perception, which contains multi-dimensional spectral-temporal modulation repr...