Abstract
Enhancing information representation in electromyography (EMG) signals is pivotal for interpreting human movement intentions. Traditional methods often concentrate on specific aspects of EMG signals, such as the time or frequency domain, while overlooking the spatial features and latent motion information that exist across EMG channels. In response, we introduce an approach that integrates multiple feature domains, including time, frequency, and spatial characteristics. By considering the spatial distribution of surface electromyographic electrodes, our method deciphers human movement intentions from a multidimensional perspective, yielding significantly improved gesture recognition accuracy. Our approach employs a divide-and-conquer strategy to reveal connections between different muscle regions and specific gestures. First, we establish a microscopic viewpoint by extracting time-domain and frequency-domain features from individual EMG channels. We then introduce a macroscopic perspective, constructing an inter-channel covariance matrix of the EMG signals to uncover latent spatial features and motion information. This fusion of features from multiple dimensions allows our approach to provide comprehensive insights into movement intentions. Furthermore, we introduce the space-to-space (SPS) framework to extend the myoelectric signal channel space, exposing latent spatial information within and between channels. To validate our method, we conduct extensive experiments on the Ninapro DB4, Ninapro DB5, BioPatRec DB1, BioPatRec DB2, BioPatRec DB3, and Mendeley Data datasets, systematically exploring different combinations of feature extraction techniques. After combining multi-feature fusion with spatial features, the recognition accuracy of the ANN classifier on the six datasets improved by 2.53%, 2.15%, 1.15%, 1.77%, 1.24%, and 4.73%, respectively, compared with fusing time- and frequency-domain features alone. These results confirm the substantial benefit of our fusion approach and underscore the central role of spatial information in feature extraction. This study offers a new approach to surface electromyography-based gesture recognition through the fusion of multi-view features.
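To make the fusion concrete, the following is a minimal Python sketch of the three feature views described above: per-channel time-domain and frequency-domain descriptors (the microscopic view) and the vectorized inter-channel covariance matrix (the macroscopic spatial view). The specific descriptors (MAV, RMS, waveform length, mean/median frequency), the function names, and the 2 kHz sampling rate are illustrative assumptions rather than the authors' exact implementation, and the sketch does not reproduce the SPS framework.

```python
import numpy as np

def time_domain_features(window):
    """Per-channel time-domain descriptors (a common choice; assumed here).

    window: (n_samples, n_channels) array holding one sEMG analysis window.
    """
    mav = np.mean(np.abs(window), axis=0)                 # mean absolute value
    rms = np.sqrt(np.mean(window ** 2, axis=0))           # root mean square
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)  # waveform length
    return np.concatenate([mav, rms, wl])

def frequency_domain_features(window, fs=2000.0):
    """Per-channel mean and median power frequency from the periodogram."""
    spectrum = np.abs(np.fft.rfft(window, axis=0)) ** 2
    freqs = np.fft.rfftfreq(window.shape[0], d=1.0 / fs)
    power = spectrum.sum(axis=0)
    mnf = (freqs[:, None] * spectrum).sum(axis=0) / power  # mean frequency
    cdf = np.cumsum(spectrum, axis=0) / power
    mdf = np.array([freqs[np.searchsorted(cdf[:, c], 0.5)]
                    for c in range(window.shape[1])])      # median frequency
    return np.concatenate([mnf, mdf])

def spatial_features(window):
    """Inter-channel covariance matrix as the macroscopic spatial view.

    The covariance matrix is symmetric, so only the upper triangle
    (including the diagonal) is vectorized: each channel pair contributes
    one spatial descriptor.
    """
    cov = np.cov(window.T)                  # (n_channels, n_channels)
    iu = np.triu_indices(cov.shape[0])
    return cov[iu]

def fused_feature_vector(window, fs=2000.0):
    """Concatenate the time, frequency, and spatial views into one vector,
    which would then feed a classifier such as an ANN."""
    return np.concatenate([
        time_domain_features(window),
        frequency_domain_features(window, fs),
        spatial_features(window),
    ])

# Example: a 200 ms window of 12-channel sEMG sampled at 2 kHz (assumed).
rng = np.random.default_rng(0)
window = rng.standard_normal((400, 12))
print(fused_feature_vector(window).shape)  # 12*3 + 12*2 + 78 = (138,)
```

The resulting vector simply appends the 78 upper-triangular covariance entries to the 60 per-channel time/frequency descriptors, which is one straightforward way to realize the multi-view fusion the abstract describes.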