Abstract:
In recent years, deep neural networks have been applied to EEG emotion recognition and have demonstrated superior performance compared to traditional algorithms. However, convolutional neural networks have difficulty recognizing spatial relationships between objects and identifying features after object rotation, lose valuable information through pooling operations, and cannot describe the inherent connections among different EEG signal channels. To address these shortcomings, a multi-channel EEG emotion recognition model based on the Attention Routing Capsule Network (AR-CapsNet) is proposed, which introduces attention routing and capsule activation into the EEG emotion recognition model. Compared with traditional capsule network EEG emotion models, AR-CapsNet preserves spatial information while performing forward propagation quickly. Experiments on the DEAP dataset compare AR-CapsNet with machine learning models and other deep learning models (e.g., the dynamic graph convolutional neural network, the 4D convolutional recurrent neural network, and traditional capsule networks) in terms of emotion recognition accuracy. A comparison was also conducted with multi-channel EEG-based emotion recognition via a multi-level features guided capsule network in terms of parameter count and training time. The experimental results indicate that: (1) AR-CapsNet achieves higher recognition accuracy than the other models, with average accuracies of 99.46%, 98.45%, and 99.54% for valence, arousal, and dominance, respectively. (2) Compared with the currently best-performing capsule network model for EEG-based emotion recognition, namely the multi-level features guided capsule network, AR-CapsNet uses fewer total parameters, thereby reducing the complexity of EEG signal emotion recognition.