ZHANG Han, ZHENG Xiaomin, DIAO Xiaolei, CAI Dongli. Attention Routing-Based Capsule Networks for Emotion Recognition on Multi-Channel EEG[J]. Journal of South China Normal University (Natural Science Edition), 2023, 55(5): 103-110. doi: 10.6054/j.jscnun.2023069


Attention Routing-Based Capsule Networks for Emotion Recognition on Multi-Channel EEG

Abstract: In recent years, deep neural networks have been applied to EEG emotion recognition and have demonstrated superior performance compared with traditional algorithms. However, convolutional neural networks have several drawbacks: they are weak at recognizing spatial relationships between objects and at identifying features of rotated objects, their pooling operations discard a large amount of valuable information, and they cannot describe the inherent connections among different EEG channels. To address these shortcomings, a multi-channel EEG emotion recognition model based on an attention-routing capsule network (AR-CapsNet) is proposed, which introduces attention routing and capsule activation into the EEG emotion recognition model. Compared with traditional capsule-network EEG emotion models, the AR-CapsNet model preserves spatial information while performing forward propagation quickly. Finally, on the DEAP dataset, the AR-CapsNet model was compared with machine learning models and other deep learning models (the dynamic graph convolutional neural network, the 4D convolutional recurrent neural network, the traditional capsule network, etc.) in terms of emotion recognition accuracy, and with the multi-level features guided capsule network in terms of parameter count and training time. The experimental results indicate that: (1) the AR-CapsNet model achieves higher recognition accuracy than the other models, with average recognition accuracies of 99.46%, 98.45% and 99.54% for valence, arousal and dominance, respectively; (2) compared with the currently best-performing capsule-network model for EEG emotion recognition, namely the multi-level feature-guided capsule network, the AR-CapsNet model uses fewer total parameters, thereby reducing the complexity of EEG emotion recognition.
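
To make the attention-routing idea concrete, the sketch below shows a minimal, hypothetical PyTorch capsule layer in which the coupling coefficients are produced by a single attention pass rather than iterative dynamic routing. The layer sizes (32 primary capsules of dimension 8, 3 output capsules of dimension 16), the class name AttentionRoutingCaps and every helper here are illustrative assumptions for exposition only, not the authors' released implementation.

    # Illustrative sketch of attention-style routing between capsules
    # (hypothetical code; not the AR-CapsNet reference implementation).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def squash(s, dim=-1, eps=1e-8):
        # Capsule non-linearity: keeps the vector direction, bounds its length to [0, 1).
        norm2 = (s ** 2).sum(dim=dim, keepdim=True)
        return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

    class AttentionRoutingCaps(nn.Module):
        """Capsule layer whose coupling coefficients come from one attention pass
        instead of an iterative dynamic-routing loop."""
        def __init__(self, in_caps, in_dim, out_caps, out_dim):
            super().__init__()
            # One transformation matrix per (input capsule, output capsule) pair.
            self.W = nn.Parameter(0.01 * torch.randn(in_caps, out_caps, in_dim, out_dim))
            self.out_dim = out_dim

        def forward(self, u):                                   # u: (batch, in_caps, in_dim)
            # Prediction vectors: u_hat[b, i, j, :] = u[b, i, :] @ W[i, j]
            u_hat = torch.einsum('bid,ijde->bije', u, self.W)
            # Attention scores: agreement of each prediction with the mean prediction,
            # normalised over output capsules in a single forward pass.
            mean_pred = u_hat.mean(dim=1, keepdim=True)         # (batch, 1, out_caps, out_dim)
            scores = (u_hat * mean_pred).sum(-1) / self.out_dim ** 0.5
            c = F.softmax(scores, dim=2).unsqueeze(-1)          # coupling coefficients
            return squash((c * u_hat).sum(dim=1))               # (batch, out_caps, out_dim)

    # Example: 32 primary capsules (e.g. one per EEG channel) routed to 3 output capsules.
    caps = AttentionRoutingCaps(in_caps=32, in_dim=8, out_caps=3, out_dim=16)
    v = caps(torch.randn(4, 32, 8))
    print(v.shape)  # torch.Size([4, 3, 16])

Because the coupling coefficients are computed in one pass, the layer avoids the repeated routing iterations of the traditional capsule network, which is the property the abstract refers to as fast forward propagation.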

     
