Abstract:
To address the generalization challenges faced by electroencephalogram (EEG)-based emotion recognition models in cross-subject scenarios, we propose a domain-generalization-oriented multi-mask collaborative learning framework: Multi-task Cross-subject Emotion Recognition based on Multi-masked EEG (M2CER). Traditional methods often struggle to learn stable feature representations across individuals due to significant inter-subject variations in EEG signal amplitude and temporal patterns. This study designs a multi-mask, multi-task learning model by constructing a collaborative learning mechanism centered on mask reconstruction, integrating mask reconstruction, masked contrastive learning, and domain adversarial training into a unified framework: reconstruction recovers the masked information, contrastive learning enhances feature discriminability, and domain adversarial training explicitly reduces feature-distribution discrepancies among subjects. Additionally, a cross-domain aggregation mechanism is integrated into the reconstruction process: in the latent space, the current subject's multi-mask sequences are aggregated with samples from other subjects using similarity-based weights, encouraging the model to focus on essential features that are common and invariant across subjects. Experiments on public datasets demonstrate that the proposed method improves recognition performance and robustness on unseen subjects, offering a novel approach for cross-subject emotion recognition that requires no target-domain data and exhibits strong generalization capability.
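The similarity-weighted cross-domain aggregation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the choice of cosine similarity, and the softmax temperature are assumptions made for the sketch.

```python
import numpy as np

def cross_domain_aggregate(z_query, z_bank, temperature=0.1):
    """Similarity-weighted aggregation in the latent space (illustrative sketch).

    z_query: (d,) latent vector of the current subject's masked sequence.
    z_bank:  (n, d) latent vectors of samples from other subjects.
    Returns a (d,) aggregated latent that emphasizes cross-subject
    samples most similar to the query.
    """
    # Cosine similarity between the query and each other-subject sample
    # (similarity metric is an assumption of this sketch).
    q = z_query / np.linalg.norm(z_query)
    b = z_bank / np.linalg.norm(z_bank, axis=1, keepdims=True)
    sims = b @ q  # shape (n,)

    # Softmax over similarities yields the aggregation weights;
    # a low temperature concentrates weight on the closest samples.
    w = np.exp(sims / temperature)
    w /= w.sum()

    # Weighted sum over other subjects' latents.
    return w @ z_bank
```

Feeding this aggregated latent back into the reconstruction objective would push the encoder toward features shared across subjects, since dissimilar (subject-specific) samples receive near-zero weight.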