Cross-lingual sentiment analysis plays a crucial role in accurately interpreting emotions across diverse linguistic contexts. However, performance disparities remain a major challenge, particularly for lower-resource (medium- and low-resource) languages. This study proposes an adaptive self-alignment framework for large language models that incorporates novel data augmentation techniques and transfer learning strategies to mitigate resource imbalances. Comprehensive experiments on 11 languages demonstrate that our approach consistently surpasses state-of-the-art baselines, achieving an average F1-score improvement of 7.35 points. Notably, the method is especially effective for lower-resource languages, substantially narrowing the performance gap between lower- and high-resource settings. With robust domain adaptation capabilities and strong potential for real-world industrial applications, this research sets a new performance benchmark for multilingual sentiment analysis and advances the development of more inclusive and equitable natural language processing solutions.
Keywords: Cross-lingual sentiment analysis; Fine-tuning; Self-alignment.