Multi-modal endoscopic artificial intelligence system improving classification of colorectal subepithelial lesions: a multicenter study

Dig Liver Dis. 2025 Jun 27:S1590-8658(25)00807-2. doi: 10.1016/j.dld.2025.06.003. Online ahead of print.

Abstract

Background: Colorectal subepithelial lesions (SELs) present with similar endoscopic appearances but differ widely in histology and prognosis, making precise diagnosis crucial yet challenging.

Aim: This study aimed to develop a multi-modal endoscopic artificial intelligence (AI) system to facilitate the classification of colorectal SELs.

Methods: Patients with histologically confirmed colorectal SELs were retrospectively enrolled from 7 hospitals. Two single-modal models [Model W, based on white-light endoscopy (WLE), and Model E, based on endoscopic ultrasound (EUS)] and a multi-modal AI system, AIOSCOPE-WE, were constructed. Classification performance was evaluated both internally and externally, and endoscopists' performance with and without AIOSCOPE-WE support was compared.
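
The abstract does not detail AIOSCOPE-WE's architecture; as a purely illustrative sketch, a multi-modal system of this kind can be built by fusing features from separate WLE and EUS image encoders before a shared classification head. All names, backbones, and hyperparameters below are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a two-branch (WLE + EUS) late-fusion classifier.
# This is NOT the published AIOSCOPE-WE architecture, only an illustration
# of multi-modal fusion over the four SEL categories reported in the study.
import torch
import torch.nn as nn
from torchvision import models

class MultiModalSELClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Separate image encoders for white-light endoscopy (WLE)
        # and endoscopic ultrasound (EUS) images (backbones assumed).
        self.wle_encoder = models.resnet50(weights=None)
        self.eus_encoder = models.resnet50(weights=None)
        feat_dim = self.wle_encoder.fc.in_features
        self.wle_encoder.fc = nn.Identity()
        self.eus_encoder.fc = nn.Identity()
        # Late fusion: concatenate the two feature vectors and classify
        # into NET, lipoma, leiomyoma, or inflammatory hyperplasia.
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim * 2, 512),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(512, num_classes),
        )

    def forward(self, wle_img: torch.Tensor, eus_img: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.wle_encoder(wle_img), self.eus_encoder(eus_img)], dim=1)
        return self.classifier(fused)

# Example forward pass with dummy image batches.
model = MultiModalSELClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```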

Results: 510 patients (364 neuroendocrine tumors (NETs), 72 lipomas, 23 leiomyomas and 51 inflammatory hyperplastic lesions) with 5118 WLE images and 4950 EUS images were included. In classifying colorectal SELs, AIOSCOPE-WE achieved higher accuracy than EUS experts and non-experts (86.0 % vs 75.8 % and 65.4 %, P < 0.01). With AI support, endoscopists' accuracy improved from 70.6 % to 83.1 % (P < 0.001). AIOSCOPE-WE also significantly outperformed Model W and Model E in accuracy in both the internal (92.1 % vs 71.9 % and 78.6 %, P < 0.001) and external tests (84.9 % vs 69.4 % and 63.9 %, P < 0.001), with consistent performance across lesion sizes and NET grades.

Conclusion: AIOSCOPE-WE holds promise as a potent tool to support endoscopists in classifying colorectal SELs and optimizing clinical decision-making.

Keywords: Artificial intelligence; Endoscopy; Multi-modal image; Subepithelial lesion.