From Image to Sequence: Exploring Vision Transformers for Optical Coherence Tomography Classification

J Med Signals Sens. 2025 Jun 9;15:18. doi: 10.4103/jmss.jmss_58_24. eCollection 2025.

Abstract

Background: Optical coherence tomography (OCT) is a pivotal imaging technique for the early detection and management of critical retinal diseases, notably diabetic macular edema and age-related macular degeneration. These conditions are significant global health concerns, affecting millions and leading to vision loss if not diagnosed promptly. Current methods for OCT image classification encounter specific challenges, such as the inherent complexity of retinal structures and considerable variability across different OCT datasets.

Methods: This paper introduces a novel hybrid model that integrates the strengths of convolutional neural networks (CNNs) and vision transformers (ViTs) to overcome these obstacles. The synergy between CNNs, which excel at extracting detailed localized features, and ViTs, which are adept at recognizing long-range patterns, enables a more effective and comprehensive analysis of OCT images.
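To make the design concrete, below is a minimal PyTorch sketch of one way such a CNN-ViT hybrid can be wired together: a convolutional stem extracts local feature maps, which are flattened from an image grid into a token sequence and passed through a transformer encoder for classification. The abstract does not specify the actual architecture, so every name and dimension here (HybridCNNViT, embed_dim, depth, and so on) is an illustrative assumption, not the authors' implementation.

    import torch
    import torch.nn as nn

    class HybridCNNViT(nn.Module):
        """Illustrative CNN + ViT hybrid: a convolutional stem extracts
        localized features; a transformer encoder models long-range
        dependencies between the resulting tokens."""

        def __init__(self, num_classes=4, embed_dim=192, depth=4, num_heads=4):
            super().__init__()
            # CNN stem: downsamples a 224x224 grayscale OCT scan to a
            # 14x14 grid of local feature vectors (the patch tokens).
            self.stem = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1),            # -> 112x112
                nn.BatchNorm2d(32), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, stride=2, padding=1),           # -> 56x56
                nn.BatchNorm2d(64), nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, 3, stride=2, padding=1),          # -> 28x28
                nn.BatchNorm2d(128), nn.ReLU(inplace=True),
                nn.Conv2d(128, embed_dim, 3, stride=2, padding=1),   # -> 14x14
            )
            num_tokens = 14 * 14
            # Learnable [CLS] token and positional embeddings, as in ViT.
            self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
            self.pos_embed = nn.Parameter(torch.zeros(1, num_tokens + 1, embed_dim))
            layer = nn.TransformerEncoderLayer(
                d_model=embed_dim, nhead=num_heads,
                dim_feedforward=embed_dim * 4,
                batch_first=True, norm_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
            self.head = nn.Linear(embed_dim, num_classes)

        def forward(self, x):
            feats = self.stem(x)                       # (B, D, 14, 14)
            tokens = feats.flatten(2).transpose(1, 2)  # (B, 196, D): image -> sequence
            cls = self.cls_token.expand(x.size(0), -1, -1)
            tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
            tokens = self.encoder(tokens)              # multi-headed self-attention
            return self.head(tokens[:, 0])             # classify from the [CLS] token

    model = HybridCNNViT()                             # 4 classes, e.g. CNV/DME/drusen/normal
    logits = model(torch.randn(2, 1, 224, 224))
    print(logits.shape)                                # torch.Size([2, 4])
    # Parameter count of this sketch is illustrative only and will not
    # match the paper's reported 6.9 million parameters.
    print(sum(p.numel() for p in model.parameters()))

The key step is the flatten/transpose in forward, which converts the CNN's spatial feature grid into the token sequence the transformer consumes; this is the "image to sequence" hand-off the title refers to.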

Results: While our model achieves an accuracy of 99.80% on the OCT2017 dataset, its standout feature is its parameter efficiency: it requires only 6.9 million parameters, significantly fewer than larger, more complex models such as Xception and OpticNet-71.

Conclusion: This efficiency underscores the model's suitability for clinical settings, where computational resources may be limited but high accuracy and rapid diagnosis are imperative.

Code Availability: The code for this study is available at https://github.com/Amir1831/ViT4OCT.

Keywords: Computer vision; convolutional neural network; deep learning; multi-headed self-attention; optical coherence tomography; vision transformers.