A self-adaptive deep learning method for automated eye laterality detection based on color fundus photography

PLoS One. 2019 Sep 19;14(9):e0222025. doi: 10.1371/journal.pone.0222025. eCollection 2019.

Abstract

Purpose: To develop a self-adaptive deep learning (DL) method that automatically detects eye laterality from fundus images.

Methods: A total of 18,394 fundus images with real-world eye laterality labels were used for model development and internal validation. A separate dataset of 2,000 fundus images with manually labeled eye laterality was used for external validation. A DL model was developed based on a fine-tuned Inception-V3 network with a self-adaptive strategy. The area under the receiver operating characteristic curve (AUC), together with sensitivity, specificity, and the confusion matrix, was used to assess model performance. Class activation maps (CAMs) were used for model visualization.
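The abstract does not include implementation details; the snippet below is a minimal sketch of how a fine-tuned Inception-V3 binary laterality classifier could be set up in Keras. The framework choice, dropout rate, frozen-layer cutoff, and optimizer settings are illustrative assumptions, and the paper's self-adaptive strategy is not reproduced here.

```python
# Minimal sketch (assumptions, not the authors' pipeline): fine-tuning an
# ImageNet-pretrained Inception-V3 for binary left/right eye classification.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

# Convolutional base pretrained on ImageNet; Inception-V3 expects
# 299x299 RGB inputs. Global average pooling yields a 2048-d vector.
base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3), pooling="avg")

# New classification head: a single sigmoid unit, read as P(left eye).
x = layers.Dropout(0.5)(base.output)
out = layers.Dense(1, activation="sigmoid", name="laterality")(x)
model = models.Model(base.input, out)

# Freeze the early layers and fine-tune only the deeper blocks
# (the cutoff of 200 layers is an arbitrary illustrative choice).
for layer in base.layers[:200]:
    layer.trainable = False

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
```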

Results: In the external validation set (N = 2,000, 50% labeled as left eye), the AUC of the DL model for overall eye laterality detection was 0.995 (95% CI, 0.993-0.997), with an accuracy of 99.13%. For left eye detection specifically, the sensitivity was 99.00% (95% CI, 98.11%-99.49%) and the specificity was 99.10% (95% CI, 98.23%-99.56%). Nineteen images were classified in disagreement with the human labels: 12 were due to incorrect human labeling and 7 to poor image quality. The CAMs showed that the region of interest for eye laterality detection was mainly the optic disc and surrounding area.
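As a hedged illustration of how the reported metrics relate to a confusion matrix, the snippet below computes AUC, sensitivity, specificity, and accuracy with scikit-learn; the labels, scores, and 0.5 decision threshold are synthetic assumptions, not the study's data.

```python
# Illustration only: how AUC, sensitivity, specificity, and accuracy
# follow from predicted scores. Labels and scores here are synthetic.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=2000)                    # 1 = left eye
y_prob = np.clip(0.9 * y_true + rng.normal(0.05, 0.2, 2000), 0, 1)

auc = roc_auc_score(y_true, y_prob)
y_pred = (y_prob >= 0.5).astype(int)                      # assumed threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                              # left-eye recall
specificity = tn / (tn + fp)                              # right-eye recall
accuracy = (tp + tn) / y_true.size
print(f"AUC={auc:.3f} sens={sensitivity:.2%} "
      f"spec={specificity:.2%} acc={accuracy:.2%}")
```

Similarly, because Inception-V3 ends in global average pooling, a class activation map can be formed by weighting the last convolutional feature maps ("mixed10" in Keras) by the final Dense layer's weights. The sketch below assumes the hypothetical `model` from the earlier snippet.

```python
# Sketch of a class activation map (CAM) for the hypothetical `model`
# above: weight the last conv feature maps by the sigmoid unit's weights.
import numpy as np
import tensorflow as tf

def class_activation_map(model, image):
    """CAM heatmap for one preprocessed 299x299x3 image, values in [0, 1]."""
    conv = model.get_layer("mixed10")                     # last conv block
    feats = tf.keras.Model(model.input, conv.output)
    fmaps = feats(image[np.newaxis])[0].numpy()           # (8, 8, 2048)
    w = model.get_layer("laterality").get_weights()[0][:, 0]   # (2048,)
    cam = np.maximum(fmaps @ w, 0)                        # weighted sum, ReLU
    return cam / (cam.max() + 1e-8)                       # normalize
```

Upsampled to the input resolution and overlaid on the fundus photograph, such a heatmap highlights the regions driving the prediction, consistent with the optic disc focus reported above.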

Conclusion: We proposed a self-adaptive DL method with high performance in detecting eye laterality from fundus images. Our findings were based on real-world labels and thus have practical significance in clinical settings.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Aged
  • Algorithms
  • Area Under Curve
  • Deep Learning
  • Diagnostic Techniques, Ophthalmological
  • Eye / diagnostic imaging*
  • Functional Laterality*
  • Fundus Oculi
  • Humans
  • Middle Aged
  • Ocular Physiological Phenomena
  • Photography / methods*
  • Sensitivity and Specificity

Grants and funding

This work was supported by the National Key R&D Program of China (2018YFC0116500). MH receives support from the Fundamental Research Funds of the State Key Laboratory of Ophthalmology, the Science and Technology Planning Project of Guangdong Province (2013B20400003), the University of Melbourne Research Accelerator Program, and the CERA Foundation. The Center for Eye Research Australia receives Operational Infrastructure Support from the Victorian State Government. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.