Privacy-preserving Federated Learning and Uncertainty Quantification in Medical Imaging

Radiol Artif Intell. 2025 Jul;7(4):e240637. doi: 10.1148/ryai.240637.

Abstract

Artificial intelligence (AI) has demonstrated strong potential in automating medical imaging tasks, with applications across disease diagnosis, prognosis, treatment planning, and posttreatment surveillance. However, privacy concerns surrounding patient data remain a major barrier to the widespread adoption of AI in clinical practice, because large and diverse training datasets are essential for developing accurate, robust, and generalizable AI models. Federated learning offers a privacy-preserving solution by enabling collaborative model training across institutions without sharing sensitive data; instead, only model parameters, such as model weights, are exchanged between participating sites. Despite its promise, federated learning is still in its early stages of development and faces several challenges. Notably, sensitive information can still be inferred from the shared model parameters. Additionally, postdeployment data distribution shifts can degrade model performance, making uncertainty quantification essential; in federated learning, this task is particularly challenging because of data heterogeneity across participating sites. This review provides a comprehensive overview of federated learning, privacy-preserving federated learning, and uncertainty quantification in federated learning. Key limitations in current methodologies are identified, and future research directions are proposed to enhance data privacy and trustworthiness in medical imaging applications.

Supplemental material is available for this article. © RSNA, 2025.
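The abstract describes federated learning as exchanging model parameters rather than patient data. The following is a minimal, self-contained sketch of federated averaging under assumed conditions (a toy logistic-regression model and synthetic site data); all names, values, and the equal-weight aggregation are illustrative and not drawn from the article.

    # Minimal federated averaging (FedAvg) sketch; model, data, and sites are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    def local_update(weights, features, labels, lr=0.1, epochs=5):
        """One site trains locally; its raw data never leave the site."""
        w = weights.copy()
        for _ in range(epochs):
            preds = 1.0 / (1.0 + np.exp(-features @ w))     # sigmoid predictions
            grad = features.T @ (preds - labels) / len(labels)
            w -= lr * grad                                   # gradient step on local data
        return w

    # Hypothetical private datasets held by three participating sites.
    sites = []
    for _ in range(3):
        X = rng.normal(size=(100, 5))
        y = (X[:, 0] + 0.5 * rng.normal(size=100) > 0).astype(float)
        sites.append((X, y))

    global_w = np.zeros(5)
    for communication_round in range(10):
        # Each site returns only its updated model weights, never its data.
        local_ws = [local_update(global_w, X, y) for X, y in sites]
        # The server aggregates by averaging the parameters (weighted equally here).
        global_w = np.mean(local_ws, axis=0)

    print("Aggregated global weights:", np.round(global_w, 3))

Note that the shared weight vectors in this sketch are exactly the quantities the abstract flags as a privacy risk: inference attacks on exchanged parameters are one motivation for the privacy-preserving federated learning methods the review surveys.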

Keywords: Neural Networks; Perception; Radiology-Pathology Integration; Supervised Learning.

Publication types

  • Review

MeSH terms

  • Artificial Intelligence*
  • Confidentiality*
  • Diagnostic Imaging* / methods
  • Federated Learning
  • Humans
  • Machine Learning*
  • Privacy*
  • Uncertainty