Artificial intelligence (AI) is rapidly transforming cardiovascular imaging by automating tasks such as image segmentation, feature extraction, and risk prediction, leading to significant improvements in diagnostic precision and efficiency. However, the integration of AI into clinical workflows carries critical risks that must be addressed to ensure safe and reliable patient care. This review explores the technical, clinical, and ethical challenges of AI in cardiovascular imaging, highlighting in particular the risks of model errors, data drift, and inappropriate usage. We also examine concerns about explainability, the potential deskilling of healthcare professionals, generalisability across diverse populations, and accountability in AI implementation. We present real-world examples in which these risks have been realised, along with attempted mitigations, including the adoption of explainable AI techniques, rigorous validation frameworks to ensure fairness and broad applicability, continuous performance monitoring, and transparency at every stage of model development and deployment. The successful adoption of AI in cardiovascular imaging relies on striking a balance between innovation and the need for ethical and legal safeguards. Achieving this requires collaborative efforts between clinicians, data scientists, patients, and regulators. Evaluating and addressing these challenges is essential for responsible AI implementation and for advancing patient care while maintaining high safety standards.
Keywords: Cardiac Imaging Techniques; Ethics, Medical.