Deformable MR-CT image registration using an unsupervised, dual-channel network for neurosurgical guidance

Med Image Anal. 2022 Jan;75:102292. doi: 10.1016/j.media.2021.102292. Epub 2021 Oct 29.

Abstract

Purpose: The accuracy of minimally invasive, intracranial neurosurgery can be challenged by deformation of brain tissue, e.g., up to 10 mm due to egress of cerebrospinal fluid during a neuroendoscopic approach. We report an unsupervised, deep learning-based registration framework to resolve such deformations between preoperative MR and intraoperative CT with a fast runtime suitable for neurosurgical guidance.

Method: The framework incorporates subnetworks for MR and CT image synthesis together with a dual-channel registration subnetwork (with synthesis uncertainty providing spatially varying weights on the dual-channel loss) to estimate a diffeomorphic deformation field from both the MR and CT channels. An end-to-end training strategy is proposed that jointly optimizes the synthesis and registration subnetworks. The proposed framework was investigated using three datasets: (1) paired MR/CT with simulated deformations; (2) paired MR/CT with real deformations; and (3) a neurosurgery dataset with real deformations. Two state-of-the-art methods (Symmetric Normalization and VoxelMorph) were implemented as a basis of comparison, and variations of the proposed dual-channel network were investigated, including single-channel registration, fusion without uncertainty weighting, and conventional sequential training of the synthesis and registration subnetworks.
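To make the dual-channel, uncertainty-weighted loss concrete, the following is a minimal PyTorch sketch of one plausible form of such a loss. It assumes an MSE similarity per channel, an exp(-uncertainty) weighting of each voxel, and a simple gradient-based smoothness regularizer on the deformation field; all function and tensor names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an uncertainty-weighted dual-channel registration loss.
# Names (dual_channel_loss, sigma_mr, sigma_ct, ...) and the specific similarity
# metric / weighting scheme are assumptions for illustration only.
import torch


def smoothness(flow: torch.Tensor) -> torch.Tensor:
    """Mean squared spatial gradient of a dense 3D flow of shape (B, 3, D, H, W)."""
    dz = flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :]
    dy = flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :]
    dx = flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]
    return (dz.pow(2).mean() + dy.pow(2).mean() + dx.pow(2).mean()) / 3.0


def dual_channel_loss(warped_mr, synth_mr_from_ct, warped_synth_ct, real_ct,
                      sigma_mr, sigma_ct, flow, lam=1.0):
    """Uncertainty-weighted dual-channel similarity plus deformation smoothness.

    warped_mr         : preoperative MR warped by the estimated deformation
    synth_mr_from_ct  : MR-like image synthesized from the intraoperative CT
    warped_synth_ct   : CT-like image synthesized from MR, then warped
    real_ct           : intraoperative CT (fixed image)
    sigma_mr, sigma_ct: per-voxel synthesis uncertainty maps (higher = less trusted)
    flow              : dense 3D deformation field, shape (B, 3, D, H, W)
    """
    # Spatially varying weights: down-weight voxels where synthesis is uncertain.
    w_mr = torch.exp(-sigma_mr)
    w_ct = torch.exp(-sigma_ct)

    # MR channel: warped real MR vs. MR synthesized from the fixed CT.
    loss_mr = (w_mr * (warped_mr - synth_mr_from_ct).pow(2)).mean()
    # CT channel: warped synthetic CT vs. the real fixed CT.
    loss_ct = (w_ct * (warped_synth_ct - real_ct).pow(2)).mean()

    return loss_mr + loss_ct + lam * smoothness(flow)


if __name__ == "__main__":
    B, D, H, W = 1, 32, 32, 32
    imgs = [torch.rand(B, 1, D, H, W) for _ in range(4)]
    sigmas = [torch.rand(B, 1, D, H, W) for _ in range(2)]
    flow = torch.zeros(B, 3, D, H, W, requires_grad=True)
    loss = dual_channel_loss(*imgs, *sigmas, flow)
    loss.backward()  # gradients reach the deformation field; in end-to-end training
                     # they would also reach the synthesis and registration subnetworks
    print(float(loss))
```

In end-to-end training, the warped and synthesized images and the uncertainty maps would be produced by the synthesis and registration subnetworks, so a single backward pass through a loss of this form jointly updates both, which is the design choice contrasted with sequential training in the ablations.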

Results: The proposed method achieved: (1) Dice coefficient = 0.82 ± 0.07 and TRE = 1.2 ± 0.6 mm on paired MR/CT with simulated deformations; (2) Dice coefficient = 0.83 ± 0.07 and TRE = 1.4 ± 0.7 mm on paired MR/CT with real deformations; and (3) Dice coefficient = 0.79 ± 0.13 and TRE = 1.6 ± 1.0 mm on the neurosurgery dataset with real deformations. Dual-channel registration with uncertainty weighting demonstrated superior performance (e.g., TRE = 1.2 ± 0.6 mm) compared to single-channel registration (CT channel: TRE = 1.6 ± 1.0 mm, p < 0.05; MR channel: TRE = 1.3 ± 0.7 mm) and dual-channel registration without uncertainty weighting (TRE = 1.4 ± 0.8 mm, p < 0.05). End-to-end training of the synthesis and registration subnetworks also improved performance compared to the conventional sequential training strategy (TRE = 1.3 ± 0.6 mm). Registration runtime with the proposed network was ∼3 s.

Conclusion: The deformable registration framework based on dual-channel MR/CT registration with spatially varying uncertainty weights and end-to-end training achieved geometric accuracy and runtime that were superior to state-of-the-art baseline methods and to various ablations of the proposed network. The accuracy and runtime of the method may be compatible with the requirements of high-precision neurosurgery.

Keywords: Deformable registration; Image synthesis; Inter-modality registration; Unsupervised learning.

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Humans
  • Image Processing, Computer-Assisted*
  • Neurosurgical Procedures
  • Tomography, X-Ray Computed*
  • Uncertainty