Purpose: To support research on autonomous robotic micro-drilling for cranial window creation in mice, a multimodal digital twin (DT) is developed to generate realistic synthetic images and drilling sounds. The realism of the DT is evaluated using data from an eggshell drilling scenario, demonstrating its potential for training AI models with multimodal synthetic data.
Methods: The asynchronous multi-body framework (AMBF) simulator for volumetric drilling with haptic feedback is combined with the Isaac Sim simulator for photorealistic rendering. A deep audio generator (DAG) model is presented and its realism is evaluated against real drilling sounds. A convolutional neural network (CNN) trained on synthetic images is used to assess visual realism by detecting drilling areas in real eggshell images. Finally, the accuracy of the DT is evaluated through experiments on a real eggshell.
Results: The DAG model outperformed pitch modulation methods, achieving lower Fréchet audio distance (FAD) and Fréchet inception distance (FID) scores, demonstrating a closer resemblance to real drilling sounds. The CNN trained on synthetic images achieved a mean average precision (mAP) of 70.2 when tested on real drilling images. The DT had an alignment error of 0.22 ± 0.03 mm.
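Both FAD and FID reduce to the same quantity: the Fréchet distance between two Gaussians fitted to embedding statistics of real and synthetic data (audio embeddings for FAD, image features for FID). A minimal NumPy sketch, assuming the embeddings have already been reduced to a mean vector and covariance matrix per set (the function and helper names are illustrative, not from the paper):

```python
import numpy as np

def _sqrtm_psd(a):
    # Square root of a symmetric positive semi-definite matrix
    # via eigendecomposition; clips tiny negative eigenvalues.
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)
    return (v * np.sqrt(w)) @ v.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})."""
    diff = mu1 - mu2
    s2h = _sqrtm_psd(sigma2)
    # For PSD inputs, Tr((sigma1 sigma2)^{1/2}) equals
    # Tr((s2h sigma1 s2h)^{1/2}), which is symmetric and safe to eigendecompose.
    tr_covmean = np.trace(_sqrtm_psd(s2h @ sigma1 @ s2h))
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * tr_covmean)
```

Lower values indicate that the synthetic distribution lies closer to the real one; identical statistics give a distance of zero.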
Conclusion: A multimodal DT has been developed to simulate cranial window creation on an eggshell model, and its realism has been evaluated. The results indicate a high degree of realism in both the synthetic audio and images, as well as submillimeter accuracy.
Keywords: Deep learning methods; Digital twin; Robotics and automation in life sciences; Simulation and animation.
© 2025. The Author(s).