Dynamic reconstruction of deformable tissues in endoscopic video is a key technology for robot-assisted surgery. Recent reconstruction methods based on neural radiance fields (NeRFs) have achieved remarkable results in reconstructing surgical scenes. However, because they rely on implicit representations, NeRFs struggle to capture the intricate details of objects in the scene and cannot achieve real-time rendering. In addition, restricted single-view perception and instrument occlusion pose special challenges for surgical scene reconstruction. To address these issues, we develop SurgicalGaussian, a deformable 3D Gaussian Splatting method for modeling dynamic surgical scenes. Our approach models the spatio-temporal features of soft tissues at each timestamp via a forward-mapping deformation MLP, with regularization that constrains neighboring 3D Gaussians to move consistently. With a depth initialization strategy and tool-mask-guided training, our method can remove surgical instruments and reconstruct high-fidelity surgical scenes. Through experiments on a variety of surgical videos, our network outperforms existing methods in many respects, including rendering quality, rendering speed, and GPU usage.
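To illustrate the depth initialization idea described above, here is a minimal sketch (not the project's actual code) of how a stereo depth map might be back-projected into a 3D point cloud while a binary tool mask excludes instrument pixels; the function name, pinhole intrinsics, and toy data are all assumptions for demonstration:

```python
import numpy as np

def depth_to_points(depth, mask, fx, fy, cx, cy):
    """Back-project a depth map to a 3D point cloud, skipping masked (tool) pixels.

    depth : (H, W) depth values in metres
    mask  : (H, W) bool, True where a surgical tool occludes the tissue
    fx, fy, cx, cy : pinhole camera intrinsics (focal lengths, principal point)
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = (~mask) & (depth > 0)          # keep only tissue pixels with valid depth
    z = depth[valid]
    x = (u[valid] - cx) * z / fx           # standard pinhole back-projection
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)    # (N, 3) candidate Gaussian centers

# Toy example: a flat 4x4 depth map with a 2x2 instrument region masked out
depth = np.full((4, 4), 2.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
pts = depth_to_points(depth, mask, fx=1.0, fy=1.0, cx=2.0, cy=2.0)
print(pts.shape)  # (12, 3): 16 pixels minus 4 masked tool pixels
```

In the actual pipeline, points like these would seed the initial 3D Gaussians, so that no Gaussians are placed on instrument pixels to begin with.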
Comparison with EndoNeRF, EndoSurf, LerPlane, and EndoGaussian:
EndoNeRF: Neural Rendering for Stereo 3D Reconstruction of Deformable Tissues in Robotic Surgery.
EndoSurf: Neural Surface Reconstruction of Deformable Tissues with Stereo Endoscope Videos.
LerPlane: Neural Representations for Fast 4D Reconstruction of Deformable Tissues.
EndoGaussian: Real-time Gaussian Splatting for Dynamic Endoscopic Scene Reconstruction.
If you find this work helpful, you can cite our paper as follows:
@article{xie2024surgicalgaussian,
author = {Xie, Weixing and Yao, Junfeng and Cao, Xianpeng and Lin, Qiqin and Tang, Zerui and Dong, Xiao and Guo, Xiaohu},
title = {SurgicalGaussian: Deformable 3D Gaussians for High-Fidelity Surgical Scene Reconstruction},
journal = {arXiv preprint arXiv:2407.05023},
year = {2024},
}
If you have any questions or feedback, please contact Weixing Xie (xwxxmu@gmail.com).