Few-shot novel-view synthesis based on 3D Gaussian Splatting (3DGS) has recently made remarkable progress. Existing methods usually rely on carefully designed geometric regularizers to reinforce geometric supervision; however, tuning multiple regularizers so that they behave consistently across scenes is difficult and often degrades robustness. Consequently, generating reliable geometry from extremely sparse viewpoints remains a key challenge. To address this limitation, we introduce SREGS, a framework tailored for few-shot reconstruction whose contributions focus on two aspects: explicitly consistent geometry and multi-scale depth-guided optimization. First, to explicitly enforce reconstruction consistency, we initialize the point cloud with 2D Gaussians, improving depth consistency when the same Gaussian is observed from different views. Second, we employ region-adaptive rapid densification to fill under-covered regions with additional representations, while an opacity-aware noise term injects stochasticity into each Gaussian to encourage exploration of under-observed areas. Finally, to strengthen geometric refinement of the radiance field, we impose multi-scale depth constraints derived from a monocular depth prior, refining geometry from global to local scales to ensure highly accurate reconstruction. Extensive experiments on LLFF, MipNeRF360, and Blender show that SREGS achieves higher synthesis quality at lower computational cost while remaining robust. The code is available at: https://github.com/LeeXiaoTong1/SREGS.
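To make the multi-scale depth-guided optimization concrete, the sketch below shows one plausible form of such a constraint: rendered depth is compared against a monocular depth prior at several resolutions, after a least-squares scale-and-shift alignment (monocular priors are only defined up to an affine transform). The function names, the choice of L1, and the scale set are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def align_scale_shift(pred, target):
    # Monocular depth is only up-to-affine, so fit a per-image
    # scale s and shift t that best map the prior onto rendered depth.
    A = np.stack([pred.ravel(), np.ones(pred.size)], axis=1)
    s, t = np.linalg.lstsq(A, target.ravel(), rcond=None)[0]
    return s * pred + t

def multiscale_depth_loss(rendered, prior, scales=(1, 2, 4)):
    # Global-to-local supervision: average L1 distance between rendered
    # depth and the aligned prior at progressively coarser resolutions
    # (here via simple strided subsampling; a real pipeline might
    # use average pooling instead).
    total = 0.0
    for s in scales:
        r = rendered[::s, ::s]
        p = align_scale_shift(prior[::s, ::s], r)
        total += np.abs(r - p).mean()
    return total / len(scales)
```

In this toy form, a prior that is an affine transform of the rendered depth yields (near-)zero loss, so the constraint penalizes only structural disagreement rather than absolute depth values.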
Keywords: 3DGS; Densification; Depth regularization; Few-shot novel view synthesis; Multi-view consistency.
Copyright © 2025 Elsevier Ltd. All rights reserved.