3D Gaussian Splatting has shown impressive novel view synthesis results; nonetheless, it is vulnerable to dynamic objects polluting the input data of an otherwise static scene, so-called distractors. Distractors have a severe impact on rendering quality, as they get represented as view-dependent effects or result in floating artifacts. Our goal is to identify and ignore such distractors during the 3D Gaussian optimization to obtain a clean reconstruction. To this end, we take a self-supervised approach that looks at the image residuals during optimization to determine areas that have likely been falsified by a distractor. In addition, we leverage a pretrained segmentation network to provide object awareness, enabling more accurate exclusion of distractors. This way, we obtain segmentation masks of distractors and effectively ignore them in the loss formulation. We demonstrate that our approach is robust to various distractors and significantly improves rendering quality on distractor-polluted scenes, increasing PSNR by 1.86 dB compared to 3D Gaussian Splatting.
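The core idea of residual-based masking can be illustrated with a minimal sketch. This is not the authors' implementation; the threshold `tau` and the toy NumPy setup are assumptions for illustration, and in the actual method the mask would additionally be refined with object-level segmentation (e.g. Segment Anything) before being applied to the loss.

```python
import numpy as np

def distractor_mask(rendered, target, tau=0.1):
    # Per-pixel residual between the current rendering and the input image.
    # Pixels with large residuals are assumed to be falsified by a distractor.
    # tau is a hypothetical threshold, not a value from the paper.
    residual = np.abs(rendered - target).mean(axis=-1)
    return residual > tau

def masked_l1_loss(rendered, target, mask):
    # Ignore masked (distractor) pixels in the reconstruction loss.
    keep = ~mask
    if not keep.any():
        return 0.0
    return float(np.abs(rendered - target).mean(axis=-1)[keep].mean())

# Toy example: a static scene whose input image contains a bright distractor patch.
rendered = np.full((8, 8, 3), 0.5)
target = rendered.copy()
target[2:4, 2:4] = 1.0  # distractor region
mask = distractor_mask(rendered, target)
loss = masked_l1_loss(rendered, target, mask)
```

With the distractor pixels masked out, the remaining residual is zero, so the optimization is not pulled toward explaining the distractor as geometry or view-dependent color.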
Robust 3D Gaussian Splatting computes semantic distractor masks to ignore distractors during optimization. Furthermore, it leverages Segment Anything to provide object awareness for more accurate exclusion of distractors.
@inproceedings{ungermann2024robustgaussians,
author = {Ungermann, Paul and Ettenhofer, Armin and Nie{\ss}ner, Matthias and Roessle, Barbara},
title = {Robust 3D Gaussian Splatting for Novel View Synthesis in Presence of Distractors},
booktitle = {GCPR},
year = {2024}
}