[NeurIPS 2023 Spotlight]
Project Page | Paper | Data
Feng Wang¹*, Zilong Chen¹*, Guokang Wang¹, Yafei Song², Huaping Liu¹
¹Department of Computer Science and Technology, Tsinghua University  ²Alibaba Group
We propose Masked Space-Time Hash encoding (MSTH), a novel method for efficiently reconstructing dynamic 3D scenes from multi-view or monocular videos. Based on the observation that dynamic scenes often contain large static regions, which introduce redundancy in storage and computation, MSTH represents a dynamic scene as a weighted combination of a 3D hash encoding and a 4D hash encoding. The weights of the two components are given by a learnable mask, guided by an uncertainty-based objective, that reflects the spatial and temporal importance of each 3D position. This design reduces the hash collision rate by avoiding redundant queries and updates to static regions, making it feasible to represent a large number of space-time voxels with compact hash tables. Moreover, since it does not need to fit a large number of temporally redundant features independently, our method is easier to optimize and converges rapidly, requiring only twenty minutes of training for a 300-frame dynamic scene. We evaluate our method on a wide range of dynamic scenes; MSTH consistently outperforms previous state-of-the-art methods with only 20 minutes of training time and 130 MB of memory.
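The masked combination at the heart of MSTH can be illustrated with a short PyTorch sketch. This is a minimal sketch under our own simplifications: the encoders below are single-level hash tables without interpolation, the mask is a tiny MLP, and all class and parameter names are illustrative rather than the repo's API (the actual implementation uses multi-resolution hash grids via tiny-cuda-nn and an uncertainty-guided mask objective).

import torch
import torch.nn as nn

class TinyHashEncoding(nn.Module):
    """Single-level hash table mapping positions to learnable features (illustrative)."""
    def __init__(self, n_dims, table_size=2**14, n_features=2):
        super().__init__()
        self.table = nn.Parameter(torch.randn(table_size, n_features) * 1e-4)
        # Large primes for spatial hashing, one per input dimension.
        primes = [1, 2654435761, 805459861, 3674653429][:n_dims]
        self.register_buffer("primes", torch.tensor(primes))
        self.table_size = table_size

    def forward(self, x):  # x: (N, n_dims), coordinates in [0, 1)
        idx = (x * 1024).long()                    # quantize to a coarse grid
        h = (idx * self.primes).sum(-1) % self.table_size
        return self.table[h]                       # (N, n_features)

class MaskedSpaceTimeHash(nn.Module):
    def __init__(self):
        super().__init__()
        self.static_enc = TinyHashEncoding(n_dims=3)   # 3D hash over (x, y, z)
        self.dynamic_enc = TinyHashEncoding(n_dims=4)  # 4D hash over (x, y, z, t)
        # Learnable mask head: maps a 3D position to a weight in (0, 1).
        self.mask_head = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, xyz, t):
        m = torch.sigmoid(self.mask_head(xyz))     # per-point space-time weight
        f_static = self.static_enc(xyz)
        f_dynamic = self.dynamic_enc(torch.cat([xyz, t], dim=-1))
        # Weighted combination; here the mask gates the dynamic (4D) branch.
        return m * f_dynamic + (1 - m) * f_static

enc = MaskedSpaceTimeHash()
feat = enc(torch.rand(8, 3), torch.rand(8, 1))
print(feat.shape)  # torch.Size([8, 2])

A mask near zero routes a point entirely through the 3D table, so static regions never query or write to the 4D table; this is what keeps the 4D hash small and its collision rate low.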
We recommend visiting our project page to watch the videos in full quality.
Demo videos: imm.mp4, n3dv.mp4, campus.mp4
Create a conda environment and install this repo:
conda create -n MSTH python=3.8
conda activate MSTH
pip install -e .
and install tiny-cuda-nn for fast feed-forward NNs (via its standard PyTorch bindings):
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
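As a quick sanity check (not part of the original instructions), you can confirm the bindings import cleanly:
python -c "import tinycudann"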
python download.py <dataset-name> --scene <scene-name>
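For example, to fetch a single scene (the dataset and scene identifiers below are illustrative; see download.py for the names the script actually accepts):
python download.py n3dv --scene coffee_martini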
python -m MSTH.script.train <config-name> --experiment-name <exp-name> --vis <logger> --output-dir <output-dir>
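For example (the config name, logger, and output directory below are placeholders, not verified against the repo; substitute the configs shipped with MSTH and your preferred logger):
python -m MSTH.script.train base --experiment-name coffee_martini --vis wandb --output-dir outputs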
Our code provides a viewer based on the NeRFStudio web viewer.
Our code is based on NeRFStudio.