This article only covers the parts that differ from the original NeRF paper.
Here $\delta_i$ is the distance from sample point $i$ to point $i + 1$ along the ray.
A novel view is rendered from the camera rotation $R \in \mathbb{R}^{3 \times 3}$ and translation $t \in \mathbb{R}^3$ in three steps:
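As a minimal sketch of how the rigid transform $(R, t)$ maps points into the target camera frame (the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def transform_points(points, R, t):
    """Map 3D points from the source to the target camera frame: x' = R x + t.

    points: (N, 3) array of 3D points; R: (3, 3) rotation; t: (3,) translation.
    """
    return points @ R.T + t

# Hypothetical example: identity rotation, unit translation along z.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])
pts = np.array([[0.0, 0.0, 2.0]])
print(transform_points(pts, R, t))  # the point moves from z = 2 to z = 3
```

The same transform, applied to the plane coordinates of the MPI, is what lets the representation be re-rendered from a new viewpoint.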
Volume rendering requires the volume density $\sigma$ at each position and the distance between consecutive sample points along the ray; from these, the compositing weights can be computed. Finally, novel views are rendered via volume rendering.
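The compositing step above can be sketched with the standard discrete volume-rendering equation, $\alpha_i = 1 - e^{-\sigma_i \delta_i}$ and $T_i = \prod_{j<i}(1-\alpha_j)$ (a generic sketch of NeRF-style compositing, not the paper's exact implementation):

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """Discrete volume rendering along one ray.

    sigmas: (N,) densities; colors: (N, 3) per-sample colors;
    deltas: (N,) distances between consecutive samples.
    Returns the composited pixel color and the per-sample weights.
    """
    # Opacity of each interval: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance up to sample i: T_i = prod_{j < i} (1 - alpha_j)
    T = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = T * alphas
    pixel = (weights[:, None] * colors).sum(axis=0)
    return pixel, weights
```

A single nearly opaque sample returns its own color with weight close to 1, as expected from the equation.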