Accurate Single Image to 3D Using View-Specific Neural Renderer
Citations: Web of Science 0 · Scopus 0
Abstract

Synthesizing a 3D model from a single 2D image is a significant challenge in computer vision and 3D modeling. Previous single-image-to-3D methods first generate multi-view images from the input image and then feed them to Neural Radiance Fields (NeRF) for 3D reconstruction. The visual consistency of these generated multi-view images across viewpoints therefore directly affects the accuracy of the 3D reconstruction. However, previous methods tend to generate view-inconsistent images due to the projective ambiguity of a single image. To address this view inconsistency, we propose a viewpoint-specific learning method for single-image-to-3D reconstruction using variants of NeRF. By introducing viewpoint-specific self-attention into NeRF, our method specializes learning per viewpoint, enabling accurate 3D reconstruction even from visually discontinuous multi-view images. Experimental results demonstrate that the proposed method outperforms state-of-the-art single-image-to-3D techniques, generating more accurate and coherent 3D models.
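The viewpoint-specific conditioning described in the abstract can be sketched as a self-attention layer whose queries are augmented with the viewing direction, so that attention weights change with the viewpoint. This is a minimal illustrative sketch only: the function name, feature shapes, and random projection weights below are assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def view_specific_attention(feats, view_dir, rng):
    """Single-head self-attention whose queries are conditioned on the
    viewing direction, making the attention weights view-specific.

    feats:    (N, d) per-sample features (e.g. NeRF MLP activations)
    view_dir: (3,)   unit viewing direction
    rng:      numpy Generator standing in for learned weights (sketch only)
    """
    n, d = feats.shape
    # Hypothetical learned projections; random here for illustration.
    Wq = rng.standard_normal((d + 3, d)) / np.sqrt(d + 3)
    Wk = rng.standard_normal((d, d)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)
    # Concatenating the view direction into the query input is what
    # makes the attention "view-specific" in this sketch.
    q_in = np.concatenate([feats, np.tile(view_dir, (n, 1))], axis=1)
    Q, K, V = q_in @ Wq, feats @ Wk, feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # (N, N) per-view weights
    return attn @ V  # (N, d) view-conditioned features

# Same features and same (simulated) weights, two different viewpoints:
feats = np.random.default_rng(0).standard_normal((4, 8))
out_a = view_specific_attention(feats, np.array([0.0, 0.0, 1.0]),
                                np.random.default_rng(1))
out_b = view_specific_attention(feats, np.array([1.0, 0.0, 0.0]),
                                np.random.default_rng(1))
# out_a and out_b differ, since the attention depends on the viewpoint.
```

Because only the query path sees the view direction, the same underlying features yield different attended outputs per viewpoint, which is one plausible way to specialize learning for each generated view.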

Keywords

Multi-View Generation Model, 3D Reconstruction, View Consistency
Title
Accurate Single Image to 3D Using View-Specific Neural Renderer
Authors
U, Chae Jun; Ko, Jaeeun; Hong, Kibeom
DOI
10.33851/JMIS.2024.11.4.241
Publication Date
2024-12
Journal
Journal of Multimedia Information System, Vol. 11, No. 4
Pages
241–248