Detail View
- U, Chae Jun
- Ko, Jaeeun
- Hong, Kibeom
Abstract
Synthesizing a 3D model from a single 2D image is a significant challenge in computer vision and 3D modeling. Previous single-image-to-3D methods first generate multi-view images from a single image and then feed these images to Neural Radiance Fields (NeRF) for 3D reconstruction. The visual consistency across viewpoints of these generated multi-view images therefore directly affects the accuracy of 3D reconstruction. However, previous methods tend to generate view-inconsistent images due to the projective ambiguity of a single image. To address this view inconsistency, we propose a viewpoint-specific learning method for single-image-to-3D reconstruction using variants of NeRF. By introducing viewpoint-specific self-attention to NeRF, our method specializes learning for each viewpoint, enabling accurate 3D reconstruction even with visually discontinuous multi-view images. Experimental results demonstrate that the proposed method outperforms state-of-the-art single-image-to-3D techniques by generating more accurate and coherent 3D models.
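The abstract describes conditioning a NeRF's self-attention on the viewpoint, so that attention over the same 3D samples differs per view. The sketch below is a toy, hypothetical illustration of that idea (it is not the authors' implementation; all weight matrices and the function name are assumptions): queries over per-ray sample features are biased by the viewing direction, so each viewpoint produces a different attention map.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def view_specific_self_attention(features, view_dir, d_k=16):
    """Toy view-conditioned self-attention over per-ray NeRF sample features.

    features: (n_samples, d) array of sample features along one ray.
    view_dir: (3,) viewing direction that biases the queries, so the
    attention pattern is specialized per viewpoint (a simplified,
    hypothetical take on the paper's viewpoint-specific self-attention).
    """
    rng = np.random.default_rng(0)  # fixed seed: random stand-ins for learned weights
    n, d = features.shape
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)
    Wdir = rng.standard_normal((3, d_k)) / np.sqrt(3)
    # Queries depend on the viewing direction; keys/values depend only on geometry.
    q = features @ Wq + view_dir @ Wdir
    k = features @ Wk
    v = features @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_k))
    return attn @ v

feats = np.random.default_rng(1).standard_normal((8, 32))
out_a = view_specific_self_attention(feats, np.array([0.0, 0.0, 1.0]))
out_b = view_specific_self_attention(feats, np.array([1.0, 0.0, 0.0]))
# Same geometry features, different viewpoints -> different attended features.
```

With identical sample features but different view directions, the attended outputs differ, which is the property the method relies on to reconcile visually discontinuous multi-view inputs.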
Keywords
- Title
- Accurate Single Image to 3D Using View-Specific Neural Renderer
- Authors
- U, Chae Jun; Ko, Jaeeun; Hong, Kibeom
- Publication Date
- 2024-12
- Journal
- Journal of Multimedia Information System
- Volume
- 11
- Issue
- 4
- Pages
- 241–248