Linear Four-Point LiDAR SLAM for Manhattan World Environments
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jeong, Eunju | - |
dc.contributor.author | Lee, Jina | - |
dc.contributor.author | Kang, Suyoung | - |
dc.contributor.author | Kim, Pyojin | - |
dc.date.accessioned | 2023-12-19T04:01:30Z | - |
dc.date.available | 2023-12-19T04:01:30Z | - |
dc.date.issued | 2023-11 | - |
dc.identifier.issn | 2377-3766 | - |
dc.identifier.uri | https://scholarworks.sookmyung.ac.kr/handle/2020.sw.sookmyung/159480 | - |
dc.description.abstract | We present a new SLAM algorithm that utilizes an inexpensive four-point LiDAR to compensate for the limited range and narrow viewing angle of RGB-D cameras. The four-point LiDAR can detect distances up to 40 m, but it senses only four distance measurements per scan. In open spaces, RGB-D SLAM approaches, such as L-SLAM, fail to estimate robust 6-DoF camera poses due to the limitations of the RGB-D camera. We detect walls beyond the range of RGB-D cameras using the four-point LiDAR; subsequently, we build a reliable global Manhattan world (MW) map while simultaneously estimating 6-DoF camera poses. By leveraging the structural regularities of indoor MW environments, we overcome the challenge of SLAM with the sparse sensing of the four-point LiDAR. We expand the application range of L-SLAM while preserving its strong performance, even in low-textured environments, using the linear Kalman filter (KF) framework. Our experiments in various indoor MW spaces, including open spaces, demonstrate that the performance of the proposed method is comparable to that of other state-of-the-art SLAM methods. | - |
dc.format.extent | 8 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | Linear Four-Point LiDAR SLAM for Manhattan World Environments | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/LRA.2023.3315205 | - |
dc.identifier.scopusid | 2-s2.0-85171593120 | - |
dc.identifier.wosid | 001081553900005 | - |
dc.identifier.bibliographicCitation | IEEE Robotics and Automation Letters, v.8, no.11, pp 7392 - 7399 | - |
dc.citation.title | IEEE Robotics and Automation Letters | - |
dc.citation.volume | 8 | - |
dc.citation.number | 11 | - |
dc.citation.startPage | 7392 | - |
dc.citation.endPage | 7399 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Robotics | - |
dc.relation.journalWebOfScienceCategory | Robotics | - |
dc.subject.keywordAuthor | 6-DOF | - |
dc.subject.keywordAuthor | Cameras | - |
dc.subject.keywordAuthor | Computer Vision for Transportation | - |
dc.subject.keywordAuthor | Laser radar | - |
dc.subject.keywordAuthor | Point cloud compression | - |
dc.subject.keywordAuthor | RGB-D Perception | - |
dc.subject.keywordAuthor | Sensor Fusion | - |
dc.subject.keywordAuthor | Sensors | - |
dc.subject.keywordAuthor | Simultaneous localization and mapping | - |
dc.subject.keywordAuthor | Three-dimensional displays | - |
dc.subject.keywordAuthor | Vision-Based Navigation | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/10250905 | - |
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.