Detail View
Abstract
In the rapid development of artificial intelligence (AI), explainable AI (XAI) plays a critical role in ensuring the reliability and safety of AI systems. Existing XAI studies mainly focus on interpreting the process by which a model derives a specific conclusion, which is essential for providing transparency in AI decision-making in high-risk applications. However, most studies concentrate on how AI makes correct predictions, and relatively little research addresses how to analyze and effectively correct the root causes of errors introduced during data collection. Such errors can lead to biased model learning, misclassification, and reduced overall system reliability. AI pipelines typically treat these errors as simple noise and respond by applying outlier detection or noise removal techniques to refine the data, but these approaches cannot resolve the root causes of the errors. In this study, we propose a feedback-based error correction framework that goes beyond simply detecting data errors: it explains why the errors occurred and corrects them quickly through a systematic approach. The proposed framework detects errors early in the data collection phase, classifies the error types, identifies the error occurrence mechanism through question-answering (Q&A)-based cause analysis, and constructs a feedback loop that corrects errors through automated correction or human intervention based on the results, thereby continuously improving data quality. In particular, it focuses on data error management in high-risk environments such as autonomous driving, and proposes an environment-adaptive data error detection and correction technique.
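The detect → classify → cause-analyze → correct/escalate loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' SAPGDR implementation: all function names, the range-based detector, and the lookup standing in for Q&A-based cause analysis are hypothetical simplifications.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DataError:
    record_id: int
    error_type: str          # e.g. "out_of_range" (hypothetical error taxonomy)
    cause: Optional[str] = None

def detect_errors(records: List[float]) -> List[DataError]:
    """Flag records outside a plausible [0, 1] range (stand-in for real detectors)."""
    return [DataError(i, "out_of_range")
            for i, v in enumerate(records) if not 0.0 <= v <= 1.0]

def analyze_cause(err: DataError) -> DataError:
    """Q&A-based cause analysis, reduced to a lookup table for illustration."""
    answers = {"out_of_range": "sensor calibration drift"}
    err.cause = answers.get(err.error_type, "unknown")
    return err

def correct(records: List[float], err: DataError) -> bool:
    """Automated correction when the cause is known; otherwise escalate."""
    if err.cause == "sensor calibration drift":
        records[err.record_id] = min(max(records[err.record_id], 0.0), 1.0)
        return True
    return False  # unresolved cause: hand off to human intervention

def feedback_loop(records: List[float], max_rounds: int = 3) -> List[float]:
    """Repeat detection and correction until the data passes or rounds run out."""
    for _ in range(max_rounds):
        errors = detect_errors(records)
        if not errors:
            break
        for err in errors:
            if not correct(records, analyze_cause(err)):
                print(f"record {err.record_id}: needs human review ({err.error_type})")
    return records

print(feedback_loop([0.2, 1.7, 0.5, -0.3]))  # → [0.2, 1.0, 0.5, 0.0]
```

In a real deployment the detector, cause analysis, and correction policy would be environment-adaptive components rather than the fixed rules shown here.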
Keywords
- Title
- SAPGDR: Situation Adaptive Prompt Guided Data Rectification Architecture for Safety AI
- Authors
- Kang, Jieun; Kim, Subi; Ryu, Jimin; Yoon, Yongik
- Publication Date
- 2025-09
- Type
- Conference Paper
- Journal
- 2025 IEEE/ACIS 23rd International Conference on Software Engineering Research, Management and Applications, SERA 2025 - Proceedings
- Pages
- 349–354