WEB OF SCIENCE: 0
SCOPUS: 0
Abstract
Software completeness appraisal plays a critical role in contractual compliance verification, dispute resolution, and public procurement evaluation. It is a dynamic, execution-based assessment process that determines whether contracted requirements are fulfilled through observable system behavior. Large Language Models (LLMs) offer potential support for requirement understanding and code analysis, yet their susceptibility to hallucination and non-determinism limits their suitability as final decision-making tools in high-reliability contexts. This paper proposes a hallucination-controlled, LLM-assisted appraisal framework in which LLMs are restricted to auxiliary analytical roles. Specifically, LLMs support test scenario drafting, requirement-evidence semantic matching, and report drafting, while final completeness judgments are made by experts based on execution evidence and predefined rules. The proposed framework demonstrates how LLMs can be integrated into software completeness appraisal in a controlled manner, improving efficiency while preserving reliability.
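The abstract describes a strict role separation: the LLM only drafts and suggests, while completeness verdicts come from execution evidence and predefined rules. A minimal sketch of that separation might look as follows; all names (`Evidence`, `llm_suggest_match`, `final_judgment`) and the scoring threshold are hypothetical illustrations, not the paper's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    """Execution-based evidence gathered by running the system under test."""
    requirement_id: str
    executed: bool          # the relevant scenario was actually run
    output_matches: bool    # observed behavior matched the contracted requirement

def llm_suggest_match(requirement: str, log: str) -> float:
    """Auxiliary LLM role (stubbed): score semantic similarity between a
    requirement and an execution log. Advisory only; never decides outcome."""
    # A real system would call an LLM API here; this stub is a placeholder.
    return 0.8 if requirement.split()[0].lower() in log.lower() else 0.2

def final_judgment(ev: Evidence, llm_score: float) -> str:
    """Predefined rule: completeness is decided solely by execution evidence.
    The LLM score can only flag unresolved items for expert review."""
    if ev.executed and ev.output_matches:
        return "fulfilled"
    if llm_score >= 0.5:
        return "expert-review"  # LLM suggests relevance, but there is no proof
    return "unfulfilled"
```

Note that `llm_suggest_match` never appears in the fulfillment branch: even a high similarity score cannot mark a requirement as fulfilled, which mirrors the hallucination-control constraint the abstract states.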
Keywords
- Title
- 소프트웨어 완성도 감정에서 대규모 언어모델 활용 기법 연구
- Title (Other Language)
- A Study on the Use of Large Language Models for Software Completeness Appraisal
- Author
- 김유경
- Publication Date
- 2026-03
- Type
- Y
- Journal
- 소프트웨어포렌식 논문지
- Volume
- 22
- Issue
- 1
- Pages
- 1–10