Abstract
This study explored the possibility of using ChatGPT for writing skills assessment. Forty-seven essays written by undergraduate students on a given topic were assessed by both humans and ChatGPT, and the assessment results were compared and analyzed. The analysis highlighted varying degrees of agreement between the two evaluators depending on the assessment domain and specific assessment items, with an overall low level of agreement. Of the 13 assessment items, agreement was significant for three items related to content, one to organization, and one to expression. Agreement between human and ChatGPT assessments was significant when assessing the relevance of topics and content or the coherence between paragraphs, rather than the qualitative aspects of content. Additionally, when examining the reasons why ChatGPT assigned certain scores, it was found that the consistency of assessments decreased when more than one criterion needed to be considered simultaneously. In contrast, scoring based on a numerical count showed that ChatGPT seemed to scrutinize the text more meticulously than humans. Moreover, when assessing the validity of content, ChatGPT tended to expect relatively specific evidence to be presented. This study is distinctive in that it explores areas where ChatGPT can evaluate discursive essays, which require the logical expression of one's thoughts, rather than essays with definite answers. Furthermore, it is significant in examining how ChatGPT interprets and executes writing skills assessment criteria.
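The abstract does not specify which agreement statistic was used to compare the human and ChatGPT scores. As an illustration only, a chance-corrected measure such as Cohen's kappa is a common choice for comparing two raters' categorical or ordinal scores; the sketch below implements it in plain Python. The rater data are hypothetical, not from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Proportion of items on which the two raters gave the same score.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal score frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical 1-5 scores from a human rater and ChatGPT on ten essays.
human   = [3, 4, 2, 5, 3, 4, 1, 3, 2, 4]
chatgpt = [3, 3, 2, 5, 4, 4, 2, 3, 2, 5]
print(round(cohens_kappa(human, chatgpt), 3))  # → 0.481
```

A kappa near 0 indicates chance-level agreement and values near 1 near-perfect agreement, which is why chance-corrected statistics are preferred over raw percent agreement for rater-comparison studies like this one.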
Keywords
- Title
- 글쓰기 역량 평가에서 ChatGPT 활용 가능성 탐색 : 논술형 글쓰기 평가를 중심으로
- Title (English)
- A Study on Exploring the Potential of ChatGPT in Writing Skills Assessment: Focusing on Essay Writing
- Authors
- 박소영; 이병윤; 홍유정
- Publication Date
- 2024-08
- Journal
- 교육학연구
- Volume
- 62
- Issue
- 5
- Pages
- 219–248