Abstract
Recent advancements in online learning platforms have improved accessibility; however, visually impaired learners still face barriers accessing visual materials such as charts and diagrams in lecture videos. When instructors provide limited or no descriptions, learners miss out on critical content. This study proposes an AI-powered web-based system that detects visual elements in lecture videos and generates interactive natural language descriptions. The system integrates a YOLO-based chart detector with a T5-based caption generator in a novel detection-captioning pipeline. Leveraging cloud deployment, it achieves up to 27x faster processing than conventional methods, ensuring smooth operation on low-spec devices. Lightweight models further enhance computational efficiency and responsiveness. Upon detecting a chart, the system pauses playback, delivers a synthesized spoken and textual description, then resumes the lecture, offering seamless access to visual content. A user study with 20 visually impaired participants guided the design of an optimized interaction flow that supports accessibility without disrupting learning engagement. Users can also review content and request additional explanations. The platform complies with WAI-ARIA guidelines, supporting screen readers, keyboard navigation, and interactive feedback for diverse users. This research contributes to accessible education by combining AI-driven visual interpretation with interactive HCI design, transforming visual content into a more inclusive, interactive learning experience.
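The pause–describe–resume interaction described in the abstract can be sketched as a simple pipeline. This is an illustrative sketch only: `detect_charts` and `caption` below are hypothetical stand-ins for the paper's YOLO-based detector and T5-based caption generator, and the event names are assumptions, not the authors' API.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # detected element class, e.g. "chart"
    start_s: float  # timestamp (seconds) where the element appears

def detect_charts(frames):
    """Stand-in for the YOLO-based chart detector: keep only chart frames."""
    return [Detection(label=l, start_s=t) for t, l in frames if l == "chart"]

def caption(det):
    """Stand-in for the T5-based caption generator."""
    return f"A chart appears at {det.start_s:.0f}s."

def playback_with_descriptions(frames):
    """Emit the pause -> describe -> resume event sequence for each chart."""
    events = []
    for det in detect_charts(frames):
        events.append(("pause", det.start_s))      # halt video playback
        events.append(("describe", caption(det)))  # spoken/textual description
        events.append(("resume", det.start_s))     # continue the lecture
    return events
```

In the real system the "describe" step would also drive speech synthesis and await optional user requests for additional explanation before resuming.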
Keywords
- Title: Toward Inclusive Online Learning: AI-Driven Chart Description for Visually Impaired Learners
- Authors: Park, Joo Hyun; Jeong, Sungheon; Yoo, Juhan; Song, Yoojeong
- Publication Date: 2026-02
- Type: Article
- Journal: IEEE Access
- Volume: 14
- Pages: 25299–25310