임채문; 박수현; 김중헌
Abstract
Reinforcement learning (RL) has proven to be an effective solution for sequential decision-making problems. However, its applicability is often limited in real-world scenarios where interaction with the environment is restricted or prohibited. To overcome this challenge, offline RL has been proposed, which seeks to optimize actions based on datasets previously gathered from the environment. Recent advancements in generative artificial intelligence (GAI) have been integrated with offline RL, resulting in GAI-based offline RL. This approach leverages the capability of GAI to approximate arbitrary probability distributions, allowing it to mimic the distribution of expert datasets. This paper conducts a comprehensive study of the theoretical background and properties of diffusion-based offline RL algorithms, and analyzes their performance in various RL environments. Furthermore, it outlines future directions for research on diffusion-based offline RL based on this analysis.
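The core idea the abstract describes, using a generative model to mimic the distribution of an expert dataset, can be illustrated with a deliberately minimal sketch of diffusion-model training. All names and numbers below are hypothetical: a toy one-dimensional "expert action" dataset, a linear noise schedule, and a crude linear noise predictor standing in for the neural network used in real diffusion policies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "expert dataset": actions clustered around +1.0.
expert_actions = rng.normal(1.0, 0.1, size=1000)

# Linear noise schedule over T diffusion steps (an assumption; real
# diffusion policies tune this schedule carefully).
T = 50
betas = np.linspace(1e-4, 0.2, T)
alpha_bars = np.cumprod(1.0 - betas)

# Tiny noise predictor: eps_hat = w0*a_t + w1*(t/T) + w2. A linear model
# is a stand-in for the denoising network in actual diffusion-based RL.
w = np.zeros(3)

def predict_eps(a_t, t_norm, w):
    return w[0] * a_t + w[1] * t_norm + w[2]

lr = 1e-2
losses = []
for step in range(2000):
    # Sample clean expert actions, timesteps, and Gaussian noise.
    a0 = rng.choice(expert_actions, size=64)
    t = rng.integers(0, T, size=64)
    eps = rng.normal(size=64)
    # Forward noising: a_t = sqrt(alpha_bar_t)*a0 + sqrt(1-alpha_bar_t)*eps.
    a_t = np.sqrt(alpha_bars[t]) * a0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    t_norm = t / T
    # Standard denoising objective: predict the injected noise (MSE).
    err = predict_eps(a_t, t_norm, w) - eps
    losses.append(np.mean(err ** 2))
    # Plain SGD on the three parameters.
    w[0] -= lr * np.mean(2 * err * a_t)
    w[1] -= lr * np.mean(2 * err * t_norm)
    w[2] -= lr * np.mean(2 * err)
```

Minimizing this denoising loss is how a diffusion model learns to approximate the data distribution; sampling then reverses the noising chain, which is how diffusion-based offline RL policies generate actions that resemble the expert dataset.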
- Title
- 데이터 기반 제어를 위한 확산 모델 기반 오프라인 강화학습 연구
- Title (other language)
- A Research on Diffusion-based Offline Reinforcement Learning for Data-Driven Control
- Authors
- 임채문; 박수현; 김중헌
- Publication date
- 2025-12
- Type
- Y
- Journal
- 정보과학회논문지
- Volume
- 52
- Issue
- 12
- Pages
- 1047 ~ 1055