Detail View
- Chung, Jaehyun;
- Kim, Minjoo;
- Min, Seokhyeon;
- Choi, Hyunseok;
- Park, Soohyun;
- and 1 more
WEB OF SCIENCE: 3
SCOPUS: 2
Abstract
Investors struggle with the unpredictable, nonlinear nature of stock price volatility. Econometric models based on machine learning algorithms have improved prediction accuracy but remain limited in dynamic, highly correlated markets. This paper builds upon proximal policy optimization (PPO), a well-established deep reinforcement learning (DRL) method, and proposes an enhanced variant called correlation graph-based PPO (CGPPO), which incorporates spatio-temporal stock correlations for more realistic and robust predictions. The reward function, designed around trading frequency and portfolio value, reflects practical investment objectives. The experiments are conducted in a simulated market environment using four major Korean stocks while explicitly modeling the correlations among them. Experimental results show that the proposed CGPPO algorithm outperforms baseline methods, achieving a reward convergence value of 64.60% during training and a prediction value of 69.04% during inference.
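The abstract describes a reward built from trading frequency and portfolio value but does not give its formula. A minimal sketch of one such reward follows; the function name, log-return form, and penalty weight are illustrative assumptions, not the paper's actual design:

```python
import math

# Hedged sketch (not the paper's code): reward rises with portfolio value
# growth and is penalized per executed trade, in the spirit of the
# trading-frequency/portfolio-value reward described in the abstract.
def trading_reward(prev_value: float, curr_value: float,
                   n_trades: int, trade_penalty: float = 0.001) -> float:
    """Log portfolio return minus a fixed cost per trade executed this step."""
    log_return = math.log(curr_value / prev_value)
    return log_return - trade_penalty * n_trades

# A flat portfolio with no trades yields zero reward; for the same value
# change, more trades mean a lower reward.
print(trading_reward(100.0, 100.0, 0))  # 0.0
```

In a PPO training loop, a per-step reward of this shape discourages over-trading while still rewarding portfolio growth; the penalty weight would be tuned against transaction costs.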
- Title
- Correlation-assisted spatio-temporal reinforcement learning for stock revenue maximization
- Authors
- Chung, Jaehyun; Kim, Minjoo; Min, Seokhyeon; Choi, Hyunseok; Park, Soohyun; Kim, Joongheon
- Publication date
- 2025-09
- Type
- Article
- Volume
- 289