An intelligent stock trading system based on reinforcement learning
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee J.W. | - |
dc.contributor.author | Kim S.-D. | - |
dc.contributor.author | Lee J. | - |
dc.contributor.author | Chae J. | - |
dc.date.accessioned | 2022-04-19T12:05:11Z | - |
dc.date.available | 2022-04-19T12:05:11Z | - |
dc.date.issued | 2003-02 | - |
dc.identifier.issn | 0916-8532 | - |
dc.identifier.issn | 1745-1361 | - |
dc.identifier.uri | https://scholarworks.sookmyung.ac.kr/handle/2020.sw.sookmyung/149190 | - |
dc.description.abstract | This paper describes a stock trading system based on reinforcement learning, regarding the process of stock price changes as a Markov decision process (MDP). The system adopts two popular reinforcement learning algorithms, temporal-difference (TD) learning and Q-learning, for selecting stocks and optimizing trading parameters, respectively. Input features of the system are devised using technical analysis, and value functions are approximated by feedforward neural networks. Multiple cooperative agents are used for Q-learning to efficiently integrate global trend prediction with local trading strategy. Agents communicate with one another, sharing training episodes and learned policies, while keeping the overall scheme of conventional Q-learning. Experimental results on the Korean stock market show that our trading system outperforms the market average and makes appreciable profits. Furthermore, our system is superior to a system trained by supervised learning in terms of risk management. | - |
dc.format.extent | 10 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Oxford University Press | - |
dc.title | An intelligent stock trading system based on reinforcement learning | - |
dc.type | Article | - |
dc.publisher.location | Japan | - |
dc.identifier.scopusid | 2-s2.0-0038719209 | - |
dc.identifier.wosid | 000181032800016 | - |
dc.identifier.bibliographicCitation | IEICE Transactions on Information and Systems, v.E86D, no.2, pp 296 - 305 | - |
dc.citation.title | IEICE Transactions on Information and Systems | - |
dc.citation.volume | E86D | - |
dc.citation.number | 2 | - |
dc.citation.startPage | 296 | - |
dc.citation.endPage | 305 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.subject.keywordAuthor | Multiple agents | - |
dc.subject.keywordAuthor | Neural network | - |
dc.subject.keywordAuthor | Reinforcement learning | - |
dc.subject.keywordAuthor | Stock selection | - |
dc.subject.keywordAuthor | TD algorithm | - |
dc.identifier.url | https://search.ieice.org/bin/summary.php?id=e86-d_2_296&category=D&year=2003&lang=E&abst= | - |
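The abstract describes Q-learning with value functions approximated by feedforward neural networks, trained on features derived from price data. As a minimal single-agent sketch only (the paper's actual technical-analysis features, network architecture, multi-agent communication, and TD stock-selection stage are not reproduced here; the synthetic random-walk prices, window size, and learning rates below are all illustrative assumptions), a semi-gradient Q-learning update through a one-hidden-layer network might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic random-walk price series; a stand-in for real market data.
prices = np.cumsum(rng.normal(0.0, 1.0, 200)) + 100.0

WINDOW = 5            # state = last 5 percentage returns (illustrative feature)
ACTIONS = 3           # 0 = hold, 1 = long, 2 = short
HIDDEN = 16
ALPHA, GAMMA, EPS = 0.01, 0.95, 0.1

# One-hidden-layer feedforward net approximating Q(s, a) for all actions.
W1 = rng.normal(0.0, 0.1, (HIDDEN, WINDOW))
W2 = rng.normal(0.0, 0.1, (ACTIONS, HIDDEN))

def q_values(state):
    """Forward pass: returns Q-values for all actions and hidden activations."""
    h = np.tanh(W1 @ state)
    return W2 @ h, h

def state_at(t):
    """Percentage returns over the last WINDOW steps ending at time t."""
    window = prices[t - WINDOW : t + 1]
    return (np.diff(window) / window[:-1]) * 100.0

for t in range(WINDOW, len(prices) - 1):
    s = state_at(t)
    q, h = q_values(s)
    # Epsilon-greedy action selection.
    a = int(rng.integers(ACTIONS)) if rng.random() < EPS else int(np.argmax(q))
    # Reward: next-step return if long, its negation if short, zero if flat.
    ret = (prices[t + 1] - prices[t]) / prices[t]
    reward = (0.0, ret, -ret)[a]
    # Bootstrap target from the greedy value of the next state.
    q_next, _ = q_values(state_at(t + 1))
    target = reward + GAMMA * np.max(q_next)
    # Semi-gradient update: backpropagate the TD error through both layers,
    # computing the hidden-layer gradient before W2 is modified.
    err = target - q[a]
    grad_h = W2[a] * (1.0 - h**2)
    W2[a] += ALPHA * err * h
    W1 += ALPHA * err * np.outer(grad_h, s)
```

This sketch omits the cooperative multi-agent scheme the paper uses to combine global trend prediction with local trading strategy; each agent there runs a loop of this general shape while sharing training episodes and learned policies.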