Deep Reinforcement Learning-Based Edge Caching in Heterogeneous Networks
Citations
Web of Science: 1
Scopus: 3

Abstract

With the increasing number of mobile device users worldwide, utilizing mobile edge computing (MEC) devices close to users for content caching can reduce transmission latency compared with receiving content from a server or the cloud. However, because MEC devices have limited storage capacity, it is necessary to determine the types and sizes of content to be cached. In this study, we investigate a caching strategy that increases the hit ratio from small base stations (SBSs) for mobile users in a heterogeneous network consisting of one macro base station (MBS) and multiple SBSs. If there are several SBSs that users can access, the hit ratio can be improved by reducing duplicate content and increasing the diversity of content across SBSs. We propose a Deep Q-Network (DQN)-based caching strategy that considers time-varying content popularity and content redundancy across multiple SBSs. Content is stored in the SBSs in a divided form using maximum distance separable (MDS) codes to enhance the diversity of the content. Experiments in various environments show that the proposed caching strategy outperforms the other methods in terms of hit ratio.
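The abstract's core argument is that when users can reach several SBSs, storing complementary MDS-coded fragments (any k distinct coded fragments reconstruct a content) serves more contents than replicating whole files. The following toy sketch illustrates that comparison only; the network size, capacities, Zipf popularity, and placements are illustrative assumptions, not the paper's experimental setup, and the paper's DQN agent (which learns placements under time-varying popularity) is not modeled here.

```python
# Toy illustration (assumed parameters, not the paper's model):
# 2 SBSs reachable by every user, each with 4 fragment slots;
# 8 contents, each MDS-encoded with k = 2, so ANY 2 distinct
# coded fragments are enough to reconstruct a content.
NUM_CONTENTS, K, CAPACITY = 8, 2, 4

def zipf_popularity(n, s=1.0):
    """Zipf request probabilities for contents ranked 1..n."""
    weights = [1.0 / (rank ** s) for rank in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def hit_ratio(placements, popularity, k=K):
    """A request for content c is a hit if the reachable SBSs
    jointly hold at least k coded fragments of c."""
    hits = 0.0
    for c, p in enumerate(popularity):
        fragments = sum(sbs.count(c) for sbs in placements)
        if fragments >= k:
            hits += p
    return hits

popularity = zipf_popularity(NUM_CONTENTS)

# Replication: both SBSs cache full copies (k fragments each) of the
# two most popular contents -- capacity is wasted on duplicates.
replicated = [[0, 0, 1, 1], [0, 0, 1, 1]]

# Coded diversity: each SBS stores one fragment of the top-4 contents;
# together they hold k = 2 fragments of each, covering twice as many.
diverse = [[0, 1, 2, 3], [0, 1, 2, 3]]

print(f"replicated hit ratio: {hit_ratio(replicated, popularity):.3f}")
print(f"diverse hit ratio:    {hit_ratio(diverse, popularity):.3f}")
```

Under the same total capacity, the coded placement covers contents 0-3 instead of only 0-1, so its hit ratio is strictly higher; this is the diversity effect the proposed DQN strategy exploits when choosing what each SBS caches.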

Keywords

Edge Caching, Heterogeneous Networks, Reinforcement Learning
Title
Deep Reinforcement Learning-Based Edge Caching in Heterogeneous Networks
Authors
최윤정, 임유진
DOI
10.3745/JIPS.03.0180
Publication Date
2022-12
Journal
JIPS(Journal of Information Processing Systems)
Volume
18
Issue
6
Pages
803-812