Indexed in: Web of Science; SCOPUS
Abstract
Recently, multi-access edge computing (MEC) has emerged as a promising technology to alleviate the computing burden of vehicular terminals and efficiently support vehicular applications. A vehicle can improve the quality of experience of its applications by offloading tasks to MEC servers. However, channel conditions are time-varying due to channel interference among vehicles, and path loss varies with vehicle mobility. Task arrivals at the vehicles are also stochastic. It is therefore difficult to determine an optimal offloading and resource allocation decision in a dynamic MEC system, because offloading is affected by wireless data transmission. In this paper, we study computation offloading with resource allocation in a dynamic MEC system. The objective is to minimize power consumption and maximize throughput while meeting the delay constraints of tasks. To this end, the method allocates computing resources for local execution and transmission power for offloading. We formulate the problem as a Markov decision process, and propose an offloading method using a deep reinforcement learning algorithm, the deep deterministic policy gradient (DDPG). Simulations show that the proposed method outperforms existing methods in terms of throughput and satisfaction of delay constraints.
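To make the MDP formulation in the abstract concrete, the following is a minimal sketch of one state transition. All constants and names here are illustrative assumptions, not values from the paper: the state is taken to be (channel gain, queue backlog), the action is (local CPU frequency, transmit power), and the reward trades bits served against power spent, matching the stated objective of maximizing throughput while minimizing power consumption.

```python
import math
import random

# Assumed system parameters (illustrative only, not from the paper).
BANDWIDTH_HZ = 1e6       # uplink bandwidth
NOISE_W = 1e-9           # receiver noise power
CYCLES_PER_BIT = 500     # CPU cycles required per bit of task data
KAPPA = 1e-27            # effective switched-capacitance coefficient
POWER_WEIGHT = 0.5       # throughput-vs-power trade-off weight in the reward

def step(channel_gain, queue_bits, cpu_freq_hz, tx_power_w, slot_s=0.01):
    """One hypothetical MDP transition: returns (next_queue_bits, reward)."""
    # Bits processed locally in one slot at the chosen CPU frequency.
    local_bits = cpu_freq_hz * slot_s / CYCLES_PER_BIT
    # Bits offloaded over the wireless link, using the Shannon rate.
    rate_bps = BANDWIDTH_HZ * math.log2(1.0 + tx_power_w * channel_gain / NOISE_W)
    offload_bits = rate_bps * slot_s
    served = min(queue_bits, local_bits + offload_bits)
    # Power cost: dynamic CPU power (kappa * f^3) plus transmit power.
    power_w = KAPPA * cpu_freq_hz ** 3 + tx_power_w
    reward = served - POWER_WEIGHT * power_w * slot_s
    # Stochastic task arrivals (exponential inter-slot arrival size, assumed).
    next_queue_bits = queue_bits - served + random.expovariate(1.0 / 1e4)
    return next_queue_bits, reward
```

In a DDPG setup, an actor network would map the continuous state to the continuous action (cpu_freq_hz, tx_power_w), and a critic would estimate the expected discounted sum of this reward; the sketch above only defines the environment dynamics that such an agent would interact with.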
Keywords
- Title
- Computation Offloading with Resource Allocation Based on DDPG in MEC
- Title (other language)
- Computation Offloading with Resource Allocation Based on DDPG in MEC
- Authors
- 문성원; 임유진
- Publication date
- 2024-04
- Type
- Article
- Volume
- 20
- Issue
- 2
- Pages
- 226 ~ 238