Task Migration Based on Reinforcement Learning in Vehicular Edge Computing
Citations
Web of Science: 9; Scopus: 17

Abstract

Multiaccess edge computing (MEC) has emerged as a promising technology for time-sensitive and computation-intensive tasks. Because of the high mobility of users, especially in vehicular environments, migrating computational tasks between vehicular edge computing servers (VECSs) has become one of the most critical challenges in guaranteeing quality-of-service (QoS) requirements. If vehicles' tasks migrate unevenly to particular VECSs, performance degrades in terms of latency and QoS. Therefore, in this study, we formulate a computational task migration problem that balances the loads of VECSs while minimizing migration costs. To solve this problem, we adopt a reinforcement learning algorithm in a cooperative environment in which the VECSs of a group can collaborate. The objective of this study is to optimize load balancing and migration cost while satisfying the delay constraints of the vehicles' computation tasks. Simulations are performed to evaluate the performance of the proposed algorithm. The results show that, compared with other algorithms, the proposed algorithm achieves approximately 20-40% better load balancing and an approximately 13-28% higher task completion rate within the delay constraints. © 2021 Sungwon Moon et al.
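The abstract's idea of learning a migration policy that trades off load balance against migration cost can be sketched with tabular Q-learning. This is a minimal illustration, not the paper's algorithm: the state (index of the most-loaded VECS), action (server a task is placed on), and reward weights are all hypothetical simplifications chosen for the sketch.

```python
import random

random.seed(0)

NUM_SERVERS = 4          # VECSs in one cooperative group (hypothetical size)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Q-table: rows are states (id of the most-loaded server),
# columns are actions (server the task is migrated/placed on).
Q = [[0.0] * NUM_SERVERS for _ in range(NUM_SERVERS)]

def reward(loads, action, source):
    """Penalize load imbalance, plus a fixed migration cost when the
    task leaves its source server (weights are illustrative)."""
    imbalance = max(loads) - min(loads)
    migration_cost = 1.0 if action != source else 0.0
    return -(imbalance + 0.5 * migration_cost)

def choose_action(state):
    if random.random() < EPS:                  # epsilon-greedy exploration
        return random.randrange(NUM_SERVERS)
    return max(range(NUM_SERVERS), key=lambda a: Q[state][a])

def train(episodes=2000):
    for _ in range(episodes):
        loads = [random.randint(0, 5) for _ in range(NUM_SERVERS)]
        source = random.randrange(NUM_SERVERS)  # server the task arrives at
        state = loads.index(max(loads))         # most-loaded server id
        action = choose_action(state)
        loads[action] += 1                      # place the task
        r = reward(loads, action, source)
        next_state = loads.index(max(loads))
        # Standard Q-learning update
        Q[state][action] += ALPHA * (r + GAMMA * max(Q[next_state]) - Q[state][action])

train()
# Greedy policy learned per state: which server to prefer for placement
policy = [max(range(NUM_SERVERS), key=lambda a: Q[s][a]) for s in range(NUM_SERVERS)]
```

In the paper's setting the state, action, and reward would instead encode per-VECS queue loads, vehicle mobility, and delay constraints; this sketch only shows the shape of the learning loop.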

Keywords

Computational methods; Edge computing; Quality of service; Reinforcement learning; Computation tasks; Computation-intensive task; Computational task; Critical challenges; Delay constraints; Migration costs; Quality-of-service requirement (QoS); Vehicular environments; Learning algorithms
Title
Task Migration Based on Reinforcement Learning in Vehicular Edge Computing
Authors
Moon, Sungwon; Park, Jaesung; Lim, Yujin
DOI
10.1155/2021/9929318
Publication Date
2021-05
Type
Article
Journal
Wireless Communications and Mobile Computing, 2021
Pages
1 ~ 10