Optimal Task Offloading Decision in IIoT Environments Using Reinforcement Learning
Citations: Web of Science 0; Scopus 6

Abstract

In the Industrial Internet of Things (IIoT), various types of tasks are processed for small-quantity batch production. However, devices face challenges due to their limited battery lifespans and computational capabilities. To overcome these limitations, Mobile Edge Computing (MEC) has been introduced. In MEC, task offloading techniques for executing tasks have attracted much attention. However, a MEC server (MECS) also has limited computational capability, so offloading a large number of tasks increases the burden on both the server and the cellular network, which can degrade the quality of service for task execution. Thus, offloading between nearby devices through device-to-device (D2D) communication is drawing attention. We propose an optimal task offloading decision strategy in a combined MEC and D2D communication architecture. We aim to minimize the energy consumption of devices and the task execution delay under delay constraints. To solve this problem, we adopt the Q-learning algorithm, a Reinforcement Learning (RL) method. Simulation results show that the proposed algorithm outperforms other methods in terms of device energy consumption and task execution delay. © 2021 IEEE.
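The offloading decision described in the abstract can be sketched with tabular Q-learning. The following is a minimal illustration only, not the paper's implementation: the state space (coarse MECS load levels), the transition model, the reward weights, and the energy/delay cost figures are all hypothetical. Only the three action choices (local execution, offloading to the MECS, offloading to a nearby device via D2D) follow the abstract.

```python
import random

# Hypothetical setup, NOT taken from the paper.
ACTIONS = ["local", "mec", "d2d"]      # execute locally / offload to MECS / offload via D2D
STATES = ["low", "medium", "high"]     # assumed MECS load levels

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

# Q-table initialized to zero for every (state, action) pair.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, r, next_state):
    """Standard Q-learning update: Q <- Q + alpha * (r + gamma * max_a' Q' - Q)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])

def reward(action, load):
    """Illustrative reward: negative weighted sum of made-up energy and delay costs.
    Offloading to the MECS gets slower as the assumed server load rises."""
    energy = {"local": 3.0, "mec": 1.0, "d2d": 1.5}[action]
    delay = {"local": 2.0, "mec": 1.0 + 2.0 * STATES.index(load), "d2d": 1.5}[action]
    return -(0.5 * energy + 0.5 * delay)

random.seed(0)
state = random.choice(STATES)
for _ in range(2000):
    action = choose_action(state)
    next_state = random.choice(STATES)  # toy random load transitions
    update(state, action, reward(action, state), next_state)
    state = next_state

# Under high server load, the learned policy should steer away from the MECS.
print(max(ACTIONS, key=lambda a: Q[("high", a)]))
```

With these toy costs, the greedy policy learns to offload to the MECS when its load is low and to fall back on D2D offloading when it is high, which is the qualitative behavior the abstract motivates.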

Keywords

Computation Offloading; Device-to-Device (D2D) Communication; Industrial Internet of Things; Mobile Edge Computing; Q-Learning
Title
Optimal Task Offloading Decision in IIoT Environments Using Reinforcement Learning
Authors
Koo, Seolwon; Lim, Yujin
DOI
10.1109/ECICE52819.2021.9645710
Publication Date
2021-10
Type
Conference Paper
Journal
Proceedings of the 3rd IEEE Eurasia Conference on IOT, Communication and Engineering 2021, ECICE 2021
Pages
86–89