Original scientific paper

https://doi.org/10.24138/jcomss-2022-0141

Twin Delayed DDPG based Dynamic Power Allocation for Mobility in IoRT

Homayun Kabir ; Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Malaysia
Mau-Luen Tham ; Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Malaysia
Yoong Choon Chang ; Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Malaysia


Full text: English PDF, 1.863 Kb

Pages: 19-29

Abstract

The Internet of Robotic Things (IoRT) is a modern and fast-evolving technology employed in numerous socio-economic domains, connecting user equipment (UE) for communication and data transfer. To ensure quality of service (QoS) in IoRT applications, radio resources, for example transmit power, must be allocated efficiently among UE while accounting for interference management and throughput maximization. Traditionally, resource allocation has been formulated as an optimization problem and solved with mathematical programming techniques. However, such problems are generally nonconvex and NP-hard. This paper addresses one of the most crucial challenges in radio resource management, power allocation (PA), under an interfering multiple access channel (IMAC). In addition, UE exhibit natural mobility, which directly affects the channel condition between a remote radio head (RRH) and the UE. We therefore consider two well-known UE mobility models: i) random walk and ii) modified Gauss-Markov (GM), which makes the simulation environment more realistic and complex. A data-driven, model-free, continuous-action deep reinforcement learning algorithm, twin delayed deep deterministic policy gradient (TD3), is proposed; it combines policy gradients, actor-critic methods, and double deep Q-learning (DDQL). It optimizes the PA for i) stationary UE, ii) UE moving according to the random walk model, and iii) UE moving according to the modified GM model. Simulation results show that the proposed TD3 method outperforms model-based techniques such as weighted minimum mean square error (WMMSE) and fractional programming (FP), as well as model-free algorithms such as deep Q-network (DQN) and DDPG, in terms of average sum-rate.
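The abstract notes that TD3 extends DDPG with ideas from double deep Q-learning. The core of that extension, the clipped double-Q target with target policy smoothing, can be sketched as below. This is a minimal illustration, not the authors' implementation: the `actor_target`, `q1_target`, and `q2_target` callables stand in for trained target networks, and all hyperparameter values are assumed defaults.

```python
import numpy as np

def td3_target(reward, next_state, q1_target, q2_target, actor_target,
               gamma=0.99, noise_std=0.2, noise_clip=0.5,
               act_low=0.0, act_high=1.0):
    """Compute the TD3 Bellman target for one transition (toy sketch)."""
    # Target policy smoothing: perturb the target action with clipped
    # Gaussian noise, then clip back into the valid power range.
    noise = np.clip(np.random.normal(0.0, noise_std), -noise_clip, noise_clip)
    next_action = np.clip(actor_target(next_state) + noise, act_low, act_high)
    # Clipped double Q-learning: use the minimum of the twin target
    # critics to curb the overestimation bias that plain DDPG suffers.
    q_min = min(q1_target(next_state, next_action),
                q2_target(next_state, next_action))
    return reward + gamma * q_min
```

In a full agent this target trains both critics every step, while the actor and the target networks are updated only every few critic updates (the "delayed" part of TD3).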

Keywords

IoRT; Power Allocation; Radio Resource Management; User Mobility; Deep Reinforcement Learning; Twin Delayed Deep Deterministic Policy Gradient

Hrčak ID:

299767

URI

https://hrcak.srce.hr/299767

Publication date:

31.3.2023.
