
Original scientific paper

https://doi.org/10.7307/ptt.v34i6.4159

Reinforcement Learning-Based Routing Protocols in Vehicular and Flying Ad Hoc Networks – A Literature Survey

Pavle Bugarčić ; University of Belgrade, Faculty of Transport and Traffic Engineering
Nenad Jevtić ; University of Belgrade, Faculty of Transport and Traffic Engineering
Marija Malnar ; University of Belgrade, Faculty of Transport and Traffic Engineering



Pages: 893-906


Abstract

Vehicular and flying ad hoc networks (VANETs and FANETs) are becoming increasingly important with the development of smart cities and intelligent transportation systems (ITSs). The high mobility of nodes in these networks leads to frequent link breaks, which complicates the discovery of an optimal route from source to destination and degrades network performance. One way to overcome this problem is to use machine learning (ML) in the routing process, and the most promising among the different ML types is reinforcement learning (RL). Although there are several surveys on RL-based routing protocols for VANETs and FANETs, an important issue of integrating RL with well-established modern technologies, such as software-defined networking (SDN) or blockchain, has not been adequately addressed, especially when used in complex ITSs. In this paper, we focus on performing a comprehensive categorisation of RL-based routing protocols for both network types, having in mind their simultaneous use and their integration with other technologies. A detailed comparative analysis of the protocols is carried out based on the different factors that influence the reward function in RL and the consequences they have on network performance. Also, the key advantages and limitations of RL-based routing are discussed in detail.
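
For readers unfamiliar with how RL fits into packet routing, the following minimal sketch illustrates the general idea behind Q-learning-based next-hop selection. It is not taken from any of the surveyed protocols; all names and parameter values are illustrative, and in the surveyed protocols the reward would typically combine factors such as link stability, delay or hop count, which is exactly what the comparative analysis examines.

    import random

    ALPHA = 0.5    # learning rate (illustrative value)
    GAMMA = 0.8    # discount factor (illustrative value)
    EPSILON = 0.1  # exploration probability (illustrative value)

    q_table = {}   # (destination, neighbour) -> estimated route quality

    def choose_next_hop(destination, neighbours):
        # Epsilon-greedy choice among the currently reachable neighbours.
        if not neighbours:
            return None
        if random.random() < EPSILON:
            return random.choice(neighbours)
        return max(neighbours, key=lambda n: q_table.get((destination, n), 0.0))

    def update_q(destination, neighbour, reward, neighbour_links):
        # Standard Q-learning update after feedback from the chosen neighbour.
        # 'reward' stands in for whatever routing metric a protocol uses.
        best_next = max(
            (q_table.get((destination, n), 0.0) for n in neighbour_links),
            default=0.0,
        )
        old = q_table.get((destination, neighbour), 0.0)
        q_table[(destination, neighbour)] = old + ALPHA * (reward + GAMMA * best_next - old)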

Keywords

reinforcement learning; Q-learning; routing protocols; VANET; FANET; ITS

Hrčak ID: 293673

URI: https://hrcak.srce.hr/293673

Publication date: 2.12.2022.
