Multi-agent Deep Reinforcement Learning-based Task Scheduling and Resource Sharing for O-RAN-empowered Multi-UAV-assisted Wireless Sensor Networks

Document Type

Article

Publication Title

IEEE Transactions on Vehicular Technology

Abstract

Wireless sensor networks (WSNs) with ultra-dense sensors are crucial for several industries, such as smart agricultural systems deployed in fifth generation (5G) and beyond-5G Open Radio Access Networks (O-RAN). These WSNs employ multiple unmanned aerial vehicles (UAVs) to collect data from multiple sensor nodes (SNs) and relay it to a central controller for processing. The UAVs also provide resources to the SNs and extend network coverage over a vast geographical area. O-RAN allows the use of open standards and interfaces to create a wireless network for communications between the UAVs and ground SNs. It enables real-time data transfer, remote control, and other applications that require a reliable, high-speed connection by providing the flexibility and reliability that UAV-assisted WSNs need to meet the requirements of smart agricultural applications. However, the limited battery life and transmission power of UAVs, together with the shortage of energy resources at SNs, make it difficult to collect all the data and relay it to the base station, resulting in inefficient task computation and resource management in smart agricultural systems. In this paper, we propose a joint UAV task scheduling, trajectory planning, and resource-sharing framework for multi-UAV-assisted WSNs in smart agricultural monitoring scenarios that schedules the UAVs' charging, data collection, and landing times and allows UAVs to share energy with SNs. The main objective of our proposed framework is to minimize UAV energy consumption and network latency for effective data collection within a specific time frame. We formulate the multi-objective, non-convex optimization problem, transform it into a Markov decision process (MDP), and solve it with a multi-agent deep reinforcement learning (MADRL) algorithm.
The simulation results show that the proposed MADRL algorithm reduces the energy consumption cost when compared to deep Q-network, Greedy, and mixed-integer linear program (MILP) by 61.92%, 68.02%, and 69.9%, respectively.
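To illustrate the general idea behind the MDP-plus-MADRL approach described above, the following is a minimal, self-contained sketch of independent Q-learning for two UAV agents. The environment, state space (discretized battery level), actions (collect/charge/idle), and reward values are all hypothetical toy assumptions for illustration only; they are not the paper's system model or algorithm.

```python
import random

random.seed(0)

# Toy independent Q-learning sketch (hypothetical setup, not the paper's model):
# each UAV agent's state is a discretized battery level, with three actions.
ACTIONS = ["collect", "charge", "idle"]
N_BATTERY = 5  # battery levels 0..4

def step(battery, action):
    """Toy transition: returns (next_battery, reward)."""
    if action == "collect":
        if battery >= 2:
            return battery - 2, 5.0   # spend energy, gain data utility
        return battery, -1.0          # too little energy to collect
    if action == "charge":
        return min(battery + 2, N_BATTERY - 1), -0.5  # charging latency cost
    return battery, -0.1              # idling still costs time

def train_agent(episodes=500, alpha=0.2, gamma=0.9, eps=0.1):
    """Tabular Q-learning for one agent over a finite horizon."""
    q = [[0.0] * len(ACTIONS) for _ in range(N_BATTERY)]
    for _ in range(episodes):
        b = N_BATTERY - 1
        for _ in range(20):
            # Epsilon-greedy action selection.
            a = random.randrange(len(ACTIONS)) if random.random() < eps \
                else max(range(len(ACTIONS)), key=lambda i: q[b][i])
            nb, r = step(b, ACTIONS[a])
            # Standard Q-learning update.
            q[b][a] += alpha * (r + gamma * max(q[nb]) - q[b][a])
            b = nb
    return q

# "Multi-agent": each UAV trains its own independent Q-table.
agents = [train_agent() for _ in range(2)]
for q in agents:
    best = max(range(len(ACTIONS)), key=lambda i: q[N_BATTERY - 1][i])
    print(best := ACTIONS[best])
```

In this toy setup, each agent learns to collect data when its battery permits and to recharge otherwise. The paper's actual method uses deep networks and a far richer joint state (trajectories, scheduling, energy sharing); this sketch only conveys the MDP-and-independent-agents structure.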

DOI

10.1109/TVT.2023.3330661

Publication Date

1-1-2023

Keywords

Autonomous aerial vehicles, Data collection, Energy consumption, Monitoring, multi-agent deep reinforcement learning, Resource management, resource sharing, Task analysis, task scheduling, trajectory planning, unmanned aerial vehicles, Wireless sensor networks
