Noman, Hafiz Muhammad Fahad and Dimyati, Kaharudin and Noordin, Kamarul Ariffin and Hanafi, Effariza and Abdrabou, Atef (2024) FeDRL-D2D: Federated Deep Reinforcement Learning-Empowered Resource Allocation Scheme for Energy Efficiency Maximization in D2D-Assisted 6G Networks. IEEE Access, 12. pp. 109775-109792. ISSN 2169-3536, DOI https://doi.org/10.1109/ACCESS.2024.3434619.
Full text not available from this repository.

Abstract
Device-to-device (D2D)-assisted 6G networks are expected to support the proliferation of ubiquitous mobile applications by enhancing system capacity and overall energy efficiency towards a connected, sustainable world. However, the stringent quality of service (QoS) requirements for ultra-massive connectivity, limited network resources, and interference management are significant challenges to deploying multiple device-to-device pairs (DDPs) without disrupting cellular users. Hence, intelligent resource management and power control are indispensable for alleviating interference among DDPs to optimize overall system performance and global energy efficiency. Considering this, we present a federated DRL-based method for energy-efficient resource management in a D2D-assisted heterogeneous network (HetNet). We formulate a joint optimization problem of power control and channel allocation to maximize the system's energy efficiency under QoS constraints for cellular user equipment (CUEs) and DDPs. The proposed scheme employs federated learning as a decentralized training paradigm to preserve user privacy, and a double deep Q-network (DDQN) is used for intelligent resource management. The proposed DDQN method uses two separate Q-networks for action selection and target estimation to rationalize the transmit power and dynamic channel selection, in which DDPs, acting as agents, can reuse the uplink channels of CUEs. Simulation results show that the proposed method improves overall system energy efficiency by 41.52% and achieves a sum rate 11.65%, 24.78%, and 47.29% higher than multi-agent actor-critic (MAAC), distributed deep-deterministic policy gradient (D3PG), and deep Q-network (DQN) scheduling, respectively.
Moreover, the proposed scheme achieves a 5.88%, 15.79%, and 27.27% reduction in cellular outage probability compared to MAAC, D3PG, and DQN scheduling, respectively, which makes it a robust solution for energy-efficient resource allocation in D2D-assisted 6G networks.
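The abstract's core mechanism is the double-DQN update: the online Q-network selects the greedy next action, while the separate target Q-network evaluates it, which reduces the overestimation bias of plain DQN. A minimal sketch of that target computation is below; the function and variable names are illustrative and not taken from the paper, and the Q-values are stubbed arrays rather than real network outputs.

```python
import numpy as np

def ddqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-DQN target: y = r + gamma * Q_target(s', argmax_a Q_online(s', a)).

    q_online_next / q_target_next are the two networks' Q-value vectors
    for the next state s' (one entry per action, e.g. per candidate
    uplink channel a D2D pair could reuse).
    """
    a_star = int(np.argmax(q_online_next))      # action chosen by the online net
    bootstrap = 0.0 if done else q_target_next[a_star]  # evaluated by the target net
    return reward + gamma * bootstrap

# Toy example with 3 actions:
q_online_next = np.array([1.0, 2.5, 0.3])   # online net prefers action 1
q_target_next = np.array([0.8, 1.2, 2.0])   # target net's value for action 1 is 1.2
y = ddqn_target(reward=0.5, gamma=0.9,
                q_online_next=q_online_next,
                q_target_next=q_target_next,
                done=False)
# y = 0.5 + 0.9 * 1.2 = 1.58
```

Note the contrast with vanilla DQN, which would take `max(q_target_next)` (here 2.0) for both selection and evaluation; decoupling the two roles is exactly what the paper's "two separate Q-networks for action selection and target estimation" refers to.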
| Item Type: | Article |
|---|---|
| Funders: | Ministry of Higher Education Malaysia under the Fundamental Research Grant Scheme (FRGS/1/2020/TK0/UM/02/30) |
| Uncontrolled Keywords: | 6G; device-to-device communications; double deep Q-network (DDQN); energy efficiency; federated-deep reinforcement learning (F-DRL); resource allocation |
| Subjects: | Q Science > QA Mathematics > QA75 Electronic computers. Computer science; T Technology > TK Electrical engineering. Electronics. Nuclear engineering |
| Divisions: | Faculty of Engineering > Department of Electrical Engineering |
| Depositing User: | Ms. Juhaida Abd Rahim |
| Date Deposited: | 22 Nov 2024 04:56 |
| Last Modified: | 22 Nov 2024 04:56 |
| URI: | http://eprints.um.edu.my/id/eprint/47101 |