Title AI Empowered RIS-Assisted NOMA Networks: Deep Learning or Reinforcement Learning?
Authors Zhong, Ruikang
Liu, Yuanwei
Mu, Xidong
Chen, Yue
Song, Lingyang
Affiliation Queen Mary Univ London, Sch Elect Engn & Comp Sci EECS, London E1 4NS, England
Beijing Univ Posts & Telecommun, Key Lab Universal Wireless Commun, Minist Educ, Beijing 100876, Peoples R China
Beijing Univ Posts & Telecommun, Sch Artificial Intelligence, Beijing 100876, Peoples R China
Peking Univ, Dept Elect, Beijing 100871, Peoples R China
Keywords RECONFIGURABLE INTELLIGENT SURFACES
PASSIVE BEAMFORMING DESIGN
REFLECTING SURFACE
MULTIPLE-ACCESS
PERFORMANCE
SYSTEMS
Issue Date Jan-2022
Publisher IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
Abstract A reconfigurable intelligent surface (RIS)-assisted multi-user downlink communication system over fading channels is investigated, where both non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA) schemes are employed. In particular, the time overhead for configuring the RIS reflective elements at the beginning of each channel block is considered. The optimization goal is to maximize the effective throughput of the entire transmission period by jointly optimizing the phase shifts of the RIS and the power allocation of the access point (AP) for each channel block. In an effort to solve the formulated problem and fill the research gap in performance comparisons between different machine learning tools in wireless networks, a deep learning (DL) approach and a reinforcement learning (RL) approach are proposed, and their respective strengths and weaknesses are investigated. The DL approach can locate the optimal phase shifts through deep neural network fitting, as well as the corresponding power allocation for each user. From the perspective of long-term reward, phase shift control with configuration overhead can be regarded as a Markov decision process (MDP), and RL algorithms are proficient at solving such problems with the assistance of the Bellman equation. The numerical results indicate that: 1) from the perspective of the wireless network, NOMA can achieve a throughput gain of about 42% compared with OMA; 2) the well-trained RL and DL agents achieve the same performance in the Rician channel, while RL is superior in the Rayleigh channel; 3) the DL approach has lower complexity and faster convergence, while the RL approach has preferable strategy flexibility.
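The abstract frames phase shift control with configuration overhead as a Markov decision process solved via the Bellman equation. The following is a minimal illustrative sketch of that idea — tabular Q-learning over a toy reward table — not the authors' implementation; the state/action sizes, the reward values (a stand-in for effective throughput per discrete phase-shift configuration), and the hyperparameters are all hypothetical.

```python
import numpy as np

# Hypothetical setup: states index channel-block conditions, actions index
# quantized RIS phase-shift configurations. The reward table is a toy
# stand-in for effective throughput after configuration overhead.
reward = np.array([
    [0.2, 0.9, 0.1, 0.4],
    [0.8, 0.3, 0.5, 0.1],
    [0.1, 0.2, 0.4, 0.95],
])
n_states, n_actions = reward.shape

rng = np.random.default_rng(0)
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration
Q = np.zeros((n_states, n_actions))

state = 0
for step in range(20000):
    # epsilon-greedy selection over phase-shift configurations
    if rng.random() < eps:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    r = reward[state, action]
    next_state = int(rng.integers(n_states))   # toy block-fading transition
    # Bellman update on the state-action value
    Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

# Greedy policy: the best phase-shift configuration learned for each state.
greedy = Q.argmax(axis=1)
```

After training, `greedy` recovers the highest-reward configuration per state, mirroring how an RL agent would learn a long-term-reward-optimal phase-shift policy.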
URI http://hdl.handle.net/20.500.11897/632777
ISSN 0733-8716
DOI 10.1109/JSAC.2021.3126068
Indexed SCI(E)
Appears in Collections: School of Information Science and Technology (信息科学技术学院)

Files in This Work
There are no files associated with this item.

License: See PKU IR operational policies.