Wireless Sensor Networks (WSNs) play a crucial role in numerous Internet of Things (IoT) applications and next-generation communication systems, yet they continue to face challenges in balancing energy efficiency and reliable connectivity. This study proposes SAC-HTC (Soft Actor-Critic-based High-performance Topology Control), a deep reinforcement learning (DRL) method based on the Actor-Critic framework, implemented within a Software-Defined Wireless Sensor Network (SDWSN) architecture. In this approach, sensor nodes periodically transmit state information, including coordinates, node degree, transmission power, and neighbor lists, to a centralized controller. The controller acts as the reinforcement learning (RL) agent: the Actor generates decisions to adjust transmission ranges, while the Critic evaluates action values to reflect overall network performance. A bidirectional node-controller feedback mechanism enables the controller to issue appropriate control commands to each node, maintaining the desired node degree, reducing energy consumption, and preserving network connectivity. The algorithm further incorporates soft entropy adjustment to balance exploration and exploitation, along with an off-policy mechanism for efficient data reuse, making it well suited to the resource-constrained conditions of WSNs. Simulation results demonstrate that SAC-HTC not only outperforms traditional methods and several existing RL algorithms but also achieves faster convergence, optimized communication-range control, global connectivity maintenance, and extended network lifetime. The key novelty of this research lies in the integration of the SAC method with the SDWSN architecture for WSN topology control, providing an adaptive, efficient, and highly promising mechanism for large-scale, dynamic, and high-performance sensor networks.
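The abstract describes three ingredients of the controller's learning loop: a reward that trades off node degree against transmission energy, the soft (Polyak) target update characteristic of SAC, and an off-policy replay buffer for data reuse. A minimal sketch of these pieces is shown below; the function names, weights (`w_deg`, `w_energy`), and buffer capacity are illustrative assumptions, not the paper's actual implementation, and the neural Actor/Critic networks are omitted.

```python
import random

def topology_reward(degree, target_degree, tx_range, max_range,
                    w_deg=1.0, w_energy=0.5):
    """Illustrative per-node reward: penalize deviation from the desired
    node degree and the normalized transmission range (an energy proxy).
    The weights w_deg and w_energy are assumed, not from the paper."""
    return -w_deg * abs(degree - target_degree) - w_energy * (tx_range / max_range)

def soft_update(target_params, online_params, tau=0.005):
    """Polyak averaging, as used by SAC to slowly track target-critic
    weights: target <- tau * online + (1 - tau) * target."""
    return [tau * o + (1.0 - tau) * t
            for t, o in zip(target_params, online_params)]

class ReplayBuffer:
    """Off-policy experience store: past transitions are sampled
    repeatedly for gradient updates instead of being used once."""
    def __init__(self, capacity=10_000):
        self.capacity = capacity
        self.data = []

    def push(self, transition):
        # Evict the oldest transition once capacity is reached.
        if len(self.data) >= self.capacity:
            self.data.pop(0)
        self.data.append(transition)

    def sample(self, batch_size):
        return random.sample(self.data, min(batch_size, len(self.data)))
```

In a full SAC-HTC agent, `topology_reward` would be computed from the state reports each node sends to the controller, the Actor would emit a transmission-range action per node, and `soft_update` would be applied to the Critic's target network after each minibatch drawn from the replay buffer.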