Abstract: In indoor visible light communication, inter-symbol interference and noise severely degrade system performance. K-means equalization can suppress the effects of the optical wireless channel, but its complexity is high and misjudgments easily occur at cluster boundaries. An improved-center K-means (Improved Center K-means, IC-Kmeans) algorithm is proposed: a sufficiently long training sequence is randomly generated, and the mean of each cluster of the training sequence is taken as the K-means cluster center, avoiding the repeated iterations that conventional K-means needs to find the cluster centers. Further, a neural-network-based IC-Kmeans (Neural Network Based IC-Kmeans, NNIC-Kmeans) algorithm is proposed, which uses a back-propagation neural network to map the two-dimensional data at the receiver into a three-dimensional space, increasing the distance between mixed data of different clusters and improving classification accuracy. Monte Carlo bit-error-rate simulations show that IC-Kmeans equalization achieves bit-error-rate performance comparable to conventional K-means while significantly reducing complexity, especially at low signal-to-noise ratios. Moreover, under an indoor multipath channel model, the optical OFDM system with NNIC-Kmeans equalization achieves the best bit-error-rate performance compared with IC-Kmeans and conventional K-means equalization.
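The key idea of IC-Kmeans, as described in the abstract, is to replace the iterative center search of standard K-means with cluster centers computed directly as the per-cluster means of a known training sequence, after which decisions reduce to nearest-center assignment. The following is a minimal Python/NumPy sketch of that idea; the function names (ic_kmeans_centers, assign_symbols) and the synthetic 4-point constellation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ic_kmeans_centers(train_rx, train_labels, n_clusters):
    """Estimate cluster centers as the mean of the received training samples
    belonging to each known transmitted symbol, avoiding the iterative
    center search of standard K-means."""
    centers = np.zeros((n_clusters, train_rx.shape[1]))
    for k in range(n_clusters):
        centers[k] = train_rx[train_labels == k].mean(axis=0)
    return centers

def assign_symbols(rx, centers):
    """Equalize/decide by nearest-center (Euclidean) assignment."""
    d = np.linalg.norm(rx[:, None, :] - centers[None, :, :], axis=-1)
    return np.argmin(d, axis=1)

# Illustrative usage with a synthetic 2-D received constellation
rng = np.random.default_rng(0)
true_centers = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
train_labels = rng.integers(0, 4, size=2000)
train_rx = true_centers[train_labels] + 0.2 * rng.standard_normal((2000, 2))
centers = ic_kmeans_centers(train_rx, train_labels, n_clusters=4)
decisions = assign_symbols(train_rx, centers)
```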
Abstract: To address the problem that the filtered-x least mean square (FxLMS) algorithm cannot balance convergence speed and steady-state error in active noise control, a piecewise variable step size FxLMS algorithm based on a logarithmic function (PLFxLMS) is proposed, and a genetic algorithm is introduced to optimize the parameters of the logarithmic variable step size FxLMS (LFxLMS), improved logarithmic variable step size FxLMS (ILFxLMS), and PLFxLMS algorithms. With band-limited white noise as the input signal, active noise control simulations are conducted with the FxLMS, LFxLMS, ILFxLMS, and PLFxLMS algorithms, and the convergence speed and steady-state characteristics of the four algorithms are comparatively analyzed. Compared with the other three algorithms, the proposed PLFxLMS algorithm achieves the fastest convergence speed and a small steady-state error. The PLFxLMS algorithm thus effectively mitigates the FxLMS algorithm's inability to control convergence speed and steady-state error simultaneously and achieves the best overall performance.
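The abstract does not give the exact piecewise logarithmic step-size rule, so the sketch below is only one plausible instantiation, assuming a constant large step for large errors, a logarithmic transition region, and a small constant step near convergence; all parameter values (mu_max, mu_min, e_lo, e_hi, alpha) and the path models in the usage snippet are hypothetical, and the genetic-algorithm parameter search is omitted.

```python
import numpy as np

def piecewise_log_step(e, mu_max=1e-2, mu_min=1e-4, e_lo=0.01, e_hi=1.0, alpha=5.0):
    """Illustrative piecewise logarithmic step-size rule (not the paper's exact
    formula): large constant step for large |e|, logarithmic decay in between,
    small constant step for small |e|."""
    a = abs(e)
    if a >= e_hi:
        return mu_max
    if a <= e_lo:
        return mu_min
    frac = np.log(1 + alpha * (a - e_lo) / (e_hi - e_lo)) / np.log(1 + alpha)
    return mu_min + (mu_max - mu_min) * frac

def plfxlms(x, d, s_hat, n_taps=64):
    """Variable-step FxLMS loop: x is the reference signal, d the disturbance
    at the error microphone, s_hat the secondary-path estimate (assumed
    shorter than n_taps). Returns the residual error signal."""
    w = np.zeros(n_taps)                       # control filter weights
    x_buf = np.zeros(n_taps)                   # reference signal buffer
    fx_buf = np.zeros(n_taps)                  # filtered-reference buffer
    y_buf = np.zeros(len(s_hat))               # anti-noise buffer (secondary path input)
    e = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        y = w @ x_buf                          # anti-noise output
        y_buf = np.roll(y_buf, 1); y_buf[0] = y
        e[n] = d[n] - s_hat @ y_buf            # residual at the error microphone
        fx = s_hat @ x_buf[:len(s_hat)]        # reference filtered by secondary-path estimate
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
        w += piecewise_log_step(e[n]) * e[n] * fx_buf  # variable-step weight update
    return e

# Illustrative usage with band-limited white noise and assumed path models
rng = np.random.default_rng(0)
x = np.convolve(rng.standard_normal(5000), np.ones(8) / 8, mode="same")  # crude band-limiting
p = np.array([0.0, 0.5, 0.3, 0.1])   # hypothetical primary path
s = np.array([0.0, 0.7, 0.2])        # hypothetical secondary path (assumed known)
d = np.convolve(x, p, mode="full")[:len(x)]
err = plfxlms(x, d, s)
```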
Abstract: To address the poor learning performance and large fluctuations of the deep deterministic policy gradient (DDPG) algorithm in some large-state-space tasks, a multi-actor deep deterministic policy gradient based on progressive k-means clustering (MDDPG-PK-Means) algorithm is proposed. During training, when an action is selected for the state at each time step, the k-means classification result assists the decision of the actor networks, and the number of k-means cluster centers is gradually increased as the number of training steps grows. The MDDPG-PK-Means algorithm is evaluated on the MuJoCo simulation platform, and experimental results show that, compared with DDPG and other algorithms, MDDPG-PK-Means achieves better performance on most continuous-control tasks.
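The abstract only outlines the mechanism at a high level, so the following Python sketch is an assumed reading of it: an online k-means structure whose number of centers grows with the training step, and whose cluster label for the current state steers which actor network produces the action. The class and function names (ProgressiveKMeans, select_action), the growth schedule, and the online-update rule are all illustrative, not the paper's specification.

```python
import numpy as np

class ProgressiveKMeans:
    """Illustrative progressive k-means: starts with one center and adds a new
    center every `grow_every` steps (up to `max_k`), updating centers online
    with a small learning rate."""
    def __init__(self, state_dim, max_k=8, grow_every=10000, lr=0.01):
        self.centers = [np.zeros(state_dim)]
        self.max_k, self.grow_every, self.lr, self.t = max_k, grow_every, lr, 0

    def assign(self, state):
        d = [np.linalg.norm(state - c) for c in self.centers]
        return int(np.argmin(d))

    def update(self, state):
        self.t += 1
        k = self.assign(state)
        self.centers[k] += self.lr * (state - self.centers[k])   # online mean update
        if self.t % self.grow_every == 0 and len(self.centers) < self.max_k:
            self.centers.append(state.copy())                    # grow the cluster count
        return k

def select_action(state, actors, pkm):
    """Sketch of cluster-assisted action selection: the k-means label of the
    current state picks which actor (a callable policy) acts at this step."""
    k = pkm.update(np.asarray(state, dtype=float))
    return actors[min(k, len(actors) - 1)](state)
```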