Journal Articles
291 articles found
Classifying Multi-Lingual Reviews Sentiment Analysis in Arabic and English Languages Using the Stochastic Gradient Descent Model
1
Authors: Yasser Alharbi, Sarwar Shah Khan. 《Computers, Materials & Continua》, 2025, No. 4, pp. 1275-1290 (16 pages)
Sentiment analysis plays an important role in distilling and clarifying content from movie reviews, aiding the audience in understanding universal views towards the movie. However, the abundance of reviews and the risk of encountering spoilers pose challenges for efficient sentiment analysis, particularly in Arabic content. This study proposed a Stochastic Gradient Descent (SGD) machine learning (ML) model tailored for sentiment analysis in Arabic and English movie reviews. SGD allows for flexible model complexity adjustments, which can adapt well to the involvement of Arabic language data. This adaptability ensures that the model can capture the nuances and specific local patterns of Arabic text, leading to better performance. Two distinct language datasets were utilized, and extensive pre-processing steps were employed to optimize the datasets for analysis. The proposed SGD model, designed to accommodate the nuances of each language, aims to surpass existing models in terms of accuracy and efficiency. The SGD model achieves an accuracy of 84.89 on the Arabic dataset and 87.44 on the English dataset, making it the top-performing model in terms of accuracy on both datasets. This indicates that the SGD model consistently demonstrates high accuracy levels across Arabic and English datasets. This study helps deepen the understanding of sentiments across various linguistic datasets. Unlike many studies that focus solely on movie reviews, the Arabic dataset utilized here includes hotel reviews, offering a broader perspective.
Keywords: sentiment analysis; stochastic gradient descent; reviews; English; IMDb dataset; Arabic dataset
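The core SGD training loop behind a classifier like the one above can be sketched in a few lines. This is an illustrative toy, not the paper's model: plain logistic regression trained by per-sample gradient steps on invented bag-of-words style features.

```python
import math
import random

# Toy illustration (not the paper's model): logistic regression trained by
# SGD on invented bag-of-words style features for binary sentiment labels.
def sgd_epoch(X, y, w, lr=0.1):
    order = list(range(len(X)))
    random.shuffle(order)                   # visit samples in random order
    for i in order:
        z = sum(wj * xj for wj, xj in zip(w, X[i]))
        p = 1.0 / (1.0 + math.exp(-z))      # sigmoid
        g = p - y[i]                        # d(log-loss)/dz
        w = [wj - lr * g * xj for wj, xj in zip(w, X[i])]
    return w

random.seed(0)
X = [[1, 1, 0], [1, 0, 1], [0, 1, 1], [0, 0, 1]]   # invented feature vectors
y = [1, 1, 0, 0]                                   # 1 = positive review
w = [0.0, 0.0, 0.0]
for _ in range(50):
    w = sgd_epoch(X, y, w)
preds = [1 if sum(wj * xj for wj, xj in zip(w, x)) > 0 else 0 for x in X]
print(preds)
```

Shuffling the sample order each epoch is the "stochastic" part; in practice one would reach for a library implementation such as scikit-learn's SGDClassifier.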
Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning (Cited by 7)
2
Authors: Xin Luo, Wen Qin, Ani Dong, Khaled Sedraoui, MengChu Zhou. 《IEEE/CAA Journal of Automatica Sinica》 (SCIE, EI, CSCD), 2021, No. 2, pp. 402-411 (10 pages)
A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. Aiming at addressing this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating the convergence rate by integrating momentum effects into its training process. With it, an MPSGD-based latent factor (MLF) model is achieved, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RS indicate that, owing to the MPSGD algorithm, an MLF model outperforms the existing state-of-the-art ones in both computational efficiency and scalability.
Keywords: big data; industrial application; industrial data; latent factor analysis; machine learning; parallel algorithm; recommender system (RS); stochastic gradient descent (SGD)
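The momentum mechanism that MPSGD incorporates can be isolated in a one-dimensional sketch. The parallel data-splitting half of the algorithm is omitted; the quadratic objective and the constants are invented for illustration.

```python
# Minimize f(w) = (w - 3)^2 with the momentum (heavy-ball) update that
# MPSGD builds on; learning rate and momentum factor are illustrative.
def momentum_descent(grad, w, steps=300, lr=0.1, beta=0.9):
    v = 0.0
    for _ in range(steps):
        v = beta * v + grad(w)   # velocity accumulates past gradients
        w = w - lr * v           # step along the smoothed direction
    return w

w_star = momentum_descent(lambda w: 2.0 * (w - 3.0), 0.0)
print(w_star)   # converges to the minimizer 3
```

The velocity term averages recent gradients, which damps oscillation and speeds convergence on ill-conditioned objectives.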
L1-Smooth SVM with Distributed Adaptive Proximal Stochastic Gradient Descent with Momentum for Fast Brain Tumor Detection
3
Authors: Chuandong Qin, Yu Cao, Liqun Meng. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 5, pp. 1975-1994 (20 pages)
Brain tumors come in various types, each with distinct characteristics and treatment approaches, making manual detection a time-consuming and potentially ambiguous process. Brain tumor detection is a valuable tool for gaining a deeper understanding of tumors and improving treatment outcomes. Machine learning models have become key players in automating brain tumor detection. Gradient descent methods are the mainstream algorithms for solving machine learning models. In this paper, we propose a novel distributed proximal stochastic gradient descent approach to solve the L1-Smooth Support Vector Machine (SVM) classifier for brain tumor detection. Firstly, the smooth hinge loss is introduced as the loss function of the SVM. It avoids the issue of non-differentiability at the zero point encountered by the traditional hinge loss function during gradient descent optimization. Secondly, the L1 regularization method is employed to sparsify features and enhance the robustness of the model. Finally, adaptive proximal stochastic gradient descent (PGD) with momentum, and distributed adaptive PGD with momentum (DPGD), are proposed and applied to the L1-Smooth SVM. Distributed computing is crucial in large-scale data analysis; its value lies in extending algorithms to distributed clusters, enabling more efficient processing of massive amounts of data. The DPGD algorithm leverages Spark, enabling full utilization of the computer's multi-core resources. Due to the sparsity induced by L1 regularization on parameters, it exhibits significantly accelerated convergence speed. From the perspective of loss reduction, DPGD converges faster than PGD. The experimental results show that adaptive PGD with momentum and its variants achieve cutting-edge accuracy and efficiency in brain tumor detection. From pre-trained models, both PGD and DPGD outperform other models, boasting an accuracy of 95.21%.
Keywords: support vector machine; proximal stochastic gradient descent; brain tumor detection; distributed computing
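The proximal step that pairs with L1 regularization has a closed form, soft-thresholding, which is what drives small coordinates exactly to zero. A minimal sketch with invented numbers; this is the generic operator, not the paper's full DPGD pipeline.

```python
# Generic proximal step for L1 regularization: gradient step, then
# soft-thresholding (the closed-form prox of lam * ||w||_1). Numbers
# are invented; this is not the full DPGD pipeline.
def soft_threshold(x, t):
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def prox_grad_step(w, grad, lr, lam):
    return [soft_threshold(wi - lr * gi, lr * lam) for wi, gi in zip(w, grad)]

w = prox_grad_step([0.5, -0.01, 0.2], [0.0, 0.0, 0.0], lr=0.1, lam=0.3)
print(w)   # the tiny middle coordinate is driven exactly to zero
```

Because the threshold zeroes coordinates outright rather than merely shrinking them, iterates stay sparse, which is what the abstract credits for the accelerated convergence.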
Distributed stochastic mirror descent algorithm for resource allocation problem (Cited by 2)
4
Authors: Yinghui Wang, Zhipeng Tu, Huashu Qin. 《Control Theory and Technology》 (EI, CSCD), 2020, No. 4, pp. 339-347 (9 pages)
In this paper, we consider a distributed resource allocation problem of minimizing a global convex function formed by a sum of local convex functions with coupling constraints. Based on neighbor communication and stochastic gradients, a distributed stochastic mirror descent algorithm is designed for the distributed resource allocation problem. Sublinear convergence to an optimal solution is established when the second moments of the gradient noises are summable. A numerical example is also given to illustrate the effectiveness of the proposed algorithm.
Keywords: distributed; resource allocation problem; stochastic gradient; mirror descent
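A common concrete instance of stochastic mirror descent, natural for allocation over a probability simplex, uses the entropic mirror map, giving multiplicative updates followed by renormalization. The cost vector below is invented for illustration.

```python
import math

# Entropic mirror descent on the probability simplex: minimize <c, x>
# subject to x being a probability vector. The cost vector c is invented.
def md_step(x, grad, eta):
    z = [xi * math.exp(-eta * gi) for xi, gi in zip(x, grad)]  # mirror step
    s = sum(z)
    return [zi / s for zi in z]                                # stay on simplex

c = [0.9, 0.2, 0.5]          # per-resource linear costs
x = [1 / 3, 1 / 3, 1 / 3]
for _ in range(200):
    x = md_step(x, c, eta=0.1)  # gradient of <c, x> is just c
print([round(xi, 3) for xi in x])
```

The iterate shifts almost all mass onto the cheapest resource (index 1) while remaining a valid allocation at every step.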
Stochastic Gradient Compression for Federated Learning over Wireless Network (Cited by 1)
5
Authors: Lin Xiaohan, Liu Yuan, Chen Fangjiong, Huang Yang, Ge Xiaohu. 《China Communications》 (SCIE, CSCD), 2024, No. 4, pp. 230-247 (18 pages)
As a mature distributed machine learning paradigm, federated learning enables wireless edge devices to collaboratively train a shared AI model by stochastic gradient descent (SGD). However, devices need to upload high-dimensional stochastic gradients to the edge server during training, which causes a severe communication bottleneck. To address this problem, we compress the communication by sparsifying and quantizing the stochastic gradients of edge devices. We first derive a closed form of the communication compression in terms of sparsification and quantization factors. Then, the convergence rate of this communication-compressed system is analyzed and several insights are obtained. Finally, we formulate and solve the quantization resource allocation problem of minimizing the convergence upper bound, under the constraint of multiple-access channel capacity. Simulations show that the proposed scheme outperforms the benchmarks.
Keywords: federated learning; gradient compression; quantization; resource allocation; stochastic gradient descent (SGD)
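The two compression steps the abstract mentions, sparsification and quantization, can be sketched directly. The top-k rule and the uniform quantization grid below are generic illustrations, not the paper's exact operators.

```python
# Generic sketch of gradient compression: keep the top-k entries by
# magnitude (sparsification), then snap survivors to a uniform grid
# (quantization). The rules and factors are illustrative.
def compress(grad, k, levels):
    keep = sorted(range(len(grad)), key=lambda i: -abs(grad[i]))[:k]
    sparse = [g if i in keep else 0.0 for i, g in enumerate(grad)]
    m = max(abs(g) for g in sparse) or 1.0
    step = 2 * m / (levels - 1)            # uniform grid over [-m, m]
    return [round(g / step) * step for g in sparse]

cg = compress([0.81, -0.05, 0.02, -0.97, 0.40], k=3, levels=5)
print(cg)   # only 3 nonzeros survive, each snapped to the grid
```

The device then uploads only the surviving indices and their quantized levels, which is where the communication savings come from.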
New logarithmic step size for stochastic gradient descent (Cited by 1)
6
Authors: Mahsa Soheil SHAMAEE, Sajad Fathi HAFSHEJANI, Zeinab SAEIDIAN. 《Frontiers of Computer Science》, 2025, No. 1, pp. 109-118 (10 pages)
In this paper, we propose a novel warm restart technique using a new logarithmic step size for the stochastic gradient descent (SGD) approach. For smooth and non-convex functions, we establish an O(1/√T) convergence rate for SGD. We conduct a comprehensive implementation to demonstrate the efficiency of the newly proposed step size on the FashionMNIST, CIFAR10, and CIFAR100 datasets. Moreover, we compare our results with nine other existing approaches and demonstrate that the new logarithmic step size improves test accuracy by 0.9% on the CIFAR100 dataset when we utilize a convolutional neural network (CNN) model.
Keywords: stochastic gradient descent; logarithmic step size; warm restart technique
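The paper's exact schedule is not reproduced here. Purely as an illustrative contrast, the sketch below compares a standard 1/√t decay with a slower, logarithmic-style decay; both formulas and constants are assumptions for demonstration, not the paper's step size.

```python
import math

def sqrt_decay(t, eta0=0.1):
    # classic eta_t = eta0 / sqrt(t + 1)
    return eta0 / math.sqrt(t + 1)

def log_decay(t, eta0=0.1, c=0.5):
    # slower, logarithmic-style decay (illustrative form, not the paper's)
    return eta0 / (1 + c * math.log(t + 1))

for t in (0, 99, 9999):
    print(t, round(sqrt_decay(t), 5), round(log_decay(t), 5))
```

The logarithmic-style schedule keeps the step size comparatively large late in training, which is the qualitative behavior such step sizes trade on.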
MINI-BATCH STOCHASTIC CONJUGATE GRADIENT ALGORITHMS WITH MINIMAL VARIANCE
7
Authors: Caixia Kou, Feifei Gao, Yu-Hong Dai. 《Journal of Computational Mathematics》, 2025, No. 5, pp. 1045-1062 (18 pages)
Stochastic gradient descent (SGD) methods have gained widespread popularity for solving large-scale optimization problems. However, the inherent variance in SGD often leads to slow convergence rates. We introduce a family of unbiased stochastic gradient estimators that encompasses existing estimators from the literature and identify a gradient estimator that not only maintains unbiasedness but also achieves minimal variance. Compared with the existing estimator used in SGD algorithms, the proposed estimator demonstrates a significant reduction in variance. By utilizing this stochastic gradient estimator to approximate the full gradient, we propose two mini-batch stochastic conjugate gradient algorithms with minimal variance. Under the assumptions of strong convexity and smoothness of the objective function, we prove that the two algorithms achieve linear convergence rates. Numerical experiments validate the effectiveness of the proposed gradient estimator in reducing variance and demonstrate that the two stochastic conjugate gradient algorithms exhibit accelerated convergence rates and enhanced stability.
Keywords: stochastic gradient descent; minimal variance; stochastic conjugate gradient; stochastic gradient estimator
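A standard way to build an unbiased, lower-variance gradient estimator of the kind described above is the control-variate form g_i(w) - g_i(w_ref) + ∇F(w_ref). The toy one-dimensional least-squares data below are invented; this illustrates the general construction, not the paper's specific estimator family.

```python
import random

# Control-variate estimator: grad_i(w) - grad_i(w_ref) + full_grad(w_ref)
# is unbiased for the full gradient and low-variance when w is near w_ref.
# The one-dimensional least-squares data below are invented.
def grad_i(i, w, data):
    x, y = data[i]
    return 2 * x * (x * w - y)            # gradient of (x*w - y)^2

def vr_grad(w, w_ref, full_ref, data, rng):
    i = rng.randrange(len(data))
    return grad_i(i, w, data) - grad_i(i, w_ref, data) + full_ref

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]
w_ref = 1.9
full_ref = sum(grad_i(i, w_ref, data) for i in range(len(data))) / len(data)
rng = random.Random(0)
est = sum(vr_grad(2.0, w_ref, full_ref, data, rng) for _ in range(20000)) / 20000
full = sum(grad_i(i, 2.0, data) for i in range(len(data))) / len(data)
print(abs(est - full))   # small: the estimator averages to the full gradient
```

Averaging many draws recovers the full gradient (unbiasedness), while each individual draw fluctuates far less than a raw per-sample gradient.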
Convergence of Stochastic Gradient Descent in Deep Neural Network (Cited by 4)
8
Authors: Bai-cun ZHOU, Cong-ying HAN, Tian-de GUO. 《Acta Mathematicae Applicatae Sinica》 (SCIE, CSCD), 2021, No. 1, pp. 126-136 (11 pages)
Stochastic gradient descent (SGD) is one of the most common optimization algorithms used in pattern recognition and machine learning. This algorithm and its variants are the preferred algorithms for optimizing the parameters of deep neural networks, owing to their low storage-space requirements and fast computation speed. Previous studies on the convergence of these algorithms were based on some traditional assumptions in optimization problems. However, the deep neural network has its own unique properties, and some of those assumptions are inappropriate in the actual optimization process of this kind of model. In this paper, we modify the assumptions to make them more consistent with the actual optimization process of deep neural networks. Based on the new assumptions, we study the convergence and convergence rate of SGD and its two common variant algorithms. In addition, we carry out numerical experiments with LeNet-5, a common network framework, on the MNIST dataset to verify the rationality of our assumptions.
Keywords: stochastic gradient descent; deep neural network; convergence
A Stochastic Gradient Descent Method for Computational Design of Random Rough Surfaces in Solar Cells
9
Authors: Qiang Li, Gang Bao, Yanzhao Cao, Junshan Lin. 《Communications in Computational Physics》 (SCIE), 2023, No. 10, pp. 1361-1390 (30 pages)
In this work, we develop a stochastic gradient descent method for the computational optimal design of random rough surfaces in thin-film solar cells. We formulate the design problems as random PDE-constrained optimization problems and seek the optimal statistical parameters for the random surfaces. The optimizations at a fixed frequency as well as at multiple frequencies and multiple incident angles are investigated. To evaluate the gradient of the objective function, we derive the shape derivatives for the interfaces and apply the adjoint state method to perform the computation. The stochastic gradient descent method evaluates the gradient of the objective function at only a few samples per iteration, which reduces the computational cost significantly. Various numerical experiments are conducted to illustrate the efficiency of the method and the significant increases in absorptance for the optimal random structures. We also examine the convergence of the stochastic gradient descent algorithm theoretically and prove that the numerical method is convergent under certain assumptions on the random interfaces.
Keywords: optimal design; random rough surface; solar cell; Helmholtz equation; stochastic gradient descent method
Intrusion Detection in Computer Networks Based on an Ensemble Graph Convolutional Neural Network (Cited by 1)
10
Authors: 范申民, 王磊, 张芬. 《自动化与仪器仪表》, 2025, No. 5, pp. 7-11 (5 pages)
To safeguard the security of the network environment, an intrusion detection technique based on an ensemble graph convolutional neural network is proposed. The method uses the stochastic gradient descent algorithm and the Root Mean Square Propagation (RMSProp) optimizer to improve the training efficiency of the detection model and strengthen its classification performance. The results show that the model achieves an intrusion detection accuracy of 96.41% to 97.18%. After the proposed optimization, the intrusion detection technique improves markedly in both training efficiency and training accuracy. The model can classify data according to the source of access, improving the classification of access behavior; this in turn raises the efficiency with which the computer recognizes attacks, strengthens its defenses, and effectively safeguards users' network environment. The study thus provides an effective technical approach for detecting network intrusions.
Keywords: ensemble graph convolutional neural network; network intrusion detection; stochastic gradient descent; RMSProp optimizer
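The RMSProp update named above scales each coordinate's step by a running root-mean-square of its past gradients. A minimal sketch on an invented ill-conditioned quadratic, with illustrative hyperparameters:

```python
# RMSProp step: divide each coordinate's step by a running RMS of its
# gradients; demonstrated on an invented ill-conditioned quadratic.
def rmsprop_step(w, g, s, lr=0.01, rho=0.9, eps=1e-8):
    s = [rho * si + (1 - rho) * gi * gi for si, gi in zip(s, g)]
    w = [wi - lr * gi / (si ** 0.5 + eps) for wi, gi, si in zip(w, g, s)]
    return w, s

w, s = [1.0, 1.0], [0.0, 0.0]
for _ in range(500):
    g = [2 * w[0], 20 * w[1]]     # gradient of w0^2 + 10 * w1^2
    w, s = rmsprop_step(w, g, s)
print(w)   # both coordinates end up near the minimizer (0, 0)
```

The per-coordinate scaling equalizes progress along the steep and shallow directions, which is why it pairs well with plain SGD.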
Physical-Layer Authentication of IoT Devices Based on Channel Characteristics
11
Authors: 江凌云, 史秀秀. 《南京邮电大学学报(自然科学版)》 (PKU Core), 2025, No. 1, pp. 21-28 (8 pages)
Today's IoT devices operate in complex environments with limited resources, and passive physical layer authentication (PLA) based on channel characteristics is well suited to them. However, traditional channel-based PLA collects static features, which leads to low authentication probability over real time-varying channels. To address this problem, a support vector machine (SVM) is used to classify and authenticate the channel features extracted from time-varying channels, and online-learning stochastic gradient descent (SGD) updates the SVM model so that the classifier evolves as the channel changes. In addition, robust principal component analysis (RPCA) reduces the dimensionality of the extracted channel features, lowering the complexity of obtaining the SVM model and suppressing interference from channel noise. Simulation results show that the scheme improves the authentication probability over time-varying channels and enhances robustness.
Keywords: physical layer authentication; support vector machine; stochastic gradient descent; robust principal component analysis
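The online SGD update for an SVM mentioned above amounts to a subgradient step on the hinge loss for each arriving sample. A minimal sketch with an invented two-feature stream, not the paper's channel features:

```python
# Online subgradient step for a linear SVM (hinge loss + L2 shrinkage),
# applied per arriving sample so the model keeps adapting to the stream.
# The two-feature stream below is invented.
def hinge_sgd_update(w, x, y, lr=0.1, lam=0.01):
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    w = [wi * (1 - lr * lam) for wi in w]          # L2 regularization shrink
    if margin < 1:                                 # hinge subgradient active
        w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

stream = [([1.0, 0.2], 1), ([0.9, -0.1], 1), ([-1.0, 0.3], -1), ([-0.8, -0.2], -1)]
w = [0.0, 0.0]
for _ in range(50):                                # replay the toy stream
    for x, y in stream:
        w = hinge_sgd_update(w, x, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1 for x, _ in stream]
print(preds)
```

Because each update touches only the current sample, the classifier can track a slowly drifting channel without retraining from scratch.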
A Federated Anti-Jamming Model Optimization Scheme under Unreliable Communication
12
Authors: 李中捷, 郭海榕, 邱凡. 《中南民族大学学报(自然科学版)》, 2025, No. 6, pp. 826-832 (7 pages)
In Cellular Vehicle-to-Everything (C-V2X) communication scenarios, interference on the wireless channel can cause information loss during transmission. To reduce the impact of such unreliable communication links, the anti-jamming model update mechanism of federated learning with distributed stochastic gradient descent (FL-DSGD) is optimized. The scheme first establishes the communication links between vehicles and the base station and transmits the model parameters over them. When unreliable links leave some model parameters missing in transit, a link-reliability-based mixing weight matrix combines the local models stored on the vehicles with the global model stored at the base station to fill in the lost parameters for the current round of federated model updating. Simulation results show that, under unreliable communication links, the FL-DSGD scheme reaches 90% training accuracy and 85% test accuracy in roughly 50% of the communication rounds required by the distributed baseline scheme.
Keywords: federated learning; Internet of Vehicles; stochastic gradient descent
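The fill-in idea, substituting stored parameters when an upload is lost, can be sketched in a toy aggregator. The equal-weight averaging and all client values below are invented; the paper instead derives mixing weights from link reliability.

```python
# Toy fill-in aggregator: when a client's upload is lost, substitute the
# last stored copy of its parameters so the round can still average.
# Equal weights and all values are invented; the paper derives weights
# from link reliability.
def aggregate(received, cached):
    used = [received[c] if received[c] is not None else cached[c]
            for c in received]
    n = len(used)
    return [sum(p[j] for p in used) / n for j in range(len(used[0]))]

cached = {"v1": [0.8, 0.1], "v2": [1.0, 0.3], "v3": [0.6, 0.2]}
received = {"v1": [1.2, 0.2], "v2": None, "v3": [0.9, 0.4]}  # v2's upload lost
agg = aggregate(received, cached)
print(agg)   # v2's cached [1.0, 0.3] stands in for the lost upload
```

The round completes with a stale but plausible stand-in instead of dropping the client, which is what keeps training progressing over lossy links.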
A Variance-Reduced Federated Optimization Algorithm Based on Data Compression and Gradient Tracking (Cited by 1)
13
Authors: 贾泽慧, 李登辉, 刘治宇, 黄洁茹. 《南京理工大学学报》 (PKU Core), 2025, No. 2, pp. 155-166 (12 pages)
To overcome the computational cost, communication cost, and data heterogeneity challenges of federated learning, this paper proposes a variance-reduced federated optimization algorithm based on data compression and gradient tracking (FedCOMGATE-VR). Unlike traditional federated learning algorithms that rely on simple stochastic gradient estimates, FedCOMGATE-VR introduces variance-reduced stochastic gradient estimation, which permits larger step sizes and thus accelerates convergence; it applies data compression to the uploaded model parameters, reducing communication cost; and it incorporates gradient tracking to accurately follow the deviation between local and global gradients, effectively handling federated settings with heterogeneous data. Theoretically, the paper establishes a sublinear convergence rate in the nonconvex case and a linear convergence rate in the strongly convex case. FedCOMGATE-VR is applied to classification training on the Fashion-MNIST and CIFAR-10 datasets and compared with existing algorithms under different parameter settings (step size, number of local updates, etc.). The experimental results show that FedCOMGATE-VR adapts to complex heterogeneous data environments and, when reaching the same preset training accuracy, reduces the number of communication rounds by about 20% and the total number of iterations by about 66% relative to FedCOMGATE, effectively lowering communication and computation costs.
Keywords: federated learning; stochastic gradient descent; variance reduction; data heterogeneity
A Conflict-Free Parallel Stochastic Gradient Descent Method for Graph Layout
14
Authors: 王智, 薛明亮, 王一凡, 钟发海, 汪云海. 《计算机辅助设计与图形学学报》 (PKU Core), 2025, No. 6, pp. 1063-1072 (10 pages)
The stress model is one of the most common approaches to computing node-link graph layouts. Stochastic gradient descent, with its good convergence, is often used to solve the stress model, but the method is hard to parallelize effectively. Lock-free stochastic gradient descent greatly improves parallel efficiency, yet thread conflicts frequently arise during its solving process, lowering the accuracy of the results. To improve both the efficiency and accuracy of parallel graph layout, a conflict-free parallel stochastic gradient descent method is proposed. First, a stress-model-oriented thread-assignment algorithm places all node pairs involving the same node j in the same thread, guaranteeing conflict-free SGD-based layout computation; then, samples are shuffled only within each thread and less frequently, further improving parallel efficiency. Experiments on 16 real datasets of different sizes, including an application to solving sparsified stress models, show that the method loses no solution accuracy while running more than 10 times faster, demonstrating its efficiency and usability in terms of both layout quality and runtime.
Keywords: graph layout; stochastic gradient descent; parallel computing; graph visualization
A Differentially Private Deep Learning Method Based on Sparse and Smooth Self-Distillation
15
Authors: 赵登峰, 薛大暄, 赵素云, 陈红. 《电子学报》 (PKU Core), 2025, No. 9, pp. 3310-3318 (9 pages)
To reduce the risk of privacy leakage in deep learning, many studies use differential privacy techniques to train neural networks. However, these privacy-preserving methods usually cause a significant drop in model performance. To balance privacy protection and model utility, this paper proposes Differentially Private learning with sparse and smooth Self-Distillation (DP3SD), which enhances the utility of privacy-preserving deep learning through a dual temperature-scaling mechanism. Specifically, the method designs a dual temperature-scaled loss composed of a sparse classification loss and a smooth distillation loss. Applying a lower temperature to the classification loss sharpens the student model's class prediction distribution, reducing the influence of low-probability classes, which are often induced by noise. Conversely, applying a higher temperature to the distillation loss smooths the predictive distributions of the teacher and student models, enabling stable and efficient knowledge transfer under differential privacy constraints. Under the strict privacy guarantees of differentially private stochastic gradient descent, the proposed dual scaling mechanism mitigates noise-induced perturbations and improves the student model's generalization. Extensive experiments on three public datasets show that the proposed method improves model utility while ensuring strict data privacy.
Keywords: deep learning; differential privacy; privacy preservation; knowledge distillation; stochastic gradient descent
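Differentially private SGD, on which the method above builds, rests on one primitive: clip each per-example gradient, average, and add calibrated Gaussian noise. A minimal sketch with illustrative constants, not the paper's settings:

```python
import random

# The DP-SGD primitive: clip each per-example gradient to L2 norm C,
# average, then add Gaussian noise scaled by C. Constants illustrative.
def dp_average(grads, clip_c, sigma, rng):
    clipped = []
    for g in grads:
        norm = sum(v * v for v in g) ** 0.5
        scale = min(1.0, clip_c / norm) if norm > 0 else 1.0
        clipped.append([v * scale for v in g])
    n = len(grads)
    return [sum(g[j] for g in clipped) / n + rng.gauss(0.0, sigma * clip_c / n)
            for j in range(len(grads[0]))]

rng = random.Random(0)
noisy = dp_average([[3.0, 4.0], [0.3, 0.4]], clip_c=1.0, sigma=0.1, rng=rng)
print(noisy)   # near [0.45, 0.6]: the norm-5 gradient was clipped to norm 1
```

Clipping bounds each example's influence, and the added noise is what the dual temperature scaling above is designed to compensate for.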
A Communication Optimization Strategy for Distributed Deep Learning Based on Sample Importance (Cited by 1)
16
Authors: 蒙玉功. 《现代电子技术》 (PKU Core), 2025, No. 13, pp. 77-82 (6 pages)
Compute nodes in distributed deep learning must frequently exchange gradient data with the server, incurring substantial communication overhead. To address this problem, a communication optimization strategy for distributed deep learning based on sample importance is proposed, with three main components. First, exploratory experiments characterize the distribution of sample importance; second, sample importance is evaluated via the cross-entropy loss; finally, combined with a network-state-awareness mechanism that uses end-to-end network latency as the feedback indicator, compute nodes dynamically adjust the compression ratio of the transmitted gradients, reducing network traffic while preserving model convergence and thereby improving the training efficiency of distributed deep learning. Experimental results show that the proposed method effectively improves communication efficiency across distributed training scenarios of different scales and reduces distributed training time by up to 40% compared with existing gradient compression strategies.
Keywords: distributed deep learning; stochastic gradient descent; sample importance; cross entropy; network state awareness; dynamic compression
Automatic Registration of Images Automatically Captured by Power-Inspection UAVs Based on Convolutional Neural Networks
17
Authors: 王霞, 崔霞, 侯丹, 李博. 《电子设计工程》, 2025, No. 18, pp. 188-191, 196 (5 pages)
To address the difficulty of registering images automatically captured by power-inspection UAVs, an automatic registration method based on convolutional neural networks (CNN) is studied. The CNN model is trained with an asynchronous stochastic gradient descent algorithm to extract deep features from power-inspection images. Feature-point pairs are matched by the Euclidean distance between features, and geometric-similarity-based removal of mismatched points ensures matching accuracy. Combined with an affine transformation model, the transform coefficients are computed from the best-matching feature points to complete automatic image registration. Experimental results show that the method effectively extracts features from the automatically captured images and obtains matched feature-point pairs; it effectively removes mismatched pairs and achieves automatic registration. After parameter optimization, the Dice coefficient improves markedly, rising from 0.839 to 0.947 at a 45° rotation, while the APD generally decreases, falling from 2.48 to 1.36 at 1.0x scaling, indicating that the method achieves high registration accuracy throughout.
Keywords: convolutional neural network; stochastic gradient descent; Euclidean distance; geometric similarity; mismatch removal; affine model
Closed-Loop Control of Short-Pulse Temporal Coherent Stacking in a GTI Cavity Based on the SPGD Algorithm
18
Authors: 刘必达, 黄智蒙, 张帆, 周丹丹, 彭志涛. 《光学与光电技术》, 2025, No. 5, pp. 118-123 (6 pages)
To achieve efficient closed-loop control of the optical cavity phase in a short-pulse temporal coherent stacking system, a stochastic parallel gradient descent (SPGD) algorithm with exponentially smoothed perturbation amplitude is used to control the phase of a Gires-Tournois interferometer (GTI) stacking cavity. The influence of the two main algorithm parameters, the gain coefficient and the perturbation amplitude, on the coherent stacking performance is studied experimentally. The results show that the two parameters affect stacking in similar ways: values set too small easily trap the algorithm in local extrema, while values set too large cause the stacked waveform to oscillate and fail to stabilize at the maximum. With optimized control parameters, stable coherent stacking is obtained, with a main-to-secondary pulse peak ratio of 6.43:1 after synthesis. These results provide a valuable reference for optical cavity phase control in short-pulse temporal coherent stacking.
Keywords: fiber laser; short pulse; coherent pulse stacking; optical cavity phase control; stochastic parallel gradient descent algorithm
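A basic SPGD iteration, without the paper's exponential smoothing of the perturbation amplitude, applies one random parallel perturbation per step and moves along it in proportion to the observed change in the metric. The phase-matching metric and target below are invented for illustration:

```python
import random

# One-sided SPGD: perturb all control phases at once, measure the change
# in the metric J, and step along the perturbation scaled by that change.
# (The paper additionally smooths the perturbation amplitude; omitted.)
def spgd(J, u, gain=0.5, amp=0.1, iters=800, seed=0):
    rng = random.Random(seed)
    for _ in range(iters):
        du = [amp * rng.choice((-1.0, 1.0)) for _ in u]  # parallel perturbation
        dJ = J([ui + di for ui, di in zip(u, du)]) - J(u)
        u = [ui + gain * dJ * di for ui, di in zip(u, du)]
    return u

target = [0.3, -0.7, 1.1]                      # invented optimal phase set
J = lambda u: -sum((ui - ti) ** 2 for ui, ti in zip(u, target))
u = spgd(J, [0.0, 0.0, 0.0])
print(u)   # close to the target phases
```

SPGD needs only scalar measurements of J, never its gradient, which is why it suits optical systems where only a photodetector signal is available; the gain and amplitude trade-offs the abstract reports (local extrema vs. oscillation) appear here too.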
Face Recognition Based on CNN (Cited by 1)
19
Authors: 牛曦辰, 罗强. 《山西电子技术》, 2025, No. 3, pp. 19-22, 25 (5 pages)
With the rapid development of artificial intelligence and improvements in computer performance, machine learning algorithms have found wide application. Against this background, a face recognition method based on a convolutional neural network (CNN) is studied, using a LeNet-5-like CNN model with six layers: convolutional layer 1, subsampling layer 1, convolutional layer 2, subsampling layer 2, a fully connected layer, and a classification layer. Applied to the Olivetti Faces database, the data are split into training, validation, and test sets fed into the CNN, and stochastic gradient descent (SGD) with logistic regression is used to optimize the network parameters and realize face recognition.
Keywords: convolutional neural network; face recognition; stochastic gradient descent; logistic regression
Research on DC Arc Fault Localization in Photovoltaic Systems Based on Improved Stochastic Gradient Descent (Cited by 1)
20
Authors: 许沛沛, 尚晶晶. 《科技资讯》, 2025, No. 1, pp. 90-92 (3 pages)
The DC output signal of the photovoltaic system is represented with an autoregressive (AR) model, a transfer function is constructed for the system, and a Fourier transform converts it into a quantifiable power spectrum. An improved stochastic gradient descent method is then introduced: an objective function is set up to locate the power-spectrum phase corresponding to a DC arc fault, and a fluctuation-tolerance threshold, added on top of the original gradient parameters, filters out discrete fluctuations caused by non-fault disturbances, taking the position that satisfies the objective function as the final localization result. Test results show that when the arc fault current exceeds 50% of the rated current, the fault position is located precisely in all cases; when it is below 50% of the rated current, the localization error is only one unit of node spacing.
Keywords: improved stochastic gradient descent; photovoltaic system; DC arc; fault localization