Journal Articles
52 articles found
Classifying Multi-Lingual Reviews Sentiment Analysis in Arabic and English Languages Using the Stochastic Gradient Descent Model
1
Authors: Yasser Alharbi, Sarwar Shah Khan · Computers, Materials & Continua, 2025, Issue 4, pp. 1275-1290
Sentiment analysis plays an important role in distilling and clarifying content from movie reviews, aiding the audience in understanding universal views towards the movie. However, the abundance of reviews and the risk of encountering spoilers pose challenges for efficient sentiment analysis, particularly in Arabic content. This study proposed a Stochastic Gradient Descent (SGD) machine learning (ML) model tailored for sentiment analysis of Arabic and English movie reviews. SGD allows for flexible adjustments of model complexity, which adapt well to Arabic language data. This adaptability ensures that the model can capture the nuances and specific local patterns of Arabic text, leading to better performance. Two distinct language datasets were utilized, and extensive pre-processing steps were employed to optimize the datasets for analysis. The proposed SGD model, designed to accommodate the nuances of each language, aims to surpass existing models in accuracy and efficiency. The SGD model achieves an accuracy of 84.89 on the Arabic dataset and 87.44 on the English dataset, making it the top-performing model on both datasets. This study helps deepen the understanding of sentiments across various linguistic datasets. Unlike many studies that focus solely on movie reviews, the Arabic dataset utilized here also includes hotel reviews, offering a broader perspective.
Keywords: sentiment analysis, stochastic gradient descent, reviews, English IMDb dataset, Arabic dataset
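As a rough illustration of the technique this entry describes, the sketch below trains an SGD-based linear classifier over TF-IDF features with scikit-learn. The toy reviews and labels are invented stand-ins; the authors' datasets and pre-processing are not reproduced.

```python
# Hedged sketch: a linear sentiment classifier trained by SGD.
# The four reviews and their labels are illustrative stand-ins only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

reviews = ["a wonderful, moving film", "terrible plot and weak acting",
           "loved every minute of it", "boring and far too long"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(), SGDClassifier(random_state=0))
model.fit(reviews, labels)
print(model.predict(["a truly wonderful film"]))
```

For Arabic text one would swap in Arabic-aware tokenization and normalization at the vectorization step; the pipeline shape stays the same.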
Adaptive Time Synchronization in Time Sensitive-Wireless Sensor Networks Based on Stochastic Gradient Algorithms Framework
2
Authors: Ramadan Abdul-Rashid, Mohd Amiruddin Abd Rahman, Kar Tim Chan, Arun Kumar Sangaiah · Computer Modeling in Engineering & Sciences, 2025, Issue 3, pp. 2585-2616
This study proposes a novel time-synchronization protocol inspired by stochastic gradient algorithms. The clock model of each network node in this synchronizer is configured as a generic adaptive filter in which different stochastic gradient algorithms can be adopted for adaptive clock-frequency adjustment. The study analyzes the pairwise synchronization behavior of the protocol and proves the generalized convergence of the synchronization error and clock frequency. A novel closed-form expression is also derived for the generalized steady-state asymptotic error variance. Steady-state and convergence analyses are then presented for synchronization with frequency adaptation done using the least mean square (LMS), Newton search, gradient descent (GraDes), normalized LMS (N-LMS), and Sign-Data LMS algorithms. Results from real-time experiments showed better performance of our protocols compared to the Average Proportional-Integral Synchronization Protocol (AvgPISync) regarding the impact of quantization error on synchronization accuracy, precision, and convergence time. This generalized approach to time synchronization allows flexibility in selecting a suitable protocol for different wireless sensor network applications.
Keywords: wireless sensor network, time synchronization, stochastic gradient algorithm, multi-hop
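The adaptive-filter view of clock adjustment can be illustrated with the simplest member of the family named above, an LMS update. The numbers below (skew, step size) are invented for illustration and are not the paper's parameters.

```python
# Hedged sketch: LMS-style estimation of a neighbor's clock-frequency skew.
# All constants are illustrative assumptions.
true_skew = 1.0005          # neighbor's clock runs 500 ppm fast
est_skew, mu = 1.0, 0.1     # initial estimate and LMS step size
for _ in range(200):
    x = 1.0                                 # nominal local tick
    error = true_skew * x - est_skew * x    # observed per-tick offset growth
    est_skew += mu * error * x              # LMS update: w += mu * e * x
print(round(est_skew, 6))                   # converges to true_skew
```

Swapping the update line for a normalized or sign-data variant yields the other protocols the paper analyzes within the same loop.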
A stochastic gradient-based two-step sparse identification algorithm for multivariate ARX systems
3
Authors: Yanxin Fu, Wenxiao Zhao · Control Theory and Technology (EI, CSCD), 2024, Issue 2, pp. 213-221
We consider the sparse identification of multivariate ARX systems, i.e., recovering the zero elements of the unknown parameter matrix. We propose a two-step algorithm: in the first step, the stochastic gradient (SG) algorithm is applied to obtain initial estimates of the unknown parameter matrix; in the second step, an optimization criterion is introduced for the sparse identification of multivariate ARX systems. Under mild conditions, we prove that by minimizing the criterion function, the zero elements of the unknown parameter matrix can be recovered with a finite number of observations. The performance of the algorithm is demonstrated through a simulation example.
Keywords: ARX system, stochastic gradient algorithm, sparse identification, support recovery, parameter estimation, strong consistency
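The first step above, SG estimation for an ARX model, can be sketched as the classical stochastic gradient recursion below; the sparsifying second-step criterion is not reproduced. The scalar ARX example and step-size normalization are assumptions for illustration.

```python
# Hedged sketch: stochastic gradient (SG) estimation for a scalar ARX model
#   y_k = a*y_{k-1} + b*u_{k-1} + noise,  theta = [a, b].
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.5, 1.2
theta = np.zeros(2)
r = 1.0                                      # running regressor energy (SG gain)
y_prev, u_prev = 0.0, rng.standard_normal()
for _ in range(20000):
    y = a_true * y_prev + b_true * u_prev + 0.01 * rng.standard_normal()
    phi = np.array([y_prev, u_prev])         # regressor vector
    r += phi @ phi
    theta += (phi / r) * (y - phi @ theta)   # SG update with gain 1/r
    y_prev, u_prev = y, rng.standard_normal()
print(np.round(theta, 2))                    # approaches [0.5, 1.2]
```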
Online distributed optimization with stochastic gradients:high probability bound of regrets
4
Authors: Yuchen Yang, Kaihong Lu, Long Wang · Control Theory and Technology (EI, CSCD), 2024, Issue 3, pp. 419-430
In this paper, the problem of online distributed optimization subject to a convex set is studied via a network of agents. Each agent only has access to a noisy gradient of its own objective function and can communicate with its neighbors via a network. To handle this problem, an online distributed stochastic mirror descent algorithm is proposed. Existing works on online distributed algorithms involving stochastic gradients provide only expectation bounds on the regrets. Different from them, we study the high-probability bound of the regrets, i.e., the sublinear bound of the regret is characterized by the natural logarithm of the inverse of the failure probability. Under mild assumptions on graph connectivity, we prove that the dynamic regret grows sublinearly with high probability if the deviation in the minimizer sequence grows sublinearly with the square root of the time horizon. Finally, a simulation is provided to demonstrate the effectiveness of our theoretical results.
Keywords: distributed optimization, online optimization, stochastic gradient, high probability
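The core primitive behind such algorithms, a projected step along a noisy gradient with diminishing step size, can be sketched as follows. Euclidean projection is used here (mirror descent with the squared-norm generator); the distributed, networked part of the paper is omitted, and all constants are illustrative.

```python
# Hedged sketch: projected stochastic gradient steps over a convex set,
# minimizing f(x) = ||x - c||^2 over the box [0, 1]^2 with noisy gradients.
import numpy as np

rng = np.random.default_rng(1)
c = np.array([0.3, 0.9])                     # minimizer inside the box
x = np.zeros(2)
for t in range(1, 2001):
    noisy_grad = 2 * (x - c) + 0.1 * rng.standard_normal(2)
    x = x - noisy_grad / np.sqrt(t)          # diminishing step size ~ 1/sqrt(t)
    x = np.clip(x, 0.0, 1.0)                 # Euclidean projection onto the box
print(np.round(x, 2))                        # close to c
```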
Stochastic Gradient Compression for Federated Learning over Wireless Network
5
Authors: Lin Xiaohan, Liu Yuan, Chen Fangjiong, Huang Yang, Ge Xiaohu · China Communications (SCIE, CSCD), 2024, Issue 4, pp. 230-247
As a mature distributed machine learning paradigm, federated learning enables wireless edge devices to collaboratively train a shared AI model by stochastic gradient descent (SGD). However, devices need to upload high-dimensional stochastic gradients to the edge server during training, which causes a severe communication bottleneck. To address this problem, we compress the communication by sparsifying and quantizing the stochastic gradients of edge devices. We first derive a closed form of the communication compression in terms of sparsification and quantization factors. Then, the convergence rate of this communication-compressed system is analyzed and several insights are obtained. Finally, we formulate and solve the quantization resource allocation problem with the goal of minimizing the convergence upper bound, under the constraint of multiple-access channel capacity. Simulations show that the proposed scheme outperforms the benchmarks.
Keywords: federated learning, gradient compression, quantization, resource allocation, stochastic gradient descent (SGD)
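A generic form of the two compression operations named in the abstract, sparsification followed by quantization of a stochastic gradient, can be sketched as below. The top-k rule, bit width, and scaling are assumptions; the paper's closed-form scheme is not reproduced.

```python
# Hedged sketch: compress a stochastic gradient by top-k sparsification,
# then uniform symmetric quantization of the surviving entries.
import numpy as np

def compress(grad, k, bits):
    idx = np.argsort(np.abs(grad))[-k:]      # keep k largest-magnitude entries
    vals = grad[idx]
    scale = np.abs(vals).max()
    levels = 2 ** (bits - 1) - 1
    q = np.round(vals / scale * levels)      # quantize to integer levels
    return idx, q, scale, levels

def decompress(idx, q, scale, levels, size):
    out = np.zeros(size)
    out[idx] = q / levels * scale
    return out

rng = np.random.default_rng(0)
g = rng.standard_normal(1000)
idx, q, scale, levels = compress(g, k=100, bits=4)
g_hat = decompress(idx, q, scale, levels, g.size)
print(np.linalg.norm(g - g_hat) / np.linalg.norm(g))  # relative compression error
```

Only the k indices, k small integers, and one scale need to be uplinked, which is the source of the communication savings.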
L_(1)-Smooth SVM with Distributed Adaptive Proximal Stochastic Gradient Descent with Momentum for Fast Brain Tumor Detection
6
Authors: Chuandong Qin, Yu Cao, Liqun Meng · Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 1975-1994
Brain tumors come in various types, each with distinct characteristics and treatment approaches, making manual detection a time-consuming and potentially ambiguous process. Brain tumor detection is a valuable tool for gaining a deeper understanding of tumors and improving treatment outcomes. Machine learning models have become key players in automating brain tumor detection. Gradient descent methods are the mainstream algorithms for solving machine learning models. In this paper, we propose a novel distributed proximal stochastic gradient descent approach to solve the L_(1)-Smooth Support Vector Machine (SVM) classifier for brain tumor detection. Firstly, the smooth hinge loss is introduced as the loss function of the SVM; it avoids the non-differentiability at the zero point encountered by the traditional hinge loss during gradient descent optimization. Secondly, L_(1) regularization is employed to sparsify features and enhance the robustness of the model. Finally, adaptive proximal stochastic gradient descent (PGD) with momentum and distributed adaptive PGD with momentum (DPGD) are proposed and applied to the L_(1)-Smooth SVM. Distributed computing is crucial in large-scale data analysis; its value lies in extending algorithms to distributed clusters, enabling more efficient processing of massive amounts of data. The DPGD algorithm leverages Spark, enabling full utilization of the computer's multi-core resources. Due to the sparsity induced by L_(1) regularization on the parameters, it exhibits significantly accelerated convergence; from the perspective of loss reduction, DPGD converges faster than PGD. The experimental results show that adaptive PGD with momentum and its variants have achieved cutting-edge accuracy and efficiency in brain tumor detection. Among pre-trained models, both PGD and DPGD outperform the other models, boasting an accuracy of 95.21%.
Keywords: support vector machine, proximal stochastic gradient descent, brain tumor detection, distributed computing
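The proximal step at the heart of such an approach, a stochastic gradient step on the smooth part followed by the L1 prox (soft-thresholding), with a momentum buffer, can be sketched as below. The smooth squared hinge, the synthetic data, and all constants are illustrative assumptions; this is not the paper's DPGD.

```python
# Hedged sketch: proximal stochastic gradient step with momentum for an
# L1-regularized smooth hinge loss. All data and constants are synthetic.
import numpy as np

def soft_threshold(w, t):
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)  # prox of t*||.||_1

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
w_true = np.zeros(10); w_true[:3] = [2.0, -1.5, 1.0]    # sparse ground truth
y = np.sign(X @ w_true)                                 # separable labels

w, v = np.zeros(10), np.zeros(10)
lr, lam, beta = 0.1, 0.01, 0.9
for epoch in range(100):
    i = rng.integers(0, 200, size=20)                   # minibatch indices
    margin = y[i] * (X[i] @ w)
    # gradient of the smooth (squared) hinge loss 0.5*max(1-m, 0)^2
    g = -(X[i] * (y[i] * np.maximum(1 - margin, 0))[:, None]).mean(axis=0)
    v = beta * v + g                                    # momentum buffer
    w = soft_threshold(w - lr * v, lr * lam)            # proximal (L1) step
print(np.round(w, 2))
```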
Feasibility of stochastic gradient boosting approach for predicting rockburst damage in burst-prone mines (Cited by 4)
7
Authors: 周健, 史秀志, 黄仁东, 邱贤阳, 陈冲 · Transactions of Nonferrous Metals Society of China (SCIE, EI, CAS, CSCD), 2016, Issue 7, pp. 1938-1945
A database of 254 rockburst events was examined for rockburst damage classification using stochastic gradient boosting (SGB) methods. Five potentially relevant indicators were analyzed: the stress condition factor, the ground support system capacity, the excavation span, the geological structure, and the peak particle velocity of rockburst sites. The performance of the model was evaluated using a 10-fold cross-validation (CV) procedure on 80% of the original data during modeling, and an external testing set (20%) was employed to validate the prediction performance of the SGB model. Two accuracy measures for multi-class problems were employed: classification accuracy rate and Cohen's Kappa. The accuracy analysis together with Kappa for the rockburst damage dataset reveals that the SGB model for the prediction of rockburst damage is acceptable.
Keywords: burst-prone mine, rockburst damage, stochastic gradient boosting method
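Stochastic gradient boosting in Friedman's sense, where each tree is fit on a random subsample, is available in scikit-learn by setting `subsample < 1.0`. The synthetic multi-class data below is a stand-in for the rockburst database, which this listing does not include.

```python
# Hedged sketch: a stochastic gradient boosting classifier with a held-out
# test split. Synthetic 5-feature data stands in for the rockburst indicators.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=254, n_features=5, n_informative=4,
                           n_redundant=0, n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

sgb = GradientBoostingClassifier(subsample=0.8, random_state=0)  # subsample<1 => stochastic GB
sgb.fit(X_tr, y_tr)
print(round(sgb.score(X_te, y_te), 2))
```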
Efficient and High-quality Recommendations via Momentum-incorporated Parallel Stochastic Gradient Descent-Based Learning (Cited by 7)
8
Authors: Xin Luo, Wen Qin, Ani Dong, Khaled Sedraoui, MengChu Zhou · IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, Issue 2, pp. 402-411
A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. Aiming to address this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating the convergence rate by integrating momentum effects into the training process. With it, an MPSGD-based latent factor (MLF) model is achieved, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RSs indicate that, owing to the MPSGD algorithm, an MLF model outperforms the existing state-of-the-art ones in both computational efficiency and scalability.
Keywords: big data, industrial application, industrial data, latent factor analysis, machine learning, parallel algorithm, recommender system (RS), stochastic gradient descent (SGD)
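The two ingredients named in the abstract, SGD over a latent factor model plus a momentum term, can be sketched single-threaded as below; MPSGD's parallel data-splitting is not reproduced, and the tiny fully observed rating matrix is an invented stand-in.

```python
# Hedged sketch: momentum SGD for a rank-2 latent factor model R ≈ P Q^T.
# The matrix, rank, and constants are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
P_true, Q_true = rng.random((6, 2)), rng.random((5, 2))
R = P_true @ Q_true.T                                   # synthetic ratings
obs = [(i, j) for i in range(6) for j in range(5)]      # observed entries

P, Q = 0.1 * rng.random((6, 2)), 0.1 * rng.random((5, 2))
vP, vQ = np.zeros_like(P), np.zeros_like(Q)
lr, beta = 0.05, 0.9
for epoch in range(300):
    for k in rng.permutation(len(obs)):                 # shuffled SGD sweep
        i, j = obs[k]
        err = R[i, j] - P[i] @ Q[j]
        gP, gQ = -err * Q[j], -err * P[i]
        vP[i] = beta * vP[i] + gP; P[i] -= lr * vP[i]   # momentum step on P
        vQ[j] = beta * vQ[j] + gQ; Q[j] -= lr * vQ[j]   # momentum step on Q
rmse = np.sqrt(np.mean([(R[i, j] - P[i] @ Q[j]) ** 2 for i, j in obs]))
print(round(rmse, 4))
```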
Predicted Oil Recovery Scaling-Law Using Stochastic Gradient Boosting Regression Model
9
Authors: Mohamed F. El-Amin, Abdulhamit Subasi, Mahmoud M. Selim, Awad Mousa · Computers, Materials & Continua (SCIE, EI), 2021, Issue 8, pp. 2349-2362
In the process of oil recovery, experiments are usually carried out on core samples to evaluate the recovery of oil, and the numerical data are fitted into a non-dimensional equation called a scaling law. This is essential for determining the behavior of actual reservoirs. The global non-dimensional time scale is a parameter for predicting realistic oil-field behavior from laboratory data. This non-dimensional universal time parameter depends on a set of primary parameters that inherit the properties of the reservoir fluids and rocks, and on the injection velocity, which governs the dynamics of the process. One practical machine learning (ML) technique for regression/classification problems is gradient boosting (GB) regression. GB produces a prediction model as an ensemble of weak prediction models, built at each iteration by fitting a least-squares base learner to the current pseudo-residuals. Using a randomization process increases the execution speed and accuracy of GB. Hence, in this study, we developed a stochastic gradient boosting (SGB) regression model to forecast oil recovery. Different non-dimensional time scales were used to generate data for the machine learning techniques. The SGB method was found to be the best machine learning technique for predicting the non-dimensional time scale, which depends on oil/rock properties.
Keywords: machine learning, stochastic gradient boosting, linear regression, time scale, oil recovery
Auxiliary Model Based Multi-innovation Stochastic Gradient Identification Methods for Hammerstein Output-Error System
10
Authors: 冯启亮, 贾立, 李峰 · Journal of Donghua University (English Edition) (EI, CAS), 2017, Issue 1, pp. 53-59
A special-input-signals identification method based on the auxiliary model based multi-innovation stochastic gradient algorithm for the Hammerstein output-error system was proposed. The special input signals were used to realize the identification and separation of the Hammerstein model. As a result, the identification of the dynamic linear part can be separated from the static nonlinear elements without any redundant adjustable parameters. The auxiliary model based multi-innovation stochastic gradient algorithm was applied to identify the serial-link parameters of the Hammerstein model. This algorithm can avoid the influence of noise and improve the identification accuracy by changing the innovation length. The simulation results show the efficiency of the proposed method.
Keywords: Hammerstein output-error system, special input signals, auxiliary model based multi-innovation stochastic gradient algorithm, innovation length
Stochastic Gradient Boosting Model for Twitter Spam Detection
11
Authors: K. Kiruthika Devi, G. A. Sathish Kumar · Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 5, pp. 849-859
In today's connected world there is a larger amount of data than we could imagine. The number of network users is increasing day by day, and a large number of social networks keep users connected all the time. These social networks give users complete independence to post data of political, commercial, or entertainment value. Some data may be sensitive and consequently have a greater impact on society. The trustworthiness of data is important when it comes to public social networking sites like Facebook and Twitter. Due to the large user base and its openness, there is a huge possibility of spreading spam messages in these networks. Spam detection is a technique to identify and mark data as false. Many machine learning approaches have been proposed to detect spam in social networks. The efficiency of any spam detection algorithm is determined by its cost factor and accuracy. Aiming to improve the detection of spam in social networks, this study proposes using statistically based features modelled through the supervised boosting approach called stochastic gradient boosting to evaluate Twitter datasets in the English language. The performance of the proposed model is evaluated using simulation results.
Keywords: Twitter, spam, stochastic gradient boosting
New logarithmic step size for stochastic gradient descent (Cited by 1)
12
Authors: Mahsa Soheil Shamaee, Sajad Fathi Hafshejani, Zeinab Saeidian · Frontiers of Computer Science, 2025, Issue 1, pp. 109-118
In this paper, we propose a novel warm restart technique using a new logarithmic step size for the stochastic gradient descent (SGD) approach. For smooth and non-convex functions, we establish an O(1/√T) convergence rate for SGD. We conduct a comprehensive implementation to demonstrate the efficiency of the newly proposed step size on the FashionMNIST, CIFAR10, and CIFAR100 datasets. Moreover, we compare our results with nine other existing approaches and demonstrate that the new logarithmic step size improves test accuracy by 0.9% on the CIFAR100 dataset when we utilize a convolutional neural network (CNN) model.
Keywords: stochastic gradient descent, logarithmic step size, warm restart technique
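For illustration only: the paper proposes a specific logarithmic step size whose exact formula is not reproduced here. A generic log-type decay with periodic warm restarts has the following shape.

```python
# Hedged sketch: a generic 1/log step-size decay that resets ("warm
# restarts") every T iterations. The formula is an assumed stand-in,
# not the paper's schedule.
import math

def log_step(t, T, eta0=0.1):
    """Step size decaying like 1/log within a restart window of length T."""
    return eta0 / (1.0 + math.log(1 + (t % T)))

schedule = [log_step(t, T=50) for t in range(150)]
print(schedule[0], schedule[49], schedule[50])  # decays, then resets at t=50
```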
Performance analysis of stochastic gradient algorithms under weak conditions (Cited by 15)
13
Authors: DING Feng, YANG HuiZhong, LIU Fei · Science in China (Series F), 2008, Issue 9, pp. 1269-1280
By using stochastic martingale theory, the convergence properties of stochastic gradient (SG) identification algorithms are studied under weak conditions. The analysis indicates that the parameter estimates given by the SG algorithms consistently converge to the true parameters as long as the information vector is persistently exciting (i.e., the data product moment matrix has a bounded condition number) and the process noises are zero-mean and uncorrelated. These results remove the strict assumptions, made in existing references, that the noise variances and high-order moments exist, that the processes are stationary and ergodic, and that the strong persistent excitation condition holds. This contribution greatly relaxes the convergence conditions of stochastic gradient algorithms. The simulation results with bounded and unbounded noise variances confirm the proposed convergence conclusions.
Keywords: recursive identification, parameter estimation, least squares, stochastic gradient, multivariable systems, convergence properties, martingale convergence theorem
Convergence of Stochastic Gradient Descent in Deep Neural Network (Cited by 4)
14
Authors: Bai-cun Zhou, Cong-ying Han, Tian-de Guo · Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 2021, Issue 1, pp. 126-136
Stochastic gradient descent (SGD) is one of the most common optimization algorithms used in pattern recognition and machine learning. This algorithm and its variants are the preferred algorithms for optimizing the parameters of deep neural networks, owing to their low storage space requirement and fast computation speed. Previous studies on the convergence of these algorithms were based on some traditional assumptions in optimization problems. However, the deep neural network has its unique properties, and some of these assumptions are inappropriate for the actual optimization process of this kind of model. In this paper, we modify the assumptions to make them more consistent with the actual optimization process of deep neural networks. Based on the new assumptions, we study the convergence and convergence rate of SGD and two of its common variant algorithms. In addition, we carry out numerical experiments with LeNet-5, a common network framework, on the MNIST dataset to verify the rationality of our assumptions.
Keywords: stochastic gradient descent, deep neural network, convergence
Modeling of Free Jumps Downstream Symmetric and Asymmetric Expansions: Theoretical Analysis and Method of Stochastic Gradient Boosting (Cited by 2)
15
Authors: Mohamed A. Nassar · Journal of Hydrodynamics (SCIE, EI, CSCD), 2010, Issue 1, pp. 110-120
The general computational approach of Stochastic Gradient Boosting (SGB) is seen as one of the most powerful methods in predictive data mining. Its applications include regression analysis and classification problems with or without continuous categorical predictors. The present theoretical and experimental study aims to model the free hydraulic jump created in rectangular channels downstream (DS) of symmetric and asymmetric expansions using SGB. A theoretical model for predicting the depth ratio of jumps is developed using the governing flow equations. Statistical models using linear regression are also developed. Three different parameters of the hydraulic jump are investigated experimentally using modified angled guide walls. The results from the modified SGB model indicate a significant improvement on the original models. The present study shows the possibility of applying the modified SGB method in engineering design and other practical applications.
Keywords: Stochastic Gradient Boosting (SGB), free jump, symmetric, asymmetric, theoretical, regression, experiment
Stochastic gradient algorithm for a dual-rate Box-Jenkins model based on auxiliary model and FIR model (Cited by 2)
16
Authors: Jing Chen, Rui-feng Ding · Journal of Zhejiang University-Science C (Computers and Electronics) (SCIE, EI), 2014, Issue 2, pp. 147-152
Based on the work in Ding and Ding (2008), we develop a modified stochastic gradient (SG) parameter estimation algorithm for a dual-rate Box-Jenkins model by using an auxiliary model. We simplify the complex dual-rate Box-Jenkins model to two finite impulse response (FIR) models, present an auxiliary model to estimate the missing outputs and the unknown noise variables, and compute all the unknown parameters of the system with colored noises. Simulation results indicate that the proposed method is effective.
Keywords: parameter estimation, auxiliary model, dual-rate system, stochastic gradient, Box-Jenkins model, FIR model
A Stochastic Gradient Descent Method for Computational Design of Random Rough Surfaces in Solar Cells
17
Authors: Qiang Li, Gang Bao, Yanzhao Cao, Junshan Lin · Communications in Computational Physics (SCIE), 2023, Issue 10, pp. 1361-1390
In this work, we develop a stochastic gradient descent method for the computational optimal design of random rough surfaces in thin-film solar cells. We formulate the design problems as random PDE-constrained optimization problems and seek the optimal statistical parameters for the random surfaces. Optimization at a fixed frequency as well as at multiple frequencies and multiple incident angles is investigated. To evaluate the gradient of the objective function, we derive the shape derivatives for the interfaces and apply the adjoint state method to perform the computation. The stochastic gradient descent method evaluates the gradient of the objective function at only a few samples per iteration, which reduces the computational cost significantly. Various numerical experiments are conducted to illustrate the efficiency of the method and the significant increase in absorptance for the optimal random structures. We also examine the convergence of the stochastic gradient descent algorithm theoretically and prove that the numerical method is convergent under certain assumptions on the random interfaces.
Keywords: optimal design, random rough surface, solar cell, Helmholtz equation, stochastic gradient descent method
Least-Squares Seismic Inversion with Stochastic Conjugate Gradient Method (Cited by 2)
18
Authors: Wei Huang, Hua-Wei Zhou · Journal of Earth Science (SCIE, CAS, CSCD), 2015, Issue 4, pp. 463-470
With the development of computational power, there has been an increased focus on data-fitting related seismic inversion techniques for high-fidelity seismic velocity models and images, such as full-waveform inversion and least-squares migration. However, though more advanced than conventional methods, these data-fitting methods can be very expensive in terms of computational cost. Recently, various techniques to optimize these data-fitting seismic inversion problems have been implemented to cater to the industrial need for much improved efficiency. In this study, we propose a general stochastic conjugate gradient method for these data-fitting related inverse problems. We first present the basic theory of our method and then give synthetic examples. Our numerical experiments illustrate the potential of this method for large-size seismic inversion applications.
Keywords: least-squares seismic inversion, stochastic conjugate gradient method, data fitting, Kirchhoff migration
Decentralized Federated Learning Algorithm Under Adversary Eavesdropping
19
Authors: Lei Xu, Danya Xu, Xinlei Yi, Chao Deng, Tianyou Chai, Tao Yang · IEEE/CAA Journal of Automatica Sinica, 2025, Issue 2, pp. 448-456
In this paper, we study the decentralized federated learning problem, which involves the collaborative training of a global model among multiple devices while ensuring data privacy. In classical federated learning, the communication channel between the devices poses a potential risk of compromising private information. To reduce the risk of adversary eavesdropping on the communication channel, we propose the TRADE (transmit difference weight) concept. This concept replaces the decentralized federated learning algorithm's transmitted weight parameters with differential weight parameters, protecting the private data against eavesdropping. Subsequently, by integrating the TRADE concept with the primal-dual stochastic gradient descent (SGD) algorithm, we propose a decentralized TRADE primal-dual SGD algorithm. We demonstrate that our proposed algorithm's convergence properties are the same as those of the primal-dual SGD algorithm while providing enhanced privacy protection. We validate the algorithm's performance on a fault diagnosis task using the Case Western Reserve University dataset, and on image classification tasks using the CIFAR-10 and CIFAR-100 datasets, revealing model accuracy comparable to centralized federated learning. Additionally, the experiments confirm the algorithm's privacy protection capability.
Keywords: adversary eavesdropping, decentralized federated learning, privacy protection, stochastic gradient descent
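The TRADE idea as described in the abstract, transmitting successive weight differences rather than the weights themselves, limits what a single-round eavesdropper sees, while a receiver that knows the agreed initial state can still reconstruct the weights. A minimal sketch follows; the primal-dual SGD machinery is omitted and the "local update" is a random stand-in.

```python
# Hedged sketch: transmit weight *differences* (TRADE-style) and
# reconstruct on the receiver side from a known initial state.
import numpy as np

rng = np.random.default_rng(0)
w_prev = np.zeros(4)               # sender's current model weights
receiver_state = np.zeros(4)       # receiver knows the agreed initial weights
for step in range(5):
    w_new = w_prev - 0.1 * rng.standard_normal(4)   # stand-in local SGD update
    delta = w_new - w_prev                          # what actually goes on the wire
    receiver_state += delta                         # receiver reconstructs weights
    w_prev = w_new
print(np.allclose(receiver_state, w_prev))          # → True
```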
Role of immature granulocyte and blood biomarkers in predicting perforated acute appendicitis using machine learning model
20
Authors: Zeynep Kucukakcali, Sami Akbulut · World Journal of Clinical Cases, 2025, Issue 22, pp. 25-37
BACKGROUND: Acute appendicitis (AAp) is a prevalent medical condition characterized by inflammation of the appendix that frequently necessitates urgent surgical procedures. Approximately two-thirds of patients with AAp exhibit characteristic signs and symptoms; hence, negative AAp and complicated AAp are the primary concerns in research on AAp. In other words, for at least one-third of patients, further investigations and algorithms are required to predict the clinical condition and distinguish them from patients with uncomplicated AAp.
AIM: To use a Stochastic Gradient Boosting (SGB)-based machine learning (ML) algorithm to distinguish complicated from uncomplicated AAp patients, and to identify important biomarkers for both types of AAp from the variable importance values obtained during modeling.
METHODS: This study analyzed an open-access dataset containing 140 people: 41 healthy controls, 65 individuals with uncomplicated AAp, and 34 individuals with complicated AAp. We analyzed demographic data (age, sex) and the following biochemical blood parameters: white blood cell (WBC) count, neutrophils, lymphocytes, monocytes, platelet count, neutrophil-to-lymphocyte ratio, lymphocyte-to-monocyte ratio, mean platelet volume, ferritin, total bilirubin, immature granulocyte count, immature granulocyte percent, and neutrophil-to-immature granulocyte ratio. We tested the SGB model using n-fold cross-validation, implemented with an 80-20 training-test split. We used variable importance values to identify the variables that were most effective on the target.
RESULTS: The SGB model demonstrated excellent performance in distinguishing AAp from control patients, with an accuracy of 96.3%, a micro area under the curve (AUC) of 94.7%, a sensitivity of 94.7%, and a specificity of 100%. In distinguishing complicated from uncomplicated AAp patients, the model achieved an accuracy of 78.9%, a micro AUC of 79%, a sensitivity of 83.3%, and a specificity of 76.9%. The most useful biomarkers for confirming the AAp diagnosis were WBC (100%), neutrophils (95.14%), and the lymphocyte-to-monocyte ratio (76.05%). The most useful biomarkers for accurate diagnosis of complicated AAp were total bilirubin (100%), WBC (96.90%), and the neutrophil-to-immature granulocyte ratio (64.05%).
CONCLUSION: The SGB model achieved high accuracy in identifying AAp patients, while it showed moderate performance in distinguishing complicated from uncomplicated AAp patients. Although the model's accuracy in classifying complicated AAp is moderate, the high variable importance values obtained are clinically significant. Further prospective validation studies are needed, but the integration of such ML algorithms into clinical practice may improve diagnostic processes.
Keywords: acute appendicitis, complicated acute appendicitis, machine learning, stochastic gradient boosting