Journal Articles
2 articles found
DSparse: A Distributed Training Method for Edge Clusters Based on Sparse Update (Cited by: 1)
Authors: Xiao-Hui Peng, Yi-Xuan Sun, Zheng-Hui Zhang, Yi-Fan Wang. Journal of Computer Science & Technology, 2025, No. 3, pp. 637-653 (17 pages).
Edge machine learning creates a new computational paradigm by enabling the deployment of intelligent applications at the network edge. It enhances application efficiency and responsiveness by performing inference and training tasks closer to data sources. However, it encounters several challenges in practice. The variance in hardware specifications and performance across devices presents a major issue for training and inference tasks. Additionally, edge devices typically possess limited network bandwidth and computing resources compared with data centers. Moreover, existing distributed training architectures often fail to consider the constraints on resources and communication efficiency in edge environments. In this paper, we propose DSparse, a method for distributed training based on sparse update in edge clusters with varying memory capacities. It aims to maximize the utilization of memory resources across all devices within a cluster. To reduce memory consumption during training, we adopt sparse update to prioritize the updating of selected layers on the devices in the cluster, which not only lowers memory usage but also reduces the data volume of parameters and the time required for parameter aggregation. Furthermore, DSparse employs a parameter aggregation mechanism based on multi-process groups, subdividing aggregation tasks into AllReduce and Broadcast types, thereby further reducing the communication frequency of parameter aggregation. Experimental results with the MobileNetV2 model on the CIFAR-10 dataset demonstrate that DSparse reduces memory consumption by an average of 59.6% across seven devices and cuts parameter aggregation time by 75.4%, while maintaining model precision.
Keywords: distributed training, edge computing, edge machine learning, sparse update, edge cluster
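The core idea of the DSparse abstract, updating and aggregating only a selected subset of layers so that both memory use and communicated parameter volume shrink, can be illustrated with a minimal hypothetical sketch in Python. The layer names, sizes, and the greedy largest-first selection policy below are illustrative assumptions, not details from the paper, and the AllReduce step is simulated by averaging per-layer gradients across devices rather than using a real communication library.

```python
def select_layers(layer_sizes, memory_budget):
    """Greedily pick layers (largest first) to update within a memory budget.
    Hypothetical selection policy; the paper's actual criterion may differ."""
    chosen, used = [], 0
    for name, size in sorted(layer_sizes.items(), key=lambda kv: -kv[1]):
        if used + size <= memory_budget:
            chosen.append(name)
            used += size
    return chosen

def allreduce_mean(updates):
    """Average per-layer gradients across devices (AllReduce semantics)."""
    n = len(updates)
    return {k: sum(u[k] for u in updates) / n for k in updates[0]}

# Illustrative layer sizes (arbitrary units) and a memory budget.
layer_sizes = {"conv1": 4, "conv2": 8, "fc": 2}
chosen = select_layers(layer_sizes, memory_budget=10)

# Each device sends gradients only for the chosen layers,
# so unselected layers cost neither memory nor bandwidth.
device_grads = [
    {k: g for k, g in {"conv1": 1.0, "conv2": 2.0, "fc": 0.5}.items() if k in chosen},
    {k: g for k, g in {"conv1": 3.0, "conv2": 4.0, "fc": 1.5}.items() if k in chosen},
]
agg = allreduce_mean(device_grads)
```

In this toy run only `conv2` and `fc` fit the budget, so `conv1` gradients are never transmitted; in a real system the averaging would map onto an AllReduce over a process group, with Broadcast used for layers that only some devices update.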
A CLASS OF FACTORIZATION UPDATE ALGORITHM FOR SOLVING SYSTEMS OF SPARSE NONLINEAR EQUATIONS (Cited by: 2)
Authors: Zhong-Zhi Bai, De-Ren Wang. Acta Mathematicae Applicatae Sinica (SCIE, CSCD), 1996, No. 2, pp. 188-200 (13 pages).
In this paper, we establish a class of sparse update algorithms based on matrix triangular factorizations for solving systems of sparse nonlinear equations. The local Q-superlinear convergence of the algorithm is proved without introducing an m-step refactorization. We compare the numerical results of the new algorithm with those of known algorithms; the comparison implies that the new algorithm is satisfactory.
Keywords: quasi-Newton methods, matrix factorization, sparse update algorithm, Q-superlinear convergence
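The abstract concerns quasi-Newton methods that update a factorization of the Jacobian approximation rather than recomputing it. As a general illustration of quasi-Newton updating, the sketch below implements the classical Broyden rank-one update on a small dense 2x2 system; it is not the paper's factorization-based sparse update, and the test system, initial guess, and identity initial Jacobian are all illustrative assumptions.

```python
def F(x):
    """Toy nonlinear system with a root at (1, 1)."""
    return [x[0] ** 2 + x[1] - 2.0, x[0] + x[1] ** 2 - 2.0]

def solve2(B, b):
    """Solve the 2x2 linear system B s = b by Cramer's rule."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [(b[0] * B[1][1] - b[1] * B[0][1]) / det,
            (B[0][0] * b[1] - B[1][0] * b[0]) / det]

def broyden(x, tol=1e-10, max_iter=50):
    """Broyden's method: B <- B + ((y - B s) s^T) / (s^T s)."""
    B = [[1.0, 0.0], [0.0, 1.0]]  # identity as initial Jacobian approximation
    fx = F(x)
    for _ in range(max_iter):
        if max(abs(v) for v in fx) < tol:
            break
        s = solve2(B, [-fx[0], -fx[1]])           # quasi-Newton step
        x_new = [x[0] + s[0], x[1] + s[1]]
        f_new = F(x_new)
        y = [f_new[0] - fx[0], f_new[1] - fx[1]]  # secant condition data
        ss = s[0] * s[0] + s[1] * s[1]
        Bs = [B[0][0] * s[0] + B[0][1] * s[1],
              B[1][0] * s[0] + B[1][1] * s[1]]
        for i in range(2):                         # rank-one correction
            for j in range(2):
                B[i][j] += (y[i] - Bs[i]) * s[j] / ss
        x, fx = x_new, f_new
    return x
```

The paper's contribution lies in carrying such a secant update through a triangular factorization while preserving sparsity, so that each iteration avoids both a full refactorization and fill-in; the rank-one recurrence above is the dense starting point for that idea.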