Journal Articles
519 articles found; the first 20 are listed below.
1. Dynamic access task scheduling of LEO constellation based on space-based distributed computing
Authors: LIU Wei, JIN Yifeng, ZHANG Lei, GAO Zihe, TAO Ying. Journal of Systems Engineering and Electronics, SCIE/CSCD, 2024, No. 4, pp. 842-854.
A dynamic multi-beam resource allocation algorithm for large low Earth orbit (LEO) constellations based on on-board distributed computing is proposed in this paper. The allocation is a combinatorial optimization process under a series of complex constraints, which is important for enhancing the matching between resources and requirements. A complex algorithm is not practical because LEO on-board resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single-paternal-inheritance method, is designed to support distributed computation and enhance the feasibility of on-board application. A distributed system composed of eight embedded devices is built to verify the algorithm. A typical scenario is built in the system to evaluate the resource allocation process, algorithm mathematical model, trigger strategy, and distributed computation architecture. According to the simulation and measurement results, the proposed algorithm can produce an allocation result for more than 1500 tasks in 14 s with a success rate above 91% in a typical scene. The response time is decreased by 40% compared with the conventional GA.
Keywords: beam resource allocation, distributed computing, low Earth orbit (LEO) constellation, spacecraft, access task scheduling
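The two-dimensional individual and single-paternal inheritance described above map naturally onto a mutation-only GA over a task-to-(beam, slot) assignment matrix. The sketch below illustrates that pattern under toy assumptions; the fitness function, problem sizes, and mutation rate are invented for illustration and are not the paper's model.

```python
import numpy as np

N_TASKS, N_BEAMS, N_SLOTS, POP, GENS = 30, 8, 10, 40, 200
rng = np.random.default_rng(0)

def fitness(ind):
    # Toy objective: number of tasks holding a conflict-free (beam, slot) pair.
    _, counts = np.unique(ind[:, 0] * N_SLOTS + ind[:, 1], return_counts=True)
    return int(np.sum(counts == 1))

def mutate(parent):
    # Single-parent reproduction: each child descends from one parent only,
    # so sub-populations can evolve on separate devices without exchanging genomes.
    child = parent.copy()
    t = rng.integers(N_TASKS)
    child[t] = (rng.integers(N_BEAMS), rng.integers(N_SLOTS))
    return child

pop = [np.column_stack([rng.integers(N_BEAMS, size=N_TASKS),
                        rng.integers(N_SLOTS, size=N_TASKS)]) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)          # evaluation is parallelizable
    pop = pop[:POP // 2] + [mutate(p) for p in pop[:POP // 2]]
pop.sort(key=fitness, reverse=True)
print("tasks placed without conflict:", fitness(pop[0]))
```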
2. A Multi-Objective Clustered Input Oriented Salp Swarm Algorithm in Cloud Computing
Authors: Juliet A. Murali, Brindha T. Computers, Materials & Continua, SCIE/EI, 2024, No. 12, pp. 4659-4690.
Infrastructure as a Service (IaaS) in cloud computing enables flexible resource distribution over the Internet, but achieving optimal scheduling remains a challenge. Effective resource allocation in cloud-based environments, particularly within the IaaS model, poses persistent challenges. Existing methods often struggle with slow optimization, imbalanced workload distribution, and inefficient use of available assets. These limitations result in longer processing times, increased operational expenses, and inadequate resource deployment, particularly under fluctuating demands. To overcome these issues, a novel Clustered Input-Oriented Salp Swarm Algorithm (CIOSSA) is introduced. This approach combines two distinct strategies: Task Splitting Agglomerative Clustering (TSAC) with an Input Oriented Salp Swarm Algorithm (IOSSA), which prioritizes tasks based on urgency, and a refined multi-leader model that accelerates optimization, enhancing both speed and accuracy. By continuously assessing system capacity before task distribution, the model ensures that assets are deployed effectively and costs are controlled. The dual-leader technique expands the potential solution space, leading to substantial gains in processing speed, cost-effectiveness, asset efficiency, and system throughput, as demonstrated by comprehensive tests. As a result, the proposed model outperforms existing approaches in terms of makespan, resource utilization, throughput, and convergence speed, demonstrating that CIOSSA is scalable, reliable, and appropriate for the dynamic settings found in cloud computing.
Keywords: cloud computing, clustering, resource allocation, scheduling, swarm algorithms, optimization
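For readers unfamiliar with salp swarm optimization, the following is a minimal single-objective sketch of the underlying chain dynamics, with the multi-leader refinement imitated by letting the first k salps act as leaders. The objective, bounds, and constants are illustrative assumptions, not CIOSSA itself.

```python
import numpy as np

def ssa(obj, dim=10, n=30, iters=200, lb=-5.0, ub=5.0, leaders=3):
    rng = np.random.default_rng(1)
    X = rng.uniform(lb, ub, (n, dim))
    food = min(X, key=obj).copy()                  # best-known solution
    for t in range(1, iters + 1):
        c1 = 2 * np.exp(-(4 * t / iters) ** 2)     # exploration decays over time
        for i in range(n):
            if i < leaders:                        # leaders sample around the food
                step = c1 * ((ub - lb) * rng.random(dim) + lb)
                X[i] = food + np.where(rng.random(dim) < 0.5, step, -step)
            else:                                  # followers average with predecessor
                X[i] = (X[i] + X[i - 1]) / 2
            X[i] = np.clip(X[i], lb, ub)
            if obj(X[i]) < obj(food):
                food = X[i].copy()
    return food

best = ssa(lambda x: float(np.sum(x ** 2)))        # toy makespan surrogate
```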
3. Extended Balanced Scheduler with Clustering and Replication for Data Intensive Scientific Workflow Applications in Cloud Computing
Authors: Satwinder Kaur, Mehak Aggarwal. Journal of Electronic Research and Application, 2018, No. 3, pp. 8-15.
Cloud computing is an advanced computing model in which applications, data, and countless IT services are provided over the Internet. Task scheduling plays a crucial role in cloud computing systems: the problem can be viewed as finding an optimal mapping of the subtasks of different tasks onto the available set of resources so that the desired goals are achieved. As the number of cloud users grows, more and more tasks must be scheduled, and the cloud's performance depends on the task scheduling algorithm used. Numerous algorithms have been proposed to solve the task scheduling problem for heterogeneous networks of computers, and existing work offers energy- and deadline-aware scheduling methods for data-intensive applications. A scientific workflow combines fine-grained and coarse-grained tasks, and every task scheduled to a VM incurs system overhead; when many fine-grained tasks execute in a scientific workflow, the scheduling overhead grows. To overcome this, multiple small tasks are merged into larger tasks, which decreases the scheduling overhead and improves the execution time of the workflow. Horizontal clustering is used to cluster the fine-grained tasks, combined with a replication technique. The proposed scheduling algorithm improves performance metrics such as execution time and cost. This research can be extended with improved clustering techniques and replication methods.
Keywords: scientific workflow, cloud computing, replication, clustering, scheduling
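A minimal sketch of horizontal clustering as described above: fine-grained tasks at the same workflow depth are merged into fewer, larger jobs so that each dispatch to a VM amortizes the per-task scheduling overhead. The task fields and the greedy balancing rule are illustrative assumptions.

```python
from collections import defaultdict

def horizontal_cluster(tasks, max_jobs_per_level=4):
    """tasks: list of dicts with 'id', 'level' (workflow depth), 'runtime'."""
    by_level = defaultdict(list)
    for t in tasks:
        by_level[t["level"]].append(t)
    jobs = []
    for level, group in sorted(by_level.items()):
        group.sort(key=lambda t: t["runtime"], reverse=True)
        buckets = [[] for _ in range(min(max_jobs_per_level, len(group)))]
        loads = [0.0] * len(buckets)
        for t in group:                        # greedy longest-first balancing
            i = loads.index(min(loads))
            buckets[i].append(t["id"])
            loads[i] += t["runtime"]
        jobs += [{"level": level, "tasks": b, "runtime": l}
                 for b, l in zip(buckets, loads)]
    return jobs

jobs = horizontal_cluster(
    [{"id": f"t{i}", "level": i % 2, "runtime": 1.0 + i % 3} for i in range(12)])
```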
4. Cluster-Based Distributed Algorithms for Very Large Linear Equations
Authors: 古志民, Marta Kwiatkowska, 付引霞. Journal of Beijing Institute of Technology, EI/CAS, 2006, No. 1, pp. 66-70.
In many applications such as computational fluid dynamics, weather prediction, image processing, and Markov chain analysis, the order n of the matrix is often very large, and no serial algorithm can solve such problems. A distributed cluster-based solution for very large linear equations is discussed, including the definitions of notations, the partition of the matrix, the communication mechanism, and a master-slave algorithm. The computing cost is O(n^3/N), the memory cost is O(n^2/N), the I/O cost is O(n^2/N), and the communication cost is O(Nn), where N is the number of computing nodes or processes. Tests show that the solution can effectively solve double-precision systems of order up to 10^6 × 10^6.
Keywords: Gaussian elimination, partition, cluster-based distributed computing
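The stated costs follow from a block-row partition of the augmented matrix: each of N workers owns about n/N rows and performs the eliminations for them, while each pivot row is broadcast once. Below is a single-process sketch of that structure, with toy sizes and no pivoting; the diagonally dominant example avoids zero pivots.

```python
import numpy as np

def distributed_ge(A, b, N=4):
    n = len(b)
    M = np.column_stack([A.astype(float), b.astype(float)])
    owner = np.arange(n) % N                 # cyclic row-to-worker mapping
    for k in range(n - 1):
        pivot = M[k].copy()                  # "broadcast" of row k from owner[k]
        for i in range(k + 1, n):            # each worker eliminates its own rows
            M[i] -= (M[i, k] / pivot[k]) * pivot
    x = np.zeros(n)                          # back substitution (serial here)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

A = np.array([[4., 1, 0], [1, 3, 1], [0, 1, 2]])
print(distributed_ge(A, np.array([1., 2, 3])))
```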
5. Trusted Data Acquisition Mechanism for Cloud Resource Scheduling Based on Distributed Agents (Cited: 4)
Authors: 李小勇, 杨月华. China Communications, SCIE/CSCD, 2011, No. 6, pp. 108-116.
Cloud computing is a new paradigm in which dynamic and virtualized computing resources are provided as services over the Internet. However, because cloud resources are open and dynamically configured, resource allocation and scheduling are extremely important challenges in cloud infrastructure. Based on distributed agents, this paper presents a trusted data acquisition mechanism for efficiently scheduling cloud resources to satisfy various user requests. The mechanism defines, collects, and analyzes multiple key trust targets of cloud service resources based on historical information from servers in a cloud data center. As a result, cloud providers can use this trust computing mechanism to utilize their resources efficiently and provide highly trusted resources and services to many users.
Keywords: cloud computing, trusted computing, distributed agent, resource scheduling
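As a rough illustration of deriving trust from historical server information, the sketch below aggregates a few normalized metrics with recency decay. The metric names, weights, and decay rule are assumptions, not the paper's actual trust targets.

```python
def trust_score(history, weights={"success": 0.5, "uptime": 0.3, "perf": 0.2},
                decay=0.9):
    """history: oldest-to-newest list of dicts with metric values in [0, 1]."""
    score, norm, w = 0.0, 0.0, 1.0
    for record in reversed(history):          # newest records weigh the most
        score += w * sum(weights[m] * record[m] for m in weights)
        norm += w
        w *= decay
    return score / norm if norm else 0.0

servers = {"s1": [{"success": .99, "uptime": .95, "perf": .8}] * 3,
           "s2": [{"success": .70, "uptime": .90, "perf": .9}] * 3}
ranked = sorted(servers, key=lambda s: trust_score(servers[s]), reverse=True)
```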
6. A Survey of Spark Scheduling Strategy Optimization Techniques and Development Trends
Authors: Chuan Li, Xuanlin Wen. Computers, Materials & Continua, 2025, No. 6, pp. 3843-3875.
Spark performs excellently in large-scale data-parallel computing and iterative processing. However, as data size and program complexity increase, the default scheduling strategy has difficulty meeting the demands of resource utilization and performance optimization, so scheduling strategy optimization has attracted widespread attention as a key direction for improving Spark's execution efficiency. This paper first introduces the basic theory of Spark, compares its default scheduling strategies, and discusses common scheduling performance evaluation indicators and the factors affecting scheduling efficiency. Existing scheduling optimization schemes are then summarized under three scheduling modes: load characteristics, cluster characteristics, and the matching of both; representative algorithms are analyzed in terms of performance indicators and applicable scenarios, and the advantages and disadvantages of the different modes are compared. The paper also explores the integration of Spark scheduling strategies with specific application scenarios and the challenges arising in production environments. Finally, the limitations of existing schemes are analyzed and future directions are outlined.
Keywords: Spark, scheduling optimization, load balancing, resource utilization, distributed computing
7. An Optimized Resource Scheduling Strategy for Hadoop Speculative Execution Based on Non-cooperative Game Schemes
Authors: Yinghang Jiang, Qi Liu, Williams Dannah, Dandan Jin, Xiaodong Liu, Mingxu Sun. Computers, Materials & Continua, SCIE/EI, 2020, No. 2, pp. 713-729.
Hadoop is a well-known parallel computing system for distributed computing and large-scale data processing. "Straggling" tasks, however, have a serious impact on task allocation and scheduling in a Hadoop system. Speculative Execution (SE) is an efficient method of handling straggling tasks: it monitors the real-time running status of tasks and selectively backs up stragglers on another node to increase the chance of completing the entire mission early. Existing speculative execution strategies suffer from misjudgement of straggling tasks and improper selection of backup nodes, which leads to inefficient speculative execution. This paper proposes an Optimized Resource Scheduling strategy for Speculative Execution (ORSE) based on non-cooperative game schemes. ORSE transforms the resource scheduling of backup tasks into a multi-party non-cooperative game in which the tasks are the players and the total task execution time of the entire cluster is the utility function. Each computing node then adopts its most beneficial strategy when the game reaches a Nash equilibrium point, which yields the final resource scheduling scheme. The strategy has been implemented in Hadoop-2.x. Experimental results show that ORSE maintains the efficiency of speculative execution and improves fault tolerance and computational performance under Normal Load, Busy Load, and Busy Load with Skewed Data.
Keywords: distributed computing, speculative execution, resource scheduling, non-cooperative game theory
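A minimal best-response sketch of the game-theoretic idea: each backup task repeatedly picks the node that minimizes its own completion time given the others' choices, stopping when no task wants to deviate, i.e. at a Nash equilibrium. The queueing cost model is an illustrative assumption, not ORSE's utility function.

```python
def best_response_schedule(task_sizes, node_speeds, iters=50):
    choice = {t: 0 for t in task_sizes}               # start all on node 0
    for _ in range(iters):
        changed = False
        for t, size in task_sizes.items():
            def finish(n):                            # toy cost: queue + run time
                queued = sum(task_sizes[u] for u, c in choice.items()
                             if c == n and u != t)
                return (queued + size) / node_speeds[n]
            best = min(range(len(node_speeds)), key=finish)
            if best != choice[t]:
                choice[t], changed = best, True
        if not changed:                               # Nash equilibrium reached
            break
    return choice

print(best_response_schedule({"t1": 4, "t2": 2, "t3": 6}, [1.0, 2.0]))
```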
8. Method for improving MapReduce performance by prefetching before scheduling
Authors: 张霄宏, Feng Shengzhong, Fan Jianping, Huang Zhexue. High Technology Letters, EI/CAS, 2012, No. 4, pp. 343-349.
In this paper, a prefetching technique is proposed to solve the performance problem caused by remote data access delay. The map tasks that would cause the delay are predicted first, and the input data of these tasks is preloaded before the tasks are scheduled, so that during execution the input data can be read from local nodes and the delay is hidden. The technique has been implemented in Hadoop-0.20.1. The experimental results show that the technique reduces the number of map tasks causing delay and improves the performance of Hadoop MapReduce by 20%.
Keywords: cloud computing, distributed computing, prefetching, MapReduce, scheduling
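The core idea can be shown in a few lines: before dispatch, predict where each map task will run, and preload any input split that is not already local to that node. The data structures below are simplified stand-ins, not Hadoop's actual interfaces.

```python
def plan_prefetch(pending_tasks, predicted_nodes, replica_locations):
    """pending_tasks: [(task, split)]; predicted_nodes: task -> node."""
    prefetch = []
    for task, split in pending_tasks:
        node = predicted_nodes[task]                  # where the task will run
        if node not in replica_locations[split]:      # remote read predicted
            prefetch.append((split, node))            # preload before dispatch
    return prefetch

plan = plan_prefetch(
    pending_tasks=[("m1", "blk_1"), ("m2", "blk_2")],
    predicted_nodes={"m1": "nodeA", "m2": "nodeB"},
    replica_locations={"blk_1": {"nodeA"}, "blk_2": {"nodeC"}})
# -> [("blk_2", "nodeB")]: only m2's input needs prefetching
```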
9. Self Organization Map for Clustering and Classification in the Ecology of Agent Organizations (Cited: 3)
Authors: Dimuthu Chandana Kelegama, LIU Li-hua, LIU Jian-qin. Journal of Central South University, SCIE/EI/CAS, 2000, No. 1, pp. 53-56.
Development of computational agent organizations or "societies" has become the dominant computing paradigm in Distributed Artificial Intelligence, and many foreseeable applications need agent organizations in which diversified agents cooperate in a distributed manner, forming teams. In such scenarios, the agents need to know each other in order to facilitate interaction. Moreover, agents in such an environment are not statically defined in advance; they can adaptively enter and leave an organization. This raises the question of how agents locate each other in order to cooperate on organizational goals. Locating agents is a challenging task, especially in organizations that involve a large number of agents and where resource availability is intermittent. The authors explore an approach based on the self-organization map (SOM), which serves as a clustering method over the knowledge gathered about the agents. The approach begins by categorizing agents using a selected set of agent properties; these categories are used to derive ranks and a distance matrix, which the SOM algorithm takes as input to obtain clusters of agents. These clusters reduce the search space, resulting in a relatively short agent search time.
Keywords: clustering, classification, agent organizations, agent societies, self-organizing, distributed computing
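A minimal SOM training loop over agent feature vectors, standing in for the clustering step described above; the grid size, learning schedule, and toy features are assumptions.

```python
import numpy as np

def train_som(X, grid=(4, 4), iters=500, lr0=0.5, sigma0=1.5):
    rng = np.random.default_rng(2)
    W = rng.random((grid[0], grid[1], X.shape[1]))          # codebook vectors
    coords = np.stack(np.meshgrid(*map(np.arange, grid), indexing="ij"), -1)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        d = np.linalg.norm(W - x, axis=2)
        bmu = np.unravel_index(np.argmin(d), grid)          # best matching unit
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 1e-3
        g = np.exp(-np.sum((coords - bmu) ** 2, axis=2) / (2 * sigma ** 2))
        W += lr * g[..., None] * (x - W)                    # pull neighborhood
    return W

X = np.random.default_rng(3).random((100, 5))               # agent feature rows
W = train_som(X)
```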
10. PRI: A Periodically Receiver-Initiated Task Scheduling Algorithm
Authors: 石威. High Technology Letters, EI/CAS, 2000, No. 1, pp. 10-15.
Task scheduling is a key problem in distributed computation. This paper analyzes the receiver-initiated (RI) task scheduling algorithm, identifies its weakness, and presents an improved algorithm, PRI. The algorithm schedules concurrent tasks onto a network of workstations dynamically at runtime, with task scheduling initiated by lightly loaded nodes. The threshold on each node is adjusted according to system information that is periodically collected, and the collection period itself adapts to changes in the system state. Experimental results show that the PRI algorithm is superior to the RI algorithm.
Keywords: task scheduling, distributed computation, receiver-initiated, network of workstations, runtime, low load, threshold
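A minimal sketch of periodic receiver-initiated balancing: in each period, lightly loaded nodes pull work from the most loaded node, and the per-node thresholds adapt to the observed load. The adaptation rules are illustrative assumptions, not the paper's formulas.

```python
def balance_step(loads, thresholds):
    donor = max(loads, key=loads.get)                  # most loaded node
    for node, load in list(loads.items()):
        if load < thresholds[node] and loads[donor] - load > 1:
            moved = (loads[donor] - load) // 2         # receiver pulls tasks
            loads[donor] -= moved
            loads[node] += moved
            donor = max(loads, key=loads.get)
    # adapt thresholds toward the current mean load for the next period
    mean = sum(loads.values()) / len(loads)
    for node in thresholds:
        thresholds[node] = 0.5 * thresholds[node] + 0.5 * mean
    return loads, thresholds

loads = {"n1": 12, "n2": 1, "n3": 5}
thresholds = {n: 4.0 for n in loads}
for _ in range(3):                                     # three scheduling periods
    loads, thresholds = balance_step(loads, thresholds)
```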
11. DSparse: A Distributed Training Method for Edge Clusters Based on Sparse Update
Authors: Xiao-Hui Peng, Yi-Xuan Sun, Zheng-Hui Zhang, Yi-Fan Wang. Journal of Computer Science & Technology, 2025, No. 3, pp. 637-653.
Edge machine learning creates a new computational paradigm by enabling the deployment of intelligent applications at the network edge. It enhances application efficiency and responsiveness by performing inference and training closer to data sources, but it encounters several challenges in practice. Variance in hardware specifications and performance across devices is a major issue for training and inference tasks, and edge devices typically possess limited network bandwidth and computing resources compared with data centers. Moreover, existing distributed training architectures often fail to consider the resource and communication-efficiency constraints of edge environments. In this paper, we propose DSparse, a distributed training method based on sparse update for edge clusters with varying memory capacities, which aims to maximize the utilization of memory resources across all devices in a cluster. To reduce memory consumption during training, sparse update is adopted to prioritize updating selected layers on each device, which lowers memory usage and also reduces the parameter volume and the time required for parameter aggregation. Furthermore, DSparse uses a parameter aggregation mechanism based on multi-process groups, subdividing aggregation into AllReduce and Broadcast operations and thereby further reducing the communication frequency of parameter aggregation. Experimental results using the MobileNetV2 model on the CIFAR-10 dataset show that DSparse reduces memory consumption by an average of 59.6% across seven devices and parameter aggregation time by 75.4%, while maintaining model precision.
Keywords: distributed training, edge computing, edge machine learning, sparse update, edge cluster
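A minimal sketch of the memory-aware side of sparse update: each device freezes most layers and trains only the layers its memory budget allows, so only those layers join gradient aggregation. The budgets, per-layer costs, and selection rule below are illustrative assumptions, not DSparse's actual layer-selection algorithm.

```python
def select_trainable_layers(layer_costs, budget_mb, prefer_last=True):
    """layer_costs: [(name, update_cost_mb)] in forward order."""
    order = reversed(layer_costs) if prefer_last else iter(layer_costs)
    chosen, used = [], 0.0
    for name, cost in order:          # later layers usually matter most
        if used + cost <= budget_mb:
            chosen.append(name)
            used += cost
    return chosen

layers = [("conv1", 40), ("block1", 120), ("block2", 160), ("head", 20)]
for device, budget in [("small_dev", 64), ("large_dev", 200)]:
    print(device, select_trainable_layers(layers, budget))
# Only the chosen layers join the AllReduce; frozen layers can be broadcast
# once, cutting both memory use and aggregation traffic.
```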
12. Optimal Scheduling of Distribution System with Edge Computing and Data-driven Modeling of Demand Response (Cited: 2)
Authors: Jianpei Han, Nian Liu, Jiaqi Shi. Journal of Modern Power Systems and Clean Energy, SCIE/EI/CSCD, 2022, No. 4, pp. 989-999.
High penetration of renewable energy enlarges the peak-valley difference of the net load of the distribution system, which places higher requirements on operation scheduling. From the perspective of leveraging demand-side adjustment capabilities, an optimal scheduling method for the distribution system with edge computing and data-driven modeling of price-based demand response (PBDR) is proposed. By introducing the edge computing paradigm, a collaborative interaction framework between the control center and the edge nodes is designed for the optimization of the distribution system. At the edge nodes, a classified XGBoost-based PBDR modeling method is proposed for large-scale differentiated users. At the control center, a two-stage optimization method integrating pre-scheduling and re-scheduling is proposed based on the demand response results from all edge nodes. Through the information interaction between the control center and edge nodes, optimized scheduling of a distribution system with large-scale users is realized. Finally, a case study on the modified IEEE 33-node system verifies that the proposed classified modeling method has lower errors and improves the economics of system operation. Moreover, the simulation results show that edge computing significantly reduces the calculation time of the optimal scheduling problem with PBDR modeling of large-scale users.
Keywords: demand response, distribution system, edge computing, optimal scheduling, XGBoost
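As a rough sketch of "classified" PBDR modeling, the code below groups users into classes and fits one XGBoost regressor per class mapping price signals to load response. The synthetic data, feature layout, and per-class training target are assumptions, not the paper's feature set.

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(4)
n_users, classes = 300, 3
user_class = rng.integers(classes, size=n_users)
price = rng.uniform(0.2, 1.0, size=(n_users, 24))          # day-ahead prices
# synthetic response: each class has a different price elasticity
response = -np.array([0.5, 1.0, 2.0])[user_class, None] * price \
           + rng.normal(0, 0.05, (n_users, 24))

models = {}
for c in range(classes):                                   # one model per class
    idx = user_class == c
    models[c] = XGBRegressor(n_estimators=50, max_depth=3)
    models[c].fit(price[idx], response[idx].mean(axis=1))  # toy daily-mean target

predicted = models[1].predict(price[:5])   # class-1 model on five price profiles
```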
13. On/Off-Line Prediction Applied to Job Scheduling on Non-Dedicated NOWs (Cited: 1)
Authors: Mauricio Hanzich, Porfidio Hernandez, Francesc Gine, Francesc Solsona, Josep L. Lerida. Journal of Computer Science & Technology, SCIE/EI/CSCD, 2011, No. 1, pp. 99-116.
This paper proposes a prediction engine designed for non-dedicated clusters that estimates the turnaround time of parallel applications, even in the presence of the workstation owner's serial workload. The engine can be configured with three different estimation kernels: a Historical kernel, a Simulation kernel based on analytical models, and an integration of both named the Hybrid kernel. These estimation proposals were integrated into a scheduling system, named CISNE, which can be executed in on-line or off-line mode. The accuracy of the proposed estimation methods was evaluated with different job scheduling policies in both a real and a simulated cluster environment. In both environments, the Hybrid kernel gave the best results because it combines the ability of a simulation engine to capture the dynamism of a non-dedicated environment with the accuracy of historical methods in estimating application runtime from the state of the resources.
Keywords: prediction method, non-dedicated cluster, cluster computing, job scheduling, simulation
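The Hybrid kernel's blending idea can be sketched in a few lines: combine a historical estimate (mean runtime of similar past jobs) with an analytical one (remaining work over the CPU share left free by the owner's workload). The blending weight and cost model are illustrative assumptions, not CISNE's kernels.

```python
def hybrid_estimate(history, work, cpu_share_free, weight=0.5):
    analytical = work / max(cpu_share_free, 1e-6)     # simulation-style model
    if not history:                                   # no past data: model only
        return analytical
    historical = sum(history) / len(history)
    return weight * historical + (1 - weight) * analytical

est = hybrid_estimate(history=[118.0, 131.0], work=60.0, cpu_share_free=0.5)
# blends ~124.5 s (history) with 120 s (analytical model) -> ~122.25 s
```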
14. Consolidated cluster systems for data centers in the cloud age: a survey and analysis (Cited: 2)
Authors: Jian Lin, Li Zha, Zhiwei Xu. Frontiers of Computer Science, SCIE/EI/CSCD, 2013, No. 1, pp. 1-19.
In the cloud age, heterogeneous application modes on large-scale infrastructures bring challenges in resource utilization and manageability to data centers. Many resource and runtime management systems have been developed or evolved to address these challenges and related problems from different perspectives. This paper identifies the main motivations, key concerns, common features, and representative solutions of such systems through a survey and analysis. A typical kind of these systems is generalized as the consolidated cluster system, whose design goal is identified as reducing overall costs under a quality-of-service premise. A survey of such systems is given, and their critical issues are summarized as resource consolidation and runtime coordination. These two issues are analyzed and classified according to the design styles and external characteristics abstracted from the surveyed work. Five representative consolidated cluster systems from academia and industry are illustrated and compared in detail based on this analysis and classification. We hope this survey and analysis will aid both the design and the technology selection of such systems, in response to the constantly emerging challenges of infrastructure and application management in data centers.
Keywords: data center, cloud computing, distributed resource management, consolidated cluster system, resource consolidation, runtime coordination
15. Resource Load Prediction of Internet of Vehicles Mobile Cloud Computing
Authors: Wenbin Bi, Fang Yu, Ning Cao, Russell Higgs. Computers, Materials & Continua, SCIE/EI, 2022, No. 10, pp. 165-180.
Load time-series data in mobile cloud computing for the Internet of Vehicles (IoV) usually have composite linear and nonlinear characteristics. To accurately describe the dynamic trend of such loads, this study designs a load prediction method based on the resource scheduling model for IoV mobile cloud computing. First, a chaotic analysis algorithm processes the load time series and constructs learning samples for load prediction. Second, a support vector machine (SVM) is used to establish the load prediction model, and an improved artificial bee colony (IABC) function is designed to enhance the SVM's learning ability. Finally, a CloudSim simulation platform is built using per-minute CPU load history from a mobile cloud computing system composed of 50 vehicles as the data set, and a comparison experiment is conducted against a grey model, a back-propagation neural network, a radial basis function (RBF) neural network, and an SVM with an RBF kernel. The experimental results show that the prediction accuracy of the proposed method is significantly higher than that of the other models, with markedly reduced real-time prediction error for resource load in mobile cloud environments. Compared with single-prediction models, the proposed method can build multidimensional time series that capture complex load behavior, fit and describe load trends, approximate load variability more precisely, and give load prediction models for mobile cloud computing resources strong generalization ability.
Keywords: Internet of Vehicles, mobile cloud computing, resource load prediction, multi-distributed resource computing scheduling, chaos analysis algorithm, improved artificial bee colony function
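A minimal sketch of the SVM-plus-bee-colony combination: embed the load series with lagged inputs, then let a tiny artificial-bee-colony-style loop tune the SVR hyperparameters against validation error. The window size, colony settings, and synthetic series are assumptions, not the paper's IABC design.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(9)
load = np.sin(np.arange(300) / 10) + 0.1 * rng.standard_normal(300)
X = np.column_stack([load[i:-5 + i] for i in range(5)])   # 5 lagged features
y = load[5:]
Xtr, ytr, Xva, yva = X[:200], y[:200], X[200:], y[200:]

def error(params):
    C, gamma = params
    model = SVR(C=C, gamma=gamma).fit(Xtr, ytr)
    return np.mean((model.predict(Xva) - yva) ** 2)

# tiny ABC-style search: employed bees perturb food sources, keep improvements
foods = rng.uniform([0.1, 0.001], [100, 1.0], size=(8, 2))
costs = np.array([error(f) for f in foods])
for _ in range(15):
    for i in range(len(foods)):
        trial = np.clip(foods[i] + rng.normal(0, 0.1) * foods[i], 1e-3, 100)
        c = error(trial)
        if c < costs[i]:
            foods[i], costs[i] = trial, c
best_C, best_gamma = foods[np.argmin(costs)]
```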
16. Research on Multi-core Task Offloading Scheduling and Resource Adaptation in Edge Computing Networks (Cited: 2)
Authors: 李金, 樊腾飞, 高红亮, 刘科孟, 谢虎. 《兵工自动化》 (Ordnance Industry Automation), PKU Core, 2025, No. 3, pp. 29-34.
To solve task offloading problems in edge computing networks, key mobile edge technologies are studied. A distributed architecture for edge-node computing is designed, and, drawing on the quantum particle swarm optimization algorithm and container technology, a task offloading optimization strategy based on an edge gateway architecture is formed. The strategy is evaluated in simulation experiments that vary the number and size of computing tasks and analyze offloading latency and energy consumption. The results show that the strategy effectively reduces task offloading latency and energy consumption, makes full use of edge-node resources, and achieves good resource adaptation.
Keywords: edge node, edge computing cluster, distributed architecture, task offloading, resource adaptation
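A minimal quantum-behaved PSO sketch for a binary offload decision (run each task locally or offload it to the edge gateway), in the spirit of the strategy described above. The cost model and all constants are illustrative assumptions.

```python
import numpy as np

def qpso_offload(cost, n_tasks, swarm=20, iters=100, rng=np.random.default_rng(7)):
    X = rng.random((swarm, n_tasks))                     # continuous positions
    pbest, pcost = X.copy(), np.array([cost(x > 0.5) for x in X])
    for t in range(iters):
        g = pbest[np.argmin(pcost)]                      # global best
        mbest = pbest.mean(axis=0)                       # mean best position
        beta = 1.0 - 0.5 * t / iters                     # contraction factor
        phi = rng.random(X.shape)
        p = phi * pbest + (1 - phi) * g                  # local attractor
        u = 1.0 - rng.random(X.shape)                    # u in (0, 1]
        X = p + np.where(rng.random(X.shape) < 0.5, 1, -1) \
            * beta * np.abs(mbest - X) * np.log(1 / u)
        X = np.clip(X, 0, 1)
        c = np.array([cost(x > 0.5) for x in X])
        improved = c < pcost
        pbest[improved], pcost[improved] = X[improved], c[improved]
    return pbest[np.argmin(pcost)] > 0.5                 # True = offload task

sizes = np.random.default_rng(8).uniform(1, 5, 12)
# toy cost: offloading pays 0.5 transfer but runs 4x faster than locally
plan = qpso_offload(lambda off: np.where(off, sizes / 4 + 0.5, sizes).sum(), 12)
```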
17. Intelligent Scheduling of a Distributed PV-EV Complementary System Based on a Deep Reinforcement Learning Algorithm (Cited: 1)
Authors: 陈宁, 李法社, 王霜, 张慧聪, 唐存靖, 倪梓皓. 《高电压技术》 (High Voltage Engineering), PKU Core, 2025, No. 3, pp. 1454-1463.
To address the impact that large-scale grid integration of distributed photovoltaics (PV) and electric vehicles (EVs) has on the power system, a distributed PV-EV complementary scheduling model is established with the goals of smoothing PV grid-connection fluctuations and increasing the economic benefit for EV users. Taking into account the randomness of PV output, load power fluctuations, the randomness of EV plug-in times and battery levels, real-time electricity prices, and battery aging costs, an improved proximal policy optimization algorithm with gradient random perturbation (GRP-PPO) is proposed to solve the model, and two real-time operation strategies with different optimization objectives are obtained by adjusting the model's objective function. A case study shows that the real-time scheduling strategies effectively smooth power fluctuations at the point of common coupling, with scheduling performance 3.48% better than the conventional PPO algorithm. Strategy 1 puts users' travel demand and the smoothing of grid-connection power fluctuations first, guaranteeing users' 24-hour driving needs while keeping the power stability rate at the point of common coupling at 91.84%. Strategy 2 puts user economic benefit first: EVs participating in scheduling over a whole day can earn up to 82.6 yuan, which encourages users to take part in scheduling.
Keywords: distributed photovoltaics, electric vehicle, V2G, deep reinforcement learning, real-time scheduling, proximal policy optimization
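The gradient-random-perturbation idea attributed to GRP-PPO can be sketched independently of any RL framework: add small, decaying Gaussian noise to the gradients before each optimizer step to help escape local optima. The noise schedule below is an assumption, not the paper's formulation.

```python
import numpy as np

def perturb_gradients(grads, step, scale0=0.01, decay=0.999,
                      rng=np.random.default_rng(5)):
    # Decaying perturbation: strong exploration early, near-exact gradients late.
    scale = scale0 * decay ** step
    return [g + rng.normal(0.0, scale, size=g.shape) for g in grads]

grads = [np.ones((3, 3)), np.ones(3)]        # stand-ins for policy-net gradients
noisy = perturb_gradients(grads, step=100)   # apply before the optimizer update
```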
18. An Elastic Resource Scheduling Method for Multi-cluster Container Clouds Based on Edge Computing
Authors: 李金, 刘科孟, 高红亮, 樊腾飞, 谢虎. 《兵工自动化》 (Ordnance Industry Automation), PKU Core, 2025, No. 5, pp. 42-46, 60.
To address the problems that centralized cloud computing cannot supply the computing bandwidth needed for massive edge data and cannot guarantee application privacy and real-time performance, this paper systematically analyzes the temporal variation of edge container-cloud workloads under multi-cluster conditions and the differing latency-sensitivity requirements of edge applications, and presents a multi-cluster edge cloud architecture managed by a master-slave model. Resource allocation for latency-sensitive applications is studied in depth and compared against existing reactive policies. For latency-sensitive applications, capacity is expanded proactively on the rising edge of the load, or contracted with a lag when load falls, so that application quality requirements are actually met. The results show that a distributed edge computing model improves the computing capability of clouds sunk close to the applications, reduces the computational load on the cloud center itself, and relieves bandwidth pressure on the core backbone network.
Keywords: edge computing, resource scheduling, latency sensitivity, distributed model
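A minimal sketch of the scaling policy described above: expand replicas proactively while load is rising, and shrink only after load has stayed low for a hold-off period. All thresholds are illustrative assumptions.

```python
def elastic_decision(load_history, replicas, up=0.7, down=0.3, hold=3):
    cur = load_history[-1]
    rising = len(load_history) >= 2 and cur > load_history[-2]
    if cur > up or (rising and cur > up * 0.8):        # proactive expansion
        return replicas + 1
    if len(load_history) >= hold and all(l < down for l in load_history[-hold:]):
        return max(1, replicas - 1)                    # lagged contraction
    return replicas

replicas = 2
for window in ([.2, .5, .6], [.6, .6, .75], [.25, .2, .1]):
    replicas = elastic_decision(window, replicas)      # hold, grow, then shrink
```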
19. A Distributed Energy Cluster Scheduling Method Based on the EA-RL Algorithm
Authors: 程小华, 王泽夫, 曾君, 曾婧瑶, 谭豪杰. 《华南理工大学学报(自然科学版)》 (Journal of South China University of Technology, Natural Science Edition), PKU Core, 2025, No. 1, pp. 1-9.
Most existing research on distributed energy cluster scheduling is limited to a single scenario and lacks efficient, accurate algorithms. To address these problems, this paper proposes a multi-scenario scheduling method for distributed energy clusters based on evolutionary-algorithm-guided deep reinforcement learning (EA-RL). Individual models are built for the generation units, energy storage, and loads in the cluster, and a multi-scenario optimal scheduling model covering auxiliary peak and frequency regulation is established on top of them. Within an evolutionary reinforcement learning framework, the proposed EA-RL algorithm fuses a genetic algorithm (GA) with deep deterministic policy gradient (DDPG): experience sequences serve as GA individuals for crossover, mutation, and selection, and the selected high-quality experiences are added to the DDPG replay buffer to guide agent training, improving search efficiency and convergence. The state and action spaces of the multi-scenario scheduling problem are constructed from the scheduling model, and the reward function is built from minimizing scheduling cost, deviation from auxiliary-service dispatch commands, tie-line power limit violations, and the source-load power mismatch, completing the reinforcement learning model. To verify the proposed algorithm and model, the scheduling agent is trained offline on multi-scenario simulation cases so that it can adapt to multiple grid scenarios, then validated through online decision-making; its decision results are used to assess its scheduling capability, and comparison with DDPG confirms the algorithm's effectiveness. Finally, the trained agent is tested over 60 consecutive days of online decision-making with disturbances of varying severity, verifying its long-term effectiveness and robustness.
Keywords: distributed energy cluster, deep reinforcement learning, evolutionary reinforcement learning algorithm, multi-scenario integrated scheduling
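A minimal sketch of the EA-RL coupling described above: stored experience sequences act as GA individuals, are scored by cumulative reward, and the fittest (plus their crossover offspring) become candidates for the DDPG replay buffer. Data shapes and the fitness are toy assumptions.

```python
import random

def evolve_experiences(sequences, keep=4):
    """sequences: list of episodes, each a list of (s, a, r, s2) tuples."""
    scored = sorted(sequences, key=lambda ep: sum(t[2] for t in ep), reverse=True)
    elite = scored[:keep]                      # selection by cumulative reward
    children = []
    for _ in range(keep):                      # one-point crossover of episodes
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, min(len(a), len(b)))
        children.append(a[:cut] + b[cut:])
    return elite + children                    # candidates for the replay buffer

episodes = [[(0, 0, random.random(), 0) for _ in range(10)] for _ in range(8)]
replay_candidates = evolve_experiences(episodes)
```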
20. A Multi-objective Risk Scheduling Model for Real-time Flood Control of Reservoir Groups Based on Cloud Computing
Authors: 陈娟, 张璐, 孙飞飞, 邓如霞, 钟平安. 《水利学报》 (Journal of Hydraulic Engineering), PKU Core, 2025, No. 7, pp. 885-897.
Real-time flood-control scheduling of reservoir groups is an uncertain scheduling problem affected by many factors, and studying multi-objective risk scheduling for real-time flood control is of great significance for basin flood safety. Considering the uncertainty of hydrological forecast errors, a multi-objective risk scheduling model for real-time flood control of reservoir groups is established with the objectives of minimizing the highest water level of each reservoir and minimizing the peak discharge through the downstream flood-control section. To tackle the "curse of dimensionality" in complex reservoir groups, the model is solved on a cloud computing platform using the multi-objective golden eagle optimizer (MOGEO), and the solution time is optimized from four angles: the intelligent optimization algorithm, risk-factor simulation, parallel computing, and cloud computing, meeting the high timeliness requirements of real-time flood-control scheduling. An improved point-cloud voxel downsampling method is then proposed to extract optimal scheduling schemes from the spatial distribution of the non-dominated solution set. A case study of the Shiguan River basin in the upper-middle reaches of the Huai River shows that MOGEO adapts well to flood-control scheduling: the model solution time is shortened from 1542 s with the third-generation non-dominated sorting genetic algorithm (NSGA-III) to 830 s, converging rapidly to the Pareto front; risk-factor simulation based on improved Latin hypercube sampling cuts computation time by two thirds while keeping sampling accuracy reliable; and solving on a distributed cloud cluster takes 113 s, one sixth of the time of 12-core parallel computing on a single cloud server and one thirtieth of serial computing, greatly improving solution efficiency.
Keywords: reservoir group, real-time flood control scheduling, uncertainty, cloud computing, distributed cluster
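A minimal sketch of voxel downsampling applied to a Pareto front: bucket non-dominated solutions into objective-space voxels and keep one representative per voxel, thinning dense regions while preserving spread. The voxel size and representative rule are assumptions about the paper's improved method.

```python
import numpy as np

def voxel_downsample(points, voxel=0.1):
    keys = np.floor(points / voxel).astype(int)   # voxel index per solution
    reps = {}
    for key, p in zip(map(tuple, keys), points):
        reps.setdefault(key, p)                   # first point claims the voxel
    return np.array(list(reps.values()))

pareto = np.random.default_rng(6).random((200, 2))   # toy two-objective points
subset = voxel_downsample(pareto, voxel=0.15)
```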