Journal Articles
264,022 articles found
1. Shear Restoring Force Model for the Core Region of Corroded RC Frame Joints
Authors: 郑山锁, 刘立国, 董立国, 杨松, 李健, 丛峻 《建筑科学与工程学报》 (PKU Core) 2026, No. 1, pp. 162-172 (11 pages)
To support seismic analysis of in-service RC frame structures under general atmospheric conditions, artificial accelerated corrosion and quasi-static loading tests were conducted on 12 RC frame joints to investigate the effects of corrosion level and axial compression ratio on joint failure mode, hysteretic behavior, and core-region shear performance. Based on multi-parameter regression analysis of the test results, a shear restoring force model for the core region of corroded RC joints was established, and a numerical model of the corroded RC joint subassembly was built in OpenSEES using the Joint2D element and fiber beam-column elements. The results show that all corroded RC joints failed in shear in the joint core region, and that increasing corrosion level and axial compression ratio weakens the load-carrying and deformation capacity of RC joints and their core regions, degrading seismic performance. The proposed shear restoring force model comprehensively captures the shear hysteretic characteristics of RC joint core regions across different corrosion levels and axial compression ratios. For the subassembly numerical model, the relative error is generally within 10% for load, within 20% for deformation, and within 30% for cumulative energy dissipation at final failure. The numerical model built on the proposed restoring force model accurately simulates the hysteretic behavior of corroded RC joints under cyclic loading and can be used for seismic analysis and assessment of RC joints and frames in general atmospheric environments.
Keywords: corroded RC frame joints; shear restoring force model; seismic performance; OpenSEES
2. Effect of CFRP Sheet Ratio on the Pure Torsional Behavior and Size Effect of CFRP-Strengthened RC Beams without Web Reinforcement
Authors: 张江兴, 李冬, 金浏, 杜修力 《工程力学》 (PKU Core) 2026, No. 2, pp. 105-114 (10 pages)
Studying the size effect in the torsion of concrete structures is important for the safe design of members against torsion. Accounting for the heterogeneity of concrete and its bond-slip behavior with both steel reinforcement and CFRP sheets, a three-dimensional mesoscale numerical model of the pure torsional failure of CFRP-strengthened RC beams was established, and the influence of the CFRP sheet ratio on the pure torsional behavior and size effect of strengthened RC beams was systematically investigated. The results show that increasing the CFRP sheet ratio not only effectively improves the nominal torsional strength of RC beams but also weakens the size effect on that strength. Based on the numerical results, a pure-torsion size effect law is proposed that quantitatively describes the influence of the CFRP sheet ratio on the size effect of the nominal torsional strength of RC beams.
Keywords: CFRP-strengthened RC beams; torsional failure; CFRP sheet ratio; size effect; mesoscale simulation
3. Dynamic Task Offloading Scheme for Edge Computing via Meta-Reinforcement Learning (Cited: 1)
Authors: Jiajia Liu, Peng Xie, Wei Li, Bo Tang, Jianhua Liu 《Computers, Materials & Continua》 2025, No. 2, pp. 2609-2635 (27 pages)
As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce the latency and energy consumption of edge computing, deep learning is used to learn task offloading strategies by interacting with the entities. In actual application scenarios, the users of edge computing change dynamically, and existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing that leverages the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDPs) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of the proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.
Keywords: edge computing; adaptive; meta-learning; task offloading; joint optimization
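The abstract above ranks candidate offloading plans by two objectives, delay and energy, using NSGA-II. The core of NSGA-II is non-dominated sorting, which can be sketched in a few lines; the plan data below are illustrative, not taken from the paper.

```python
# Minimal sketch of the non-dominated sorting step NSGA-II uses to rank
# candidate offloading plans by (delay, energy). Smaller is better in both.

def dominates(a, b):
    """True if plan a is no worse than b in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    """Partition objective vectors into Pareto fronts (front 0 = best)."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Each tuple is (delay_ms, energy_mJ) for one hypothetical offloading plan.
plans = [(10, 9), (8, 12), (12, 7), (11, 11), (9, 10)]
fronts = non_dominated_fronts(plans)
# plan (11, 11) is dominated by (10, 9), so it falls to the second front
```

NSGA-II then selects survivors front by front, breaking ties with crowding distance, which this sketch omits.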
4. Near-Sensor Edge Computing System Enabled by a CMOS-Compatible Photonic Integrated Circuit Platform Using Bilayer AlN/Si Waveguides (Cited: 1)
Authors: Zhihao Ren, Zixuan Zhang, Yangyang Zhuge, Zian Xiao, Siyu Xu, Jingkai Zhou, Chengkuo Lee 《Nano-Micro Letters》 2025, No. 11, pp. 1-20 (20 pages)
The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). This system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
Keywords: photonic integrated circuits; edge computing; aluminum nitride; neural networks; wearable sensors
5. CBBM-WARM: A Workload-Aware Meta-Heuristic for Resource Management in Cloud Computing (Cited: 1)
Authors: K. Nivitha, P. Pabitha, R. Praveen 《China Communications》 2025, No. 6, pp. 255-275 (21 pages)
The rapid advances in artificial intelligence and big data have made the demand for computing resources to execute specific tasks in the cloud environment highly dynamic. Achieving autonomic resource management is a herculean task owing to the hugely distributed and heterogeneous environment. Moreover, the cloud network needs to provide autonomic resource management and deliver potential services to clients by complying with Quality-of-Service (QoS) requirements without impacting Service Level Agreements (SLAs). However, existing autonomic cloud resource management frameworks are not capable of handling cloud resources with their dynamic requirements. In this paper, a Coot Bird Behavior Model-based Workload-Aware Autonomic Resource Management Scheme (CBBM-WARMS) is proposed for handling the dynamic requirements of cloud resources by estimating the workload that the cloud environment needs to police. CBBM-WARMS first adopts an adaptive density peak clustering algorithm for clustering cloud workloads. It then applies fuzzy logic during workload scheduling to determine the availability of cloud resources. It further uses the CBBM for virtual machine (VM) deployment that provisions optimal resources. The scheme is designed to achieve optimal QoS with minimized time, energy consumption, SLA cost, and SLA violations. Experimental validation confirms a minimized SLA cost of 19.21% and a reduced SLA violation rate of 18.74%, better than the compared autonomic cloud resource management frameworks.
Keywords: autonomic resource management; cloud computing; coot bird behavior model; SLA violation cost; workload
6. Providing Robust and Low-Cost Edge Computing in Smart Grid: An Energy Harvesting Based Task Scheduling and Resource Management Framework (Cited: 1)
Authors: Xie Zhigang, Song Xin, Xu Siyang, Cao Jing 《China Communications》 2025, No. 2, pp. 226-240 (15 pages)
Recently, one of the main challenges facing the smart grid is insufficient computing resources and intermittent energy supply for various distributed components (such as monitoring systems for renewable energy power stations). To solve this problem, we propose an energy harvesting based task scheduling and resource management framework to provide robust and low-cost edge computing services for the smart grid. First, we formulate an energy consumption minimization problem with regard to task offloading, time switching, and resource allocation for mobile devices, which can be decoupled and transformed into a typical knapsack problem. Then, solutions are derived by two different algorithms. Furthermore, we deploy renewable energy and energy storage units at edge servers to tackle intermittency and instability problems. Finally, we design an energy management algorithm based on sampling average approximation for edge computing servers to derive the optimal charging/discharging strategies, the number of energy storage units, and renewable energy utilization. The simulation results show the efficiency and superiority of the proposed framework.
Keywords: edge computing; energy harvesting; energy storage unit; renewable energy; sampling average approximation; task scheduling
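The abstract above notes that the decoupled offloading subproblem transforms into a typical knapsack problem. A minimal sketch of that reduction, with illustrative weights, savings, and capacity (not the paper's coefficients):

```python
# Hedged sketch: choose which tasks to offload under a capacity budget so
# the total energy saving is maximized, via the classic 0/1 knapsack DP.

def knapsack(weights, values, capacity):
    """Returns the best total value achievable within the capacity."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # iterate capacity downward so each task is offloaded at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

weights = [2, 3, 4, 5]   # e.g. channel/CPU slots each offloaded task needs
values = [3, 4, 5, 6]    # e.g. device-side energy each offload would save
max_saving = knapsack(weights, values, capacity=8)
# best choice here is the tasks with weights 3 and 5, saving 4 + 6 = 10
```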
7. A Comprehensive Study of Resource Provisioning and Optimization in Edge Computing
Authors: Sreebha Bhaskaran, Supriya Muthuraman 《Computers, Materials & Continua》 2025, No. 6, pp. 5037-5070 (34 pages)
Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) to enhance resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems. The survey concludes that SDN-enabled computing environments offer essential guidance for addressing upcoming resource management opportunities.
Keywords: cloud computing; edge computing; fog computing; resource provisioning; resource allocation; computation offloading; optimization techniques; software defined network
8. Quantum Inspired Adaptive Resource Management Algorithm for Scalable and Energy Efficient Fog Computing in the Internet of Things (IoT)
Authors: Sonia Khan, Naqash Younas, Musaed Alhussein, Wahib Jamal Khan, Muhammad Shahid Anwar, Khursheed Aurangzeb 《Computer Modeling in Engineering & Sciences》 2025, No. 3, pp. 2641-2660 (20 pages)
Effective resource management in the Internet of Things (IoT) and fog computing is essential for efficient and scalable networks. However, existing methods often fail in dynamic and high-demand environments, leading to resource bottlenecks and increased energy consumption. This study addresses these limitations by proposing the Quantum Inspired Adaptive Resource Management (QIARM) model, which introduces novel algorithms inspired by quantum principles for enhanced resource allocation. QIARM employs a quantum superposition-inspired technique for multi-state resource representation and an adaptive learning component to adjust resources dynamically in real time. In addition, an energy-aware scheduling module minimizes power consumption by selecting optimal configurations based on energy metrics. The simulation was carried out in a 360-minute environment with eight distinct scenarios. The proposed quantum-inspired resource management framework achieves up to 98% task offload success and reduces energy consumption by 20%, addressing critical challenges of scalability and efficiency in dynamic fog computing environments.
Keywords: quantum computing; resource management; energy efficiency; fog computing; Internet of Things
9. Machine Learning-Based Analysis of the Maximum Displacement Response of RC Slabs under Blast Loading
Authors: 于晓辉, 陈玉琛, 代旷宇 《建筑科学与工程学报》 (PKU Core) 2026, No. 1, pp. 85-94 (10 pages)
Existing experimental and numerical simulation results were collected to build a database of the displacement responses of 491 reinforced concrete (RC) slabs under blast loading. Ten influencing factors (slab length, width, thickness, concrete compressive strength, reinforcement yield strength, reinforcement ratio, boundary conditions, slab type, standoff distance, and charge weight) were used as input parameters, and nine machine learning methods in three categories were used to build prediction models for the maximum displacement response of RC slabs under blast loading. Interpretable machine learning methods, namely feature importance analysis, single-factor partial dependence analysis, and interaction dependence analysis, were used to interpret the models and to rank the importance of the factors influencing the maximum displacement response. The results show that the model based on particle swarm optimization-extreme gradient boosting (PSO-XGBoost) achieves the highest accuracy, exceeding that of the equivalent single-degree-of-freedom model recommended by design codes. Among the considered factors, charge weight, standoff distance, slab thickness, and reinforcement ratio have the most significant influence on the maximum displacement response of RC slabs under blast loading. Blast-resistant design of RC slabs should ensure a minimum thickness of 150 mm, a minimum reinforcement ratio of 1.5%, and a concrete strength of at least 50 MPa.
Keywords: RC slab; blast loading; maximum displacement response; interpretable machine learning; PSO-XGBoost
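The PSO-XGBoost model above uses particle swarm optimization to tune the booster's hyperparameters. The PSO loop itself is compact; the sketch below minimizes a toy quadratic objective instead of a cross-validation loss, and all coefficients and bounds are illustrative.

```python
import random

# Hedged sketch of the PSO optimizer: each particle tracks its own best
# position, the swarm tracks a global best, and velocities blend inertia
# with pulls toward both.

def pso(objective, dim, bounds, n_particles=20, iters=200, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective with its optimum 0 at (1, 2); in PSO-XGBoost this would be
# the validation error as a function of the booster's hyperparameters.
best, best_val = pso(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, dim=2, bounds=(-5, 5))
```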
10. Computational Offloading and Resource Allocation for Internet of Vehicles Based on UAV-Assisted Mobile Edge Computing System
Authors: Fang Yujie, Li Meng, Si Pengbo, Yang Ruizhe, Sun Enchang, Zhang Yanhua 《China Communications》 2025, No. 9, pp. 333-351 (19 pages)
As an essential element of intelligent transport systems, the Internet of Vehicles (IoV) has recently brought an immersive user experience. Meanwhile, the emergence of mobile edge computing (MEC) has enhanced the computational capability of vehicles, which effectively reduces task processing latency and power consumption and meets the quality-of-service requirements of vehicle users. However, there are still problems in the MEC-assisted IoV system, such as poor connectivity and high cost. Unmanned aerial vehicles (UAVs) equipped with MEC servers have become a promising approach for providing communication and computing services to mobile vehicles. Hence, in this article, an optimal framework for the UAV-assisted MEC system for IoV to minimize the average system cost is presented. Through joint consideration of computational offloading decisions and computational resource allocation, the optimization problem of the proposed architecture is formulated to reduce system energy consumption and delay. To tackle this issue, the original non-convex problem is converted into a convex one, and an alternating direction method of multipliers-based distributed optimal scheme is developed. The simulation results illustrate that the presented scheme dramatically enhances system performance with regard to other schemes, and the convergence of the proposed scheme is also significant.
Keywords: computational offloading; Internet of Vehicles; mobile edge computing; resource optimization; unmanned aerial vehicle
11. Joint Offloading Decision and Resource Allocation in Vehicular Edge Computing Networks
Authors: Shumo Wang, Xiaoqin Song, Han Xu, Tiecheng Song, Guowei Zhang, Yang Yang 《Digital Communications and Networks》 2025, No. 1, pp. 71-82 (12 pages)
With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have sprung up. To tackle the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by vehicle mobility and the unbalanced computing load of edge nodes pose challenges for ITS. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs), and Roadside Units (RSUs), and propose a distributed algorithm, namely PG-MRL, which jointly optimizes offloading decisions and resource allocation. In the first stage, the offloading decisions of TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm with centralized training and distributed execution, is proposed to optimize the real-time transmission power and subchannel selection. The simulation results show that the proposed PG-MRL algorithm achieves significant improvements in system delay over baseline algorithms.
Keywords: computation offloading; resource allocation; vehicular edge computing; potential game; multi-agent deep deterministic policy gradient
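The first stage above casts offloading as a potential game, where best-response dynamics are guaranteed to reach a pure Nash equilibrium. A toy congestion-game sketch of that idea, with an illustrative load-based cost model rather than the paper's exact formulation:

```python
# Hedged sketch: each task vehicle repeatedly best-responds by moving its
# task to the server where it would experience the least total load; in a
# congestion (potential) game these dynamics terminate at an equilibrium.

def best_response_offloading(task_loads, n_servers, max_rounds=100):
    """Assign each task to a server; iterate best responses until stable."""
    assign = [0] * len(task_loads)            # start with everyone on server 0
    server_load = [0.0] * n_servers
    for w in task_loads:
        server_load[0] += w
    for _ in range(max_rounds):
        changed = False
        for i, w in enumerate(task_loads):
            cur = assign[i]
            # cost task i would see on server s: total load there including itself
            costs = [server_load[s] + (0 if s == cur else w) for s in range(n_servers)]
            best = min(range(n_servers), key=lambda s: costs[s])
            if best != cur:
                server_load[cur] -= w
                server_load[best] += w
                assign[i] = best
                changed = True
        if not changed:                        # no one wants to deviate: equilibrium
            return assign, server_load
    return assign, server_load

assign, loads = best_response_offloading([4, 3, 3, 2, 2, 2], n_servers=2)
```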
12. DDPG-Based Intelligent Computation Offloading and Resource Allocation for LEO Satellite Edge Computing Network
Authors: Jia Min, Wu Jian, Zhang Liang, Wang Xinyu, Guo Qing 《China Communications》 2025, No. 3, pp. 1-15 (15 pages)
Low earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form a LEO satellite edge computing system that provides computing services for ground users worldwide. In this paper, the computation offloading and resource allocation problems are formulated as a mixed-integer nonlinear program (MINLP). A computation offloading algorithm based on the deep deterministic policy gradient (DDPG) is proposed to obtain the user offloading decisions and user uplink transmission power. A convex optimization algorithm based on the Lagrange multiplier method is used to obtain the optimal MEC server resource allocation scheme. In addition, an expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility value, at a considerable time cost, compared with other algorithms.
Keywords: computation offloading; deep deterministic policy gradient; low earth orbit satellite; mobile edge computing; resource allocation
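The Lagrange-multiplier step above has a classic closed form in the common case of splitting a server's total CPU frequency F to minimize the sum of processing delays c_i / f_i (c_i = cycles demanded by user i): stationarity gives f_i = F * sqrt(c_i) / sum_j sqrt(c_j). The cycle counts below are illustrative, not the paper's data.

```python
import math

# Hedged sketch of the closed-form allocation from the Lagrangian
# L = sum_i c_i/f_i + lam * (sum_i f_i - F); setting dL/df_i = 0 yields
# f_i proportional to sqrt(c_i), scaled so the allocations sum to F.

def allocate_cpu(cycles, total_freq):
    """Optimal split of total_freq across users' cycle demands."""
    roots = [math.sqrt(c) for c in cycles]
    s = sum(roots)
    return [total_freq * r / s for r in roots]

cycles = [1e9, 4e9, 9e9]        # CPU cycles needed by three users
alloc = allocate_cpu(cycles, total_freq=6e9)
# sqrt ratios are 1:2:3, so the split is 1e9, 2e9, 3e9 Hz
```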
13. A Two-Layer UAV Cooperative Computing Offloading Strategy Based on Deep Reinforcement Learning
Authors: Zhang Jianfei, Wang Zhen, Hu Yun, Chang Zheng 《China Communications》 2025, No. 10, pp. 251-268 (18 pages)
In the wake of major natural or human-made disasters, the communication infrastructure within disaster-stricken areas is frequently damaged. Unmanned aerial vehicles (UAVs), thanks to merits such as rapid deployment and high mobility, are commonly regarded as an ideal option for constructing temporary communication networks. Considering the limited computing capability and battery power of UAVs, this paper proposes a two-layer UAV cooperative computing offloading strategy for emergency disaster relief scenarios. The multi-agent twin delayed deep deterministic policy gradient (MATD3) algorithm integrated with prioritized experience replay (PER) is utilized to jointly optimize the scheduling strategies of UAVs, the task offloading ratios, and their mobility, aiming to minimize the energy consumption and delay of the system. To address this non-convex optimization problem, a Markov decision process (MDP) is established. Simulation results demonstrate that, compared with four baseline algorithms, the proposed algorithm exhibits better convergence performance, verifying its feasibility and efficacy.
Keywords: cooperative computational offloading; deep reinforcement learning; mobile edge computing; prioritized experience replay; two-layer; unmanned aerial vehicles
14. A Dynamic Workload Prediction and Distribution in Cloud Computing Using Deep Reinforcement Learning and LSTM
Authors: Nampally Vijay Kumar, Satarupa Mohanty, Prasant Kumar Pattnaik 《Journal of Harbin Institute of Technology (New Series)》 2025, No. 4, pp. 64-71 (8 pages)
Maintaining high-quality service supply and sustainability in modern cloud computing is essential to ensuring optimal system performance and energy efficiency. This study introduces a novel approach that decreases a system's overall delay and energy consumption by using a deep reinforcement learning (DRL) model to predict incoming workloads and allocate them flexibly. The proposed methodology integrates workload prediction using long short-term memory (LSTM) networks with efficient load-balancing techniques guided by deep Q-learning and actor-critic algorithms. By continuously analysing current and historical data, the model can allocate resources efficiently, prioritizing speed and energy preservation. The experimental results demonstrate that the DRL-based load balancing system significantly reduces average response times and energy usage compared to traditional methods. The approach provides a scalable and adaptable strategy for enhancing cloud infrastructure performance and consistently delivers reliable, durable performance across a range of dynamic workloads.
Keywords: DRL; LSTM; cloud computing; load balancing; Q-learning
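The Q-learning side of the scheme above can be illustrated with a table-sized toy: the state is which of two servers is lighter, the action is where to route the next job, and the reward favors the lighter server. This is a deliberately shrunk stand-in for the paper's deep Q-network; all rates and sizes are illustrative.

```python
import random

# Hedged toy sketch of Q-learning for load balancing, tabular instead of deep.
rng = random.Random(42)
ALPHA, EPSILON = 0.1, 0.2
Q = [[0.0, 0.0], [0.0, 0.0]]   # Q[state][action]; state = index of the lighter server

loads = [0.0, 0.0]
for _ in range(2000):
    state = 0 if loads[0] <= loads[1] else 1
    # epsilon-greedy action selection
    if rng.random() < EPSILON:
        action = rng.randrange(2)
    else:
        action = 0 if Q[state][0] >= Q[state][1] else 1
    reward = 1.0 if action == state else -1.0    # reward routing to the lighter server
    Q[state][action] += ALPHA * (reward - Q[state][action])  # bandit-style update
    loads[action] += rng.uniform(0.5, 1.5)       # the job lands on the chosen server

# Greedy policy after training: route to whichever server is currently lighter.
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(2)]
```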
15. Data Elements and Trustworthy Circulation: A Clearing and Settlement Architecture for Element Market Transactions Integrating Privacy Computing and Smart Contracts
Authors: Huanjing Huang 《Journal of Electronic Research and Application》 2025, No. 5, pp. 86-92 (7 pages)
This article explores the characteristics of data resources from the perspective of production factors, analyzes the demand for trustworthy circulation technology, and designs a fusion architecture and related solutions, including multi-party data intersection calculation and distributed machine learning. It also compares performance differences, conducts formal verification, points out the value and limitations of the architectural innovation, and looks ahead to future opportunities.
Keywords: data elements; privacy computing; smart contracts
16. Lightweight Deep Reinforcement Learning for Dynamic Resource Allocation in Vehicular Edge Computing
Authors: Dapeng Wu, Sijun Wu, Yaping Cui, Ailing Zhong, Tong Tang, Ruyan Wang, Xinqi Lin 《Digital Communications and Networks》 2025, No. 5, pp. 1530-1542 (13 pages)
Vehicular Edge Computing (VEC) enhances the quality of user services by deploying a wealth of resources near vehicles. However, due to the highly dynamic and complex nature of vehicular networks, centralized decision-making for resource allocation proves inadequate in VEC. Conversely, allocating resources via distributed decision-making consumes vehicular resources. To improve the quality of user service, we formulate a latency minimization problem and subdivide it into two subproblems to be solved through distributed decision-making. To mitigate the resource consumption caused by distributed decision-making, we propose a reinforcement learning (RL) algorithm based on a sequential alternating multi-agent system mechanism, which effectively reduces the dimensionality of the action space without losing the informational content of actions, achieving network lightweighting. We discuss the rationality, generalizability, and inherent advantages of the proposed mechanism. Simulation results indicate that the proposed mechanism outperforms traditional RL algorithms in terms of stability, generalizability, and adaptability to scenarios with invalid actions, all while achieving network lightweighting.
Keywords: vehicular edge computing; resource allocation; reinforcement learning; lightweight
17. SDVformer: A Resource Prediction Method for Cloud Computing Systems
Authors: Shui Liu, Ke Xiong, Yeshen Li, Zhifei Zhang, Yu Zhang, Pingyi Fan 《Computers, Materials & Continua》 2025, No. 9, pp. 5077-5093 (17 pages)
Accurate prediction of cloud resource utilization is critical: it helps improve service quality while avoiding resource waste and shortages. However, the time series of resource usage in cloud computing systems often exhibit multidimensionality, nonlinearity, and high volatility, making high-precision prediction of resource utilization a complex and challenging task. Current cloud computing resource prediction methods include traditional statistical models, hybrid approaches combining machine learning and classical models, and deep learning techniques. Traditional statistical methods struggle with nonlinear prediction, hybrid methods face challenges in feature extraction and long-term dependencies, and deep learning methods incur high computational costs; none achieve high-precision resource prediction in cloud computing systems. Therefore, we propose a new time series prediction model, called SDVformer, which builds on the Informer model by integrating Savitzky-Golay (SG) filters, a novel Discrete-Variation Self-Attention (DVSA) mechanism, and a type-aware mixture of experts (T-MOE) framework. The SG filter reduces noise and enhances the feature representation of the input data. The DVSA mechanism optimizes the selection of critical features to reduce computational complexity. The T-MOE framework adjusts the model structure based on different resource characteristics, improving prediction accuracy and adaptability. Experimental results on both the Alibaba public dataset and a dataset collected by Beijing Jiaotong University (BJTU) show that SDVformer significantly outperforms baseline models, including the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Informer, in prediction precision. Compared with the Informer model in particular, the average Mean Squared Error (MSE) of SDVformer decreases by about 80%, fully demonstrating its advantages in complex time series prediction tasks in cloud computing systems.
Keywords: cloud computing; time series prediction; DVSA; SG filter; T-MOE
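The SG filtering step above is just a fixed convolution: for window 5 and polynomial degree 2, the classic coefficients are (-3, 12, 17, 12, -3) / 35. A handy sanity check is that a degree-2 SG filter reproduces any quadratic signal exactly on interior points. Edge handling and the series below are illustrative simplifications.

```python
# Hedged sketch of Savitzky-Golay smoothing with the standard window-5,
# degree-2 coefficients; edge samples are left untouched for simplicity.

SG5_QUAD = [-3 / 35, 12 / 35, 17 / 35, 12 / 35, -3 / 35]

def sg_smooth(series):
    """Apply the window-5 quadratic SG filter to the interior of the series."""
    out = list(series)
    for i in range(2, len(series) - 2):
        out[i] = sum(c * series[i + k - 2] for k, c in enumerate(SG5_QUAD))
    return out

# A degree-2 SG filter passes quadratics through unchanged (interior points).
quadratic = [t * t for t in range(8)]
smoothed = sg_smooth(quadratic)
```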
18. Optimized Resource Allocation for Dual-Band Cooperation-Based Edge Computing Vehicular Network
Authors: Cheng Kaijun, Fang Xuming 《China Communications》 2025, No. 9, pp. 352-367 (16 pages)
With the miscellaneous applications generated in vehicular networks, computing performance cannot be satisfied owing to vehicles' limited processing capabilities. Besides, the low-frequency (LF) band cannot further improve network performance due to its limited spectrum resources. The high-frequency (HF) band has plentiful spectrum resources and is adopted as one of the operating bands in 5G. To achieve low latency and sustainable development, a task processing scheme is proposed for a dual-band cooperation-based vehicular network in which tasks are processed locally, at a macro-cell base station, or at a roadside unit through the LF or HF band to achieve stable and high-speed task offloading. Moreover, a utility function including latency and energy consumption is minimized by optimizing computing and spectrum resources, transmission power, and task scheduling. Owing to its non-convexity, an iterative optimization algorithm is proposed to solve the problem. Numerical results evaluate the performance and superiority of the scheme, showing that it can achieve efficient edge computing in vehicular networks.
Keywords: dual-band cooperation; edge computing; resource allocation; task processing; vehicular network
19. A Hierarchical Task Graph Parallel Computing Framework for Chemical Process Simulation
Authors: Shifeng Qu, Shaoyi Yang, Wenli Du, Zhaoyang Duan, Feng Qian, Meihong Wang 《Engineering》 2025, No. 8, pp. 229-239 (11 pages)
Sequential-modular process flowsheeting software remains an indispensable tool for process design, control, and optimization. Yet, as the process industry advances in intelligent operation and maintenance, conventional sequential-modular process-simulation techniques present challenges regarding computationally intensive calculations and significant central processing unit (CPU) time requirements, particularly in large-scale design and optimization tasks. To address these challenges, this paper proposes a novel process-simulation parallel computing framework (PSPCF). The framework achieves layered parallelism in recycling processes at the unit operation level. Notably, PSPCF formulates simulation problems as task graphs and utilizes Taskflow, an advanced task graph computing system, for hierarchical parallel scheduling and execution of unit operation tasks. PSPCF also integrates an advanced work-stealing scheme to automatically balance thread resources against the demanding workload of unit operation tasks. For evaluation, both a simpler parallel column process and a more complex cracked gas separation process were simulated on a flowsheeting platform using PSPCF. The framework demonstrates significant time savings, achieving over a 60% reduction in processing time for the simpler process and a 35%-40% speed-up for the more complex separation process.
Keywords: parallel computing; process simulation; task graph parallelism; sequential modular approach
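The hierarchical task-graph idea above can be sketched by layering a DAG of unit operations: every task in a level depends only on earlier levels, so all tasks within one level may run in parallel. The tiny flowsheet below (a feed splitting into two columns that merge into a mixer) is illustrative, not from the paper.

```python
from collections import defaultdict

# Hedged sketch: Kahn-style topological layering of a unit-operation DAG
# into groups of concurrently runnable tasks.

def parallel_levels(edges, tasks):
    """Return lists of tasks, level by level; tasks in one level can run in parallel."""
    indeg = {t: 0 for t in tasks}
    succ = defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    levels, ready = [], sorted(t for t in tasks if indeg[t] == 0)
    while ready:
        levels.append(ready)
        nxt = []
        for u in ready:
            for v in succ[u]:
                indeg[v] -= 1
                if indeg[v] == 0:       # all predecessors scheduled
                    nxt.append(v)
        ready = sorted(nxt)
    return levels

tasks = ["feed", "col_A", "col_B", "mixer"]
edges = [("feed", "col_A"), ("feed", "col_B"), ("col_A", "mixer"), ("col_B", "mixer")]
levels = parallel_levels(edges, tasks)
# the two columns land in the same level and may execute concurrently
```

A runtime like Taskflow would hand each level's tasks to a work-stealing thread pool; this sketch only computes the layering.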
20. Joint Optimization of UAV-Aided Covert Edge Computing via a Deep Reinforcement Learning Framework
Authors: Wei Wei, Shu Fu, Yujie Tang, Yuan Wu, Haijun Zhang 《Chinese Journal of Aeronautics》 2025, No. 10, pp. 96-106 (11 pages)
In this work, we consider an unmanned aerial vehicle (UAV) aided covert edge computing architecture, where multiple sensors are scattered at a certain distance on the ground. Each sensor can carry out several computation tasks. In emergency scenarios, such as vehicular networks or Internet of Things (IoT) networks, the computational capabilities of sensors are often limited, and a UAV can be utilized to undertake part of the computation tasks, i.e., edge computing. While various studies have advanced the performance of UAV-based edge computing systems, the security of wireless transmission in future 6G networks is becoming increasingly crucial due to its inherent broadcast nature, yet it has not received adequate attention. In this paper, we improve the covert performance of a UAV-aided edge computing system in which parts of the computation tasks of multiple ground sensors are offloaded to the UAV while a nearby warden, Willie, attempts to detect the transmissions. Since the transmit power of the sensors, their offloading proportions, and the hovering height of the UAV all affect system covert performance, we propose a deep reinforcement learning framework to optimize them jointly. The proposed algorithm minimizes the average task processing delay of the system while guaranteeing that the sensors' transmissions are not detected by Willie under the covertness constraint. Extensive simulations verify the effectiveness of the proposed algorithm in decreasing the average task processing delay in comparison with other algorithms.
Keywords: covert communication; unmanned aerial vehicle; edge computing; joint optimization; deep reinforcement learning