Journal Articles
23,058 articles found.
1. Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network (Cited by: 1)
Authors: Zhiguo Liu, Yuqing Gui, Lin Wang, Yingru Jiang. 《Computers, Materials & Continua》 (SCIE, EI), 2025, Issue 1, pp. 863-879 (17 pages).
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
Keywords: Satellite network, Edge computing, Task scheduling, Computing offloading
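For readers unfamiliar with the prioritized experience replay used in the Dueling-DDQN above, the following is a minimal, illustrative sketch of a proportional-priority replay buffer (not the authors' code; the class and parameter names are assumed):

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha            # how strongly priorities bias sampling
        self.buffer, self.priorities = [], []

    def push(self, transition, td_error=1.0):
        # New transitions get a priority derived from their TD error.
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size):
        # Sample transitions with probability proportional to their priority.
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idx = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        return [self.buffer[i] for i in idx], idx

    def update(self, indices, td_errors):
        # Refresh priorities after the learner recomputes TD errors.
        for i, err in zip(indices, td_errors):
            self.priorities[i] = (abs(err) + 1e-6) ** self.alpha
```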
2. Container cluster placement in edge computing based on reinforcement learning incorporating graph convolutional networks scheme
Authors: Zhuo Chen, Bowen Zhu, Chuan Zhou. 《Digital Communications and Networks》, 2025, Issue 1, pp. 60-70 (11 pages).
Container-based virtualization technology has been more widely used in edge computing environments recently due to its advantages of lighter resource occupation, faster startup capability, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the generated containers to build a Container Cluster (CC). The CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for various resources occupied by providing services as revenue, and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, through the introduction of GCN, the features of the association relationships between multiple containers in CCs can be effectively extracted to improve the quality of placement. The experiment results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
Keywords: Edge computing, Network virtualization, Container cluster, Deep reinforcement learning, Graph convolutional network
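The RL-GCN framework above extracts features of container-to-container associations with graph convolutions. A single normalized graph-convolution propagation step is sketched here for illustration only; the toy adjacency matrix and feature sizes are assumed, not taken from the paper:

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One graph-convolution step: normalized neighborhood aggregation (sketch)."""
    a_hat = adjacency + np.eye(adjacency.shape[0])        # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features
    return np.maximum(propagated @ weights, 0.0)           # ReLU activation

# Toy container cluster: 3 containers, 2 links, 4-dim resource features.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.random.rand(3, 4)
w = np.random.rand(4, 8)
embeddings = gcn_layer(adj, x, w)   # (3, 8) node embeddings fed to the actor-critic
```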
3. Intelligent Management of Resources for Smart Edge Computing in 5G Heterogeneous Networks Using Blockchain and Deep Learning
Authors: Mohammad Tabrez Quasim, Khair Ul Nisa, Mohammad Shahid Husain, Abakar Ibraheem Abdalla Aadam, Mohammed Waseequ Sheraz, Mohammad Zunnun Khan. 《Computers, Materials & Continua》, 2025, Issue 7, pp. 1169-1187 (19 pages).
Smart edge computing (SEC) is a novel computing paradigm that can transfer cloud-based applications to the edge network, supporting computation-intensive services like face detection and natural language processing. A core feature of mobile edge computing, SEC improves user experience and device performance by offloading local activities to edge processors. In this framework, blockchain technology is utilized to ensure secure and trustworthy communication between edge devices and servers, protecting against potential security threats. Additionally, deep learning algorithms are employed to analyze resource availability and optimize computation offloading decisions dynamically. IoT applications that require significant resources can benefit from SEC, which has better coverage. Although access is constantly changing and network devices have heterogeneous resources, it is not easy to create consistent, dependable, and instantaneous communication between edge devices and their processors, specifically in 5G Heterogeneous Network (HN) situations. Thus, an Intelligent Management of Resources for Smart Edge Computing (IMRSEC) framework, which combines blockchain, edge computing, and Artificial Intelligence (AI) into 5G HNs, is proposed in this paper. Accordingly, a unique dual schedule deep reinforcement learning (DS-DRL) technique has been developed, consisting of a rapid schedule learning process and a slow schedule learning process. The primary objective is to minimize overall offloading latency and system resource usage by optimizing computation offloading, resource allocation, and application caching. Simulation results demonstrate that the DS-DRL approach reduces task execution time by 32%, validating the method's effectiveness within the IMRSEC framework.
Keywords: Smart edge computing, Heterogeneous networks, Blockchain, 5G network, Internet of things, Artificial intelligence
4. Optimized Resource Allocation for Dual-Band Cooperation-Based Edge Computing Vehicular Network
Authors: Cheng Kaijun, Fang Xuming. 《China Communications》, 2025, Issue 9, pp. 352-367 (16 pages).
With miscellaneous applications generated in vehicular networks, the computing performance cannot be satisfied owing to vehicles' limited processing capabilities. Besides, the low-frequency (LF) band cannot further improve network performance due to its limited spectrum resources. The high-frequency (HF) band has plentiful spectrum resources and is adopted as one of the operating bands in 5G. To achieve low latency and sustainable development, a task processing scheme is proposed for a dual-band cooperation-based vehicular network, where tasks are processed at the local side, at a macro-cell base station, or at a road side unit through the LF or HF band to achieve stable and high-speed task offloading. Moreover, a utility function including latency and energy consumption is minimized by optimizing computing and spectrum resources, transmission power, and task scheduling. Owing to its non-convexity, an iterative optimization algorithm is proposed to solve it. Numerical results evaluate the performance and superiority of the scheme, proving that it can achieve efficient edge computing in vehicular networks.
Keywords: Dual-band cooperation, Edge computing, Resource allocation, Task processing, Vehicular network
5. Computing Power Network: A Survey (Cited by: 18)
Authors: Sun Yukun, Lei Bo, Liu Junlin, Huang Haonan, Zhang Xing, Peng Jing, Wang Wenbo. 《China Communications》 (SCIE, CSCD), 2024, Issue 9, pp. 109-145 (37 pages).
With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend of ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources due to the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm is proposed, i.e., the Computing Power Network (CPN). A computing power network can connect ubiquitous and heterogeneous computing power resources through networking to realize flexible computing power scheduling. In this survey, we make an exhaustive review of the state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, a comprehensive elaboration of issues on computing power modeling, information awareness and announcement, resource allocation, network forwarding, the computing power transaction platform, and the resource orchestration platform is presented. A computing power network testbed is built and evaluated. The applications and use cases of computing power networks are discussed. Then, the key enabling technologies for computing power networks are introduced. Finally, open challenges and future research directions are presented as well.
Keywords: Computing power modeling, Computing power network, Computing power scheduling, Information awareness, Network forwarding
6. Near-Sensor Edge Computing System Enabled by a CMOS Compatible Photonic Integrated Circuit Platform Using Bilayer AlN/Si Waveguides (Cited by: 1)
Authors: Zhihao Ren, Zixuan Zhang, Yangyang Zhuge, Zian Xiao, Siyu Xu, Jingkai Zhou, Chengkuo Lee. 《Nano-Micro Letters》, 2025, Issue 11, pp. 1-20 (20 pages).
The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). This groundbreaking system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
Keywords: Photonic integrated circuits, Edge computing, Aluminum nitride, Neural networks, Wearable sensors
7. DeepSeek vs. ChatGPT vs. Claude: A comparative study for scientific computing and scientific machine learning tasks (Cited by: 1)
Authors: Qile Jiang, Zhiwei Gao, George Em Karniadakis. 《Theoretical & Applied Mechanics Letters》, 2025, Issue 3, pp. 194-206 (13 pages).
Large language models (LLMs) have emerged as powerful tools for addressing a wide range of problems, including those in scientific computing, particularly in solving partial differential equations (PDEs). However, different models exhibit distinct strengths and preferences, resulting in varying levels of performance. In this paper, we compare the capabilities of the most advanced LLMs—DeepSeek, ChatGPT, and Claude—along with their reasoning-optimized versions in addressing computational challenges. Specifically, we evaluate their proficiency in solving traditional numerical problems in scientific computing as well as leveraging scientific machine learning techniques for PDE-based problems. We designed all our experiments so that a nontrivial decision is required, e.g., defining the proper space of input functions for neural operator learning. Our findings show that reasoning and hybrid-reasoning models consistently and significantly outperform non-reasoning ones in solving challenging problems, with ChatGPT o3-mini-high generally offering the fastest reasoning speed.
Keywords: Large language models (LLM), Scientific computing, Scientific machine learning, Physics-informed neural network
8. A Review of Computing with Spiking Neural Networks (Cited by: 1)
Authors: Jiadong Wu, Yinan Wang, Zhiwei Li, Lun Lu, Qingjiang Li. 《Computers, Materials & Continua》 (SCIE, EI), 2024, Issue 3, pp. 2909-2939 (31 pages).
Artificial neural networks (ANNs) have led to landmark changes in many fields, but they still differ significantly from the mechanisms of real biological neural networks and face problems such as high computing costs and excessive computing power demands. Spiking neural networks (SNNs) provide a new approach, combined with brain-like science, to improve the computational energy efficiency, computational architecture, and biological credibility of current deep learning applications. In the early stage of development, poor performance hindered the application of SNNs in real-world scenarios. In recent years, SNNs have made great progress in computational performance and practicability compared with earlier research results, and are continuously producing significant results. Although there is already a large body of literature on SNNs, there is still a lack of comprehensive reviews of SNNs from the perspective of improving performance and practicality while incorporating the latest research results. Starting from this issue, this paper elaborates on SNNs along their complete usage process, including network construction, data processing, model training, development, and deployment, aiming to provide more comprehensive and practical guidance to promote the development of SNNs. Therefore, the connotation and development status of SNN computing are reviewed systematically and comprehensively from four aspects: composition structure, data sets, learning algorithms, and software/hardware development platforms. Then the development characteristics of SNNs in intelligent computing are summarized, the current challenges of SNNs are discussed, and future development directions are prospected. Our research shows that, in the fields of machine learning and intelligent computing, SNNs have network scale and performance comparable to ANNs and the ability to tackle large datasets and a variety of tasks. The advantages of SNNs over ANNs in terms of energy efficiency and spatial-temporal data processing have been more fully exploited, and the development of programming and deployment tools has lowered the threshold for the use of SNNs. SNNs show broad development prospects for brain-like computing.
Keywords: Spiking neural networks, Neural networks, Brain-like computing, Artificial intelligence, Learning algorithm
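As background to the SNN review above, the leaky integrate-and-fire (LIF) neuron is the most common spiking unit. A minimal Euler-step sketch follows; the time constant, threshold, and input values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def lif_step(v, input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron (illustrative sketch)."""
    v = v + (dt / tau) * (-v + input_current)   # leaky membrane integration
    spike = v >= v_thresh                       # emit a spike when threshold is crossed
    v = np.where(spike, v_reset, v)             # reset membrane potential after a spike
    return v, spike

# Simulate 100 time steps of a single neuron driven by noisy input.
v, spikes = np.array(0.0), []
for _ in range(100):
    v, s = lif_step(v, np.random.rand() * 1.5)
    spikes.append(bool(s))
```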
9. A novel routing method for dynamic control in distributed computing power networks (Cited by: 2)
Authors: Lujie Guo, Fengxian Guo, Mugen Peng. 《Digital Communications and Networks》 (CSCD), 2024, Issue 6, pp. 1644-1652 (9 pages).
Driven by diverse intelligent applications, computing capability is moving from the central cloud to the edge of the network in the form of small cloud nodes, forming a distributed computing power network. Tasked with both packet transmission and data processing, such a network requires joint optimization of communications and computing. Considering the diverse requirements of applications, we develop a dynamic routing control policy to determine both paths and computing nodes in a distributed computing power network. Different from traditional routing protocols, additional metrics related to computing are taken into consideration in the proposed policy. Based on multi-attribute decision theory and fuzzy logic theory, we propose two routing selection algorithms: the Fuzzy Logic-Based Routing (FLBR) algorithm and the low-complexity Pairwise Multi-Attribute Decision-Making (lPMADM) algorithm. Simulation results show that the proposed policy achieves better performance in average processing delay, user satisfaction, and load balancing compared with existing works.
Keywords: Computing power networks, Routing, Fuzzy logic, Multi-attribute decision making
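The routing policy above scores candidate paths and computing nodes with multi-attribute decision making. A generic weighted-normalization scoring sketch is shown below; the attribute columns (delay, bandwidth, idle CPU) and the weights are assumptions for illustration and are not the paper's FLBR or lPMADM rules:

```python
import numpy as np

def score_candidates(metrics, weights, benefit_mask):
    """Rank candidate routes/nodes by a weighted sum of normalized attributes (sketch)."""
    m = np.asarray(metrics, dtype=float)
    lo, hi = m.min(axis=0), m.max(axis=0)
    norm = (m - lo) / np.where(hi > lo, hi - lo, 1.0)   # min-max normalize each column
    norm = np.where(benefit_mask, norm, 1.0 - norm)     # invert cost-type attributes (e.g., delay)
    return norm @ np.asarray(weights)

# Columns (assumed): link delay, residual bandwidth, idle CPU of the computing node.
candidates = [[30.0, 80.0, 0.4], [12.0, 40.0, 0.7], [20.0, 60.0, 0.2]]
scores = score_candidates(candidates, weights=[0.5, 0.3, 0.2],
                          benefit_mask=[False, True, True])
best = int(np.argmax(scores))   # index of the preferred path/node
```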
10. A Comprehensive Study of Resource Provisioning and Optimization in Edge Computing
Authors: Sreebha Bhaskaran, Supriya Muthuraman. 《Computers, Materials & Continua》, 2025, Issue 6, pp. 5037-5070 (34 pages).
Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) for enhancing resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It also evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems, and concludes that SDN-enabled computing environments can find essential guidance here for addressing upcoming management opportunities.
Keywords: Cloud computing, Edge computing, Fog computing, Resource provisioning, Resource allocation, Computation offloading, Optimization techniques, Software defined network
11. Security Implications of Edge Computing in Cloud Networks (Cited by: 2)
Authors: Sina Ahmadi. 《Journal of Computer and Communications》, 2024, Issue 2, pp. 26-46 (21 pages).
Security issues in cloud networks and edge computing have become very common. This research focuses on analyzing such issues and developing the best solutions. A detailed literature review has been conducted in this regard. The findings have shown that many challenges are linked to edge computing, such as privacy concerns, security breaches, high costs, low efficiency, etc. Therefore, there is a need to implement proper security measures to overcome these issues. Using emerging trends, like machine learning, encryption, artificial intelligence, real-time monitoring, etc., can help mitigate security issues. They can also develop a secure and safe future in cloud computing. It was concluded that the security implications of edge computing can easily be covered with the help of new technologies and techniques.
Keywords: Edge computing, Cloud networks, Artificial intelligence, Machine learning, Cloud security
12. Latency minimization for multiuser computation offloading in fog-radio access networks
Authors: Wei Zhang, Shafei Wang, Ye Pan, Qiang Li, Jingran Lin, Xiaoxiao Wu. 《Digital Communications and Networks》, 2025, Issue 1, pp. 160-171 (12 pages).
Recently, the Fog-Radio Access Network (F-RAN) has gained considerable attention because of its flexible architecture that allows rapid response to user requirements. In this paper, computational offloading in F-RAN is considered, where multiple User Equipments (UEs) offload their computational tasks to the F-RAN through fog nodes. Each UE can select one of the fog nodes to offload its task, and each fog node may serve multiple UEs. The tasks are computed by the fog nodes or further offloaded to the cloud via a capacity-limited fronthaul link. In order to compute all UEs' tasks quickly, joint optimization of UE-Fog association and the radio and computation resources of the F-RAN is proposed to minimize the maximum latency of all UEs. This min-max problem is formulated as a Mixed Integer Nonlinear Program (MINP). To tackle it, the MINP is first reformulated as a continuous optimization problem, and then the Majorization Minimization (MM) method is used to find a solution. The MM approach that we develop is unconventional in that each MM subproblem is solved inexactly with the same provable convergence guarantee as the exact MM, thereby reducing the complexity of the MM iteration. In addition, a cooperative offloading model is considered, where the fog nodes compress-and-forward their received signals to the cloud. Under this model, a similar min-max latency optimization problem is formulated and tackled by the inexact MM. Simulation results show that the proposed algorithms outperform some offloading strategies, and that cooperative offloading can exploit transmission diversity better than noncooperative offloading to achieve better latency performance.
Keywords: Fog-radio access network, Fog computing, Majorization minimization, WMMSE
13. A Study for Inter-Satellite Cooperative Computation Offloading in LEO Satellite Networks
Authors: Gang Yuanshuo, Zhang Yuexia, Wu Peng, Zheng Hui, Fan Guangteng. 《China Communications》, 2025, Issue 2, pp. 12-25 (14 pages).
Low Earth orbit (LEO) satellite networks have the advantages of low transmission delay and low deployment cost, playing an important role in providing reliable services to ground users. This paper studies an efficient inter-satellite cooperative computation offloading (ICCO) algorithm for LEO satellite networks. Specifically, an ICCO system model is constructed, which considers using neighboring satellites in the LEO satellite network to collaboratively process tasks generated by ground user terminals, effectively improving resource utilization efficiency. Additionally, the optimization objective of minimizing the system task computation offloading delay and energy consumption is established and decoupled into two sub-problems. In terms of computational resource allocation, the convexity of the problem is proved through theoretical derivation, and the Lagrange multiplier method is used to obtain the optimal solution for computational resources. To deal with the task offloading decision, a dynamic sticky binary particle swarm optimization algorithm is designed to obtain the offloading decision by iteration. Simulation results show that the ICCO algorithm can effectively reduce delay and energy consumption.
Keywords: Computation offloading, Inter-satellite co-operation, LEO satellite networks
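The abstract above solves the computing-resource sub-problem with the Lagrange multiplier method. One common closed form for such problems, splitting a CPU budget to minimize the sum of task delays c_i/f_i, allocates f_i proportional to sqrt(c_i); the sketch below illustrates this idea and is not necessarily the paper's exact model:

```python
import numpy as np

def allocate_cpu(cycles, f_total):
    """Split a satellite's CPU budget across offloaded tasks to minimize total delay.

    For delay sum_i c_i / f_i with sum_i f_i <= f_total, the KKT conditions give
    f_i proportional to sqrt(c_i) (illustrative model, not necessarily the paper's).
    """
    c = np.asarray(cycles, dtype=float)
    return f_total * np.sqrt(c) / np.sqrt(c).sum()

cycles = [2e9, 8e9, 4e9]            # required CPU cycles per offloaded sub-task (assumed)
f = allocate_cpu(cycles, f_total=10e9)
delays = np.asarray(cycles) / f     # per-task computation delay in seconds
```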
14. A novel paradigm for solving PDEs: multi-scale neural computing
Authors: Wei Suo, Weiwei Zhang. 《Acta Mechanica Sinica》, 2025, Issue 6, pp. 76-92 (17 pages).
Numerical simulation is dominant in solving partial differential equations (PDEs), but balancing fine-grained grids with low computational costs is challenging. Recently, solving PDEs with neural networks (NNs) has gained interest, yet cost-effectiveness and high accuracy remain a challenge. This work introduces a novel paradigm for solving PDEs, called multi-scale neural computing (MSNC), which accounts for the spectral bias of NNs and the local approximation properties of the finite difference method (FDM). The MSNC decomposes the solution with an NN for efficient capture of the global scale and the FDM for detailed description of the local scale, aiming to balance costs and accuracy. Demonstrated advantages include higher accuracy (10 times for 1D PDEs, 20 times for 2D PDEs) and lower costs (4 times for 1D PDEs, 16 times for 2D PDEs) than the standard FDM. The MSNC also exhibits stable convergence and rigorous boundary condition satisfaction, showcasing the potential of hybridizing NNs with numerical methods.
Keywords: Neural computing, Partial differential equations, Hybrid strategy, Numerical methods, Neural networks
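The MSNC paradigm above pairs a neural network for the global solution with the finite difference method (FDM) for local detail. For reference, a standard second-order central-difference residual for u'' = f in 1D is sketched below; this is a generic illustration, not the authors' decomposition:

```python
import numpy as np

def fdm_residual_1d(u, f, h):
    """Residual of u'' = f on a uniform 1D grid with second-order central differences."""
    d2u = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2   # interior second derivative
    return d2u - f[1:-1]

# Verify on u(x) = sin(pi x), where u'' = -pi^2 sin(pi x).
x = np.linspace(0.0, 1.0, 101)
h = x[1] - x[0]
u = np.sin(np.pi * x)
f = -np.pi**2 * np.sin(np.pi * x)
print(np.abs(fdm_residual_1d(u, f, h)).max())   # small discretization error (~1e-3)
```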
15. A Privacy-Preserving Graph Neural Network Framework with Attention Mechanism for Computational Offloading in the Internet of Vehicles
Authors: Aishwarya Rajasekar, Vetriselvi Vetrian. 《Computer Modeling in Engineering & Sciences》, 2025, Issue 4, pp. 225-254 (30 pages).
The integration of technologies like artificial intelligence, 6G, and vehicular ad-hoc networks holds great potential to meet the communication demands of the Internet of Vehicles and drive the advancement of vehicle applications. However, these advancements also generate a surge in data processing requirements, necessitating the offloading of vehicular tasks to edge servers due to the limited computational capacity of vehicles. Despite recent advancements, the robustness and scalability of existing approaches with respect to the number of vehicles and edge servers and their resources, as well as privacy, remain a concern. In this paper, a lightweight offloading strategy is proposed that leverages ubiquitous connectivity through the Space Air Ground Integrated Vehicular Network architecture while ensuring privacy preservation. The Internet of Vehicles (IoV) environment is first modeled as a graph, with vehicles and base stations as nodes and their communication links as edges. Secondly, vehicular applications are offloaded to suitable servers based on latency using an attention-based heterogeneous graph neural network (HetGNN) algorithm. Subsequently, a differential privacy stochastic gradient descent training mechanism is employed for privacy preservation of vehicles and offloading inference. Finally, the simulation results demonstrate that the proposed HetGNN method shows good performance with an inference time of 0.321 s, which is 42.68%, 63.93%, 30.22%, and 76.04% less than baseline methods such as Deep Deterministic Policy Gradient, Deep Q Learning, Deep Neural Network, and Genetic Algorithm, respectively.
Keywords: Internet of vehicles, Vehicular ad-hoc networks (VANET), Multi-access edge computing, Task offloading, Graph neural networks, Differential privacy
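The framework above trains the HetGNN with differentially private SGD. A minimal DP-SGD update step, with per-sample gradient clipping followed by Gaussian noise, is sketched below; the clipping norm, noise multiplier, and learning rate are assumed values for illustration, not the paper's settings:

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.01):
    """One DP-SGD update: clip each per-sample gradient, then add Gaussian noise (sketch)."""
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))   # per-sample clipping
    avg = np.mean(clipped, axis=0)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm / len(per_sample_grads),
                             size=avg.shape)
    return params - lr * (avg + noise)

# Toy example: 8 per-sample gradients for a 5-dimensional parameter vector.
grads = [np.random.randn(5) for _ in range(8)]
theta = dp_sgd_step(np.zeros(5), grads)
```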
16. Energy Efficient and Resource Allocation in Cloud Computing Using QT-DNN and Binary Bird Swarm Optimization
Authors: Puneet Sharma, Dhirendra Prasad Yadav, Bhisham Sharma, Surbhi B. Khan, Ahlam Almusharraf. 《Computers, Materials & Continua》, 2025, Issue 10, pp. 2179-2193 (15 pages).
The swift expansion of cloud computing has heightened the demand for energy-efficient and high-performance resource allocation solutions across extensive systems. This research presents an innovative hybrid framework that combines a Quantum Tensor-based Deep Neural Network (QT-DNN) with Binary Bird Swarm Optimization (BBSO) to enhance resource allocation while preserving Quality of Service (QoS). In contrast to conventional approaches, the QT-DNN accurately predicts task-resource mappings using a tensor-based task representation, significantly minimizing computing overhead. The BBSO allocates resources dynamically, optimizing energy efficiency and task distribution. Experimental results from extensive simulations indicate the efficacy of the suggested strategy: the proposed approach demonstrates the highest level of accuracy, reaching 98.1%, surpassing the GA-SVM model at 96.3% and the ART model at 95.4%. The proposed method also performs better in terms of response time, with 1.598, compared to the existing methods Energy-Focused Dynamic Task Scheduling (EFDTS) and the Federated Energy-efficient Scheduler for Task Allocation in Large-scale environments (FESTAL), with 2.31 and 2.04; moreover, it performs better in terms of makespan, with 12, compared to Round Robin (RR) and the Recurrent Attention-based Summarization Algorithm (RASA), with 20 and 14. The hybrid method establishes a new standard for sustainable and efficient administration of cloud computing resources by explicitly addressing scalability and real-time performance.
Keywords: Cloud computing, Quality of service, Virtual machine, Allocation, Deep neural network
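The BBSO component above searches binary task-to-resource assignments. The sketch below shows a generic binary swarm update with a sigmoid transfer function, a common pattern in binary PSO-style methods; it is not the paper's exact Bird Swarm Optimization rules, and the swarm size and bit length are assumed:

```python
import numpy as np

def binary_swarm_step(positions, velocities, personal_best, global_best,
                      w=0.7, c1=1.5, c2=1.5):
    """Generic binary swarm update with a sigmoid transfer function (sketch)."""
    r1, r2 = np.random.rand(*positions.shape), np.random.rand(*positions.shape)
    velocities = (w * velocities
                  + c1 * r1 * (personal_best - positions)
                  + c2 * r2 * (global_best - positions))
    prob = 1.0 / (1.0 + np.exp(-velocities))          # map velocity to bit-flip probability
    positions = (np.random.rand(*positions.shape) < prob).astype(float)
    return positions, velocities

# 10 candidate schedules over 6 task-to-VM assignment bits.
pos = (np.random.rand(10, 6) < 0.5).astype(float)
vel = np.zeros_like(pos)
pos, vel = binary_swarm_step(pos, vel, personal_best=pos.copy(), global_best=pos[0])
```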
17. Computation graph pruning based on critical path retention in evolvable networks
Authors: XIE Xiaoyan, YANG Tianjiao, ZHU Yun, LUO Xing, JIN Luochen, YU Jinhao, REN Xun. 《High Technology Letters》, 2025, Issue 3, pp. 266-272 (7 pages).
The dynamic routing mechanism in evolvable networks enables adaptive reconfiguration of topological structures and transmission pathways based on real-time task requirements and data characteristics. However, the heightened architectural complexity and expanded parameter dimensionality of evolvable networks present significant implementation challenges when they are deployed in resource-constrained environments. Because they ignore critical paths, traditional pruning strategies cannot achieve a desirable trade-off between accuracy and efficiency. For this reason, a critical path retention pruning (CPRP) method is proposed. By deeply traversing the computational graph, the dependency relationships among nodes are derived. The nodes are then grouped and sorted according to their contribution value, and redundant operations are removed as much as possible while ensuring that the critical path is not affected. As a result, computational efficiency is improved while higher accuracy is maintained. On the CIFAR benchmark, the experimental results demonstrate that CPRP-induced pruning incurs accuracy degradation below 4.00%, while outperforming traditional feature-agnostic grouping methods by an average 8.98% accuracy improvement. Simultaneously, the pruned model attains a 2.41 times inference acceleration while achieving 48.92% parameter compression and a 53.40% reduction in floating-point operations (FLOPs).
Keywords: Evolvable network, Computation graph traversing, Dynamic routing, Critical path retention pruning
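CPRP above prunes a computation graph while retaining its critical paths. Finding the critical (longest-cost) path of a DAG of operations by topological dynamic programming can be sketched as follows; the operation names and costs are assumed for illustration and the grouping/contribution scoring of the paper is not reproduced:

```python
from collections import defaultdict

def critical_path(edges, cost):
    """Longest (critical) path in a DAG of operations, by topological DP (sketch)."""
    succ, indeg = defaultdict(list), defaultdict(int)
    nodes = set(cost)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    order = [n for n in nodes if indeg[n] == 0]
    dist = {n: cost[n] for n in nodes}       # best path cost ending at each node
    prev = {}
    for u in order:                          # Kahn-style topological traversal
        for v in succ[u]:
            if dist[u] + cost[v] > dist[v]:
                dist[v], prev[v] = dist[u] + cost[v], u
            indeg[v] -= 1
            if indeg[v] == 0:
                order.append(v)
    end = max(dist, key=dist.get)
    path = [end]
    while path[-1] in prev:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[end]

ops = {"conv1": 3, "conv2": 2, "route": 1, "fc": 2}      # hypothetical op costs
edges = [("conv1", "route"), ("conv2", "route"), ("route", "fc")]
print(critical_path(edges, ops))   # (['conv1', 'route', 'fc'], 6)
```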
18. Neuromorphic Computing in the Era of Large Models
Authors: Haoxuan SHAN, Chiyue WEI, Nicolas RAMOS, Xiaoxuan YANG, Cong GUO, Hai (Helen) LI, Yiran CHEN. 《Artificial Intelligence Science and Engineering》, 2025, Issue 1, pp. 17-30 (14 pages).
The rapid advancement of deep learning and the emergence of large-scale neural models, such as bidirectional encoder representations from transformers (BERT), generative pre-trained transformer (GPT), and large language model Meta AI (LLaMa), have brought significant computational and energy challenges. Neuromorphic computing presents a biologically inspired approach to addressing these issues, leveraging event-driven processing and in-memory computation for enhanced energy efficiency. This survey explores the intersection of neuromorphic computing and large-scale deep learning models, focusing on neuromorphic models, learning methods, and hardware. We highlight transferable techniques from deep learning to neuromorphic computing and examine the memory-related scalability limitations of current neuromorphic systems. Furthermore, we identify potential directions to enable neuromorphic systems to meet the growing demands of modern AI workloads.
Keywords: Neuromorphic computing, Spiking neural networks, Large deep learning models
19. Computation and wireless resource management in 6G space-integrated-ground access networks
Authors: Ning Hui, Qian Sun, Lin Tian, Yuanyuan Wang, Yiqing Zhou. 《Digital Communications and Networks》, 2025, Issue 3, pp. 768-777 (10 pages).
In 6th Generation Mobile Networks (6G), the Space-Integrated-Ground (SIG) Radio Access Network (RAN) promises seamless coverage and exceptionally high Quality of Service (QoS) for diverse services. However, achieving this necessitates effective management of computation and wireless resources tailored to the requirements of various services. The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges. To address these problems, this work provides an overview of multi-dimensional resource management in the 6G SIG RAN, covering both computation and wireless resources. It first reviews current investigations on computation and wireless resource management and analyzes existing deficiencies and challenges. Focusing on these challenges, the work then proposes an MEC-based computation resource management scheme and a mixed numerology-based wireless resource management scheme. Furthermore, it outlines promising future technologies, including joint model-driven and data-driven resource management and blockchain-based resource management, within the 6G SIG network. The work also highlights remaining challenges, such as reducing the communication costs associated with unstable ground-to-satellite links and overcoming barriers posed by spectrum isolation. Overall, this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.
Keywords: Space-integrated-ground, Radio access network, MEC-based computation resource management, Mixed numerology-based wireless resource management
20. A leap forward in compute-in-memory system for neural network inference
Authors: Liang Chu, Wenjun Li. 《Journal of Semiconductors》, 2025, Issue 4, pp. 5-7 (3 pages).
Developing efficient neural network (NN) computing systems is crucial in the era of artificial intelligence (AI). Traditional von Neumann architectures suffer from both the "memory wall" and the "power wall", limiting data transfer between memory and processing units [1, 2]. Compute-in-memory (CIM) technologies, particularly analogue CIM with memristor crossbars, are promising because of their high energy efficiency, computational parallelism, and integration density for NN computations [3]. In practical applications, analogue CIM excels in tasks like speech recognition and image classification, revealing its unique advantages. For instance, it efficiently processes vast amounts of audio data in speech recognition, achieving high accuracy with minimal power consumption. In image classification, the high parallelism of analogue CIM significantly speeds up feature extraction and reduces processing time. With the booming development of AI applications, the demands for computational accuracy and task complexity are rising continually. However, analogue CIM systems are limited in handling complex regression tasks that require precise floating-point (FP) calculations; they are primarily suited for classification tasks with low data precision and a limited dynamic range [4].
Keywords: Neural network, von Neumann architectures, Compute-in-memory, Inference, Memristor, Artificial intelligence, Memristor crossbars, Analogue CIM