Journal Articles
23,277 articles found
1. Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network (Cited by: 1)
Authors: Zhiguo Liu, Yuqing Gui, Lin Wang, Yingru Jiang. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 863-879 (17 pages)
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
Keywords: satellite network; edge computing; task scheduling; computing offloading
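The abstract above names two building blocks, a prioritized experience replay buffer inside Dueling-DDQN and a DDPG stage for resource allocation, without giving implementation details. As a rough illustration of the first block only, the following Python sketch shows proportional prioritized sampling; the capacity, the priority exponent alpha, and the TD-error-based priorities are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch only)."""

    def __init__(self, capacity=10_000, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities skew sampling
        self.storage = []           # (state, action, reward, next_state, done) tuples
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition, td_error=1.0):
        # New transitions receive a priority derived from their TD error.
        priority = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
        else:
            self.storage[self.pos] = transition
        self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size=32):
        # Sample indices with probability proportional to stored priorities.
        prios = self.priorities[:len(self.storage)]
        probs = prios / prios.sum()
        idx = np.random.choice(len(self.storage), size=batch_size, p=probs)
        batch = [self.storage[i] for i in idx]
        return idx, batch, probs[idx]

    def update_priorities(self, idx, td_errors):
        # After a learning step, refresh priorities with the new TD errors.
        self.priorities[idx] = (np.abs(td_errors) + 1e-6) ** self.alpha
```

In a full Dueling-DDQN agent the sampled batch would also carry importance-sampling weights to correct the induced bias; that correction is omitted here for brevity.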
2. Container cluster placement in edge computing based on reinforcement learning incorporating graph convolutional networks scheme
Authors: Zhuo Chen, Bowen Zhu, Chuan Zhou. Digital Communications and Networks, 2025, Issue 1, pp. 60-70 (11 pages)
Container-based virtualization technology has been more widely used in edge computing environments recently due to its advantages of lighter resource occupation, faster startup capability, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the generated containers to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for the various resources occupied by providing services as revenue and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, through the introduction of GCN, the features of the association relationships between the containers in a CC can be effectively extracted to improve the quality of placement. The experiment results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
Keywords: edge computing; network virtualization; container cluster; deep reinforcement learning; graph convolutional network
3. Intelligent Management of Resources for Smart Edge Computing in 5G Heterogeneous Networks Using Blockchain and Deep Learning
Authors: Mohammad Tabrez Quasim, Khair Ul Nisa, Mohammad Shahid Husain, Abakar Ibraheem Abdalla Aadam, Mohammed Waseequ Sheraz, Mohammad Zunnun Khan. Computers, Materials & Continua, 2025, Issue 7, pp. 1169-1187 (19 pages)
Smart edge computing (SEC) is a novel paradigm for computing that could transfer cloud-based applications to the edge network, supporting computation-intensive services like face detection and natural language processing. A core feature of mobile edge computing, SEC improves user experience and device performance by offloading local activities to edge processors. In this framework, blockchain technology is utilized to ensure secure and trustworthy communication between edge devices and servers, protecting against potential security threats. Additionally, deep learning algorithms are employed to analyze resource availability and optimize computation offloading decisions dynamically. IoT applications that require significant resources can benefit from SEC, which has better coverage. Although access is constantly changing and network devices have heterogeneous resources, it is not easy to create consistent, dependable, and instantaneous communication between edge devices and their processors, specifically in 5G Heterogeneous Network (HN) situations. Thus, an Intelligent Management of Resources for Smart Edge Computing (IMRSEC) framework, which combines blockchain, edge computing, and Artificial Intelligence (AI) into 5G HNs, is proposed in this paper. Accordingly, a unique dual-schedule deep reinforcement learning (DS-DRL) technique has been developed, consisting of a rapid schedule learning process and a slow schedule learning process. The primary objective is to minimize overall offloading latency and system resource usage by optimizing computation offloading, resource allocation, and application caching. Simulation results demonstrate that the DS-DRL approach reduces task execution time by 32%, validating the method's effectiveness within the IMRSEC framework.
Keywords: smart edge computing; heterogeneous networks; blockchain; 5G network; internet of things; artificial intelligence
4. Optimized Resource Allocation for Dual-Band Cooperation-Based Edge Computing Vehicular Network
Authors: Cheng Kaijun, Fang Xuming. China Communications, 2025, Issue 9, pp. 352-367 (16 pages)
With miscellaneous applications generated in vehicular networks, the computing performance cannot be satisfied owing to vehicles' limited processing capabilities. Besides, the low-frequency (LF) band cannot further improve network performance due to its limited spectrum resources. The high-frequency (HF) band has plentiful spectrum resources and is adopted as one of the operating bands in 5G. To achieve low latency and sustainable development, a task processing scheme is proposed for a dual-band cooperation-based vehicular network, where tasks are processed locally, at the macro-cell base station, or at the road side unit through the LF or HF band to achieve stable and high-speed task offloading. Moreover, a utility function including latency and energy consumption is minimized by optimizing computing and spectrum resources, transmission power, and task scheduling. Owing to its non-convexity, an iterative optimization algorithm is proposed to solve it. Numerical results evaluate the performance and superiority of the scheme, proving that it can achieve efficient edge computing in vehicular networks.
Keywords: dual-band cooperation; edge computing; resource allocation; task processing; vehicular network
5. On an Ultra-Dense LEO-Satellite-Based Computing Network Constellation Design
Authors: Yijing Sun, Boya Di, Ruoqi Deng, Lingyang Song. Engineering, 2025, Issue 11, pp. 103-114 (12 pages)
Commercial ultra-dense low-Earth-orbit (LEO) satellite constellations have recently been deployed to provide seamless global Internet services. To improve satellite network transmission efficiency and provide robust wide-coverage computing services for future sixth-generation (6G) users, growing attention has been focused on LEO-satellite-based computing networks, to which ground users can offload computation tasks. However, how to design a LEO satellite constellation for computing networks, while considering discrepancies in the computing requirements of different regions, remains an open question. In this paper, we investigate an ultra-dense LEO-satellite-based computing network in which ground user terminals (UTs) offload part of their computing tasks to satellites. We formulate the ultra-dense constellation design problem as a multi-objective optimization problem (MOOP) to maximize the average coverage rate, transmission capacity, and computational capability, while minimizing the number of satellites. In order to depict the connectivity characteristics of satellite-based computing networks, we propose a terrestrial-satellite connectivity model to determine the coverage rate in different regions. We design a priority-adaptive algorithm to obtain the optimal inclined-orbit constellation by solving this MOOP. Simulation results verify the accuracy of our theoretical connectivity model and show the optimal constellation deployment, given quality-of-service (QoS) requirements. For the same number of deployed LEO satellites, the proposed constellation outperforms its existing counterparts; in particular, it achieves 25%-45% performance improvements in the average coverage rate.
Keywords: low-Earth-orbit satellite constellation; satellite-based computing network; multi-objective optimization
6. Computing Power Network: The Architecture of Convergence of Computing and Networking towards 6G Requirement (Cited by: 54)
Authors: Xiongyan Tang, Chang Cao, Youxiang Wang, Shuai Zhang, Ying Liu, Mingxuan Li, Tao He. China Communications (SCIE, CSCD), 2021, Issue 2, pp. 175-185 (11 pages)
In the 6G era, service forms in which computing power acts as the core will be ubiquitous in the network. At the same time, collaboration among edge computing, cloud computing, and the network is needed to support edge computing services with strong demand for computing power, so as to optimize resource utilization. Based on this, the article discusses the research background, key techniques, and main application scenarios of the computing power network. Through the demonstration, it can be concluded that the technical solution of the computing power network can effectively meet the multi-level deployment and flexible scheduling needs of future 6G business for computing, storage, and networking, and adapt to the integration needs of computing power and network in various scenarios, such as user-oriented, government/enterprise-oriented, and open computing power scenarios.
Keywords: 6G; edge computing; cloud computing; convergence of cloud and network; computing power network
7. Computing Power Network: A Survey (Cited by: 22)
Authors: Sun Yukun, Lei Bo, Liu Junlin, Huang Haonan, Zhang Xing, Peng Jing, Wang Wenbo. China Communications (SCIE, CSCD), 2024, Issue 9, pp. 109-145 (37 pages)
With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend of ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources due to the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm is proposed, i.e., the Computing Power Network (CPN). A computing power network can connect ubiquitous and heterogeneous computing power resources through networking to realize flexible computing power scheduling. In this survey, we make an exhaustive review of the state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, a comprehensive elaboration of issues on computing power modeling, information awareness and announcement, resource allocation, network forwarding, computing power transaction platforms, and resource orchestration platforms is presented. A computing power network testbed is built and evaluated. The applications and use cases of computing power networks are discussed. Then, the key enabling technologies for computing power networks are introduced. Finally, open challenges and future research directions are presented.
Keywords: computing power modeling; computing power network; computing power scheduling; information awareness; network forwarding
8. Wireless Acoustic Sensor Networks and Edge Computing for Rapid Acoustic Monitoring (Cited by: 7)
Authors: Zhengguo Sheng, Saskia Pfersich, Alice Eldridge, Jianshan Zhou, Daxin Tian, Victor C. M. Leung. IEEE/CAA Journal of Automatica Sinica (EI, CSCD), 2019, Issue 1, pp. 64-74 (11 pages)
Passive acoustic monitoring is emerging as a promising solution to the urgent, global need for new biodiversity assessment methods. The ecological relevance of the soundscape is increasingly recognised, and the affordability of robust hardware for remote audio recording is stimulating international interest in the potential of acoustic methods for biodiversity monitoring. The scale of the data involved requires automated methods; however, the development of acoustic sensor networks capable of sampling the soundscape across time and space and relaying the data to an accessible storage location remains a significant technical challenge, with power management at its core. Recording and transmitting large quantities of audio data is power intensive, hampering long-term deployment in remote, off-grid locations of key ecological interest. Rather than transmitting heavy audio data, in this paper we propose a low-cost and energy-efficient wireless acoustic sensor network integrated with an edge computing structure for remote acoustic monitoring and in situ analysis. Recording and computation of acoustic indices are carried out directly on edge devices built from low-noise Primo condenser microphones and Teensy microcontrollers, using internal FFT hardware support. The resultant indices are transmitted over a ZigBee-based wireless mesh network to a destination server. Benchmark tests of audio quality, index computation, and power consumption demonstrate acoustic equivalence and significant power savings over current solutions.
Keywords: acoustic sensor networks; edge computing; energy efficiency
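The abstract describes computing acoustic indices on the sensor node from FFT output and transmitting only the indices rather than raw audio. As a loose illustration of that idea (not the authors' Teensy firmware), the Python sketch below computes one common soundscape descriptor, normalized spectral entropy, from short-time FFT magnitudes; the sample rate and frame length are assumed values chosen only for the example.

```python
import numpy as np

def spectral_entropy(audio, frame_len=1024):
    """Normalized spectral entropy of a mono signal, one scalar per recording."""
    # Split the signal into non-overlapping windowed frames and average their spectra.
    n_frames = len(audio) // frame_len
    frames = audio[:n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))
    mean_spectrum = spectra.mean(axis=0)

    # Treat the mean spectrum as a probability distribution and take its entropy.
    p = mean_spectrum / (mean_spectrum.sum() + 1e-12)
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return entropy / np.log2(len(p))   # scaled to [0, 1]

# A pure tone yields low entropy; white noise yields entropy close to 1.
sample_rate = 16_000                    # assumed sample rate for the toy signals
t = np.arange(sample_rate) / sample_rate
print(spectral_entropy(np.sin(2 * np.pi * 440 * t)))
print(spectral_entropy(np.random.randn(sample_rate)))
```

Transmitting a handful of such scalars per recording window, instead of the audio itself, is what makes the power savings claimed in the abstract plausible.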
9. An Offloading Scheme Leveraging on Neighboring Node Resources for Edge Computing over Fiber-Wireless (FiWi) Access Networks (Cited by: 3)
Authors: Wei Chang, Yihong Hu, Guochu Shou, Yaqiong Liu, Zhigang Guo. China Communications (SCIE, CSCD), 2019, Issue 11, pp. 107-119 (13 pages)
The computation resources at a single node in Edge Computing (EC) are commonly limited and cannot execute large-scale computation tasks. To face this challenge, an Offloading scheme leveraging on NEighboring node Resources (ONER) for EC over Fiber-Wireless (FiWi) access networks is proposed in this paper. In the ONER scheme, the FiWi network connects edge computing nodes with fiber and converges wireless and fiber connections seamlessly, so that it can support offloading transmission with low delay and wide bandwidth. Based on the ONER scheme supported by FiWi networks, computation tasks can be offloaded to edge computing nodes over a wider area without increasing wireless hops (e.g., just one wireless hop), which achieves low delay. Additionally, an efficient Computation Resource Scheduling (CRS) algorithm based on the ONER scheme is also proposed to make offloading decisions. The results show that more offloading requests can be satisfied and the average completion time of computation tasks decreases significantly with the ONER scheme and the CRS algorithm. Therefore, the ONER scheme and the CRS algorithm can schedule computation resources at neighboring edge computing nodes for offloading to meet the challenge of large-scale computation tasks.
Keywords: edge computing; offloading; fiber-wireless access networks; delay
10. Joint Computing and Communication Resource Allocation for Satellite Communication Networks with Edge Computing (Cited by: 14)
Authors: Shanghong Zhang, Gaofeng Cui, Yating Long, Weidong Wang. China Communications (SCIE, CSCD), 2021, Issue 7, pp. 236-252 (17 pages)
Benefiting from enhanced onboard processing capacities and high-speed satellite-terrestrial links, satellite edge computing has been regarded as a promising technique to facilitate the execution of computation-intensive applications in satellite communication networks (SCNs). By deploying edge computing servers on satellites and in gateway stations, SCNs can achieve significant gains in computing capacity at the expense of extending the dimensions and complexity of resource management. Therefore, in this paper, we investigate the joint computing and communication resource management problem for SCNs to minimize the execution latency of computation-intensive applications, while two different satellite edge computing scenarios and local execution are considered. Furthermore, the joint computing and communication resource allocation problem for computation-intensive services is formulated as a mixed-integer programming problem. A game-theoretic and many-to-one matching theory-based scheme (JCCRA-GM) is proposed to achieve an approximately optimal solution. Numerical results show that the proposed low-complexity method can achieve almost the same weighted-sum latency as the brute-force method.
Keywords: satellite communication networks; edge computing; resource allocation; matching theory
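The JCCRA-GM scheme combines game theory with many-to-one matching; the paper's exact formulation is not reproduced here. As a generic illustration of the many-to-one matching ingredient only, the Python sketch below runs deferred acceptance between tasks and edge servers with capacity limits; the preference lists and capacities are invented toy data, not values from the paper.

```python
def many_to_one_matching(task_prefs, server_prefs, capacity):
    """Deferred acceptance: tasks propose, servers keep their best proposers within capacity."""
    rank = {s: {t: i for i, t in enumerate(prefs)} for s, prefs in server_prefs.items()}
    matched = {s: [] for s in server_prefs}
    next_choice = {t: 0 for t in task_prefs}
    free = list(task_prefs)

    while free:
        task = free.pop()
        if next_choice[task] >= len(task_prefs[task]):
            continue                      # task exhausted its list and stays unmatched
        server = task_prefs[task][next_choice[task]]
        next_choice[task] += 1
        matched[server].append(task)
        if len(matched[server]) > capacity[server]:
            # The server rejects its least preferred current proposer.
            worst = max(matched[server], key=lambda t: rank[server][t])
            matched[server].remove(worst)
            free.append(worst)
    return matched

# Toy instance: three tasks, two servers with capacities 1 and 2 (assumed values).
task_prefs = {"t1": ["s1", "s2"], "t2": ["s1", "s2"], "t3": ["s2", "s1"]}
server_prefs = {"s1": ["t2", "t1", "t3"], "s2": ["t1", "t3", "t2"]}
print(many_to_one_matching(task_prefs, server_prefs, {"s1": 1, "s2": 2}))
```

In a scheme like JCCRA-GM the preference lists would presumably be built from latency estimates rather than fixed by hand, which is where the game-theoretic side of the paper comes in.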
11. Joint Allocation of Wireless Resource and Computing Capability in MEC-Enabled Vehicular Network (Cited by: 10)
Authors: Yanzhao Hou, Chengrui Wang, Min Zhu, Xiaodong Xu, Xiaofeng Tao, Xunchao Wu. China Communications (SCIE, CSCD), 2021, Issue 6, pp. 64-76 (13 pages)
In an MEC-enabled vehicular network with limited wireless and computation resources, stringent delay and high reliability requirements are challenging issues. In order to reduce the total delay in the network as well as ensure the reliability of Vehicular UE (VUE), a Joint Allocation of Wireless resource and MEC Computing resource (JAWC) algorithm is proposed. The JAWC algorithm includes two steps: V2X link clustering and MEC computation resource scheduling. In the V2X link clustering, a Spectral Radius based Interference Cancellation scheme (SR-IC) is proposed to obtain the optimal resource allocation matrix. By converting the calculation of SINR into the calculation of the maximum row sum of a matrix, the accumulated interference of VUE can be constrained and the SINR calculation complexity can be effectively reduced. In the MEC computation resource scheduling, by transforming the original optimization problem into a convex problem, the optimal task offloading proportion of VUE and the MEC computation resource allocation can be obtained. The simulation further demonstrates that the JAWC algorithm can significantly reduce the total delay as well as ensure the communication reliability of VUE in the MEC-enabled vehicular network.
Keywords: vehicular network; delay optimization; wireless resource allocation; matrix spectral radius; MEC computation resource allocation
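The SR-IC step described above rests on a standard linear-algebra fact: the spectral radius of a matrix is bounded above by its maximum absolute row sum, so a feasibility condition that formally requires the spectral radius of a normalized interference matrix to stay below one can be checked with a cheaper row-sum test. The Python snippet below illustrates that bound on a random interference matrix; the matrix values are arbitrary examples, not the clustering algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy normalized interference matrix F for 4 V2X links; roughly,
# F[i, j] ~ (SINR target of link i) * (cross gain j -> i) / (direct gain of link i), i != j.
F = rng.uniform(0.0, 0.2, size=(4, 4))
np.fill_diagonal(F, 0.0)

spectral_radius = max(abs(np.linalg.eigvals(F)))
max_row_sum = F.sum(axis=1).max()        # infinity-norm bound on the spectral radius

print(f"spectral radius = {spectral_radius:.3f}")
print(f"max row sum     = {max_row_sum:.3f}  (always >= spectral radius)")

# In the classical power-control feasibility condition, SINR targets are supportable
# when the spectral radius of F is below 1; if even the max row sum is below 1,
# the targets are certainly supportable, so the row-sum check can replace the
# eigenvalue computation as a conservative, low-complexity test.
if max_row_sum < 1.0:
    print("row-sum test passed: SINR targets feasible")
```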
12. Security Model Research Based on Trusted Computing in Ad Hoc Network (Cited by: 2)
Authors: 林筑英, 刘晓杰, 卢林, 师蕾, 谢刚. China Communications (SCIE, CSCD), 2011, Issue 4, pp. 1-10 (10 pages)
With the rapid development of wireless networks, Ad Hoc networks are widely used in many fields, but the current network security solutions for Ad Hoc networks are not competitive enough. The critical technology for Ad Hoc network applications is therefore how to implement the security scheme. Here, the discussion focuses on the specific solution to the security threats that Ad Hoc networks will face, the methodology of a management model that uses trusted computing technology to solve Ad Hoc network security problems, and the analysis and verification of the security of this model.
Keywords: Ad Hoc network; trusted computing; network security
13. A Broad Learning-Driven Network Traffic Analysis System Based on Fog Computing Paradigm (Cited by: 3)
Authors: Xiting Peng, Kaoru Ota, Mianxiong Dong. China Communications (SCIE, CSCD), 2020, Issue 2, pp. 1-13 (13 pages)
The development of communication technologies that support traffic-intensive applications presents new challenges in designing a real-time traffic analysis architecture and an accurate method suitable for a wide variety of traffic types. Current traffic analysis methods are executed on the cloud, which requires uploading the traffic data. Fog computing is a more promising way to save bandwidth resources by offloading these tasks to fog nodes. However, traffic analysis models based on traditional machine learning need to retrain on all traffic data when updating the trained model, which is not suitable for fog computing due to its limited computing power. In this study, we design a novel fog computing based traffic analysis system using broad learning. For one thing, fog computing can provide a distributed architecture for saving bandwidth resources. For another, we use broad learning to incrementally train on the traffic data, which is more suitable for fog computing because it supports incremental model updates without retraining on all data. We implement our system on the Raspberry Pi, and experimental results show that it identifies traffic data accurately with 98% probability. Moreover, our method has a faster training speed compared with a Convolutional Neural Network (CNN).
Keywords: traffic analysis; fog computing; broad learning; radio access networks
14. All-optical computing based on convolutional neural networks (Cited by: 10)
Authors: Kun Liao, Ye Chen, Zhongcheng Yu, Xiaoyong Hu, Xingyuan Wang, Cuicui Lu, Hongtao Lin, Qingyang Du, Juejun Hu, Qihuang Gong. Opto-Electronic Advances (SCIE), 2021, Issue 11, pp. 46-54 (9 pages)
The rapid development of information technology has fueled an ever-increasing demand for ultrafast and ultralow-energy-consumption computing. Existing computing instruments are predominantly electronic processors, which use electrons as information carriers and possess a von Neumann architecture featured by the physical separation of storage and processing. The scaling of computing speed is limited not only by data transfer between memory and processing units, but also by the RC delay associated with integrated circuits. Moreover, excessive heating due to Ohmic losses is becoming a severe bottleneck for both speed and power consumption scaling. Using photons as information carriers is a promising alternative. Owing to the weak third-order optical nonlinearity of conventional materials, building integrated photonic computing chips under the traditional von Neumann architecture has been a challenge. Here, we report a new all-optical computing framework to realize ultrafast and ultralow-energy-consumption all-optical computing based on convolutional neural networks. The device is constructed from cascaded silicon Y-shaped waveguides with side-coupled silicon waveguide segments, which we term "weight modulators", to enable complete phase and amplitude control in each waveguide branch. The generic device concept can be used for equation solving and multifunctional logic operations, as well as many other mathematical operations. Multiple computing functions, including transcendental equation solvers, multifarious logic gate operators, and half-adders, were experimentally demonstrated to validate the all-optical computing performance. The time-of-flight of light through the network structure corresponds to an ultrafast computing time on the order of several picoseconds, with an ultralow energy consumption of dozens of femtojoules per bit. Our approach can be further expanded to fulfill other complex computing tasks based on non-von Neumann architectures and thus paves a new way for on-chip all-optical computing.
Keywords: convolutional neural networks; all-optical computing; mathematical operations; cascaded silicon waveguides
15. Near-Sensor Edge Computing System Enabled by a CMOS Compatible Photonic Integrated Circuit Platform Using Bilayer AlN/Si Waveguides (Cited by: 1)
Authors: Zhihao Ren, Zixuan Zhang, Yangyang Zhuge, Zian Xiao, Siyu Xu, Jingkai Zhou, Chengkuo Lee. Nano-Micro Letters, 2025, Issue 11, pp. 1-20 (20 pages)
The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications, such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, to provide real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computations, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). This groundbreaking system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
Keywords: photonic integrated circuits; edge computing; aluminum nitride; neural networks; wearable sensors
16. Virtualization Technology in Cloud Computing Based Radio Access Networks: A Primer (Cited by: 2)
Authors: ZHANG Xian, PENG Mugen. ZTE Communications, 2017, Issue 4, pp. 47-66 (20 pages)
Since virtualization technology enables the abstraction and sharing of resources in a flexible management way, the overall expenses of network deployment can be significantly reduced. Therefore, the technology has been widely applied in the core network. With the tremendous growth in mobile traffic and services, it is natural to extend virtualization technology to cloud computing based radio access networks (CC-RANs) to achieve high spectral efficiency at low cost. In this paper, the virtualization technologies in CC-RANs are surveyed, including the system architecture, key enabling techniques, challenges, and open issues. The key enabling technologies for virtualization in CC-RANs, mainly including virtual resource allocation, radio access network (RAN) slicing, mobility management, and social awareness, are comprehensively surveyed to satisfy the isolation, customization, and high-efficiency utilization of radio resources. The challenges and open issues mainly focus on virtualization levels for CC-RANs, signaling design for CC-RAN virtualization, performance analysis for CC-RAN virtualization, and network security for virtualized CC-RANs.
Keywords: network virtualization; CC-RAN; RAN slicing; fog computing
17. Joint Resource Allocation Using Evolutionary Algorithms in Heterogeneous Mobile Cloud Computing Networks (Cited by: 10)
Authors: Weiwei Xia, Lianfeng Shen. China Communications (SCIE, CSCD), 2018, Issue 8, pp. 189-204 (16 pages)
The problem of joint radio and cloud resource allocation is studied for heterogeneous mobile cloud computing networks. The objective of the proposed joint resource allocation schemes is to maximize the total utility of users as well as satisfy the required quality of service (QoS), such as the end-to-end response latency experienced by each user. We formulate the problem of joint resource allocation as a combinatorial optimization problem. Three evolutionary approaches are considered to solve the problem: the genetic algorithm (GA), ant colony optimization with a genetic algorithm (ACO-GA), and the quantum genetic algorithm (QGA). To decrease the time complexity, we propose a mapping process between the resource allocation matrix and the chromosome of GA, ACO-GA, and QGA, search the available radio and cloud resource pairs based on the resource availability matrices for ACO-GA, and encode the difference between the allocated resources and the minimum resource requirement for QGA. Extensive simulation results show that our proposed methods greatly outperform existing algorithms in terms of running time, accuracy of the final results, total utility, resource utilization, and end-to-end response latency guarantees.
Keywords: heterogeneous mobile cloud computing networks; resource allocation; genetic algorithm; ant colony optimization; quantum genetic algorithm
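The abstract's key implementation device is a mapping between the resource allocation matrix and a GA chromosome. The Python sketch below shows one plausible such encoding, with each gene holding the index of the (radio, cloud) resource pair assigned to a user, together with the decode back to a 0-1 allocation matrix; the numbers of users and resource pairs are assumptions for illustration, not the paper's settings.

```python
import numpy as np

N_USERS, N_PAIRS = 5, 8   # assumed sizes: 5 users, 8 candidate (radio, cloud) pairs

def random_chromosome(rng):
    # Gene u holds the index of the resource pair allocated to user u.
    return rng.integers(0, N_PAIRS, size=N_USERS)

def decode(chromosome):
    # Expand the compact chromosome into the 0-1 resource allocation matrix.
    matrix = np.zeros((N_USERS, N_PAIRS), dtype=int)
    matrix[np.arange(N_USERS), chromosome] = 1
    return matrix

def encode(matrix):
    # Inverse mapping: column index of the single 1 in each row.
    return matrix.argmax(axis=1)

rng = np.random.default_rng(1)
chrom = random_chromosome(rng)
alloc = decode(chrom)
assert np.array_equal(encode(alloc), chrom)
print(chrom)
print(alloc)
```

Keeping the chromosome this compact is what lets crossover and mutation stay cheap compared with operating on the full matrix, which appears to be the time-complexity argument the authors make.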
18. Analysis and Optimization on Partition-Based Caching and Delivery in Satellite-Terrestrial Edge Computing Networks (Cited by: 4)
Authors: Peng Wang, Xing Zhang, Jiaxin Zhang, Shuang Zheng, Wenhao Liu. China Communications (SCIE, CSCD), 2023, Issue 3, pp. 252-285 (34 pages)
As a viable component of the 6G wireless communication architecture, satellite-terrestrial networks support efficient file delivery by leveraging the innate broadcast ability of satellites and the enhanced, powerful file transmission approaches of multi-tier terrestrial networks. In this paper, we introduce edge computing technology into the satellite-terrestrial network and propose a partition-based caching and delivery strategy to make full use of the integrated resources and reduce the backhaul load. Focusing on the interference from various nodes at different geographical distances, we derive the successful file transmission probability of the typical user by utilizing the tools of stochastic geometry. Considering the constraints of node cache space and file set parameters, we propose a near-optimal partition-based caching and delivery strategy by optimizing the asymptotic successful transmission probability of the typical user. The complex nonlinear programming problem is settled by jointly utilizing a standard particle swarm optimization (PSO) method and a greedy-based multiple knapsack choice problem (MKCP) optimization method. Numerical results show that, compared with the terrestrial-only cache strategy, the Ground Popular Strategy, the Satellite Popular Strategy, and the independent and identically distributed popularity strategy, the performance of the proposed scheme improves by 30.5%, 9.3%, 12.5%, and 13.7%, respectively.
Keywords: edge computing; satellite-terrestrial networks; caching deployment; stochastic geometry; 6G networks
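One of the two solvers mentioned above is a greedy method for a multiple knapsack choice problem, i.e., packing files into the limited caches of several nodes to maximize an expected delivery benefit. The Python sketch below is a generic greedy placement by benefit-per-size, offered only as an illustration of that problem shape and not as the paper's algorithm; the file sizes, cache capacities, and per-node utilities are invented sample data.

```python
def greedy_cache_placement(files, nodes):
    """files: {name: (size, {node: utility})}; nodes: {node: capacity}."""
    remaining = dict(nodes)
    placement = []

    # Enumerate every feasible (file, node) option with its utility density.
    options = [
        (util / size, f, n)
        for f, (size, utils) in files.items()
        for n, util in utils.items()
    ]
    cached = set()
    for density, f, n in sorted(options, reverse=True):
        size = files[f][0]
        if f not in cached and remaining[n] >= size:
            placement.append((f, n))
            remaining[n] -= size
            cached.add(f)        # each file is cached on at most one node here
    return placement

# Toy instance (assumed data): utilities could come from popularity times hit probability.
files = {
    "f1": (2, {"sat": 5.0, "bs": 4.0}),
    "f2": (3, {"sat": 6.0, "bs": 7.5}),
    "f3": (1, {"sat": 1.5, "bs": 2.0}),
}
print(greedy_cache_placement(files, {"sat": 3, "bs": 4}))
```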
19. Computation and wireless resource management in 6G space-integrated-ground access networks (Cited by: 1)
Authors: Ning Hui, Qian Sun, Lin Tian, Yuanyuan Wang, Yiqing Zhou. Digital Communications and Networks, 2025, Issue 3, pp. 768-777 (10 pages)
In 6th Generation Mobile Networks (6G), the Space-Integrated-Ground (SIG) Radio Access Network (RAN) promises seamless coverage and exceptionally high Quality of Service (QoS) for diverse services. However, achieving this necessitates effective management of computation and wireless resources tailored to the requirements of various services. The heterogeneity of computation resources and interference among shared wireless resources pose significant coordination and management challenges. To solve these problems, this work provides an overview of multi-dimensional resource management, covering both computation and wireless resources, in the 6G SIG RAN. It first reviews current investigations on computation and wireless resource management and analyzes existing deficiencies and challenges. Then, focusing on these challenges, the work proposes an MEC-based computation resource management scheme and a mixed numerology-based wireless resource management scheme. Furthermore, it outlines promising future technologies, including joint model-driven and data-driven resource management and blockchain-based resource management within the 6G SIG network. The work also highlights remaining challenges, such as reducing the communication costs associated with unstable ground-to-satellite links and overcoming barriers posed by spectrum isolation. Overall, this comprehensive approach aims to pave the way for efficient and effective resource management in future 6G networks.
Keywords: space-integrated-ground; radio access network; MEC-based computation resource management; mixed numerology-based wireless resource management
20. DeepSeek vs. ChatGPT vs. Claude: A comparative study for scientific computing and scientific machine learning tasks (Cited by: 1)
Authors: Qile Jiang, Zhiwei Gao, George Em Karniadakis. Theoretical & Applied Mechanics Letters, 2025, Issue 3, pp. 194-206 (13 pages)
Large language models (LLMs) have emerged as powerful tools for addressing a wide range of problems, including those in scientific computing, particularly in solving partial differential equations (PDEs). However, different models exhibit distinct strengths and preferences, resulting in varying levels of performance. In this paper, we compare the capabilities of the most advanced LLMs (DeepSeek, ChatGPT, and Claude), along with their reasoning-optimized versions, in addressing computational challenges. Specifically, we evaluate their proficiency in solving traditional numerical problems in scientific computing as well as leveraging scientific machine learning techniques for PDE-based problems. We designed all our experiments so that a nontrivial decision is required, e.g., defining the proper space of input functions for neural operator learning. Our findings show that reasoning and hybrid-reasoning models consistently and significantly outperform non-reasoning ones in solving challenging problems, with ChatGPT o3-mini-high generally offering the fastest reasoning speed.
Keywords: large language models (LLM); scientific computing; scientific machine learning; physics-informed neural network