Journal Articles
23,426 articles found
1. Intelligent Resource Allocation for Multiaccess Edge Computing in 5G Ultra-Dense Slicing Network Using Federated Multiagent DDPG Algorithm
Authors: Gong Yu, Gong Pengwei, Jiang He, Xie Wen, Wang Chenxi, Xu Peijun. 《China Communications》, 2026, No. 1, pp. 273-289 (17 pages)
Nowadays, advances in communication technology and cloud computing have spawned a variety of smart mobile devices, which generate a great amount of computation-intensive workloads and require corresponding computation and communication resources. Multiaccess edge computing (MEC) can offload computation-intensive tasks to nearby edge servers, which alleviates the pressure on devices. An ultra-dense network (UDN) can provide effective spectrum resources by deploying a large number of micro base stations. Furthermore, network slicing can support various applications in different communication scenarios. Therefore, this paper integrates ultra-dense network slicing with MEC technology and introduces a hybrid computation offloading strategy to satisfy the various quality-of-service (QoS) requirements of edge devices. To allocate the limited resources dynamically, the problem is formulated as multiagent distributed deep reinforcement learning (DRL), which yields a low-overhead computation offloading strategy and real-time resource allocation decisions. In this context, federated learning is used to train the DRL agents in a distributed manner, where each agent explores actions composed of offloading decisions and resource allocations, so as to jointly optimize system delay and energy consumption. Simulation results show that the proposed learning algorithm outperforms other strategies in the literature.
Keywords: federated learning; multiaccess edge computing; multiagent deep reinforcement learning; resource allocation; ultra-dense slicing network
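The federated training loop the abstract describes can be sketched, in miniature, as FedAvg-style weight averaging: each edge agent trains its DDPG actor locally, and a server aggregates the parameters weighted by local data volume. This is an illustrative sketch of the general technique, not the paper's actual implementation; all names and numbers are assumptions.

```python
import numpy as np

def fed_avg(agent_weights, sample_counts):
    """Aggregate per-agent parameter vectors by a weighted average (FedAvg).

    agent_weights: list of 1-D numpy arrays (one flattened model per agent)
    sample_counts: number of local transitions each agent trained on
    """
    total = sum(sample_counts)
    coeffs = [n / total for n in sample_counts]
    # Weighted sum of the flattened parameters becomes the new global model.
    return sum(c * w for c, w in zip(coeffs, agent_weights))

# Two edge agents explore locally, then the server averages their actor weights.
w1 = np.array([1.0, 2.0, 3.0])
w2 = np.array([3.0, 4.0, 5.0])
global_w = fed_avg([w1, w2], sample_counts=[100, 300])  # agent 2 weighs 3x more
```

In a full system the averaged weights would be broadcast back to the agents before the next round of local exploration.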
2. Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network (cited 1 time)
Authors: Zhiguo Liu, Yuqing Gui, Lin Wang, Yingru Jiang. 《Computers, Materials & Continua》 (SCIE, EI), 2025, No. 1, pp. 863-879 (17 pages)
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
Keywords: satellite network; edge computing; task scheduling; computation offloading
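The prioritized experience replay enhancement mentioned above can be sketched as proportional sampling: transitions with larger TD error get sampled more often. A minimal illustrative version (not the paper's implementation; class and field names are assumptions, and real implementations use a sum-tree for efficiency):

```python
import random

class PrioritizedReplay:
    """Minimal proportional prioritized replay; alpha controls how strongly
    TD-error priorities skew the sampling distribution."""

    def __init__(self, alpha=0.6):
        self.alpha = alpha
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error):
        self.buffer.append(transition)
        # Small epsilon keeps zero-error transitions sampleable.
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, k):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idx = random.choices(range(len(self.buffer)), weights=probs, k=k)
        return [self.buffer[i] for i in idx]

random.seed(0)
buf = PrioritizedReplay()
buf.add(("low-priority task",), td_error=0.01)
buf.add(("high-priority task",), td_error=5.0)
batch = buf.sample(4)  # high-TD-error transitions dominate the batch
```

A production version would also return importance-sampling weights to correct the induced bias in the Q-learning update.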
3. Physics-Informed Neural Networks: Current Progress and Challenges in Computational Solid and Structural Mechanics
Authors: Itthidet Thawon, Duy Vo, Tinh Quoc Bui, Kanya Rattanamongkhonkun, Chakkapong Chamroon, Nakorn Tippayawong, Yuttana Mona, Ramnarong Wanison, Pana Suttakul. 《Computer Modeling in Engineering & Sciences》, 2026, No. 2, pp. 48-86 (39 pages)
Physics-informed neural networks (PINNs) have emerged as a promising class of scientific machine learning techniques that integrate governing physical laws into neural network training. Their ability to enforce differential equations, constitutive relations, and boundary conditions within the loss function provides a physically grounded alternative to traditional data-driven models, particularly for solid and structural mechanics, where data are often limited or noisy. This review offers a comprehensive assessment of recent developments in PINNs, combining bibliometric analysis, theoretical foundations, application-oriented insights, and methodological innovations. A bibliometric survey indicates a rapid increase in publications on PINNs since 2018, with prominent research clusters focused on numerical methods, structural analysis, and forecasting. Building upon this trend, the review consolidates advancements across five principal application domains: forward structural analysis, inverse modeling and parameter identification, structural and topology optimization, assessment of structural integrity, and manufacturing processes. These applications are propelled by substantial methodological advancements, encompassing rigorous enforcement of boundary conditions, modified loss functions, adaptive training, domain decomposition strategies, multi-fidelity and transfer learning approaches, as well as hybrid finite element-PINN integration. These advances address recurring challenges in solid mechanics, such as high-order governing equations, material heterogeneity, complex geometries, localized phenomena, and limited experimental data. Despite remaining challenges in computational cost, scalability, and experimental validation, PINNs are increasingly evolving into specialized, physics-aware tools for practical solid and structural mechanics applications.
Keywords: artificial intelligence; physics-informed neural networks; computational mechanics; bibliometric analysis; solid mechanics; structural mechanics
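The core PINN idea — turning the governing equation into a loss to be minimized — can be shown in miniature without a neural network. Here the boundary-value problem u''(x) = -1 on [0,1] with u(0) = u(1) = 0 is solved with the one-parameter ansatz u(x) = a·x·(1-x), whose residual u'' + 1 = 1 - 2a is driven to zero by gradient descent; a real PINN replaces the ansatz with a network and uses automatic differentiation. This is a didactic sketch, not any method from the review.

```python
def residual_loss(a):
    # PDE residual of u'' = -1 for the ansatz u(x) = a*x*(1-x): u'' = -2a,
    # so the residual at every collocation point is (-2a + 1).
    return (-2.0 * a + 1.0) ** 2

a = 0.0          # trainable parameter (the "network weight" of the sketch)
lr = 0.1
for _ in range(200):
    grad = -4.0 * (-2.0 * a + 1.0)   # d/da of (1 - 2a)^2
    a -= lr * grad

# The exact solution u(x) = x(1-x)/2 corresponds to a = 0.5.
```

The boundary conditions are satisfied by construction here; in general PINNs they enter the loss as additional penalty terms.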
4. Container cluster placement in edge computing based on reinforcement learning incorporating graph convolutional networks scheme
Authors: Zhuo Chen, Bowen Zhu, Chuan Zhou. 《Digital Communications and Networks》, 2025, No. 1, pp. 60-70 (11 pages)
Container-based virtualization technology has recently been widely used in edge computing environments due to its advantages of lighter resource occupation, faster startup, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the generated containers to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for the various resources occupied by providing services as revenue and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, through the introduction of GCN, the features of the association relationships between the multiple containers in a CC can be effectively extracted to improve the quality of placement. Experiment results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
Keywords: edge computing; network virtualization; container cluster; deep reinforcement learning; graph convolutional network
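The GCN-based feature extraction over the container association graph boils down to one propagation rule: normalize the adjacency (with self-loops) and mix each container's features with its neighbors'. A minimal numpy sketch of one such layer — illustrative only, with made-up features, not the RL-GCN architecture:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN propagation step: the symmetric-normalized adjacency (with
    self-loops) aggregates neighbor container features before a linear map."""
    a_hat = adj + np.eye(adj.shape[0])                    # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt                # D^-1/2 A_hat D^-1/2
    return np.maximum(norm @ feats @ weight, 0)           # ReLU activation

# Three containers; 0-1 and 1-2 are interconnected within the cluster.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.array([[1.0], [2.0], [3.0]])   # e.g. CPU demand per container
w = np.array([[1.0]])                     # 1x1 weight just for illustration
embedded = gcn_layer(adj, feats, w)       # per-container neighborhood feature
```

Stacking a few such layers gives each container embedding a view of its multi-hop neighborhood, which the actor-critic policy can then consume.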
5. Intelligent Management of Resources for Smart Edge Computing in 5G Heterogeneous Networks Using Blockchain and Deep Learning
Authors: Mohammad Tabrez Quasim, Khair Ul Nisa, Mohammad Shahid Husain, Abakar Ibraheem Abdalla Aadam, Mohammed Waseequ Sheraz, Mohammad Zunnun Khan. 《Computers, Materials & Continua》, 2025, No. 7, pp. 1169-1187 (19 pages)
Smart edge computing (SEC) is a novel computing paradigm that can transfer cloud-based applications to the edge network, supporting computation-intensive services like face detection and natural language processing. A core feature of mobile edge computing, SEC improves user experience and device performance by offloading local activities to edge processors. In this framework, blockchain technology is utilized to ensure secure and trustworthy communication between edge devices and servers, protecting against potential security threats. Additionally, deep learning algorithms are employed to analyze resource availability and dynamically optimize computation offloading decisions. IoT applications that require significant resources can benefit from SEC, which has better coverage. However, because access is constantly changing and network devices have heterogeneous resources, it is not easy to create consistent, dependable, and instantaneous communication between edge devices and their processors, specifically in 5G Heterogeneous Network (HN) scenarios. Thus, an Intelligent Management of Resources for Smart Edge Computing (IMRSEC) framework, which combines blockchain, edge computing, and Artificial Intelligence (AI) in 5G HNs, is proposed in this paper. Accordingly, a dual-schedule deep reinforcement learning (DS-DRL) technique has been developed, consisting of a rapid schedule learning process and a slow schedule learning process. The primary objective is to minimize overall offloading latency and system resource usage by optimizing computation offloading, resource allocation, and application caching. Simulation results demonstrate that the DS-DRL approach reduces task execution time by 32%, validating the method's effectiveness within the IMRSEC framework.
Keywords: smart edge computing; heterogeneous networks; blockchain; 5G network; internet of things; artificial intelligence
6. Optimized Resource Allocation for Dual-Band Cooperation-Based Edge Computing Vehicular Network
Authors: Cheng Kaijun, Fang Xuming. 《China Communications》, 2025, No. 9, pp. 352-367 (16 pages)
With the miscellaneous applications generated in vehicular networks, computing performance cannot be satisfied owing to vehicles' limited processing capabilities. Besides, the low-frequency (LF) band cannot further improve network performance due to its limited spectrum resources. The high-frequency (HF) band has plentiful spectrum resources and is adopted as one of the operating bands in 5G. To achieve low latency and sustainable development, a task processing scheme is proposed for a dual-band cooperation-based vehicular network, where tasks are processed at the local side, at the macro-cell base station, or at the roadside unit through the LF or HF band to achieve stable and high-speed task offloading. Moreover, a utility function including latency and energy consumption is minimized by optimizing computing and spectrum resources, transmission power, and task scheduling. Owing to its non-convexity, an iterative optimization algorithm is proposed to solve it. Numerical results evaluate the performance and superiority of the scheme, proving that it can achieve efficient edge computing in vehicular networks.
Keywords: dual-band cooperation; edge computing; resource allocation; task processing; vehicular network
7. Spatiotemporal multiplexed photonic reservoir computing: parallel prediction for the high-dimensional dynamics of complex semiconductor laser network
Authors: Tong Yang, Li-Yue Zhang, Song-Sui Li, Wei Pan, Xi-Hua Zou, Lian-Shan Yan. 《Opto-Electronic Advances》, 2025, No. 12, pp. 42-58 (17 pages)
Accurately forecasting the high-dimensional chaotic dynamics of semiconductor laser (SL) networks is essential in photonics research. In this study, we propose a spatiotemporal multiplexed photonic reservoir computing (STM-PRC) architecture, specifically designed for parallel prediction of the high-dimensional chaotic dynamics in complex SL networks. This is accomplished by decomposing the prediction task into multiple simplified reservoirs, leveraging the intrinsic topological characteristics of the network. Additionally, we introduce a dimensionality reduction technique for high-dimensional chaotic datasets, which exploits the symmetry properties of the network topology and cluster synchronization patterns derived from complex network theory. This approach further simplifies the prediction process and enhances the computational efficiency of the parallel STM-PRC system. The feasibility and effectiveness of the proposed framework are demonstrated through numerical simulations and corroborated by experimental validation. Our results expand the application potential of SL networks in all-optical communication systems and suggest new directions for optical information processing.
Keywords: photonic reservoir computing; complex network; semiconductor lasers
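The training principle behind any reservoir computer — photonic or otherwise — is that the reservoir stays fixed and only a linear readout is fit. A minimal software echo-state sketch of that principle (all hyperparameters are illustrative assumptions; the paper's reservoir is a physical optical system, not this simulation):

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed random reservoir: only the linear readout below is ever trained.
n_res, leak = 50, 0.3
w_in = rng.uniform(-0.5, 0.5, (n_res, 1))
w_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
w_res *= 0.9 / max(abs(np.linalg.eigvals(w_res)))   # echo-state property

def run_reservoir(u_seq):
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        # Leaky-integrator state update driven by the scalar input u.
        x = (1 - leak) * x + leak * np.tanh(w_in[:, 0] * u + w_res @ x)
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a sine wave via a ridge-regression readout.
t = np.linspace(0, 8 * np.pi, 400)
u = np.sin(t)
states = run_reservoir(u[:-1])
target = u[1:]
ridge = 1e-6
w_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ target)
pred = states @ w_out
mse = float(np.mean((pred - target) ** 2))
```

The STM-PRC idea of splitting one hard prediction into several simplified reservoirs would correspond here to running several such readouts in parallel, one per network cluster.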
8. On an Ultra-Dense LEO-Satellite-Based Computing Network Constellation Design
Authors: Yijing Sun, Boya Di, Ruoqi Deng, Lingyang Song. 《Engineering》, 2025, No. 11, pp. 103-114 (12 pages)
Commercial ultra-dense low-Earth-orbit (LEO) satellite constellations have recently been deployed to provide seamless global Internet services. To improve satellite network transmission efficiency and provide robust wide-coverage computing services for future sixth-generation (6G) users, growing attention has been focused on LEO-satellite-based computing networks, to which ground users can offload computation tasks. However, how to design a LEO satellite constellation for computing networks, while considering discrepancies in the computing requirements of different regions, remains an open question. In this paper, we investigate an ultra-dense LEO-satellite-based computing network in which ground user terminals (UTs) offload part of their computing tasks to satellites. We formulate the ultra-dense constellation design problem as a multi-objective optimization problem (MOOP) to maximize the average coverage rate, transmission capacity, and computational capability, while minimizing the number of satellites. In order to depict the connectivity characteristics of satellite-based computing networks, we propose a terrestrial-satellite connectivity model to determine the coverage rate in different regions. We design a priority-adaptive algorithm to obtain the optimal inclined-orbit constellation by solving this MOOP. Simulation results verify the accuracy of our theoretical connectivity model and show the optimal constellation deployment, given quality-of-service (QoS) requirements. For the same number of deployed LEO satellites, the proposed constellation outperforms its existing counterparts; in particular, it achieves 25%-45% performance improvements in the average coverage rate.
Keywords: low-Earth-orbit satellite constellation; satellite-based computing network; multi-objective optimization
9. VHO Algorithm for Heterogeneous Networks of UAV-Hangar Cluster Based on GA Optimization and Edge Computing
Authors: Siliang Chen, Dongri Shan, Yansheng Niu. 《Computers, Materials & Continua》, 2025, No. 12, pp. 5263-5286 (24 pages)
With the increasing deployment of Unmanned Aerial Vehicle-Hangar (UAV-H) clusters in dynamic environments such as disaster response and precision agriculture, existing networking schemes often struggle to adapt to complex scenarios, while traditional Vertical Handoff (VHO) algorithms fail to fully address the unique challenges of UAV-H systems, including high-speed mobility and limited computational resources. To bridge this gap, this paper proposes a heterogeneous network architecture integrating 5th Generation Mobile Communication Technology (5G) cellular networks and self-organizing mesh networks for UAV-H clusters, accompanied by a novel VHO algorithm. The proposed algorithm leverages Multi-Attribute Decision-Making (MADM) theory combined with Genetic Algorithm (GA) optimization, incorporating edge computing to enable real-time decision-making and offload computational tasks efficiently. By constructing a utility function from attribute and weight matrices, the algorithm ensures that UAV-H clusters dynamically select the network access with the highest utility value. Simulation results demonstrate that the proposed method reduces the number of network handoffs by 26.13% compared to the Decision Tree VHO (DT-VHO), effectively mitigating the ping-pong effect, and enhances total system throughput by 19.99% under the same conditions. In terms of handoff delay, it outperforms the Artificial Neural Network VHO (ANN-VHO), significantly improving Quality of Service (QoS). Finally, real-world hardware platform experiments validate the algorithm's feasibility and superior performance in practical UAV-H cluster operations. This work provides a robust solution for seamless network connectivity in high-mobility UAV clusters, offering critical support for emerging applications requiring reliable and efficient wireless communication.
Keywords: vertical handoff; heterogeneous networks; genetic algorithm; multi-attribute decision-making; unmanned aerial vehicle; edge computing
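The MADM core of such a VHO algorithm is a weighted-sum utility over normalized network attributes; the candidate network with the highest utility wins the handoff decision, and a GA would tune the weights offline. A minimal sketch with illustrative attribute names, weights, and scores (all assumptions, not values from the paper):

```python
def network_utility(attributes, weights):
    """Weighted-sum utility over normalized attributes in [0, 1]."""
    return sum(weights[k] * attributes[k] for k in weights)

weights = {"bandwidth": 0.4, "delay": 0.3, "signal": 0.3}  # GA-tunable

# Normalized scores; "delay" is already inverted so higher means better.
candidates = {
    "5G cell":  {"bandwidth": 0.9, "delay": 0.6, "signal": 0.7},
    "mesh hop": {"bandwidth": 0.5, "delay": 0.8, "signal": 0.9},
}
best = max(candidates, key=lambda n: network_utility(candidates[n], weights))
```

Hysteresis on the utility gap (only hand off when the winner leads by a margin) is the usual way such schemes suppress the ping-pong effect.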
10. Computing Power Network: The Architecture of Convergence of Computing and Networking towards 6G Requirement (cited 55 times)
Authors: Xiongyan Tang, Chang Cao, Youxiang Wang, Shuai Zhang, Ying Liu, Mingxuan Li, Tao He. 《China Communications》 (SCIE, CSCD), 2021, No. 2, pp. 175-185 (11 pages)
In the 6G era, service forms in which computing power acts as the core will be ubiquitous in the network. At the same time, collaboration among edge computing, cloud computing, and the network is needed to support edge computing services with strong demand for computing power, so as to optimize resource utilization. On this basis, the article discusses the research background, key techniques, and main application scenarios of the computing power network. The demonstration shows that the technical solution of the computing power network can effectively meet the multi-level deployment and flexible scheduling needs of future 6G business for computing, storage, and network, and adapt to the integration needs of computing power and network in various scenarios, such as user-oriented, government- and enterprise-oriented, and open computing power scenarios.
Keywords: 6G; edge computing; cloud computing; convergence of cloud and network; computing power network
11. Computing Power Network: A Survey (cited 23 times)
Authors: Sun Yukun, Lei Bo, Liu Junlin, Huang Haonan, Zhang Xing, Peng Jing, Wang Wenbo. 《China Communications》 (SCIE, CSCD), 2024, No. 9, pp. 109-145 (37 pages)
With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend of ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources due to the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm has been proposed: the Computing Power Network (CPN). A computing power network can connect ubiquitous and heterogeneous computing power resources through networking to realize flexible computing power scheduling. In this survey, we present an exhaustive review of state-of-the-art research efforts on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, a comprehensive elaboration of issues on computing power modeling, information awareness and announcement, resource allocation, network forwarding, the computing power transaction platform, and the resource orchestration platform is presented. A computing power network testbed is built and evaluated. The applications and use cases of computing power networks are discussed. Then, the key enabling technologies for computing power networks are introduced. Finally, open challenges and future research directions are presented as well.
Keywords: computing power modeling; computing power network; computing power scheduling; information awareness; network forwarding
12. Wireless Acoustic Sensor Networks and Edge Computing for Rapid Acoustic Monitoring (cited 7 times)
Authors: Zhengguo Sheng, Saskia Pfersich, Alice Eldridge, Jianshan Zhou, Daxin Tian, Victor C. M. Leung. 《IEEE/CAA Journal of Automatica Sinica》 (EI, CSCD), 2019, No. 1, pp. 64-74 (11 pages)
Passive acoustic monitoring is emerging as a promising solution to the urgent, global need for new biodiversity assessment methods. The ecological relevance of the soundscape is increasingly recognised, and the affordability of robust hardware for remote audio recording is stimulating international interest in the potential of acoustic methods for biodiversity monitoring. The scale of the data involved requires automated methods; however, the development of acoustic sensor networks capable of sampling the soundscape across time and space and relaying the data to an accessible storage location remains a significant technical challenge, with power management at its core. Recording and transmitting large quantities of audio data is power intensive, hampering long-term deployment in remote, off-grid locations of key ecological interest. Rather than transmitting heavy audio data, in this paper, we propose a low-cost and energy-efficient wireless acoustic sensor network integrated with an edge computing structure for remote acoustic monitoring and in situ analysis. Recording and computation of acoustic indices are carried out directly on edge devices built from low-noise Primo condenser microphones and Teensy microcontrollers, using internal FFT hardware support. The resultant indices are transmitted over a ZigBee-based wireless mesh network to a destination server. Benchmark tests of audio quality, indices computation, and power consumption demonstrate acoustic equivalence and significant power savings over current solutions.
Keywords: acoustic sensor networks; edge computing; energy efficiency
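The bandwidth saving comes from computing an acoustic index on the device and transmitting only that scalar instead of raw audio. A sketch of one such index — normalized spectral entropy, which separates tonal events from broadband noise — computed with an FFT, as an edge node might (this is an illustrative index, not necessarily the one the paper's firmware computes):

```python
import numpy as np

def spectral_entropy(frame, eps=1e-12):
    """Normalized spectral entropy in [0, 1] of one audio frame: a compact
    acoustic index a sensor can transmit instead of the audio itself."""
    power = np.abs(np.fft.rfft(frame)) ** 2
    p = power / (power.sum() + eps)
    h = -np.sum(p * np.log2(p + eps))
    return h / np.log2(len(p))

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)     # tonal, birdsong-like signal
rng = np.random.default_rng(1)
noise = rng.standard_normal(fs)        # broadband noise floor

# A pure tone concentrates power in one FFT bin (low entropy);
# white noise spreads power across bins (entropy near 1).
h_tone = spectral_entropy(tone)
h_noise = spectral_entropy(noise)
```

One second of 16-bit mono audio at 8 kHz is 16 kB; the index is a few bytes, which is what makes low-rate ZigBee links viable here.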
13. An Offloading Scheme Leveraging on Neighboring Node Resources for Edge Computing over Fiber-Wireless (FiWi) Access Networks (cited 3 times)
Authors: Wei Chang, Yihong Hu, Guochu Shou, Yaqiong Liu, Zhigang Guo. 《China Communications》 (SCIE, CSCD), 2019, No. 11, pp. 107-119 (13 pages)
The computation resources at a single node in Edge Computing (EC) are commonly limited and cannot execute large-scale computation tasks. To face this challenge, an Offloading scheme leveraging on NEighboring node Resources (ONER) for EC over Fiber-Wireless (FiWi) access networks is proposed in this paper. In the ONER scheme, the FiWi network connects edge computing nodes with fiber and converges wireless and fiber connections seamlessly, so that it can support offloading transmission with low delay and wide bandwidth. Based on the ONER scheme supported by FiWi networks, computation tasks can be offloaded to edge computing nodes in a wider area without increasing the number of wireless hops (e.g., just one wireless hop), which achieves low delay. Additionally, an efficient Computation Resource Scheduling (CRS) algorithm based on the ONER scheme is also proposed to make offloading decisions. The results show that more offloading requests can be satisfied and the average completion time of computation tasks decreases significantly with the ONER scheme and the CRS algorithm. Therefore, the ONER scheme and the CRS algorithm can schedule computation resources at neighboring edge computing nodes for offloading to meet the challenge of large-scale computation tasks.
Keywords: edge computing; offloading; fiber-wireless access networks; delay
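The scheduling idea — place each task on whichever reachable node finishes it earliest, given the nodes' current queues — can be sketched greedily. This is a simplified illustration of neighbor-resource scheduling, not the CRS algorithm itself; task sizes and node capacities are made up.

```python
def greedy_schedule(tasks, capacity):
    """Assign each task (largest first) to the node with the earliest
    finish time, tracking each node's accumulated busy time."""
    finish = {n: 0.0 for n in capacity}       # current busy time per node
    placement = {}
    for name, cycles in sorted(tasks.items(), key=lambda kv: -kv[1]):
        best = min(capacity, key=lambda n: finish[n] + cycles / capacity[n])
        finish[best] += cycles / capacity[best]
        placement[name] = best
    return placement, max(finish.values())

tasks = {"a": 8.0, "b": 4.0, "c": 4.0}        # required CPU cycles (arbitrary)
capacity = {"local": 1.0, "neighbor": 2.0}    # cycles per second
placement, makespan = greedy_schedule(tasks, capacity)
```

In the real scheme the candidate set "capacity" would be the one-wireless-hop neighbors reachable over the fiber backhaul, and transmission delay would be added to the finish-time estimate.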
14. Joint Computing and Communication Resource Allocation for Satellite Communication Networks with Edge Computing (cited 14 times)
Authors: Shanghong Zhang, Gaofeng Cui, Yating Long, Weidong Wang. 《China Communications》 (SCIE, CSCD), 2021, No. 7, pp. 236-252 (17 pages)
Benefiting from enhanced onboard processing capacities and high-speed satellite-terrestrial links, satellite edge computing has been regarded as a promising technique to facilitate the execution of computation-intensive applications in satellite communication networks (SCNs). By deploying edge computing servers on satellites and in gateway stations, SCNs can achieve significant gains in computing capacity at the expense of extending the dimensions and complexity of resource management. Therefore, in this paper, we investigate the joint computing and communication resource management problem for SCNs to minimize the execution latency of computation-intensive applications, while two different satellite edge computing scenarios and local execution are considered. Furthermore, the joint computing and communication resource allocation problem for computation-intensive services is formulated as a mixed-integer programming problem. A game-theoretic and many-to-one matching theory-based scheme (JCCRA-GM) is proposed to achieve an approximately optimal solution. Numerical results show that the proposed method, with low complexity, can achieve almost the same weighted-sum latency as the brute-force method.
Keywords: satellite communication networks; edge computing; resource allocation; matching theory
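Many-to-one matching schemes of this kind typically build on deferred acceptance: tasks propose to servers in preference order, and an over-capacity server evicts its least-preferred match. A simplified sketch of that mechanism (preference lists, capacities, and names are illustrative, not the JCCRA-GM formulation):

```python
def many_to_one_match(task_prefs, server_prefs, capacity):
    """Deferred-acceptance matching of tasks to edge servers."""
    unmatched = list(task_prefs)
    match = {s: [] for s in server_prefs}
    next_choice = {t: 0 for t in task_prefs}
    while unmatched:
        t = unmatched.pop(0)
        if next_choice[t] >= len(task_prefs[t]):
            continue                              # preference list exhausted
        s = task_prefs[t][next_choice[t]]
        next_choice[t] += 1
        match[s].append(t)
        if len(match[s]) > capacity[s]:           # evict least-preferred task
            worst = max(match[s], key=server_prefs[s].index)
            match[s].remove(worst)
            unmatched.append(worst)
    return match

task_prefs = {"t1": ["sat", "gw"], "t2": ["sat", "gw"], "t3": ["sat", "gw"]}
server_prefs = {"sat": ["t1", "t2", "t3"], "gw": ["t3", "t2", "t1"]}
capacity = {"sat": 1, "gw": 2}
result = many_to_one_match(task_prefs, server_prefs, capacity)
```

In a latency-minimization setting, preference lists would be derived from estimated execution plus transmission delay rather than given exogenously.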
15. Joint Allocation of Wireless Resource and Computing Capability in MEC-Enabled Vehicular Network (cited 11 times)
Authors: Yanzhao Hou, Chengrui Wang, Min Zhu, Xiaodong Xu, Xiaofeng Tao, Xunchao Wu. 《China Communications》 (SCIE, CSCD), 2021, No. 6, pp. 64-76 (13 pages)
In an MEC-enabled vehicular network with limited wireless and computation resources, stringent delay and high reliability requirements are challenging issues. In order to reduce the total delay in the network as well as ensure the reliability of Vehicular UE (VUE), a Joint Allocation of Wireless resource and MEC Computing resource (JAWC) algorithm is proposed. The JAWC algorithm includes two steps: V2X link clustering and MEC computation resource scheduling. In the V2X link clustering, a Spectral Radius based Interference Cancellation scheme (SR-IC) is proposed to obtain the optimal resource allocation matrix. By converting the calculation of SINR into the calculation of the maximum row sum of a matrix, the accumulated interference of VUE can be constrained and the SINR calculation complexity can be effectively reduced. In the MEC computation resource scheduling, by transforming the original optimization problem into a convex problem, the optimal task offloading proportion of VUE and the MEC computation resource allocation can be obtained. The simulation further demonstrates that the JAWC algorithm can significantly reduce the total delay as well as ensure the communication reliability of VUE in the MEC-enabled vehicular network.
Keywords: vehicular network; delay optimization; wireless resource allocation; matrix spectral radius; MEC computation resource allocation
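The row-sum trick rests on a standard fact: for a nonnegative matrix, the spectral radius is bounded above by the maximum row sum, so an O(n²) row-sum check can replace an eigenvalue computation when constraining accumulated interference. A quick numerical illustration (the interference matrix values are made up, not from the paper):

```python
import numpy as np

# Nonnegative cross-link interference coefficients (illustrative values).
interference = np.array([[0.0, 0.2, 0.1],
                         [0.3, 0.0, 0.2],
                         [0.1, 0.1, 0.0]])

row_sum_bound = interference.sum(axis=1).max()        # cheap upper bound
rho = max(abs(np.linalg.eigvals(interference)))       # exact spectral radius

# If the cheap bound already satisfies the constraint, so does the radius.
feasible = row_sum_bound < 1.0
```

Because the bound is conservative, a clustering that passes the row-sum test is guaranteed to pass the spectral-radius interference constraint, which is what makes the simplification safe.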
16. Security Model Research Based on Trusted Computing in Ad Hoc Network (cited 2 times)
Authors: 林筑英, 刘晓杰, 卢林, 师蕾, 谢刚. 《China Communications》 (SCIE, CSCD), 2011, No. 4, pp. 1-10 (10 pages)
With the rapid development of wireless networks, Ad Hoc networks are widely used in many fields, but current network security solutions for Ad Hoc networks are not competitive enough. The critical technology for Ad Hoc network applications is therefore how to implement a security scheme. Here the discussion focuses on a specific solution to the security threats that Ad Hoc networks face, the methodology of a management model that uses trusted computing technology to solve Ad Hoc network security problems, and the analysis and verification of the security of this model.
Keywords: Ad Hoc network; trusted computing; network security
17. A Broad Learning-Driven Network Traffic Analysis System Based on Fog Computing Paradigm (cited 3 times)
Authors: Xiting Peng, Kaoru Ota, Mianxiong Dong. 《China Communications》 (SCIE, CSCD), 2020, No. 2, pp. 1-13 (13 pages)
The development of communication technologies that support traffic-intensive applications presents new challenges in designing a real-time traffic analysis architecture and an accurate method suitable for a wide variety of traffic types. Current traffic analysis methods are executed on the cloud, which requires uploading the traffic data. Fog computing is a more promising way to save bandwidth resources by offloading these tasks to fog nodes. However, traffic analysis models based on traditional machine learning need to retrain on all traffic data when updating the trained model, which is not suitable for fog computing due to its limited computing power. In this study, we design a novel fog computing based traffic analysis system using broad learning. For one thing, fog computing can provide a distributed architecture for saving bandwidth resources. For another, we use broad learning to incrementally train on the traffic data, which is more suitable for fog computing because it supports incremental updates of models without retraining on all data. We implement our system on the Raspberry Pi, and experimental results show that it has a 98% probability of accurately identifying the traffic data. Moreover, our method has a faster training speed compared with a Convolutional Neural Network (CNN).
Keywords: traffic analysis; fog computing; broad learning; radio access networks
18. All-optical computing based on convolutional neural networks (Cited: 10)
Authors: Kun Liao, Ye Chen, Zhongcheng Yu, Xiaoyong Hu, Xingyuan Wang, Cuicui Lu, Hongtao Lin, Qingyang Du, Juejun Hu, Qihuang Gong. Opto-Electronic Advances (SCIE), 2021, No. 11, pp. 46-54.
The rapid development of information technology has fueled an ever-increasing demand for ultrafast and ultralow-energy-consumption computing. Existing computing instruments are predominantly electronic processors, which use electrons as information carriers and possess a von Neumann architecture featuring physical separation of storage and processing. The scaling of computing speed is limited not only by data transfer between memory and processing units, but also by the RC delay associated with integrated circuits. Moreover, excessive heating due to Ohmic losses is becoming a severe bottleneck for both speed and power-consumption scaling. Using photons as information carriers is a promising alternative. Owing to the weak third-order optical nonlinearity of conventional materials, building integrated photonic computing chips under the traditional von Neumann architecture has been a challenge. Here, we report a new all-optical computing framework that realizes ultrafast and ultralow-energy-consumption all-optical computing based on convolutional neural networks. The device is constructed from cascaded silicon Y-shaped waveguides with side-coupled silicon waveguide segments, which we term "weight modulators", enabling complete phase and amplitude control in each waveguide branch. The generic device concept can be used for equation solving and multifunctional logic operations, as well as many other mathematical operations. Multiple computing functions, including transcendental equation solvers, multifarious logic gate operators, and half-adders, were experimentally demonstrated to validate the all-optical computing performance. The time-of-flight of light through the network structure corresponds to an ultrafast computing time on the order of several picoseconds, with an ultralow energy consumption of dozens of femtojoules per bit. Our approach can be further expanded to fulfill other complex computing tasks based on non-von Neumann architectures and thus paves a new way for on-chip all-optical computing.
Keywords: convolutional neural networks; all-optical computing; mathematical operations; cascaded silicon waveguides
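The all-optical logic described in this abstract rests on per-branch amplitude/phase control followed by interference at a combiner. A toy numerical sketch of that principle (assumptions: ideal lossless branches, logic levels encoded as 0/π optical phase, a simple intensity threshold at the detector; this illustrates the interference mechanism, not the fabricated device or its weight settings):

```python
import numpy as np

def branch_transfer(E_in, amp, phase):
    """One 'weight modulator': scale the amplitude and shift the phase of a
    branch's complex field."""
    return amp * np.exp(1j * phase) * E_in

def interfere(*fields):
    """Y-junction combiner: coherent sum of branch fields, detected as intensity."""
    return abs(sum(fields)) ** 2

def xnor_gate(b1, b2):
    """Logic via interference: matching phases add constructively (intensity 4),
    opposite phases cancel (intensity 0); thresholding yields XNOR."""
    E1 = branch_transfer(1.0, 1.0, b1 * np.pi)  # bit encoded as 0 or pi phase
    E2 = branch_transfer(1.0, 1.0, b2 * np.pi)
    return int(interfere(E1, E2) > 2.0)
```

Cascading such weighted branches and junctions is what lets the reported device compose richer functions (half-adders, equation solvers) from the same interference primitive.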
19. Near-Sensor Edge Computing System Enabled by a CMOS Compatible Photonic Integrated Circuit Platform Using Bilayer AlN/Si Waveguides (Cited: 1)
Authors: Zhihao Ren, Zixuan Zhang, Yangyang Zhuge, Zian Xiao, Siyu Xu, Jingkai Zhou, Chengkuo Lee. Nano-Micro Letters, 2025, No. 11, pp. 1-20.
The rise of large-scale artificial intelligence (AI) models, such as ChatGPT, DeepSeek, and autonomous vehicle systems, has significantly advanced the boundaries of AI, enabling highly complex tasks in natural language processing, image recognition, and real-time decision-making. However, these models demand immense computational power and are often centralized, relying on cloud-based architectures with inherent limitations in latency, privacy, and energy efficiency. To address these challenges and bring AI closer to real-world applications such as wearable health monitoring, robotics, and immersive virtual environments, innovative hardware solutions are urgently needed. This work introduces a near-sensor edge computing (NSEC) system, built on a bilayer AlN/Si waveguide platform, that provides real-time, energy-efficient AI capabilities at the edge. Leveraging the electro-optic properties of AlN microring resonators for photonic feature extraction, coupled with Si-based thermo-optic Mach-Zehnder interferometers for neural network computation, the system represents a transformative approach to AI hardware design. Demonstrated through multimodal gesture and gait analysis, the NSEC system achieves high classification accuracies of 96.77% for gestures and 98.31% for gaits, ultra-low latency (<10 ns), and minimal energy consumption (<0.34 pJ). This system bridges the gap between AI models and real-world applications, enabling efficient, privacy-preserving AI solutions for healthcare, robotics, and next-generation human-machine interfaces, marking a pivotal advancement in edge computing and AI deployment.
Keywords: photonic integrated circuits; edge computing; aluminum nitride; neural networks; wearable sensors
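The thermo-optic Mach-Zehnder interferometers mentioned in this abstract act as tunable optical weights for the on-chip neural network. A hedged sketch of the textbook ideal-MZI intensity transfer, cos²(Δφ/2), and its inversion to dial in a desired weight (illustrative only: function names are assumptions, and calibrating a real device against loss and crosstalk is considerably more involved):

```python
import numpy as np

def mzi_transmission(delta_phi):
    """Bar-port intensity transmission of an ideal Mach-Zehnder interferometer
    as a function of the (thermo-optically tuned) phase difference."""
    return np.cos(delta_phi / 2) ** 2

def phase_for_weight(w):
    """Invert the transfer curve: phase setting that realizes a weight w in [0, 1]."""
    return 2 * np.arccos(np.sqrt(w))

def optical_weighted_sum(intensities, weights):
    """One 'layer' of optical weights: each input intensity is attenuated by
    its MZI, and the weighted channels are summed at the detector."""
    phis = [phase_for_weight(w) for w in weights]
    return sum(i * mzi_transmission(p) for i, p in zip(intensities, phis))
```

Because the weighting happens as light propagates, the compute latency is set by time-of-flight rather than clock cycles, which is consistent with the sub-10 ns latency figure the paper reports.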
20. Virtualization Technology in Cloud Computing Based Radio Access Networks: A Primer (Cited: 2)
Authors: ZHANG Xian, PENG Mugen. ZTE Communications, 2017, No. 4, pp. 47-66.
Since virtualization technology enables the abstraction and sharing of resources in a flexible management way, the overall expense of network deployment can be significantly reduced. The technology has therefore been widely applied in the core network. With the tremendous growth in mobile traffic and services, it is natural to extend virtualization technology to cloud computing based radio access networks (CC-RANs) to achieve high spectral efficiency at low cost. In this paper, virtualization technologies in CC-RANs are surveyed, including the system architecture, key enabling techniques, challenges, and open issues. The key enabling technologies for virtualization in CC-RANs, mainly virtual resource allocation, radio access network (RAN) slicing, mobility management, and social awareness, are comprehensively surveyed with respect to the isolation, customization, and high-efficiency utilization of radio resources. The challenges and open issues mainly concern virtualization levels for CC-RANs, signaling design for CC-RAN virtualization, performance analysis of CC-RAN virtualization, and network security for virtualized CC-RANs.
Keywords: network virtualization; CC-RAN; RAN slicing; fog computing