Efficient resource provisioning, allocation, and computation offloading are critical to realizing low-latency, scalable, and energy-efficient applications in cloud, fog, and edge computing. Despite its importance, integrating Software Defined Networks (SDN) for enhancing resource orchestration, task scheduling, and traffic management remains a relatively underexplored area with significant innovation potential. This paper provides a comprehensive review of existing mechanisms, categorizing resource provisioning approaches into static, dynamic, and user-centric models, while examining applications across domains such as IoT, healthcare, and autonomous systems. The survey highlights challenges such as scalability, interoperability, and security in managing dynamic and heterogeneous infrastructures. It evaluates how SDN enables adaptive, policy-based handling of distributed resources through advanced orchestration processes. Furthermore, it proposes future directions, including AI-driven optimization techniques and hybrid orchestration models. By addressing these emerging opportunities, this work serves as a foundational reference for advancing resource management strategies in next-generation cloud, fog, and edge computing ecosystems, and concludes by offering essential guidance for addressing upcoming management opportunities in SDN-enabled computing environments.
Layer pseudospins, exhibiting quantum coherence and precise multistate controllability, present significant potential for the advancement of future computing technologies. In this work, we propose an in-memory probabilistic computing scheme based on the electrical manipulation of layer pseudospins in layered materials, by exploiting the interaction between real spins and layer pseudospins.
The emergence of different computing methods, such as cloud-, fog-, and edge-based Internet of Things (IoT) systems, has provided the opportunity to develop intelligent systems for disease detection. Compared to other machine learning models, deep learning models have gained more attention from the research community, as they have shown better results with large volumes of data than shallow learning. However, no comprehensive survey has been conducted on integrated IoT- and computing-based systems that deploy deep learning for disease detection. This study evaluated different machine learning and deep learning algorithms, along with their hybrid and optimized variants, for IoT-based disease detection, using the most recent papers on IoT-based disease detection systems that include computing approaches such as cloud, edge, and fog. The analysis focused on an IoT deep learning architecture suitable for disease detection. It also identifies the factors that require researchers' attention in developing better IoT disease detection systems. This study can be helpful to researchers interested in developing better IoT-based disease detection and prediction systems based on deep learning with hybrid algorithms.
The number of satellites, especially those operating in Low-Earth Orbit (LEO), has been exploding in recent years. Additionally, the burgeoning development of Artificial Intelligence (AI) software and hardware has opened up new industrial opportunities in both air and space, with satellite-powered computing emerging as a new computing paradigm: Orbital Edge Computing (OEC). Compared to terrestrial edge computing, the mobility of LEO satellites and their limited communication, computation, and storage resources pose challenges in designing task-specific scheduling algorithms. Previous survey papers have largely focused on terrestrial edge computing or the integration of space and ground technologies, lacking a comprehensive summary of OEC architecture, algorithms, and case studies. This paper conducts a comprehensive survey and analysis of OEC's system architecture, applications, algorithms, and simulation tools, providing a solid background for researchers in the field. By discussing OEC use cases and the challenges they face, we propose potential directions for future OEC research.
Recurrent neural networks (RNNs) have proven to be indispensable for processing sequential and temporal data, with extensive applications in language modeling, text generation, machine translation, and time-series forecasting. Despite their versatility, RNNs are frequently beset by significant training expenses and slow convergence, which impede their deployment in edge AI applications. Reservoir computing (RC), a specialized RNN variant, is attracting increased attention as a cost-effective alternative for processing temporal and sequential data at the edge. RC's distinctive advantage stems from its compatibility with emerging memristive hardware, which leverages the energy efficiency and reduced footprint of analog in-memory and in-sensor computing, offering a streamlined and energy-efficient solution. This review explains RC's underlying principles and fabrication processes, and surveys recent progress in nano-memristive-device-based RC systems from the viewpoints of in-memory and in-sensor RC. It covers a spectrum of memristive devices, from established oxide-based devices to cutting-edge materials-science developments, giving readers a lucid understanding of RC's hardware implementation and fostering innovative designs for in-sensor RC systems. Lastly, we identify prevailing challenges and suggest viable solutions, paving the way for future advancements in in-sensor RC technology.
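As a concrete illustration of the training economy the review describes (only a linear readout is trained while the recurrent reservoir stays fixed), the following sketch implements a software echo state network, the standard RC formulation, on a toy prediction task. Sizes and hyperparameters are illustrative assumptions, not values from the review; a memristive implementation would realize the reservoir physically rather than in numpy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir dimensions (illustrative values, not from the review).
n_in, n_res = 1, 200

# Fixed random input and recurrent weights; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states x(t+1) = tanh(W_in u(t) + W x(t))."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 40 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

# Ridge-regression readout: the only trained part, hence cheap training.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```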
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making effective task offloading scheduling necessary to enhance the user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks according to their priority. We then apply a Dueling-Double Deep Q-Network (Dueling-DDQN) algorithm enhanced with prioritized experience replay to derive the computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy and reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
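For readers unfamiliar with the dueling architecture at the core of the Dueling-DDQN stage, the sketch below shows the standard value/advantage decomposition in PyTorch. Dimensions and hidden sizes are arbitrary placeholders; the paper's full d3-DDPG additionally combines double Q-learning, prioritized experience replay, and the DDPG allocation stage, none of which appear here.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling decomposition Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantages A(s,a)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        h = self.body(s)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)

# Example: Q-values for a batch of 4 states with 6 discrete offloading actions.
q = DuelingQNet(state_dim=10, n_actions=6)(torch.randn(4, 10))
print(q.shape)  # torch.Size([4, 6])
```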
Photon-counting computed tomography (PCCT) represents a significant advancement in pediatric cardiovascular imaging. Traditional CT systems employ energy-integrating detectors that convert X-ray photons into visible light, whereas PCCT utilizes photon-counting detectors that directly transform X-ray photons into electric signals. This direct conversion allows photon-counting detectors to sort photons into discrete energy levels, thereby enhancing image quality through superior noise reduction, improved spatial and contrast resolution, and reduced artifacts. In pediatric applications, PCCT offers substantial benefits, including lower radiation doses, which may help reduce the risk of malignancy in pediatric patients, with perhaps greater potential to benefit those with repeated exposure from a young age. Enhanced spatial resolution facilitates better visualization of small structures, vital for diagnosing congenital heart defects. Additionally, PCCT's spectral capabilities improve tissue characterization and enable the creation of virtual monoenergetic images, which enhance soft-tissue contrast and potentially reduce contrast media doses. Initial clinical results indicate that PCCT provides superior image quality and diagnostic accuracy compared to conventional CT, particularly in challenging pediatric cardiovascular cases. As PCCT technology matures, further research and standardized protocols will be essential to fully integrate it into pediatric imaging practices, ensuring optimized diagnostic outcomes and patient safety.
Caustic ingestion is a relatively rare but potentially catastrophic gastroenterological emergency. Upper gastrointestinal (GI) endoscopy is currently regarded as the gold-standard modality not only to assess the depth and extension of GI caustic injury, but also to guide appropriate treatment. Intriguingly, contrast-enhanced computed tomography (CECT) has recently emerged as a promising non-invasive and more accurate alternative to endoscopy in this setting. However, to date, evidence concerning the role of CECT as an alternative or complementary diagnostic tool to endoscopy in caustic ingestion is still limited. The aim of our review was to summarize and discuss the current evidence concerning the role of CECT in the emergency diagnosis of caustic ingestion and its value in assessing injury severity among non-pediatric patients.
Effective resource management in the Internet of Things and fog computing is essential for efficient and scalable networks. However, existing methods often fail in dynamic, high-demand environments, leading to resource bottlenecks and increased energy consumption. This study addresses these limitations by proposing the Quantum Inspired Adaptive Resource Management (QIARM) model, which introduces novel algorithms inspired by quantum principles for enhanced resource allocation. QIARM employs a quantum superposition-inspired technique for multi-state resource representation and an adaptive learning component to dynamically adjust resources in real time. In addition, an energy-aware scheduling module minimizes power consumption by selecting optimal configurations based on energy metrics. Simulations were carried out over a 360-minute horizon across eight distinct scenarios. The proposed framework achieves up to 98% task offload success and reduces energy consumption by 20%, addressing critical challenges of scalability and efficiency in dynamic fog computing environments.
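The abstract does not spell out the superposition-inspired representation, so the following is only one plausible reading, with every name and number hypothetical: each fog node keeps a normalized amplitude vector over discrete capacity states, adapts it toward states that can serve observed demand, and "measures" a state with Born-rule-like probabilities.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical discretization: fraction of capacity offered per state,
# and an assumed energy cost per state for the energy-aware module.
STATES = np.array([0.25, 0.5, 0.75, 1.0])
ENERGY = np.array([20.0, 35.0, 55.0, 80.0])  # watts (assumed)

def normalize(amp):
    return amp / np.linalg.norm(amp)

def adapt(amp, demand, lr=0.3):
    """Shift amplitude toward states that can serve the observed demand."""
    feasible = STATES >= demand
    return normalize(amp + lr * feasible.astype(float))

def measure(amp):
    """Collapse to one state with Born-rule-like probabilities |amp|^2."""
    return rng.choice(len(STATES), p=amp**2 / np.sum(amp**2))

amp = normalize(np.ones(len(STATES)))
for demand in (0.3, 0.6, 0.6, 0.9):  # incoming load fractions (illustrative)
    amp = adapt(amp, demand)
    k = measure(amp)
    print(f"demand={demand:.1f} -> state={STATES[k]:.2f}, energy={ENERGY[k]}W")
```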
Bluetooth low energy (BLE)-based indoor localization has been extensively researched due to its cost-effectiveness, low power consumption, and ubiquity. Despite these advantages, the variability of received signal strength indicator (RSSI) measurements, influenced by physical obstacles, human presence, and electronic interference, poses a significant challenge to accurate localization. In this work, we present an optimized method to enhance indoor localization accuracy by utilizing multiple BLE beacons in a radio frequency (RF)-dense modern building environment. Through a proof-of-concept study, we demonstrate that using three BLE beacons reduces the worst-case localization error from 9.09 m to 2.94 m, whereas additional beacons offer minimal incremental benefit in such settings. Furthermore, our framework for BLE-based localization, implemented on an edge network of Raspberry Pis, has been released under an open-source license, enabling broader application and further research.
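A common baseline behind three-beacon localization of this kind pairs a log-distance path-loss model (RSSI to range) with least-squares trilateration. The sketch below shows that baseline under assumed calibration constants; it is not the paper's released framework, and the beacon layout and RSSI readings are placeholders.

```python
import numpy as np

def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    """Log-distance path-loss model: rssi = tx_power - 10*n*log10(d).
    tx_power (RSSI at 1 m) and exponent n are assumed calibration values."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(beacons, dists):
    """Least-squares position from >=3 beacon positions and range estimates.
    Subtracting the first circle equation linearizes the system."""
    (x0, y0), d0 = beacons[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # assumed layout (meters)
dists = [rssi_to_distance(r) for r in (-75.0, -71.0, -73.0)]
print(trilaterate(beacons, dists))  # estimated (x, y)
```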
Distributed computing is an important topic in the field of wireless communications and networking, and its high efficiency in handling large amounts of data is particularly noteworthy. Although distributed computing benefits from its ability to process data in parallel, it incurs a communication burden between different servers, which delays the computation process. Recent research has applied coding to distributed computing to reduce this communication burden, where repetitive computation is used to enable multicast opportunities so that the same coded information can be reused across different servers. To handle computation tasks in practical heterogeneous systems, we propose a novel coding scheme that effectively mitigates the "straggling effect" in distributed computing. We assume there are two types of servers in the system whose only difference is their computational capability; the servers with lower computational capability are called stragglers. Given any ratio of fast servers to slow servers and any gap in computational capability between them, we achieve approximately the same computation time for both fast and slow servers by assigning different amounts of computation tasks to them, thus reducing the overall computation time. Furthermore, we investigate the information-theoretic lower bound on the inter-communication load and show that it is within a constant multiplicative gap of the upper bound achieved by our scheme. Various simulations also validate the effectiveness of the proposed scheme.
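The core load-balancing idea (sizing assignments so that fast and slow servers finish together) reduces to a proportional split. The sketch below shows that arithmetic only; the paper's contribution layers a coded multicast construction on top, which is not reproduced here, and all counts and ratios are illustrative.

```python
def split_load(total_tasks: float, n_fast: int, n_slow: int, speed_ratio: float):
    """Assign tasks so fast and slow servers finish at roughly the same time.
    speed_ratio = fast speed / slow speed (> 1). Finish times are equal when
    tasks_fast / speed_ratio == tasks_slow / 1."""
    unit = total_tasks / (n_fast * speed_ratio + n_slow)
    tasks_slow = unit                # per slow server (straggler)
    tasks_fast = unit * speed_ratio  # per fast server
    return tasks_fast, tasks_slow

# 1000 tasks, 4 fast servers that are 3x faster than 6 slow servers:
fast, slow = split_load(1000, n_fast=4, n_slow=6, speed_ratio=3.0)
print(fast, slow)  # ~166.7 per fast server, ~55.6 per slow server
# Check: both finish in 166.7/3 = 55.6 = 55.6/1 time units.
```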
With the advancement of artificial intelligence, optic in-sensing reservoir computing based on emerging semiconductor devices is highly desirable for real-time analog signal processing. Here, we report a flexible optomemristor based on a C27H30O15/FeOx heterostructure that is highly sensitive to light stimuli and exhibits artificial optic synaptic features such as short- and long-term plasticity (STP and LTP), enabling the optomemristor to implement complex analog signal processing by building a real-physical-dynamics-based in-sensing reservoir computing algorithm, yielding an accuracy of 94.88% for speech recognition. The charge trapping and detrapping mediated by the optically active layer of C27H30O15, which is extracted from the lotus flower, is responsible for the positive photoconductance memory in the prepared optomemristor. This work provides a feasible organic-inorganic heterostructure as well as an optic in-sensing vision computing approach for advanced optic computing systems in future complex signal processing.
As an essential element of intelligent transport systems, the Internet of Vehicles (IoV) has recently brought an immersive user experience. Meanwhile, the emergence of mobile edge computing (MEC) has enhanced the computational capability of vehicles, effectively reducing task processing latency and power consumption and meeting the quality-of-service requirements of vehicle users. However, there are still problems in the MEC-assisted IoV system, such as poor connectivity and high cost. Unmanned aerial vehicles (UAVs) equipped with MEC servers have become a promising approach for providing communication and computing services to mobile vehicles. Hence, in this article, an optimal framework for the UAV-assisted MEC system for IoV is presented to minimize the average system cost. Through joint consideration of computation offloading decisions and computational resource allocation, the optimization problem of our proposed architecture is formulated to reduce system energy consumption and delay. To tackle this issue, the original non-convex problem is converted into a convex one, and a distributed optimal scheme based on the alternating direction method of multipliers (ADMM) is developed. The simulation results illustrate that the presented scheme enhances system performance dramatically with respect to other schemes and also converges well.
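To make the ADMM step concrete, the sketch below applies the same scaled-form iteration (x-minimization, proximal z-update, dual ascent) to a textbook lasso problem with closed-form updates; the paper's offloading objective differs, so this illustrates the method, not their formulation.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Scaled-form ADMM for min 0.5*||Ax-b||^2 + lam*||z||_1  s.t. x = z."""
    m, n = A.shape
    x = z = u = np.zeros(n)
    AtA_rho = A.T @ A + rho * np.eye(n)  # cached for the x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA_rho, Atb + rho * (z - u))      # x-minimization
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0)  # soft-threshold
        u = u + x - z                                          # dual ascent
    return z

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = (1.0, -2.0, 0.5)
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.round(admm_lasso(A, b), 2)[:5])  # recovers the sparse coefficients
```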
In this study, we investigate the efficacy of a hybrid parallel algorithm aimed at accelerating the evaluation of two-electron repulsion integrals (ERI) and Fock matrix generation on the Hygon C86/DCU (deep computing unit) heterogeneous computing platform. Multiple hybrid parallel schemes are assessed using a range of model systems, including those with up to 1200 atoms and 10000 basis functions. Our results reveal that, during Hartree-Fock (HF) calculations, a single DCU exhibits a 33.6-fold speedup over 32 C86 CPU cores. Compared with the efficiency of the Wuhan Electronic Structure Package on Intel X86 and NVIDIA A100 computing platforms, the Hygon platform exhibits good cost-effectiveness, showing great potential in quantum chemistry calculations and other high-performance scientific computations.
With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have sprung up. To tackle the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by vehicle mobility and the unbalanced computing load of edge nodes pose challenges for ITS. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs), and Roadside Units (RSUs), together with a distributed algorithm, PG-MRL, that jointly optimizes offloading decisions and resource allocation. In the first stage, the offloading decisions of TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm with centralized training and distributed execution, is proposed to optimize real-time transmission power and subchannel selection. The simulation results show that the proposed PG-MRL algorithm achieves significant improvements over baseline algorithms in terms of system delay.
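The first stage's potential-game idea can be illustrated with best-response dynamics on a congestion-style offloading game, which is guaranteed to converge in finitely many steps for potential games. Node speeds and the linear-latency cost below are assumptions for illustration, not the paper's utility function.

```python
import numpy as np

speeds = np.array([4.0, 2.0, 1.0])        # RSU/SeV processing rates (assumed)
n_vehicles = 10
choice = np.zeros(n_vehicles, dtype=int)  # start: every vehicle on node 0

def cost(node, load):
    return load / speeds[node]            # latency grows with congestion

changed = True
while changed:
    changed = False
    loads = np.bincount(choice, minlength=len(speeds))
    for v in range(n_vehicles):
        # Cost of each node if vehicle v moved there (it leaves its own node).
        costs = [cost(k, loads[k] + (0 if k == choice[v] else 1))
                 for k in range(len(speeds))]
        best = int(np.argmin(costs))
        if best != choice[v]:
            loads[choice[v]] -= 1; loads[best] += 1
            choice[v] = best; changed = True

print("equilibrium loads per node:", np.bincount(choice, minlength=3))
```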
The rapid advance of Connected-Automated Vehicles (CAVs) has led to the emergence of diverse delay-sensitive and energy-constrained vehicular applications. Given the high dynamics of vehicular networks, unmanned aerial vehicle-assisted mobile edge computing (UAV-MEC) has gained attention for providing computing resources to vehicles and optimizing system costs. We model the computation offloading problem as a multi-objective optimization challenge aimed at minimizing both task processing delay and energy consumption, and propose a three-stage hybrid offloading scheme called the Dynamic Vehicle Clustering Game-based Multi-objective Whale Optimization Algorithm (DVCG-MWOA) to address it. A novel dynamic clustering algorithm is designed around vehicle mobility and task offloading efficiency requirements, in which each UAV independently serves as the cluster head of a vehicle cluster and adjusts its position at the end of each timeslot in response to vehicle movement. Within each UAV-led cluster, cooperative game theory is applied to allocate computing resources while respecting delay constraints, ensuring efficient resource utilization. To enhance offloading efficiency, we improve the multi-objective whale optimization algorithm (MOWOA), resulting in the MWOA. This enhanced algorithm determines the optimal allocation of pending tasks to different edge computing devices and the resource utilization ratio of each device, ultimately achieving a Pareto-optimal solution set for delay and energy consumption. Experimental results demonstrate that the proposed joint offloading scheme significantly reduces both delay and energy consumption compared to existing approaches, offering superior performance for vehicular networks.
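For orientation, the sketch below implements the core single-objective WOA update rules (shrinking encircling and the logarithmic spiral) that MOWOA and the proposed MWOA build on. The multi-objective Pareto handling and the exploration-around-a-random-whale branch are omitted for brevity; all parameters are illustrative.

```python
import numpy as np

def woa_minimize(f, dim, n_whales=20, iters=100, lb=-5.0, ub=5.0, seed=3):
    """Simplified single-objective Whale Optimization Algorithm core."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_whales, dim))
    best = X[np.argmin([f(x) for x in X])].copy()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                  # a decreases from 2 to 0
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:                 # encircling prey
                D = np.abs(C * best - X[i])
                X[i] = best - A * D
            else:                                  # logarithmic spiral (b = 1)
                l = rng.uniform(-1, 1)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
        cand = X[np.argmin([f(x) for x in X])]
        if f(cand) < f(best):
            best = cand.copy()
    return best

print(woa_minimize(lambda x: np.sum(x**2), dim=4))  # should approach zeros
```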
Low earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities, forming a LEO satellite edge computing system that provides computing services for global ground users. In this paper, the computation offloading and resource allocation problems are formulated as a mixed-integer nonlinear program (MINLP). We propose a computation offloading algorithm based on the deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and use a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, an expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility value at a considerable time cost compared with other algorithms.
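When a Lagrange-multiplier step divides a server budget F of CPU cycles to minimize the summed processing latency sum_k c_k/f_k, stationarity gives the classic closed form f_k = F*sqrt(c_k)/sum_j sqrt(c_j). The sketch below evaluates that textbook form under assumed workloads; the paper's actual objective includes further terms (power, utility weights) not modeled here.

```python
import numpy as np

def allocate_cycles(c, F):
    """Closed-form Lagrangian allocation minimizing sum_k c_k / f_k subject
    to sum_k f_k = F. Setting d/df_k [sum c_k/f_k + lam*(sum f_k - F)] = 0
    gives f_k = sqrt(c_k/lam); enforcing the budget yields the form below."""
    c = np.asarray(c, dtype=float)
    return F * np.sqrt(c) / np.sqrt(c).sum()

c = [2e9, 8e9, 18e9]            # offloaded workloads in CPU cycles (assumed)
f = allocate_cycles(c, F=10e9)  # MEC server capacity: 10 GHz (assumed)
print(f / 1e9)                  # GHz per task: [1.667, 3.333, 5.0]
```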
Mathematical modeling has become a cornerstone in understanding the complex dynamics of infectious diseases and chronic health conditions. With the advent of more refined computational techniques, researchers are now able to incorporate intricate features such as delays, stochastic effects, fractional dynamics, variable-order systems, and uncertainty into epidemic models. These advancements not only improve predictive accuracy but also enable deeper insights into disease transmission, control, and policy-making. Tashfeen et al.
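The baseline that such refinements extend is the classic SIR compartment model; the sketch below integrates it with forward Euler under illustrative parameters. Delay, stochastic, or fractional variants modify exactly these right-hand sides.

```python
import numpy as np

def sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=0.1):
    """Forward-Euler integration of the classic SIR model:
        dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I.
    Parameter values are illustrative only."""
    s, i, r = s0, i0, 0.0
    traj = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_inf, new_rec = beta * s * i, gamma * i
        s, i, r = s - dt * new_inf, i + dt * (new_inf - new_rec), r + dt * new_rec
        traj.append((s, i, r))
    return np.array(traj)

t = sir()
peak = t[:, 1].argmax()
print(f"epidemic peak: day {peak * 0.1:.0f}, {t[peak, 1]:.1%} infected")
# R0 = beta/gamma = 3 drives the outbreak; R0 < 1 would make it die out.
```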
In the wake of major natural or human-made disasters, the communication infrastructure within disaster-stricken areas is frequently damaged. Unmanned aerial vehicles (UAVs), thanks to merits such as rapid deployment and high mobility, are commonly regarded as an ideal option for constructing temporary communication networks. Considering the limited computing capability and battery power of UAVs, this paper proposes a two-layer UAV cooperative computation offloading strategy for emergency disaster relief scenarios. The multi-agent twin delayed deep deterministic policy gradient (MATD3) algorithm, integrated with prioritized experience replay (PER), is utilized to jointly optimize the scheduling strategies of UAVs, task offloading ratios, and UAV mobility, aiming to minimize the energy consumption and delay of the system. To address this non-convex optimization problem, a Markov decision process (MDP) is established. Simulation results demonstrate that, compared with four baseline algorithms, the proposed algorithm exhibits better convergence performance, verifying its feasibility and efficacy.
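The prioritized experience replay component can be sketched independently of MATD3: transitions are sampled in proportion to priority^alpha and corrected with importance weights. The minimal buffer below uses a plain array scan where a production implementation would use a sum-tree; all constants are illustrative.

```python
import numpy as np

class PrioritizedReplay:
    """Minimal proportional PER: sample transition i with probability
    p_i^alpha / sum_j p_j^alpha, correct bias with importance weights."""
    def __init__(self, capacity, alpha=0.6, seed=4):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios = [], []
        self.rng = np.random.default_rng(seed)

    def add(self, transition, td_error=1.0):
        if len(self.data) >= self.capacity:
            self.data.pop(0); self.prios.pop(0)
        self.data.append(transition)
        self.prios.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size, beta=0.4):
        p = np.array(self.prios); p /= p.sum()
        idx = self.rng.choice(len(self.data), batch_size, p=p)
        weights = (len(self.data) * p[idx]) ** (-beta)
        weights /= weights.max()  # normalize for training stability
        return idx, [self.data[i] for i in idx], weights

    def update(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.prios[i] = (abs(e) + 1e-6) ** self.alpha

buf = PrioritizedReplay(capacity=1000)
for k in range(100):
    buf.add(("s", "a", 0.0, "s'"), td_error=k / 100)
idx, batch, w = buf.sample(8)
print(idx, np.round(w, 2))
```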