In this paper, we study the computational structure of computable functions, namely the structure of computation trees, and, by analyzing this structure, derive a general algorithm and model for computation on computable functions.
Organic electrochemical transistor (OECT) devices show great promise for reservoir computing (RC) systems, but their lack of tunable dynamic characteristics limits their application in multi-temporal-scale tasks. In this study, we report an OECT-based neuromorphic device with a tunable relaxation time (τ), achieved by introducing an additional vertical back-gate electrode into a planar structure. The dual-gate design enables τ reconfiguration from 93 to 541 ms. The tunable relaxation behavior can be attributed to the combined effects of planar-gate-induced electrochemical doping and back-gate-induced electrostatic coupling, as verified by electrochemical impedance spectroscopy analysis. Furthermore, we used the τ-tunable OECT devices as physical reservoirs in an RC system for intelligent driving trajectory prediction, achieving a significant improvement in prediction accuracy from below 69% to 99%. These results demonstrate that the τ-tunable OECT is a promising candidate for multi-temporal-scale neuromorphic computing applications.
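As an illustration of the relaxation behavior described above, the relaxation time τ of a single-exponential current decay can be extracted by a log-linear fit. This is a hedged sketch, not the authors' analysis code; the synthetic trace, time base, and fitting routine are assumptions:

```python
import numpy as np

def extract_tau(t, i_norm):
    """Fit i_norm(t) ~ exp(-t / tau) by linear regression on log(i_norm).

    Assumes a single-exponential decay; real device traces may need a
    multi-exponential or stretched-exponential model instead.
    """
    slope, _ = np.polyfit(t, np.log(i_norm), 1)
    return -1.0 / slope

# Synthetic decay trace with tau = 0.541 s (the upper end reported above)
t = np.linspace(0.0, 2.0, 200)
trace = np.exp(-t / 0.541)
tau = extract_tau(t, trace)  # recovers ~0.541 s
```

The same fit applied to traces measured under different back-gate biases would trace out the reported τ tuning range.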
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Determining reasonable service caching and computation offloading strategies is therefore crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFN), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
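The multi-objective reward combination described above can be sketched as a weighted scalarization of per-objective rewards. This is a minimal illustration, not the paper's RBFN-based weight-update scheme; the softmax normalization and the example reward values are assumptions:

```python
import numpy as np

def scalarize(rewards, weights):
    """Weighted scalarization of per-objective rewards (e.g. delay,
    energy, load balance, privacy entropy). Weights are softmax-
    normalized so they stay positive and sum to one, allowing a
    learner to adjust them freely in logit space."""
    w = np.exp(weights - np.max(weights))
    w /= w.sum()
    return float(np.dot(w, rewards))

# Hypothetical per-objective rewards (signs flipped so higher is better)
r = np.array([-0.2, -0.1, 0.5, 0.3])   # delay, energy, balance, privacy
score = scalarize(r, np.zeros(4))       # equal logits -> plain mean
```

In the paper's scheme the weight vector would be updated dynamically from inter-objective value changes rather than held fixed.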
In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a complex latency-minimization optimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture, incorporating the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
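The core of a dueling architecture such as D3QN is how the Q-value is assembled from a state value and per-action advantages. A minimal sketch follows; the three-action set mirrors the offloading strategies above, but the numbers are hypothetical and the double-DQN target and PER machinery are omitted:

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    Subtracting the mean advantage keeps V and A identifiable."""
    return value + advantages - advantages.mean()

# Three actions: local compute, offload to local edge, D2D-assisted offload
q = dueling_q(value=1.0, advantages=np.array([0.3, -0.1, 0.1]))
best_action = int(np.argmax(q))   # greedy offloading decision
```

In a full D3QN, the online network selects the next action and the target network evaluates it, which reduces the overestimation bias of vanilla DQN.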
As emerging two-dimensional (2D) materials, carbides and nitrides (MXenes) can form solid solutions or ordered structures made up of multi-atomic layers. With remarkable and adjustable electrical, optical, mechanical, and electrochemical characteristics, MXenes have shown great potential in brain-inspired neuromorphic computing electronics, including neuromorphic gas sensors, pressure sensors, and photodetectors. This paper provides a forward-looking review of the research progress of MXenes in the neuromorphic sensing domain and discusses the critical challenges that need to be resolved. Key bottlenecks such as insufficient long-term stability under environmental exposure, high costs, scalability limitations in large-scale production, and mechanical mismatch in wearable integration hinder their practical deployment. Furthermore, unresolved issues such as interfacial compatibility in heterostructures and energy inefficiency in neuromorphic signal conversion demand urgent attention. The review offers insights into future research directions to enhance the fundamental understanding of MXene properties and to promote further integration into neuromorphic computing applications through convergence with various emerging technologies.
High-entropy oxides (HEOs) have emerged as a promising class of memristive materials, characterized by entropy-stabilized crystal structures, multivalent cation coordination, and tunable defect landscapes. These intrinsic features enable forming-free resistive switching, multilevel conductance modulation, and synaptic plasticity, making HEOs attractive for neuromorphic computing. This review outlines recent progress in HEO-based memristors across materials engineering, switching mechanisms, and synaptic emulation. Particular attention is given to vacancy migration, phase transitions, and valence-state dynamics, the mechanisms that underlie the switching behaviors observed in both amorphous and crystalline systems. Their relevance to neuromorphic functions such as short-term plasticity and spike-timing-dependent learning is also examined. While encouraging results have been achieved at the device level, challenges remain in conductance precision, variability control, and scalable integration. Addressing these demands a concerted effort across materials design, interface optimization, and task-aware modeling. With such integration, HEO memristors offer a compelling pathway toward energy-efficient and adaptable brain-inspired electronics.
The advancement of flexible memristors has significantly promoted the development of wearable electronics for emerging neuromorphic computing applications. Inspired by the in-memory computing architecture of the human brain, flexible memristors show great application potential in emulating artificial synapses for high-efficiency, low-power neuromorphic computing. This paper provides a comprehensive overview of flexible memristors from the perspectives of development history, material systems, device structures, mechanical deformation methods, device performance analysis, stress simulation during deformation, and neuromorphic computing applications. Recent advances in flexible electronics are summarized, including single devices, device arrays, and integration. The challenges and future perspectives of flexible memristors for neuromorphic computing are discussed in depth, paving the way for wearable smart electronics and for applications in large-scale neuromorphic computing and high-order intelligent robotics.
Due to the growth of smart cities, many real-time systems have been developed to support smart cities using the Internet of Things (IoT) and other emerging technologies. They are designed to collect data for environment monitoring and to automate the communication process. In recent decades, researchers have made many efforts to propose autonomous systems for manipulating network data and providing on-time responses in critical operations. However, the widespread use of IoT devices in resource-constrained applications and mobile sensor networks introduces significant research challenges for cybersecurity. These systems are vulnerable to a variety of cyberattacks, including unauthorized access, denial-of-service attacks, and data leakage, which compromise network security. Additionally, uneven load balancing between mobile IoT devices, which frequently experience link interference, compromises the trustworthiness of the system. This paper introduces a multi-agent secured framework using lightweight edge computing to enhance cybersecurity for sensor networks, leveraging artificial intelligence for adaptive routing and multi-metric trust evaluation to achieve data privacy and mitigate potential threats. Moreover, it improves the energy efficiency of distributed sensors through intelligent data analytics techniques, resulting in highly consistent and low-latency network communication. In simulations, the proposed framework significantly outperforms state-of-the-art approaches, improving energy consumption by 43%, latency by 46%, network throughput by 51%, packet loss rate by 40%, and resilience to denial-of-service attacks by 42%.
The issue of resistance reduction through hull ventilation is of particular interest in contemporary research. This paper presents multiphase computational fluid dynamics (CFD) simulations with 2-DOF motion of a planing hull. The original hull was modified by introducing a step to allow air ventilation. Following an assessment of the hull performance, a simulation campaign in calm water was conducted to characterize the hull at various forward speeds and air insufflation rates for a defined single-step geometry. Geometric analysis of the air layer thickness beneath the hull for each simulated condition was performed using a novel method for visualizing local air thickness. Additionally, two new parameters were introduced to understand the influence of spray rails on the air volume beneath the hull and to indicate the primary direction of ventilated air escape. A validation campaign and an assessment of the uncertainty of the simulation have been conducted. The features offered by the CFD methodology include the evaluation of the air layer thickness as a function of hull velocity and injection flow rate, and of the air volume distribution beneath the hull. The air injection velocity can be adjusted across various operating conditions, thereby preventing performance or efficiency loss during navigation. Based on these findings, the study highlights the benefits of air insufflation in reducing hull resistance for high-speed planing vessels. This work lays a robust foundation for future research and promising new topics, as the exploration of air insufflation continues to be a subject of contemporary interest within naval architecture and hydrodynamics.
Beam-tracking simulations have been extensively utilized in the study of collective beam instabilities in circular accelerators. Traditionally, many simulation codes have relied on central processing unit (CPU)-based methods, tracking on a single CPU core or parallelizing the computation across multiple cores via the Message Passing Interface (MPI). Although these approaches work well for single-bunch tracking, scaling them to multiple bunches significantly increases the computational load, which often necessitates the use of a dedicated multi-CPU cluster. To address this challenge, alternative methods leveraging General-Purpose computing on Graphics Processing Units (GPGPU) have been proposed, enabling tracking studies on a standalone desktop personal computer (PC). However, frequent CPU-GPU interactions, including data transfers and synchronization operations during tracking, can introduce communication overheads, potentially reducing the overall effectiveness of GPU-based computations. In this study, we propose a novel approach that eliminates this overhead by performing the entire tracking simulation process exclusively on the GPU, thereby enabling the simultaneous processing of all bunches and their macro-particles. Specifically, we introduce MBTRACK2-CUDA, a Compute Unified Device Architecture (CUDA)-ported version of MBTRACK2, which facilitates efficient tracking of single- and multi-bunch collective effects by leveraging fully GPU-resident computation.
Groundwater modeling remains challenging due to the heterogeneity and complexity of aquifer systems, necessitating efforts to quantify Groundwater Level (GWL) dynamics to inform policymakers and hydrogeologists. This study introduces a novel Fuzzy Nonlinear Additive Regression (FNAR) model to predict monthly GWL in an unconfined aquifer in eastern Iran, using a 19-year (1998–2017) dataset from 11 piezometric wells. Under three distinct scenarios with progressively increasing input complexity, the study utilized readily available climate data, including Precipitation (Prc), Temperature (Tave), Relative Humidity (RH), and Evapotranspiration (ETo). The dataset was split into training (70%) and validation (30%) subsets. Results showed that among the three input scenarios, Scenario 3 (Sc3, incorporating all four variables) achieved the best predictive performance, with RMSE ranging from 0.305 m to 0.768 m, MAE from 0.203 m to 0.522 m, NSE from 0.661 to 0.980, and PBIAS from 0.771% to 0.981%, indicating low bias and high reliability. However, Sc2 (excluding ETo), with RMSE ranging from 0.4226 m to 0.9909 m, MAE from 0.3418 m to 0.8173 m, NSE from 0.2831 to 0.9674, and PBIAS from −0.598% to 0.968% across different months, offers practical advantages in data-scarce settings. The FNAR model outperforms conventional Fuzzy Least Squares Regression (FLSR) and holds promise for GWL forecasting in data-scarce regions where physical or numerical models are impractical. Future research should focus on integrating FNAR with deep learning algorithms and real-time data assimilation, expanding applications across diverse hydrogeological settings.
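The error metrics reported above (RMSE, MAE, NSE, PBIAS) have standard definitions that can be computed in a few lines. A minimal sketch with hypothetical observed and simulated GWL series (note that the PBIAS sign convention varies between studies):

```python
import numpy as np

def gwl_metrics(obs, sim):
    """Standard goodness-of-fit metrics for hydrological prediction."""
    err = sim - obs
    rmse = np.sqrt(np.mean(err**2))                       # root mean square error
    mae = np.mean(np.abs(err))                            # mean absolute error
    nse = 1.0 - np.sum(err**2) / np.sum((obs - obs.mean())**2)  # Nash-Sutcliffe
    pbias = 100.0 * np.sum(obs - sim) / np.sum(obs)       # percent bias
    return rmse, mae, nse, pbias

# Hypothetical monthly GWL values in metres
obs = np.array([10.0, 11.0, 12.0, 13.0])
sim = np.array([10.1, 10.9, 12.2, 12.8])
rmse, mae, nse, pbias = gwl_metrics(obs, sim)
```

NSE = 1 indicates a perfect fit, while NSE ≤ 0 means the model is no better than the observed mean; PBIAS near zero indicates low systematic bias.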
BACKGROUND: Early screening, preoperative staging, and diagnosis of lymph node metastasis are crucial for improving the prognosis of gastric cancer (GC). AIM: To evaluate the diagnostic value of combined multidetector computed tomography (MDCT) and gastrointestinal endoscopy for GC screening, preoperative staging, and lymph node metastasis detection, thereby providing a reference for clinical diagnosis and treatment. METHODS: In this retrospective study, the clinical and imaging data of 134 patients with suspected GC admitted between January 2023 and October 2024 were initially reviewed. According to the inclusion and exclusion criteria, 102 patients were finally enrolled in the analysis. All enrolled patients had undergone both MDCT and gastrointestinal endoscopy prior to surgical intervention. Preoperative clinical staging and lymph node metastasis findings were compared with pathological results. RESULTS: The combined use of MDCT and gastrointestinal endoscopy demonstrated a sensitivity of 98.53%, specificity of 97.06%, accuracy of 98.04%, positive predictive value of 98.53%, and negative predictive value of 97.06% for diagnosing GC, all significantly higher than the corresponding values for MDCT or endoscopy alone (P<0.05). The accuracy rates of the combined approach for detecting clinical T and N stages were 97.06% and 92.65%, respectively, outperforming MDCT alone (86.76% and 79.41%) and endoscopy alone (85.29% and 70.59%) (P<0.05). Among 68 patients with confirmed GC, 50 (73.53%) were pathologically diagnosed with lymph node metastasis. The accuracy for detecting lymph node metastasis was 66.00% with endoscopy, 76.00% with MDCT, and 92.00% with the combined approach, all with statistically significant differences (P<0.05). CONCLUSION: The combined application of MDCT and gastrointestinal endoscopy enhanced diagnostic accuracy for GC, provided greater consistency in preoperative staging, and improved the detection of lymph node metastasis, demonstrating significant clinical utility.
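The diagnostic figures above follow directly from a 2×2 confusion matrix. A short sketch with hypothetical counts (67 true positives, 1 false negative, 33 true negatives, 1 false positive, consistent with 68 confirmed GC cases among 102 patients) reproduces the reported percentages:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, accuracy, PPV, NPV from confusion counts."""
    sens = tp / (tp + fn)             # true positive rate
    spec = tn / (tn + fp)             # true negative rate
    acc = (tp + tn) / (tp + fn + tn + fp)
    ppv = tp / (tp + fp)              # positive predictive value
    npv = tn / (tn + fn)              # negative predictive value
    return sens, spec, acc, ppv, npv

# Hypothetical counts: 68 GC-positive and 34 GC-negative patients
sens, spec, acc, ppv, npv = diagnostic_metrics(tp=67, fn=1, tn=33, fp=1)
# sens ≈ 98.53%, spec ≈ 97.06%, acc ≈ 98.04%
```

These counts are an inferred illustration, not data taken from the study itself.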
Neuromorphic devices have garnered significant attention as potential building blocks for energy-efficient hardware systems owing to their capacity to emulate the computational efficiency of the brain. In this regard, the reservoir computing (RC) framework, which leverages straightforward training methods and efficient temporal signal processing, has emerged as a promising scheme. While various physical reservoir devices, including ferroelectric, optoelectronic, and memristor-based systems, have been demonstrated, many still face challenges related to compatibility with mainstream complementary metal-oxide-semiconductor (CMOS) integration processes. This study introduces a silicon-based Schottky-barrier metal-oxide-semiconductor field-effect transistor (SB-MOSFET), fabricated under a low thermal budget and compatible with back-end-of-line (BEOL) processing. The device demonstrated short-term memory characteristics, facilitated by the modulation of Schottky barriers and charge trapping. Utilizing these characteristics, an RC system for temporal data processing was constructed, and its performance was validated on a 5×4 digit classification task, achieving an accuracy exceeding 98% after 50 training epochs. Furthermore, the system successfully processed temporal signals in waveform classification and prediction tasks using time-division multiplexing. Overall, the SB-MOSFET's high compatibility with CMOS technology provides substantial advantages for large-scale integration, enabling the development of energy-efficient reservoir computing hardware.
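Part of what makes RC training "straightforward" is that only a linear readout is trained, typically by ridge regression in closed form. A minimal sketch of that step follows; the reservoir states here are random stand-ins, not SB-MOSFET responses:

```python
import numpy as np

def train_readout(states, targets, ridge=1e-6):
    """Closed-form ridge-regression readout for reservoir computing:
    W = Y S^T (S S^T + ridge * I)^{-1}, minimizing ||W S - Y||^2
    plus an L2 penalty on W."""
    S, Y = states, targets
    n = S.shape[0]
    return Y @ S.T @ np.linalg.inv(S @ S.T + ridge * np.eye(n))

rng = np.random.default_rng(0)
S = rng.normal(size=(20, 100))       # 20 reservoir features x 100 samples
W_true = rng.normal(size=(4, 20))    # hypothetical 4-class readout
Y = W_true @ S                       # noiseless targets for the sketch
W = train_readout(S, Y)
err = np.max(np.abs(W - W_true))     # near zero on noiseless data
```

In a physical RC system, the rows of `S` would be sampled device currents under the masked input sequence, and only `W` is ever trained.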
As Internet of Things (IoT) applications expand, Mobile Edge Computing (MEC) has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as a many-objective optimization problem. To solve it, we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) and a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
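Decomposition-based approaches such as MOCC/D scalarize a many-objective problem into single-objective subproblems; a common choice is the weighted Tchebycheff function. A minimal sketch (the weight vector and objective values are hypothetical, and the paper's exact scalarization may differ):

```python
import numpy as np

def tchebycheff(f, weights, ideal):
    """Weighted Tchebycheff scalarization used in decomposition-based
    many-objective optimization: minimize max_i w_i * |f_i - z*_i|,
    where z* is the ideal point."""
    return float(np.max(weights * np.abs(f - ideal)))

f = np.array([0.4, 0.6, 0.2])            # objective vector (to minimize)
ideal = np.zeros(3)                       # ideal point z*
g = tchebycheff(f, np.full(3, 1.0 / 3.0), ideal)
```

Each weight vector defines one subproblem; neighboring subproblems share solutions, which is what the cooperative co-evolutionary search exploits.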
Colorectal cancer (CRC) with lung oligometastases, particularly in the presence of extrapulmonary disease, poses considerable therapeutic challenges in clinical practice. We have carefully studied the multicenter study by Hu et al, which evaluated the survival outcomes of patients with metastatic CRC who received image-guided thermal ablation (IGTA). These findings provide valuable clinical evidence supporting IGTA as a feasible, minimally invasive approach and underscore the prognostic significance of metastatic distribution. However, the study by Hu et al has several limitations: not all pulmonary lesions were pathologically confirmed, postoperative follow-up relied mainly on dynamic contrast-enhanced computed tomography, no comparative analysis was performed with other local treatments, and the impact of other imaging features on efficacy and prognosis was not evaluated. Future studies should include complete pathological confirmation, integrate functional imaging and radiomics, and use prospective multicenter collaboration to optimize patient selection standards for IGTA treatment, strengthen its clinical evidence base, and ultimately promote individualized decision-making for patients with metastatic CRC.
Scalability remains a major challenge in building practical fault-tolerant quantum computers. Currently, the largest number of qubits achieved across leading quantum platforms ranges from hundreds to thousands. In atom arrays, scalability is primarily constrained by the capacity to generate large numbers of optical tweezers, and conventional techniques using acousto-optic deflectors or spatial light modulators struggle to produce arrays much beyond ∼10,000 tweezers. Moreover, these methods require additional microscope objectives to focus the light into micrometer-sized spots, which further complicates system integration and scalability. Here, we demonstrate the experimental generation of an optical tweezer array containing 280×280 spots using a metasurface, nearly an order of magnitude more than most existing systems. The metasurface leverages a large number of subwavelength phase-control pixels to engineer the wavefront of the incident light, enabling both large-scale tweezer generation and direct focusing into micron-scale spots without the need for a microscope. This result shifts the scalability bottleneck for atom arrays from the tweezer-generation hardware to the available laser power. Furthermore, the array shows excellent intensity uniformity exceeding 90%, making it suitable for homogeneous single-atom loading and paving the way for trapping arrays of more than 10,000 atoms in the near future.
In underwater target search path planning, the accuracy of sonar models directly dictates the accurate assessment of search coverage. In contrast to physics-informed sonar models, traditional geometric sonar models fail to accurately characterize the complex influence of marine environments. To overcome these challenges, we propose an acoustic physics-informed intelligent path planning framework for underwater target search, integrating three core modules. The acoustic-physical modeling module adopts 3D ray-tracing theory and the active sonar equation to construct a physics-driven sonar detection model, explicitly accounting for environmental factors that influence sonar performance across heterogeneous spaces. The hybrid parallel computing module adopts a Message Passing Interface (MPI)/Open Multi-Processing (OpenMP) hybrid strategy for large-scale acoustic simulations, combining computational domain decomposition and physics-intensive task acceleration. The search path optimization module adopts the covariance matrix adaptation evolution strategy to solve continuous optimization problems over heading angles, ensuring maximum search coverage for targets. Large-scale experiments conducted in the Pacific and Atlantic Oceans demonstrate the framework's effectiveness: (1) precise capture of sonar detection range variations from 5.45 km to 50 km in heterogeneous marine environments; (2) a significant speedup of 453.43× for acoustic physics modeling through hybrid parallelization; and (3) notable improvements of 7.23% in detection coverage and a 15.86% reduction in optimization time compared to the best baseline method. The framework provides a robust solution for underwater search missions in complex marine environments.
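The active sonar equation mentioned above balances source level, two-way transmission loss, target strength, noise, directivity, and detection threshold. A hedged sketch with a simple spherical-spreading loss model (all dB values and the absorption coefficient are hypothetical inputs; the paper's 3D ray-tracing model is far more detailed):

```python
import math

def echo_excess(SL, r_km, TS, NL, DI, DT, alpha_db_per_km=0.05):
    """Active sonar equation: SE = SL - 2*TL + TS - (NL - DI) - DT,
    with a simple spherical-spreading transmission loss
    TL = 20*log10(r) + alpha*r (log term over range in metres).
    All quantities in dB; a target is detectable when SE >= 0."""
    r_m = r_km * 1000.0
    TL = 20.0 * math.log10(r_m) + alpha_db_per_km * r_km
    return SL - 2.0 * TL + TS - (NL - DI) - DT

# Hypothetical sonar parameters at a 10 km range
se = echo_excess(SL=220.0, r_km=10.0, TS=15.0, NL=70.0, DI=20.0, DT=10.0)
detectable = se >= 0.0
```

Solving `echo_excess(...) = 0` for range gives the detection radius that a geometric sonar model would treat as fixed, but which the physics-informed model lets vary with the environment.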
This study proposes a lightweight rice disease detection model optimized for edge computing environments. The goal is to enhance the You Only Look Once (YOLO) v5 architecture to achieve a balance between real-time diagnostic performance and computational efficiency. To this end, a total of 3234 high-resolution images (2400×1080) were collected of three major rice diseases (Rice Blast, Bacterial Blight, and Brown Spot) frequently found in actual rice cultivation fields. These images served as the training dataset. The proposed YOLOv5-V2 model removes the Focus layer from the original YOLOv5s and integrates ShuffleNet V2 into the backbone, resulting in both model compression and improved inference speed. Additionally, YOLOv5-P, based on PP-PicoDet, was configured as a comparative model to quantitatively evaluate performance. Experimental results demonstrated that YOLOv5-V2 achieved excellent detection performance, with an mAP@0.5 of 89.6%, mAP@0.5–0.95 of 66.7%, precision of 91.3%, and recall of 85.6%, while maintaining a lightweight model size of 6.45 MB. In contrast, YOLOv5-P had a smaller model size of 4.03 MB but showed lower performance, with an mAP@0.5 of 70.3%, mAP@0.5–0.95 of 35.2%, precision of 62.3%, and recall of 74.1%. This study lays a technical foundation for the implementation of smart agriculture and real-time disease diagnosis systems by proposing a model that satisfies both accuracy and lightweight requirements.
Classical computation of electronic properties in large-scale materials remains challenging. Quantum computation has the potential to offer advantages in memory footprint and computational scaling. However, general and viable quantum algorithms for simulating large-scale materials are still limited. We propose and implement random-state quantum algorithms to calculate electronic-structure properties of real materials. Using a random-state circuit on a small number of qubits, we employ real-time evolution with first-order Trotter decomposition and the Hadamard test to obtain the electronic density of states, and we develop a modified quantum phase estimation algorithm to calculate the real-space local density of states via direct quantum measurements. Furthermore, we validate these algorithms by numerically computing the density of states and spatial distributions of electronic states in graphene, twisted bilayer graphene quasicrystals, and fractal lattices, covering system sizes from hundreds to thousands of atoms. Our results show that random-state quantum algorithms provide a general and qubit-efficient route to scalable simulations of electronic properties in large-scale periodic and aperiodic materials.
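The density-of-states step can be illustrated classically: a Hadamard test would measure the autocorrelation C(t) = ⟨r|e^{-iHt}|r⟩ of a random state, whose Fourier transform approximates the DOS. A toy numerical sketch follows, in which exact diagonalization stands in for Trotterized evolution; the Hamiltonian and grids are hypothetical, not the paper's materials:

```python
import numpy as np

def random_state_dos(H, energies, times, n_states=50, seed=1):
    """Estimate DOS(E) ~ (1/T) sum_t exp(iEt) C(t), averaging the
    autocorrelation C(t) = <r|exp(-iHt)|r> over random states |r>.
    On a quantum device, C(t) would come from Hadamard tests."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    w, V = np.linalg.eigh(H)              # classical stand-in for e^{-iHt}
    dos = np.zeros(len(energies))
    for _ in range(n_states):
        r = rng.normal(size=n) + 1j * rng.normal(size=n)
        r /= np.linalg.norm(r)
        p = np.abs(V.conj().T @ r) ** 2   # spectral weights of |r>
        corr = np.array([np.sum(p * np.exp(-1j * w * t)) for t in times])
        dos += np.array([np.real(np.sum(np.exp(1j * E * times) * corr))
                         for E in energies])
    return dos / (n_states * len(times))

H = np.diag([-1.0, 0.0, 0.0, 1.0])        # toy spectrum, degenerate at E = 0
energies = np.linspace(-2.0, 2.0, 5)
dos = random_state_dos(H, energies, times=np.linspace(0.0, 40.0, 400))
```

The random-state trick makes the cost independent of tracking individual eigenstates, which is what yields the qubit efficiency claimed above.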
The evolution of cities into digitally managed environments requires computational systems that can operate in real time while supporting predictive and adaptive infrastructure management. Earlier approaches have often advanced one dimension, such as Internet of Things (IoT)-based data acquisition, Artificial Intelligence (AI)-driven analytics, or digital twin visualization, without fully integrating these strands into a single operational loop. As a result, many existing solutions encounter bottlenecks in responsiveness, interoperability, and scalability, while also leaving concerns about data privacy unresolved. This research introduces a hybrid AI-IoT-Digital Twin framework that combines continuous sensing, distributed intelligence, and simulation-based decision support. The design incorporates multi-source sensor data, lightweight edge inference through Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models, and federated learning enhanced with secure aggregation and differential privacy to maintain confidentiality. A digital twin layer extends these capabilities by simulating city assets such as traffic flows and water networks, generating what-if scenarios, and issuing actionable control signals. Complementary modules, including model compression and synchronization protocols, are embedded to ensure reliability in bandwidth-constrained and heterogeneous urban environments. The framework is validated in two urban domains: traffic management, where it adapts signal cycles based on real-time congestion patterns, and pipeline monitoring, where it anticipates leaks through pressure and vibration data. Experimental results show a 28% reduction in response time, a 35% decrease in maintenance costs, and a marked reduction in false positives relative to conventional baselines. The architecture also demonstrates stability across 50+ edge devices under federated training and resilience to uneven node participation. The proposed system provides a scalable and privacy-aware foundation for predictive urban infrastructure management. By closing the loop between sensing, learning, and control, it reduces operator dependence, enhances resource efficiency, and supports transparent governance models for emerging smart cities.
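The federated-learning aggregation referenced above can be sketched as federated averaging, where the server mixes client updates weighted by local data volume so that no raw sensor data leaves a node. A minimal illustration (secure aggregation and differential-privacy noise are omitted; the node sizes and weights are hypothetical):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: global model = sum_k (n_k / n) * w_k,
    where n_k is client k's local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * w for c, w in zip(coeffs, client_weights))

# Three hypothetical edge nodes with different amounts of local data
w_global = fed_avg(
    client_weights=[np.array([1.0, 0.0]),
                    np.array([0.0, 1.0]),
                    np.array([1.0, 1.0])],
    client_sizes=[100, 100, 200],
)
# -> 0.25*[1,0] + 0.25*[0,1] + 0.5*[1,1] = [0.75, 0.75]
```

In the full framework, each weight vector would be a compressed CNN/LSTM parameter update, and the aggregation would run under secure aggregation with calibrated noise added for differential privacy.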
Funding: supported by the National Natural Science Foundation of China.
Abstract: In this paper, we study the computational structure of computable functions (a structure of computation trees) and, by analyzing it, obtain the most general algorithm and model for computation on computable functions.
Funding: supported by the National Key Research and Development Program of China under Grant 2022YFB3608300, and in part by the National Natural Science Foundation of China (NSFC) under Grants 62404050, U2341218, 62574056, and 62204052.
Abstract: Organic electrochemical transistor (OECT) devices show great promise for reservoir computing (RC) systems, but their lack of tunable dynamic characteristics limits their application in multi-temporal-scale tasks. In this study, we report an OECT-based neuromorphic device with tunable relaxation time (τ) obtained by introducing an additional vertical back-gate electrode into a planar structure. The dual-gate design enables τ reconfiguration from 93 to 541 ms. The tunable relaxation behavior can be attributed to the combined effects of planar-gate-induced electrochemical doping and back-gate-induced electrostatic coupling, as verified by electrochemical impedance spectroscopy analysis. Furthermore, we used the τ-tunable OECT devices as physical reservoirs in an RC system for intelligent driving trajectory prediction, achieving a significant improvement in prediction accuracy from below 69% to 99%. The results demonstrate that the τ-tunable OECT is a promising candidate for multi-temporal-scale neuromorphic computing applications.
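A first-order relaxation model conveys what a tunable τ buys a reservoir: nodes with different time constants retain input history over different horizons. The leaky-integrator node below is a generic sketch, not a device model from the paper; only the two τ endpoints (93 ms and 541 ms) come from the abstract, and the pulse train and sampling interval are arbitrary:

```python
import math

class LeakyNode:
    """First-order relaxation node: the state decays toward zero with time
    constant tau between input pulses, mimicking a fading channel current."""
    def __init__(self, tau_ms):
        self.tau = tau_ms
        self.state = 0.0

    def step(self, pulse, dt_ms=10.0):
        # exponential decay over one sampling interval, then add the new input
        self.state = self.state * math.exp(-dt_ms / self.tau) + pulse
        return self.state

# two nodes spanning the reported tuning range act as a two-timescale reservoir
fast, slow = LeakyNode(93.0), LeakyNode(541.0)
trace_fast = [fast.step(p) for p in [1, 0, 0, 0]]
trace_slow = [slow.step(p) for p in [1, 0, 0, 0]]
```

After a single pulse, the slow node retains far more of the signal three steps later, which is exactly the property that lets a mixed-τ reservoir serve multi-temporal-scale tasks.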
Funding: supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147, 242102210027) and the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101). (Corresponding author: Dong Yuan.)
Abstract: Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud–edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
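The dynamic objective-weighting idea can be sketched as scalarized rewards with adaptive weights. The update rule below is a deliberately simple stand-in for the paper's RBFN-learned weight dynamics (it shifts weight toward objectives whose value estimates changed most and renormalizes); the learning rate and four-objective layout are illustrative:

```python
def scalarize(rewards, weights):
    # combine per-objective rewards (delay, energy, load balance, privacy)
    # into the single scalar a DDQN-style agent would train on
    return sum(w * r for w, r in zip(weights, rewards))

def update_weights(weights, value_changes, lr=0.5):
    # shift weight toward objectives whose value estimates changed most,
    # then renormalize so the weights remain a convex combination
    raw = [w + lr * abs(c) for w, c in zip(weights, value_changes)]
    total = sum(raw)
    return [r / total for r in raw]

# objective 0 (say, task delay) showed the largest recent value change
w = update_weights([0.25, 0.25, 0.25, 0.25], [1.0, 0.0, 0.0, 0.0])
```

Keeping the weights normalized means the scalarized signal stays on a stable scale while the emphasis drifts toward whichever objective is currently hardest to predict.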
Funding: supported by the National Natural Science Foundation of China (62202215), the Liaoning Province Applied Basic Research Program (Youth Special Project, 2023JH2/101600038), the Shenyang Youth Science and Technology Innovation Talent Support Program (RC220458), the Guangxuan Program of Shenyang Ligong University (SYLUGXRC202216), the Basic Research Special Funds for Undergraduate Universities in Liaoning Province (LJ212410144067), the Natural Science Foundation of Liaoning Province (2024-MS-113), and science and technology funds from the Liaoning Education Department (LJKZ0242).
Abstract: In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a complex latency minimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture incorporating the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
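The PER mechanism mentioned above samples transitions in proportion to their TD error rather than uniformly. The sketch below shows only the replay side, using a plain list with proportional sampling; a production implementation (and presumably the paper's) would pair it with the D3QN networks and use a sum-tree for O(log n) sampling. The capacity, `alpha`, and epsilon constant are conventional defaults, not values from the paper:

```python
import random

class PrioritizedReplay:
    """Proportional prioritized experience replay: transitions with larger
    TD error are sampled more often (alpha controls how strong the bias is)."""
    def __init__(self, capacity=1000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prio = [], []

    def push(self, transition, td_error):
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.prio.pop(0)
        self.data.append(transition)
        # small epsilon keeps zero-error transitions sampleable
        self.prio.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, k):
        total = sum(self.prio)
        probs = [p / total for p in self.prio]
        idx = random.choices(range(len(self.data)), weights=probs, k=k)
        return [self.data[i] for i in idx]
```

Transitions with a TD error of 10 are drawn orders of magnitude more often than ones with a near-zero error, which is what accelerates learning on rare, surprising offloading outcomes.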
Funding: supported by the NSFC (12474071), the Natural Science Foundation of Shandong Province (ZR2024YQ051, ZR2025QB50), the Guangdong Basic and Applied Basic Research Foundation (2025A1515011191), the Shanghai Sailing Program (23YF1402200, 23YF1402400), the Basic Research Program of Jiangsu (BK20240424), the Open Research Fund of the State Key Laboratory of Crystal Materials (KF2406), the Taishan Scholar Foundation of Shandong Province (tsqn202408006, tsqn202507058), the Young Talent of Lifting Engineering for Science and Technology in Shandong, China (SDAST2024QTB002), and the Qilu Young Scholar Program of Shandong University.
Abstract: As emerging two-dimensional (2D) materials, carbides and nitrides (MXenes) can form solid solutions or organized structures made up of multi-atomic layers. With remarkable and adjustable electrical, optical, mechanical, and electrochemical characteristics, MXenes have shown great potential in brain-inspired neuromorphic computing electronics, including neuromorphic gas sensors, pressure sensors, and photodetectors. This paper provides a forward-looking review of research progress on MXenes in the neuromorphic sensing domain and discusses the critical challenges that need to be resolved. Key bottlenecks such as insufficient long-term stability under environmental exposure, high costs, scalability limitations in large-scale production, and mechanical mismatch in wearable integration hinder their practical deployment. Furthermore, unresolved issues such as interfacial compatibility in heterostructures and energy inefficiency in neuromorphic signal conversion demand urgent attention. The review offers insights into future research directions to enhance the fundamental understanding of MXene properties and promote further integration into neuromorphic computing applications through convergence with various emerging technologies.
Funding: financially supported by the National Natural Science Foundation of China (Grant No. 12172093) and the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2021A1515012607).
Abstract: High-entropy oxides (HEOs) have emerged as a promising class of memristive materials, characterized by entropy-stabilized crystal structures, multivalent cation coordination, and tunable defect landscapes. These intrinsic features enable forming-free resistive switching, multilevel conductance modulation, and synaptic plasticity, making HEOs attractive for neuromorphic computing. This review outlines recent progress in HEO-based memristors across materials engineering, switching mechanisms, and synaptic emulation. Particular attention is given to vacancy migration, phase transitions, and valence-state dynamics—mechanisms that underlie the switching behaviors observed in both amorphous and crystalline systems. Their relevance to neuromorphic functions such as short-term plasticity and spike-timing-dependent learning is also examined. While encouraging results have been achieved at the device level, challenges remain in conductance precision, variability control, and scalable integration. Addressing these demands a concerted effort across materials design, interface optimization, and task-aware modeling. With such integration, HEO memristors offer a compelling pathway toward energy-efficient and adaptable brain-inspired electronics.
Funding: supported by the NSFC (12474071), the Natural Science Foundation of Shandong Province (ZR2024YQ051), the Open Research Fund of the State Key Laboratory of Materials for Integrated Circuits (SKLJC-K2024-12), the Shanghai Sailing Program (23YF1402200, 23YF1402400), the Natural Science Foundation of Jiangsu Province (BK20240424), the Taishan Scholar Foundation of Shandong Province (tsqn202408006), the Young Talent of Lifting Engineering for Science and Technology in Shandong, China (SDAST2024QTB002), and the Qilu Young Scholar Program of Shandong University.
Abstract: The advancement of flexible memristors has significantly promoted the development of wearable electronics for emerging neuromorphic computing applications. Inspired by the in-memory computing architecture of the human brain, flexible memristors exhibit great application potential in emulating artificial synapses for high-efficiency, low-power neuromorphic computing. This paper provides a comprehensive overview of flexible memristors from the perspectives of development history, material systems, device structure, mechanical deformation methods, device performance analysis, stress simulation during deformation, and neuromorphic computing applications. Recent advances in flexible electronics are summarized, covering single devices, device arrays, and integration. The challenges and future perspectives of flexible memristors for neuromorphic computing are discussed in depth, paving the way for constructing wearable smart electronics and for applications in large-scale neuromorphic computing and high-order intelligent robotics.
Funding: supported by the Deanship of Graduate Studies and Scientific Research at Jouf University.
Abstract: Due to the growth of smart cities, many real-time systems have been developed to support smart cities using the Internet of Things (IoT) and emerging technologies. They are formulated to collect data for environment monitoring and to automate the communication process. In recent decades, researchers have made many efforts to propose autonomous systems for manipulating network data and providing on-time responses in critical operations. However, the widespread use of IoT devices in resource-constrained applications and mobile sensor networks introduces significant research challenges for cybersecurity. These systems are vulnerable to a variety of cyberattacks, including unauthorized access, denial-of-service attacks, and data leakage, which compromise the network's security. Additionally, uneven load balancing between mobile IoT devices, which frequently experience link interference, compromises the trustworthiness of the system. This paper introduces a multi-agent secured framework using lightweight edge computing to enhance cybersecurity for sensor networks, leveraging artificial intelligence for adaptive routing and multi-metric trust evaluation to achieve data privacy and mitigate potential threats. Moreover, it enhances the energy efficiency of distributed sensors through intelligent data analytics techniques, resulting in highly consistent and low-latency network communication. In simulations, the proposed framework outperforms state-of-the-art approaches, improving energy consumption by 43%, latency by 46%, network throughput by 51%, packet loss rate by 40%, and resilience to denial-of-service attacks by 42%.
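The multi-metric trust evaluation feeding adaptive routing can be sketched as a weighted aggregation over per-neighbor metrics. The metric names (`fwd`, `energy`), equal default weights, and the 0.5 admission threshold below are hypothetical placeholders, not values from the paper:

```python
def trust_score(metrics, weights=None):
    """Weighted aggregation of per-node trust metrics, each scaled to [0, 1]
    (e.g. packet-forwarding ratio, residual energy, link stability)."""
    if weights is None:
        weights = {k: 1.0 / len(metrics) for k in metrics}
    return sum(weights[k] * v for k, v in metrics.items())

def next_hop(candidates, threshold=0.5):
    # adaptive routing: pick the most trusted neighbor above the threshold,
    # or None if no neighbor is trustworthy enough to forward through
    trusted = {n: trust_score(m) for n, m in candidates.items()
               if trust_score(m) >= threshold}
    return max(trusted, key=trusted.get) if trusted else None
```

Returning `None` when every neighbor falls below the threshold is the hook where a real system would fall back to buffering or rerouting, which is how low-trust (possibly compromised) nodes get excluded from paths.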
Funding: supported by European Union funding (PON "Ricerca e Innovazione" 2014–2020).
Abstract: The issue of resistance reduction through hull ventilation is of particular interest in contemporary research. This paper presents multiphase computational fluid dynamics (CFD) simulations with 2-DOF motion of a planing hull. The original hull was modified by introducing a step to allow air ventilation. Following an assessment of the hull performance, a simulation campaign in calm water was conducted to characterize the hull at various forward speeds and air insufflation rates for a defined single-step geometry. Geometric analysis of the air layer thickness beneath the hull for each simulated condition was performed using a novel method for visualizing local air thickness. Additionally, two new parameters were introduced to understand the influence of spray rails on the air volume beneath the hull and to indicate the primary direction of ventilated air escape. A validation campaign and an assessment of the uncertainty of the simulations were also conducted. The features offered by the CFD methodology include the evaluation of the air layer thickness as a function of hull velocity and injection flow rate, and of the air volume distribution beneath the hull. The air injection velocity can be adjusted across various operating conditions, thereby preventing performance or efficiency loss during navigation. Based on these findings, the study highlights the benefits of air insufflation in reducing hull resistance for high-speed planing vessels. This work lays a robust foundation for future research, as the exploration of air insufflation continues to be a topic of contemporary interest within naval architecture and hydrodynamics.
Funding: supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (MSIT) (No. RS-2022-00143178) and the Ministry of Education (MOE) (Nos. 2022R1A6A3A13053896 and 2022R1F1A1074616), Republic of Korea.
Abstract: Beam-tracking simulations have been extensively utilized in the study of collective beam instabilities in circular accelerators. Traditionally, many simulation codes have relied on central processing unit (CPU)-based methods, tracking on a single CPU core or parallelizing the computation across multiple cores via the Message Passing Interface (MPI). Although these approaches work well for single-bunch tracking, scaling them to multiple bunches significantly increases the computational load, which often necessitates a dedicated multi-CPU cluster. To address this challenge, alternative methods leveraging General-Purpose computing on Graphics Processing Units (GPGPU) have been proposed, enabling tracking studies on a standalone desktop personal computer (PC). However, frequent CPU–GPU interactions, including data transfers and synchronization operations during tracking, can introduce communication overheads, potentially reducing the overall effectiveness of GPU-based computations. In this study, we propose a novel approach that eliminates this overhead by performing the entire tracking simulation exclusively on the GPU, thereby enabling the simultaneous processing of all bunches and their macro-particles. Specifically, we introduce MBTRACK2-CUDA, a Compute Unified Device Architecture (CUDA) port of MBTRACK2, which facilitates efficient tracking of single- and multi-bunch collective effects by leveraging fully GPU-resident computation.
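The "process all bunches and macro-particles at once" idea amounts to storing every particle in flat arrays and applying the same one-turn map to all of them in a single data-parallel sweep, which is exactly the shape of work a CUDA kernel does well. The Python sketch below illustrates that structure with a plain linear betatron rotation; the tune value, particle coordinates, and the absence of any collective kick are all simplifications, not details of MBTRACK2:

```python
import math

def one_turn(xs, pxs, tune=0.31):
    # rotate every (x, px) pair by the betatron phase advance in one pass:
    # the same elementwise update a GPU kernel would apply to all macro-particles
    c, s = math.cos(2 * math.pi * tune), math.sin(2 * math.pi * tune)
    return ([c * x + s * p for x, p in zip(xs, pxs)],
            [-s * x + c * p for x, p in zip(xs, pxs)])

# all bunches stored flat, so one kernel-like sweep touches every particle;
# here: two bunches of two macro-particles each
xs  = [1.0, 0.5, -0.2, 0.0]
pxs = [0.0, 0.0,  0.1, 0.3]
for _ in range(1000):
    xs, pxs = one_turn(xs, pxs)
```

Because the map is a pure rotation, each particle's invariant x² + px² is conserved over the 1000 turns, which doubles as a cheap correctness check when porting such a loop to the GPU.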
Funding: supported by the Iran National Science Foundation (INSF) and the University of Birjand under grant number 4034771.
Abstract: Groundwater modeling remains challenging due to the heterogeneity and complexity of aquifer systems, necessitating efforts to quantify groundwater level (GWL) dynamics to inform policymakers and hydrogeologists. This study introduces a novel Fuzzy Nonlinear Additive Regression (FNAR) model to predict monthly GWL in an unconfined aquifer in eastern Iran, using a 19-year (1998–2017) dataset from 11 piezometric wells. Under three distinct scenarios with progressively increasing input complexity, the study utilized readily available climate data, including precipitation (Prc), temperature (Tave), relative humidity (RH), and evapotranspiration (ETo). The dataset was split into training (70%) and validation (30%) subsets. Results showed that among the three input scenarios, Scenario 3 (Sc3, incorporating all four variables) achieved the best predictive performance, with RMSE ranging from 0.305 m to 0.768 m, MAE from 0.203 m to 0.522 m, NSE from 0.661 to 0.980, and PBIAS from 0.771% to 0.981%, indicating low bias and high reliability. However, Sc2 (excluding ETo), with RMSE ranging from 0.4226 m to 0.9909 m, MAE from 0.3418 m to 0.8173 m, NSE from 0.2831 to 0.9674, and PBIAS from −0.598% to 0.968% across different months, offers practical advantages in data-scarce settings. The FNAR model outperforms conventional Fuzzy Least Squares Regression (FLSR) and holds promise for GWL forecasting in data-scarce regions where physical or numerical models are impractical. Future research should focus on integrating FNAR with deep learning algorithms and real-time data assimilation, expanding applications across diverse hydrogeological settings.
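The four skill scores quoted above (RMSE, MAE, NSE, PBIAS) have standard definitions, sketched below. Note that PBIAS sign conventions vary between references; this version uses the common hydrology form 100·Σ(obs−sim)/Σobs, which may or may not match the paper's:

```python
def rmse(obs, sim):
    # root mean square error, in the units of the data (metres for GWL)
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)) ** 0.5

def mae(obs, sim):
    # mean absolute error
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 is perfect, 0 means no better than the mean
    mean_o = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_o) ** 2 for o in obs)
    return 1 - num / den

def pbias(obs, sim):
    # percent bias; positive values indicate underestimation under this convention
    return 100 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)
```

Reporting all four together is informative because RMSE/MAE measure magnitude of error, NSE measures skill relative to climatology, and PBIAS isolates systematic over- or under-prediction.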
Abstract: BACKGROUND: Early screening, preoperative staging, and diagnosis of lymph node metastasis are crucial for improving the prognosis of gastric cancer (GC). AIM: To evaluate the diagnostic value of combined multidetector computed tomography (MDCT) and gastrointestinal endoscopy for GC screening, preoperative staging, and lymph node metastasis detection, thereby providing a reference for clinical diagnosis and treatment. METHODS: In this retrospective study, clinical and imaging data of 134 patients with suspected GC admitted between January 2023 and October 2024 were initially reviewed. According to the inclusion and exclusion criteria, 102 patients were finally enrolled in the analysis. All enrolled patients had undergone both MDCT and gastrointestinal endoscopy prior to surgical intervention. Preoperative clinical staging and lymph node metastasis findings were compared with pathological results. RESULTS: The combined use of MDCT and gastrointestinal endoscopy demonstrated a sensitivity of 98.53%, specificity of 97.06%, accuracy of 98.04%, positive predictive value of 98.53%, and negative predictive value of 97.06% for diagnosing GC, all significantly higher than those of MDCT or endoscopy alone (P<0.05). The accuracy rates of the combined approach for detecting clinical T and N stages were 97.06% and 92.65%, respectively, outperforming MDCT alone (86.76% and 79.41%) and endoscopy alone (85.29% and 70.59%) (P<0.05). Among 68 patients with confirmed GC, 50 (73.53%) were pathologically diagnosed with lymph node metastasis. The accuracy for detecting lymph node metastasis was 66.00% with endoscopy, 76.00% with MDCT, and 92.00% with the combined approach, with statistically significant differences (P<0.05). CONCLUSION: The combined application of MDCT and gastrointestinal endoscopy enhanced diagnostic accuracy for GC, provided greater consistency in preoperative staging, and improved the detection of lymph node metastasis, thereby demonstrating significant clinical utility.
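The five screening metrics reported above all derive from a 2×2 confusion table. As a sanity check, the quoted percentages are consistent with a table of roughly 67 true positives, 1 false positive, 33 true negatives, and 1 false negative out of the 102 enrolled patients (an inference from the percentages, not counts stated in the abstract):

```python
def diagnostics(tp, fp, tn, fn):
    # standard screening metrics from a 2x2 confusion table
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "accuracy":    (tp + tn) / total,
        "ppv":         tp / (tp + fp),  # positive predictive value
        "npv":         tn / (tn + fn),  # negative predictive value
    }

m = diagnostics(tp=67, fp=1, tn=33, fn=1)
```

With these counts, sensitivity and PPV both come out to 67/68 ≈ 98.53% and specificity and NPV to 33/34 ≈ 97.06%, matching the reported figures.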
Funding: supported in part by the Chinese Academy of Sciences (No. XDA0330302) and an NSFC program (No. 22127901).
Abstract: Neuromorphic devices have garnered significant attention as potential building blocks for energy-efficient hardware systems owing to their capacity to emulate the computational efficiency of the brain. In this regard, the reservoir computing (RC) framework, which leverages straightforward training methods and efficient temporal signal processing, has emerged as a promising scheme. While various physical reservoir devices, including ferroelectric, optoelectronic, and memristor-based systems, have been demonstrated, many still face challenges related to compatibility with mainstream complementary metal-oxide-semiconductor (CMOS) integration processes. This study introduces a silicon-based Schottky-barrier metal-oxide-semiconductor field-effect transistor (SB-MOSFET), fabricated under a low thermal budget and compatible with back-end-of-line (BEOL) processing. The device demonstrates short-term memory characteristics, facilitated by the modulation of Schottky barriers and charge trapping. Utilizing these characteristics, an RC system for temporal data processing was constructed, and its performance was validated on a 5×4 digit classification task, achieving an accuracy exceeding 98% after 50 training epochs. Furthermore, the system successfully processed temporal signals in waveform classification and prediction tasks using time-division multiplexing. Overall, the SB-MOSFET's high compatibility with CMOS technology provides substantial advantages for large-scale integration, enabling the development of energy-efficient reservoir computing hardware.
Funding: supported by the Youth Talent Project of the Scientific Research Program of the Hubei Provincial Department of Education under Grant Q20241809 and the Doctoral Scientific Research Foundation of Hubei University of Automotive Technology under Grant 202404.
Abstract: As Internet of Things (IoT) applications expand, Mobile Edge Computing (MEC) has emerged as a promising architecture to overcome the real-time processing limitations of mobile devices. Edge-side computation offloading plays a pivotal role in MEC performance but remains challenging due to complex task topologies, conflicting objectives, and limited resources. This paper addresses high-dimensional multi-objective offloading for serial heterogeneous tasks in MEC. We jointly consider task heterogeneity, high-dimensional objectives, and flexible resource scheduling, modeling the problem as a many-objective optimization. To solve it, we propose a flexible framework integrating an improved cooperative co-evolutionary algorithm based on decomposition (MOCC/D) and a flexible scheduling strategy. Experimental results on benchmark functions and simulation scenarios show that the proposed method outperforms existing approaches in both convergence and solution quality.
Abstract: Colorectal cancer (CRC) with lung oligometastases, particularly in the presence of extrapulmonary disease, poses considerable therapeutic challenges in clinical practice. We have carefully studied the multicenter study by Hu et al, which evaluated the survival outcomes of patients with metastatic CRC who received image-guided thermal ablation (IGTA). These findings provide valuable clinical evidence supporting IGTA as a feasible, minimally invasive approach and underscore the prognostic significance of metastatic distribution. However, the study by Hu et al has several limitations: not all pulmonary lesions were pathologically confirmed, postoperative follow-up relied mainly on dynamic contrast-enhanced computed tomography, no comparative analysis was performed with other local treatments, and the impact of other imaging features on efficacy and prognosis was not evaluated. Future studies should include complete pathological confirmation, integrate functional imaging and radiomics, and use prospective multicenter collaboration to optimize patient selection standards for IGTA treatment, strengthen its clinical evidence base, and ultimately promote individualized decision-making for patients with metastatic CRC.
Funding: supported by the National Natural Science Foundation of China (Grant No. 92576208), the Tsinghua University Initiative Scientific Research Program, the Beijing Science and Technology Planning Project, and the Tsinghua University Dushi Program.
Abstract: Scalability remains a major challenge in building practical fault-tolerant quantum computers. Currently, the largest number of qubits achieved across leading quantum platforms ranges from hundreds to thousands. In atom arrays, scalability is primarily constrained by the capacity to generate large numbers of optical tweezers, and conventional techniques using acousto-optic deflectors or spatial light modulators struggle to produce arrays much beyond ~10,000 tweezers. Moreover, these methods require additional microscope objectives to focus the light into micrometer-sized spots, which further complicates system integration and scalability. Here, we demonstrate the experimental generation of an optical tweezer array containing 280×280 spots using a metasurface, nearly an order of magnitude more than most existing systems. The metasurface leverages a large number of subwavelength phase-control pixels to engineer the wavefront of the incident light, enabling both large-scale tweezer generation and direct focusing into micron-scale spots without the need for a microscope. This result shifts the scalability bottleneck for atom arrays from the tweezer-generation hardware to the available laser power. Furthermore, the array shows excellent intensity uniformity exceeding 90%, making it suitable for homogeneous single-atom loading and paving the way for trapping arrays of more than 10,000 atoms in the near future.
Funding: supported by the Natural Science Foundation of Hunan Province (2024JJ5409).
Abstract: In underwater target search path planning, the accuracy of sonar models directly dictates the accurate assessment of search coverage. In contrast to physics-informed sonar models, traditional geometric sonar models fail to accurately characterize the complex influence of marine environments. To overcome these challenges, we propose an acoustic physics-informed intelligent path planning framework for underwater target search, integrating three core modules. The acoustic-physical modeling module adopts 3D ray-tracing theory and the active sonar equation to construct a physics-driven sonar detection model, explicitly accounting for environmental factors that influence sonar performance across heterogeneous spaces. The hybrid parallel computing module adopts a Message Passing Interface (MPI)/OpenMP hybrid strategy for large-scale acoustic simulations, combining computational domain decomposition and physics-intensive task acceleration. The search path optimization module adopts the covariance matrix adaptation evolution strategy to solve continuous optimization problems over heading angles, ensuring maximum search coverage for targets. Large-scale experiments conducted in the Pacific and Atlantic Oceans demonstrate the framework's effectiveness: (1) precise capture of sonar detection range variations from 5.45 km to 50 km in heterogeneous marine environments; (2) a significant speedup of 453.43× for acoustic physics modeling through hybrid parallelization; (3) notable improvements of 7.23% in detection coverage and a 15.86% reduction in optimization time compared to the best baseline method. The framework provides a robust solution for underwater search missions in complex marine environments.
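The active sonar equation at the heart of the detection model predicts a signal excess SE = SL − 2TL + TS − (NL − DI) − DT, and the detection range is where SE crosses zero. The sketch below uses spherical spreading with a simple absorption term for TL; every level (source level, target strength, noise, directivity index, detection threshold, absorption coefficient) is an illustrative round number, not a value from the paper:

```python
import math

def transmission_loss(r_m, alpha_db_per_km=0.08):
    # one-way loss: spherical spreading (20 log r) plus frequency-dependent
    # absorption; real models replace this with ray-traced propagation
    return 20 * math.log10(r_m) + alpha_db_per_km * r_m / 1000

def signal_excess(r_m, sl=220, ts=15, nl=70, di=20, dt=10):
    # active sonar equation: SE = SL - 2TL + TS - (NL - DI) - DT (all in dB)
    return sl - 2 * transmission_loss(r_m) + ts - (nl - di) - dt

def detection_range(max_r_m=60000, step=100):
    # largest range at which the signal excess is still positive
    r = step
    while r <= max_r_m and signal_excess(r) > 0:
        r += step
    return r - step
```

Even this toy version shows why detection range is so environment-sensitive: a few dB of extra loss or noise shifts the SE = 0 crossing by kilometres, which is the variation the paper's ray-traced model captures explicitly.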
Abstract: This study proposes a lightweight rice disease detection model optimized for edge computing environments. The goal is to enhance the You Only Look Once (YOLO) v5 architecture to achieve a balance between real-time diagnostic performance and computational efficiency. To this end, a total of 3234 high-resolution images (2400×1080) covering three major rice diseases (Rice Blast, Bacterial Blight, and Brown Spot) frequently found in actual rice cultivation fields were collected and served as the training dataset. The proposed YOLOv5-V2 model removes the Focus layer from the original YOLOv5s and integrates ShuffleNet V2 into the backbone, resulting in both model compression and improved inference speed. Additionally, YOLOv5-P, based on PP-PicoDet, was configured as a comparative model to quantitatively evaluate performance. Experimental results demonstrated that YOLOv5-V2 achieved excellent detection performance, with an mAP@0.5 of 89.6%, mAP@0.5–0.95 of 66.7%, precision of 91.3%, and recall of 85.6%, while maintaining a lightweight model size of 6.45 MB. In contrast, YOLOv5-P exhibited a smaller model size of 4.03 MB but showed lower performance, with an mAP@0.5 of 70.3%, mAP@0.5–0.95 of 35.2%, precision of 62.3%, and recall of 74.1%. This study lays a technical foundation for the implementation of smart agriculture and real-time disease diagnosis systems by proposing a model that satisfies both accuracy and lightweight requirements.
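The mAP@0.5 and mAP@0.5–0.95 figures quoted above both hinge on intersection over union (IoU): a predicted box counts as a true positive only if its IoU with a ground-truth box exceeds the threshold. The standard computation, sketched here for axis-aligned `(x1, y1, x2, y2)` boxes:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes: the overlap
    measure behind the mAP@0.5 and mAP@0.5-0.95 detection thresholds."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0
```

mAP@0.5–0.95 averages the resulting average precision over IoU thresholds from 0.5 to 0.95 in steps of 0.05, which is why it is the stricter of the two numbers for both models.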
Funding: supported by the Major Project for the Integration of Science, Education and Industry (Grant No. 2025ZDZX02).
Abstract: Classical computation of electronic properties in large-scale materials remains challenging. Quantum computation has the potential to offer advantages in memory footprint and computational scaling. However, general and viable quantum algorithms for simulating large-scale materials are still limited. We propose and implement random-state quantum algorithms to calculate electronic-structure properties of real materials. Using a random-state circuit on a small number of qubits, we employ real-time evolution with first-order Trotter decomposition and the Hadamard test to obtain the electronic density of states, and we develop a modified quantum phase estimation algorithm to calculate the real-space local density of states via direct quantum measurements. Furthermore, we validate these algorithms by numerically computing the density of states and spatial distributions of electronic states in graphene, twisted bilayer graphene quasicrystals, and fractal lattices, covering system sizes from hundreds to thousands of atoms. Our results show that random-state quantum algorithms provide a general and qubit-efficient route to scalable simulations of electronic properties in large-scale periodic and aperiodic materials.
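The quantity the Hadamard test estimates here is the autocorrelation C(t) = ⟨ψ(0)|e^(−iHt)|ψ(0)⟩ of a random state, whose Fourier transform peaks at the eigenvalues with weight |c_n|². The classical toy below reproduces that relationship for a three-level Hamiltonian given directly in its eigenbasis; it is a numerical illustration of the principle, not the paper's Trotterized quantum circuit, and the eigenvalues, time window, and step size are arbitrary:

```python
import cmath
import random

# toy Hamiltonian, diagonal in its eigenbasis with known eigenvalues E_n
energies = [-1.0, 0.5, 2.0]

# normalized random state |psi> = sum_n c_n |E_n>
random.seed(1)
amps = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in energies]
norm = sum(abs(a) ** 2 for a in amps) ** 0.5
amps = [a / norm for a in amps]

def correlation(t):
    # <psi(0)|psi(t)> with |psi(t)> = exp(-iHt)|psi(0)>
    return sum(abs(a) ** 2 * cmath.exp(-1j * e * t)
               for a, e in zip(amps, energies))

def dos(energy, t_max=200.0, dt=0.05):
    # discretized Fourier transform of the correlation signal;
    # peaks of height |c_n|^2 appear at the eigenvalues E_n
    steps = int(t_max / dt)
    acc = sum(correlation(k * dt) * cmath.exp(1j * energy * k * dt) * dt
              for k in range(steps))
    return acc.real / t_max
```

Summing `dos` over the three eigenvalues recovers nearly the full spectral weight of 1, while values away from the eigenvalues are close to zero; averaging over many random states would converge these weights toward the true density of states.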
Funding: The researchers would like to thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2025).
Abstract: The evolution of cities into digitally managed environments requires computational systems that can operate in real time while supporting predictive and adaptive infrastructure management. Earlier approaches have often advanced one dimension—such as Internet of Things (IoT)-based data acquisition, Artificial Intelligence (AI)-driven analytics, or digital twin visualization—without fully integrating these strands into a single operational loop. As a result, many existing solutions encounter bottlenecks in responsiveness, interoperability, and scalability, while also leaving concerns about data privacy unresolved. This research introduces a hybrid AI–IoT–Digital Twin framework that combines continuous sensing, distributed intelligence, and simulation-based decision support. The design incorporates multi-source sensor data, lightweight edge inference through Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) models, and federated learning enhanced with secure aggregation and differential privacy to maintain confidentiality. A digital twin layer extends these capabilities by simulating city assets such as traffic flows and water networks, generating what-if scenarios, and issuing actionable control signals. Complementary modules, including model compression and synchronization protocols, are embedded to ensure reliability in bandwidth-constrained and heterogeneous urban environments. The framework is validated in two urban domains: traffic management, where it adapts signal cycles based on real-time congestion patterns, and pipeline monitoring, where it anticipates leaks through pressure and vibration data. Experimental results show a 28% reduction in response time, a 35% decrease in maintenance costs, and a marked reduction in false positives relative to conventional baselines. The architecture also demonstrates stability across 50+ edge devices under federated training and resilience to uneven node participation. The proposed system provides a scalable and privacy-aware foundation for predictive urban infrastructure management. By closing the loop between sensing, learning, and control, it reduces operator dependence, enhances resource efficiency, and supports transparent governance models for emerging smart cities.