Funding: Supported in part by the National Natural Science Foundation of China under Grant 52307134, and by the Fundamental Research Funds for the Central Universities (xzy012025022).
Abstract: Active distribution network (ADN) planning is crucial for achieving a cost-effective transition to modern power systems, yet it poses significant challenges as the system scale increases. The advent of quantum computing offers a transformative approach to solving ADN planning problems. To fully leverage the potential of quantum computing, this paper proposes a photonic quantum acceleration algorithm. First, a quantum-accelerated framework for ADN planning is proposed on the basis of coherent photonic quantum computers. The ADN planning model is then formulated and decomposed into a discrete master problem and continuous subproblems to facilitate the quantum optimization process. The photonic quantum-embedded adaptive alternating direction method of multipliers (PQA-ADMM) algorithm is subsequently proposed to equivalently map the discrete master problem onto a quantum-interpretable model, enabling its deployment on a photonic quantum computer. Finally, a comparative analysis with various solvers, including Gurobi, demonstrates that the proposed PQA-ADMM algorithm achieves significant speedup on the modified IEEE 33-node and IEEE 123-node systems, highlighting its effectiveness.
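The PQA-ADMM algorithm itself is not given in the abstract; as a point of reference, the classical ADMM splitting it builds on can be sketched for a toy problem with one discrete (master) variable. The objective, penalty parameter, and projection step below are illustrative assumptions, not the paper's formulation.

```python
# Classical ADMM sketch: minimize (x - 3)^2 subject to x in {0, 1}.
# The continuous x-update plays the role of the subproblem; the
# projection onto the binary set plays the role of the discrete
# master step. All numbers here are illustrative, not from the paper.
rho = 2.0                 # penalty parameter (assumed)
x, z, u = 0.0, 0.0, 0.0   # primal, discrete copy, scaled dual

for _ in range(40):
    # Continuous subproblem: argmin (x - 3)^2 + (rho/2)(x - z + u)^2
    x = (2 * 3 + rho * (z - u)) / (2 + rho)
    # Discrete master step: project x + u onto {0, 1}
    z = 0.0 if x + u < 0.5 else 1.0
    # Dual update drives x and z toward agreement
    u += x - z

print(x, z)  # x converges to the feasible optimum z = 1
```

Here the discrete step is a trivial rounding; in the paper this is the piece mapped onto the photonic quantum hardware.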
Abstract: In this paper, sticker-based DNA computing was used to solve the independent set problem. First, the solution space was constructed using appropriate DNA memory complexes. We defined a new operation called "divide" and applied it in the construction of the solution space. Then, by applying a sticker-based parallel algorithm that uses biological operations, the independent set problem was solved in polynomial time.
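The sticker-model operations are not reproduced here; to make the target problem concrete, a classical brute-force search for a maximum independent set (feasible only for tiny graphs, unlike the massively parallel DNA approach) can be sketched as:

```python
from itertools import combinations

def is_independent(vertices, edges):
    """True if no two chosen vertices are joined by an edge."""
    return not any((u, v) in edges or (v, u) in edges
                   for u, v in combinations(vertices, 2))

def max_independent_set(n, edges):
    """Exhaustive search over vertex subsets, largest first."""
    for r in range(n, 0, -1):
        for cand in combinations(range(n), r):
            if is_independent(cand, edges):
                return set(cand)
    return set()

# Path graph 0-1-2-3: a maximum independent set has size 2.
print(max_independent_set(4, {(0, 1), (1, 2), (2, 3)}))  # {0, 2}
```

The exhaustive search takes exponential time; the DNA algorithm's point is that the biological operations explore the solution space in parallel.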
Abstract: Deep vein thrombosis (DVT) is a common vascular event that is potentially fatal when it leads to pulmonary embolism. Occurring as part of the broader phenomenon of venous thromboembolism (VTE), DVT classically arises when Virchow's triad of hypercoagulability, changes in blood flow (e.g. stasis), and endothelial dysfunction is fulfilled. Although the stasis associated with immobilisation is most often seen in bedbound patients and travellers on long-distance flights, there is increasing evidence that prolonged periods of work or leisure spent seated at a desk using a computer are an independent risk factor. In this report, we present two cases of "e-thrombosis" from prolonged sitting while using a computer.
Abstract: Although AI and quantum computing (QC) are fast emerging as key enablers of the future Internet, experts believe they pose an existential threat to humanity. Responding to the frenzied release of ChatGPT/GPT-4, thousands of alarmed tech leaders recently signed an open letter calling for a pause in AI research to prepare for the catastrophic threats to humanity from uncontrolled AGI (Artificial General Intelligence). Perceived as an "epistemological nightmare", AGI is believed to be on the anvil with GPT-5. Two computing rules appear responsible for these risks: 1) mandatory third-party permissions, which allow computers to run applications at the expense of introducing vulnerabilities; and 2) the Halting Problem of Turing-complete AI programming languages, which potentially renders AGI unstoppable. The double whammy of these inherent weaknesses remains invincible under legacy systems. A recent cybersecurity breakthrough shows that banning all permissions reduces the computer attack surface to zero, delivering a new zero vulnerability computing (ZVC) paradigm. Deploying ZVC and blockchain, this paper formulates and supports a hypothesis: "Safe, secure, ethical, controllable AGI/QC is possible by conquering the two unassailable rules of computability." Pursued by a European consortium, testing/proving the proposed hypothesis will have a groundbreaking impact on the future digital infrastructure when AGI/QC starts powering the 75 billion internet devices by 2025.
Abstract: We are already familiar with computers: they work for us at home, in offices, and in factories. But it is also true that many children today are using computers at school before they can write. What does this mean for the future? Are these children lucky or not?
Funding: Supported by the National Fundamental Research Program under Grant No. 2006CB921106, and by the National Natural Science Foundation of China under Grant Nos. 10325521 and 60433050.
Abstract: In this letter, we propose a duality computing mode, which resembles the particle-wave duality property when a quantum system, such as a quantum computer, passes through a double slit. In this mode, computing operations are not necessarily unitary. The duality mode provides a natural link between classical computing and quantum computing. In addition, it provides a new tool for quantum algorithm design.
Abstract: A new approach to implementing variogram models and ordinary kriging using the R statistical language, in conjunction with Fortran, MPI (the Message Passing Interface), and the "pbdDMAT" package within R, on the Bridges and Stampede supercomputers is described. This new technique has led to great improvements in timing compared with R alone, or R with C and MPI. These improvements include processing and forecasting vectors of size 25,000 in an average time of 6 minutes on the Stampede supercomputer and 2.5 minutes on the Bridges supercomputer, compared with previous processing times of 3.5 hours.
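The paper's R/Fortran/MPI implementation is not shown in the abstract; the ordinary kriging system it parallelizes can be sketched in a few lines. The exponential variogram model, its parameters, and the sample points below are assumptions for illustration only.

```python
import numpy as np

def variogram(h, sill=1.0, rng=2.0):
    # Exponential variogram model (parameters assumed for illustration)
    return sill * (1.0 - np.exp(-np.abs(h) / rng))

# Known sample locations/values on a line, and one prediction location
xs = np.array([0.0, 1.0, 3.0, 4.5])
zs = np.array([1.2, 0.9, 1.8, 1.5])
x0 = 2.0

n = len(xs)
# Ordinary kriging system: [[Gamma, 1], [1^T, 0]] [w; mu] = [gamma0; 1]
A = np.zeros((n + 1, n + 1))
A[:n, :n] = variogram(xs[:, None] - xs[None, :])
A[:n, n] = 1.0
A[n, :n] = 1.0
b = np.append(variogram(xs - x0), 1.0)

sol = np.linalg.solve(A, b)
w = sol[:n]               # kriging weights, constrained to sum to 1
prediction = w @ zs
print(w.sum(), prediction)
```

Solving one such dense system is cheap; the timing gains reported in the abstract come from distributing many large systems of this form across supercomputer nodes.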
Funding: Supported by the National Natural Science Foundation of China (Grant No. 12174224).
Abstract: Nonadiabatic holonomic quantum computers serve as the physical platform for nonadiabatic holonomic quantum computation. As quantum computation has entered the noisy intermediate-scale era, building accurate intermediate-scale nonadiabatic holonomic quantum computers is clearly necessary. Given that measurements are the sole means of extracting information, they play an indispensable role in nonadiabatic holonomic quantum computers. Accordingly, developing methods to reduce measurement errors in these computers is of great importance. However, while much attention has been given to research on nonadiabatic holonomic gates, research on reducing measurement errors in nonadiabatic holonomic quantum computers is severely lacking. In this study, we propose a measurement error reduction method tailored for intermediate-scale nonadiabatic holonomic quantum computers: it can not only reduce measurement errors in the computer but also help mitigate errors originating from nonadiabatic holonomic gates. Given these features, our method significantly advances the construction of accurate intermediate-scale nonadiabatic holonomic quantum computers.
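The abstract does not detail the proposed method; as a standard baseline against which such methods are usually compared, calibration-matrix measurement error mitigation (not the paper's technique) inverts an experimentally estimated confusion matrix. The error rates below are illustrative assumptions.

```python
import numpy as np

# Confusion matrix for a single qubit, estimated from calibration runs:
# column j = distribution of observed outcomes when state |j> is prepared.
# These readout error rates are illustrative assumptions.
M = np.array([[0.97, 0.05],
              [0.03, 0.95]])

p_true = np.array([0.30, 0.70])   # ideal outcome distribution
p_meas = M @ p_true               # what the noisy readout reports

# Mitigation: solve M p = p_meas to recover the ideal distribution
p_mitigated = np.linalg.solve(M, p_meas)
print(p_mitigated)  # recovers [0.30, 0.70]
```

In practice the inversion is applied to measured counts rather than exact probabilities, so the recovery is only approximate and can require clipping to keep the result a valid distribution.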
Abstract: We consider the relevance of computer hardware and simulations not only to science and technology but also to social life. Evolutionary processes are part of all we know, from the physical and inanimate world to the simplest or most complex biological system. Evolution is manifested by landmark discoveries which deeply affect our social life. Demographic pressure, demand for improved living standards, and devastation of the natural environment pose new and complex challenges. We believe that the implementation of new computational models based on the latest scientific methodology can provide a reasonable chance of overcoming today's social problems. To ensure this goal, however, we need a change of mindset, placing findings obtained from modern science above traditional concepts and beliefs. In particular, the type of modeling used with success in the computational sciences must be extended to allow simulations of novel models for social life.
Funding and acknowledgements: The authors thank Mr. Xiaoqiang Yue and Mr. Zheng Li from Xiangtan University for their assistance with the numerical experiments. Feng is partially supported by NSFC Grant 11201398, the Program for Changjiang Scholars and Innovative Research Team in University of China (Grant IRT1179), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant 20124301110003). Shu is partially supported by NSFC Grants 91130002 and 11171281, and by the Scientific Research Fund of the Hunan Provincial Education Department of China (Grant 12A138). Xu is partially supported by NSFC Grant 91130011 and NSF DMS-1217142. Zhang is partially supported by the Dean Startup Fund, Academy of Mathematics and System Sciences, and by NSFC Grant 91130011.
Abstract: The geometric multigrid method (GMG) is one of the most efficient solution techniques for discrete algebraic systems arising from elliptic partial differential equations. GMG utilizes a hierarchy of grids or discretizations and reduces the error at a number of frequencies simultaneously. Graphics processing units (GPUs) have recently burst onto the scientific computing scene as a technology that has yielded substantial performance and energy-efficiency improvements. A central challenge in implementing GMG on GPUs, though, is that computational work on coarse levels cannot fully utilize the capacity of a GPU. In this work, we perform numerical studies of GMG on CPU-GPU heterogeneous computers. Furthermore, we compare our implementation with an efficient CPU implementation of GMG and with the most popular fast Poisson solver, the Fast Fourier Transform, in the cuFFT library developed by NVIDIA.
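The paper's GPU implementation is not reproduced here; a minimal two-grid cycle for the 1D Poisson problem illustrates the smoothing-plus-coarse-grid-correction pattern that GMG applies recursively. Grid sizes, smoother settings, and the right-hand side are illustrative choices.

```python
import numpy as np

def poisson(n, h):
    # 1D Poisson matrix for -u'' with Dirichlet boundaries, n interior points
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def jacobi(A, u, f, sweeps, omega=2/3):
    # Weighted Jacobi smoother: damps high-frequency error components
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / d
    return u

def restrict(r):
    # Full weighting: transfer the fine-grid residual to the coarse grid
    return 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])

def prolong(ec, n):
    # Linear interpolation: transfer the coarse correction to the fine grid
    e = np.zeros(n)
    e[1::2] = ec
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
    return e

n, nc = 15, 7                       # fine/coarse interior points
A, Ac = poisson(n, 1/16), poisson(nc, 1/8)
f, u = np.ones(n), np.zeros(n)
r0 = np.linalg.norm(f - A @ u)

for _ in range(10):                 # two-grid V-cycles
    u = jacobi(A, u, f, 3)          # pre-smooth high frequencies
    ec = np.linalg.solve(Ac, restrict(f - A @ u))  # exact coarse solve
    u += prolong(ec, n)             # coarse-grid correction
    u = jacobi(A, u, f, 3)          # post-smooth
res = np.linalg.norm(f - A @ u)
print(res / r0)  # residual drops by many orders of magnitude
```

The GPU utilization problem the abstract mentions shows up exactly at the coarse solve: in a full multilevel hierarchy the coarsest grids have too few unknowns to keep thousands of GPU threads busy.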
Funding: Supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147 and 242102210027), and by the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101). (Corresponding author: Dong Yuan.)
Abstract: Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating the Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared with existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
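The full multi-objective DDQN/RBFN pipeline is beyond an abstract-level sketch, but the core Double DQN target each agent computes, with the online network selecting the action and the target network evaluating it, can be shown with illustrative numbers (the Q-values, reward, and discount factor below are assumptions):

```python
import numpy as np

gamma = 0.9                                   # discount factor (assumed)
reward = np.array([1.0])                      # per-objective scalar reward
done = np.array([0.0])                        # 1.0 if the episode terminated
q_online_next = np.array([[1.0, 3.0, 2.0]])   # online net Q(s', .)
q_target_next = np.array([[0.5, 1.5, 4.0]])   # target net Q(s', .)

# Double DQN: the online net picks the action, the target net evaluates it,
# which reduces the overestimation bias of vanilla DQN.
a_star = q_online_next.argmax(axis=1)         # -> action 1
y = reward + gamma * (1.0 - done) * q_target_next[np.arange(1), a_star]
print(y)  # 1.0 + 0.9 * 1.5 = [2.35]
```

Note the decoupling: vanilla DQN would use max(q_target_next) = 4.0 here, while Double DQN evaluates the online net's choice (1.5), giving a lower, less biased target.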
Funding: Supported by the National Natural Science Foundation of China (62202215), the Liaoning Province Applied Basic Research Program (Youth Special Project, 2023JH2/101600038), the Shenyang Youth Science and Technology Innovation Talent Support Program (RC220458), the Guangxuan Program of Shenyang Ligong University (SYLUGXRC202216), the Basic Research Special Funds for Undergraduate Universities in Liaoning Province (LJ212410144067), the Natural Science Foundation of Liaoning Province (2024-MS-113), and science and technology funds from the Liaoning Education Department (LJKZ0242).
Abstract: In the field of edge computing, achieving low-latency computational task offloading with limited resources is a critical research challenge, particularly in resource-constrained and latency-sensitive vehicular network environments where rapid response is mandatory for safety-critical applications. In scenarios where edge servers are sparsely deployed, the lack of coordination and information sharing often leads to load imbalance, thereby increasing system latency. Furthermore, in regions without edge server coverage, tasks must be processed locally, which further exacerbates latency issues. To address these challenges, we propose a novel and efficient Deep Reinforcement Learning (DRL)-based approach aimed at minimizing average task latency. The proposed method incorporates three offloading strategies: local computation, direct offloading to the edge server in the local region, and device-to-device (D2D)-assisted offloading to edge servers in other regions. We formulate the task offloading process as a complex latency minimization problem. To solve it, we propose an advanced algorithm based on the Dueling Double Deep Q-Network (D3QN) architecture that incorporates the Prioritized Experience Replay (PER) mechanism. Experimental results demonstrate that, compared with existing offloading algorithms, the proposed method significantly reduces average task latency, enhances user experience, and offers an effective strategy for latency optimization in future edge computing systems under dynamic workloads.
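The D3QN architecture is not sketched here, but the PER mechanism the paper incorporates can be illustrated with proportional prioritization and importance-sampling weights. The capacity, alpha, beta, and TD-error values below are assumptions, not the paper's settings.

```python
import numpy as np

class PrioritizedReplay:
    """Proportional prioritized experience replay (minimal sketch)."""

    def __init__(self, capacity=1000, alpha=0.6, seed=0):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios = [], []
        self.rng = np.random.default_rng(seed)

    def add(self, transition, td_error):
        # Priority grows with the TD error; the epsilon keeps it nonzero
        self.data.append(transition)
        self.prios.append((abs(td_error) + 1e-6) ** self.alpha)
        if len(self.data) > self.capacity:
            self.data.pop(0)
            self.prios.pop(0)

    def sample(self, k, beta=0.4):
        p = np.asarray(self.prios)
        probs = p / p.sum()
        idx = self.rng.choice(len(self.data), size=k, p=probs)
        # Importance-sampling weights correct the bias from non-uniform sampling
        w = (len(self.data) * probs[idx]) ** (-beta)
        return idx, w / w.max()

buf = PrioritizedReplay()
for i in range(100):
    buf.add(("transition", i), td_error=0.1 * (i + 1))
idx, w = buf.sample(8)
print(len(idx), w.max())  # 8 samples, weights normalized so the max is 1.0
```

Production implementations replace the list scan with a sum-tree for O(log n) sampling, and anneal beta toward 1 over training; both refinements are omitted here for brevity.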
Funding: Supported by the NSFC (12474071), the Natural Science Foundation of Shandong Province (ZR2024YQ051, ZR2025QB50), the Guangdong Basic and Applied Basic Research Foundation (2025A1515011191), the Shanghai Sailing Program (23YF1402200, 23YF1402400), the Basic Research Program of Jiangsu (BK20240424), the Open Research Fund of the State Key Laboratory of Crystal Materials (KF2406), the Taishan Scholar Foundation of Shandong Province (tsqn202408006, tsqn202507058), the Young Talent of Lifting Engineering for Science and Technology in Shandong, China (SDAST2024QTB002), and the Qilu Young Scholar Program of Shandong University.
Abstract: As emerging two-dimensional (2D) materials, carbides and nitrides (MXenes) can be solid solutions or organized structures made up of multi-atomic layers. With remarkable and adjustable electrical, optical, mechanical, and electrochemical characteristics, MXenes have shown great potential in brain-inspired neuromorphic computing electronics, including neuromorphic gas sensors, pressure sensors, and photodetectors. This paper provides a forward-looking review of research progress on MXenes in the neuromorphic sensing domain and discusses the critical challenges that need to be resolved. Key bottlenecks, such as insufficient long-term stability under environmental exposure, high costs, scalability limitations in large-scale production, and mechanical mismatch in wearable integration, hinder their practical deployment. Furthermore, unresolved issues such as interfacial compatibility in heterostructures and energy inefficiency in neuromorphic signal conversion demand urgent attention. The review offers insights into future research directions to enhance the fundamental understanding of MXene properties and to promote further integration into neuromorphic computing applications through convergence with various emerging technologies.
Funding: Financially supported by the National Natural Science Foundation of China (Grant No. 12172093) and the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2021A1515012607).
Abstract: High-entropy oxides (HEOs) have emerged as a promising class of memristive materials, characterized by entropy-stabilized crystal structures, multivalent cation coordination, and tunable defect landscapes. These intrinsic features enable forming-free resistive switching, multilevel conductance modulation, and synaptic plasticity, making HEOs attractive for neuromorphic computing. This review outlines recent progress in HEO-based memristors across materials engineering, switching mechanisms, and synaptic emulation. Particular attention is given to vacancy migration, phase transitions, and valence-state dynamics, the mechanisms that underlie the switching behaviors observed in both amorphous and crystalline systems. Their relevance to neuromorphic functions such as short-term plasticity and spike-timing-dependent learning is also examined. While encouraging results have been achieved at the device level, challenges remain in conductance precision, variability control, and scalable integration. Addressing these challenges demands a concerted effort across materials design, interface optimization, and task-aware modeling. With such integration, HEO memristors offer a compelling pathway toward energy-efficient and adaptable brain-inspired electronics.