The advancement of flexible memristors has significantly promoted the development of wearable electronics for emerging neuromorphic computing applications. Inspired by the in-memory computing architecture of the human brain, flexible memristors exhibit great application potential in emulating artificial synapses for high-efficiency, low-power neuromorphic computing. This paper provides a comprehensive overview of flexible memristors from the perspectives of development history, material systems, device structures, mechanical deformation methods, device performance analysis, stress simulation during deformation, and neuromorphic computing applications. Recent advances in flexible electronics are summarized, covering single devices, device arrays, and integration. The challenges and future perspectives of flexible memristors for neuromorphic computing are discussed in depth, paving the way for wearable smart electronics and for applications in large-scale neuromorphic computing and high-order intelligent robotics.
Neuromorphic devices have shown great potential in simulating the functions of biological neurons due to their efficient parallel information processing and low energy consumption. MXene-Ti₃C₂Tₓ, an emerging two-dimensional material, stands out as an ideal candidate for fabricating neuromorphic devices: its exceptional electrical performance and robust mechanical properties make it well suited for this purpose. This review aims to uncover the advantages and properties of MXene-Ti₃C₂Tₓ in neuromorphic devices and to promote its further development. First, we categorize the core physical mechanisms present in MXene-Ti₃C₂Tₓ neuromorphic devices and summarize in detail the reasons for their formation. Then, this work systematically summarizes and classifies advanced techniques for the three main optimization pathways of MXene-Ti₃C₂Tₓ, namely doping engineering, interface engineering, and structural engineering. Significantly, this work highlights innovative applications of MXene-Ti₃C₂Tₓ neuromorphic devices in cutting-edge computing paradigms, particularly near-sensor computing and in-sensor computing. Finally, this review compiles a table that integrates almost all research results involving MXene-Ti₃C₂Tₓ neuromorphic devices and discusses the challenges, development prospects, and feasibility of MXene-Ti₃C₂Tₓ-based neuromorphic devices in practical applications, aiming to lay a solid theoretical foundation and provide technical support for further exploration and application of MXene-Ti₃C₂Tₓ in the field of neuromorphic devices.
An aileron is a crucial control surface for rolling. Any jitter or shaking caused by the aileron mechatronics could have catastrophic consequences for the aircraft's stability, maneuverability, safety, and lifespan. This paper presents a robust solution in the form of fast flutter-suppression digital control logic for edge computing aileron mechatronics (ECAM). We effectively eliminate passive and active oscillating response biases by integrating nonlinear functional parameters and an antiphase hysteresis Schmitt trigger. Our findings demonstrate that self-tuning nonlinear parameters can optimize stability, robustness, and accuracy, while the antiphase hysteresis Schmitt trigger effectively rejects flutter without the need for collaborative navigation and guidance. Our hardware-in-the-loop simulation results confirm that this approach can eliminate aircraft jitter and shaking while ensuring the expected stability and maneuverability. In conclusion, this nonlinear aileron mechatronics with a Schmitt positive-feedback mechanism is a highly effective solution for distributed flight control and active flutter rejection.
Neuromorphic computing has the potential to overcome limitations of traditional silicon technology in machine learning tasks. Recent advancements in large crossbar arrays and silicon-based asynchronous spiking neural networks have led to promising neuromorphic systems. However, developing compact parallel computing technology for integrating artificial neural networks into traditional hardware remains a challenge. Organic computational materials offer affordable, biocompatible neuromorphic devices with exceptional adjustability and energy-efficient switching. Here, the review investigates the advancements made in the development of organic neuromorphic devices. It explores resistive switching mechanisms such as interface-regulated filament growth, molecular-electronic dynamics, nanowire-confined filament growth, and vacancy-assisted ion migration, while proposing methodologies to enhance state retention and conductance adjustment. The survey examines the challenges faced in implementing low-power neuromorphic computing, e.g., reducing device size and improving switching time. The review analyzes the potential of these materials in adjustable, flexible, and low-power applications, viz. biohybrid spiking circuits interacting with biological systems, systems that respond to specific events, robotics, intelligent agents, neuromorphic computing, neuromorphic bioelectronics, and neuroscience, and discusses the prospects of this technology.
As emerging two-dimensional (2D) materials, carbides and nitrides (MXenes) can be solid solutions or ordered structures made up of multi-atomic layers. With remarkable and adjustable electrical, optical, mechanical, and electrochemical characteristics, MXenes have shown great potential in brain-inspired neuromorphic computing electronics, including neuromorphic gas sensors, pressure sensors, and photodetectors. This paper provides a forward-looking review of the research progress of MXenes in the neuromorphic sensing domain and discusses the critical challenges that need to be resolved. Key bottlenecks such as insufficient long-term stability under environmental exposure, high costs, scalability limitations in large-scale production, and mechanical mismatch in wearable integration hinder their practical deployment. Furthermore, unresolved issues such as interfacial compatibility in heterostructures and energy inefficiency in neuromorphic signal conversion demand urgent attention. The review offers insights into future research directions to enhance the fundamental understanding of MXene properties and promote further integration into neuromorphic computing applications through convergence with various emerging technologies.
High-entropy oxides (HEOs) have emerged as a promising class of memristive materials, characterized by entropy-stabilized crystal structures, multivalent cation coordination, and tunable defect landscapes. These intrinsic features enable forming-free resistive switching, multilevel conductance modulation, and synaptic plasticity, making HEOs attractive for neuromorphic computing. This review outlines recent progress in HEO-based memristors across materials engineering, switching mechanisms, and synaptic emulation. Particular attention is given to vacancy migration, phase transitions, and valence-state dynamics, the mechanisms that underlie the switching behaviors observed in both amorphous and crystalline systems. Their relevance to neuromorphic functions such as short-term plasticity and spike-timing-dependent learning is also examined. While encouraging results have been achieved at the device level, challenges remain in conductance precision, variability control, and scalable integration. Addressing these demands a concerted effort across materials design, interface optimization, and task-aware modeling. With such integration, HEO memristors offer a compelling pathway toward energy-efficient and adaptable brain-inspired electronics.
With increasing maritime activities and a rapidly developing maritime economy, the fifth-generation (5G) mobile communication system is expected to be deployed over the ocean. New technologies need to be explored to meet the requirements of ultra-reliable and low latency communications (URLLC) in the maritime communication network (MCN). Mobile edge computing (MEC) can achieve high energy efficiency in MCN at the cost of high control plane latency and low reliability. To address this issue, the mobile edge communications, computing, and caching (MEC3) technology is proposed to sink mobile computing, network control, and storage to the edge of the network. New methods that enable resource-efficient configurations and reduce redundant data transmissions can enable the reliable implementation of computation-intensive and latency-sensitive applications. The key technologies of MEC3 to enable URLLC are analyzed and optimized in MCN. The best response-based offloading algorithm (BROA) is adopted to optimize task offloading. The simulation results show that the task latency can be decreased by 26.5 ms and the energy consumption of terminal users can be reduced to 66.6%.
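The best-response idea behind an offloading algorithm like BROA can be sketched in a few lines. This is an illustrative toy, not the paper's formulation: the cost model, the congestion term, and all parameter names are assumptions. Each user repeatedly switches to whichever option, local execution or offloading, is cheaper given the other users' current choices, until no user wants to deviate (a Nash equilibrium of the offloading game).

```python
# Hypothetical best-response offloading loop. Cost model is illustrative:
# offloading cost grows linearly with how many users share the edge server.

def best_response_offloading(local_cost, edge_base_cost, congestion, max_iter=100):
    """Each user picks the cheaper option (0 = local, 1 = offload) given
    the others' current choices; repeat until no one changes."""
    n = len(local_cost)
    choice = [0] * n                              # start with everyone local
    for _ in range(max_iter):
        changed = False
        for i in range(n):
            k = sum(choice) - choice[i]           # others currently offloading
            edge_cost = edge_base_cost[i] + congestion * (k + 1)
            best = 1 if edge_cost < local_cost[i] else 0
            if best != choice[i]:
                choice[i] = best
                changed = True
        if not changed:                           # no profitable deviation left
            break
    return choice
```

With the toy numbers below, two users offload and the third (whose local cost is tiny) stays local.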
In an MEC-enabled vehicular network with limited wireless and computation resources, stringent delay and high reliability requirements are challenging issues. To reduce the total delay in the network as well as ensure the reliability of Vehicular UE (VUE), a Joint Allocation of Wireless resource and MEC Computing resource (JAWC) algorithm is proposed. The JAWC algorithm includes two steps: V2X link clustering and MEC computation resource scheduling. In the V2X link clustering, a Spectral Radius based Interference Cancellation scheme (SR-IC) is proposed to obtain the optimal resource allocation matrix. By converting the calculation of the SINR into the calculation of the maximum row sum of a matrix, the accumulated interference of VUE can be constrained and the SINR calculation complexity can be effectively reduced. In the MEC computation resource scheduling, by transforming the original optimization problem into a convex problem, the optimal task offloading proportion of VUE and the MEC computation resource allocation can be obtained. The simulation further demonstrates that the JAWC algorithm can significantly reduce the total delay while ensuring the communication reliability of VUE in the MEC-enabled vehicular network.
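The matrix trick mentioned above has a standard linear-algebra backing: the spectral radius of a matrix is bounded by its maximum absolute row sum (ρ(A) ≤ ‖A‖∞), so a cheap row-sum check can replace an eigenvalue computation. A minimal sketch follows; the feasibility test and threshold are illustrative assumptions, not the SR-IC scheme itself.

```python
# Bounding a spectral radius by the maximum absolute row sum (infinity norm),
# which needs only O(n^2) additions instead of an eigenvalue solve.

def max_row_sum_bound(matrix):
    """Upper bound on the spectral radius: max absolute row sum."""
    return max(sum(abs(x) for x in row) for row in matrix)

def cluster_feasible(interference, threshold):
    """Hypothetical check: accept a candidate link cluster only if the
    row-sum bound on its normalized interference matrix stays below the
    threshold, so accumulated interference is constrained for every link."""
    return max_row_sum_bound(interference) < threshold
```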
Networks are composed of servers and a rather large number of terminals, and most threats of attack and viruses come from the terminals. Eliminating malicious code and access, or breaking the conditions under which an attack or virus can be invoked, in those terminals would be the most effective way to protect information systems. The concept of trusted computing is first introduced into terminal virus immunity. Then a model of a security domain mechanism based on trusted computing to protect computers from attacks is proposed by abstracting general information systems. The principles of attack resistance and risk limitation of the model are demonstrated by means of mathematical analysis, and a realization of the model is proposed.
In computation-intensive and delay-sensitive scenarios, UAV-assisted mobile edge computing has been widely studied for its high mobility and low placement cost. However, the energy constraints of UAVs prevent them from operating for long periods, and the modules within an offloaded task often depend on one another. To address this, the dependencies among task modules are modeled with a directed acyclic graph (DAG), the effects of system delay and energy consumption are jointly considered, and the optimal offloading strategy is obtained by minimizing the system cost. To solve this optimization problem, a binary grey wolf optimization algorithm based on subpopulations, Gaussian mutation, and reverse learning (BGWOSGR) is proposed. Simulation results show that the system cost computed by the proposed algorithm is about 19%, 27%, 16%, and 13% lower than that of four comparison methods, with a faster convergence speed.
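The DAG dependency model above can be illustrated with a standard longest-path computation: a module may start only after all its predecessors finish, so the task completion time is the DAG's critical path. A minimal sketch under assumed inputs (module execution times and edges are illustrative; the paper's cost additionally weighs energy consumption):

```python
# Earliest finish time of each task module in a DAG, via Kahn's
# topological order; the makespan is the maximum finish time.
from collections import defaultdict

def dag_finish_times(n, edges, exec_time):
    """n modules, edges (u, v) meaning v depends on u, exec_time[i] per module."""
    pred, succ = defaultdict(list), defaultdict(list)
    indeg = [0] * n
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)
        indeg[v] += 1
    ready = [v for v in range(n) if indeg[v] == 0]
    finish = [0.0] * n
    while ready:
        u = ready.pop()
        start = max((finish[p] for p in pred[u]), default=0.0)
        finish[u] = start + exec_time[u]
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return finish
```

For a diamond-shaped task (0 feeds 1 and 2, which both feed 3), module 3 cannot start before the slower branch finishes.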
Trusted computing technology has been developing quickly in recent years. This technology manages to improve computer security and achieve a trusted computing environment. The core of trusted computing technology is cryptology. In this paper, we analyze the key and credential mechanisms, which are two basic aspects of the cryptology application of trusted computing. We give an example application to illustrate that the TPM-enabled key and credential mechanism can improve the security of a computer system.
With the development of the Internet of Things (IoT), the delay caused by network transmission has led to low data processing efficiency. At the same time, the limited computing power and available energy of IoT terminal devices are important bottlenecks that restrict the application of blockchain, but edge computing can solve this problem. The emergence of edge computing can effectively reduce the delay of data transmission and improve data processing capacity. However, user data in edge computing is usually stored and processed by honest-but-curious authorized entities, which leads to the leakage of users' private information. To solve these problems, this paper proposes a location data collection method that satisfies local differential privacy to protect users' privacy. A Voronoi diagram constructed by the Delaunay method is used to divide the road network space and determine the Voronoi grid region in which each edge node is located. A random disturbance mechanism that satisfies local differential privacy is utilized to perturb the original location data in each Voronoi grid. In addition, the effectiveness of the proposed privacy-preserving mechanism is verified through comparison experiments. Compared with existing privacy-preserving methods, the proposed mechanism not only better meets users' privacy needs but also provides higher data availability.
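Grid-level location perturbation of this kind can be illustrated with k-ary randomized response, a generic mechanism satisfying ε-local differential privacy: the true cell index is reported with probability e^ε/(e^ε + k − 1), otherwise a uniformly random other cell is reported. This is a textbook sketch; the paper's actual disturbance mechanism over Voronoi cells may differ.

```python
# k-ary randomized response over grid-cell indices (generic epsilon-LDP
# mechanism; illustrative of, not identical to, the paper's scheme).
import math
import random

def perturb_cell(true_cell, k, eps, rng=random):
    """Report true_cell with probability e^eps / (e^eps + k - 1),
    otherwise a uniformly random *other* cell among the k cells."""
    p = math.exp(eps) / (math.exp(eps) + k - 1)
    if rng.random() < p:
        return true_cell
    other = rng.randrange(k - 1)          # uniform over the k - 1 other cells
    return other if other < true_cell else other + 1
```

A smaller ε reports the true cell less often, trading data availability for stronger privacy, which is exactly the trade-off the comparison experiments evaluate.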
The centralized radio access cellular network infrastructure based on the centralized Super Base Station (CSBS) is a promising solution to reduce the high construction cost and energy consumption of conventional cellular networks. With CSBS, the computing resources for communication protocol processing can be managed flexibly according to the protocol load to improve resource efficiency. Since the protocol load changes frequently and may exceed the capacity of processors, load balancing is needed. However, existing load balancing mechanisms used in data centers cannot satisfy the real-time requirement of communication protocol processing. Therefore, a new computing resource adjustment scheme is proposed for communication protocol processing in the CSBS architecture. First, the main principles of protocol processing resource adjustment are summarized, followed by an analysis of the processing resource outage probability, i.e., the probability that the computing resources become inadequate for protocol processing as the load changes. Following the adjustment principles, the proposed scheme is designed to reduce the processing resource outage probability based on an optimized connected graph constructed by an approximate Kruskal algorithm. Simulation results show that, compared with conventional load balancing mechanisms, the proposed scheme can greatly reduce the occurrences of inadequate processing resources and the additional resource consumption of adjustment.
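For reference, the classic Kruskal construction that the "approximate Kruskal" graph builds on can be sketched with union-find. The abstract does not specify the approximation, so this is the standard exact version on illustrative edge weights: sort edges by weight and add each edge that connects two previously disconnected components.

```python
# Standard Kruskal minimum spanning tree with union-find (path halving).

def kruskal(n, edges):
    """edges: list of (weight, u, v); returns (MST edge list, total weight)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):           # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # joins two components: keep it
            parent[ru] = rv
            mst.append((u, v))
            total += w
    return mst, total
```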
Because of cloud computing's highly aggregated computation mode, it cannot give full play to the resources of edge devices, such as computing and storage. Fog computing can improve the resource utilization efficiency of edge devices and solve the service computing problem of delay-sensitive applications. This paper studies the framework of fog computing and adopts Cloud Atomization Technology to turn physical nodes at different levels into virtual machine nodes. On this basis, it uses graph partitioning theory to build a load balancing algorithm for fog computing based on dynamic graph partitioning. The simulation results show that the fog computing framework after cloud atomization can build the system network flexibly, and the dynamic load balancing mechanism can effectively configure system resources while reducing the node migration cost brought by system changes.
With the arrival of 5G, latency-sensitive applications are becoming increasingly diverse. Mobile Edge Computing (MEC) technology offers high bandwidth, low latency, and low energy consumption, and has attracted much attention among researchers. To improve the Quality of Service (QoS), this study focuses on computation offloading in MEC. We consider QoS from the perspectives of computational cost, the curse of dimensionality, user privacy, and catastrophic forgetting for new users. A QoS model is established based on delay and energy consumption, and an adaptive task offloading algorithm based on DDQN and Federated Learning (FL) is proposed for MEC. The proposed algorithm combines the QoS model and a deep reinforcement learning algorithm to obtain an optimal offloading policy from the local link and node state information within the channel coherence time, addressing time-varying transmission channels and reducing computing energy consumption and task processing delay. To solve the problems of privacy and catastrophic forgetting, we use FL to make distributed use of multiple users' data to obtain the decision model, protecting data privacy and improving the model's universality. During FL iterations, the communication delay of individual devices can be too large, which affects the overall delay cost. Therefore, we adopt a communication delay optimization algorithm based on a unary outlier detection mechanism to reduce the communication delay of FL. The simulation results indicate that, compared with existing schemes, the proposed method significantly reduces the computation cost on a device and improves QoS when handling complex tasks.
The Internet of Vehicles (IoV) is a new style of vehicular ad hoc network that connects the sensors of each vehicle with each other and with other vehicles' sensors through the internet. These sensors generate different tasks that should be analyzed and processed within a given period of time. They send the tasks to cloud servers, but these sending operations increase bandwidth consumption and latency. Fog computing is a simple cloud at the network edge that is used to process jobs in a short period of time instead of sending them to cloud computing facilities. In some situations, fog computing cannot execute some tasks due to a lack of resources, so it transfers them to cloud computing, which again increases latency and bandwidth occupation. Moreover, several fog servers may be full while other servers are empty, which implies an unfair distribution of jobs. In this research study, we merge the software defined network (SDN) with IoV and fog computing and use parked vehicles as assistant fog computing nodes. This can improve the capabilities of the fog computing layer and help decrease the number of tasks migrated to the cloud servers, which increases the ratio of time-sensitive tasks that meet their deadlines. In addition, a new load balancing strategy is proposed. It works proactively to balance the load locally and globally through the local fog managers and the SDN controller, respectively. The simulation experiments show that the proposed system is more efficient than the VANET-Fog-Cloud and IoV-Fog-Cloud frameworks in terms of average response time, percentage of bandwidth consumption, meeting deadlines, and resource utilization.
In many IIoT architectures, various devices connect to the edge cloud via gateway systems, and large volumes of data are delivered to the edge cloud for processing. Delivering data to an appropriate edge cloud is critical to improving IIoT service efficiency. There are two types of costs in this kind of IoT network: a communication cost and a computing cost. For service efficiency, the communication cost of data transmission should be minimized, and the computing cost in the edge cloud should also be minimized. Therefore, in this paper, the communication cost for data transmission is defined by the delay factor, and the computing cost in the edge cloud is defined by the waiting time implied by the computing intensity. The proposed method selects the edge cloud that minimizes the total of the communication and computing costs; that is, a device chooses a routing path to the selected edge cloud based on these costs. The proposed method controls the data flows in a mesh-structured network and appropriately distributes the data processing load. The performance of the proposed method is validated through extensive computer simulation. When the transition probability from good to bad is 0.3 and the transition probability from bad to good is 0.7 in the wireless and edge cloud states, the proposed method reduces both the average delay and the service pause counts to about 25% of those of the existing method.
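The selection rule described above reduces to an argmin over per-cloud total costs. A minimal sketch with made-up cost values (how the delay factor and the waiting time are actually measured is not specified here):

```python
# Pick the edge cloud minimizing communication delay + computing wait time.

def select_edge_cloud(comm_delay, wait_time):
    """comm_delay[i], wait_time[i]: the two cost components for candidate
    edge cloud i; returns (index of cheapest cloud, its total cost)."""
    costs = [d + w for d, w in zip(comm_delay, wait_time)]
    best = min(range(len(costs)), key=costs.__getitem__)
    return best, costs[best]
```

In the example below, cloud 1 has the shortest path but a long queue, so the device routes to cloud 2 instead: the method trades communication cost against computing cost rather than minimizing either alone.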
Funding: supported by the NSFC (12474071), the Natural Science Foundation of Shandong Province (ZR2024YQ051), the Open Research Fund of the State Key Laboratory of Materials for Integrated Circuits (SKLJC-K2024-12), the Shanghai Sailing Program (23YF1402200, 23YF1402400), the Natural Science Foundation of Jiangsu Province (BK20240424), the Taishan Scholar Foundation of Shandong Province (tsqn202408006), the Young Talent of Lifting Engineering for Science and Technology in Shandong, China (SDAST2024QTB002), and the Qilu Young Scholar Program of Shandong University.
Funding: supported by the National Science Foundation for Distinguished Young Scholars of China (Grant No. 12425209) and the National Natural Science Foundation of China (Grant Nos. U20A20390, 11827803, 12172034, 11822201, 62004056, 62104058, 62271269).
Funding: supported in part by the Aeronautical Science Foundation of China under Grant 2022Z005057001 and the Joint Research Fund of Shanghai Commercial Aircraft System Engineering Science and Technology Innovation Center under CASEF-2023-M19.
Funding: supported by the Ministry of Education (Singapore) (MOE-T2EP50220-0022), the SUTD-MIT International Design Center (Singapore), the SUTD-ZJU IDEA Grant Program (SUTD-ZJU (VP) 201903), the SUTD Kickstarter Initiative (SKI 2021_02_03, SKI 2021_02_17, SKI 2021_01_04), the Agency of Science, Technology and Research (Singapore) (A20G9b0135), and the National Supercomputing Centre (Singapore) (15001618).
Funding: supported by the NSFC (12474071), the Natural Science Foundation of Shandong Province (ZR2024YQ051, ZR2025QB50), the Guangdong Basic and Applied Basic Research Foundation (2025A1515011191), the Shanghai Sailing Program (23YF1402200, 23YF1402400), the Basic Research Program of Jiangsu (BK20240424), the Open Research Fund of the State Key Laboratory of Crystal Materials (KF2406), the Taishan Scholar Foundation of Shandong Province (tsqn202408006, tsqn202507058), the Young Talent of Lifting Engineering for Science and Technology in Shandong, China (SDAST2024QTB002), and the Qilu Young Scholar Program of Shandong University.
Funding: financially supported by the National Natural Science Foundation of China (Grant No. 12172093) and the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2021A1515012607).
Abstract: High-entropy oxides (HEOs) have emerged as a promising class of memristive materials, characterized by entropy-stabilized crystal structures, multivalent cation coordination, and tunable defect landscapes. These intrinsic features enable forming-free resistive switching, multilevel conductance modulation, and synaptic plasticity, making HEOs attractive for neuromorphic computing. This review outlines recent progress in HEO-based memristors across materials engineering, switching mechanisms, and synaptic emulation. Particular attention is given to vacancy migration, phase transitions, and valence-state dynamics, the mechanisms that underlie the switching behaviors observed in both amorphous and crystalline systems. Their relevance to neuromorphic functions such as short-term plasticity and spike-timing-dependent learning is also examined. While encouraging results have been achieved at the device level, challenges remain in conductance precision, variability control, and scalable integration. Addressing these demands a concerted effort across materials design, interface optimization, and task-aware modeling. With such integration, HEO memristors offer a compelling pathway toward energy-efficient and adaptable brain-inspired electronics.
Funding: the National S&T Major Project (No. 2018ZX03001011), the National Key R&D Program (No. 2018YFB1801102), the National Natural Science Foundation of China (No. 61671072), and the Beijing Natural Science Foundation (No. L192025).
Abstract: With increasing maritime activities and a rapidly developing maritime economy, the fifth-generation (5G) mobile communication system is expected to be deployed at the ocean. New technologies need to be explored to meet the requirements of ultra-reliable and low latency communications (URLLC) in the maritime communication network (MCN). Mobile edge computing (MEC) can achieve high energy efficiency in MCN at the cost of high control-plane latency and low reliability. To address this issue, the mobile edge communications, computing, and caching (MEC3) technology is proposed to sink mobile computing, network control, and storage to the edge of the network. New methods that enable resource-efficient configurations and reduce redundant data transmissions can enable the reliable implementation of computation-intensive and latency-sensitive applications. The key technologies of MEC3 to enable URLLC are analyzed and optimized in MCN. The best response-based offloading algorithm (BROA) is adopted to optimize task offloading. The simulation results show that the task latency can be decreased by 26.5 ms, and the energy consumption of terminal users can be reduced to 66.6%.
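The best response-based offloading idea can be illustrated as a simple game in which each user repeatedly picks the cheaper of local execution and offloading, given how many other users currently offload. This is a minimal sketch under an assumed linear congestion model, not the paper's BROA; all cost parameters are illustrative.

```python
def best_response_offloading(local_cost, base_offload_cost, congestion_per_user,
                             max_rounds=100):
    """Each user best-responds to the others' choices until no user wants
    to switch (a pure-strategy equilibrium). Returns a list of booleans:
    True where user i offloads, False where it computes locally."""
    n = len(local_cost)
    offload = [False] * n
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            # Assumed model: offload cost grows linearly with the number
            # of *other* users that offload (congestion at the edge server).
            others = sum(offload) - (1 if offload[i] else 0)
            cost_off = base_offload_cost[i] + congestion_per_user * others
            decision = cost_off < local_cost[i]
            if decision != offload[i]:
                offload[i] = decision
                changed = True
        if not changed:  # equilibrium reached: no profitable deviation
            break
    return offload
```

With `local_cost=[10, 10, 1]`, `base_offload_cost=[2, 2, 5]`, and a congestion of 3 per extra offloader, the first two users offload and the third stays local.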
Funding: This work was supported in part by the National Key R&D Program of China under Grant 2019YFE0114000, in part by the National Natural Science Foundation of China under Grant 61701042, in part by the 111 Project of China (Grant No. B16006), and by the research foundation of the Ministry of Education-China Mobile under Grant MCM20180101.
Abstract: In a MEC-enabled vehicular network with limited wireless and computation resources, stringent delay and high reliability requirements are challenging issues. In order to reduce the total delay in the network as well as ensure the reliability of Vehicular UE (VUE), a Joint Allocation of Wireless resource and MEC Computing resource (JAWC) algorithm is proposed. The JAWC algorithm includes two steps: V2X link clustering and MEC computation resource scheduling. In the V2X link clustering, a Spectral Radius based Interference Cancellation scheme (SR-IC) is proposed to obtain the optimal resource allocation matrix. By converting the calculation of SINR into the calculation of the matrix maximum row sum, the accumulated interference of VUE can be constrained and the SINR calculation complexity can be effectively reduced. In the MEC computation resource scheduling, by transforming the original optimization problem into a convex problem, the optimal task offloading proportion of VUE and MEC computation resource allocation can be obtained. The simulation further demonstrates that the JAWC algorithm can significantly reduce the total delay as well as ensure the communication reliability of VUE in the MEC-enabled vehicular network.
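The row-sum conversion mentioned above rests on a standard fact: for a nonnegative matrix, the spectral radius is bounded above by the maximum row sum, so constraining the row sum constrains the accumulated interference without an eigenvalue computation. A small sketch of that bound, with an illustrative matrix (not the paper's interference model):

```python
def max_row_sum(M):
    """Maximum row sum of a square matrix; for nonnegative matrices this
    upper-bounds the spectral radius (Perron-Frobenius / Gershgorin)."""
    return max(sum(row) for row in M)

def spectral_radius(M, iters=200):
    """Approximate spectral radius of a nonnegative matrix by power
    iteration with max-norm normalization."""
    n = len(M)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        if lam == 0.0:
            return 0.0
        v = [x / lam for x in w]
    return lam
```

For `M = [[0.1, 0.2], [0.3, 0.1]]` the maximum row sum is 0.4, while the true spectral radius is 0.1 + sqrt(0.06) ≈ 0.345, confirming that the cheap row-sum check is a valid (conservative) bound.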
Funding: Supported by the National High-Technology Research and Development Program of China (2002AA1Z2101).
Abstract: Networks are composed of servers and rather large numbers of terminals, and most attack and virus threats come from terminals. Eliminating malicious code and access, or breaking the conditions under which attacks or viruses can be invoked in those terminals, would be the most effective way to protect information systems. The concept of trusted computing was first introduced into terminal virus immunity. Then a model of a security domain mechanism based on trusted computing to protect computers was proposed, abstracted from general information systems. The attack-resistance and risk-limitation principles of the model were demonstrated by means of mathematical analysis, and a realization of the model was proposed.
Abstract: In computation-intensive and delay-sensitive task scenarios, UAV-assisted mobile edge computing has been widely studied for its high mobility and low deployment cost. However, the energy constraint of UAVs prevents them from working for long periods, and the modules within an offloaded task often have dependencies. To address this, the dependencies among task modules are modeled on the basis of a directed acyclic graph (DAG), and, jointly considering system delay and energy consumption, the optimal offloading strategy is obtained with minimization of the system cost as the optimization objective. To solve this optimization problem, a binary grey wolf optimization algorithm based on subpopulation, Gaussian mutation, and reverse learning (BGWOSGR) is proposed. Simulation results show that the system cost computed by the proposed algorithm is about 19%, 27%, 16%, and 13% lower than four comparison methods, with faster convergence.
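One ingredient of the DAG formulation above, evaluating when each task module can finish given its dependencies, can be sketched as follows. This is an illustrative cost-evaluation piece only (assuming unlimited parallelism), not the BGWOSGR optimizer; the example task graph is made up.

```python
def dag_finish_times(deps, exec_time):
    """Earliest finish time of each task module in a dependency DAG.

    deps[i]      -- indices of modules that must finish before module i starts
    exec_time[i] -- execution time of module i (local or offloaded)
    Returns the list of earliest finish times; the makespan is their max."""
    finish = [None] * len(exec_time)

    def ft(i):
        if finish[i] is None:
            # a module starts once its slowest predecessor has finished
            finish[i] = max((ft(j) for j in deps[i]), default=0.0) + exec_time[i]
        return finish[i]

    return [ft(i) for i in range(len(exec_time))]
```

For a diamond-shaped graph (module 3 depends on 1 and 2, which both depend on 0) with times `[1, 2, 3, 1]`, the finish times are `[1, 3, 4, 5]`, i.e. a makespan of 5; an offloading decision changes `exec_time` and hence this cost.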
Funding: Supported by the National Natural Science Foundation of China (60373087, 60473023, 90104005) and the HP Laboratory of China.
Abstract: Trusted computing technology has been developing quickly in recent years. This technology manages to improve computer security and achieve a trusted computing environment. The core of trusted computing technology is cryptology. In this paper, we analyze the key and credential mechanism, two basic aspects of the cryptology application of trusted computing. We give an example application to illustrate that the TPM-enabled key and credential mechanism can improve the security of a computer system.
Abstract: With the development of the Internet of Things (IoT), the delay caused by network transmission has led to low data processing efficiency. At the same time, the limited computing power and available energy of IoT terminal devices are also important bottlenecks that restrict the application of blockchain, but edge computing can solve this problem. The emergence of edge computing can effectively reduce the delay of data transmission and improve data processing capacity. However, user data in edge computing is usually stored and processed in honest-but-curious authorized entities, which leads to the leakage of users' private information. In order to solve these problems, this paper proposes a location data collection method that satisfies local differential privacy to protect users' privacy. A Voronoi diagram constructed by the Delaunay method is used to divide the road network space and determine the Voronoi grid region where the edge nodes are located. A random disturbance mechanism that satisfies local differential privacy is utilized to disturb the original location data in each Voronoi grid. In addition, the effectiveness of the proposed privacy-preserving mechanism is verified through comparison experiments. Compared with existing privacy-preserving methods, the proposed mechanism can not only better meet users' privacy needs but also achieve higher data availability.
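A standard way to perturb a discrete location (such as a Voronoi grid cell) under ε-local differential privacy is k-ary randomized response. The sketch below is a generic LDP mechanism of that kind, not necessarily the paper's exact disturbance scheme; the grid of 10 cells and ε = 1 are illustrative.

```python
import math
import random

def randomized_response(true_cell, cells, epsilon, rng):
    """k-ary randomized response over grid cells: report the true cell with
    probability e^eps / (e^eps + k - 1), otherwise a uniformly random other
    cell. The ratio between reporting any two cells is at most e^eps, which
    is exactly the epsilon-LDP guarantee."""
    k = len(cells)
    p_true = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p_true:
        return true_cell
    others = [c for c in cells if c != true_cell]
    return rng.choice(others)
```

Because each cell's report probability is known, an aggregator can later debias the collected histogram; stronger ε (smaller values) trades accuracy for privacy.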
Funding: supported in part by the National Science Foundation of China under Grant number 61431001 and the Beijing Talents Fund under Grant number 2015000021223ZK31.
Abstract: The centralized radio access cellular network infrastructure based on the centralized Super Base Station (CSBS) is a promising solution to reduce the high construction cost and energy consumption of conventional cellular networks. With CSBS, the computing resource for communication protocol processing can be managed flexibly according to the protocol load to improve resource efficiency. Since the protocol load changes frequently and may exceed the capacity of processors, load balancing is needed. However, existing load balancing mechanisms used in data centers cannot satisfy the real-time requirement of communication protocol processing. Therefore, a new computing resource adjustment scheme is proposed for communication protocol processing in the CSBS architecture. First, the main principles of protocol processing resource adjustment are concluded, followed by an analysis of the processing resource outage probability, i.e., the probability that the computing resource becomes inadequate for protocol processing as the load changes. Following the adjustment principles, the proposed scheme is designed to reduce the processing resource outage probability based on the optimized connected graph, which is constructed by the approximate Kruskal algorithm. Simulation results show that, compared with conventional load balancing mechanisms, the proposed scheme can greatly reduce the occurrences of inadequate processing resources and the additional resource consumption of adjustment.
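For reference, the classical Kruskal algorithm that the paper's approximate variant builds on grows a minimum-weight connected graph by adding the cheapest edge that does not create a cycle, tracked with a union-find structure. A minimal sketch with an illustrative 4-node graph (the paper's approximation and edge weights are not reproduced here):

```python
def kruskal(n, edges):
    """Minimum spanning tree by Kruskal's algorithm with union-find.
    edges: iterable of (weight, u, v). Returns (total_weight, chosen_edges)."""
    parent = list(range(n))

    def find(x):
        # find the set root, halving the path as we go
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, chosen = 0, []
    for w, u, v in sorted(edges):          # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components
            parent[ru] = rv
            total += w
            chosen.append((u, v))
    return total, chosen
```

On edges `(1,0,1), (2,1,2), (3,0,2), (4,2,3)` the algorithm skips the weight-3 edge (it would close a cycle) and returns a tree of total weight 7.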
Funding: supported in part by the National Science and Technology Support Program of P.R. China (No. 2014BAH29F05).
Abstract: Because of cloud computing's highly aggregated computation mode, it cannot fully exploit the resources of edge devices, such as computing and storage. Fog computing can improve the resource utilization efficiency of edge devices and solve the problem of service computing for delay-sensitive applications. This paper researches the framework of fog computing and adopts Cloud Atomization Technology to turn physical nodes at different levels into virtual machine nodes. On this basis, it uses graph partitioning theory to build a load balancing algorithm for fog computing based on dynamic graph partitioning. The simulation results show that the fog computing framework after cloud atomization can build the system network flexibly, and the dynamic load balancing mechanism can effectively configure system resources while reducing the node-migration cost brought by system changes.
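To make the load-balancing goal concrete, here is a deliberately simple greedy balancing heuristic (longest-processing-time first): assign each node, heaviest load first, to the currently lightest partition. This is only a baseline sketch of balanced partitioning, much simpler than the paper's dynamic graph-partitioning algorithm, and it ignores edge cuts between nodes; the load values are made up.

```python
def greedy_balance(node_loads, k):
    """Greedy k-way balancing: place nodes, heaviest first, onto the
    currently least-loaded part. Returns (parts, per-part load sums)."""
    parts = [[] for _ in range(k)]
    sums = [0] * k
    for node, load in sorted(node_loads.items(), key=lambda kv: -kv[1]):
        i = min(range(k), key=sums.__getitem__)  # lightest part so far
        parts[i].append(node)
        sums[i] += load
    return parts, sums
```

A graph-aware partitioner would additionally penalize cutting heavily-communicating node pairs apart, which is where graph partitioning theory improves on this baseline.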
基金supported by the National Natural Science Foundation of China(62032013,62072094Liaoning Province Science and Technology Fund Project(2020MS086)+1 种基金Shenyang Science and Technology Plan Project(20206424)the Fundamental Research Funds for the Central Universities(N2116014,N180101028)CERNET Innovation Project(NGII20190504).
Abstract: With the arrival of 5G, latency-sensitive applications are becoming increasingly diverse. Mobile Edge Computing (MEC) technology has the characteristics of high bandwidth, low latency, and low energy consumption, and has attracted much attention among researchers. To improve the Quality of Service (QoS), this study focuses on computation offloading in MEC. We consider QoS from the perspectives of computational cost, the curse of dimensionality, user privacy, and catastrophic forgetting for new users. The QoS model is established based on delay and energy consumption, and an adaptive task offloading algorithm based on DDQN and Federated Learning (FL) is proposed for MEC. The proposed algorithm combines the QoS model and a deep reinforcement learning algorithm to obtain an optimal offloading policy according to the local link and node state information within the channel coherence time, addressing the problem of time-varying transmission channels and reducing computing energy consumption and task processing delay. To solve the problems of privacy and catastrophic forgetting, we use FL to make distributed use of multiple users' data to obtain the decision model, protect data privacy, and improve model universality. In the FL iteration process, the communication delay of individual devices can be too large, which affects the overall delay cost. Therefore, we adopt a communication delay optimization algorithm based on a unary outlier detection mechanism to reduce the communication delay of FL. The simulation results indicate that, compared with existing schemes, the proposed method significantly reduces the computation cost on a device and improves the QoS when handling complex tasks.
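The DDQN component above uses the standard Double DQN update: the online network selects the next action, the target network evaluates it, which reduces the overestimation bias of plain Q-learning. A minimal sketch of that target computation (the paper's network architecture and state/action spaces are not reproduced; the Q-values below are illustrative):

```python
def ddqn_target(reward, gamma, q_online_next, q_target_next, done):
    """Double DQN bootstrap target for one transition.

    q_online_next -- online-network Q-values for the next state (selects)
    q_target_next -- target-network Q-values for the next state (evaluates)
    """
    if done:
        return reward  # terminal transition: no bootstrapping
    a_star = max(range(len(q_online_next)), key=q_online_next.__getitem__)
    return reward + gamma * q_target_next[a_star]
```

With `q_online_next = [1.0, 3.0, 2.0]` and `q_target_next = [0.5, 1.5, 4.0]`, the online net picks action 1, so the target is `r + γ·1.5` rather than `r + γ·4.0`, precisely the decoupling that tames overestimation.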
Abstract: The Internet of Vehicles (IoV) is a style of vehicular ad hoc network used to connect each vehicle's sensors with each other and with other vehicles' sensors through the internet. These sensors generate different tasks that should be analyzed and processed within a given period of time. They send the tasks to cloud servers, but these sending operations increase bandwidth consumption and latency. Fog computing is a simple cloud at the network edge that is used to process jobs in a short period of time instead of sending them to cloud computing facilities. In some situations, fog computing cannot execute certain tasks due to lack of resources, so it transfers them to cloud computing, which again increases latency and bandwidth occupation. Moreover, several fog servers may be overloaded while other servers are idle, implying an unfair distribution of jobs. In this research study, we merge the software defined network (SDN) with IoV and fog computing and use parked vehicles as assistant fog computing nodes. This can improve the capabilities of the fog computing layer and help decrease the number of tasks migrated to the cloud servers, increasing the ratio of time-sensitive tasks that meet their deadline. In addition, a new load balancing strategy is proposed. It works proactively to balance the load locally and globally, by the local fog managers and the SDN controller, respectively. The simulation experiments show that the proposed system is more efficient than the VANET-Fog-Cloud and IoV-Fog-Cloud frameworks in terms of average response time, percentage of bandwidth consumption, meeting the deadline, and resource utilization.
基金supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (No.2021R1C1C1013133)supported by the Institute of Information and Communications Technology Planning and Evaluation (IITP)grant funded by the Korea Government (MSIT) (RS-2022-00167197,Development of Intelligent 5G/6G Infrastructure Technology for The Smart City)supported by the Soonchunhyang University Research Fund.
Abstract: In many IIoT architectures, various devices connect to the edge cloud via gateway systems, and large amounts of data are delivered to the edge cloud for processing. Delivering data to an appropriate edge cloud is critical to improving IIoT service efficiency. There are two types of costs in this kind of IoT network: a communication cost and a computing cost. For service efficiency, the communication cost of data transmission should be minimized, and the computing cost in the edge cloud should also be minimized. Therefore, in this paper, the communication cost for data transmission is defined as the delay factor, and the computing cost in the edge cloud is defined as the waiting time of the computing intensity. The proposed method selects an edge cloud that minimizes the total of the communication and computing costs; that is, a device chooses a routing path to the selected edge cloud based on these costs. The proposed method controls the data flows in a mesh-structured network and appropriately distributes the data processing load. The performance of the proposed method is validated through extensive computer simulation. When the transition probability from good to bad is 0.3 and the transition probability from bad to good is 0.7 in the wireless and edge cloud states, the proposed method reduced both the average delay and the service pause counts to about 25% of the existing method.
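The selection rule described above, choosing the edge cloud that minimizes communication delay plus computing wait, reduces to an argmin over candidate clouds. A minimal sketch with made-up per-cloud costs (the paper's delay and waiting-time models are not reproduced):

```python
def select_edge_cloud(comm_delay, queue_wait):
    """Pick the edge cloud minimizing total cost = communication delay
    + computing wait time. Both inputs are per-cloud lists in the same
    time unit; returns (chosen index, its total cost)."""
    costs = [d + w for d, w in zip(comm_delay, queue_wait)]
    best = min(range(len(costs)), key=costs.__getitem__)
    return best, costs[best]
```

For example, with communication delays `[5, 2, 4]` and queue waits `[1, 6, 1]`, cloud 2 wins with total cost 5: a nearby but congested cloud (index 1) loses to a slightly farther, idle one, which is the load-spreading behavior the paper aims for.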