The rise of time-sensitive applications with broad geographical scope drives the development of time-sensitive networking (TSN) from intra-domain to inter-domain to ensure overall end-to-end connectivity requirements in heterogeneous deployments. When multiple TSN networks interconnect over non-TSN networks, all devices in the network need to be synchronized by sharing a uniform time reference. However, most non-TSN networks are best-effort. Path delay asymmetry and random noise accumulation can introduce unpredictable time errors during end-to-end time synchronization. These factors can degrade synchronization performance. Therefore, cross-domain time synchronization becomes a challenging issue for multiple TSN networks interconnected by non-TSN networks. This paper presents a cross-domain time synchronization scheme that follows the software-defined TSN (SD-TSN) paradigm. It utilizes a combined control plane constructed by a coordinate controller and a domain controller for centralized control and management of cross-domain time synchronization. The general operation flow of the cross-domain time synchronization process is designed. The mechanism of cross-domain time synchronization is revealed by introducing a synchronization model and an error compensation method. A TSN cross-domain prototype testbed is constructed for verification. Results show that the scheme can achieve end-to-end high-precision time synchronization with accuracy and stability.
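The abstract does not spell out the synchronization model or the error compensation method, but the core difficulty it names, path delay asymmetry, can be illustrated with a standard two-way time-transfer calculation. The sketch below is a minimal, hypothetical illustration rather than the paper's algorithm: the timestamp names and the asymmetry-correction term are assumptions, and it only shows how the usual offset estimate is biased by half the forward/reverse delay difference and how a known asymmetry estimate can compensate for it.

```python
# Minimal sketch of two-way time transfer over a best-effort path (illustrative only).
# t1: request sent (master clock), t2: request received (slave clock),
# t3: reply sent (slave clock),  t4: reply received (master clock).

def offset_estimate(t1, t2, t3, t4):
    """Standard two-way estimate; assumes symmetric path delay."""
    return ((t2 - t1) - (t4 - t3)) / 2.0

def compensated_offset(t1, t2, t3, t4, asym_estimate):
    """Subtract half of the estimated forward-minus-reverse delay difference."""
    return offset_estimate(t1, t2, t3, t4) - asym_estimate / 2.0

# Toy example: true offset 5 us, forward delay 400 us, reverse delay 100 us.
true_offset, d_fwd, d_rev = 5.0, 400.0, 100.0
t1 = 0.0
t2 = t1 + d_fwd + true_offset          # timestamps taken on the slave clock
t3 = t2 + 10.0
t4 = t3 - true_offset + d_rev          # back on the master clock

print(offset_estimate(t1, t2, t3, t4))                    # biased by (d_fwd - d_rev)/2 = 150
print(compensated_offset(t1, t2, t3, t4, d_fwd - d_rev))  # recovers the true offset, ~5.0
```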
Accurate early classification of elephant flows (elephants) is important for network management and resource optimization. Elephant models, mainly based on the byte count of flows, can always achieve high accuracy, but not in a time-efficient manner. The time efficiency becomes even worse when the flows to be classified are sampled by flow entry timeout over Software-Defined Networks (SDNs) to achieve a better resource efficiency. This paper addresses this situation by combining co-training and Reinforcement Learning (RL) to enable a closed-loop classification approach that divides the entire classification process into episodes, each involving two elephant models. One predicts elephants and is retrained by a selection of flows automatically labeled online by the other. RL is used to formulate a reward function that estimates the values of the possible actions based on the current states of both models and further adjusts the ratio of flows to be labeled in each phase. Extensive evaluation based on real traffic traces shows that the proposed approach can stably predict elephants using the packets received in the first 10% of their lifetime with an accuracy of over 80%, and using only about 10% more control channel bandwidth than the baseline over the evolved SDNs.
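The abstract describes the closed-loop idea (two models per episode, one auto-labeling flows online to retrain the other, with RL tuning the labeling ratio) but not its concrete form. Below is a minimal, hypothetical sketch of that loop that uses an epsilon-greedy bandit in place of the paper's reward formulation; the model interfaces, the candidate ratios, and the reward shaping are all assumptions made for illustration.

```python
import random

RATIOS = [0.05, 0.10, 0.20]          # candidate fractions of flows to auto-label per episode
q = {r: 0.0 for r in RATIOS}         # bandit value estimates per ratio
n = {r: 0 for r in RATIOS}

def run_episode(predictor, labeler, flows, ratio):
    """Predictor classifies early packets; labeler auto-labels a subset to retrain it."""
    preds = [predictor.predict(f) for f in flows]
    subset = random.sample(flows, max(1, int(ratio * len(flows))))
    labeled = [(f, labeler.label(f)) for f in subset]   # pseudo-labels from the mature model
    predictor.retrain(labeled)
    acc = predictor.evaluate(flows)                     # delayed / held-out ground truth
    bandwidth_cost = ratio                              # more labeling -> more control traffic
    return acc - 0.5 * bandwidth_cost                   # toy reward: accuracy vs. overhead

def choose_ratio(eps=0.1):
    if random.random() < eps:
        return random.choice(RATIOS)
    return max(RATIOS, key=lambda r: q[r])

def update(ratio, reward):
    n[ratio] += 1
    q[ratio] += (reward - q[ratio]) / n[ratio]          # incremental mean of observed rewards
```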
Spiking neural networks (SNN) represent a paradigm shift toward discrete, event-driven neural computation that mirrors biological brain mechanisms. This survey systematically examines current SNN research, focusing on training methodologies, hardware implementations, and practical applications. We analyze four major training paradigms: ANN-to-SNN conversion, direct gradient-based training, spike-timing-dependent plasticity (STDP), and hybrid approaches. Our review encompasses major specialized hardware platforms: Intel Loihi, IBM TrueNorth, SpiNNaker, and BrainScaleS, analyzing their capabilities and constraints. We survey applications spanning computer vision, robotics, edge computing, and brain-computer interfaces, identifying where SNN provide compelling advantages. Our comparative analysis reveals SNN offer significant energy efficiency improvements (1000-10000× reduction) and natural temporal processing, while facing challenges in scalability and training complexity. We identify critical research directions including improved gradient estimation, standardized benchmarking protocols, and hardware-software co-design approaches. This survey provides researchers and practitioners with a comprehensive understanding of current SNN capabilities, limitations, and future prospects.
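For readers new to the event-driven computation the survey covers, the leaky integrate-and-fire (LIF) neuron is the usual entry point, so a plain-Python simulation of one LIF neuron is sketched below. The decay factor, threshold, and input spike train are arbitrary illustrative values, not parameters taken from the survey.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential leaks toward rest,
# integrates weighted input spikes, and emits a spike (then resets) when it crosses threshold.

def simulate_lif(input_spikes, weight=0.6, decay=0.9, threshold=1.0, v_reset=0.0):
    v, out = v_reset, []
    for s in input_spikes:                 # one entry per discrete time step (0 or 1)
        v = decay * v + weight * s         # leak + integrate
        if v >= threshold:
            out.append(1)
            v = v_reset                    # hard reset after firing
        else:
            out.append(0)
    return out

if __name__ == "__main__":
    train = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
    print(simulate_lif(train))             # sparse output spikes driven by input timing
```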
On-device Artificial Intelligence (AI) accelerators capable of not only inference but also training neural network models are in increasing demand in the industrial AI field, where frequent retraining is crucial due to frequent production changes. Batch normalization (BN) is fundamental to training convolutional neural networks (CNNs), but its implementation in compact accelerator chips remains challenging due to computational complexity, particularly in calculating statistical parameters and gradients across mini-batches. Existing accelerator architectures either compromise the training accuracy of CNNs through approximations or require substantial computational resources, limiting their practical deployment. We present a hardware-optimized BN accelerator that maintains training accuracy while significantly reducing computational overhead through three novel techniques: (1) resource sharing for efficient resource utilization across forward and backward passes, (2) interleaved buffering for reduced dynamic random-access memory (DRAM) access latencies, and (3) zero-skipping for minimal gradient computation. Implemented on a VCU118 Field Programmable Gate Array (FPGA) at 100 MHz and validated using You Only Look Once version 2-tiny (YOLOv2-tiny) on the PASCAL Visual Object Classes (VOC) dataset, our normalization accelerator achieves a 72% reduction in processing time and 83% lower power consumption compared to a 2.4 GHz Intel Central Processing Unit (CPU) software normalization implementation, while maintaining accuracy (0.51% mean Average Precision (mAP) drop at floating-point 32 bits (FP32), 1.35% at brain floating-point 16 bits (bfloat16)). When integrated into a neural processing unit (NPU), the design demonstrates 63% and 97% performance improvements over AMD CPU and Reduced Instruction Set Computing-V (RISC-V) implementations, respectively. These results confirm that efficient hardware implementation of standard batch normalization is achievable without sacrificing accuracy, enabling efficient, high-accuracy, and power-saving on-device CNN training for modern networks.
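The computational load the accelerator targets comes from the per-mini-batch statistics of the BN forward pass and the reductions of its backward pass. These reference equations are standard; the NumPy sketch below restates them for an (N, C) activation tensor so the quantities being accelerated (mean, variance, dgamma, dbeta, dx) are explicit. It is a software reference only and says nothing about the paper's hardware datapath.

```python
import numpy as np

def bn_forward(x, gamma, beta, eps=1e-5):
    # x: (N, C) mini-batch; statistics are reduced over the batch axis.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    xhat = (x - mu) / np.sqrt(var + eps)
    return gamma * xhat + beta, (xhat, var, eps, gamma)

def bn_backward(dout, cache):
    # Reductions over the batch axis dominate the backward cost.
    xhat, var, eps, gamma = cache
    N = dout.shape[0]
    dgamma = (dout * xhat).sum(axis=0)
    dbeta = dout.sum(axis=0)
    dxhat = dout * gamma
    dx = (dxhat * N - dxhat.sum(axis=0) - xhat * (dxhat * xhat).sum(axis=0)) \
         / (N * np.sqrt(var + eps))
    return dx, dgamma, dbeta

x = np.random.randn(8, 4)
y, cache = bn_forward(x, gamma=np.ones(4), beta=np.zeros(4))
dx, dgamma, dbeta = bn_backward(np.ones_like(y), cache)
```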
Zero Trust Network (ZTN) enhances network security through strict authentication and access control. However, in the ZTN, optimizing flow control to improve the quality of service is still facing challenges. Software Defined Network (SDN) provides solutions through centralized control and dynamic resource allocation, but the existing scheduling methods based on Deep Reinforcement Learning (DRL) are insufficient in terms of convergence speed and dynamic optimization capability. To solve these problems, this paper proposes DRL-AMIR, an efficient flow scheduling method for software defined ZTN. This method constructs a flow scheduling optimization model that comprehensively considers service delay, bandwidth occupation, and path hops. Additionally, it balances the differentiated requirements of delay-critical K-flows, bandwidth-intensive D-flows, and background B-flows through adaptive weighting. The proposed framework employs a customized state space comprising node labels, link bandwidth, delay metrics, and path length. It incorporates an action space derived from node weights and a hybrid reward function that integrates both single-step and multi-step excitation mechanisms. Based on these components, a hierarchical architecture is designed, effectively integrating the data plane, control plane, and knowledge plane. In particular, an adaptive expert mechanism is introduced, which triggers the shortest path algorithm in the training process to accelerate convergence, reduce trial-and-error costs, and maintain stability. Experiments across diverse real-world network topologies demonstrate that DRL-AMIR achieves a 15–20% reduction in K-flow transmission delays, a 10–15% improvement in link bandwidth utilization compared to SPR, QoSR, and DRSIR, and a 30% faster convergence speed via the adaptive expert mechanism.
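The abstract names the ingredients of the reward (delay, bandwidth occupation, hop count, adaptive per-class weights, and single-step plus multi-step terms) but not the exact expression. The snippet below is only a plausible shape for such a reward, with made-up weights and normalizations; it is not DRL-AMIR's actual formula.

```python
# Hypothetical per-step reward combining the objectives named in the abstract.
CLASS_WEIGHTS = {                      # adaptive weighting would tune these online
    "K": {"delay": 0.7, "bw": 0.2, "hops": 0.1},   # delay-critical flows
    "D": {"delay": 0.2, "bw": 0.7, "hops": 0.1},   # bandwidth-intensive flows
    "B": {"delay": 0.2, "bw": 0.2, "hops": 0.6},   # background flows
}

def step_reward(flow_class, delay, delay_max, bw_util, hops, hops_max):
    w = CLASS_WEIGHTS[flow_class]
    r_delay = 1.0 - min(delay / delay_max, 1.0)     # lower delay is better
    r_bw = bw_util                                  # reward keeping links well utilized
    r_hops = 1.0 - min(hops / hops_max, 1.0)        # shorter paths are better
    return w["delay"] * r_delay + w["bw"] * r_bw + w["hops"] * r_hops

def episode_reward(step_rewards, gamma=0.9):
    """Multi-step term: discounted sum over the steps of one scheduling episode."""
    return sum(r * gamma ** i for i, r in enumerate(step_rewards))
```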
In-optical-sensor computing architectures based on neuro-inspired optical sensor arrays have become key milestones for in-sensor artificial intelligence (AI) technology, enabling intelligent vision sensing and extensive data processing. These architectures must demonstrate potential advantages in terms of mass production and complementary metal oxide semiconductor compatibility. Here, we introduce a visible-light-driven neuromorphic vision system that integrates front-end retinomorphic photosensors with a back-end artificial neural network (ANN), employing a single neuro-inspired indium-gallium-zinc-oxide phototransistor (NIP) featuring an aluminum sensitization layer (ASL). By methodically adjusting the ASL coverage on IGZO phototransistors, a fast-switching response type and a synaptic response type of IGZO phototransistors are successfully developed. Notably, the fabricated NIP shows remarkable retina-like photoinduced synaptic plasticity under wavelengths up to 635 nm, with over 256 states, weight update nonlinearity below 0.1, and a dynamic range of 64.01. Owing to this technology, a 6×6 neuro-inspired optical image sensor array with the NIP can perform highly integrated sensing, memory, and preprocessing functions, including contrast enhancement and handwritten digit image recognition. The demonstrated prototype highlights the potential for efficient hardware implementations in in-sensor AI technologies.
The aim of this article is to explore potential directions for the development of artificial intelligence (AI). It points out that, while current AI can handle the statistical properties of complex systems, it has difficulty effectively processing and fully representing their spatiotemporal complexity patterns. The article also discusses a potential path of AI development in the engineering domain. Based on the existing understanding of the principles of multilevel complexity, this article suggests that consistency among the logical structures of datasets, AI models, model-building software, and hardware will be an important AI development direction and is worthy of careful consideration.
Distributed denial of service (DDoS) attacks are common network attacks that primarily target Internet of Things (IoT) devices. They are a critical concern for emerging wireless services, especially for applications with strict latency requirements. DDoS attacks pose significant risks to entrepreneurial businesses, preventing legitimate customers from accessing their websites, so service requests require intelligent analytics before they are processed. DDoS attacks exploit vulnerabilities in IoT devices by launching multi-point distributed attacks that generate massive traffic, overwhelm the victim's network, and disrupt normal operations. Their consequences are typically more severe in software-defined networks (SDNs) than in traditional networks, because the centralised architecture can exacerbate existing vulnerabilities that are not effectively addressed in this model. The primary objective in detecting and mitigating DDoS attacks in SDNs is to monitor traffic patterns, identify anomalies that indicate an attack, and implement countermeasures that ensure network reliability and availability by leveraging the flexibility and programmability of SDN to respond adaptively to threats. The authors present a mechanism that leverages the OpenFlow and sFlow protocols to counter the threats posed by DDoS attacks. The results indicate that the proposed model effectively mitigates the negative effects of DDoS attacks in an SDN environment.
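The abstract does not state which detection statistic the sFlow samples feed; a common choice in the SDN DDoS literature is the entropy of destination addresses, which collapses when many sources converge on one victim. The sketch below uses that statistic purely as an illustration: the sample format, window size, and threshold are assumptions, and the mitigation step is only a placeholder for installing an OpenFlow drop or rate-limit rule through the controller.

```python
import math
from collections import Counter

def dst_entropy(sampled_packets):
    """Shannon entropy of destination IPs in one observation window."""
    counts = Counter(p["dst"] for p in sampled_packets)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def detect_and_mitigate(window, baseline_entropy, threshold=0.5):
    h = dst_entropy(window)
    if h < baseline_entropy * threshold:          # traffic concentrating on few victims
        victim = Counter(p["dst"] for p in window).most_common(1)[0][0]
        # Placeholder: a real deployment would push an OpenFlow rule via the controller
        # (e.g., drop or rate-limit flows toward `victim`).
        return {"alert": True, "victim": victim, "entropy": h}
    return {"alert": False, "entropy": h}

normal = [{"dst": f"10.0.0.{i % 50}"} for i in range(1000)]
attack = [{"dst": "10.0.0.7"} for _ in range(900)] + normal[:100]
base = dst_entropy(normal)
print(detect_and_mitigate(attack, base))          # flags 10.0.0.7 as the likely victim
```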
A hardware demodulation method for 2-D edge detection is proposed. The filtering step and the differential step are implemented by using a hardware circuit. This demodulation circuit simplifies the edge finder and reduces the measuring cycle. The calibration method of scale setting is also presented, and by measuring some calibrated objects, the demodulation errors and the error correction table are obtained.
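As a software reference for what the hardware filtering and differentiation stages compute, the snippet below smooths a 1-D intensity scan with a small moving-average filter, differentiates it, and takes the position of the derivative peak as the edge location. The filter length and the synthetic scan line are illustrative choices, not parameters from the paper.

```python
def moving_average(signal, k=3):
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def edge_position(scan_line):
    smoothed = moving_average(scan_line)                       # filtering stage
    diff = [smoothed[i + 1] - smoothed[i]
            for i in range(len(smoothed) - 1)]                 # differentiation stage
    return max(range(len(diff)), key=lambda i: abs(diff[i]))   # peak of |derivative| = edge

# Synthetic scan across a dark-to-bright transition around index 10.
scan = [10] * 10 + [200] * 10
print(edge_position(scan))   # prints an index at the transition (8-10 depending on smoothing)
```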
The emphasis of constructing and developing the campus information network is how to design and optimize the network hardware system. This paper mainly studies the network system structure design, the server system structure design, and the network export design, and discusses network hardware system design and optimization for universities of different scales according to their practical demands. The objective is a network hardware system that meets the demand and is fully utilized.
By decoupling the control plane and data plane, the Software-Defined Networking (SDN) approach simplifies network management and speeds up network innovations. These benefits have led not only to prototypes, but also to real SDN deployments. For wide-area SDN deployments, multiple controllers are often required, and the placement of these controllers becomes a particularly important task in the SDN context. This paper studies the problem of placing controllers in SDNs so as to maximize the reliability of SDN control networks. We present a novel metric, called expected percentage of control path loss, to characterize the reliability of SDN control networks. We formulate the reliability-aware controller placement problem, prove its NP-hardness, and examine several placement algorithms that can solve this problem. Through extensive simulations using real topologies, we show how the number of controllers and their placement influence the reliability of SDN control networks. Besides, we also find that, through strategic controller placement, the reliability of SDN control networks can be significantly improved without introducing unacceptable switch-to-controller latencies.
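The abstract defines the metric only by name. A rough way to estimate something like an expected percentage of control path loss for a candidate placement is to enumerate single-link failures, recompute each switch's reachability to the controllers, and average the fraction of broken control paths weighted by a failure probability. The sketch below does exactly that on a toy adjacency-list graph; the failure model and weighting are assumptions, not the paper's formal definition.

```python
def reachable_from(graph, controllers, removed_edge=None):
    """Reachability from any controller node, with one link optionally removed."""
    blocked = {removed_edge, tuple(reversed(removed_edge))} if removed_edge else set()
    seen, stack = set(controllers), list(controllers)
    while stack:
        u = stack.pop()
        for v in graph[u]:
            if (u, v) not in blocked and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def expected_control_path_loss(graph, controllers, link_fail_prob=0.01):
    edges = {tuple(sorted((u, v))) for u in graph for v in graph[u]}
    switches = [n for n in graph if n not in controllers]
    loss = 0.0
    for e in edges:                                    # one-link-at-a-time failure scenarios
        ok = reachable_from(graph, controllers, removed_edge=e)
        broken = sum(1 for s in switches if s not in ok)
        loss += link_fail_prob * broken / len(switches)
    return loss                                        # expected fraction of lost control paths

chain = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(expected_control_path_loss(chain, controllers={3}))      # single controller in the middle
print(expected_control_path_loss(chain, controllers={2, 4}))   # better placement -> lower loss
```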
Software-Defined Networking (SDN) has been a hot topic for future network development; it separates the control plane and the data plane into different layers. Despite providing high openness and programmability, the “three-layer two-interface” architecture of SDN changes the traditional network and increases the number of network attack nodes, which results in new security issues. In this paper, we first introduce the background, architecture, and working process of SDN. Second, we summarize and analyze the typical security issues from north to south: the application layer, the northbound interface, the control layer, the southbound interface, and the data layer. Another contribution is to review and analyze the existing solutions and latest research progress at each layer, mainly including authorized authentication modules, application isolation, DoS/DDoS defense, multi-controller deployment, and flow-rule consistency detection. Finally, a conclusion about future work on SDN security is drawn and an idealized global security architecture is proposed.
The controller is indispensable in software-defined networking (SDN). With several features, controllers monitor the network and respond promptly to dynamic changes, and their performance affects the quality-of-service (QoS) in SDN. Every controller supports a set of features; however, the support for a given feature may be more prominent in one controller than in another. Moreover, a single controller leads to performance, single-point-of-failure (SPOF), and scalability problems. To overcome this, a controller with an optimum feature set must be available for SDN, and a cluster of optimum-feature-set controllers will overcome an SPOF and improve the QoS in SDN. Herein, leveraging an analytical network process (ANP), we rank SDN controllers with respect to their supported features and create a hierarchical control plane based cluster (HCPC) of the highly ranked controller computed using the ANP, evaluating its performance for the OS3E topology. The results demonstrated in Mininet reveal that an HCPC environment with an optimum controller achieves improved QoS. Moreover, the experimental results validated in Mininet show that our proposed approach surpasses the existing distributed controller clustering (DCC) schemes in terms of several performance metrics, i.e., delay, jitter, throughput, load balancing, scalability, and CPU (central processing unit) utilization.
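A full ANP builds pairwise-comparison matrices and a supermatrix; the abstract gives neither, so the sketch below shows only the core priority-vector step that ANP shares with AHP: derive criterion weights from a pairwise comparison matrix by power iteration, then score controllers by their feature-support levels. The comparison matrix, criteria, controller names, and scores are invented for illustration.

```python
def priority_vector(matrix, iters=100):
    """Principal eigenvector of a pairwise comparison matrix (power iteration)."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Hypothetical pairwise comparisons among three criteria:
# feature richness vs. performance vs. community support.
comparisons = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = priority_vector(comparisons)

# Hypothetical per-criterion support scores (0-1) for candidate controllers.
controllers = {
    "OpenDaylight": [0.9, 0.7, 0.9],
    "ONOS":         [0.8, 0.8, 0.8],
    "Ryu":          [0.6, 0.6, 0.7],
}
ranking = sorted(controllers.items(),
                 key=lambda kv: sum(w * s for w, s in zip(weights, kv[1])),
                 reverse=True)
print(ranking)   # the highest-ranked controller would seed the HCPC cluster
```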
The interpretation of spinal images fixed with metallic hardware forms an increasing bulk of daily practice in a busy imaging department. Radiologists are required to be familiar with the instrumentation and operative options used in spinal fixation and fusion procedures, especially those used in their own institution. This is critical in evaluating the position of implants and potential complications associated with the operative approaches and spinal fixation devices used. Thus, the radiologist can play an important role in patient care and outcome. This review outlines the advantages and disadvantages of commonly used imaging methods and reports on the best yield for each modality and how to overcome the problematic issues associated with the presence of metallic hardware during imaging. Baseline radiographs are essential as they are the baseline point for evaluation of future studies should patients develop symptoms suggesting possible complications. They may justify further imaging workup with computed tomography, magnetic resonance and/or nuclear medicine studies, as the evaluation of a patient with a spinal implant involves a multi-modality approach. This review describes imaging features of potential complications associated with spinal fusion surgery as well as the instrumentation used. This basic knowledge aims to help radiologists approach everyday practice in clinical imaging.
The self-healing strategy is a key component in designing bio-inspired embryonics circuits with the structure of cell arrays. However, the existing self-healing strategies of embryonics circuits mainly focus on permanent faults inside the modules of cells, such as the function module and the configuration register, while little attention is paid to transient faults. From the point of view of obtaining high efficiency of hardware utilization, it is a huge waste of hardware resources to permanently eliminate a cell that only suffers a transient fault which can be repaired by a configuration mechanism. A new self-healing strategy, the Fault-Cell Reutilization Self-healing Strategy (FCRSS), which provides a method for reusing transient-fault cells, is proposed in this paper. The circuit structures of all the modules in the cells are described in detail. In the new strategy, the processes of elimination and reconfiguration are combined. Within the process of fault-cell elimination, cells with transient faults in the embryonics circuit array can be reused simultaneously to replace the functions of the cells on their left side in the same row; transient fault-cells in a transparent state can thus be reconfigured to realize fault-cell reutilization. Finally, a circuit simulation, resource consumption, a reliability analysis, and a detailed normalization analysis are presented. The FCRSS can improve the hardware utilization rate and system reliability at the expense of a small amount of hardware resources and reconfiguration time. In the conclusion, a method for determining the optimal self-healing strategy according to the environmental conditions is presented.
Scientific research requires the collection of data in order to study, monitor, analyze, describe, or understand a particular process or event. Data collection efforts are often a compromise: manual measurements can be time-consuming and labor-intensive, resulting in data being collected at a low frequency, while automating the data-collection process can reduce labor requirements and increase the frequency of measurements, but at the cost of added expense of electronic data-collecting instrumentation. Rapid advances in electronic technologies have resulted in a variety of new and inexpensive sensing, monitoring, and control capabilities which offer opportunities for implementation in agricultural and natural-resource research applications. An Open Source Hardware project called Arduino consists of a programmable microcontroller development platform, expansion capability through add-on boards, and a programming development environment for creating custom microcontroller software. All circuit-board and electronic component specifications, as well as the programming software, are open-source and freely available for anyone to use or modify. Inexpensive sensors and the Arduino development platform were used to develop several inexpensive, automated sensing and datalogging systems for use in agricultural and natural-resources related research projects. Systems were developed and implemented to monitor soil-moisture status of field crops for irrigation scheduling and crop-water use studies, to measure daily evaporation-pan water levels for quantifying evaporative demand, and to monitor environmental parameters under forested conditions. These studies demonstrate the usefulness of automated measurements, and offer guidance for other researchers in developing inexpensive sensing and monitoring systems to further their research.
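A common companion to such Arduino-based sensing systems is a small host-side logger that timestamps whatever the microcontroller prints over USB serial. The sketch below (using the pySerial package) assumes a hypothetical setup in which the Arduino emits one comma-separated line of sensor readings per interval; the port name, baud rate, and file name are placeholders.

```python
import csv
import time

import serial  # pySerial; install with `pip install pyserial`

PORT, BAUD = "/dev/ttyACM0", 9600          # placeholder port and baud rate

def log_readings(outfile="sensor_log.csv", n_lines=100):
    with serial.Serial(PORT, BAUD, timeout=5) as link, open(outfile, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(n_lines):
            line = link.readline().decode("ascii", errors="ignore").strip()
            if line:                       # e.g., "23.5,0.41,512" printed by the board sketch
                writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S")] + line.split(","))

if __name__ == "__main__":
    log_readings()
```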
As a new networking paradigm, Software-Defined Networking (SDN) enables us to cope with the limitations of traditional networks. SDN uses a controller that has a global view of the network and switch devices that act as packet-forwarding hardware, known as "OpenFlow switches". Since a load balancing service is essential to distribute workload across servers in data centers, we propose an effective load balancing scheme in SDN using a genetic programming approach, called Genetic Programming based Load Balancing (GPLB). We formulate the problem as finding a path: 1) with the best bottleneck switch, i.e., the lowest-capacity switch among the bottleneck switches of each path, 2) with the shortest length, and 3) requiring the fewest possible operations. To choose the real-time least-loaded path, GPLB immediately calculates the integrated load of paths based on the information it receives from the SDN controller. Hence, in this design, the controller periodically sends the load information of each path to the load balancing algorithm, and the algorithm returns the least-loaded path to the controller. In this paper, we use the Mininet emulator and the OpenDaylight controller to evaluate the effectiveness of GPLB. The simulation study shows a large improvement in performance metrics: latency and jitter are minimized, GPLB achieves the maximum throughput in comparison with related works, and it performs better in heavy-traffic situations. The results show that our model behaves intelligently without introducing further overhead.
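The abstract lists three criteria for the integrated path load (the bottleneck switch, path length, and operation count) without giving the combination. The snippet below is one plausible fitness function over candidate paths, with invented weights and per-switch load statistics; in the paper a genetic-programming search would shape the combination rather than fix it as done here.

```python
# Hypothetical per-switch load statistics reported by the controller (utilization in [0, 1]).
switch_load = {"s1": 0.2, "s2": 0.7, "s3": 0.4, "s4": 0.9, "s5": 0.3}

def integrated_load(path, w_bottleneck=0.6, w_hops=0.3, w_ops=0.1, max_hops=10):
    bottleneck = max(switch_load[s] for s in path)     # the most loaded switch dominates
    hops = len(path) / max_hops                        # normalized path length
    ops = (len(path) - 1) / max_hops                   # proxy for flow-table operations needed
    return w_bottleneck * bottleneck + w_hops * hops + w_ops * ops

def least_loaded(candidate_paths):
    return min(candidate_paths, key=integrated_load)

paths = [["s1", "s2", "s5"], ["s1", "s3", "s5"], ["s1", "s4", "s5"]]
print(least_loaded(paths))   # -> ["s1", "s3", "s5"], avoiding the heavily loaded s2 and s4
```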
In harsh-environment applications such as earth-orbiting and deep-space satellites and underwater vehicles, where strong electromagnetic interference and temperature stress are present, circuit faults appear easily. Without self-repair over the mission duration, circuit faults will inevitably lead to serious losses of availability or impeded mission success. Traditional fault-repair methods based on redundant fault-tolerant techniques are straightforward to implement, yet their area, power, and weight costs can be excessive. Moreover, they rely on plug-in or component-level circuits for redundant backup, so their applicability is limited. Hence, a novel self-repair technology based on evolvable hardware (EHW) and reparation balance technology (RBT) is proposed. Its cost is low, and fault self-repair of various circuits and devices can be realized through dynamic configuration. By making full use of the fault signals, a correcting circuit can be found through the EHW technique to realize the balance and compensation of the faulty output signals. In this paper, the self-repair model based on EHW and RBT is analyzed, the specific self-repair strategy is studied, the corresponding self-repair circuit fault system is designed, and typical faults are simulated and analyzed in combination with actual electronic devices. Simulation results demonstrate that the proposed fault self-repair strategy is feasible. Compared to traditional techniques, fault self-repair based on EHW consumes fewer hardware resources, and the scope of fault self-repair is expanded significantly.
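Evolvable hardware searches a configuration space for a circuit whose behavior compensates the observed fault. As a toy software analogue, and not the paper's method, the sketch below evolves a correction bitmask that, XOR-ed with a faulty truth table, reproduces the intended one; the truth tables, population size, mutation rate, and fitness are arbitrary.

```python
import random

TARGET = [0, 1, 1, 0, 1, 0, 0, 1]        # intended truth table of a small logic block
FAULTY = [0, 1, 0, 0, 1, 1, 0, 1]        # observed outputs after a fault

def fitness(mask):
    corrected = [f ^ m for f, m in zip(FAULTY, mask)]
    return sum(c == t for c, t in zip(corrected, TARGET))   # number of matching output rows

def evolve(pop_size=20, generations=50, mutation=0.1):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                       # keep the better half
        children = [[b ^ 1 if random.random() < mutation else b for b in p] for p in parents]
        pop = parents + children
        if fitness(pop[0]) == len(TARGET):
            break
    return pop[0]

mask = evolve()
print(mask, fitness(mask))   # a mask that flips exactly the faulty output rows
```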
Hardware/software partitioning is an important step in the design of embedded systems. In this paper, the hardware/software partitioning problem is modeled as a constrained binary integer programming problem, which is further converted equivalently to an unconstrained binary integer programming problem by a penalty method. A local search method, HSFM, is developed to obtain a discrete local minimizer of the unconstrained binary integer programming problem. Next, an auxiliary function, which has the same global optimal solutions as the unconstrained binary integer programming problem, is constructed, and its properties are studied. We show that applying HSFM to minimize the auxiliary function can successfully escape from previously found local optima as the parameter value is increased. Finally, a discrete dynamic convexized method is developed to solve the hardware/software partitioning problem. Computational results and comparisons indicate that the proposed algorithm can obtain high-quality solutions.
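The penalty conversion and a single-bit-flip local search can be written down directly from the abstract's description; everything else below (the cost model, the area budget, the penalty coefficient) is an invented toy instance, and the paper's auxiliary-function and dynamic-convexization steps are not reproduced.

```python
# Toy partitioning instance: x[i] = 1 -> task i in hardware, 0 -> software.
hw_time = [2, 1, 3, 2, 4]      # execution time if mapped to hardware
sw_time = [5, 4, 9, 6, 7]      # execution time if mapped to software
hw_area = [3, 2, 5, 4, 6]      # hardware area consumed when x[i] = 1
AREA_BUDGET = 10
PENALTY = 100.0                # large coefficient makes constraint violations unattractive

def objective(x):
    time = sum(h if xi else s for xi, h, s in zip(x, hw_time, sw_time))
    violation = max(0, sum(a for xi, a in zip(x, hw_area) if xi) - AREA_BUDGET)
    return time + PENALTY * violation          # penalized, unconstrained form

def local_search(x):
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):                # single-bit-flip neighborhood
            y = x[:]
            y[i] ^= 1
            if objective(y) < objective(x):
                x, improved = y, True
    return x

print(local_search([0, 0, 0, 0, 0]))           # a local minimizer of the penalized problem
```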
Software-Defined Networking (SDN) is an emerging architecture that enables a computer network to be intelligently and centrally controlled via software applications. It can help manage the whole network environment in a consistent and holistic way, without the need to understand the underlying network structure. At present, SDN may face many challenges such as insider attacks, i.e., the centralized control plane being attacked by malicious underlying devices and switches. To protect the security of SDN, effective detection approaches are indispensable. In the literature, challenge-based collaborative intrusion detection networks (CIDNs) are an effective detection framework for identifying malicious nodes: they calculate the nodes' reputation and detect a malicious node by sending out a special message called a challenge. In this work, we devise a challenge-based CIDN in SDN and measure its performance against malicious internal nodes. Our results demonstrate that such a mechanism can be effective in SDN environments.
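Challenge-based trust evaluation is easy to illustrate even though the abstract omits the exact update rule: the testing node periodically sends challenges whose correct answers it already knows and updates each node's reputation from the quality of the responses. Below is a generic sketch with an exponentially weighted update and an arbitrary alarm threshold; it is not the specific CIDN formulation used in the paper.

```python
class ChallengeBasedTrust:
    """Track per-node reputation from responses to challenges with known answers."""

    def __init__(self, initial=0.5, alpha=0.2, threshold=0.3):
        self.reputation = {}
        self.initial, self.alpha, self.threshold = initial, alpha, threshold

    def record_challenge(self, node_id, satisfaction):
        """satisfaction in [0, 1]: how well the node's answer matched the known truth."""
        old = self.reputation.get(node_id, self.initial)
        self.reputation[node_id] = (1 - self.alpha) * old + self.alpha * satisfaction

    def malicious_nodes(self):
        return [n for n, r in self.reputation.items() if r < self.threshold]

trust = ChallengeBasedTrust()
for _ in range(10):
    trust.record_challenge("switch-A", 1.0)   # honest node answers challenges correctly
    trust.record_challenge("switch-B", 0.0)   # insider keeps returning false feedback
print(trust.malicious_nodes())                # -> ["switch-B"]
```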
Funding (cross-domain TSN time synchronization): supported in part by the National Key R&D Program of China (Grant No. 2022YFC3803700), in part by the National Natural Science Foundation of China (Grant No. 92067102), and in part by the project of the Beijing Laboratory of Advanced Information Networks.
Funding (early elephant-flow classification): supported by the National Natural Science Foundation of China (61962016), the Ministry of Science and Technology of China (G2022033002L), the National Natural Science Foundation of Guangxi (2022JJA170057), and the Guangxi Education Department's Project on Improving the Basic Research Ability of Young and Middle-aged Teachers in Universities (2023ky0812, Research on Statistical Network Delay Predictions in Large-scale SDNs).
Funding (on-device batch normalization accelerator): supported by the National Research Foundation of Korea (NRF) grant for RLRC funded by the Korea government (MSIT) (No. 2022R1A5A8026986, RLRC); by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-01304, Development of Self-Learnable Mobile Recursive Neural Network Processor Technology); by the MSIT (Ministry of Science and ICT), Republic of Korea, under the Grand Information Technology Research Center support program (IITP-2024-2020-0-01462, Grand-ICT), supervised by the IITP; by the Korea Technology and Information Promotion Agency for SMEs (TIPA); and by the Korean government (Ministry of SMEs and Startups)'s Smart Manufacturing Innovation R&D (RS-2024-00434259).
Funding (DRL-AMIR flow scheduling): supported in part by the Scientific Research Fund of Zhejiang Provincial Education Department under Grant Y202351110, in part by the Huzhou Science and Technology Plan Project under Grant 2024YZ23, in part by the Research Fund of the National Key Laboratory of Advanced Communication Networks under Grant SCX23641X004, and in part by the Postgraduate Research and Innovation Project of Huzhou University under Grant 2024KYCX50.
Funding (neuro-inspired IGZO phototransistor vision system): supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (Grant No. RS-2023-00256917) and by Samsung Display.
Funding (DDoS detection and mitigation in SDN): supported by the Deanship of Graduate Studies and Scientific Research at Qassim University (QU-APC-2025).
Funding (reliability-aware controller placement): supported in part by the National High Technology Research and Development Program (863 Program) of China under Grant Nos. 2011AA01A101, 2013AA01330, and 2013AA013303.
Funding (SDN security survey): supported by the Wuhan Frontier Program of Application Foundation (No. 2018010401011295) and the National High Technology Research and Development Program of China ("863" Program) (Grant No. 2015AA016002).
Funding (ANP-based controller ranking and HCPC): supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2020-2018-0-01431), supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Funding (fault-cell reutilization self-healing strategy): co-supported by the National Natural Science Foundation of China (Nos. 61202001, 61402226) and the Fundamental Research Funds for the Central Universities of NUAA of China (Nos. NS2018026, NS2012024).
Funding (EHW-based fault self-repair): supported by the National Natural Science Foundation of China (Nos. 61271153, 61372039).
Funding (hardware/software partitioning): supported by the National Natural Science Foundation of China (11301255), the Fund of the Collaborative Innovation Center of IoT Industrialization and Intelligent Production, Minjiang University (IIC1703), the Foundation of Minjiang University (MYK17032), and the Program for New Century Excellent Talents in Fujian Province University.
Funding (challenge-based CIDN in SDN): supported by the National Natural Science Foundation of China (Nos. 61802080 and 61802077), the Guangdong General Colleges and Universities Research Project (2018GkQNCX105), and the Zhongshan Public Welfare Science and Technology Research Project (2019B2044); Keping Yu was supported in part by the Japan Society for the Promotion of Science (JSPS) Grants-in-Aid for Scientific Research (KAKENHI) under Grant JP18K18044.