The increasing popularity of the Internet and the widespread use of information technology have led to a rise in the number and sophistication of network attacks and security threats. Intrusion detection systems are crucial to network security, playing a pivotal role in safeguarding networks from potential threats. However, in the context of an evolving landscape of sophisticated and elusive attacks, existing intrusion detection methodologies often overlook critical aspects such as changes in network topology over time and interactions between hosts. To address these issues, this paper proposes a real-time network intrusion detection method based on graph neural networks. The proposed method leverages the advantages of graph neural networks and employs a straightforward graph construction method to represent network traffic as dynamic graph-structured data. Additionally, a graph convolution operation with a multi-head attention mechanism is utilized to enhance the model's ability to capture the intricate relationships within the graph structure comprehensively. Furthermore, it uses an integrated graph neural network to address dynamic graphs' structural and topological changes at different time points and the challenges of edge embedding in intrusion detection data. The edge classification problem is effectively transformed into node classification by employing a line graph data representation, which facilitates fine-grained intrusion detection tasks on dynamic graph node feature representations. The efficacy of the proposed method is evaluated using two commonly used intrusion detection datasets, UNSW-NB15 and NF-ToN-IoT-v2, and the results are compared with previous studies in this field. The experimental results demonstrate that the proposed method achieves 99.3% and 99.96% accuracy on the two datasets, respectively, and outperforms the benchmark model in several evaluation metrics.
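A minimal sketch of the line-graph construction mentioned above, in which flow (edge) classification becomes node classification; it assumes networkx is available, and the hosts, features, and labels are illustrative, not the paper's graph construction.

```python
# Sketch: represent flows as edges of a host graph, then take the line graph
# so that each flow becomes a node and edge classification becomes node
# classification. Hypothetical toy flows; not the paper's exact construction.
import networkx as nx

# Host-to-host flows (src, dst) with per-flow features and labels.
flows = {
    ("10.0.0.1", "10.0.0.2"): {"bytes": 1200, "label": "benign"},
    ("10.0.0.2", "10.0.0.3"): {"bytes": 98000, "label": "attack"},
    ("10.0.0.1", "10.0.0.3"): {"bytes": 640, "label": "benign"},
}

G = nx.DiGraph()
for (src, dst), attrs in flows.items():
    G.add_edge(src, dst, **attrs)

# In the line graph L(G), every edge of G becomes a node; two flow-nodes are
# linked when one flow's destination host is the other flow's source host.
L = nx.line_graph(G)
for flow_node in L.nodes:
    # Copy each flow's features onto its line-graph node.
    L.nodes[flow_node].update(G.edges[flow_node])

print(L.nodes(data=True))   # per-flow nodes, ready for a node classifier
print(list(L.edges()))      # adjacency induced by shared hosts
```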
Network intrusion poses a severe threat to the Internet. However, existing intrusion detection models cannot effectively distinguish different intrusions with a high degree of feature overlap. In addition, efficient real-time detection is an urgent problem. To address these two problems, we propose a Latent Dirichlet Allocation topic-model-based framework for real-time network Intrusion Detection (LDA-ID), consisting of static and online LDA-ID. The problem of feature overlap is transformed into static LDA-ID topic-number optimization and topic selection, so that detection is based on the latent topic features. To achieve efficient real-time detection, we design an online computing mode for static LDA-ID, in which a parameter iteration method based on momentum is proposed to balance the contributions of prior knowledge and new information. Furthermore, we design two matching mechanisms to accommodate the static and online LDA-ID, respectively. Experimental results on the public NSL-KDD and UNSW-NB15 datasets show that our framework achieves higher accuracy than existing methods.
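A minimal numpy sketch of a momentum-style blend between prior topic statistics and statistics from a new batch, in the spirit of the online parameter iteration described above; the variable names, shapes, and blending rule are illustrative assumptions rather than the paper's exact update.

```python
# Sketch: blend prior topic-word statistics with statistics estimated from a
# new mini-batch of traffic records, using a momentum-style coefficient.
# Shapes and the update rule are illustrative, not the paper's exact method.
import numpy as np

rng = np.random.default_rng(0)
n_topics, n_features = 10, 41          # e.g., roughly NSL-KDD-sized features

lambda_prior = rng.random((n_topics, n_features))   # knowledge so far
lambda_batch = rng.random((n_topics, n_features))   # from the new batch

def momentum_update(lam_prior, lam_batch, rho=0.1):
    """Keep (1 - rho) of prior knowledge, take rho from new information."""
    return (1.0 - rho) * lam_prior + rho * lam_batch

lambda_new = momentum_update(lambda_prior, lambda_batch, rho=0.1)

# Detection would then match a record's topic mixture against per-class topic
# profiles; here we just show the normalized topic-word distribution.
topic_word = lambda_new / lambda_new.sum(axis=1, keepdims=True)
print(topic_word.shape)   # (10, 41)
```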
This paper deals with the asymptotic stability of general continuous nonlinear networked control systems (NCSs). Based on the Lyapunov stability theorem combined with an improved Razumikhin technique, sufficient conditions for the asymptotic stability of the system are derived. With the proposed method, an estimate of the maximum allowable delay bound (MADB) for linear networked control systems is also given. Compared with other methods, the proposed method gives a much less conservative MADB and more general results. Numerical examples and simulations demonstrate the effectiveness and performance of the proposed method.
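For reference, a standard Razumikhin-type sufficient condition of the kind such results build on, stated here from the general delay-systems literature rather than from the paper itself:

```latex
% Classical Razumikhin-type condition for \dot{x}(t) = f(t, x_t) with delay \tau:
% assume u(\|x\|) \le V(t,x) \le v(\|x\|) and that there exists q > 1 such that
V\bigl(t+\theta,\, x(t+\theta)\bigr) \le q\, V\bigl(t, x(t)\bigr)
\quad \forall\, \theta \in [-\tau, 0]
\;\;\Longrightarrow\;\;
\dot{V}\bigl(t, x(t)\bigr) \le -w\bigl(\|x(t)\|\bigr) < 0,
% then the zero solution is asymptotically stable. MADB estimates arise from the
% largest \tau for which such a condition can be verified for the closed-loop NCS.
```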
Security measures are urgently required to mitigate the recent rapid increase in network security attacks. Although methods employing machine learning have been researched and developed to detect various network attacks effectively, these are passive approaches that cannot protect the network from attacks but only detect them after the session ends. Since such passive approaches cannot provide fundamental security solutions, we propose an active approach that can prevent further damage by detecting and blocking attacks in real time before the session ends. The proposed technology uses a two-level classifier structure: the first-stage classifier supports real-time classification, and the second-stage classifier supports accurate classification. Thus, the proposed approach can determine whether an attack has occurred with high accuracy, even under heavy traffic. Through extensive evaluation, we confirm that our approach can provide a high detection rate in real time. Furthermore, because the proposed approach is fast, light, and easy to implement, it can be adopted in most existing network security equipment. Finally, we hope to mitigate the limitations of existing security systems and to keep networks faster and safer against the increasing number of cyber-attacks.
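A minimal sklearn sketch of the two-level idea described above: a cheap first-stage model classifies traffic in real time, and only low-confidence flows are escalated to a heavier second-stage model. The models, threshold, and synthetic data are illustrative assumptions.

```python
# Sketch: fast first-stage classifier with confidence gating; uncertain flows
# are escalated to a slower, more accurate second-stage classifier.
# Models, threshold, and data are illustrative, not the paper's design.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_live = X[:1500], X[1500:]
y_train = y[:1500]

stage1 = LogisticRegression(max_iter=1000).fit(X_train, y_train)        # fast
stage2 = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)  # accurate

def classify(flow_features, confidence=0.9):
    proba = stage1.predict_proba(flow_features.reshape(1, -1))[0]
    if proba.max() >= confidence:
        return int(proba.argmax())                              # real-time decision
    return int(stage2.predict(flow_features.reshape(1, -1))[0])  # escalate

decisions = [classify(x) for x in X_live]
print(sum(decisions), "flows flagged as the positive class")
```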
The energy consumption of the information and communication technology sector has become a significant portion of total global energy consumption, warranting research efforts to reduce it. The prerequisite for effective energy management is the availability of current power consumption values from network devices. Previous works have attempted to estimate and model these values, or have measured them using intrusive approaches such as in-line power meters. Recent trends suggest that information models are being increasingly used in all aspects of network management. This paper presents a framework developed to enable the non-intrusive collection of real-time power consumption information from the next generation of networking hardware by employing information models. The experimental results indicate that it is feasible to gather power consumption data using standardized IETF information models, non-standard customized information models, or by abstracting and exposing the information in a uniform format when no support for the required information models exists. Functional validation of the proposed framework is performed, and the results from this research could be leveraged to make energy-efficient network management decisions.
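A minimal sketch of the "uniform format" idea from this abstract: per-device adapters translate whatever a device exposes into one normalized power record. The device payload formats and field names are hypothetical.

```python
# Sketch: normalize heterogeneous power readings into one uniform record.
# Device payload formats and field names here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class PowerReading:
    device_id: str
    watts: float
    source: str          # e.g. "standard-model", "vendor-model"

def from_standard_model(device_id: str, payload: dict) -> PowerReading:
    # Hypothetical standardized field reporting power in watts.
    return PowerReading(device_id, float(payload["power"]), "standard-model")

def from_vendor_model(device_id: str, payload: dict) -> PowerReading:
    # Hypothetical vendor model reporting milliwatts under a custom key.
    return PowerReading(device_id, payload["pwr_mw"] / 1000.0, "vendor-model")

readings = [
    from_standard_model("rtr-1", {"power": 143.0}),
    from_vendor_model("sw-7", {"pwr_mw": 86500}),
]
total = sum(r.watts for r in readings)
print(f"total draw: {total:.1f} W")   # input to energy-management decisions
```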
This article mainly introduces a system design scheme based on a multi-layer distributed C/S architecture. Its working principle is as follows: the client program runs automatically after the computer starts and establishes communication with the application server; the network administrator can then monitor and intelligently manage the client computers through the server program, and each client computer executes the corresponding operations according to the command instructions sent by the server. The system implements the main modules of the overall framework, including a network monitoring data initialization module, a network data transmission module, and an image coding and decoding module. The advantages of the system are that it makes full use of existing LAN resources and delivers and manages information in a timely manner.
Space-division multiplexing (SDM) utilizing uncoupled multi-core fibers (MCF) is considered a promising candidate for next-generation high-speed optical transmission systems due to its huge capacity and low inter-core crosstalk. In this paper, we demonstrate a real-time high-speed SDM transmission system over a field-deployed 7-core MCF cable using commercial 400 Gbit/s backbone optical transport network (OTN) transceivers and a network management system. The transceivers employ a highly noise-tolerant quadrature phase shift keying (QPSK) modulation format at a 130 Gbaud rate, enabled by optoelectronic multi-chip module (OE-MCM) packaging. The network management system can effectively manage and monitor the performance of the 7-core SDM OTN system and promptly report failure events through alarms. Our field trial demonstrates the compatibility of uncoupled MCF with high-speed OTN transmission equipment and network management systems, supporting its future deployment in next-generation high-speed terrestrial cable transmission networks.
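A back-of-the-envelope check on the quoted modulation parameters, assuming dual-polarization QPSK (the usual configuration for 400 Gbit/s backbone transceivers; the paper's exact FEC and framing overhead is not given here):

```latex
% Assuming dual-polarization QPSK, i.e. 2 bit/symbol per polarization:
R_{\mathrm{line}} = 130\ \mathrm{Gbaud} \times 2\ \tfrac{\mathrm{bit}}{\mathrm{symbol}}
                    \times 2\ \mathrm{pol.} = 520\ \mathrm{Gbit/s},
% leaving 520 - 400 = 120 Gbit/s (about 30\% of the 400 Gbit/s payload) for FEC
% and framing overhead; the 7 cores together carry 7 \times 400 = 2.8 Tbit/s.
```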
Real-time semantic segmentation tasks place stringent demands on network inference speed, often requiring a reduction in network depth to decrease computational load. However, shallow networks tend to exhibit degradation in feature extraction completeness and inference accuracy. Therefore, balancing high performance with real-time requirements has become a critical issue in the study of real-time semantic segmentation. To address these challenges, this paper proposes a lightweight bilateral dual-residual network. By introducing a novel residual structure combined with feature extraction and fusion modules, the proposed network significantly enhances representational capacity while reducing computational costs. Specifically, an improved compound residual structure is designed to optimize the efficiency of information propagation and feature extraction. Furthermore, the proposed feature extraction and fusion module enables the network to better capture multi-scale information in images, improving the ability to detect both detailed and global semantic features. Experimental results on the publicly available Cityscapes dataset demonstrate that the proposed lightweight dual-branch network achieves outstanding performance while maintaining low computational complexity. In particular, the network achieved a mean Intersection over Union (mIoU) of 78.4% on the Cityscapes validation set, surpassing many existing semantic segmentation models. In terms of inference speed, the network reached 74.5 frames per second when tested on an NVIDIA GeForce RTX 3090 GPU, significantly improving real-time performance.
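A minimal PyTorch sketch of a lightweight residual block of the general kind such networks rely on (depthwise-separable convolutions with an identity shortcut); this is a generic illustration, not the paper's compound residual or bilateral structure.

```python
# Sketch: a lightweight residual block built from depthwise-separable
# convolutions with an identity shortcut. Generic illustration only; not the
# paper's compound residual or dual-branch design.
import torch
import torch.nn as nn

class LightResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=channels,
                      bias=False),                         # depthwise
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1, bias=False),  # pointwise
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + x)   # residual shortcut

x = torch.randn(1, 64, 128, 256)            # N, C, H, W
print(LightResidualBlock(64)(x).shape)      # torch.Size([1, 64, 128, 256])
```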
The information exchange among satellites is crucial for the implementation of cluster satellite cooperative missions. However, achieving fast perception, rapid networking, and high-precision time synchronization among nodes without the support of the Global Navigation Satellite System (GNSS) or other prior information remains a formidable challenge for real-time wireless network design. Therefore, a self-organizing network methodology based on multi-agent negotiation is proposed, which autonomously determines the master node through collaborative negotiation and competitive elections. On this basis, a real-time network protocol is designed, and a high-precision time synchronization method with motion compensation is proposed. Simulation results demonstrate that the proposed method enables rapid networking with self-discovery, self-organization, and self-healing capabilities. For a cluster of 8 satellites, the networking time and the reorganization time are both less than 4 s. The time synchronization accuracy is better than 10^-10 s with motion compensation, demonstrating excellent real-time performance and stability. The research presented in this paper provides a valuable reference for the design and application of space-based self-organizing networks for satellite clusters.
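A minimal sketch of two-way time transfer with a first-order motion (range-rate) correction, the standard building block for inter-satellite synchronization of this kind; the geometry, clock offset, and relative velocity are made-up numbers, and the paper's exact compensation scheme may differ.

```python
# Sketch: two-way time transfer between satellites A and B, plus a first-order
# correction for the path asymmetry caused by relative motion. All values are
# made up; the paper's compensation scheme may differ.
C = 299_792_458.0                      # speed of light, m/s

tau_forward = 30_000.0 / C             # forward propagation over a 30 km baseline
turnaround = 200e-6                    # B replies 200 us after reception
v = 12.0                               # relative radial velocity, m/s (opening)
tau_return = (30_000.0 + v * turnaround) / C   # range grew during the turnaround
true_offset = 2.5e-6                   # B's clock minus A's clock

t1 = 0.0                               # A transmits (A's clock)
t2 = t1 + tau_forward + true_offset    # B receives (B's clock)
t3 = t2 + turnaround                   # B replies (B's clock)
t4 = t3 + tau_return - true_offset     # A receives (A's clock)

# Classical two-way solution assumes a symmetric path:
offset_raw = ((t2 - t1) - (t4 - t3)) / 2.0
# First-order motion compensation for the asymmetry v * turnaround:
offset_comp = offset_raw + v * turnaround / (2.0 * C)

print(f"raw error        : {offset_raw - true_offset:+.2e} s")   # ~ -4e-12 s
print(f"compensated error: {offset_comp - true_offset:+.2e} s")  # ~ 0
```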
Over 1.3 million people die annually in traffic accidents, and this tragic fact highlights the urgent need to enhance the intelligence of traffic safety and control systems. In modern industrial and technological applications and in collaborative edge intelligence, control systems are crucial for ensuring efficiency and safety, and deficiencies in these systems can lead to significant operational risks. This paper uses edge intelligence to address the challenges of achieving target speeds and improving efficiency in vehicle control, particularly the limitations of traditional Proportional-Integral-Derivative (PID) controllers in managing nonlinear and time-varying dynamics, such as varying road conditions and vehicle behavior, which often result in substantial discrepancies between desired and actual speeds as well as inefficiencies due to manual parameter adjustments. The paper proposes a novel PID control algorithm that integrates Backpropagation (BP) neural networks to enhance robustness and adaptability. The BP neural network is first trained to capture the nonlinear dynamic characteristics of the vehicle. The trained network is then combined with the PID controller to form a hybrid control strategy. The output layer of the neural network directly adjusts the PID parameters (k_p, k_i, k_d), optimizing performance for specific driving scenarios through self-learning and weight adjustments. Simulation experiments demonstrate that our BP neural network-based PID design significantly outperforms traditional methods, with the response time for acceleration from 0 to 1 m/s improved from 0.25 s to just 0.065 s. Furthermore, real-world tests on an intelligent vehicle show its ability to make timely adjustments in response to complex road conditions, ensuring consistent speed maintenance and enhancing overall system performance.
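A minimal numpy sketch of the general pattern described above: a small neural network maps the current tracking state to PID gains, which then drive a PID speed controller on a toy plant. The network weights, gain ranges, and vehicle model are illustrative, not the paper's trained controller.

```python
# Sketch: a tiny neural network maps the recent tracking error to PID gains
# (kp, ki, kd), which drive a PID speed controller on a toy first-order plant.
# Weights, gain ranges, and the plant are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.5, (8, 3)), np.zeros(8)       # untrained demo weights
W2, b2 = rng.normal(0, 0.5, (3, 8)), np.zeros(3)

def nn_gains(state):
    h = np.tanh(W1 @ state + b1)
    g = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))            # squash to (0, 1)
    return 2.0 * g[0], 0.5 * g[1], 0.1 * g[2]           # scale to gain ranges

target, v, dt = 1.0, 0.0, 0.01                          # speed setpoint, state
e_prev, e_int = 0.0, 0.0
for _ in range(300):
    e = target - v
    e_int += e * dt
    kp, ki, kd = nn_gains(np.array([e, e_int, (e - e_prev) / dt]))
    u = kp * e + ki * e_int + kd * (e - e_prev) / dt    # PID control law
    v += dt * (-0.5 * v + u)                            # toy vehicle dynamics
    e_prev = e
print(f"speed after 3 s: {v:.3f} m/s (target {target} m/s)")
```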
Faced with the large number of people with motor function disabilities, rehabilitation robots have attracted increasing attention. To promote the active participation of the user's motion intention in the robot-assisted rehabilitation process, it is crucial to establish a human motion prediction model. In this paper, a hybrid prediction model built on a long short-term memory (LSTM) neural network using surface electromyography (sEMG) is applied to predict the user's elbow motion in advance. The model includes two sub-models: a back-propagation neural network and an LSTM network. The former produces a preliminary prediction of the elbow motion, and the latter corrects this prediction to increase accuracy. The proposed model takes time series data as input, including the sEMG signals measured by electrodes and the continuous angles from inertial measurement units. Offline and online tests were carried out to verify the established hybrid model. Average root mean square errors of 3.52° and 4.18° were reached for the offline and online tests, respectively, and the correlation coefficients for both were above 0.98.
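A minimal PyTorch sketch of the two-stage idea above: a feed-forward (BP) network produces a coarse angle prediction from the latest sEMG/IMU sample, and an LSTM over the recent window adds a correction. The dimensions and the way the two outputs are combined are assumptions.

```python
# Sketch: feed-forward (BP) coarse prediction + LSTM correction over a window
# of sEMG/IMU samples. Dimensions and the combination rule are assumptions,
# not the paper's trained architecture.
import torch
import torch.nn as nn

class HybridElbowPredictor(nn.Module):
    def __init__(self, n_features=9, hidden=32):
        super().__init__()
        self.bp = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                nn.Linear(hidden, 1))          # coarse angle
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.correct = nn.Linear(hidden, 1)                    # correction term

    def forward(self, window):               # window: (batch, time, features)
        coarse = self.bp(window[:, -1, :])   # from the most recent sample
        seq_out, _ = self.lstm(window)
        delta = self.correct(seq_out[:, -1, :])
        return coarse + delta                # corrected elbow-angle prediction

window = torch.randn(4, 50, 9)                # 4 trials, 50 samples, 9 channels
print(HybridElbowPredictor()(window).shape)   # torch.Size([4, 1])
```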
Enhancing the accuracy of real-time ship roll prediction is crucial for maritime safety and operational efficiency. To address the challenge of accurately predicting ship roll with nonlinear, time-varying dynamic characteristics, a real-time ship roll prediction scheme is proposed on the basis of a data preprocessing strategy and a novel stochastic trainer-based feedforward neural network. A sliding data window serves as an observer of the ship's time-varying dynamics to enhance model prediction stability. The variational mode decomposition method extracts effective information on ship roll motion and reduces the non-stationary characteristics of the series. The energy entropy method reconstructs the mode components into high-frequency, medium-frequency, and low-frequency series to reduce model complexity. A feedforward neural network trained by an improved black widow optimization algorithm, with enhanced avoidance of local optima, predicts the high-frequency component, enabling accurate tracking of abrupt signals. Additionally, a neural network trained by a deterministic algorithm, characterized by rapid processing speed, predicts the remaining two mode components. Real-time ship roll forecasting is thus achieved by reconstructing the mode component prediction results. The feasibility and effectiveness of the proposed hybrid prediction scheme for ship roll motion are demonstrated using measured data from a full-scale ship trial. The proposed scheme achieves real-time ship roll prediction with superior prediction accuracy.
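A minimal numpy sketch of the energy-entropy step mentioned above: compute each decomposed component's energy share and entropy, then regroup components into high-, medium-, and low-frequency bands before prediction. The decomposition itself is stubbed with synthetic components, and the grouping thresholds are illustrative assumptions.

```python
# Sketch: group decomposed roll-motion components into low/medium/high
# frequency bands using their dominant frequencies, and compute the energy
# entropy of the component set. The decomposition (e.g., VMD) is stubbed with
# synthetic components; thresholds are illustrative assumptions.
import numpy as np

fs, t = 10.0, np.arange(0, 60, 0.1)                   # 10 Hz samples, 60 s
components = [np.sin(2 * np.pi * f * t) * a           # stand-ins for VMD modes
              for f, a in [(0.05, 2.0), (0.12, 1.2), (0.45, 0.6), (1.8, 0.3)]]

def dominant_freq(x):
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[spectrum[1:].argmax() + 1]           # skip the DC bin

energies = np.array([np.sum(c ** 2) for c in components])
p = energies / energies.sum()
energy_entropy = -np.sum(p * np.log(p))               # overall complexity cue
print(f"energy entropy: {energy_entropy:.3f}")

bands = {"low": np.zeros_like(t), "medium": np.zeros_like(t),
         "high": np.zeros_like(t)}
for c in components:
    f = dominant_freq(c)
    key = "low" if f < 0.1 else ("medium" if f < 1.0 else "high")
    bands[key] += c                                    # reconstruct per band
print({k: round(float(np.sum(v ** 2)), 1) for k, v in bands.items()})
```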
Security and safety remain paramount concerns for both governments and individuals worldwide. In today's context, the frequency of crimes and terrorist attacks is alarmingly increasing, becoming increasingly intolerable to society. Consequently, there is a pressing need for swift identification of potential threats to preemptively alert law enforcement and security forces, thereby preventing potential attacks or violent incidents. Recent advancements in big data analytics and deep learning have significantly enhanced the capabilities of computer vision in object detection, particularly in identifying firearms. This paper introduces a novel automatic firearm detection surveillance system utilizing a one-stage detection approach named MARIE (Mechanism for Realtime Identification of Firearms). MARIE incorporates the Single Shot Multibox Detector (SSD) model, which has been specifically optimized to balance the speed-accuracy trade-off critical in firearm detection applications. The SSD model was further refined by integrating MobileNetV2 and InceptionV2 architectures for superior feature extraction capabilities. The experimental results demonstrate that this modified SSD configuration provides highly satisfactory performance, surpassing existing methods trained on the same dataset in terms of the critical speed-accuracy trade-off. Through these innovations, MARIE sets a new standard in surveillance technology, offering a robust solution to enhance public safety effectively.
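For orientation, a minimal torchvision sketch of running an off-the-shelf one-stage SSD-family detector to show the single-pass detection interface; it uses an SSDLite model with a MobileNetV3 backbone, which is related to but not the paper's MobileNetV2/InceptionV2 SSD, and the confidence threshold is arbitrary.

```python
# Sketch: a one-stage SSD-family detector from torchvision, shown only to
# illustrate the single-pass detection interface. This SSDLite + MobileNetV3
# backbone is NOT the paper's MobileNetV2/InceptionV2 SSD, and the weights
# here are untrained, so the detections themselves are meaningless.
import torch
from torchvision.models.detection import ssdlite320_mobilenet_v3_large

model = ssdlite320_mobilenet_v3_large(weights=None, num_classes=2)  # firearm vs. background
model.eval()

frame = torch.rand(3, 320, 320)                 # one surveillance frame (C, H, W)
with torch.no_grad():
    detections = model([frame])[0]              # dict: boxes, labels, scores

keep = detections["scores"] > 0.5               # arbitrary confidence threshold
print(detections["boxes"][keep], detections["labels"][keep])
```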
Along with process control, perception is the main function performed by the Edge Layer of an Internet of Things (IoT) network. Many of these networks implement applications in which the response time is not an important parameter. In critical applications, however, this parameter is a crucial aspect. One important sensing device used in IoT designs is the accelerometer. In most applications, the response time of the embedded driver software handling this device is generally not analysed or taken into account. In this paper, we present the design and implementation of a predictable real-time driver stack for a popular accelerometer and gyroscope device family. We provide clear justifications for why this response time is extremely important for critical applications in the acquisition of such data. We present extensive measurements and experimental results that demonstrate the predictability of our solution, making it suitable for critical real-time systems.
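A minimal sketch of the kind of response-time measurement this abstract is concerned with: sample a (stubbed) sensor read at a fixed period and report the worst-case latency. The read function and sampling period are placeholders, not the paper's driver.

```python
# Sketch: measure the response time of a periodic sensor read and report the
# worst case, which is what a predictable driver must bound. The read below is
# a stub; a real driver would talk to the accelerometer over I2C/SPI.
import time
import statistics

def read_accelerometer_stub():
    time.sleep(0.0004)            # pretend the bus transaction takes ~0.4 ms
    return (0.01, -0.02, 9.81)    # fake (x, y, z) in m/s^2

PERIOD_S = 0.005                  # 200 Hz sampling period
latencies = []
next_deadline = time.monotonic()
for _ in range(200):
    next_deadline += PERIOD_S
    start = time.monotonic()
    read_accelerometer_stub()
    latencies.append(time.monotonic() - start)
    sleep_for = next_deadline - time.monotonic()
    if sleep_for > 0:
        time.sleep(sleep_for)

print(f"mean latency : {statistics.mean(latencies) * 1e3:.2f} ms")
print(f"worst latency: {max(latencies) * 1e3:.2f} ms")   # the figure that matters
```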
The Internet of Things (IoT) and mobile technology have significantly transformed healthcare by enabling real-time monitoring and diagnosis of patients. Recognizing Medical-Related Human Activities (MRHA) is pivotal for healthcare systems, particularly for identifying actions critical to patient well-being. However, challenges such as high computational demands, low accuracy, and limited adaptability persist in Human Motion Recognition (HMR). While some studies have integrated HMR with IoT for real-time healthcare applications, limited research has focused on recognizing MRHA as essential for effective patient monitoring. This study proposes a novel HMR method tailored for MRHA detection, leveraging multi-stage deep learning techniques integrated with IoT. The approach employs EfficientNet to extract optimized spatial features from skeleton frame sequences using seven Mobile Inverted Bottleneck Convolution (MBConv) blocks, followed by Convolutional Long Short-Term Memory (ConvLSTM) to capture spatio-temporal patterns. A classification module with global average pooling, a fully connected layer, and a dropout layer generates the final predictions. The model is evaluated on the NTU RGB+D 120 and HMDB51 datasets, focusing on MRHA such as sneezing, falling, walking, and sitting. It achieves 94.85% accuracy for cross-subject evaluations and 96.45% for cross-view evaluations on NTU RGB+D 120, along with 89.22% accuracy on HMDB51. Additionally, the system integrates IoT capabilities using a Raspberry Pi and a GSM module, delivering real-time alerts via Twilio's SMS service to caregivers and patients. This scalable and efficient solution bridges the gap between HMR and IoT, advancing patient monitoring, improving healthcare outcomes, and reducing costs.
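A heavily simplified PyTorch sketch of the pipeline shape described above: a per-frame convolutional encoder, a recurrent layer over time, then pooling and a classifier. A small CNN and a plain nn.LSTM stand in for EfficientNet and ConvLSTM, and all sizes are assumptions.

```python
# Sketch of the pipeline shape only: per-frame conv encoder -> recurrence over
# time -> classifier. A small CNN and nn.LSTM stand in for EfficientNet and
# ConvLSTM; every size here is an assumption, not the paper's model.
import torch
import torch.nn as nn

class ActivityClassifier(nn.Module):
    def __init__(self, n_classes=10, feat=64):
        super().__init__()
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())        # (B*T, feat)
        self.temporal = nn.LSTM(feat, feat, batch_first=True)
        self.head = nn.Sequential(nn.Dropout(0.3), nn.Linear(feat, n_classes))

    def forward(self, clips):                 # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        frames = self.frame_encoder(clips.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.temporal(frames)
        return self.head(seq.mean(dim=1))     # pool over time, then classify

clips = torch.randn(2, 16, 3, 112, 112)       # 2 clips of 16 frames each
print(ActivityClassifier()(clips).shape)      # torch.Size([2, 10])
```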
Through a case analysis, this study examines the spatiotemporal evolution of microseismic (MS) events, energy characteristics, volumetric features, and fracture network development in surface-well hydraulic fracturing. A total of 349 MS events were analyzed across different fracturing sections, revealing significant heterogeneity in fracture propagation. Energy scanning results showed that cumulative energy values ranged from 240 to 1060 J across the sections, indicating notable differences. Stimulated reservoir volume (SRV) analysis demonstrated well-developed fracture networks in certain sections, with a total SRV exceeding 1,540,000 m^3. The hydraulic fracture network analysis revealed that during the mid-fracturing stage, the density and spatial extent of MS events increased significantly, indicating rapid fracture propagation and the formation of complex networks. In the later stage, the number of secondary fractures near fracture edges decreased, and the fracture network stabilized. Comparing the branching index, fracture length, width, height, and SRV values across the fracturing sections, Sections No. 1 and No. 8 showed the best performance, with high MS event densities, extensive fracture networks, and significant energy release. However, Sections No. 4 and No. 5 exhibited sparse MS activity and poor fracture connectivity, indicating suboptimal stimulation effectiveness.
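A minimal scipy sketch of one common way to turn a microseismic event cloud into a stimulated-volume estimate, namely the convex-hull volume of event locations; the coordinates are synthetic and the paper's SRV workflow may differ.

```python
# Sketch: estimate a stimulated-volume proxy as the convex-hull volume of
# microseismic event locations. Synthetic coordinates; the paper's actual SRV
# workflow may weight events or use a different geometric model.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(7)
# Synthetic event cloud (x, y, z in metres) around one fracturing stage.
events = rng.normal(loc=[0.0, 0.0, -2500.0],
                    scale=[120.0, 60.0, 25.0], size=(349, 3))

hull = ConvexHull(events)
print(f"events           : {len(events)}")
print(f"hull volume proxy: {hull.volume:,.0f} m^3")   # compare across sections
```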
To address the shortcomings of single-step decision making in existing deep reinforcement learning-based unmanned aerial vehicle (UAV) real-time path planning, a real-time UAV path planning algorithm based on a long short-term memory network (RPP-LSTM) is proposed, which combines the memory characteristics of recurrent neural networks (RNN) with a deep reinforcement learning algorithm. LSTM networks are used as the Q-value networks of the deep Q-network (DQN) algorithm, which gives the Q-value network's decisions a degree of memory. Thanks to the LSTM network, the Q-value network can use previous environmental and action information, which effectively avoids the problem of single-step decisions that consider only the current environment. In addition, the algorithm introduces a hierarchical reward and punishment function for the specific problem of UAV real-time path planning, so that the UAV can perform path planning more reasonably. Simulation results show that, compared with the traditional feed-forward neural network (FNN)-based UAV autonomous path planning algorithm, the proposed RPP-LSTM can adapt to more complex environments and offers significantly improved robustness and accuracy in UAV real-time path planning.
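A minimal PyTorch sketch of the core idea above: using an LSTM as the Q-value network so that action values depend on the recent observation history rather than only the current state. The observation size and action space are assumptions.

```python
# Sketch: an LSTM Q-value network for DQN, so action values condition on the
# recent history of observations rather than just the current state.
# Observation size and action count are illustrative assumptions.
import torch
import torch.nn as nn

class RecurrentQNetwork(nn.Module):
    def __init__(self, obs_dim=12, n_actions=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_actions)

    def forward(self, obs_history, hidden_state=None):
        # obs_history: (batch, time, obs_dim), the UAV's recent observations.
        out, hidden_state = self.lstm(obs_history, hidden_state)
        q_values = self.q_head(out[:, -1, :])     # Q(s_t, a) for each action
        return q_values, hidden_state

net = RecurrentQNetwork()
history = torch.randn(1, 10, 12)                   # last 10 observations
q, h = net(history)
action = int(q.argmax(dim=1))                      # greedy action selection
print(q.shape, action)                             # torch.Size([1, 8]) ...
```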
Ship motions induced by waves have a significant impact on the efficiency and safety of offshore operations. Real-time prediction of ship motions over the next few seconds plays a crucial role in performing sensitive activities. However, the pronounced memory effect of ship motion time series makes rapid and accurate prediction difficult. Therefore, a real-time framework based on a long short-term memory (LSTM) neural network model is proposed to predict ship motions in regular and irregular head waves. A 15,000 TEU container ship model is employed to illustrate the proposed framework. The numerical implementation and the real-time prediction of ship motions in irregular head waves at different time scales are carried out based on the container ship model, and the related experimental data are employed to verify the numerical simulation results. The results show that the proposed method is more robust than the classical extreme short-term prediction method based on potential flow theory in predicting nonlinear ship motions.
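A minimal numpy sketch of the data preparation such LSTM forecasters rely on: slicing a measured motion series into (input window, future target) pairs. The synthetic roll signal and window lengths are illustrative, not the container-ship trial data.

```python
# Sketch: build (input window, future target) pairs from a motion time series
# for training a sequence forecaster. The synthetic series and window lengths
# are illustrative, not the container-ship model data.
import numpy as np

t = np.arange(0, 600, 0.5)                                  # 10 min at 2 Hz
roll = 3.0 * np.sin(0.6 * t) + 0.8 * np.sin(1.7 * t + 0.3)  # toy roll signal

def make_windows(series, history=60, horizon=10):
    """Return X of shape (N, history) and y of shape (N, horizon)."""
    X, y = [], []
    for i in range(len(series) - history - horizon + 1):
        X.append(series[i:i + history])
        y.append(series[i + history:i + history + horizon])
    return np.stack(X), np.stack(y)

X, y = make_windows(roll, history=60, horizon=10)   # 30 s in, 5 s ahead
print(X.shape, y.shape)                             # (1131, 60) (1131, 10)
```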
In recent years, the country has invested a significant workforce and material resources in preventing traffic accidents, particularly those caused by fatigued driving. Current studies mainly concentrate on driver physiological signals, driving behavior, and vehicle information; however, most of these approaches are computationally intensive and inconvenient for real-time detection. Therefore, this paper designs a network that combines precision, speed, and light weight, and proposes an algorithm for facial fatigue detection based on multi-feature fusion. Specifically, the face detection model takes YOLOv8 (You Only Look Once version 8) as the basic framework and replaces its backbone network with MobileNetv3. To focus on the significant regions of the image, CPCA (Channel Prior Convolution Attention) is adopted to enhance the network's capacity for feature extraction. Meanwhile, the training phase employs the Focal-EIOU (Focal and Efficient Intersection Over Union) loss function, which keeps the network lightweight and increases the accuracy of target detection. Finally, the Dlib toolkit is employed to annotate 68 facial feature points. This study establishes an evaluation metric for facial fatigue and develops a novel fatigue detection algorithm to assess the driver's condition. A series of comparative experiments were carried out on a self-built dataset. The proposed method's mAP (mean Average Precision) values for object detection and fatigue detection are 96.71% and 95.75%, respectively, and the detection speed is 47 FPS (frames per second). The method balances the trade-off between computational complexity and model accuracy. Furthermore, it can be deployed on an NVIDIA Jetson Orin NX and detects the driver's state quickly while maintaining a high degree of accuracy, contributing to the development of automobile safety systems and reducing the occurrence of traffic accidents.
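A minimal sketch of a common landmark-based fatigue cue, the eye aspect ratio (EAR) computed from six of the 68 Dlib landmarks per eye; the landmark indices follow the standard 68-point convention, but the paper's exact fatigue metric is not specified here and may combine additional cues.

```python
# Sketch: eye aspect ratio (EAR) from 68-point facial landmarks, a common cue
# for eye closure / fatigue. Indices follow the standard 68-point convention;
# the paper's fatigue metric may combine further cues (mouth, head pose, time).
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of the six landmarks around one eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])      # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])       # horizontal distance
    return (v1 + v2) / (2.0 * h)

def fatigue_cue(landmarks_68, closed_threshold=0.2):
    left = eye_aspect_ratio(landmarks_68[36:42])    # left-eye points 36..41
    right = eye_aspect_ratio(landmarks_68[42:48])   # right-eye points 42..47
    ear = (left + right) / 2.0
    return ear, ear < closed_threshold              # low EAR ~ eyes closed

landmarks = np.random.rand(68, 2) * 100             # stand-in for Dlib output
ear, eyes_closed = fatigue_cue(landmarks)
print(f"EAR = {ear:.3f}, eyes closed: {eyes_closed}")
```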
Currently, most trains are equipped with dedicated cameras for capturing pantograph videos. Pantographs are core components of the high-speed-railway pantograph-catenary system, and their failure directly affects the normal operation of high-speed trains. However, given the complex and variable real-world operating conditions of high-speed railways, there is no real-time, robust pantograph fault-detection method capable of handling large volumes of surveillance video. Hence, it is of paramount importance to maintain real-time monitoring and analysis of pantographs. Our study presents a real-time intelligent detection technology for identifying faults in high-speed railway pantographs, utilizing a fusion of self-attention and convolution features. We investigated lightweight multi-scale feature-extraction and fault-detection models based on deep learning to detect pantograph anomalies. Compared with traditional methods, this approach achieves high recall and accuracy in pantograph recognition, accurately pinpointing issues such as discharge sparks, pantograph horns, and carbon pantograph-slide malfunctions. After experimentation and validation with actual surveillance videos of electric multiple-unit trains, our algorithmic model demonstrates real-time, high-accuracy performance even under complex operating conditions.
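A minimal PyTorch sketch of the general "fusion of self-attention and convolution features" pattern mentioned above: one branch applies convolution, the other applies multi-head self-attention over spatial positions, and the outputs are summed. The channel counts and the additive fusion rule are assumptions, not the paper's architecture.

```python
# Sketch: fuse a convolutional branch with a multi-head self-attention branch
# over spatial positions. Channel counts and the additive fusion are
# assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class ConvAttentionFusion(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.conv_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        conv_out = self.conv_branch(x)
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C) for attention
        attn_out, _ = self.attn(tokens, tokens, tokens)
        attn_out = self.norm(attn_out).transpose(1, 2).reshape(b, c, h, w)
        return conv_out + attn_out             # fused feature map

feat = torch.randn(1, 64, 32, 32)              # a pantograph feature map
print(ConvAttentionFusion()(feat).shape)       # torch.Size([1, 64, 32, 32])
```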