With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. GNN-based classification methods can deal with encrypted traffic well; however, existing GNN-based approaches ignore the relationship between client or server packets. In this paper, we design a network traffic topology based on GCN, called the Flow Mapping Graph (FMG). The FMG establishes sequential edges between vertices by the arrival order of packets, and establishes jump-order edges between vertices by connecting packets of the same direction in different bursts. It not only reflects the temporal characteristics of packets but also strengthens the relationship between client or server packets. Based on the FMG, a Traffic Mapping Classification model (TMC-GCN) is designed, which can automatically capture and learn the feature and structure information of the top vertex in the FMG. The TMC-GCN model is used to classify encrypted traffic: the encrypted traffic classification problem is transformed into a graph classification problem, which can effectively deal with data from different data sources and application scenarios. By comparing the performance of TMC-GCN with other classical models on four public datasets, including CICIoT2023, ISCXVPN2016, CICAAGM2017, and GraphDapp, the effectiveness of the FMG algorithm is verified. The experimental results show that the TMC-GCN model achieves an accuracy of 96.13%, a recall of 95.04%, and an F1 score of 94.54%.
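The two edge types described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the packet representation (a list of +1/-1 directions in arrival order) and the rule for linking same-direction bursts are assumptions made for the sketch.

```python
# Illustrative sketch of Flow Mapping Graph (FMG) construction:
# sequential edges follow packet arrival order; jump-order edges link
# packets of the same direction across bursts.

def build_fmg(directions):
    """directions: list of +1 (client->server) / -1 (server->client),
    in packet arrival order. Returns (num_vertices, set of edges)."""
    n = len(directions)
    edges = set()
    # Sequential edges: consecutive packets in arrival order.
    for i in range(n - 1):
        edges.add((i, i + 1))
    # Bursts: maximal runs of packets sharing one direction.
    bursts, start = [], 0
    for i in range(1, n + 1):
        if i == n or directions[i] != directions[start]:
            bursts.append(list(range(start, i)))
            start = i
    # Jump-order edges: connect a burst to the next burst with the
    # same direction (skipping the opposite-direction burst between).
    for b in range(len(bursts) - 2):
        if directions[bursts[b][0]] == directions[bursts[b + 2][0]]:
            edges.add((bursts[b][-1], bursts[b + 2][0]))
    return n, edges

n, e = build_fmg([+1, +1, -1, +1, +1, -1])
```

For the six-packet example, the jump-order edges connect the last client packet of the first burst to the first client packet of the second client burst, and likewise for the two server packets.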
Maximizing resource utilization efficiency and guaranteeing the quality of service (QoS) of users through network selection are key issues for heterogeneous network operators, but the resources occupied by users in different networks cannot be compared directly. This paper proposes a network selection algorithm for heterogeneous networks. Firstly, the concept of equivalent bandwidth is proposed, through which the actual resources occupied by users with certain QoS requirements in different networks can be compared directly. Then the concept of network applicability is defined to express the ability of networks to support different services. The proposed algorithm first evaluates whether a network has the equivalent bandwidth required by the user, and then prioritizes networks with poor applicability, avoiding the situation in which the entire network still has residual resources but advanced services cannot be admitted. The simulation results show that the proposed algorithm achieves better performance than the baselines in terms of reducing call blocking probability and improving network resource utilization efficiency.
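The selection rule described above can be sketched as follows. The data layout, field names, and numeric values are illustrative assumptions; the paper's own equivalent-bandwidth computation is not reproduced here.

```python
# Hypothetical sketch of the described selection rule: admit the user
# to a network that has enough equivalent bandwidth, preferring the
# network with the *poorest* applicability so that broadly capable
# networks remain free for advanced services.

def select_network(networks, required_eq_bw):
    """networks: list of dicts with 'free_eq_bw' (remaining equivalent
    bandwidth) and 'applicability' (higher = supports more services).
    Returns the chosen network index, or None if the call is blocked."""
    candidates = [i for i, nw in enumerate(networks)
                  if nw["free_eq_bw"] >= required_eq_bw]
    if not candidates:
        return None  # call blocked
    # Among feasible networks, pick the lowest applicability score.
    return min(candidates, key=lambda i: networks[i]["applicability"])

nets = [{"free_eq_bw": 5.0, "applicability": 0.9},
        {"free_eq_bw": 3.0, "applicability": 0.4},
        {"free_eq_bw": 1.0, "applicability": 0.2}]
choice = select_network(nets, required_eq_bw=2.0)
```

In the example, the least-applicable network that can still carry the request is chosen, leaving the high-applicability network available for later advanced services.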
The Internet of Things (IoT) has gained popularity and is widely used in modern society. The growth in the sizes of IoT networks, with more internet-connected devices, has led to concerns regarding privacy and security, in particular related to the routing protocol for low-power and lossy networks (RPL), which lacks robust security functions. Many IoT devices in RPL networks are resource-constrained, with limited computing power, bandwidth, memory, and battery life. This exposes them to various vulnerabilities and potential attacks, such as DIO neighbor suppression attacks. This type of attack specifically targets neighboring nodes through DIO messages and poses a significant security threat to RPL-based IoT networks. Recent studies have proposed methods for detecting and mitigating this attack; however, they produce high false-positive and false-negative rates in detection tasks and cannot fully protect RPL networks against this attack type. In this paper, we propose a novel fuzzy logic-based intrusion detection scheme to secure the RPL protocol (FLSec-RPL) against this attack. Our method comprises three consecutive key phases: (1) it tracks attack activity variables to determine potential malicious behaviors; (2) it performs fuzzy logic-based intrusion detection to identify malicious neighbor nodes; and (3) it provides a detection validation and blocking mechanism to ensure that both malicious and suspected malicious nodes are accurately detected and blocked. To evaluate the effectiveness of our method, we conduct comprehensive experiments across diverse scenarios, including Static-RPL and Mobile-RPL networks, and compare the performance of our proposed method with that of state-of-the-art methods.
The results demonstrate that our method outperforms existing methods in terms of detection accuracy, F1 score, power consumption, end-to-end delay, and packet delivery ratio.
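A fuzzy-logic decision step of the kind phase (2) describes can be sketched as below. The membership functions, the two tracked variables, and the 0.5 threshold are all assumptions for illustration, not the paper's actual rule base.

```python
# Illustrative fuzzy-logic detection step in the spirit of FLSec-RPL
# phase (2): score a neighbor from two tracked activity variables.

def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def malicious_score(dio_rate, suppressed_ratio):
    """Fuzzy AND (min t-norm) of two 'high' memberships. Both tracked
    variables and their breakpoints are hypothetical."""
    high_rate = tri(dio_rate, 5.0, 20.0, 40.0)
    high_supp = tri(suppressed_ratio, 0.2, 0.7, 1.2)
    return min(high_rate, high_supp)

def classify(dio_rate, suppressed_ratio, threshold=0.5):
    s = malicious_score(dio_rate, suppressed_ratio)
    return "malicious" if s >= threshold else "benign"
```

A node flooding DIO messages while suppressing many neighbors scores near 1.0 and is flagged; a quiet node scores 0.0. In the scheme itself, phase (3) would then validate and block the flagged node.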
As the economy grows, environmental issues are becoming increasingly severe, making the promotion of green behavior more urgent. Information dissemination and policy regulation play crucial roles in influencing and amplifying the spread of green behavior across society. To this end, a novel three-layer model in multilayer networks is proposed. In the model, the information layer describes green information spreading, the physical contact layer depicts green behavior propagation, and policy regulation is represented by an isolated node beneath the two layers. We then deduce the green behavior threshold for the three-layer model using the microscopic Markov chain approach. Moreover, considering that some individuals are more likely to influence others or become green nodes, and that the capacity of policy regulation is limited, an optimal scheme is given that allocates policy interventions to most effectively promote green behavior. Subsequently, simulations are performed to validate the theoretical results of the new model. They reveal that policy regulation can promote the outbreak and prevalence of green behavior, and that green behavior is more likely to spread and become prevalent in the SF network than in the ER network. Additionally, the optimal allocation is highly successful in facilitating the dissemination of green behavior. In practice, the optimal allocation strategy could prioritize interventions at critical nodes or regions, such as highly connected urban areas, where the impact of green behavior promotion would be most significant.
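The coupling between the two spreading layers can be illustrated with a minimal mean-field iteration: green information spreads on the information layer and raises the adoption rate on the contact layer. All rates and the update rule below are illustrative assumptions; the paper's analysis uses the microscopic Markov chain approach on explicit network structure.

```python
# Minimal mean-field sketch of the two coupled layers: informed
# individuals adopt green behavior at a higher rate (beta_aware)
# than uninformed ones (beta_unaware).

def step(p_info, p_behav, lam=0.4, beta_aware=0.5, beta_unaware=0.1,
         delta=0.1, mu=0.05):
    """One update of (informed fraction, green-behavior fraction).
    lam/delta: information spread/forgetting rates;
    beta/mu: behavior adoption/abandonment rates (all hypothetical)."""
    new_info = p_info + lam * (1 - p_info) * p_info - delta * p_info
    # Effective adoption rate depends on how widely information spread.
    beta = beta_aware * p_info + beta_unaware * (1 - p_info)
    new_behav = p_behav + beta * (1 - p_behav) * p_behav - mu * p_behav
    return new_info, new_behav

p_i, p_b = 0.05, 0.01  # small initial seeds
for _ in range(200):
    p_i, p_b = step(p_i, p_b)
```

With these rates, both fractions settle at a high endemic level; lowering the adoption rates below their abandonment rates would instead drive green behavior extinct, which is the threshold effect the model formalizes.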
Future 6G communications will open up opportunities for innovative applications, including Cyber-Physical Systems, edge computing, Industry 5.0, and digital agriculture. While automation creates efficiencies, it can also create new cyber threats, such as vulnerabilities in trust and malicious node injection. Denial-of-Service (DoS) attacks can halt many forms of operation by overwhelming networks and systems with data noise. Current anomaly detection methods require extensive software changes and only detect static threats, and data collection, while important for accuracy, is often a slow, tedious, and sometimes inefficient process. This paper proposes a new wavelet transform-assisted Bayesian deep learning based probabilistic (WT-BDLP) approach to mitigate malicious data injection attacks in 6G edge networks. The proposed approach combines outlier detection based on a Bayesian learning conditional variational autoencoder (Bay-LCVariAE) with traffic pattern analysis based on the continuous wavelet transform (CWT). The Bay-LCVariAE framework allows for probabilistic modelling of generative features, capturing how features of interest change over time and space and supporting the recognition of anomalies. The CWT, in turn, provides multi-resolution spectral analysis and permits temporally relevant frequency pattern recognition. Experimental testing showed that the flexibility of the Bayesian probabilistic framework offers a vast improvement in anomaly detection accuracy over existing methods, with a maximum anomaly-recognition accuracy of 98.21%.
Quantum machine learning is an important application of quantum computing in the era of noisy intermediate-scale quantum devices. Domain adaptation (DA) is an effective method for addressing the distribution discrepancy between the training data and the real data when a neural network model is deployed. In this paper, we propose a variational quantum domain adaptation method inspired by the quantum convolutional neural network, named variational quantum domain adaptation (VQDA). The data are first uploaded by a 'quantum coding module'; the feature information is then extracted by several 'quantum convolution layers' and 'quantum pooling layers', together named the 'Feature Extractor'. Subsequently, the labels and domains of the samples are obtained by the 'quantum fully connected layer'. With a gradient reversal module, the trained 'Feature Extractor' learns to extract features that cannot be distinguished between the source and target domains. Simulations on a local computer and on the IBM Quantum Experience (IBM Q) platform via Qiskit show the effectiveness of the proposed method. The results show that VQDA (with 8 qubits) achieves 91.46% average classification accuracy on the DA task between MNIST and USPS (in both directions), 91.16% average classification accuracy for gray-scale and color images (with 10 qubits), and 69.25% average classification accuracy on the DA task for color images (also with 10 qubits). VQDA achieves a 9.14% improvement in average classification accuracy over its corresponding classical domain adaptation method with the same parameter scale across different DA tasks. Moreover, when the quantum and classical DA methods reach similar classification accuracies, VQDA reduces the parameter scale to 43%.
Next-generation 6G networks seek to provide ultra-reliable and low-latency communications, necessitating network designs that are intelligent and adaptable. Network slicing has developed as an effective option for resource separation and service-level differentiation inside virtualized infrastructures. Nonetheless, sustaining elevated Quality of Service (QoS) in dynamic, resource-limited systems poses significant hurdles. This study introduces an innovative packet-based proactive end-to-end (ETE) resource management system that facilitates network slicing with improved resilience and proactivity. To overcome the drawbacks of conventional reactive systems, we develop a cost-efficient slice provisioning architecture that takes into account limits on radio, processing, and transmission resources. The optimization problem is non-convex, NP-hard, and requires online resolution in a dynamic setting. We offer a hybrid solution that integrates an advanced Deep Reinforcement Learning (DRL) methodology with an Improved Manta-Ray Foraging Optimization (ImpMRFO) algorithm. The ImpMRFO utilizes Chebyshev chaotic mapping to form a varied starting population and incorporates Lévy flight-based stochastic movement to avert premature convergence, facilitating improved exploration-exploitation trade-offs. The DRL model perpetually acquires optimal provisioning strategies via agent-environment interactions, whereas the ImpMRFO enhances policy performance for effective slice provisioning. The solution, developed in Python, is evaluated across several 6G slicing scenarios that include varied QoS profiles and traffic requirements. Experimental findings reveal that the proactive ETE system outperforms DRL models and non-resilient provisioning techniques: our technique increases the PSSRr, decreases average latency, and optimizes resource use. These results demonstrate that the hybrid architecture is feasible for robust, real-time, and scalable slice management in future 6G networks.
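The Chebyshev-chaotic initialization mentioned above can be sketched as follows. The map order, the seed value, and the scaling to the search bounds are assumptions; the paper does not specify these details in the abstract.

```python
# Sketch of a Chebyshev-chaotic-map population initializer of the kind
# ImpMRFO reportedly uses for a diverse starting population.
import math

def chebyshev_population(pop_size, dim, lo, hi, x0=0.7, order=4):
    """Iterate the Chebyshev map x_{k+1} = cos(order * acos(x_k)),
    which stays in [-1, 1], then rescale each value to [lo, hi].
    x0 and order are hypothetical choices."""
    pop, x = [], x0
    for _ in range(pop_size):
        row = []
        for _ in range(dim):
            x = math.cos(order * math.acos(x))  # chaotic step on [-1, 1]
            row.append(lo + (x + 1.0) / 2.0 * (hi - lo))
        pop.append(row)
    return pop

pop = chebyshev_population(5, 3, 0.0, 1.0)
```

Compared with uniform random seeding, the chaotic sequence is deterministic for a given seed yet spreads points broadly over the search space, which is the stated motivation for using it.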
The Fine-grained Image Recognition (FGIR) task is dedicated to distinguishing similar sub-categories that belong to the same super-category, such as bird species and car types. In order to highlight visual differences, existing FGIR works often follow two steps: discriminative sub-region localization and local feature representation. However, these works pay less attention to global context information, neglecting the fact that in challenging scenarios subtle visual differences can be highlighted by exploiting the spatial relationship among different sub-regions from a global viewpoint. Therefore, in this paper, we consider both global and local information for FGIR, and propose a collaborative teacher-student strategy to reinforce and unify the two types of information. Our framework is implemented mainly with convolutional neural networks and is referred to as the Teacher-Student Based Attention Convolutional Neural Network (T-S-ACNN). For fine-grained local information, we choose the classic Multi-Attention Network (MA-Net) as our baseline, and propose a type of boundary constraint to further reduce background noise in the local attention maps. In this way, the discriminative sub-regions tend to appear in the area occupied by fine-grained objects, leading to more accurate sub-region localization. For fine-grained global information, we design a graph convolution based Global Attention Network (GA-Net), which combines the local attention maps extracted from MA-Net with non-local techniques to explore the spatial relationship among sub-regions. Finally, we develop a collaborative teacher-student strategy to adaptively determine the attended roles and optimization modes, so as to enhance the cooperative reinforcement of MA-Net and GA-Net. Extensive experiments on the CUB-200-2011, Stanford Cars, and FGVC Aircraft datasets illustrate the promising performance of our framework.
To ensure the access security of 6G, physical-layer authentication (PLA) leverages the randomness and space-time-frequency uniqueness of the channel to provide unique identity signatures for transmitters. Furthermore, the introduction of artificial intelligence (AI) facilitates the learning of the distribution characteristics of channel fingerprints, effectively addressing the uncertainties and unknown dynamic challenges in wireless link modeling. This paper reviews representative AI-enabled PLA schemes and proposes a graph neural network (GNN)-based PLA approach in response to the challenges existing methods face in identifying mobile users. Simulation results demonstrate that the proposed method outperforms six baseline schemes in terms of authentication accuracy. Finally, this paper outlines future development directions for PLA.
With the rise of encrypted traffic, traditional network analysis methods have become less effective, leading to a shift towards deep learning-based approaches. Among these, multimodal learning-based classification methods have gained attention due to their ability to leverage diverse feature sets from encrypted traffic, improving classification accuracy. However, existing research predominantly relies on late fusion techniques, which hinder the full utilization of deep features within the data. To address this limitation, we propose a novel multimodal encrypted traffic classification model that synchronizes modality fusion with multiscale feature extraction. Specifically, our approach performs real-time fusion of modalities at each stage of feature extraction, enhancing the feature representation at each level and preserving inter-level correlations for more effective learning. This continuous fusion strategy improves the model's ability to detect subtle variations in encrypted traffic, while boosting its robustness and adaptability to evolving network conditions. Experimental results on two real-world encrypted traffic datasets demonstrate that our method achieves classification accuracies of 98.23% and 97.63%, outperforming existing multimodal learning-based methods.
Due to the restricted satellite payloads in LEO mega-constellation networks (LMCNs), remote sensing image analysis, online learning, and other big data services call for onboard distributed processing (OBDP). In existing technologies, the efficiency of big data applications (BDAs) in distributed systems hinges on stable, low-latency links between worker nodes. However, LMCNs, with highly dynamic nodes and long-distance links, cannot provide these conditions, which makes the performance of OBDP hard to measure intuitively. To bridge this gap, a multidimensional simulation platform is indispensable: one that can simulate the network environment of LMCNs and place BDAs in it for performance testing. Using STK's APIs and a parallel computing framework, we achieve real-time simulation of thousands of satellite nodes, which are mapped to application nodes through software defined networking (SDN) and container technologies. We elaborate on the architecture and mechanism of the simulation platform, and take Starlink and Hadoop as realistic examples for simulations. The results indicate that LMCNs have dynamic end-to-end latency that fluctuates periodically with the constellation's movement. Compared to ground data center networks (GDCNs), LMCNs degrade computing and storage job throughput, which can be alleviated by the use of erasure codes and data flow scheduling among worker nodes.
Cooperative utilization of multidimensional resources, including cache, power, and spectrum, in satellite-terrestrial integrated networks (STINs) provides a feasible approach for massive streaming media content delivery over a seamless global coverage area. However, the onboard resources of a single satellite are extremely limited and lack interaction with those of others. In this paper, we design a network model with two-layered cache deployment, i.e., a satellite layer and a ground base station layer, and two types of sharing links, i.e., terrestrial-satellite sharing (TSS) links and inter-satellite sharing (ISS) links, to enhance the capability of cooperative delivery over STINs. We use rateless codes for divided-packet content transmission, and derive the total energy efficiency (EE) of the whole transmission procedure, defined as the ratio of traffic offloading to energy consumption. We formulate two optimization problems for maximizing EE in different sharing scenarios (TSS-only and TSS-ISS), and propose two optimized algorithms to obtain the optimal content placement matrices, respectively. Simulation results demonstrate that enabling sharing links with optimized cache placement yields more than a twofold improvement in EE over traditional placement schemes. In particular, TSS-ISS schemes achieve higher EE than TSS-only schemes when the number of satellites is sufficient and the inter-satellite distances are small.
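The EE metric defined above (traffic offloading over energy consumption) reduces to a simple ratio; the helper below makes the definition concrete. The traffic and energy figures are made-up placeholders, not values from the paper's system model.

```python
# The energy-efficiency metric from the abstract:
# EE = offloaded traffic / energy consumed (bits per joule).

def energy_efficiency(offloaded_bits, energy_joules):
    if energy_joules <= 0:
        raise ValueError("energy must be positive")
    return offloaded_bits / energy_joules

# Hypothetical example: 2.4 Gbit offloaded at a cost of 1.2 kJ.
ee = energy_efficiency(offloaded_bits=2.4e9, energy_joules=1.2e3)
```

Under this definition, a placement scheme improves EE either by offloading more traffic for the same energy (e.g., better cache hit rates over sharing links) or by delivering the same traffic with less energy.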
Hyper- and multi-spectral image fusion is an important technology for producing hyper-spectral and hyper-resolution images, and it generally depends on the spectral response function and the point spread function. However, little work has been devoted to the estimation of these two degradation functions. To learn the two functions from the image pairs to be fused, we propose a Dirichlet network in which both functions are properly constrained. Specifically, the spectral response function is constrained to be positive, while a Dirichlet distribution along with a total variation regularizer is imposed on the point spread function. To the best of our knowledge, this is the first time a neural network and a Dirichlet regularization have been investigated for estimating the degradation functions. Both image degradation and fusion experiments demonstrate the effectiveness and superiority of the proposed Dirichlet network.
UAV-aided cellular networks, millimeter wave (mm-wave) communications, and multi-antenna techniques are viewed as promising components of the solution for beyond-5G (B5G) and even 6G communications. By leveraging the power of stochastic geometry, this paper aims at providing an effective framework for modeling and analyzing a UAV-aided heterogeneous cellular network, where terrestrial base stations (TBSs) and UAV base stations (UBSs) coexist, and the UBSs are equipped with mm-wave and multi-antenna techniques. By modeling the TBSs as a Poisson point process (PPP) and the UBSs as a Matern hard-core point process of type II (MPH-II), approximate but accurate analytical results for the average rate of the typical user of both tiers are derived through an approximation method based on the mean interference-to-signal ratio (MISR) gain. The influence of the relevant parameters is discussed in detail, and some insights into network deployment and optimization are revealed. Numerical results show that several trade-offs are worth considering, such as the antenna array size, the altitude of the UAVs, and the power control factor of the UBSs.
The reliability of a network is an important indicator for maintaining communication and ensuring its stable operation. Therefore, the assessment of reliability in underlying interconnection networks has become an increasingly important research issue. However, at present, the reliability assessment of many interconnection networks is not yet accurate, which inevitably weakens their fault tolerance and diagnostic capabilities. To improve network reliability, researchers have proposed various methods and strategies for precise assessment. This paper introduces a novel family of interconnection networks called general matching composed networks (gMCNs), which is based on common characteristics of network topology. After analyzing the topological properties of gMCNs, we establish a relationship between the super connectivity and the conditional diagnosability of gMCNs. Furthermore, we assess the reliability of gMCNs and determine the conditional diagnosability of many interconnection networks.
The vector vortex beam (VVB) has attracted significant attention due to its intrinsic diversity of information and has found great applications in both classical and quantum communications. However, a VVB is unavoidably affected by atmospheric turbulence (AT) when it propagates through a free-space optical communication environment, which results in detection errors at the receiver. In this paper, we propose a VVB classification scheme to detect VVBs with continuously changing polarization states under AT. A diffractive deep neural network (DDNN) is designed and trained to classify the intensity distribution of the input distorted VVBs, with the horizontal polarization direction of the input distorted beam adopted as the feature for classification through the DDNN. Numerical simulations and experimental results demonstrate that the proposed scheme has high accuracy in classification tasks: the energy distribution percentage remains above 95% from weak to medium AT, and the classification accuracy remains above 95% for various turbulence strengths. The scheme also converges faster and is more accurate than one based on a convolutional neural network.
This paper investigates a wireless powered and backscattering enabled sensor network based on a non-linear energy harvesting model, where the power beacon (PB) delivers energy signals to wireless sensors to enable their passive backscattering and active transmission to the access point (AP). We propose an efficient time scheduling scheme for network performance enhancement, based on which each sensor can always harvest energy from the PB over the entire block, except during its time slots allocated for passive and active information delivery. Considering that the PB and the wireless sensors belong to two selfish service providers, we use a Stackelberg game to model the energy interaction between them. To address the non-convexity of the leader-level problem, we decompose the original problem into two subproblems and solve them iteratively in an alternating manner. Specifically, successive convex approximation, semi-definite relaxation (SDR), and variable substitution techniques are applied to find a near-optimal solution. To evaluate the performance loss caused by the interaction between the two providers, we further investigate the social welfare maximization problem. Numerical results demonstrate that, compared to the benchmark schemes, the proposed scheme can achieve up to 35.4% and 38.7% utility gains for the leader and the follower, respectively.
This paper investigates the age of information (AoI)-based multi-user mobile edge computing (MEC) network with a partial offloading mode. The weighted sum AoI (WSA) is first analyzed and derived, and then a WSA minimization problem is formulated by jointly optimizing user scheduling and data assignment. Because the WSA has no analytic expression with respect to the optimization variables and future network information is unknowable, the problem cannot be solved with known solution methods. Therefore, an online Joint Partial Offloading and User Scheduling Optimization (JPO-USO) algorithm is proposed, which transforms the original problem into a single-slot data assignment sub-problem and a single-slot user scheduling sub-problem and solves the two sub-problems separately. We analyze the computational complexity of the proposed JPO-USO algorithm, which is O(N), with N being the number of users. Simulation results show that the proposed JPO-USO algorithm achieves better AoI performance than various baseline methods, and that both the user's data assignment and the user's AoI should be jointly taken into account to decrease the system WSA when scheduling users.
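The weighted-sum-AoI bookkeeping underlying such a scheduling problem can be sketched as follows. The one-slot age-reset rule and the unit weights are simplifying assumptions for illustration; the paper's partial-offloading model couples the reset to data assignment as well.

```python
# Sketch of weighted sum AoI (WSA) accounting: each slot, every user's
# age grows by one, and the scheduled user's age resets to one.

def wsa_over_time(schedule, weights, horizon):
    """schedule: dict mapping slot index -> user index served that slot.
    Returns the WSA accumulated over `horizon` slots."""
    n = len(weights)
    age = [1] * n          # all users start with fresh information
    total = 0.0
    for t in range(horizon):
        total += sum(w * a for w, a in zip(weights, age))
        served = schedule.get(t)
        age = [1 if i == served else a + 1 for i, a in enumerate(age)]
    return total

# Round-robin over two equally weighted users vs. always serving user 0:
rr = wsa_over_time({t: t % 2 for t in range(10)}, [1.0, 1.0], 10)
one = wsa_over_time({t: 0 for t in range(10)}, [1.0, 1.0], 10)
```

Even this toy case shows why scheduling matters: starving one user lets its age, and hence the WSA, grow without bound, which is the effect the JPO-USO algorithm's joint scheduling is designed to avoid.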
Along with the proliferating research interest in semantic communication (SemCom), joint source channel coding (JSCC) has dominated attention due to its widely assumed advantage in efficiently delivering information semantics. Nevertheless, this paper challenges the conventional JSCC paradigm and advocates adopting separate source channel coding (SSCC) to enjoy a greater underlying degree of freedom for optimization. We demonstrate that SSCC, leveraging the strengths of a Large Language Model (LLM) for source coding complemented by an Error Correction Code Transformer (ECCT) for channel coding, offers superior performance over JSCC. Our proposed framework also effectively highlights the compatibility challenges between SemCom approaches and digital communication systems, particularly concerning the resource costs associated with the transmission of high-precision floating point numbers. Through comprehensive evaluations, we establish that, assisted by LLM-based compression and ECCT-enhanced error correction, SSCC remains a viable and effective solution for modern communication systems. In other words, separate source channel coding is still what we need.
Developing sensorless techniques for estimating battery expansion is essential for effective mechanical state monitoring, improving the accuracy of digital twin simulation and abnormality detection. Therefore, this paper presents a data-driven approach to expansion estimation using electromechanically coupled models with machine learning. The proposed method integrates reduced-order impedance models with data-driven mechanical models, coupling the electrochemical and mechanical states through the state of charge (SOC) and mechanical pressure within a state estimation framework. The coupling relationship was established through experimental insights into pressure-related impedance parameters and the nonlinear mechanical behavior with SOC and pressure. The data-driven model was interpreted by introducing a novel swelling coefficient, defined by component stiffnesses, to capture the nonlinear mechanical behavior across various mechanical constraints. Sensitivity analysis of the impedance model shows that updating model parameters with pressure can reduce the mean absolute error of simulated voltage by 20 mV and the SOC estimation error by 2%. The results demonstrate the model's estimation capabilities, achieving a root mean square error of less than 1 kPa when the maximum expansion force ranges from 30 kPa to 120 kPa, outperforming calibrated stiffness models and other machine learning techniques. The model's robustness and generalizability are further supported by its effective handling of SOC estimation and pressure measurement errors. This work highlights the importance of the proposed framework in enhancing state estimation and fault diagnosis for lithium-ion batteries.
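The idea of a stiffness-defined swelling coefficient can be illustrated with a linear springs-in-series picture: the cell and its fixture deform together, so the measured expansion force is the free swelling displacement scaled by their series stiffness. The linear model, units, and all values below are illustrative assumptions, not the paper's model.

```python
# Hypothetical springs-in-series illustration of a stiffness-defined
# swelling coefficient: cell and fixture stiffnesses combine in series.

def series_stiffness(k_cell, k_fixture):
    """Effective stiffness of two springs in series (same units as inputs)."""
    return 1.0 / (1.0 / k_cell + 1.0 / k_fixture)

def expansion_force(free_swelling, k_cell, k_fixture):
    """Force produced when a free swelling displacement is constrained
    by the series combination of cell and fixture stiffness."""
    return series_stiffness(k_cell, k_fixture) * free_swelling

# Illustrative numbers only: 0.2 units of free swelling against
# cell/fixture stiffnesses of 600 and 300 force-units per unit length.
f = expansion_force(0.2, k_cell=600.0, k_fixture=300.0)
```

This captures the qualitative point the abstract makes: the same swelling produces different expansion forces under different mechanical constraints, which is why the coefficient is defined from component stiffnesses rather than fixed once.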
Funding: supported by the National Key Research and Development Program of China (No. 2023YFA1009500).
Abstract: With the emphasis on user privacy and communication security, encrypted traffic has increased dramatically, which brings great challenges to traffic classification. GNN-based classification methods can deal with encrypted traffic well. However, existing GNN-based approaches ignore the relationship between client or server packets. In this paper, we design a network traffic topology based on GCN, called the Flow Mapping Graph (FMG). The FMG establishes sequential edges between vertices by the arrival order of packets and establishes jump-order edges between vertices by connecting packets in different bursts with the same direction. It not only reflects the temporal characteristics of packets but also strengthens the relationship between client or server packets. Based on the FMG, a Traffic Mapping Classification model (TMC-GCN) is designed, which can automatically capture and learn the feature and structure information of the top vertex in the FMG. The TMC-GCN model is used to classify the encrypted traffic. The encrypted-stream classification problem is transformed into a graph classification problem, which can effectively deal with data from different data sources and application scenarios. By comparing the performance of TMC-GCN with other classical models on four public datasets, including CICIoT2023, ISCXVPN2016, CICAAGM2017, and GraphDapp, the effectiveness of the FMG algorithm is verified. The experimental results show that the TMC-GCN model achieves an accuracy of 96.13%, a recall of 95.04%, and an F1 score of 94.54%.
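The two edge rules of the FMG can be sketched as follows. This is a minimal illustration only: the direction encoding (+1 for client-to-server, -1 for server-to-client), the function name, and treating a burst as a maximal run of same-direction packets are assumptions, not the authors' exact construction:

```python
def build_fmg(directions):
    """Build a toy Flow Mapping Graph. Vertices are packets indexed by
    arrival order; directions[i] is +1 (client -> server) or -1."""
    n = len(directions)
    edges = set()
    # Sequential edges: consecutive packets in arrival order.
    for i in range(n - 1):
        edges.add((i, i + 1))
    # Assign burst ids: a burst is a maximal run of same-direction packets.
    burst = [0] * n
    for i in range(1, n):
        burst[i] = burst[i - 1] + (directions[i] != directions[i - 1])
    # Jump-order edges: link each packet to the next same-direction
    # packet that lies in a different burst.
    for i in range(n):
        for j in range(i + 1, n):
            if directions[j] == directions[i] and burst[j] != burst[i]:
                edges.add((i, j))
                break
    return edges
```

The jump-order edges are what connect client (or server) packets across bursts, which is the relationship the abstract says prior GNN approaches ignore.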
Abstract: Maximizing resource utilization efficiency and guaranteeing the quality of service (QoS) of users through network selection are key issues for heterogeneous network operators, but the resources occupied by users in different networks cannot be compared directly. This paper proposes a network selection algorithm for heterogeneous networks. Firstly, the concept of equivalent bandwidth is proposed, through which the actual resources occupied by users with certain QoS requirements in different networks can be compared directly. Then the concept of network applicability is defined to express the ability of each network to support different services. The proposed network selection algorithm first evaluates whether a network has enough equivalent bandwidth for the user, and then prioritizes networks with poor applicability, to avoid the situation in which residual resources remain in the entire network but advanced services cannot be admitted. The simulation results show that the proposed algorithm achieves better performance than the baselines in terms of reducing call blocking probability and improving network resource utilization efficiency.
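The admission rule described above can be sketched as follows; the data layout, the scalar applicability score, and the function name are illustrative assumptions rather than the paper's formulation:

```python
def select_network(networks, demand):
    """Admission rule sketched from the abstract: `networks` maps
    name -> (residual equivalent bandwidth, applicability score).
    Serve the request on a feasible network with the LOWEST
    applicability, keeping versatile networks free for advanced
    services; return None (call blocked) if no network fits."""
    feasible = [(app, name) for name, (bw, app) in networks.items()
                if bw >= demand]
    return min(feasible)[1] if feasible else None
```

Because equivalent bandwidth puts all networks on one axis, the feasibility check `bw >= demand` is a direct comparison even across heterogeneous technologies.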
Funding: funded by a Royal Scholarship from the Her Royal Highness Princess Maha Chakri Sirindhorn Education Project to Cambodia for 2020, Faculty of the College of Computing, Khon Kaen University.
Abstract: The Internet of Things (IoT) has gained popularity and is widely used in modern society. The growth in the sizes of IoT networks, with more internet-connected devices, has led to concerns regarding privacy and security, in particular related to the routing protocol for low-power and lossy networks (RPL), which lacks robust security functions. Many IoT devices in RPL networks are resource-constrained, with limited computing power, bandwidth, memory, and battery life. This causes them to face various vulnerabilities and potential attacks, such as DIO neighbor suppression attacks. This type of attack specifically targets neighboring nodes through DIO messages and poses a significant security threat to RPL-based IoT networks. Recent studies have proposed methods for detecting and mitigating this attack; however, they produce high false-positive and false-negative rates in detection tasks and cannot fully protect RPL networks against this attack type. In this paper, we propose a novel fuzzy logic-based intrusion detection scheme to secure the RPL protocol (FLSec-RPL) against this attack. Our method is built of three key phases executed consecutively: (1) it tracks attack activity variables to determine potential malicious behaviors; (2) it performs fuzzy logic-based intrusion detection to identify malicious neighbor nodes; and (3) it provides a detection validation and blocking mechanism to ensure that both malicious and suspected malicious nodes are accurately detected and blocked. To evaluate the effectiveness of our method, we conduct comprehensive experiments across diverse scenarios, including Static-RPL and Mobile-RPL networks. We compare the performance of our proposed method with that of state-of-the-art methods. The results demonstrate that our method outperforms existing methods in terms of detection accuracy, F1 score, power consumption, end-to-end delay, and packet delivery ratio metrics.
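A toy version of the fuzzy inference in phase (2) might look like the following; the membership shapes, the thresholds, and the single rule are invented for illustration and are not the FLSec-RPL rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def maliciousness(dio_rate, drop_rate):
    """Fire one fuzzy rule with the min t-norm:
    IF dio_rate is HIGH AND drop_rate is HIGH THEN node is malicious.
    Inputs are attack activity variables normalized around 1.0; the
    membership thresholds (0.3, 1.0, 1.7) are invented."""
    return min(tri(dio_rate, 0.3, 1.0, 1.7), tri(drop_rate, 0.3, 1.0, 1.7))
```

A real rule base would aggregate several such rules and defuzzify; phase (3) would then confirm nodes whose score exceeds a blocking threshold.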
Funding: supported by the National Natural Science Foundation of China (Grant No. 62371253) and the Postgraduate Research and Practice Innovation Program of Jiangsu Province, China (Grant No. KYCX24_1179).
Abstract: As the economy grows, environmental issues are becoming increasingly severe, making the promotion of green behavior more urgent. Information dissemination and policy regulation play crucial roles in influencing and amplifying the spread of green behavior across society. To this end, a novel three-layer model in multilayer networks is proposed. In the novel model, the information layer describes green information spreading, the physical contact layer depicts green behavior propagation, and policy regulation is symbolized by an isolated node beneath the two layers. Then, we deduce the green behavior threshold for the three-layer model using the microscopic Markov chain approach. Moreover, given that some individuals are more likely to influence others or become green nodes, and that the capacity of policy regulation is limited, an optimal scheme is given that optimizes policy interventions to most effectively promote green behavior. Subsequently, simulations are performed to validate the theoretical results of the new model. They reveal that policy regulation can promote the prevalence and outbreak of green behavior, and that green behavior is more likely to spread and become prevalent in the SF network than in the ER network. Additionally, optimal allocation is highly successful in facilitating the dissemination of green behavior. In practice, the optimal allocation strategy could prioritize interventions at critical nodes or regions, such as highly connected urban areas, where the impact of green behavior promotion would be most significant.
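The microscopic Markov chain approach can be illustrated with a single-layer, SIS-type stand-in for the full three-layer model; the update below is a generic MMCA step, not the paper's coupled information/behavior/policy equations:

```python
def mmca_step(p, adj, beta, mu):
    """One microscopic-Markov-chain update for SIS-type adoption:
    p[i] is the probability that node i is 'green', beta the
    per-neighbor adoption rate, mu the abandonment rate, adj an
    adjacency matrix (list of 0/1 rows)."""
    n = len(p)
    nxt = [0.0] * n
    for i in range(n):
        q = 1.0  # probability that no green neighbor converts node i
        for j in range(n):
            if adj[i][j]:
                q *= 1.0 - beta * p[j]
        nxt[i] = (1.0 - p[i]) * (1.0 - q) + p[i] * (1.0 - mu)
    return nxt
```

Iterating this map to a fixed point gives the steady-state prevalence; the spreading threshold appears where the all-zero fixed point loses stability, which is the kind of threshold the paper derives for its three-layer system.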
Abstract: Future 6G communications will open up opportunities for innovative applications, including Cyber-Physical Systems, edge computing, supporting Industry 5.0, and digital agriculture. While automation is creating efficiencies, it can also create new cyber threats, such as vulnerabilities in trust and malicious node injection. Denial-of-Service (DoS) attacks can stop many forms of operations by overwhelming networks and systems with data noise. Current anomaly detection methods require extensive software changes and only detect static threats. Data collection is important for accuracy, but it is often a slow, tedious, and sometimes inefficient process. This paper proposes a new wavelet transform-assisted Bayesian deep learning based probabilistic (WT-BDLP) approach to mitigate malicious data injection attacks in 6G edge networks. The proposed approach combines outlier detection based on a Bayesian learning conditional variational autoencoder (Bay-LCVariAE) and traffic pattern analysis based on the continuous wavelet transform (CWT). The Bay-LCVariAE framework allows for probabilistic modelling of generative features, capturing how features of interest change over time and space and enabling the recognition of anomalies. Similarly, the CWT emphasizes multi-resolution spectral analysis and permits temporally relevant frequency pattern recognition. Experimental testing showed that the flexibility of the Bayesian probabilistic framework offers a vast improvement in anomaly detection accuracy over existing methods, with a maximum accuracy of 98.21% in recognizing anomalies.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 62375140 and 61871234).
Abstract: Quantum machine learning is an important application of quantum computing in the era of noisy intermediate-scale quantum devices. Domain adaptation (DA) is an effective method for addressing the distribution discrepancy between the training data and the real data when a neural network model is deployed. In this paper, we propose a variational quantum domain adaptation method inspired by the quantum convolutional neural network, named variational quantum domain adaptation (VQDA). The data are first uploaded by a 'quantum coding module', then the feature information is extracted by several 'quantum convolution layers' and 'quantum pooling layers', together named the 'Feature Extractor'. Subsequently, the labels and the domains of the samples are obtained by the 'quantum fully connected layer'. With a gradient reversal module, the trained 'Feature Extractor' can extract features that cannot be distinguished between the source and target domains. Simulations on a local computer and on the IBM Quantum Experience (IBM Q) platform via Qiskit show the effectiveness of the proposed method. The results show that VQDA (with 8 quantum bits) achieves 91.46% average classification accuracy for the DA task between MNIST→USPS (USPS→MNIST), achieves 91.16% average classification accuracy for gray-scale and color images (with 10 quantum bits), and achieves 69.25% average classification accuracy on the DA task for color images (also with 10 quantum bits). VQDA achieves a 9.14% improvement in average classification accuracy compared to its corresponding classical domain adaptation method with the same parameter scale for different DA tasks. Simultaneously, the parameter scale is reduced to 43% by using VQDA when both the quantum and classical DA methods have similar classification accuracies.
Abstract: Next-generation 6G networks seek to provide ultra-reliable and low-latency communications, necessitating network designs that are intelligent and adaptable. Network slicing has developed as an effective option for resource separation and service-level differentiation inside virtualized infrastructures. Nonetheless, sustaining elevated Quality of Service (QoS) in dynamic, resource-limited systems poses significant hurdles. This study introduces an innovative packet-based proactive end-to-end (ETE) resource management system that facilitates network slicing with improved resilience and proactivity. To get around the drawbacks of conventional reactive systems, we develop a cost-efficient slice provisioning architecture that takes into account limits on radio, processing, and transmission resources. The optimization issue is non-convex, NP-hard, and requires online resolution in a dynamic setting. We offer a hybrid solution that integrates an advanced Deep Reinforcement Learning (DRL) methodology with an Improved Manta-Ray Foraging Optimization (ImpMRFO) algorithm. The ImpMRFO utilizes Chebyshev chaotic mapping for the formation of a varied starting population and incorporates Lévy flight-based stochastic movement to avert premature convergence, hence facilitating improved exploration-exploitation trade-offs. The DRL model perpetually acquires optimal provisioning strategies via agent-environment interactions, whereas the ImpMRFO enhances policy performance for effective slice provisioning. The solution, developed in Python, is evaluated across several 6G slicing scenarios that include varied QoS profiles and traffic requirements. Experimental findings reveal that the proactive ETE system outperforms DRL models and non-resilient provisioning techniques. Our technique increases PSSRr, decreases average latency, and optimizes resource use. These results demonstrate that the hybrid architecture is feasible for robust, real-time, and scalable slice management in future 6G networks.
Funding: supported by the National Natural Science Foundation of China (Grant No. 62171232) and the Priority Academic Program Development of Jiangsu Higher Education Institutions, China.
Abstract: The Fine-grained Image Recognition (FGIR) task is dedicated to distinguishing similar sub-categories that belong to the same super-category, such as bird species and car types. In order to highlight visual differences, existing FGIR works often follow two steps: discriminative sub-region localization and local feature representation. However, these works pay less attention to global context information. They neglect the fact that subtle visual differences in challenging scenarios can be highlighted through exploiting the spatial relationships among different sub-regions from a global viewpoint. Therefore, in this paper, we consider both global and local information for FGIR, and propose a collaborative teacher-student strategy to reinforce and unify the two types of information. Our framework is implemented mainly by a convolutional neural network, referred to as the Teacher-Student Based Attention Convolutional Neural Network (T-S-ACNN). For fine-grained local information, we choose the classic Multi-Attention Network (MA-Net) as our baseline, and propose a type of boundary constraint to further reduce background noise in the local attention maps. In this way, the discriminative sub-regions tend to appear in the area occupied by fine-grained objects, leading to more accurate sub-region localization. For fine-grained global information, we design a graph convolution based Global Attention Network (GA-Net), which combines local attention maps extracted from MA-Net with non-local techniques to explore the spatial relationships among sub-regions. At last, we develop a collaborative teacher-student strategy to adaptively determine the attended roles and optimization modes, so as to enhance the cooperative reinforcement of MA-Net and GA-Net. Extensive experiments on the CUB-200-2011, Stanford Cars and FGVC Aircraft datasets illustrate the promising performance of our framework.
Abstract: To ensure the access security of 6G, physical-layer authentication (PLA) leverages the randomness and space-time-frequency uniqueness of the channel to provide unique identity signatures for transmitters. Furthermore, the introduction of artificial intelligence (AI) facilitates the learning of the distribution characteristics of channel fingerprints, effectively addressing the uncertainties and unknown dynamic challenges in wireless link modeling. This paper reviews representative AI-enabled PLA schemes and proposes a graph neural network (GNN)-based PLA approach in response to the challenges existing methods face in identifying mobile users. Simulation results demonstrate that the proposed method outperforms six baseline schemes in terms of authentication accuracy. Furthermore, this paper outlines future development directions for PLA.
Funding: supported by the National Key Research and Development Program of China (No. 2023YFB2705000).
Abstract: With the rise of encrypted traffic, traditional network analysis methods have become less effective, leading to a shift towards deep learning-based approaches. Among these, multimodal learning-based classification methods have gained attention due to their ability to leverage diverse feature sets from encrypted traffic, improving classification accuracy. However, existing research predominantly relies on late fusion techniques, which hinder the full utilization of deep features within the data. To address this limitation, we propose a novel multimodal encrypted traffic classification model that synchronizes modality fusion with multiscale feature extraction. Specifically, our approach performs real-time fusion of modalities at each stage of feature extraction, enhancing feature representation at each level and preserving inter-level correlations for more effective learning. This continuous fusion strategy improves the model's ability to detect subtle variations in encrypted traffic, while boosting its robustness and adaptability to evolving network conditions. Experimental results on two real-world encrypted traffic datasets demonstrate that our method achieves classification accuracies of 98.23% and 97.63%, outperforming existing multimodal learning-based methods.
Funding: supported by the National Natural Science Foundation of China (Nos. 62271165, 62027802, 62201307), the Guangdong Basic and Applied Basic Research Foundation (No. 2023A1515030297), the Shenzhen Science and Technology Program (ZDSYS20210623091808025), the Stable Support Plan Program (GXWD20231129102638002), and the Major Key Project of PCL (No. PCL2024A01).
Abstract: Due to the restricted satellite payloads in LEO mega-constellation networks (LMCNs), remote sensing image analysis, online learning and other big data services desirably need onboard distributed processing (OBDP). In existing technologies, the efficiency of big data applications (BDAs) in distributed systems hinges on stable, low-latency links between worker nodes. However, LMCNs with high-dynamic nodes and long-distance links cannot provide these conditions, which makes the performance of OBDP hard to measure intuitively. To bridge this gap, a multidimensional simulation platform is indispensable that can simulate the network environment of LMCNs and place BDAs in it for performance testing. Using STK's APIs and a parallel computing framework, we achieve real-time simulation of thousands of satellite nodes, which are mapped as application nodes through software defined network (SDN) and container technologies. We elaborate the architecture and mechanism of the simulation platform, and take Starlink and Hadoop as realistic examples for simulations. The results indicate that LMCNs have dynamic end-to-end latency which fluctuates periodically with the constellation movement. Compared to ground data center networks (GDCNs), LMCNs deteriorate computing and storage job throughput, which can be alleviated by the use of erasure codes and data flow scheduling across worker nodes.
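The periodic latency fluctuation can be illustrated with a toy geometric model; the planar circular-orbit geometry and the parameter choices below are illustrative assumptions, not the platform's STK-based computation:

```python
import math

C = 299_792.458  # speed of light in km/s

def prop_delay_ms(ground_alt_km, sat_alt_km, phase_rad, r_earth_km=6371.0):
    """One-way propagation delay (ms) between a ground point and a
    satellite on a circular orbit at angular offset phase_rad, with
    the geometry simplified to a single plane (law of cosines)."""
    r1 = r_earth_km + ground_alt_km
    r2 = r_earth_km + sat_alt_km
    d = math.sqrt(r1 * r1 + r2 * r2 - 2.0 * r1 * r2 * math.cos(phase_rad))
    return 1000.0 * d / C
```

As the satellite sweeps past (the angular offset growing and shrinking with the orbital period), the delay traces exactly the kind of periodic fluctuation the platform observes end-to-end.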
Funding: supported by the National Natural Science Foundation of China (Nos. 62271165, 62027802, 61831008), the Guangdong Basic and Applied Basic Research Foundation (Nos. 2023A1515030297 and 2021A1515011572), the Shenzhen Science and Technology Program (ZDSYS20210623091808025), and the Stable Support Plan Program (GXWD20231129102638002).
Abstract: Cooperative utilization of multidimensional resources including cache, power and spectrum in satellite-terrestrial integrated networks (STINs) can provide a feasible approach for massive streaming media content delivery over the seamless global coverage area. However, the on-board supportable resources of a single satellite are extremely limited and lack interaction with others. In this paper, we design a network model with two-layered cache deployment, i.e., a satellite layer and a ground base station layer, and two types of sharing links, i.e., terrestrial-satellite sharing (TSS) links and inter-satellite sharing (ISS) links, to enhance the capability of cooperative delivery over STINs. We use rateless codes for divided-packet content transmission, and derive the total energy efficiency (EE) of the whole transmission procedure, which is defined as the ratio of traffic offloading to energy consumption. We formulate two optimization problems for maximizing EE in different sharing scenarios (TSS only, and TSS-ISS), and propose two optimized algorithms to obtain the optimal content placement matrices, respectively. Simulation results demonstrate that enabling sharing links with optimized cache placement yields more than a twofold improvement in EE performance over traditional placement schemes. In particular, TSS-ISS schemes achieve higher EE performance than TSS-only schemes given a sufficient number of satellites and small inter-satellite distances.
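The EE objective (traffic offloading over energy consumption) can be sketched for a single cache node as follows; the request model, the per-bit energy costs, and the function name are invented for illustration and are not the paper's STIN formulation:

```python
def energy_efficiency(placement, popularity, file_bits, e_tx, e_backhaul):
    """Toy EE = offloaded traffic / total energy for one cache node.
    placement[f] is 1 if file f is cached locally; a cache hit offloads
    the file's traffic and skips the backhaul energy cost, while a miss
    pays both transmit and backhaul energy per bit."""
    offloaded = sum(p * c * file_bits for p, c in zip(popularity, placement))
    energy = sum(p * file_bits * (e_tx if c else e_tx + e_backhaul)
                 for p, c in zip(popularity, placement))
    return offloaded / energy
```

Optimizing the placement vector under a cache-capacity constraint is the (much harder, matrix-valued) version of what the paper's two algorithms solve across satellites and base stations.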
Funding: supported by the Postdoctoral Science Foundation of China (No. 2023M730156) and the National Natural Science Foundation of China (No. 62301012).
Abstract: Hyper- and multi-spectral image fusion is an important technology for producing hyper-spectral, hyper-resolution images, and it typically depends on the spectral response function and the point spread function. However, little attention has been paid to the estimation of these two degradation functions. To learn the two functions from the image pairs to be fused, we propose a Dirichlet network in which both functions are properly constrained. Specifically, the spectral response function is constrained to be positive, while a Dirichlet distribution along with a total variation is imposed on the point spread function. To the best of our knowledge, this is the first time that a neural network and Dirichlet regularization have been investigated to estimate the degradation functions. Both image degradation and fusion experiments demonstrate the effectiveness and superiority of the proposed Dirichlet network.
Funding: supported by the National Natural Science Foundation of China (No. 62001135), the Joint Funds for Regional Innovation and Development of the National Natural Science Foundation of China (No. U21A20449), and the Beijing Natural Science Foundation Haidian Original Innovation Joint Fund (No. L232002).
Abstract: UAV-aided cellular networks, millimeter wave (mm-wave) communications and multi-antenna techniques are viewed as promising components of the solution for beyond-5G (B5G) and even 6G communications. By leveraging the power of stochastic geometry, this paper aims at providing an effective framework for modeling and analyzing a UAV-aided heterogeneous cellular network, where terrestrial base stations (TBSs) and UAV base stations (UBSs) coexist, and the UBSs are equipped with mm-wave and multi-antenna techniques. By modeling the TBSs as a PPP and the UBSs as a Matern hard-core point process of type II (MPH-II), approximate but accurate analytical results for the average rate of the typical user of both tiers are derived through an approximation method based on the mean interference-to-signal ratio (MISR) gain. The influence of the relevant parameters is discussed in detail, and some insights into network deployment and optimization are revealed. Numerical results show that some trade-offs are worthy of consideration, such as the antenna array size, the altitude of the UAVs and the power control factor of the UBSs.
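A PPP model of the TBS tier can be sketched numerically as follows: sample a homogeneous PPP in a disk and associate the typical user at the origin with its nearest base station. The sampler and the helper names are illustrative; the paper's results are analytical, not simulation-based:

```python
import math, random

def _poisson(lam, rng):
    """Knuth's method; adequate for the small means used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sample_ppp(intensity, radius, rng):
    """Homogeneous PPP of given intensity (points per unit area) in a
    disk of `radius` centred on the origin (the typical user)."""
    n = _poisson(intensity * math.pi * radius ** 2, rng)
    pts = []
    for _ in range(n):
        r = radius * math.sqrt(rng.random())  # sqrt => uniform over area
        t = 2.0 * math.pi * rng.random()
        pts.append((r * math.cos(t), r * math.sin(t)))
    return pts

def nearest_bs_distance(pts):
    """Distance from the typical user at the origin to its serving BS."""
    return min((math.hypot(x, y) for x, y in pts), default=float("inf"))
```

The Matern type-II process for the UBSs would additionally thin this sample so that no two retained points are closer than a hard-core distance, which models the minimum UAV separation.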
Funding: supported by the National Natural Science Foundation of China (No. 62362005).
Abstract: The reliability of a network is an important indicator for maintaining communication and ensuring its stable operation. Therefore, the assessment of reliability in underlying interconnection networks has become an increasingly important research issue. However, at present, the reliability assessment of many interconnection networks is not yet accurate, which inevitably weakens their fault tolerance and diagnostic capabilities. To improve network reliability, researchers have proposed various methods and strategies for precise assessment. This paper introduces a novel family of interconnection networks called general matching composed networks (gMCNs), which is based on common characteristics of network topology. After analyzing the topological properties of gMCNs, we establish a relationship between the super connectivity and the conditional diagnosability of gMCNs. Furthermore, we assess the reliability of gMCNs and determine the conditional diagnosability of many interconnection networks.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 62375140 and 62001249) and the Open Research Fund of the National Laboratory of Solid State Microstructures (Grant No. M36055).
Abstract: The vector vortex beam (VVB) has attracted significant attention due to its intrinsic diversity of information and has found great applications in both classical and quantum communications. However, a VVB is unavoidably affected by atmospheric turbulence (AT) when it propagates through the free-space optical communication environment, which results in detection errors at the receiver. In this paper, we propose a VVB classification scheme to detect VVBs with continuously changing polarization states under AT, where a diffractive deep neural network (DDNN) is designed and trained to classify the intensity distribution of the input distorted VVBs, and the horizontal polarization direction of the input distorted beam is adopted as the feature for classification through the DDNN. The numerical simulations and experimental results demonstrate that the proposed scheme has high accuracy in classification tasks. The energy distribution percentage remains above 95% from weak to medium AT, and the classification accuracy remains above 95% for various strengths of turbulence. The scheme converges faster and is more accurate than one based on a convolutional neural network.
Funding: supported by the National Natural Science Foundation of China (Nos. 61901229 and 62071242), the Project of the Jiangsu Engineering Research Center of Novel Optical Fiber Technology and Communication Network (No. SDGC2234), the Open Research Project of the Jiangsu Provincial Key Laboratory of Photonic and Electronic Materials Sciences and Technology (No. NJUZDS2022-008), and the Post-Doctoral Research Supporting Program of Jiangsu Province (No. SBH20).
Abstract: This paper investigates a wireless powered and backscattering enabled sensor network based on a non-linear energy harvesting model, where the power beacon (PB) delivers energy signals to wireless sensors to enable their passive backscattering and active transmission to the access point (AP). We propose an efficient time scheduling scheme for network performance enhancement, under which each sensor can always harvest energy from the PB over the entire block except during its time slots allocated for passive and active information delivery. Considering that the PB and the wireless sensors belong to two selfish service providers, we use a Stackelberg game to model the energy interaction between them. To address the non-convexity of the leader-level problem, we propose to decompose the original problem into two subproblems and solve them iteratively in an alternating manner. Specifically, the successive convex approximation, semi-definite relaxation (SDR) and variable substitution techniques are applied to find a near-optimal solution. To evaluate the performance loss caused by the interaction between the two providers, we further investigate the social welfare maximization problem. Numerical results demonstrate that compared to the benchmark schemes, the proposed scheme achieves up to 35.4% and 38.7% utility gains for the leader and the follower, respectively.
Funding: supported in part by the Fundamental Research Funds for the Central Universities under Grant 2022JBGP003, in part by the National Natural Science Foundation of China (NSFC) under Grant 62071033, and in part by the ZTE Industry-University-Institute Cooperation Funds under Grant No. IA20230217003.
Abstract: This paper investigates an age of information (AoI)-based multi-user mobile edge computing (MEC) network with partial offloading mode. The weighted sum AoI (WSA) is first analyzed and derived, and then a WSA minimization problem is formulated by jointly optimizing the user scheduling and data assignment. Because the WSA has no analytic expression with respect to the optimization variables and future network information is unknowable, the problem cannot be solved with known solution methods. Therefore, an online Joint Partial Offloading and User Scheduling Optimization (JPO-USO) algorithm is proposed by transforming the original problem into a single-slot data assignment sub-problem and a single-slot user scheduling sub-problem and solving the two sub-problems separately. We analyze the computational complexity of the proposed JPO-USO algorithm, which is O(N), with N being the number of users. Simulation results show that the proposed JPO-USO algorithm achieves better AoI performance than various baseline methods. It is shown that both the user's data assignment and the user's AoI should be jointly taken into account to decrease the system WSA when scheduling users.
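The AoI bookkeeping underlying the WSA objective can be sketched as follows, assuming (purely for illustration) that a scheduled update completes within one slot and resets the served user's age to 1:

```python
def weighted_sum_aoi(schedule, weights, horizon):
    """Time-averaged weighted sum AoI under a given schedule.
    schedule[t] is the user served in slot t; the served user's age
    resets to 1 after its (one-slot) update, every other user's age
    grows by 1 per slot."""
    age = [1] * len(weights)
    total = 0.0
    for t in range(horizon):
        for i in range(len(age)):
            age[i] = 1 if schedule[t] == i else age[i] + 1
        total += sum(w * a for w, a in zip(weights, age))
    return total / horizon
```

Even this toy version shows why scheduling matters: serving users in rotation keeps every age bounded, whereas starving a user lets its weighted age, and hence the WSA, grow without limit.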
Funding: supported in part by the National Key Research and Development Program of China under Grant No. 2024YFE0200600, the Zhejiang Provincial Natural Science Foundation of China under Grant No. LR23F010005, and the Huawei Cooperation Project under Grant No. TC20240829036.
Abstract: Along with the proliferating research interest in semantic communication (SemCom), joint source channel coding (JSCC) has dominated attention due to its widely assumed advantage in efficiently delivering information semantics. Nevertheless, this paper challenges the conventional JSCC paradigm and advocates adopting separate source channel coding (SSCC) to enjoy a greater underlying degree of freedom for optimization. We demonstrate that SSCC, after leveraging the strengths of a Large Language Model (LLM) for source coding, complemented by the Error Correction Code Transformer (ECCT) for channel coding, offers superior performance over JSCC. Our proposed framework also effectively highlights the compatibility challenges between SemCom approaches and digital communication systems, particularly concerning the resource costs associated with the transmission of high-precision floating point numbers. Through comprehensive evaluations, we establish that, assisted by LLM-based compression and ECCT-enhanced error correction, SSCC remains a viable and effective solution for modern communication systems. In other words, separate source channel coding is still what we need.
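The SSCC principle can be illustrated end-to-end with off-the-shelf stand-ins: a standard compressor in place of LLM-based source coding and a repetition code with majority decoding in place of an ECCT-decoded channel code. Everything here is a sketch of the separation idea, not the authors' framework:

```python
import zlib

def to_bits(data):
    return [(byte >> k) & 1 for byte in data for k in range(8)]

def from_bits(bits):
    return bytes(sum(bits[i + k] << k for k in range(8))
                 for i in range(0, len(bits), 8))

def rep_encode(bits, r=5):
    # Channel-code stand-in: repeat each bit r times.
    return [b for b in bits for _ in range(r)]

def rep_decode(coded, r=5):
    # Majority vote per group corrects up to floor(r/2) flipped symbols.
    return [int(2 * sum(coded[i:i + r]) > r) for i in range(0, len(coded), r)]

def sscc_send(text, r=5):
    """Source stage (zlib, standing in for LLM-based compression) is
    designed independently of the channel stage (repetition code)."""
    return rep_encode(to_bits(zlib.compress(text.encode())), r)

def sscc_receive(coded, r=5):
    return zlib.decompress(from_bits(rep_decode(coded, r))).decode()
```

The point of the separation is visible in the code: the compressor and the channel code never see each other, so either stage can be upgraded (to an LLM compressor, or to an ECCT-decoded linear code) without touching the other.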
Funding: supported by the Fund for Excellent Youth Scholars of China (Grant No. 52222708) and the National Natural Science Foundation of China (Grant No. 51977007). Part of this work is supported by the research project "SPEED" (03XP0585) at RWTH Aachen University, funded by the German Federal Ministry of Education and Research (BMBF).
Abstract: Developing sensorless techniques for estimating battery expansion is essential for effective mechanical state monitoring, improving the accuracy of digital twin simulation and abnormality detection. Therefore, this paper presents a data-driven approach to expansion estimation using electromechanically coupled models with machine learning. The proposed method integrates reduced-order impedance models with data-driven mechanical models, coupling the electrochemical and mechanical states through the state of charge (SOC) and mechanical pressure within a state estimation framework. The coupling relationship was established through experimental insights into pressure-related impedance parameters and the nonlinear mechanical behavior with SOC and pressure. The data-driven model was interpreted by introducing a novel swelling coefficient, defined by component stiffnesses, to capture the nonlinear mechanical behavior across various mechanical constraints. Sensitivity analysis of the impedance model shows that updating model parameters with pressure can reduce the mean absolute error of the simulated voltage by 20 mV and the SOC estimation error by 2%. The results demonstrate the model's estimation capabilities, achieving a root mean square error of less than 1 kPa when the maximum expansion force is between 30 kPa and 120 kPa, outperforming calibrated stiffness models and other machine learning techniques. The model's robustness and generalizability are further supported by its effective handling of SOC estimation and pressure measurement errors. This work highlights the importance of the proposed framework in enhancing state estimation and fault diagnosis for lithium-ion batteries.
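The idea of a swelling coefficient defined by component stiffnesses can be illustrated with a linear series-spring toy model; the formula, units, and numbers below are illustrative assumptions, not the paper's data-driven formulation:

```python
def effective_stiffness(k_cell, k_fixture):
    """Springs in series: compliances add, so
    k_eff = (1/k_cell + 1/k_fixture)^-1."""
    return 1.0 / (1.0 / k_cell + 1.0 / k_fixture)

def pressure_kpa(free_expansion_um, k_cell, k_fixture, area_mm2):
    """Constrained-swelling toy model: the cell's free expansion loads
    the series cell/fixture stiffness; force over contact area gives
    the pressure change in kPa. Stiffnesses in N/m, expansion in
    micrometers, area in mm^2."""
    force_n = effective_stiffness(k_cell, k_fixture) * free_expansion_um * 1e-6
    return force_n / (area_mm2 * 1e-6) / 1000.0
```

The stiffness-dependent scaling from expansion to pressure is the linear seed of what the paper generalizes: its data-driven swelling coefficient captures how this relationship becomes nonlinear with SOC and the mechanical constraint.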