Funding: Supported by ZTE Industry-University-Institute Cooperation Funds under Grant No. HC-CN-20201120009.
Abstract: A distributed information network with a complex network structure always faces the challenge of locating fault root causes. In this paper, we propose a novel root cause analysis (RCA) method based on random walk on a weighted fault propagation graph. Different from other RCA methods, it mines effective feature information related to root causes from offline alarms. Combined with this information, online alarms and the graph relationships of the network structure are used to construct a weighted graph. Thus, this approach does not require operational experience and can be widely applied in different distributed networks. The proposed method can be used in multiple fault location cases. The experimental results show that the proposed approach achieves much better performance, with at least 6% higher precision for root fault location, compared with three baseline methods. Besides, we explain how the value of the optimal parameter in the random walk algorithm influences the RCA results.
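As an illustration of the random-walk idea described above, the following is a minimal sketch of random walk with restart on a weighted fault propagation graph, ranking nodes by visit frequency. The graph construction from offline/online alarms is assumed to have been done already, and the node names, weights and restart probability are made up:

```python
import random
from collections import defaultdict

def random_walk_rca(edges, start, restart_prob=0.3, steps=100000, seed=0):
    """Rank candidate root causes by visit frequency of a random walk
    with restart on a weighted fault propagation graph.

    edges: dict mapping node -> list of (neighbor, weight) pairs,
           where weights encode alarm/feature correlation (assumed given).
    start: the node where the anomaly was observed (walk restarts here).
    """
    rng = random.Random(seed)
    visits = defaultdict(int)
    node = start
    for _ in range(steps):
        visits[node] += 1
        nbrs = edges.get(node, [])
        if not nbrs or rng.random() < restart_prob:
            node = start                      # restart at the anomalous node
            continue
        total = sum(w for _, w in nbrs)
        r = rng.random() * total              # weighted choice of next hop
        for nxt, w in nbrs:
            r -= w
            if r <= 0:
                node = nxt
                break
    # nodes visited most often are the most likely root causes
    return sorted(visits.items(), key=lambda kv: kv[1], reverse=True)

# toy usage: service A depends on B and C, B depends on D
graph = {"A": [("B", 0.8), ("C", 0.2)], "B": [("D", 0.9)], "C": [], "D": []}
print(random_walk_rca(graph, start="A")[:3])
```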
Funding: Supported by the National Natural Science Foundation of China (61701365, 61801365 and 91638202), the China Postdoctoral Science Foundation (2018M643581, 2019TQ0241), the National Natural Science Foundation of Shaanxi Province (2019JQ-152), the Postdoctoral Foundation in Shaanxi Province of China, and the Fundamental Research Funds for the Central Universities.
Abstract: In this paper, a resource-mobility-aware two-stage hybrid task planning algorithm is proposed to reduce the resource conflict between emergency tasks and common tasks, so as to improve the overall performance of space information networks. Specifically, in the common task planning stage, a resource-fragment-avoidance task planning algorithm is proposed, which reduces the contention between emergency tasks and the planned common tasks in the next stage by avoiding the generation of resource fragments. For emergency tasks, we design a metric to quantify the revenue of a candidate resource combination for an emergency task, which considers both the priority of the task and the impact on the planned common tasks. Based on this, a resource-mobility-aware emergency task planning algorithm is proposed, which strikes a good balance between improving the sum priority and avoiding disturbing the planned common tasks. Finally, simulation results show that the proposed algorithm is superior to the existing algorithms in both the sum task priority and the task completion rate.
Funding: Funded by the Ministry of Public Security Science and Technology Program Project (No. 2023LL35) and the Key Laboratory of Smart Policing and National Security Risk Governance, Sichuan Province (No. ZHZZZD2302).
Abstract: As the use of deepfake facial videos proliferates, the associated threats to social security and integrity cannot be overstated. Effective methods for detecting forged facial videos are thus urgently needed. While many deep learning-based facial forgery detection approaches show promise, they often fail to delve deeply into the complex relationships between image features and forgery indicators, limiting their effectiveness to specific forgery techniques. To address this challenge, we propose a dual-branch collaborative deepfake detection network. The network processes video frame images as input, where a specialized noise extraction module initially extracts the noise feature maps. Subsequently, the original facial images and the corresponding noise maps are directed into two parallel feature extraction branches to concurrently learn texture and noise forgery clues. An attention mechanism is employed between the two branches to facilitate mutual guidance and enhancement of texture and noise features across four different scales. This dual-modal feature integration enhances sensitivity to forgery artifacts and boosts generalization across various forgery techniques. Features from both branches are then effectively combined and processed through a multi-layer perceptron to distinguish between real and forged videos. Experimental results on benchmark deepfake detection datasets demonstrate that our approach outperforms existing state-of-the-art methods in terms of detection performance, accuracy, and generalization ability.
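The following PyTorch sketch illustrates the general dual-branch idea (a texture branch and a noise branch with cross-branch attention and an MLP head). It is not the authors' architecture: the backbone sizes, the single fusion scale, and the noise-map input are placeholder assumptions.

```python
import torch
import torch.nn as nn

class CrossBranchFusion(nn.Module):
    """Illustrative cross-attention block: texture features attend to noise
    features at one scale, then the guided map is added back."""
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, tex, noi):
        b, c, h, w = tex.shape
        q = self.q(tex).flatten(2)                                       # (B, C, HW)
        k = self.k(noi).flatten(2)
        v = self.v(noi).flatten(2)
        attn = torch.softmax(q.transpose(1, 2) @ k / c ** 0.5, dim=-1)   # (B, HW, HW)
        guided = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return tex + guided                                              # noise-guided texture features

class DualBranchDetector(nn.Module):
    """Two parallel CNN branches (texture / noise) fused at one scale and
    classified by an MLP head; a stand-in for the described architecture."""
    def __init__(self):
        super().__init__()
        backbone = lambda: nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.texture_branch, self.noise_branch = backbone(), backbone()
        self.fuse = CrossBranchFusion(64)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, rgb, noise_map):
        t = self.texture_branch(rgb)
        n = self.noise_branch(noise_map)
        return self.head(self.fuse(t, n))              # logits: real vs. fake

logits = DualBranchDetector()(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 2])
```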
Funding: Funded by the National Key Research and Development Program of China under Grant 2019YFB1803301 and the Beijing Natural Science Foundation (L202002).
Abstract: The Cybertwin-enabled 6th Generation (6G) network is envisioned to support artificial intelligence-native management to meet the changing demands of 6G applications. Multi-Agent Deep Reinforcement Learning (MADRL) technologies driven by Cybertwins have been proposed for adaptive task offloading strategies. However, the random transmission delay between Cybertwin-driven agents and the underlying networks is not considered in related works; it destroys the standard Markov property and increases the decision reaction time, which degrades the performance of the task offloading strategy. To address this problem, we propose a pipelining task offloading method to lower the decision reaction time and model it as a delay-aware Markov Decision Process (MDP). Then, we design a delay-aware MADRL algorithm to minimize the weighted sum of task execution latency and energy consumption. Firstly, the state space is augmented using the last received state and historical actions to rebuild the Markov property. Secondly, Gated Transformer-XL is introduced to capture the importance of historical actions and to maintain a consistent input dimension that would otherwise change dynamically due to random transmission delays. Thirdly, a sampling method and a new loss function, based on the difference between the current and target state values and the difference between the real and augmented state-action values, are designed to obtain state transition trajectories close to the real ones. Numerical results demonstrate that the proposed methods are effective in reducing the reaction time and improving the task offloading performance in random-delay Cybertwin-enabled 6G networks.
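A minimal sketch of the state-augmentation step described above: the agent acts on the last received state concatenated with a fixed-length, zero-padded history of the actions issued since that state was generated. The window length, padding value and clearing rule are illustrative assumptions, not the paper's exact design.

```python
from collections import deque

class DelayAwareStateAugmenter:
    """Rebuild an (approximate) Markov property under random observation delay:
    the agent acts on the last received state plus the actions issued since
    that state was generated."""
    def __init__(self, action_dim, max_delay_steps=4):
        self.action_dim = action_dim
        self.max_delay = max_delay_steps
        self.pending_actions = deque(maxlen=max_delay_steps)

    def augment(self, last_received_state):
        # pad the action history to a fixed length so the policy network
        # always sees the same input dimension despite variable delay
        history = list(self.pending_actions)
        pad = [[0.0] * self.action_dim] * (self.max_delay - len(history))
        flat_history = [x for a in history + pad for x in a]
        return list(last_received_state) + flat_history

    def record_action(self, action):
        self.pending_actions.append(list(action))

    def on_state_received(self):
        # the newly received state already reflects earlier actions,
        # so the buffered history can be cleared (simplifying assumption)
        self.pending_actions.clear()

aug = DelayAwareStateAugmenter(action_dim=2, max_delay_steps=3)
aug.record_action([1.0, 0.0])
print(len(aug.augment([0.2, 0.5, 0.1])))  # 3 state dims + 3*2 action dims = 9
```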
Funding: Supported by the Fundamental Research Funds for the Central Universities, China (No. 2024MS115).
Abstract: Non-Orthogonal Multiple Access (NOMA) assisted Unmanned Aerial Vehicle (UAV) communication is becoming a promising technique for future B5G/6G networks. However, the security of NOMA-UAV networks remains a critical challenge due to the shared wireless spectrum and Line-of-Sight (LoS) channels. This paper formulates a joint UAV trajectory design and power allocation problem, with the aid of a ground jammer, to maximize the sum secrecy rate. First, the joint optimization problem is modeled as a Markov Decision Process (MDP). Then, a Deep Reinforcement Learning (DRL) method is utilized to search for the optimal policy in the continuous action space. In order to accelerate sample accumulation, an Asynchronous Advantage Actor-Critic (A3C) scheme with multiple workers is proposed, which reformulates the action and reward to obtain complete update durations. Simulation results demonstrate that the A3C-based scheme outperforms the baseline schemes in terms of the secrecy rate and stability.
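The sum secrecy rate objective can be illustrated with the standard per-link expression [log2(1 + SINR_user) - log2(1 + SINR_eavesdropper)]^+. The helper below, with made-up SINR values, shows the quantity such a DRL reward would be built from; it is not the paper's exact reward definition.

```python
import math

def secrecy_rate(sinr_user, sinr_eavesdropper):
    """Per-link secrecy rate in bit/s/Hz: capacity to the legitimate user
    minus capacity leaked to the eavesdropper, floored at zero."""
    return max(0.0, math.log2(1.0 + sinr_user) - math.log2(1.0 + sinr_eavesdropper))

def sum_secrecy_rate(links):
    """links: iterable of (sinr_user, sinr_eavesdropper) pairs, e.g. the NOMA
    users served in one time slot; the sum is a natural per-step DRL reward."""
    return sum(secrecy_rate(su, se) for su, se in links)

# toy example: two users; the ground jammer degrades the eavesdropper's SINR
print(sum_secrecy_rate([(12.0, 1.5), (6.0, 0.8)]))
```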
Funding: This research was supported by the National Natural Science Foundation of China (Grant Nos. 62071377, 61801382, 61901367), the Key Project of the Natural Science Foundation of Shaanxi Province (Grant No. 2019JZ-06), the Key Industrial Chain Project of Shaanxi Province (Grant No. 2019ZDLGY07-06), and the College Science and Technology Innovation Activity Project of Xi'an University of Posts and Telecommunications (Grant No. 19-B-289).
Abstract: In this paper, the clustering and resource allocation problem in device-to-device (D2D) multicast transmission underlaying cellular networks is investigated. For the sake of classifying D2D users into different D2D multicast clusters, a hybrid intelligent clustering strategy (HICS) based on unsupervised machine learning is proposed first. By maximizing the total energy efficiency of the D2D multicast clusters, a joint resource allocation scheme is then presented. More specifically, the energy efficiency optimization problem is constructed under quality of service (QoS) constraints. Since the joint optimization problem is non-convex, we transform the original problem into a mixed-integer programming problem according to the Dinkelbach algorithm. Furthermore, to avoid the high computational complexity inherent in the traditional resource allocation problem, a Q-learning based joint resource allocation and power control algorithm is proposed. Numerical results reveal that the proposed algorithm achieves better energy efficiency in terms of throughput per unit of energy consumption.
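For reference, the Dinkelbach algorithm mentioned above replaces a fractional energy-efficiency objective rate(x)/power(x) with a sequence of parametric subproblems max_x rate(x) - lambda * power(x). A generic sketch follows; the inner subproblem solver is problem-specific (here only a toy grid search over a single power variable), and none of the values reproduce the paper's formulation.

```python
import math

def dinkelbach(maximize_parametric, rate, power, tol=1e-6, max_iter=50):
    """Generic Dinkelbach loop for maximizing the ratio rate(x)/power(x).
    `maximize_parametric(lam)` must return an x that (approximately)
    maximizes rate(x) - lam * power(x); it is the problem-specific
    resource-allocation subproblem and is assumed to be given."""
    lam = 0.0
    for _ in range(max_iter):
        x = maximize_parametric(lam)
        gap = rate(x) - lam * power(x)
        if abs(gap) < tol:                       # F(lam) = 0 -> optimal ratio reached
            break
        lam = rate(x) / power(x)                 # update the energy-efficiency estimate
    return x, lam

# toy usage: allocate power p in [0, 4] to maximize log2(1+p) / (0.5 + p)
rate = lambda p: math.log2(1.0 + p)
power = lambda p: 0.5 + p
grid = [i * 0.001 for i in range(4001)]
best_p, best_ee = dinkelbach(
    lambda lam: max(grid, key=lambda p: rate(p) - lam * power(p)), rate, power)
print(round(best_p, 3), round(best_ee, 3))
```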
Funding: Supported in part by the National Natural Science Foundation of China under Grant 61901011, and in part by the Foundation of the Beijing Municipal Commission of Education under Grants KM202110005021 and KM202010005017.
Abstract: Recently, electric vehicles (EVs) have been widely used in response to the call for green travel and environmental protection, and diverse charging requirements are also increasing gradually. In order to ensure the authenticity and privacy of charging information interactions, blockchain technology has been proposed and applied in charging station billing systems. However, blockchain itself has some issues, including the low computing efficiency of nodes and the high energy consumption of the consensus process. To handle these issues, in this paper we combine blockchain with mobile edge computing (MEC) and develop a reliable billing data transmission scheme to improve the computing capacity of nodes and reduce the energy consumption of the consensus process. By jointly optimizing the offloading decisions of the primary and replica nodes, the block size, and the block interval, the transaction throughput of the blockchain system is maximized while the latency and energy consumption of the system are minimized. Moreover, we formulate the joint optimization problem as a Markov decision process (MDP). To handle the dynamics and continuity of the system state, reinforcement learning (RL) is introduced to solve the MDP problem. Finally, simulation results demonstrate the performance improvement of the proposed scheme through comparison with other existing schemes.
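One way such a joint objective can be folded into a per-step RL reward is a weighted combination of throughput, latency and energy. The weights and the simple throughput model below (transactions per block divided by block interval) are assumptions for illustration only, not the paper's formulation.

```python
def billing_reward(block_size_tx, block_interval_s, latency_s, energy_j,
                   w_throughput=1.0, w_latency=0.5, w_energy=0.1):
    """Scalarized per-step reward for an RL agent choosing offloading decisions,
    block size and block interval: reward throughput, penalize latency and
    energy.  All weights are tunable assumptions."""
    throughput_tps = block_size_tx / block_interval_s   # simple throughput model
    return (w_throughput * throughput_tps
            - w_latency * latency_s
            - w_energy * energy_j)

# toy comparison of two candidate (block size, block interval) actions
print(billing_reward(block_size_tx=500, block_interval_s=2.0, latency_s=1.2, energy_j=30.0))
print(billing_reward(block_size_tx=800, block_interval_s=5.0, latency_s=2.5, energy_j=55.0))
```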
Funding: Project supported by the National Fundamental Research Program of China (Grant No. 2006CB921900), the National Natural Science Foundation of China (Grant Nos. 60537020 and 60621064), and the Knowledge Innovation Project of the Chinese Academy of Sciences.
Abstract: This paper develops a QKD (quantum key distribution)-based queueing model to investigate the data delay on QKD links and networks, especially those based on trusted relays. It shows the mean packet delay performance of the QKD system. Furthermore, it proposes a key buffering policy which could effectively improve the delay performance in practice. The results will be helpful for quality of service in practical QKD systems.
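For intuition only, a crude way to see why the key generation rate and key buffering matter: if every packet consumes one key, sustained throughput cannot exceed the key rate, so an M/M/1-style delay estimate with the service rate capped at the key rate already exposes the effect. This is a simplification for illustration, not the queueing model developed in the paper.

```python
def mm1_mean_delay(arrival_rate, service_rate):
    """Mean sojourn time of an M/M/1 queue, W = 1 / (mu - lambda)."""
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

def qkd_link_delay(arrival_rate, service_rate, key_rate):
    """Crude illustration: with one key consumed per packet, the effective
    service rate is capped at the key generation rate (an assumption made
    here for intuition, not the paper's model)."""
    return mm1_mean_delay(arrival_rate, min(service_rate, key_rate))

print(qkd_link_delay(arrival_rate=0.8, service_rate=2.0, key_rate=5.0))  # key-rich link: 0.833
print(qkd_link_delay(arrival_rate=0.8, service_rate=2.0, key_rate=1.0))  # key-limited link: 5.0
```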
Funding: This work was partially supported by the National Natural Science Foundation of China under Grant No. 61071127 and the Science and Technology Department of Zhejiang Province under Grants No. 2012C01036-1 and No. 2011R10035.
Abstract: Cooperative spectrum sensing in cognitive radio is investigated to improve the detection performance of the Primary User (PU). Meanwhile, cluster-based hierarchical cooperation is introduced to reduce the overhead while maintaining a certain level of sensing performance. However, in existing hierarchically cooperative spectrum sensing algorithms, the robustness of the system is seldom considered. In this paper, we propose a reputation-based hierarchically cooperative spectrum sensing scheme for Cognitive Radio Networks (CRNs). Before spectrum sensing, clusters are formed based on the location correlation coefficients of the Secondary Users (SUs). In the proposed scheme, there are two levels of cooperation: the first is performed within a cluster and the second is carried out among clusters. With the reputation mechanism and a modified MAJORITY rule in the second-level cooperation, the proposed scheme can not only relieve the influence of shadowing, but also eliminate the impact of PU emulation attacks on a relatively large scale. Simulation results show that, in scenarios with deep shadowing or multiple attacked SUs, our proposed scheme achieves a better tradeoff between system robustness and energy saving compared with conventional cooperative sensing schemes.
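A small sketch of the reputation-weighted fusion idea: each SU's binary report is weighted by its reputation, the fused decision is a thresholded weighted vote, and reputations are nudged up or down according to agreement with the fused decision. The threshold and update step are illustrative assumptions, not the paper's modified MAJORITY rule.

```python
def fuse_and_update(local_decisions, reputations, threshold=0.5, step=0.1):
    """Reputation-weighted majority fusion of binary sensing reports.

    local_decisions: dict su_id -> 0/1 (PU absent/present)
    reputations:     dict su_id -> weight in [0, 1], updated in place
    Returns the fused decision."""
    total = sum(reputations[su] for su in local_decisions) or 1.0
    vote = sum(reputations[su] * d for su, d in local_decisions.items()) / total
    fused = 1 if vote >= threshold else 0
    for su, d in local_decisions.items():
        delta = step if d == fused else -step       # reward agreement, punish disagreement
        reputations[su] = min(1.0, max(0.0, reputations[su] + delta))
    return fused

reps = {"su1": 1.0, "su2": 1.0, "su3": 0.2}          # su3 is a suspected PU-emulation attacker
print(fuse_and_update({"su1": 1, "su2": 1, "su3": 0}, reps), reps)
```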
Funding: Supported by the Beijing Laboratory of Advanced Information Networks.
Abstract: The routing protocol for low-power and lossy networks (RPL) has been used in the advanced metering infrastructure (AMI), which provides two-way communication between smart meters and city utilities. To improve the network performance of AMI networks, this paper proposes an improved RPL algorithm based on the triangle module operator (IAR-TMO). IAR-TMO defines membership functions for the following five typical routing metrics: end-to-end delay, number of hops, expected transmission count (ETX), node remaining energy, and child node count. Moreover, IAR-TMO uses the triangle module operator to fuse the membership functions of these routing metrics. Then, IAR-TMO selects preferred parents (the next hop) based on the fused result. Theoretical analysis and simulation results show that IAR-TMO achieves a great improvement over two recent representative algorithms, ETXOF (ETX Objective Function) and OF-FL (Objective Function based on Fuzzy Logic), in terms of network lifetime, average end-to-end delay, etc. Consequently, the network performance of AMI networks can be improved effectively.
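The sketch below shows the general flow of fuzzy-metric-based parent selection: per-metric membership functions mapped to [0, 1] and fused into a single score used to pick the preferred parent. A plain product is used as a stand-in fusion operator (it is not the triangle module operator used by IAR-TMO), and the metric ranges are made up.

```python
def membership_lower_better(value, best, worst):
    """Map a metric where smaller is better (delay, hops, ETX, child count)
    to a [0, 1] membership score; the linear shape is an illustrative choice."""
    if value <= best:
        return 1.0
    if value >= worst:
        return 0.0
    return (worst - value) / (worst - best)

def parent_score(metrics):
    """Fuse per-metric memberships into one score.  A simple product is used
    here as a stand-in; IAR-TMO's triangle module operator is not reproduced."""
    memberships = [
        membership_lower_better(metrics["delay_ms"], 10, 200),
        membership_lower_better(metrics["hops"], 1, 10),
        membership_lower_better(metrics["etx"], 1.0, 6.0),
        membership_lower_better(-metrics["energy_pct"], -100, 0),  # higher energy is better
        membership_lower_better(metrics["children"], 0, 20),
    ]
    score = 1.0
    for m in memberships:
        score *= m
    return score

candidates = {
    "parentA": {"delay_ms": 40, "hops": 3, "etx": 1.8, "energy_pct": 80, "children": 4},
    "parentB": {"delay_ms": 25, "hops": 4, "etx": 2.5, "energy_pct": 35, "children": 9},
}
print(max(candidates, key=lambda p: parent_score(candidates[p])))  # preferred parent
```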
Funding: Supported by the National Natural Science Foundation of China (Project Number: 61671086) and the Consulting Project of the Chinese Academy of Engineering (Project Number: 2016-XY-09).
Abstract: There is an increasing number of scenarios that impose independent bandwidth and delay demands. For instance, in a data center, an interactive message does not occupy much bandwidth, but it imposes rigorous demands on delay. However, the existing QoS approaches are mainly bandwidth based, which is inappropriate for these scenarios. Hence, we propose a decoupled scheme in OpenFlow networks to provide centralized differential bandwidth and delay control. We leverage the mature Hierarchical Token Bucket (HTB) to manage the bandwidth, and we design the Queue Delay Management Scheme (QDMS) for queuing delay arrangement, as well as the Comprehensive Parameters based Dijkstra Route algorithm (CPDR) for propagation delay control. The evaluation results verify the effectiveness of the decoupling, and the decoupled scheme can reduce the delay for high-priority flows.
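The propagation-delay side can be illustrated with a standard Dijkstra search over a composite delay cost. The link-cost weighting below is an assumption for illustration and does not reproduce the specific parameters that CPDR combines.

```python
import heapq

def composite_cost(link, w_prop=1.0, w_queue=0.5):
    """Illustrative link cost combining propagation and queuing delay;
    the weights (and the exact cost form CPDR uses) are assumptions."""
    return w_prop * link["prop_delay_ms"] + w_queue * link["queue_delay_ms"]

def delay_aware_dijkstra(adj, src, dst):
    """Standard Dijkstra over the composite delay cost.
    adj: dict node -> list of (neighbor, link_attributes)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, link in adj.get(u, []):
            nd = d + composite_cost(link)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1], dist[dst]

adj = {
    "s1": [("s2", {"prop_delay_ms": 2, "queue_delay_ms": 8}),
           ("s3", {"prop_delay_ms": 5, "queue_delay_ms": 1})],
    "s2": [("s4", {"prop_delay_ms": 2, "queue_delay_ms": 9})],
    "s3": [("s4", {"prop_delay_ms": 5, "queue_delay_ms": 1})],
}
print(delay_aware_dijkstra(adj, "s1", "s4"))   # (['s1', 's3', 's4'], 11.0)
```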
Funding: Supported by the National Natural Science Foundation of China under Grant 61701054 and the Fundamental Research Funds for the Central Universities under Grants 2020CDJQY-A001 and 2021CDJQY-013.
Abstract: The sixth-generation (6G) wireless network is expected to satisfy the requirements of ubiquitous connectivity and endogenous intelligence. Terrestrial-satellite networks (TSN) enable seamless coverage for terrestrial users over a wide area, making them very promising for 6G. As data traffic in TSNs surges, the integrated management of caching, computing, and communication (3C) has attracted much research attention. In this paper, we investigate multi-resource management in the uplink and downlink transmission of TSN, respectively. In particular, we aim to guarantee both throughput fairness and data security in the uplink transmission of TSN. Considering the intermittent communication of the satellite, we introduce two kinds of relays, i.e., terrestrial relays (TRs) and aerial relays (ARs), to improve the system throughput performance in the downlink transmission of TSN. Finally, we study a specific case of TSN with both uplink and downlink transmission, and the corresponding simulation results validate the effectiveness of our proposed schemes.
Funding: This work was supported by the National Key Research and Development Project under Grant 2020YFB1807602, the Natural Science Foundation of China under Grants 62071223, 62031012, 61701214 and 61661028, the National Key Scientific Instrument and Equipment Development Project under Grant No. 61827801, the Open Project of the Shaanxi Key Laboratory of Information Communication Network and Security under Grant ICNS201701, the Excellent Youth Foundation of Jiangxi Province under Grant 2018ACB21012, and in part by the Young Elite Scientist Sponsorship Program by CAST.
Abstract: In this paper, we consider a new spectrum sharing scenario for a cognitive relay network, where a secondary unmanned aerial vehicle (UAV) relay receives information from the ground secondary base station (SBS) and transmits information to the ground secondary user (SU), coexisting with the primary users (PUs) in the same wireless frequency band. We investigate the optimization of the UAV relay's three-dimensional (3D) trajectory to improve the communication throughput of the secondary network subject to the interference constraints of the PUs. The information throughput maximization problem is studied by jointly optimizing the UAV relay's 3D trajectory and the transmit power of the SBS and the UAV, subject to constraints on the velocity and elevation of the UAV relay, the maximum and average transmit power, and the information causality, as well as a set of interference temperature (IT) constraints. An efficient algorithm is proposed to solve this admittedly challenging non-convex problem by using the path discretization technique, the successive convex approximation technique, and the alternating optimization method. Finally, simulation results are provided to show that our proposed design outperforms other benchmark schemes in terms of throughput.
Funding: This paper is supported by the following funds: the National Key R&D Program of China (2018YFF01010100), the National Natural Science Foundation of China (61672064), the Basic Research Program of Qinghai Province under Grant No. 2020-ZJ-709, and the Advanced Information Network Beijing Laboratory (PXM2019_014204_500029).
Abstract: Weather phenomenon recognition plays an important role in the field of meteorology. Nowadays, weather radars and weather sensors have been widely used for weather recognition. However, given the high cost of deploying and maintaining these devices, it is difficult to apply them to intensive weather phenomenon recognition. Moreover, advanced machine learning models such as Convolutional Neural Networks (CNNs) have shown a lot of promise in meteorology, but these models also require intensive computation and large memory, which makes it difficult to use them in practice. Lightweight models are often used to solve such problems, but they often incur significant performance losses. To this end, after taking a deep dive into a large number of lightweight models and summarizing their shortcomings, we propose a novel lightweight CNN model constructed from new building blocks. The experimental results show that the proposed model achieves performance comparable to mainstream non-lightweight models while reducing memory consumption by a factor of 25. Such a memory reduction is even better than that of existing lightweight models.
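The paper's specific building blocks are not detailed here, but the kind of primitive lightweight CNNs are typically assembled from (a depthwise separable convolution block, used purely for illustration) shows where the parameter savings come from:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableBlock(nn.Module):
    """Depthwise 3x3 conv followed by pointwise 1x1 conv: a common lightweight
    building block (an illustrative stand-in, not the paper's exact block)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.block(x)

def n_params(m):
    return sum(p.numel() for p in m.parameters())

standard = nn.Conv2d(64, 128, 3, padding=1, bias=False)
lightweight = DepthwiseSeparableBlock(64, 128)
print(n_params(standard), n_params(lightweight))   # ~73.7k vs ~9.2k weights
x = torch.randn(1, 64, 32, 32)
print(lightweight(x).shape)                        # torch.Size([1, 128, 32, 32])
```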
Abstract: How to use only a few defect samples to complete defect classification is a key challenge in the production of mobile phone screens. An attention-relation network for mobile phone screen defect classification is proposed in this paper. The architecture of the attention-relation network contains two modules: a feature extraction module and a feature metric module. Different from other few-shot models, an attention mechanism is applied to metric learning in our model to measure the distance between features, so as to focus on the correlation between features and suppress unwanted information. Besides, we combine dilated convolution and skip connections to extract more feature information for follow-up processing. We validate the attention-relation network on a mobile phone screen defect dataset. The experimental results show that the classification accuracy of the attention-relation network is 0.9486 under the 5-way 1-shot training strategy and 0.9039 under the 5-way 5-shot setting. It achieves excellent classification performance for mobile phone screen defects and outperforms comparable methods by a clear margin.
Funding: Supported by the People's Public Security University of China central basic scientific research business program (No. 2021JKF206).
Abstract: Traffic characterization (e.g., chat, video) and application identification (e.g., FTP, Facebook) are two of the most crucial jobs in encrypted network traffic classification. These two activities are typically carried out separately by existing systems using separate models, significantly adding to the difficulty of network administration. Convolutional Neural Networks (CNN) and Transformers are deep learning-based approaches for network traffic classification. CNN is good at extracting local features while ignoring long-distance information in the network traffic sequence, and the Transformer can capture long-distance feature dependencies while ignoring local details. Based on these characteristics, a multi-task learning model that combines a Transformer and a 1D-CNN for encrypted traffic classification (MTC) is proposed. In order to make up for the Transformer's lack of local detail feature extraction capability and the 1D-CNN's shortcoming of ignoring long-distance correlation information when processing traffic sequences, the model uses a parallel structure in which the features generated by the Transformer block and the 1D-CNN block are fused with each other by a feature fusion block. This structure improves the representation of traffic features from both blocks and allows the model to perform well with both long and short sequences. The model simultaneously handles multiple tasks, which lowers the cost of training. Experiments reveal that on the ISCX VPN-nonVPN dataset, the model achieves an average F1 score of 98.25% and an average recall of 98.30% for the task of identifying applications, and an average F1 score of 97.94% and an average recall of 97.54% for the task of traffic characterization. When advanced models on the same dataset are chosen for comparison, the model produces the best results. To prove its generalization, we applied MTC to the CICIDS2017 dataset, and our model also achieved good results.
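A compact PyTorch sketch of the parallel-branch idea: a Transformer encoder and a 1D-CNN run over the same embedded traffic byte sequence, their pooled features are fused, and two heads serve the two tasks. The dimensions and the simple concatenation fusion are illustrative choices, not the MTC design.

```python
import torch
import torch.nn as nn

class MultiTaskTrafficClassifier(nn.Module):
    """Parallel Transformer + 1D-CNN branches over a byte sequence, fused and
    fed to two task heads (application ID, traffic characterization)."""
    def __init__(self, d_model=64, n_apps=10, n_traffic_types=6):
        super().__init__()
        self.embed = nn.Embedding(256, d_model)                 # raw bytes 0..255
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, dim_feedforward=128,
                                       batch_first=True), num_layers=2)
        self.cnn = nn.Sequential(
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1), nn.ReLU())
        self.fusion = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.ReLU())
        self.app_head = nn.Linear(d_model, n_apps)
        self.traffic_head = nn.Linear(d_model, n_traffic_types)

    def forward(self, byte_seq):                                # (B, L) int64
        x = self.embed(byte_seq)                                # (B, L, D)
        t = self.transformer(x).mean(dim=1)                     # long-range context
        c = self.cnn(x.transpose(1, 2)).mean(dim=2)             # local patterns
        fused = self.fusion(torch.cat([t, c], dim=-1))          # feature fusion block
        return self.app_head(fused), self.traffic_head(fused)

model = MultiTaskTrafficClassifier()
app_logits, traffic_logits = model(torch.randint(0, 256, (4, 128)))
print(app_logits.shape, traffic_logits.shape)  # (4, 10) and (4, 6)
```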
Funding: Supported in part by the National Natural Science Foundation of China (Grant No. 61901052), in part by the 111 Project (Grant No. B17007), and in part by the Director Funds of the Beijing Key Laboratory of Network System Architecture and Convergence (Grant No. 2017BKL-NSACZJ-02).
Abstract: In recent years, artificial intelligence and the automotive industry have developed rapidly, and autonomous driving has gradually become the focus of the industry. In road networks, the problem of proximity detection refers to detecting in real time whether two moving objects are close to each other or not. However, the battery life and computing capability of mobile devices are limited in practical scenarios, which results in high latency and energy consumption. Therefore, it is a tough problem to determine the proximity relationship between mobile users with low latency and energy consumption. In this article, we aim at finding a tradeoff between latency and energy consumption. We formalize the computation offloading problem based on mobile edge computing (MEC) into a constrained multiobjective optimization problem (CMOP) and utilize NSGA-II to solve it. The simulation results demonstrate that NSGA-II can find the Pareto set, which reduces the latency and energy consumption effectively. In addition, the large number of solutions provided by the Pareto set gives us more choices for the offloading decision according to the actual situation.
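At the core of the multiobjective treatment is the Pareto dominance relation that NSGA-II sorts by. The self-contained helper below filters candidate offloading decisions (with made-up latency/energy values) down to the non-dominated set; it is not the full NSGA-II loop.

```python
def dominates(a, b):
    """a, b are (latency, energy) tuples to be minimized: a dominates b if it
    is no worse in both objectives and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated subset of candidate offloading decisions,
    each given as (decision, (latency_ms, energy_mj))."""
    front = []
    for name, obj in candidates:
        if not any(dominates(other, obj) for _, other in candidates if other != obj):
            front.append((name, obj))
    return front

candidates = [
    ("local",           (120.0, 15.0)),   # compute on the device
    ("edge_offload",    (40.0, 25.0)),    # offload to the MEC server
    ("partial_offload", (60.0, 18.0)),
    ("bad_choice",      (130.0, 30.0)),   # dominated, never worth choosing
]
print(pareto_front(candidates))
```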
Funding: Supported in part by the National Science Council, Taiwan, "R.O.C.", under Grants No. NSC 101-2221-E-164-019 and NSC 101-2221-E-164-020.
Abstract: Long Term Evolution (LTE) and IEEE 802.16 WiMAX are competing access network technologies adopted in 4G wireless networks in recent years. LTE complies with 3GPP standards, whereas 802.16 WiMAX is regulated by the Institute of Electrical and Electronics Engineers (IEEE). Although WiMAX already operates commercially in Taiwan, the system is limited to an independent new system that is incompatible with the current 3G system. Hence, the cost of implementing the WiMAX system is relatively high, which is an impediment to its rapid uptake and widespread use. On the other hand, LTE conforms to 3GPP standards that are supported by telecommunication manufacturers and operators and is, moreover, backward compatible with 3G/UMTS cellular systems. The LTE specifications define how user equipment (UE) connects and communicates with evolved Node B (eNB) base stations. The enhanced version, LTE-Advanced, adds a new entity called the relay node (RN) to widen service coverage, although this change has resulted in a more complex architecture. Mobility management and data forwarding are essential components of wireless mobile networking. This paper focuses on an efficient handover procedure in LTE-Advanced networks and proposes a Smart Forwarding mechanism to improve handover performance. Simulation studies show that the proposed Smart Forwarding scheme employs a better operational transmission path that effectively reduces handover latency and signaling overhead.
Funding: This work was supported in part by the National Natural Science Foundation of China (No. 62031012, 62071223, and 62061030), in part by the National Key Research and Development Project of China (2018YFB1404303, 2018YFB14043033, and 2020YFB1807602), in part by the National Key Scientific Instrument and Equipment Development Project (61827801), in part by the Open Project of the Shaanxi Key Laboratory of Information Communication Network and Security (ICNS201701), by the Young Elite Scientist Sponsorship Program by CAST, and by the Graduate Innovation Foundation of Jiangxi Province (YC2019-S0350).
Abstract: Unmanned aerial vehicle (UAV) communications are subject to a severe spectrum scarcity problem. Cognitive UAV networks are promising for tackling this issue, while the confidential information remains susceptible to eavesdropping. A UAV-jamming-assisted scheme is proposed. A joint resource allocation and trajectory optimization problem is formulated for a UAV-assisted jamming cognitive UAV network subject to diverse power and trajectory constraints. An alternating optimization algorithm is proposed to solve the challenging non-convex joint optimization problem. Extensive simulation results demonstrate the superiority of our proposed scheme, and many meaningful insights are obtained for the practical design of cognitive UAV networks.
Funding: This project was supported by the National Natural Science Foundation of China under Grant No. 61170262, the National High Technology Research and Development Program of China (863 Program) under Grants No. 2012AA012506, No. 2012AA012901 and No. 2012AA012903, the Specialised Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20121103120032, the Humanity and Social Science Youth Foundation of the Ministry of Education of China under Grant No. 13YJCZH065, the Opening Project of the Key Lab of Information Network Security of the Ministry of Public Security (The Third Research Institute of the Ministry of Public Security) under Grant No. C13613, the China Postdoctoral Science Foundation, the General Program of the Science and Technology Development Project of the Beijing Municipal Education Commission of China under Grant No. km201410005012, the Research on Education and Teaching of Beijing University of Technology under Grant No. ER2013C24, the Beijing Municipal Natural Science Foundation, the Hunan Postdoctoral Scientific Program, and the Open Research Fund of the Beijing Key Laboratory of Trusted Computing.
Abstract: The increasing network throughput challenges current network traffic monitoring systems to deliver correspondingly high-performance data processing. The design of packet processing systems is guided by the requirement of high packet processing throughput. In this paper, we present in-depth research on the related techniques and an implementation of a high-performance data acquisition mechanism. Through bottleneck analysis with the aid of a queuing network model, several performance optimization methods, such as increasing the service rate, removing queues, and simplifying the model, are integrated. The experimental results indicate that this approach is capable of reducing the CPU utilization ratio while improving the efficiency of data acquisition in high-speed networks.
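Bottleneck analysis in an open queueing network typically starts from per-stage utilization ρ = λ/(m·μ): the stage with the highest ρ limits the pipeline. A small helper with illustrative stage rates (not measurements from the paper):

```python
def utilization(arrival_rate, service_rate, servers=1):
    """Offered utilization rho = lambda / (m * mu) of one stage in an open
    queueing network; the stage with the highest rho is the bottleneck."""
    return arrival_rate / (servers * service_rate)

def find_bottleneck(stages, arrival_rate):
    """stages: dict name -> (service_rate_pkts_per_s, number_of_servers).
    All rates below are illustrative assumptions."""
    rhos = {name: utilization(arrival_rate, mu, m) for name, (mu, m) in stages.items()}
    return max(rhos, key=rhos.get), rhos

pipeline = {
    "nic_rx":      (1_500_000, 1),   # packets/s a single RX queue can drain
    "kernel_copy": (900_000, 1),
    "user_parse":  (600_000, 2),     # two worker threads
}
print(find_bottleneck(pipeline, arrival_rate=1_000_000))
```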