Aiming at the problem of the mobile data traffic surge in 5G networks, this paper proposes an effective solution that combines massive multiple-input multiple-output techniques with the Ultra-Dense Network (UDN), and focuses on the resulting challenge of increased energy consumption. A base station control algorithm based on Multi-Agent Proximal Policy Optimization (MAPPO) is designed. In the constructed 5G UDN model, each base station is treated as an agent, and the MAPPO algorithm enables inter-base-station collaboration and interference management to optimize network performance. To reduce the extra power drawn by frequent sleep-mode switching of base stations, a sleep-mode switching decision algorithm is proposed: it evaluates the similarity of successive network states and adjusts the agents' actions accordingly, eliminating unnecessary switches. Simulation results show that, while guaranteeing users' quality of service, the proposed algorithm reduces power consumption by 24.61% compared with the no-sleep strategy and by a further 5.36% compared with the conventional MAPPO algorithm.
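The sleep-switching decision described above can be pictured as a small rule on top of the learned policy. The sketch below is illustrative only, not the authors' implementation: the similarity measure (cosine) and the threshold value are assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two network-state vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def sleep_action(prev_state, curr_state, prev_action, policy_action, threshold=0.95):
    """Keep the previous sleep/active action when the network state has
    barely changed, avoiding the power cost of a mode switch."""
    if cosine_similarity(prev_state, curr_state) >= threshold:
        return prev_action   # state similar: suppress the switch
    return policy_action     # state changed: follow the MAPPO policy
```

When successive states are near-identical the rule returns the agent's previous action, so a base station does not toggle sleep mode for negligible load changes.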
Although 6G networks combined with artificial intelligence offer revolutionary prospects for healthcare delivery, resource management in dense medical device networks remains a fundamental issue. Reliable communication directly affects patient outcomes in these settings, yet current resource allocation techniques struggle with the complicated interference patterns and diverse service needs of AI-native healthcare systems. This paper tackles the challenge of combining network efficiency with medical care priority in dense installations where conventional approaches fail. We offer a Dueling Deep Q-Network (DDQN)-based resource allocation approach for AI-native healthcare systems in 6G dense networks. First, we create a point-line graph coloring-based interference model to capture the unique characteristics of medical device communications; unlike traditional graph-based models, it correctly depicts the overlapping coverage areas common in hospital environments. Building on this foundation, we design a DDQN that prioritizes medical needs while distributing resources across multiple medical services, by separating healthcare-aware state evaluation from advantage estimation. Experimental findings show that the proposed DDQN outperforms state-of-the-art techniques in dense healthcare installations, with 14.6% greater network throughput and 13.7% better resource utilization. The solution proves particularly strong in maintaining service quality under critical conditions, with 5.5% greater QoS satisfaction for emergency services and 8.2% faster recovery from interruptions.
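The dueling architecture named above splits the network into a state-value stream and an advantage stream. A minimal sketch of the standard aggregation step (not the paper's healthcare-specific network) is:

```python
def dueling_q_values(value, advantages):
    """Combine the two streams of a dueling DQN:
    Q(s, a) = V(s) + A(s, a) - mean over a' of A(s, a')."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]
```

Subtracting the mean advantage makes the decomposition identifiable, so the value stream can learn how good a state is independently of which action is taken.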
5G sets an ambitious goal of increasing the capacity per area of current 4G networks by 1000-fold. Due to the high splitting gain of dense small cells, the ultra-dense network (UDN) is widely considered a key component in achieving this goal. In this paper, we outline the main challenges that come with dense cell deployment, including interference, mobility, power consumption and backhaul. Technologies designed to tackle these challenges in the Long Term Evolution (LTE) system, and their deficiencies in the UDN context, are also analyzed. To combat these challenges more efficiently, a series of technologies is introduced along with some of our initial research results. Moreover, the trends of user-centric and peer-to-peer design in UDN are also elaborated.
Drone applications in 5th-generation (5G) networks mainly focus on services and use cases such as providing connectivity during crowded events, human-instigated disasters, unmanned aerial vehicle traffic management, Internet of Things in the sky, and situation awareness. 4G and 5G cellular networks face various challenges in ensuring dynamic control and safe mobility of a drone tasked with delivering these services. Because the drone flies in three-dimensional space, its connectivity can suffer from increased handover cost for several reasons, including variations in the received signal strength indicator, co-channel interference from neighboring cells, and abrupt drops in lobe-edge signals due to antenna nulls. The baseline greedy handover algorithm only ensures the strongest connection between the drone and small cells, so the drone may experience numerous handovers. Intended for fast environment learning, machine learning techniques such as Q-learning help the drone fly with minimum handover cost while maintaining robust connectivity. In this study, we propose a Q-learning-based approach evaluated in three different scenarios. The handover decision is optimized gradually using Q-learning to provide efficient mobility support with a high data rate in time-sensitive applications, the tactile internet, and haptic communication. Simulation results demonstrate that the proposed algorithm can effectively minimize the handover cost in a learning environment. This work offers a notable contribution toward determining the optimal route of drones for researchers exploring UAV use cases in cellular networks where a large testing site comprising several cells with multiple UAVs is under consideration.
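The tabular Q-learning step behind such a handover policy is standard; the reward shaping below (data rate minus a per-handover penalty) is a hypothetical choice for illustration, not the paper's exact reward.

```python
def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

def handover_reward(rate, switched, penalty=0.5):
    """Hypothetical reward: achieved data rate minus a cost per handover."""
    return rate - (penalty if switched else 0.0)
```

Penalizing each switch steers the learned policy away from the greedy strongest-signal behavior that causes ping-pong handovers.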
Single object tracking based on deep learning has achieved advanced performance in many computer vision applications. However, existing trackers have limitations under deformation, occlusion, movement and other conditions. We propose a Siamese attentional dense network called SiamADN, trained end-to-end offline and aimed especially at unmanned aerial vehicle (UAV) tracking. First, it applies a dense network to reduce vanishing gradients, which strengthens feature transfer. Second, a channel attention mechanism is built into the DenseNet structure to focus on the likely key regions. An advanced corner detection network is introduced to improve the subsequent tracking process. Extensive experiments are carried out on four mainstream tracking benchmarks: OTB-2015, UAV123, LaSOT and VOT. The accuracy rate on UAV123 is 78.9% at a running speed of 32 frames per second (FPS), which demonstrates the tracker's efficiency in practical applications.
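Dense connectivity, the backbone ingredient shared by this tracker and the other DenseNet-based models in this listing, feeds each layer the concatenation of all earlier outputs. A toy sketch with Python lists standing in for feature maps (the real networks use convolutions):

```python
def dense_block(features, num_layers, layer_fn):
    """Dense connectivity: each layer consumes the concatenation of all
    earlier outputs and appends its own feature map to the collection."""
    outputs = [features]
    for _ in range(num_layers):
        concatenated = [x for fmap in outputs for x in fmap]
        outputs.append(layer_fn(concatenated))
    # the block's output is the concatenation of everything produced
    return [x for fmap in outputs for x in fmap]
```

Because early features are re-used verbatim at every depth, gradients have short paths back to the input, which is what "reducing vanishing gradients" refers to.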
The stratigraphic correlation of well logs plays an essential role in characterizing subsurface reservoirs. However, it suffers from a small amount of training data and expensive computing time. In this work, we propose the Attention-Based Dense Network (ASDNet) for the stratigraphic correlation of well logs. We first apply an attention mechanism to the input well logs, which effectively generates weighted well logs for further feature extraction. Subsequently, a DenseNet is utilized to achieve good feature reuse and avoid gradient vanishing. After training, we apply the ASDNet to the testing data and evaluate its performance on a well log data set from Northwest China. The numerical results demonstrate that ASDNet provides higher prediction accuracy for automated stratigraphic correlation of well logs than the state-of-the-art UNet and SegNet baselines.
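The attention weighting of the input well logs can be pictured as scaling each log curve by a softmax weight derived from a per-curve score. The scores here are placeholders, not the paper's learned attention values.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_weighted_logs(logs, scores):
    """Scale each input well-log curve by its softmax attention weight
    before handing the curves to the feature extractor."""
    weights = softmax(scores)
    return [[w * v for v in curve] for w, curve in zip(weights, logs)]
```

Curves judged more informative receive larger weights and therefore dominate the downstream DenseNet features.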
In this paper, we propose a low-complexity spectrum resource allocation scheme across access points (APs) for ultra-dense networks (UDNs), in which all APs are divided into several AP groups (APGs), the total bandwidth is divided into several narrowband spectrum resources, and each spectrum resource is allocated to APGs independently to decrease inter-cell interference. Furthermore, we investigate the joint spectrum and power allocation problem in UDNs to maximize the overall throughput. The problem is formulated as a mixed-integer nonconvex optimization (MINCP) problem, which is difficult to solve in general, so the joint optimization is decomposed into spectrum allocation and power allocation subproblems. We model the spectrum allocation as an auction and propose a combinatorial auction approach to tackle it; the DC programming method is adopted to optimize the power allocation subproblem. To decrease the signaling and computational overhead, we further propose a distributed algorithm based on the Lagrangian dual method. Simulation results illustrate that the proposed algorithm can effectively improve the system throughput.
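The winner-determination step of a combinatorial auction can be approximated greedily. The sketch below is a deliberate simplification of the paper's scheme (one resource per APG, bids awarded in descending order of utility):

```python
def greedy_auction(bids):
    """bids[(apg, resource)] = utility of granting `resource` to `apg`.
    Award the highest remaining bid whose resource and APG are both free."""
    awarded, used_res, served = {}, set(), set()
    for (apg, res), bid in sorted(bids.items(), key=lambda kv: -kv[1]):
        if res not in used_res and apg not in served:
            awarded[apg] = res
            used_res.add(res)
            served.add(apg)
    return awarded
```

Greedy winner determination is not optimal in general, but it runs in O(n log n) over the bid list, which matches the low-complexity goal stated in the abstract.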
Next-generation networks, including the Internet of Things (IoT), fifth-generation cellular systems (5G), and sixth-generation cellular systems (6G), suffer from the dramatic increase in the number of deployed devices. This puts high constraints and challenges on the design of such networks. Structural change of the network is one such challenge, affecting network performance, including the required quality of service (QoS). The fractal dimension (FD) is considered one of the main indicators used to represent the structure of a communication network. To this end, this work analyzes the FD of the network and its use for telecommunication network investigation and planning. The cluster growing method for assessing the FD is introduced and analyzed. The article then proposes a novel method for estimating the FD of a communication network based on assessing the network's connectivity by searching for the shortest routes. Unlike the cluster growing method, the proposed method does not require multiple iterations, so it achieves more stable results at a lower computational cost. The method is simple to implement and can be used in the research and planning of modern and prospective communication networks. It is evaluated for two different network structures and compared with the cluster growing method; the results validate the developed method.
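A shortest-route FD estimate can be illustrated with a mass-dimension calculation: the average number of nodes within hop radius r of a node grows as r to the power D, so D is the slope of log(mass) against log(r). This is a sketch under simplifying assumptions (unweighted hops, small graph), not the paper's exact procedure.

```python
import math
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src over an adjacency-list graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def fractal_dimension(adj, radii):
    """Average number of nodes within hop radius r, fitted on log-log axes."""
    masses = []
    for r in radii:
        total = 0
        for src in adj:
            d = bfs_distances(adj, src)
            total += sum(1 for hops in d.values() if hops <= r)
        masses.append(total / len(adj))
    xs = [math.log(r) for r in radii]
    ys = [math.log(m) for m in masses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # least-squares slope = estimated fractal dimension
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
```

On a ring graph the mass within radius r is 2r + 1, so the estimated dimension should come out close to 1, as expected for a one-dimensional structure.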
Removing rain from a single image is a challenging task due to the absence of temporal information. Considering that a rainy image can be decomposed into low-frequency (LF) and high-frequency (HF) components, where the coarse-scale information is retained in the LF component while the rain streaks and texture correspond to the HF component, we propose a single-image rain removal algorithm using image decomposition and a dense network. We design two task-driven sub-networks to estimate the LF and non-rain HF components of a rainy image. The high-frequency estimation sub-network employs a densely connected network structure, while the low-frequency sub-network uses a simple convolutional neural network (CNN). We add total variation (TV) regularization and LF-channel fidelity terms to the loss function to optimize the two sub-networks jointly. The method then obtains the de-rained output by combining the estimated LF and non-rain HF components. Extensive experiments on synthetic and real-world rainy images demonstrate that our method removes rain streaks while preserving non-rain details, and achieves superior de-raining performance both perceptually and quantitatively.
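The LF/HF split and the TV regularizer can be illustrated in one dimension: a box filter gives a low-frequency estimate, the residual is the high-frequency part, and TV sums absolute first differences. This is purely illustrative; the paper operates on 2-D images with learned sub-networks.

```python
def low_pass(signal, k=3):
    """Box-filter low-frequency estimate (edge samples replicated)."""
    n = len(signal)
    out = []
    for i in range(n):
        window = [signal[min(max(j, 0), n - 1)] for j in range(i - k // 2, i + k // 2 + 1)]
        out.append(sum(window) / k)
    return out

def high_pass(signal, k=3):
    """High-frequency residual: signal minus its low-frequency estimate."""
    return [s - l for s, l in zip(signal, low_pass(signal, k))]

def total_variation(signal):
    """TV regularizer: sum of absolute first differences."""
    return sum(abs(b - a) for a, b in zip(signal, signal[1:]))
```

An isolated spike (a stand-in for a rain streak) survives almost entirely in the high-pass residual and inflates the TV value, which is why penalizing TV discourages streak-like artifacts in the output.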
The κ-μ fading model is an advanced channel model for super-dense wireless networks. In this paper, we evaluate the performance of a super-dense relay network over κ-μ fading channels with multiple independent but not necessarily identically distributed (i.n.i.d.) co-channel interferers (CCI) in an interference-limited environment. More specifically, we derive a useful and accurate expression for the cumulative distribution function (CDF) of the end-to-end signal-to-interference-plus-noise ratio (SINR). Moreover, we derive novel analytical expressions for the outage probability (OP), average bit error probability (ABEP) and average capacity of such a system, for binary modulation types and arbitrary positive values of κ and μ. Furthermore, we provide asymptotic analyses of both the OP and ABEP to give physical insight, along with a simplified analytical form for the ABEP in the high-SNR regime. Finally, the accuracy of the derived expressions is validated by Monte Carlo simulations.
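The Monte Carlo validation step can be sketched for the one special case where the closed form is elementary: Rayleigh fading (the κ→0, μ=1 limit of the κ-μ model), where the instantaneous SNR is exponential. Sampling general κ-μ variates is beyond this sketch.

```python
import math
import random

def outage_probability_mc(mean_snr, threshold, trials=100_000, seed=1):
    """Monte Carlo outage probability for Rayleigh fading:
    the SNR is exponentially distributed with the given mean."""
    rng = random.Random(seed)
    outages = sum(rng.expovariate(1.0 / mean_snr) < threshold for _ in range(trials))
    return outages / trials

def outage_probability_exact(mean_snr, threshold):
    """Closed form for Rayleigh: P_out = 1 - exp(-threshold / mean_snr)."""
    return 1.0 - math.exp(-threshold / mean_snr)
```

Agreement between the simulated and closed-form curves is exactly the kind of check the abstract's final sentence refers to.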
Hyperspectral Image (HSI) classification based on deep learning has been an attractive area in recent years. However, as a data-driven approach, deep learning usually requires substantial computational resources and high-quality labelled datasets, while high-performance computing and data annotation are expensive. In this paper, to reduce the dependence on massive computation and labelled samples, we propose a deep Double-Channel Dense network (DDCD) for HSI classification. Specifically, we design a 3D double-channel dense layer to capture both local and global features of the input, and we propose a linear attention mechanism that approximates dot-product attention with far less memory and computational cost. The parameter count and computational cost are markedly lower than those of comparable deep learning methods, meaning DDCD has a simpler architecture and higher efficiency. A series of quantitative experiments on six widely used hyperspectral datasets shows that the proposed DDCD achieves state-of-the-art performance, even when labelled samples are severely scarce.
Masking-based and spectrum-mapping-based methods are the two main classes of speech enhancement algorithms using deep neural networks (DNNs). However, mapping-based methods only utilize the phase of the noisy speech, which limits the upper bound of enhancement performance, while masking-based methods must accurately estimate the mask, which remains the key problem. Combining the advantages of both, this paper proposes MM-RDN (masking-mapping residual dense network), a speech enhancement algorithm based on masking-mapping (MM) and a residual dense network (RDN). Using the logarithmic power spectrograms (LPS) of consecutive frames, MM estimates the ideal ratio mask (IRM) matrix for those frames. The RDN can make full use of the feature maps of all layers; by using global residual learning to combine shallow and deep features, it obtains globally dense features from the LPS and thereby improves the estimation accuracy of the IRM matrix. Simulations show that the proposed method achieves attractive speech enhancement performance in various acoustic environments. Specifically, in untrained acoustic tests with limited priors, e.g., unmatched signal-to-noise ratios (SNRs) and unmatched noise categories, MM-RDN still outperforms the existing convolutional recurrent network (CRN) method on perceptual evaluation of speech quality (PESQ) and other evaluation indexes, indicating that the proposed algorithm generalizes better to untrained conditions.
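The ideal ratio mask referred to above is, per time-frequency bin, the square root of the speech power over the total power; a minimal sketch on power spectrograms stored as nested lists:

```python
import math

def ideal_ratio_mask(speech_power, noise_power):
    """IRM(t, f) = sqrt(S(t, f) / (S(t, f) + N(t, f))) per
    time-frequency bin, with silent bins mapped to 0."""
    return [[math.sqrt(s / (s + n)) if s + n > 0 else 0.0
             for s, n in zip(srow, nrow)]
            for srow, nrow in zip(speech_power, noise_power)]
```

Multiplying the noisy magnitude spectrogram by this mask attenuates bins dominated by noise while leaving speech-dominated bins nearly untouched.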
The precision and quality of machining in computer numerical control (CNC) machines are significantly affected by the state of the tool, so it is essential to monitor the tool's condition in real time during operation. To improve the monitoring accuracy of tool wear values, this work develops a tool wear monitoring approach based on an improved integrated model of a densely connected convolutional network (DenseNet) and a gated recurrent unit (GRU), with data preprocessing via the wavelet packet transform (WPT). First, wavelet packet decomposition (WPD) is used to extract time-frequency-domain features from the original time-series monitoring signals of the tool. Second, multidimensional deep features are extracted by a DenseNet containing asymmetric convolution kernels and then fused; a dilation scheme employing dilated convolution kernels with different dilation rates is used to take more historical data into account. Finally, the GRU extracts temporal features from the deep signal features, and a fully connected neural network maps these temporal features to the monitored tool wear values. Comprehensive experiments on reference datasets show that the proposed model outperforms other cutting-edge tool wear monitoring algorithms in accuracy and generalization.
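The dilation scheme enlarges the temporal context without adding parameters: for stacked stride-1 dilated convolutions the receptive field follows a simple closed form, sketched here as a sanity-check helper.

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of stacked stride-1 dilated 1-D convolutions:
    rf = 1 + sum over layers of (kernel_size - 1) * dilation."""
    return 1 + sum((k - 1) * d for k, d in zip(kernel_sizes, dilations))
```

Three kernel-3 layers with dilations 1, 2 and 4 already see 15 time steps, whereas the same stack without dilation would see only 7; this is how the model "acquires more historical data" cheaply.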
In this paper, we reveal the fundamental limitation that network densification imposes on the performance of a caching-enabled small cell network (CSCN) under two typical user association rules, namely content-based and distance-based rules. We show that immoderate content caching significantly changes the interference distribution in a CSCN, which may degrade the network area spectral efficiency (ASE). Meanwhile, the content-based rule outperforms the distance-based rule in terms of network ASE only when small cell base stations (BSs) are sparsely deployed with low decoding thresholds. Moreover, we prove that the network ASE under distance-based user association upper-bounds that under the content-based rule in the dense-BS regime. To enable more spectrum-efficient user association in dense CSCNs, we further optimize the network ASE by designing a probabilistic content retrieving strategy on top of the distance-based rule. With the optimized retrieving probability, the network ASE can be substantially enhanced and even increases with growing BS density in the dense-BS regime.
For the dense macro-femto coexistence network scenario, a long-term-based handover (LTBH) algorithm is proposed. The handover decision is jointly determined by the angle of handover (AHO) and the time-to-stay (TTS) to reduce the number of unnecessary handovers. First, the proposed AHO parameter is used to decrease the computational complexity in scenarios with multiple candidate base stations (CBSs). Then, two types of TTS parameters are defined, for fixed base stations and for mobile base stations, to make handover decisions among multiple CBSs. Simulation results show that the proposed LTBH algorithm not only maintains the required transmission rate of users, but also effectively reduces the number of unnecessary handovers in dense macro-femto networks with coexisting mobile BSs.
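The AHO/TTS idea can be sketched as: estimate how long a user will remain inside a candidate cell from its speed and heading angle, and hand over only when that stay is long enough to be worthwhile. The geometry and the minimum-stay threshold below are illustrative assumptions, not the paper's exact parameters.

```python
import math

def predicted_time_to_stay(distance_to_edge, speed, angle_deg):
    """TTS estimate: time until the user exits a candidate cell when moving
    at `speed` with heading offset `angle_deg` from the cell centre line."""
    radial = speed * math.cos(math.radians(angle_deg))
    return float("inf") if radial <= 0 else distance_to_edge / radial

def should_handover(tts_serving, tts_candidate, min_tts=5.0):
    """Hand over only when the candidate offers a sufficiently long stay
    that also beats the time left in the serving cell."""
    return tts_candidate >= min_tts and tts_candidate > tts_serving
```

A user grazing a femtocell at a steep angle yields a short predicted stay and is kept on its serving cell, which is exactly the ping-pong handover this filtering suppresses.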
The main task of automatic brain tumor segmentation in magnetic resonance imaging (MRI) is to automatically segment the tumor edema, peritumoral edema, necrotic core, enhancing tumor core and non-enhancing tumor core from 3D MR images. Because the location, size, shape and intensity of brain tumors vary greatly, segmenting these regions automatically is very difficult. In this paper, by combining the advantages of DenseNet and ResNet, we propose a new 3D U-Net with dense encoder blocks and residual decoder blocks. The number of output feature maps increases with depth along the contracting path of the encoder, which is consistent with the characteristics of dense blocks. Using dense blocks decreases the number of network parameters, allows deeper networks, strengthens feature propagation, alleviates vanishing gradients and enlarges receptive fields. The residual blocks replace the convolutional blocks of the original U-Net in the decoder, which improves network performance. Our approach was trained and validated on the BraTS2019 training and validation data sets, obtaining dice scores of 0.901, 0.815 and 0.766 for whole tumor, tumor core and enhancing tumor core, respectively, on the BraTS2019 validation set. Our method performs better than the original 3D U-Net, and the experimental results demonstrate that, compared with some state-of-the-art methods, our approach is a competitive automatic brain tumor segmentation method.
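The dice score used for the evaluations above is twice the overlap between prediction and ground truth divided by the sum of their sizes; a sketch on flattened binary masks:

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as flat 0/1 sequences of equal length."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0
```

Unlike plain accuracy, dice ignores the (usually huge) true-negative background, which is why it is the standard metric for tumor segmentation.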
To overcome the computational burden of processing three-dimensional (3D) medical scans and the lack of spatial information in two-dimensional (2D) medical scans, a novel segmentation method is proposed that integrates the segmentation results of three densely connected 2D convolutional neural networks (2D-CNNs). To combine low-level and high-level features, densely connected blocks are added to the network structure so that low-level features are not lost as the network deepens during learning. Further, to address the blurred boundary of the glioma edema area, the T2-weighted fluid-attenuated inversion recovery (FLAIR) modality image and the T2-weighted (T2) modality image are superimposed and fused to enhance the edema region. For the training loss, the cross-entropy loss function is improved to effectively avoid network over-fitting. On the Multimodal Brain Tumor Image Segmentation Challenge (BraTS) datasets, our method achieves dice similarity coefficients of 0.84, 0.82 and 0.83 on the BraTS2018 training set; 0.82, 0.85 and 0.83 on the BraTS2018 validation set; and 0.81, 0.78 and 0.83 on the BraTS2013 testing set, for whole tumors, tumor cores and enhancing cores, respectively. Experimental results show that the proposed method achieves promising accuracy with fast processing, demonstrating good potential for clinical medicine.
Fall behavior is closely related to high mortality in the elderly, so fall detection has become an important and urgent research area. However, existing fall detection methods are difficult to apply in daily life because of their large computational load and poor detection accuracy. To solve these problems, this paper proposes a dense spatial-temporal graph convolutional network based on lightweight OpenPose. Lightweight OpenPose uses MobileNet as the feature extraction network, and its prediction layers use a bottleneck-asymmetric structure, which reduces the network size. The bottleneck-asymmetric structure compresses the number of input channels of the feature maps with 1×1 convolutions and replaces the 7×7 convolution with a parallel combination of 1×7, 7×1 and 7×7 convolutions. The spatial-temporal graph convolutional network divides its multi-layer convolutions into dense blocks, with the convolutional layers in each dense block fully connected; this improves feature transitivity, strengthens feature extraction and thus raises detection accuracy. Two representative datasets are selected for our experiments: the Multiple Cameras Fall dataset (MCF) and the Nanyang Technological University Red Green Blue + Depth Action Recognition dataset (NTU RGB+D), the latter with two evaluation benchmarks. The results show that the proposed model is superior to current fall detection models: its accuracy on the MCF dataset is 96.3%, and its accuracies on the two NTU RGB+D benchmarks are 85.6% and 93.5%, respectively.
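The parameter saving from the asymmetric kernels can be checked with simple arithmetic (weights only, biases ignored; the parallel 7×7 branch of the actual structure is left out of this comparison):

```python
def conv_params(in_ch, out_ch, kh, kw):
    """Weight count of a 2-D convolution layer (bias terms ignored)."""
    return in_ch * out_ch * kh * kw

def asymmetric_savings(in_ch, out_ch, k=7):
    """Parameters of one k x k kernel versus the 1 x k followed by
    k x 1 pair used in a bottleneck-asymmetric structure."""
    full = conv_params(in_ch, out_ch, k, k)
    pair = conv_params(in_ch, out_ch, 1, k) + conv_params(out_ch, out_ch, k, 1)
    return full, pair
```

With a single channel in and out, the 7×7 kernel costs 49 weights while the 1×7 + 7×1 pair costs 14, roughly the 2/k reduction that makes the lightweight pose network feasible on embedded hardware.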
Funding (MAPPO-based base station control in 5G UDN): supported by the National Natural Science Foundation of China (62271096, U20A20157); the Natural Science Foundation of Chongqing, China (CSTB2023NSCQ-LZX0134); the University Innovation Research Group of Chongqing (CXQT20017); the Youth Innovation Group Support Program of the ICE Discipline of CQUPT (SCIE-QN-2022-04); the Science and Technology Research Program of Chongqing Municipal Education Commission (KJQN202300632); and the Chongqing Postdoctoral Special Funding Project (2022CQBSHTB2057).
Funding (DDQN resource allocation for AI-native healthcare in 6G): supported by the National Natural Science Foundation of China under Grant No. 62202247.
Funding (Q-learning-based drone handover): supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2018R1D1A1B07049877) and the Strengthening R&D Capability Program of Sejong University.
Funding: Supported by the Zhejiang Key Laboratory of General Aviation Operation Technology (No. JDGA2020-7), the National Natural Science Foundation of China (No. 62173237), the Natural Science Foundation of Liaoning Province (No. 2019-MS-251), the Talent Project of Revitalization Liaoning Province (No. XLYC1907022), the Key R&D Projects of Liaoning Province (No. 2020JH2/10100045), and the High-Level Innovation Talent Project of Shenyang (No. RC190030).
Abstract: Single-object tracking based on deep learning has achieved advanced performance in many computer vision applications. However, existing trackers have certain limitations under deformation, occlusion, movement, and other challenging conditions. We propose SiamADN, a Siamese attentional dense network trained offline in an end-to-end manner, aimed especially at unmanned aerial vehicle (UAV) tracking. First, it applies a dense network to mitigate vanishing gradients, which strengthens feature transfer. Second, a channel attention mechanism is incorporated into the DenseNet structure in order to focus on possible key regions. An advanced corner detection network is introduced to improve the subsequent tracking process. Extensive experiments are carried out on four mainstream tracking benchmarks: OTB-2015, UAV123, LaSOT, and VOT. The accuracy rate on UAV123 is 78.9%, and the running speed is 32 frames per second (FPS), which demonstrates its efficiency in practical applications.
Funding: Supported by the Key Research and Development Program of Shaanxi, China under Grant 2023-YBGY-076, the Fundamental Research Funds for the Central Universities, China under Grant XZY012022086, and the China Postdoctoral Science Foundation Project under Grant 2022M712509.
Abstract: The stratigraphic correlation of well logs plays an essential role in characterizing subsurface reservoirs. However, it suffers from small amounts of training data and expensive computing time. In this work, we propose the Attention-Based Dense Network (ASDNet) for the stratigraphic correlation of well logs. To implement the suggested model, we first apply an attention mechanism to the input well logs, which effectively generates weighted well logs for further feature extraction. Subsequently, a DenseNet is utilized to achieve good feature reuse and avoid gradient vanishing. After model training, we apply ASDNet to the testing data set and evaluate its performance on a well log data set from Northwest China. Finally, the numerical results demonstrate that the suggested ASDNet provides higher prediction accuracy for automated stratigraphic correlation of well logs than the competing state-of-the-art UNet and SegNet models.
Funding: Supported in part by the Guangxi Natural Science Foundation under Grant 2021GXNSFBA196076; in part by the General Project of the Guangxi Natural Science Foundation (Guangdong-Guangxi Joint Fund Project) under Grant 2021GXNSFAA075031; in part by the Basic Ability Improvement Project for Young and Middle-Aged Teachers in Guangxi Universities under Grant 2022KY0579; and in part by the Guangxi Key Laboratory of Precision Navigation Technology and Application, Guilin University of Electronic Technology, under Grant DH202007.
Abstract: In this paper, we propose a low-complexity spectrum resource allocation scheme across access points (APs) for ultra-dense networks (UDNs), in which all APs are divided into several AP groups (APGs), the total bandwidth is divided into several narrowband spectrum resources, and each spectrum resource is allocated to APGs independently to decrease inter-cell interference. Furthermore, we investigate the joint spectrum and power allocation problem in UDNs to maximize the overall throughput. The problem is formulated as a mixed-integer nonconvex optimization (MINCP) problem, which is difficult to solve in general. The joint optimization problem is decomposed into two subproblems concerning spectrum allocation and power allocation, respectively. For spectrum allocation, we model it as an auction problem and propose a combinatorial auction approach to tackle it. In addition, the DC programming method is adopted to optimize the power allocation subproblem. To decrease the signaling and computational overhead, we propose a distributed algorithm based on the Lagrangian dual method. Simulation results illustrate that the proposed algorithm can effectively improve the system throughput.
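The winner-determination step of an auction-based allocation can be illustrated with a minimal greedy sketch: each APG bids a utility for each narrowband resource, and bids are awarded in decreasing value order subject to conflict constraints. The bid values, the one-resource-per-APG constraint, and the greedy rule below are illustrative assumptions rather than the paper's exact combinatorial auction mechanism.

```python
def greedy_auction(bids):
    """Greedy winner determination.  `bids` maps (group, resource) -> value.
    Each resource is awarded at most once and each AP group wins at most one
    resource; bids are processed in decreasing value order."""
    allocation = {}
    taken_resources, served_groups = set(), set()
    for (group, res), value in sorted(bids.items(), key=lambda kv: -kv[1]):
        if res not in taken_resources and group not in served_groups:
            allocation[group] = res
            taken_resources.add(res)
            served_groups.add(group)
    return allocation

# Hypothetical utilities of three AP groups for two narrowband resources.
bids = {("APG1", "R1"): 5.0, ("APG1", "R2"): 3.0,
        ("APG2", "R1"): 4.0, ("APG2", "R2"): 4.5,
        ("APG3", "R1"): 2.0, ("APG3", "R2"): 1.0}
alloc = greedy_auction(bids)
```

Here APG1 wins R1 (highest bid overall) and APG2 falls back to R2; APG3 goes unserved because both resources are taken, which is exactly the kind of conflict an auction formulation makes explicit.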
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R66), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Next-generation networks, including the Internet of Things (IoT), fifth-generation cellular systems (5G), and sixth-generation cellular systems (6G), suffer from the dramatic increase in the number of deployed devices. This puts high constraints and challenges on the design of such networks. Structural change of the network is one such challenge that affects network performance, including the required quality of service (QoS). The fractal dimension (FD) is considered one of the main indicators used to represent the structure of a communication network. To this end, this work analyzes the FD of the network and its use for telecommunication network investigation and planning. The cluster growing method for assessing the FD is introduced and analyzed. The article proposes a novel method for estimating the FD of a communication network, based on assessing the network's connectivity by searching for the shortest routes. Unlike the cluster growing method, the proposed method does not require multiple iterations, which reduces the number of calculations and increases the stability of the results obtained. Thus, the proposed method requires less computational cost than the cluster growing method and achieves higher stability. The method is quite simple to implement and can be used in the research and planning of modern and prospective communication networks. The developed method is evaluated on two different network structures and compared with the cluster growing method. The results validate the developed method.
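The shortest-route idea behind such an estimator can be sketched as follows: compute hop distances with BFS, count the average number of nodes within radius l, and read the FD off the log-log slope of that "mass" function. The grid topology, the radii, and the plain least-squares fit below are illustrative assumptions, not the article's exact procedure.

```python
from collections import deque
from math import log

def bfs_distances(adj, src):
    """Hop distances from src to every reachable node (plain BFS)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def fractal_dimension(adj, radii):
    """Estimate the FD as the least-squares slope of log M(l) vs log l,
    where M(l) is the mean number of nodes within shortest-path distance l."""
    dists = [bfs_distances(adj, s) for s in adj]
    mass = [sum(sum(1 for h in d.values() if h <= l) for d in dists) / len(dists)
            for l in radii]
    xs, ys = [log(l) for l in radii], [log(m) for m in mass]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A 15x15 grid graph: a planar structure whose FD should come out near 2
# (small radii and boundary effects make the estimate undershoot slightly).
N = 15
adj = {(i, j): [(i + di, j + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < N and 0 <= j + dj < N]
       for i in range(N) for j in range(N)}
fd = fractal_dimension(adj, radii=[1, 2, 3, 4])
```

Because all distances are computed in a single pass per source, there is no repeated cluster-growing iteration, which reflects the stability argument the abstract makes.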
Funding: Supported by the National Natural Science Foundation of China (61471313) and the Natural Science Foundation of Hebei Province (F2019203318).
Abstract: Removing rain from a single image is a challenging task due to the absence of temporal information. Considering that a rainy image can be decomposed into low-frequency (LF) and high-frequency (HF) components, where the coarse-scale information is retained in the LF component and the rain streaks and texture correspond to the HF component, we propose a single-image rain removal algorithm using image decomposition and a dense network. We design two task-driven sub-networks to estimate the LF and non-rain HF components of a rainy image. The high-frequency estimation sub-network employs a densely connected network structure, while the low-frequency sub-network uses a simple convolutional neural network (CNN). We add total variation (TV) regularization and LF-channel fidelity terms to the loss function to optimize the two sub-networks jointly. The method then obtains the de-rained output by combining the estimated LF and non-rain HF components. Extensive experiments on synthetic and real-world rainy images demonstrate that our method removes rain streaks while preserving non-rain details, and achieves superior de-raining performance both perceptually and quantitatively.
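The decomposition premise (LF carries coarse structure, HF carries streaks and texture) can be illustrated on a 1-D toy signal with a simple moving-average low-pass filter. The paper estimates these components with learned sub-networks, so the fixed filter below is only a stand-in for intuition.

```python
def lowpass(x, k=3):
    """Moving-average low-pass filter with edge replication (odd window k)."""
    half = k // 2
    padded = [x[0]] * half + list(x) + [x[-1]] * half
    return [sum(padded[i:i + k]) / k for i in range(len(x))]

# Toy 1-D "image row": isolated spikes stand in for rain streaks.
signal = [2.0, 2.0, 9.0, 2.0, 2.0, 2.0, 8.0, 2.0]
lf = lowpass(signal)                        # coarse-scale component
hf = [s - l for s, l in zip(signal, lf)]    # streaks + texture
recon = [l + h for l, h in zip(lf, hf)]     # the decomposition is lossless
```

The spikes dominate the HF component while the flat background stays in the LF component; de-raining then amounts to keeping the LF part and only the non-rain portion of the HF part.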
Funding: Supported by the NSFC project under Grant No. 61101237, the Fundamental Research Funds for the Central Universities under No. 2014JBZ001, and the China Postdoctoral Science Foundation under No. 2014M560081.
Abstract: The κ-μ fading model is an advanced channel model for super-dense wireless networks. In this paper, we evaluate system performance over κ-μ fading channels in super-dense relay networks, taking into account multiple independent but not necessarily identically distributed (i.n.i.d.) co-channel interferers (CCI) in an interference-limited environment. More specifically, we derive a useful and accurate cumulative distribution function (CDF) expression for the end-to-end signal-to-interference-plus-noise ratio (SINR). Moreover, we derive novel analytical expressions for the outage probability (OP), average bit error probability (ABEP), and average capacity for binary modulation types and arbitrary positive values of κ and μ for such a system. Furthermore, we provide asymptotic analyses of both the OP and the ABEP to give physical insights. A simplified analytical form of the ABEP in the high-SNR regime is provided as well. Finally, the accuracy of the derived expressions is well validated by Monte Carlo simulations.
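A minimal Monte Carlo validation of the kind mentioned above can be sketched by generating κ-μ fading powers as sums of offset complex Gaussians (μ clusters, dominant-to-scatter power ratio κ) and estimating the outage probability empirically. The interferer model (unit-mean Rayleigh CCI, i.e. κ→0, μ=1) and all parameter values below are illustrative assumptions, not the paper's setup.

```python
import random
from math import sqrt

def kappa_mu_power(kappa, mu, rng):
    """One unit-mean κ-μ fading power sample: mu clusters of complex
    Gaussian scatter, each with a deterministic dominant (LOS) component."""
    sigma = sqrt(1.0 / (2.0 * mu * (1.0 + kappa)))  # scatter std per dimension
    p = sqrt(kappa / (mu * (1.0 + kappa)))          # dominant amplitude per cluster
    return sum(rng.gauss(p, sigma) ** 2 + rng.gauss(0.0, sigma) ** 2
               for _ in range(mu))

def outage_probability(threshold_db, kappa, mu, n_interferers, n_trials, seed=1):
    """Empirical P[SIR < threshold] under Rayleigh co-channel interference
    (Rayleigh is recovered from κ-μ with kappa -> 0 and mu = 1)."""
    rng = random.Random(seed)
    gamma = 10.0 ** (threshold_db / 10.0)
    outages = 0
    for _ in range(n_trials):
        s = kappa_mu_power(kappa, mu, rng)
        i = sum(kappa_mu_power(1e-9, 1, rng) for _ in range(n_interferers))
        outages += s / i < gamma
    return outages / n_trials

op_low = outage_probability(-10.0, kappa=2.0, mu=2, n_interferers=3, n_trials=20000)
op_high = outage_probability(0.0, kappa=2.0, mu=2, n_interferers=3, n_trials=20000)
```

The per-cluster parameters are chosen so the fading power has unit mean (μp² + 2μσ² = 1), and the empirical CDF is monotone in the threshold, which is the basic sanity check one would run before comparing against a closed-form OP expression.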
Funding: Supported by the National Natural Science Foundation of China (41671452) and the China Postdoctoral Science Foundation Funded Project (2017M612510).
Abstract: Hyperspectral image (HSI) classification based on deep learning has been an attractive area in recent years. However, as a kind of data-driven algorithm, a deep learning method usually requires numerous computational resources and high-quality labelled data sets, while the expenditures of high-performance computing and data annotation are expensive. In this paper, to reduce the dependence on massive calculation and labelled samples, we propose a deep Double-Channel Dense network (DDCD) for hyperspectral image classification. Specifically, we design a 3D double-channel dense layer to capture the local and global features of the input, and we propose a linear attention mechanism that approximates dot-product attention with much less memory and computational cost. The number of parameters and the computational cost are markedly lower than those of comparable deep learning methods, which means DDCD has a simpler architecture and higher efficiency. A series of quantitative experiments on six widely used hyperspectral data sets shows that the proposed DDCD obtains state-of-the-art performance, even when labelled samples are severely scarce.
Funding: Supported by the National Key Research and Development Program of China under Grant 2020YFC2004003 and Grant 2020YFC2004002, and the National Natural Science Foundation of China (NSFC) under Grant No. 61571106.
Abstract: Masking-based and spectrum-mapping-based methods are the two main classes of speech enhancement algorithms with deep neural networks (DNNs). However, mapping-based methods only utilize the phase of the noisy speech, which limits the upper bound of speech enhancement performance, while masking-based methods need to estimate the mask accurately, which remains the key problem. Combining the advantages of the two types of methods, this paper proposes the speech enhancement algorithm MM-RDN (masking-mapping residual dense network), based on masking-mapping (MM) and a residual dense network (RDN). Using the logarithmic power spectrogram (LPS) of consecutive frames, MM estimates the ideal ratio mask (IRM) matrix of consecutive frames. The RDN can make full use of the feature maps of all layers. Meanwhile, by using global residual learning to combine shallow and deep features, the RDN obtains global dense features from the LPS, thereby improving the estimation accuracy of the IRM matrix. Simulations show that the proposed method achieves attractive speech enhancement performance in various acoustic environments. Specifically, in untrained acoustic tests with limited priors, e.g., unmatched signal-to-noise ratio (SNR) and unmatched noise category, MM-RDN still outperforms the existing convolutional recurrent network (CRN) method in perceptual evaluation of speech quality (PESQ) and other evaluation indexes. This indicates that the proposed algorithm generalizes better to untrained conditions.
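The ideal ratio mask that MM estimates can be illustrated directly. One common power-domain definition is sketched below (S/(S+N) per time-frequency bin, assuming additive uncorrelated speech and noise powers); other variants, e.g. with a square root, also appear in the literature, and the 2×2 spectrograms are hypothetical toy data.

```python
def ideal_ratio_mask(speech_pow, noise_pow):
    """IRM in one common power-domain form: S / (S + N) per T-F bin."""
    return [[s / (s + n) if s + n > 0.0 else 0.0 for s, n in zip(srow, nrow)]
            for srow, nrow in zip(speech_pow, noise_pow)]

def apply_mask(noisy_pow, mask):
    """Elementwise masking of a noisy power spectrogram."""
    return [[m * y for m, y in zip(mrow, yrow)]
            for mrow, yrow in zip(mask, noisy_pow)]

# Hypothetical 2x2 power spectrograms (frames x frequency bins).
speech = [[4.0, 1.0], [0.0, 9.0]]
noise = [[1.0, 1.0], [2.0, 1.0]]
noisy = [[s + n for s, n in zip(srow, nrow)] for srow, nrow in zip(speech, noise)]

mask = ideal_ratio_mask(speech, noise)
enhanced = apply_mask(noisy, mask)
```

Under the additive-power assumption used here, applying the ideal mask to the noisy spectrogram recovers the speech power exactly; in practice the mask must be estimated from the noisy input alone, which is the accuracy problem MM-RDN targets.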
Funding: Supported by the National Natural Science Foundation of China (62020106003, 62273177, 62233009), the Natural Science Foundation of Jiangsu Province of China (BK20222012), the Programme of Introducing Talents of Discipline to Universities of China (B20007), the Fundamental Research Funds for the Central Universities (NI2024001), and the National Key Laboratory of Space Intelligent Control (HTKJ2023KL502006).
Abstract: The precision and quality of machining on computer numerical control (CNC) machines are significantly impacted by the state of the tool. Therefore, it is essential to monitor the tool's condition in real time during operation. To improve the monitoring accuracy of tool wear values, a tool wear monitoring approach is developed in this work based on an improved integrated model of a densely connected convolutional network (DenseNet) and a gated recurrent unit (GRU), with data preprocessing via the wavelet packet transform (WPT). First, wavelet packet decomposition (WPD) is used to extract time-frequency-domain features from the original time-series monitoring signals of the tool. Second, multidimensional deep features are extracted by a DenseNet containing asymmetric convolution kernels, and feature fusion is performed. A dilation scheme is employed to incorporate more historical data by utilizing dilated convolutional kernels with different dilation rates. Finally, the GRU extracts temporal features from the deep-level signal features, and a fully connected neural network maps these temporal features to the tool wear values, ultimately achieving tool wear monitoring. Comprehensive experiments conducted on reference data sets show that the proposed model performs better in terms of accuracy and generalization than other cutting-edge tool wear monitoring algorithms.
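The WPD preprocessing step can be illustrated with a one-basis sketch: a Haar analysis filter applied recursively to both the low- and high-frequency bands, yielding 2^depth equal-width packet leaves. The Haar filter choice, the depth, and the toy signal are illustrative assumptions; the abstract does not specify the paper's wavelet basis.

```python
from math import sqrt

def haar_step(x):
    """One orthonormal Haar analysis step: split an even-length signal
    into half-length low-pass (sum) and high-pass (difference) bands."""
    s = 1.0 / sqrt(2.0)
    low = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    high = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return low, high

def wavelet_packet(x, depth):
    """Wavelet packet decomposition: unlike the plain DWT, BOTH bands are
    split again at every level, giving 2**depth equal-width leaves."""
    nodes = [list(x)]
    for _ in range(depth):
        nodes = [band for node in nodes for band in haar_step(node)]
    return nodes

signal = [1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0]
leaves = wavelet_packet(signal, depth=2)

# Orthonormality check: total energy is preserved across the packet leaves.
energy_in = sum(v * v for v in signal)
energy_out = sum(v * v for leaf in leaves for v in leaf)
```

Per-leaf statistics (e.g. the energy in each of the 2^depth sub-bands) are a typical way to turn the decomposition into the time-frequency feature vector that feeds the downstream network.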
Funding: Supported in part by the Natural Science Foundation of China (Grant Nos. 62121001, 62171344, 61931005); in part by the Young Elite Scientists Sponsorship Program by CAST; in part by the Key Industry Innovation Chain of Shaanxi (Grant Nos. 2022ZDLGY0501, 2022ZDLGY05-06); in part by the Key Research and Development Program of Shaanxi (Grant No. 2021KWZ-05); and in part by the Major Key Project of PCL (PCL2021A15).
Abstract: In this paper, we reveal the fundamental limitation of network densification on the performance of a caching-enabled small cell network (CSCN) under two typical user association rules, namely, content- and distance-based rules. It is shown that immoderately caching content can significantly change the interference distribution in a CSCN, which may degrade the network area spectral efficiency (ASE). Meanwhile, the content-based rule outperforms the distance-based rule in terms of network ASE only when small cell base stations (BSs) are sparsely deployed with low decoding thresholds. Moreover, it is proved that the network ASE under distance-based user association serves as an upper bound on that under the content-based rule in the dense-BS regime. To enable more spectrum-efficient user association in dense CSCNs, we further optimize the network ASE by designing a probabilistic content retrieving strategy based on the distance-based rule. With the optimized retrieving probability, the network ASE can be substantially enhanced and even increases with growing BS density in the dense-BS regime.
Funding: The National Natural Science Foundation of China (No. 61471164), the Fundamental Research Funds for the Central Universities, and the Scientific Innovation Research of College Graduates in Jiangsu Province (No. KYLX-0133).
Abstract: For the dense macro-femto coexistence network scenario, a long-term-based handover (LTBH) algorithm is proposed. The handover decision is jointly determined by the angle of handover (AHO) and the time-to-stay (TTS) to reduce the number of unnecessary handovers. First, the proposed AHO parameter is used to decrease the computational complexity in scenarios with multiple candidate base stations (CBSs). Then, two types of TTS parameters are given, for fixed base stations and mobile base stations, to make handover decisions among multiple CBSs. The simulation results show that the proposed LTBH algorithm can not only maintain the required transmission rate of users, but also effectively reduce the number of unnecessary handovers in dense macro-femto networks with coexisting mobile BSs.
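The AHO/TTS decision logic can be sketched geometrically for the fixed-BS case: first filter out candidates whose bearing deviates too far from the user's travel direction (the AHO pre-filter), then estimate the time-to-stay as the chord a straight-line trajectory cuts through a circular coverage area. The circular coverage model, straight-line mobility, and threshold values below are illustrative assumptions, not the paper's exact parameters.

```python
from math import atan2, cos, sin, pi

def angle_of_handover(user_pos, heading, bs_pos):
    """Absolute angle (radians, in [0, pi]) between the user's movement
    direction and the direction toward a candidate base station."""
    bearing = atan2(bs_pos[1] - user_pos[1], bs_pos[0] - user_pos[0])
    diff = abs(bearing - heading) % (2 * pi)
    return min(diff, 2 * pi - diff)

def time_to_stay(user_pos, heading, speed, bs_pos, radius):
    """Chord-length / speed estimate of how long a user moving in a straight
    line stays inside a circular coverage area; 0 if the line misses it."""
    dx, dy = bs_pos[0] - user_pos[0], bs_pos[1] - user_pos[1]
    ux, uy = cos(heading), sin(heading)
    along = dx * ux + dy * uy        # projection of BS offset onto heading
    perp = abs(dx * uy - dy * ux)    # perpendicular miss distance
    if perp >= radius:
        return 0.0
    half_chord = (radius ** 2 - perp ** 2) ** 0.5
    t_exit = (along + half_chord) / speed
    t_enter = max((along - half_chord) / speed, 0.0)
    return max(t_exit - t_enter, 0.0)

def select_target(user_pos, heading, speed, candidates, max_aho, min_tts):
    """Pick the candidate with the longest TTS among those passing the AHO
    pre-filter; return None (no handover) if none exceeds min_tts."""
    best, best_tts = None, min_tts
    for name, pos, radius in candidates:
        if angle_of_handover(user_pos, heading, pos) > max_aho:
            continue  # AHO filter: BS lies away from the travel direction
        tts = time_to_stay(user_pos, heading, speed, pos, radius)
        if tts > best_tts:
            best, best_tts = name, tts
    return best

# User at the origin heading east at 10 m/s; one cell ahead, one behind.
cbs = [("ahead", (200.0, 0.0), 100.0), ("behind", (-200.0, 0.0), 100.0)]
target = select_target((0.0, 0.0), 0.0, 10.0, cbs, max_aho=pi / 3, min_tts=5.0)
```

The cell behind the user is rejected by the AHO filter before any TTS computation, which is how the angle check cuts the per-decision complexity when many CBSs are present.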
Funding: Supported partially by the Sichuan Science and Technology Program under Grants 2019YJ0356, 21ZDYF2484, and 21GJHZ0061, and by the Scientific Research Foundation of the Education Department of Sichuan Province under Grant 18ZB0117.
Abstract: The main task of automatic brain tumor segmentation in magnetic resonance imaging (MRI) is to automatically segment the peritumoral edema, necrotic core, enhancing tumor core, and non-enhancing tumor core from 3D MR images. Because the location, size, shape, and intensity of brain tumors vary greatly, it is very difficult to segment these brain tumor regions automatically. In this paper, by combining the advantages of DenseNet and ResNet, we propose a new 3D U-Net with dense encoder blocks and residual decoder blocks: dense blocks are used in the encoder part and residual blocks in the decoder part. The number of output feature maps increases with the network depth in the contracting path of the encoder, which is consistent with the characteristics of dense blocks. Using dense blocks decreases the number of network parameters, deepens the network, strengthens feature propagation, alleviates vanishing gradients, and enlarges receptive fields. The residual blocks replace the convolutional blocks of the original U-Net decoder, which improves network performance. Our proposed approach was trained and validated on the BraTS2019 training and validation data sets. We obtained Dice scores of 0.901, 0.815, and 0.766 for the whole tumor, tumor core, and enhancing tumor core, respectively, on the BraTS2019 validation data set. Our method performs better than the original 3D U-Net. The results of our experiments demonstrate that, compared with some state-of-the-art methods, our approach is a competitive automatic brain tumor segmentation method.
Funding: Supported by the National Natural Science Foundation of China (No. 81830052), the Shanghai Natural Science Foundation of China (No. 20ZR1438300), and the Shanghai Science and Technology Support Project (No. 18441900500), China.
Abstract: To overcome the computational burden of processing three-dimensional (3D) medical scans and the lack of spatial information in two-dimensional (2D) medical scans, a novel segmentation method is proposed that integrates the segmentation results of three densely connected 2D convolutional neural networks (2D-CNNs). In order to combine low-level and high-level features, we added densely connected blocks to the network design so that low-level features are not lost as the network deepens during learning. Further, to resolve the blurred boundary of the glioma edema area, we superimposed and fused the T2-weighted fluid-attenuated inversion recovery (FLAIR) modality image and the T2-weighted (T2) modality image to enhance the edema region. For the network training loss, we improved the cross-entropy loss function to effectively avoid overfitting. On the Multimodal Brain Tumor Image Segmentation Challenge (BraTS) data sets, our method achieves Dice similarity coefficients of 0.84, 0.82, and 0.83 on the BraTS2018 training set; 0.82, 0.85, and 0.83 on the BraTS2018 validation set; and 0.81, 0.78, and 0.83 on the BraTS2013 testing set for whole tumors, tumor cores, and enhancing cores, respectively. Experimental results show that the proposed method achieves promising accuracy and fast processing, demonstrating good potential for clinical medicine.
Funding: Supported, in part, by the National Natural Science Foundation of China under Grant Numbers 62272236 and 62376128, and, in part, by the Natural Science Foundation of Jiangsu Province under Grant Numbers BK20201136 and BK20191401.
Abstract: Fall behavior is closely related to high mortality in the elderly, so fall detection has become an important and urgent research area. However, existing fall detection methods are difficult to apply in daily life due to their large computational load and poor detection accuracy. To solve these problems, this paper proposes a dense spatial-temporal graph convolutional network based on lightweight OpenPose. Lightweight OpenPose uses MobileNet as the feature extraction network, and its prediction layer uses a bottleneck-asymmetric structure, thus reducing the network size. The bottleneck-asymmetric structure compresses the number of input channels of the feature maps by 1×1 convolution and replaces the 7×7 convolution with a parallel asymmetric structure of 1×7, 7×1, and 7×7 convolutions. The spatial-temporal graph convolutional network divides the multi-layer convolution into dense blocks, and the convolutional layers in each dense block are connected, which improves feature transitivity and enhances the network's ability to extract features, thus improving detection accuracy. Two representative data sets, the Multiple Cameras Fall data set (MCF) and the Nanyang Technological University Red Green Blue + Depth Action Recognition data set (NTU RGB+D), are selected for our experiments; NTU RGB+D has two evaluation benchmarks. The results show that the proposed model is superior to current fall detection models. The accuracy of the network on the MCF data set is 96.3%, and the accuracies on the two evaluation benchmarks of the NTU RGB+D data set are 85.6% and 93.5%, respectively.