Deep learning algorithms are effective data mining methods and have been used in many fields to solve practical problems. However, deep learning algorithms often contain hyper-parameters which may be continuous, integer, or mixed; these are typically set based on experience, yet they largely affect the effectiveness of activity recognition. In order to adapt to different hyper-parameter optimization problems, an improved Cuckoo Search (CS) algorithm is proposed to optimize the mixed hyper-parameters of deep learning algorithms. The algorithm optimizes the hyper-parameters of the deep learning model robustly and intelligently selects the combination of integer and continuous hyper-parameters that makes the model optimal. The mixed hyper-parameters of Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM) and CNN-LSTM models are then optimized with this methodology on smart home activity recognition datasets. Results show that the methodology can improve the performance of the deep learning model, and that, whether experienced or not, one can obtain a better deep learning model using this method.
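As a minimal illustration of the underlying idea (not the paper's improved CS), a plain Cuckoo Search loop over a mixed integer/continuous box might look like the sketch below; the objective `evaluate`, the bounds, and the integer mask are placeholder assumptions.

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    # Mantegna's algorithm for a Levy-stable step, commonly used in Cuckoo Search
    rng = rng or np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(evaluate, lower, upper, is_int, n_nests=15, n_iter=50, pa=0.25, seed=0):
    """Minimize evaluate(x) over a box, rounding the dimensions flagged in is_int."""
    rng = np.random.default_rng(seed)
    lower, upper, is_int = map(np.asarray, (lower, upper, is_int))
    dim = lower.size

    def repair(x):
        x = np.clip(x, lower, upper)
        return np.where(is_int, np.round(x), x)   # snap integer hyper-parameters

    nests = repair(rng.uniform(lower, upper, (n_nests, dim)))
    fitness = np.array([evaluate(x) for x in nests])
    for _ in range(n_iter):
        best = nests[fitness.argmin()]
        # Levy-flight moves biased toward the current best nest
        for i in range(n_nests):
            cand = repair(nests[i] + 0.01 * levy_step(dim, rng=rng) * (nests[i] - best))
            f = evaluate(cand)
            if f < fitness[i]:
                nests[i], fitness[i] = cand, f
        # abandon a fraction pa of nests and rebuild them randomly
        abandon = rng.random(n_nests) < pa
        nests[abandon] = repair(rng.uniform(lower, upper, (abandon.sum(), dim)))
        fitness[abandon] = [evaluate(x) for x in nests[abandon]]
    return nests[fitness.argmin()], fitness.min()
```

In a hyper-parameter setting, `evaluate` would train a small CNN/LSTM with the candidate configuration and return a validation loss.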
The detection of small targets poses a significant challenge for infrared search and tracking (IRST) systems, as they must strike a delicate balance between accuracy and speed. In this paper, we propose a detection algorithm based on spatial attention density peaks searching (SADPS) and an adaptive window selection scheme. First, the difference-of-Gaussians (DoG) filter is introduced for preprocessing raw infrared images. Second, the image is processed by SADPS. Third, an adaptive window selection scheme is applied to obtain window templates matching the target scale. Then, the small target feature is used to enhance the target and suppress the background. Finally, the true targets are segmented through a threshold. The experimental results show that, compared with seven state-of-the-art small target detection baseline algorithms, the proposed method not only has better detection accuracy but also reasonable time consumption.
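The DoG preprocessing step is standard image filtering; a minimal sketch follows, with illustrative filter scales that are assumptions rather than the paper's values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_preprocess(image, sigma_small=1.0, sigma_large=3.0):
    """Difference-of-Gaussians band-pass: emphasizes blob-like small targets
    while suppressing slowly varying background."""
    image = image.astype(np.float32)
    dog = gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)
    return np.clip(dog, 0, None)  # keep bright-on-dark responses only
```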
To address the issues of unknown target size, blurred edges, background interference and low contrast in infrared small target detection, this paper proposes a method based on density peaks searching and weighted multi-feature local difference. Firstly, an improved high-boost filter is used for preprocessing to eliminate background clutter and high-brightness interference, thereby increasing the probability of capturing real targets in the density peak search. Secondly, a triple-layer window is used to extract features from the area surrounding candidate targets, addressing the uncertainty of small target sizes. By calculating multi-feature local differences between the triple-layer windows, the problems of blurred target edges and low contrast are resolved. To balance the contribution of different features, intra-class distance is used to calculate weights, achieving weighted fusion of the multi-feature local differences of candidate targets. The real targets are then extracted using the interquartile range. Experiments on datasets such as SIRST and IRSTD-1K show that the proposed method is suitable for various complex scene types and demonstrates good robustness and detection performance.
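The final interquartile-range extraction step can be sketched as follows; the 1.5 multiplier is the conventional outlier rule and an assumption here, not necessarily the paper's setting.

```python
import numpy as np

def iqr_select(scores, k=1.5):
    """Keep candidates whose fused local-difference score is an upper outlier
    under the interquartile-range rule."""
    scores = np.asarray(scores, dtype=float)
    q1, q3 = np.percentile(scores, [25, 75])
    threshold = q3 + k * (q3 - q1)
    return np.flatnonzero(scores > threshold)  # indices of retained candidate targets
```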
Terrestrial laser scanning (TLS) accurately captures tree structural information and provides prerequisites for tree-scale estimations of forest biophysical attributes. Quantifying tree-scale attributes from TLS point clouds requires segmentation, yet occlusion effects severely affect the accuracy of automated individual tree segmentation. In this study, we proposed a novel method using ellipsoid directional searching and point compensation algorithms to alleviate occlusion effects. Firstly, region growing and point compensation algorithms are used to determine the location of tree roots. Secondly, neighbor points are extracted within an ellipsoid neighborhood to mitigate occlusion effects compared with the k-nearest neighbor (KNN) approach. Thirdly, neighbor points are uniformly subsampled by the directional searching algorithm, based on the Fibonacci principle, in multiple spatial directions to reduce memory consumption. Finally, a graph describing connectivity between a point and its neighbors is constructed and used to complete individual tree segmentation with the shortest-path algorithm. The proposed method was evaluated on a public TLS dataset comprising six forest plots with three complexity categories in Evo, Finland, and it reached the highest mean accuracy of 77.5%, higher than previous studies on tree detection. We also extracted and validated tree structure attributes using manual segmentation reference values. The RMSE, RMSE%, bias, and bias% of tree height, crown base height, crown projection area, crown surface area, and crown volume were used to evaluate the segmentation accuracy. Overall, the proposed method avoids many inherent limitations of current methods and can accurately map canopy structures in occluded, complex forest stands.
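The "Fibonacci principle" for choosing search directions is not spelled out in the abstract; one common reading is Fibonacci-lattice (golden-angle) sampling of unit directions on the sphere, sketched below as an assumption.

```python
import numpy as np

def fibonacci_directions(n):
    """Generate n roughly uniform unit direction vectors on the sphere
    using the Fibonacci (golden-angle) lattice."""
    i = np.arange(n)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    z = 1.0 - 2.0 * (i + 0.5) / n          # evenly spaced heights in (-1, 1)
    r = np.sqrt(1.0 - z ** 2)
    theta = golden_angle * i
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)
```

Neighbor points would then be binned by the direction they fall closest to, and each bin subsampled, which keeps the retained neighbors spread over all spatial directions.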
Regularized system identification has become the research frontier of system identification in the past decade. One related core subject is to study the convergence properties of various hyper-parameter estimators as the sample size goes to infinity. In this paper, we consider one commonly used hyper-parameter estimator, the empirical Bayes (EB) estimator. Its convergence in distribution has been studied, and the explicit expression of the covariance matrix of its limiting distribution has been given. However, what we are truly interested in are the factors contained in the covariance matrix of the EB hyper-parameter estimator, and for that, the convergence of its covariance matrix to that of its limiting distribution is required. In general, the convergence in distribution of a sequence of random variables does not necessarily guarantee the convergence of its covariance matrix. Thus, the derivation of such convergence is a necessary complement to our theoretical analysis of the factors that influence the convergence properties of the EB hyper-parameter estimator. In this paper, we consider regularized finite impulse response (FIR) model estimation with deterministic inputs and show that the covariance matrix of the EB hyper-parameter estimator converges to that of its limiting distribution. Moreover, we run numerical simulations to demonstrate the efficacy of our theoretical results.
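For reference, the regularized FIR estimate and the EB hyper-parameter estimator take the following standard forms in the regularized identification literature; the notation here is generic and given as context rather than the paper's exact setup.

\[
\begin{aligned}
Y &= \Phi\theta + V, \quad V \sim \mathcal{N}(0,\sigma^{2} I_N), \quad \theta \sim \mathcal{N}\big(0, P(\eta)\big),\\
\hat{\theta}(\eta) &= \big(\Phi^{\top}\Phi + \sigma^{2} P(\eta)^{-1}\big)^{-1}\Phi^{\top} Y,\\
\hat{\eta}_{\mathrm{EB}} &= \arg\min_{\eta}\; Y^{\top} Q(\eta)^{-1} Y + \log\det Q(\eta),
\qquad Q(\eta) = \Phi P(\eta)\Phi^{\top} + \sigma^{2} I_N,
\end{aligned}
\]

where the EB cost is the negative log marginal likelihood of the data under the kernel-parameterized prior.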
Precisely estimating the state of health (SOH) of lithium-ion batteries is essential for battery management systems (BMS), as it plays a key role in ensuring the safe and reliable operation of battery systems. However, current SOH estimation methods often overlook the valuable temperature information that can effectively characterize battery aging during capacity degradation. Additionally, the Elman neural network, which is commonly employed for SOH estimation, exhibits several drawbacks, including slow training speed, a tendency to become trapped in local minima, and the initialization of weights and thresholds with pseudo-random numbers, which leads to unstable model performance. To address these issues, this study proposes a method for estimating the SOH of lithium-ion batteries based on differential thermal voltammetry (DTV) and an SSA-Elman neural network. Firstly, two health features (HFs) considering temperature factors and battery voltage are extracted from the differential thermal voltammetry curves and incremental capacity curves. Next, the Sparrow Search Algorithm (SSA) is employed to optimize the initial weights and thresholds of the Elman neural network, forming the SSA-Elman model. To validate the performance, various neural networks, including the proposed SSA-Elman network, are tested on the Oxford battery aging dataset. The experimental results demonstrate that the method developed in this study achieves superior accuracy and robustness, with a mean absolute error (MAE) of less than 0.9% and a root mean square error (RMSE) below 1.4%.
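A rough sketch of how such an SSA-Elman coupling can be framed as black-box optimization: a candidate vector of initial weights and thresholds is scored by the prediction error of a small Elman-style network. The tiny recurrent cell below is an illustrative stand-in that skips the subsequent gradient training step, and the feature/label arrays are assumptions, not the study's data.

```python
import numpy as np

def elman_forecast(params, X, hidden=8):
    """Minimal Elman-style recurrent pass; params packs W_in, W_rec, b_h, W_out, b_o."""
    n_in = X.shape[1]
    i = 0
    W_in  = params[i:i + hidden * n_in].reshape(hidden, n_in); i += hidden * n_in
    W_rec = params[i:i + hidden * hidden].reshape(hidden, hidden); i += hidden * hidden
    b_h   = params[i:i + hidden]; i += hidden
    W_out = params[i:i + hidden]; i += hidden
    b_o   = params[i]
    h = np.zeros(hidden)
    out = []
    for x in X:                                   # sequence of HF feature vectors
        h = np.tanh(W_in @ x + W_rec @ h + b_h)   # context layer carries the previous state
        out.append(W_out @ h + b_o)
    return np.array(out)

def soh_objective(params, X_train, y_train):
    """Fitness a swarm optimizer (SSA in the paper) would minimize:
    error of the network evaluated from the candidate initial weights."""
    pred = elman_forecast(params, X_train)
    return float(np.mean((pred - y_train) ** 2))
```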
Diagnosing cardiovascular disease is one of the biggest medical difficulties in recent years. Coronary heart disease (CHD) is a kind of heart and blood vessel disease, and predicting this sort of cardiac illness leads to more precise decisions for cardiac disorders. Implementing Grid Search Optimization (GSO) machine training models is therefore a useful way to forecast the sickness as soon as possible. The state-of-the-art work is the tuning of the hyperparameters together with feature selection, utilizing model search to minimize the false-negative rate. Three models with a cross-validation approach perform the required task. Feature selection is based on statistical and correlation matrices for multivariate analysis. For the Random Search and Grid Search models, extensive comparison findings are produced using recall, F1 score, and precision measurements. The models are evaluated with these metrics and kappa statistics, which illustrate the comparability of the three models. The study focuses on optimizing feature selection and tweaking hyperparameters to improve model accuracy and predict heart disease by examining the Framingham dataset with random forest classification. Tuning the hyperparameters via grid search thus decreases the error rate and achieves global optimization.
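A minimal sketch of a grid-search-with-cross-validation setup of the kind described above, using scikit-learn; the synthetic data, parameter grid, and scoring choice are placeholder assumptions, not the study's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import recall_score, precision_score, f1_score, cohen_kappa_score

# Synthetic, imbalanced stand-in for Framingham-style features and CHD labels
X, y = make_classification(n_samples=500, n_features=15, weights=[0.85], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 8, 16],
    "min_samples_leaf": [1, 5],
}
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    scoring="recall",        # favour a low false-negative rate
    cv=5,
)
search.fit(X_train, y_train)
pred = search.best_estimator_.predict(X_test)
print(search.best_params_,
      recall_score(y_test, pred),
      precision_score(y_test, pred),
      f1_score(y_test, pred),
      cohen_kappa_score(y_test, pred))
```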
Determination of Shear Bond Strength (SBS) at the interlayer of double-layer asphalt concrete is crucial in flexible pavement structures. The study used three Machine Learning (ML) models, including K-Nearest Neighbors (KNN), Extra Trees (ET), and Light Gradient Boosting Machine (LGBM), to predict SBS based on easily determinable input parameters. The Grid Search technique was employed for hyper-parameter tuning of the ML models, and cross-validation and learning curve analysis were used for training the models. The models were built on a database of 240 experimental results and three input variables: temperature, normal pressure, and tack coat rate. Model validation was performed using three statistical criteria: the coefficient of determination (R2), the Root Mean Square Error (RMSE), and the Mean Absolute Error (MAE). Additionally, SHAP (Shapley Additive exPlanations) analysis was used to validate the importance of the input variables in the prediction of SBS. Results show that these models accurately predict SBS, with LGBM providing outstanding performance. SHAP analysis for LGBM indicates that temperature is the most influential factor on SBS. Consequently, the proposed ML models can quickly and accurately predict SBS between two layers of asphalt concrete, serving practical applications in flexible pavement structure design.
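A compact sketch of the LGBM-plus-SHAP portion of such a workflow; the synthetic stand-in data for temperature, normal pressure, and tack coat rate is an assumption, not the study's 240-sample database.

```python
import numpy as np
import shap
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in: columns are [temperature, normal pressure, tack coat rate]
X = rng.uniform([10, 0.1, 0.2], [60, 0.6, 1.0], size=(240, 3))
y = 2.0 - 0.03 * X[:, 0] + 1.5 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(0, 0.05, 240)

model = LGBMRegressor(n_estimators=300, learning_rate=0.05).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Mean absolute SHAP value per feature gives a global importance ranking
print(np.abs(shap_values).mean(axis=0))
```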
The problem of collision avoidance for non-cooperative targets has received significant attention from researchers in recent years. Non-cooperative targets exhibit uncertain states and unpredictable behaviors, making collision avoidance significantly more challenging than for space debris. Much existing research focuses on the continuous thrust model, whereas the impulsive maneuver model is more appropriate for long-duration and long-distance avoidance missions. Additionally, it is important to minimize the impact on the original mission while avoiding non-cooperative targets. On the other hand, existing avoidance algorithms are computationally complex and time-consuming, especially given the limited computing capability of the on-board computer, posing challenges for practical engineering applications. To overcome these difficulties, this paper makes the following key contributions: (A) a turn-based (sequential decision-making) limited-area impulsive collision avoidance model considering the time delay of precision orbit determination is established for the first time; (B) a novel Selection Probability Learning Adaptive Search-depth Search Tree (SPL-ASST) algorithm is proposed for non-cooperative target avoidance, which improves decision-making efficiency by introducing an adaptive-search-depth mechanism and a neural network into traditional Monte Carlo Tree Search (MCTS). Numerical simulations confirm the effectiveness and efficiency of the proposed method.
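SPL-ASST itself is specific to this paper, but its starting point, UCT-style selection in Monte Carlo Tree Search, can be sketched generically; the node structure and exploration constant below are assumptions.

```python
import math

class Node:
    def __init__(self, parent=None):
        self.parent = parent
        self.children = {}      # action -> Node
        self.visits = 0
        self.value_sum = 0.0

    def ucb_score(self, c=1.4):
        if self.visits == 0:
            return float("inf")                     # force unexplored actions first
        exploit = self.value_sum / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore

def select(node):
    """Descend the tree by repeatedly picking the child with the highest UCB score."""
    while node.children:
        node = max(node.children.values(), key=Node.ucb_score)
    return node
```

The paper's variant replaces parts of this loop: a learned selection probability biases the choice of child, and the rollout depth adapts instead of being fixed.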
In order to meet the urgent need of infrared search and track applications for accurate identification and positioning of infrared guidance aircraft, an active-detection mid-wave infrared search and track system (ADMWIRSTS) based on the "cat-eye effect" was developed. The ADMWIRSTS mainly consists of a light beam control subsystem and an infrared search and track subsystem. The light beam control subsystem uses an integrated opto-mechanical two-dimensional pointing mirror to control the azimuth and pitch directions of the system, covering the whole airspace range of 360°×90°. The infrared search and track subsystem uses two mid-wave infrared cooled 640×512 focal plane detectors and adopts a co-aperture beam-expanding, infrared and illumination-laser beam-combining, infrared search, and two-stage track opto-mechanical design. In this work, the system integration design and structural finite-element analysis were conducted, search imaging and two-stage track imaging of external scenes were performed, and the active-detection technologies were experimentally verified in the laboratory. The experimental results show that the system can realize infrared search and track imaging, as well as accurate identification and positioning of a mid-wave infrared guidance or infrared detection system through the echo of the illumination laser. This work has important technical significance and practical application value for the development of compactly integrated, high-precision infrared search and track and laser suppression systems, and has broad application prospects in the protection of equipment, assets and infrastructure.
An in-pixel histogramming time-to-digital converter (hTDC) based on octonary search and 4-tap phase detection is presented, aiming to improve frame rate while ensuring high precision. The proposed hTDC is a 12-bit two-step converter consisting of a 6-bit coarse quantization and a 6-bit fine quantization, which supports a time resolution of 120 ps and multiphoton counting up to 2 GHz without a GHz reference frequency. The proposed hTDC is designed in a 0.11 μm CMOS process with an area consumption of 6900 μm². Data from a behavioral-level model is imported into the designed hTDC circuit for simulation verification. The post-simulation results show that the proposed hTDC achieves 0.8% depth precision over a 9 m range for short-range system design specifications and 0.2% depth precision over a 48 m range for long-range system design specifications. Under 30×10³ lux background light conditions, the proposed hTDC can be used in a SPAD-based flash LiDAR sensor to achieve a frame rate of 40 fps with 200 ps resolution over a 9 m range.
The Runge-Kutta optimiser (RUN) algorithm, renowned for its powerful optimisation capabilities, faces challenges in dealing with increasing complexity in real-world problems. Specifically, it shows deficiencies in terms of limited local exploration capabilities and less precise solutions. Therefore, this research integrates the topological search (TS) mechanism and the gradient search rule (GSR) into the framework of RUN, introducing an enhanced algorithm called TGRUN to improve the performance of the original algorithm. The TS mechanism employs a circular topological scheme to conduct a thorough exploration of the solution regions surrounding each solution, enabling a careful examination of valuable solution areas and enhancing the algorithm's effectiveness in local exploration. To prevent the algorithm from becoming trapped in local optima, the GSR integrates gradient-descent principles to direct the algorithm in a wider investigation of the global solution space. This study conducted a series of experiments on the IEEE CEC2017 comprehensive benchmark functions to assess the enhanced effectiveness of TGRUN. Additionally, the evaluation includes real-world engineering design and feature selection problems, serving as an additional test of the optimisation capabilities of the algorithm. The validation outcomes indicate a significant improvement in the optimisation capabilities and solution accuracy of TGRUN.
The requirement for precise detection and recognition of target pedestrians in unprocessed real-world imagery drives the formulation of person search as an integrated technological framework that unifies pedestrian detection and person re-identification (Re-ID). However, the inherent discrepancy between the optimization objectives of coarse-grained localization in pedestrian detection and fine-grained discriminative learning in Re-ID, combined with the substantial performance degradation of Re-ID during joint training caused by the Faster R-CNN-based branch, collectively constitutes a critical bottleneck for person search. In this work, we propose a cascaded person search model (SeqXt) based on SeqNet and ConvNeXt that adopts a sequential end-to-end network as its core architecture, artfully integrates the design logic of the two-step and one-step frameworks, and incorporates the two-step method's advantage in efficient subtask handling while preserving the one-step method's efficiency in end-to-end training. Firstly, we utilize ConvNeXt-Base as the feature extraction module, which incorporates part of the design concept of the Transformer, enhances the consideration of global context information, and boosts feature discrimination through an implicit self-attention mechanism. Secondly, we introduce prototype-guided normalization for calibrating the feature distribution, which leverages the archetype features of individual identities to prevent features from being overly inclined towards frequently occurring IDs, notably improving the intra-class compactness and inter-class separability of person identities. Finally, we put forward an innovative loss function named the Dynamic Online Instance Matching (DOIM) loss, which employs a hard-sample-assistant method to adaptively update the lookup table (LUT) and the circular queue (CQ), aiming to further enhance the distinctiveness of features between classes. Experimental results on the public datasets CUHK-SYSU and PRW and the private dataset UESTC-PS show that the proposed method achieves state-of-the-art results.
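DOIM extends the standard Online Instance Matching (OIM) machinery of a lookup table plus circular queue; below is a hedged PyTorch-style sketch of that baseline. Shapes, temperature, and momentum are assumptions, and the paper's dynamic, hard-sample-assisted updates are not reproduced.

```python
import torch
import torch.nn.functional as F

class OIMHead:
    """Baseline OIM-style matcher: LUT of labeled identity prototypes plus a
    circular queue (CQ) of unlabeled features."""
    def __init__(self, num_ids=5000, queue_size=5000, dim=256, momentum=0.5, temp=1 / 30):
        self.lut = F.normalize(torch.randn(num_ids, dim), dim=1)
        self.cq = F.normalize(torch.randn(queue_size, dim), dim=1)
        self.momentum, self.temp, self.ptr = momentum, temp, 0

    def loss(self, feats, labels):
        feats = F.normalize(feats, dim=1)
        logits = feats @ torch.cat([self.lut, self.cq]).t() / self.temp
        # only labeled persons (label >= 0) contribute to the cross-entropy target
        return F.cross_entropy(logits, labels, ignore_index=-1)

    @torch.no_grad()
    def update(self, feats, labels):
        feats = F.normalize(feats, dim=1)
        for f, y in zip(feats, labels):
            if y >= 0:   # labeled: exponential-moving-average prototype update
                self.lut[y] = F.normalize(self.momentum * self.lut[y] + (1 - self.momentum) * f, dim=0)
            else:        # unlabeled: push into the circular queue
                self.cq[self.ptr] = f
                self.ptr = (self.ptr + 1) % self.cq.shape[0]
```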
A non-orthogonal multiple access (NOMA) power allocation scheme based on the sparrow search algorithm (SSA) is proposed in this work. Specifically, the logarithmic utility function is utilized to address the potential fairness issue that may arise from the maximum sum-rate based objective function, and the optical power constraints are set considering the non-negativity of the transmit signal, the requirement of human eye safety and all users' quality of service (QoS). Then, the SSA is utilized to solve this optimization problem. Moreover, to demonstrate the superiority of the proposed strategy, it is compared with the fixed power allocation (FPA) and the gain ratio power allocation (GRPA) schemes. Results show that regardless of the number of users considered, the sum-rate achieved by SSA consistently outperforms that of the FPA and GRPA schemes. Specifically, compared to the FPA and GRPA schemes, the sum-rate obtained by SSA is increased by 40.45% and 53.44%, respectively, when the number of users is 7. The proposed SSA also has better performance in terms of user fairness. This work will benefit the design and development of NOMA-visible light communication (VLC) systems.
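The fitness that a swarm optimizer such as SSA would maximize can be sketched as follows; the simplified SIC rate model, penalty weights, and constants are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def log_utility_fitness(p, gains, noise=1e-12, p_total=1.0, min_rate=0.1):
    """Score a candidate NOMA power vector p (users sorted so index 0 has the
    weakest channel). Sum of log-rates is the fairness-aware objective; penalties
    enforce the power budget and a per-user minimum-rate (QoS) floor."""
    p = np.asarray(p, dtype=float)
    if np.any(p < 0):                       # transmit power must be non-negative
        return -np.inf
    rates = []
    for k, (pk, gk) in enumerate(zip(p, gains)):
        interference = gk * p[k + 1:].sum()  # stronger users' signals remain as interference under SIC
        rates.append(np.log2(1.0 + gk * pk / (interference + noise)))
    rates = np.array(rates)
    penalty = 1e3 * max(0.0, p.sum() - p_total) + 1e3 * np.sum(np.maximum(0.0, min_rate - rates))
    return float(np.sum(np.log(rates + 1e-9)) - penalty)
```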
The increasing complexity of on-orbit tasks imposes great demands on the flexible operation of space robotic arms, prompting the development of space robots from single-arm manipulation to multi-arm collaboration. In this paper, a combined approach of Learning from Demonstration (LfD) and Reinforcement Learning (RL) is proposed for space multi-arm collaborative skill learning. The combination effectively resolves the trade-off between learning efficiency and feasible solution in LfD, as well as the time-consuming pursuit of the optimal solution in RL. With the prior knowledge of LfD, space robotic arms can achieve efficient guided learning in high-dimensional state-action space. Specifically, an LfD approach with Probabilistic Movement Primitives (ProMP) is firstly utilized to encode and reproduce the demonstration actions, generating a distribution as the initialization of policy. Then in the RL stage, a Relative Entropy Policy Search (REPS) algorithm modified in continuous state-action space is employed for further policy improvement. More importantly, the learned behaviors can maintain and reflect the characteristics of demonstrations. In addition, a series of supplementary policy search mechanisms are designed to accelerate the exploration process. The effectiveness of the proposed method has been verified both theoretically and experimentally. Moreover, comparisons with state-of-the-art methods have confirmed the outperformance of the approach.
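The ProMP encoding stage can be sketched compactly: each demonstration is projected onto radial basis functions and the resulting weight vectors are summarized by a Gaussian that initializes the RL policy. The basis count, regularization, and one-dimensional trajectories below are assumptions.

```python
import numpy as np

def rbf_features(t, n_basis=20, width=0.02):
    """Normalized Gaussian basis functions over phase t in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

def fit_promp(demos, n_basis=20, reg=1e-6):
    """Encode demonstrations (each a 1-D array of positions over time) as a
    Gaussian over basis-function weights: w ~ N(mu_w, Sigma_w)."""
    weights = []
    for traj in demos:
        t = np.linspace(0, 1, len(traj))
        phi = rbf_features(t, n_basis)
        # ridge regression of the trajectory onto the basis functions
        w, *_ = np.linalg.lstsq(phi.T @ phi + reg * np.eye(n_basis), phi.T @ traj, rcond=None)
        weights.append(w)
    W = np.stack(weights)
    return W.mean(axis=0), np.cov(W, rowvar=False) + reg * np.eye(n_basis)
```

The returned mean and covariance define the initial policy distribution that a REPS-style update would then refine.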
Modern air battlefield operations are characterized by flexibility and change, and the battlefield evolves rapidly and intricately. However, traditional air target intent recognition methods, which mainly rely on manually designed neural network models, find it difficult to maintain sustained and excellent performance in such a complex and changing environment. To address the problem of the adaptability of neural network models in complex environments, we propose a lightweight Transformer model (TransATIR) with a strong adaptive adjustment capability, based on the characteristics of air target intent recognition and the neural network architecture search technique. Extensive experiments have proved that TransATIR can efficiently extract deep feature information from battlefield situation data by utilizing the neural architecture search algorithm, in order to quickly and accurately identify the real intention of the target. The experimental results indicate that TransATIR significantly improves recognition accuracy compared to the existing state-of-the-art methods, and also effectively reduces the computational complexity of the model.
This paper introduces a novel optimization approach called Recuperated Seed Search Optimization (RSSO), designed to address challenges in solving mechanical engineering design problems. Many optimization techniques struggle with slow convergence and suboptimal solutions due to the complex, nonlinear nature of these problems. The Sperm Swarm Optimization (SSO) algorithm, which mimics the movement of sperm toward an egg, is one such technique. To improve SSO, researchers combined it with three strategies: opposition-based learning (OBL), Cauchy mutation (CM), and position clamping. OBL introduces diversity into SSO by exploring opposite solutions, speeding up convergence. CM enhances both exploration and exploitation capabilities throughout the optimization process. The combined approach, RSSO, has been rigorously tested on standard benchmark functions, real-world engineering problems, and through statistical analysis (the Wilcoxon test). The results demonstrate that RSSO significantly outperforms other optimization algorithms, achieving faster convergence and better solutions. The paper details the RSSO algorithm, discusses its implementation, and presents comparative results that validate its effectiveness in solving complex engineering design challenges.
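The two add-on operators are easy to state in isolation; a hedged sketch of generic opposition-based learning and Cauchy mutation steps follows, with bounds and scale as placeholders rather than the paper's exact formulation.

```python
import numpy as np

def opposite_solution(x, lower, upper):
    """Opposition-based learning: reflect a candidate across the centre of the search box."""
    return np.asarray(lower) + np.asarray(upper) - np.asarray(x)

def cauchy_mutate(x, lower, upper, scale=0.1, rng=None):
    """Perturb a candidate with heavy-tailed Cauchy noise, then clamp to the bounds."""
    rng = rng or np.random.default_rng()
    span = np.asarray(upper) - np.asarray(lower)
    step = scale * span * rng.standard_cauchy(len(x))
    return np.clip(np.asarray(x) + step, lower, upper)
```

In an RSSO-style loop, the opposite of each new candidate would be evaluated and kept if better, and Cauchy mutation applied to stagnating candidates to help them escape local optima.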
Quantum algorithms have demonstrated provable speedups over classical counterparts, yet establishing a comprehensive theoretical framework to understand the quantum advantage remains a core challenge. In this work, we decode the quantum search advantage by investigating the critical role of quantum state properties in random-walk-based algorithms. We propose three distinct variants of quantum random-walk search algorithms and derive exact analytical expressions for their success probabilities. These probabilities are fundamentally determined by specific initial-state properties: the coherence fraction governs the first algorithm's performance, while entanglement and coherence dominate the outcomes of the second and third algorithms, respectively. We show that an increased coherence fraction enhances success probability, but greater entanglement and coherence reduce it in the latter two cases. These findings reveal fundamental insights into harnessing quantum properties for advantage and guide algorithm design. Our searches achieve Grover-like speedups and show significant potential for quantum-enhanced machine learning.
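For context on the claimed Grover-like speedup, the textbook amplitude-amplification success probability (not the paper's own expressions) after $k$ iterations over $N$ items with $M$ marked is

\[
P_{\mathrm{success}}(k)=\sin^{2}\!\big((2k+1)\,\theta\big),\qquad \theta=\arcsin\sqrt{M/N},
\]

so roughly $k \approx \frac{\pi}{4}\sqrt{N/M}$ iterations suffice, which is the $O(\sqrt{N})$ scaling that "Grover-like" refers to.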