This paper proposes an equivalent modeling method for photovoltaic (PV) power stations via a particle swarm optimization (PSO) K-means clustering (KMC) algorithm with passive filter parameter clustering, addressing the complexity, simulation time cost and convergence problems of detailed PV power station models. First, the amplitude-frequency curves of different filter parameters are analyzed. Based on the results, a grouping parameter set characterizing the external filter characteristics is established, and these parameters are defined as clustering parameters. A single PV inverter model is then established as a prerequisite. The proposed equivalent method combines the global search capability of PSO with the rapid convergence of KMC, effectively overcoming the tendency of KMC to become trapped in local optima. This approach enhances both clustering accuracy and numerical stability when determining equivalence for PV inverter units. Using the proposed clustering method, both a detailed PV power station model and an equivalent model are developed and compared. Simulation and hardware-in-the-loop (HIL) results based on the equivalent model verify that the equivalent method accurately represents the dynamic characteristics of PV power stations and adapts well to different operating conditions. The proposed equivalent modeling method provides an effective analysis tool for future renewable energy integration research.
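The PSO-KMC hybrid is easy to prototype: PSO searches over candidate sets of cluster centers with the within-cluster sum of squares as fitness, and the best particle seeds an ordinary K-means refinement. A minimal sketch in Python (function names, swarm parameters and the fitness choice are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from sklearn.cluster import KMeans

def pso_kmeans(X, k, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """PSO explores sets of k centers globally; K-means refines the best set."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    pos = rng.uniform(lo, hi, size=(n_particles, k, X.shape[1]))
    vel = np.zeros_like(pos)

    def sse(centers):
        # within-cluster sum of squared distances: the PSO fitness
        return ((X[:, None, :] - centers[None]) ** 2).sum(-1).min(1).sum()

    pbest, pbest_f = pos.copy(), np.array([sse(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([sse(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    # hand the globally best center set to K-means for fast local convergence
    return KMeans(n_clusters=k, init=gbest, n_init=1).fit(X)
```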
In this paper, we propose a new full-Newton step feasible interior-point algorithm for special weighted linear complementarity problems. The algorithm employs the technique of algebraic equivalent transformation to derive the search direction. It is shown that the proximity measure reduces quadratically at each iteration. Moreover, the iteration bound of the algorithm matches the best-known polynomial complexity for problems of this type. Numerical results are presented to show the efficiency of the proposed algorithm.
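For orientation, the special weighted LCP asks for x, s >= 0 with s = Mx + q and componentwise product x*s equal to a given weight vector w. A bare feasible Newton step for that system takes a few lines; this sketch omits the algebraic equivalent transformation and the proximity-measure safeguards the paper analyzes:

```python
import numpy as np

def wlcp_newton_step(M, q, x, s, w):
    """One feasible full-Newton step for: s = Mx + q, x*s = w, x, s > 0.

    Linearizing x*s = w subject to M*dx - ds = 0 gives (S + X M) dx = w - x*s.
    The step is 'full' (no line search); keeping the iterates positive is
    exactly what a proximity-measure analysis has to guarantee.
    """
    A = np.diag(s) + np.diag(x) @ M      # assumed nonsingular for x, s > 0
    dx = np.linalg.solve(A, w - x * s)
    ds = M @ dx                          # preserves s = Mx + q exactly
    return x + dx, s + ds
```

Driving x*s toward mu*w along a decreasing mu, with the proximity measure controlling the neighborhood of the weighted central path, yields a path-following method of the kind the abstract analyzes.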
To identify the primary causes of energy consumption fluctuations and accurately assess the influence of various factors in the converter unit of an iron and steel plant, the focus is placed on the critical components of material and heat balance. Through a thorough analysis of the interactions between the various components and energy consumption, six pivotal factors were identified: raw material composition, steel type, steel temperature, slag temperature, recycling practices, and operational parameters. Within a framework based on an equivalent energy consumption model, an integrated intelligent diagnostic model encapsulating these factors was developed, providing a comprehensive assessment tool for converter energy consumption. Using the K-means clustering algorithm, historical operational data from the converter were analyzed to determine baseline values for essential variables such as energy consumption and recovery rates. On this data-driven foundation, an online system for the intelligent diagnosis of converter energy consumption was built and deployed, improving the precision and efficiency of energy management. When applied to energy consumption data from a steel plant in 2023, the system's diagnostic analysis exposed significant variations in energy usage across different converter units and revealed that the factor most strongly influencing the variation in energy consumption for both furnaces was the steel grade, with contributions of −0.550 and 0.379.
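The baseline-extraction step is conventional: cluster historical heats into operating modes and take a robust statistic per cluster as the baseline. A toy sketch, in which the feature columns, distributions and cluster count are all hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-heat features: [energy per tonne, gas recovery, steam recovery]
rng = np.random.default_rng(1)
records = rng.normal([55.0, 95.0, 60.0], [6.0, 8.0, 7.0], size=(500, 3))
km = KMeans(n_clusters=4, n_init=10).fit(records)
# each cluster's median serves as the baseline for heats in that operating mode
baselines = np.array([np.median(records[km.labels_ == c], axis=0) for c in range(4)])
deviations = records - baselines[km.labels_]   # per-heat deviation to diagnose
```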
Blind separation of sparse sources (BSSS) is discussed. The BSSS method based on conventional K-means clustering is very fast and easy to implement, but its accuracy is generally unsatisfactory. The contribution of the vectors x(t) with different moduli is theoretically proved to be unequal, and a weighted K-means clustering method is proposed on these grounds. The proposed algorithm is as fast as the conventional K-means clustering method, yet achieves considerably more accurate results, as demonstrated by numerical experiments.
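Because scikit-learn's K-means accepts per-sample weights, the weighted variant can be sketched directly: normalize each mixture vector to a direction and weight it by its modulus, so low-energy (noise-dominated) instants barely influence the centers. The mixing matrix, source model and weighting rule below are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0], [0.6, 0.8]])                    # hypothetical mixing matrix
S = rng.laplace(size=(2, 2000)) * (rng.random((2, 2000)) < 0.15)  # sparse sources
X = A @ S                                                 # observed mixtures x(t)
mod = np.linalg.norm(X, axis=0)                           # modulus of each x(t)
dirs = (X / (mod + 1e-12)).T                              # unit directions
dirs *= np.sign(dirs[:, [0]] + 1e-12)                     # fold antipodal directions
km = KMeans(n_clusters=2, n_init=10).fit(dirs, sample_weight=mod)
print(km.cluster_centers_)   # estimates of the mixing-matrix column directions
```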
To address two disadvantages of the k-means algorithm, namely that the number of clusters must be known in advance and that the result is sensitive to the choice of initial clustering centers, an improved k-means clustering algorithm is proposed. First, the silhouette coefficient is introduced, and the optimal clustering number Kopt of a data set with unknown class information is determined by calculating the silhouette coefficients of objects in clusters under different K values. Then the distribution of the data set is obtained through hierarchical clustering, from which the initial clustering centers are determined. Finally, the clustering is completed by traditional k-means clustering. Theoretical analysis shows that the improved k-means clustering algorithm has proper computational complexity. Experimental results on the IRIS test data set show that the algorithm distinguishes different clusters reasonably, recognizes outliers efficiently, and produces clusterings with lower entropy.
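With scikit-learn the three stages map onto a few calls; the candidate K range below is an arbitrary choice, and IRIS is used as in the abstract:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score

X = load_iris().data
# Stage 1: choose K_opt by the mean silhouette coefficient over candidate K
scores = {k: silhouette_score(X, KMeans(n_clusters=k, n_init=10).fit_predict(X))
          for k in range(2, 9)}
k_opt = max(scores, key=scores.get)
# Stage 2: hierarchical clustering yields the initial centers
h = AgglomerativeClustering(n_clusters=k_opt).fit_predict(X)
init = np.array([X[h == c].mean(axis=0) for c in range(k_opt)])
# Stage 3: ordinary k-means started from those centers
labels = KMeans(n_clusters=k_opt, init=init, n_init=1).fit_predict(X)
```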
An improved parallel weighted bit-flipping (PWBF) algorithm is presented. To accelerate the information exchange between check nodes and variable nodes, the bit-flipping step and the check-node updating step of the original algorithm are parallelized. Simulation experiments demonstrate that the improved PWBF algorithm provides about 0.1 to 0.3 dB coding gain over the original PWBF algorithm and achieves a higher convergence rate. The choice of the threshold used to determine whether a bit should be flipped during each iteration is also discussed: an appropriate threshold ensures that most erroneous bits are flipped while correct ones are left untouched. The improvement is particularly effective for decoding quasi-cyclic low-density parity-check (QC-LDPC) codes.
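A weighted bit-flipping decoder with parallel flips conveys the flavor; real PWBF also refines the check-node reliabilities each iteration, which this sketch omits, and the threshold rule here is a plausible guess rather than the paper's:

```python
import numpy as np

def wbf_parallel(H, y, max_iter=50, theta=0.0):
    """Weighted bit-flipping with parallel flips for a binary LDPC code.

    H: (m, n) parity-check matrix over {0, 1}; y: received BPSK soft values.
    Every bit whose weighted flip metric exceeds `theta` is flipped at once;
    if none exceeds it, only the worst bit is flipped.
    """
    z = (y < 0).astype(int)                       # hard decisions
    rel = np.abs(y)
    # each check is as reliable as its least reliable participating bit
    w = np.array([rel[row.astype(bool)].min() for row in H])
    for _ in range(max_iter):
        s = H @ z % 2                             # syndrome
        if not s.any():
            break                                 # valid codeword reached
        E = ((2 * s - 1) * w) @ H                 # unsatisfied checks vote +w
        flip = E > theta if (E > theta).any() else (E == E.max())
        z[flip] ^= 1
    return z
```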
In k-means clustering, we are given a set of n data points in d-dimensional space R^d and an integer k, and the problem is to determine a set of k points in R^d, called centers, that minimizes the mean squared distance from each data point to its nearest center. In this paper, we present a simple and efficient clustering algorithm based on the k-means algorithm, which we call the enhanced k-means algorithm. The algorithm is easy to implement, requiring only a simple data structure that carries information from each iteration to the next. Our experimental results demonstrate that the scheme improves the computational speed of the k-means algorithm by an order of magnitude in the total number of distance calculations and in the overall computation time.
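One plausible reading of the cached-information idea, in the spirit of distance-reuse k-means variants: remember each point's distance to its assigned center, and rescan all k centers only for points whose cached distance grew after the centers moved. The paper's exact bookkeeping may differ:

```python
import numpy as np

def enhanced_kmeans(X, k, iters=100, seed=0):
    """K-means that skips the full k-way distance scan for points whose
    distance to their own (moved) center did not increase."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].astype(float)
    d_all = np.linalg.norm(X[:, None] - C[None], axis=2)
    labels, d_cached = d_all.argmin(1), d_all.min(1)
    for _ in range(iters):
        C_new = np.array([X[labels == c].mean(0) if (labels == c).any() else C[c]
                          for c in range(k)])
        d_own = np.linalg.norm(X - C_new[labels], axis=1)
        stale = d_own > d_cached        # only these points might switch cluster
        d_cached = d_own
        if stale.any():                 # full distance scan for stale points only
            d_sub = np.linalg.norm(X[stale, None] - C_new[None], axis=2)
            labels[stale] = d_sub.argmin(1)
            d_cached[stale] = d_sub.min(1)
        if np.allclose(C_new, C):
            break
        C = C_new
    return labels, C

labels, C = enhanced_kmeans(np.random.default_rng(1).random((500, 2)), k=5)
```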
Carrier tracking is a key focus, and a major difficulty, of signal processing in deep space communication systems. For an autonomous radio receiving system in deep space, the received signal must be tracked automatically even though the signal-to-noise ratio (SNR) is unknown. If a frequency-locked loop (FLL) or phase-locked loop (PLL) with fixed loop bandwidth, or a Kalman filter with fixed noise variance, is adopted, estimation errors may accumulate and the filter may diverge. Therefore, a Kalman filter algorithm with adaptive capability is adopted to suppress filter divergence. After analyzing the inadequacies of the Sage-Husa adaptive filtering algorithm, this paper introduces a weighted adaptive filtering algorithm for autonomous radio. The introduced algorithm resolves the defect of the Sage-Husa algorithm that the noise covariance matrix can become negative definite during filtering. In addition, upper diagonal (UD) factorization and innovation-based adaptive control are used to reduce model estimation errors, suppress filter divergence and improve filtering accuracy. Simulation results indicate that, compared with the Sage-Husa adaptive filtering algorithm, this algorithm adapts better to the loop and offers better convergence performance and tracking accuracy, contributing to effective and accurate carrier tracking in low-SNR environments and showing a promising application prospect.
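The core Sage-Husa recursion, and the exact spot where its covariance estimate can lose positive definiteness, fit in a short sketch; the clipping line below is a crude stand-in for the paper's UD-factorization and innovation-based controls:

```python
import numpy as np

def sage_husa_kf(zs, F, H, Q, R0, x0, P0, b=0.97):
    """Simplified Sage-Husa-style adaptive Kalman filter (illustrative only).

    R is re-estimated from the innovation sequence with fading factor b.
    The subtraction in the R update is what can drive R negative definite,
    the defect the abstract discusses.
    """
    x, P, R = x0.astype(float), P0.astype(float), R0.astype(float)
    xs = []
    for k, z in enumerate(zs):
        d = (1 - b) / (1 - b ** (k + 1))     # Sage-Husa time-varying weight
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        v = z - H @ x                         # innovation
        R = (1 - d) * R + d * (np.outer(v, v) - H @ P @ H.T)
        R = np.maximum(R, 1e-9 * np.eye(len(R)))   # crude positivity guard
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ v                         # update
        P = (np.eye(len(x)) - K @ H) @ P
        xs.append(x.copy())
    return np.array(xs)
```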
Fractional vegetation cover (FVC) is an important parameter for measuring crop growth, and in crop growth monitoring it is important to extract FVC quickly and accurately. As the most widely used FVC extraction method, the photographic method has the advantages of simple operation and high extraction accuracy. However, when soil moisture and acquisition times vary, its extraction results become less accurate. To accommodate various FVC extraction conditions, this study proposes a new method that extracts FVC from a normalized difference vegetation index (NDVI) greyscale image of wheat using a density peak k-means (DPK-means) algorithm. Yangfumai 4 (YF4) planted in pots and Yangmai 16 (Y16) planted in the field were used as the research materials. With a hyperspectral imaging camera mounted on a tripod, ground hyperspectral images of winter wheat under different soil conditions (dry and wet) were collected at 1 m above the potted wheat canopy. Unmanned aerial vehicle (UAV) hyperspectral images of winter wheat at various stages were collected at 50 m above the field wheat canopy by a UAV equipped with a hyperspectral camera. The pixel dichotomy method and the DPK-means algorithm were used to classify vegetation and non-vegetation pixels in NDVI greyscale images of wheat, and the extraction results of the two methods were compared and analysed. The results showed that extraction by pixel dichotomy was influenced by the acquisition conditions and its error distribution was relatively scattered, while the DPK-means algorithm was less affected by the acquisition conditions and its error distribution was concentrated. The absolute errors were 0.042 and 0.044, the root mean square errors (RMSE) were 0.028 and 0.030, and the fitting accuracies R^2 of the FVC were 0.87 and 0.93, under dry and wet soil conditions and under various time conditions, respectively. The DPK-means algorithm therefore achieved more accurate results than the pixel dichotomy method under various soil and time conditions and is an accurate and robust method for FVC extraction.
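The classification step reduces to two-class clustering of NDVI values; plain K-means stands in here for the density-peak-initialized DPK-means, and the random reflectance arrays are only a smoke test:

```python
import numpy as np
from sklearn.cluster import KMeans

def fvc_from_bands(nir, red, k=2):
    """FVC from an NDVI greyscale image via 2-class clustering.

    `nir` and `red` are same-shaped reflectance arrays; the cluster with the
    higher NDVI center is taken to be vegetation.
    """
    ndvi = (nir - red) / (nir + red + 1e-12)
    km = KMeans(n_clusters=k, n_init=10).fit(ndvi.reshape(-1, 1))
    veg = km.cluster_centers_.ravel().argmax()
    return (km.labels_ == veg).mean()       # vegetated pixel fraction

rng = np.random.default_rng(0)
print(fvc_from_bands(rng.random((100, 100)), rng.random((100, 100))))
```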
A weighted algorithm for watermarking relational databases for copyright protection is presented. The probability of watermarking an attribute is assigned according to a weight decided by the owner of the database. A one-way hash function and a secret key known only to the owner of the data are used to select the tuples and bits to mark. By assigning high weights to significant attributes, the scheme ensures that important attributes have a greater chance of being marked than less important ones. Experimental results show that the proposed scheme is robust against various forms of attack and has perfect immunity to subset attack.
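A hash-gated selection in the style of classic relational watermarking shows how a weight can bias which tuples are marked; the gating rule and parameter names are our illustrative assumptions, not the paper's:

```python
import hashlib

def positions_to_mark(primary_keys, secret_key, gamma=10, num_lsb=2, weight=1.0):
    """Choose (tuple, bit) positions via a keyed one-way hash.

    A tuple is selected when hash(secret || pk) mod g == 0, where g shrinks
    as the attribute weight grows, so heavily weighted attributes are marked
    in more tuples.  The weighting rule is an illustrative assumption.
    """
    g = max(1, round(gamma / max(weight, 1e-9)))
    chosen = []
    for pk in primary_keys:
        h = int.from_bytes(hashlib.sha256(f"{secret_key}|{pk}".encode()).digest(), "big")
        if h % g == 0:
            chosen.append((pk, h % num_lsb))   # which low-order bit to mark
    return chosen

print(positions_to_mark(range(20), "owner-secret", weight=2.0))
```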
A class of nonidentical parallel machine scheduling problems is considered in which the goal is to minimize the total weighted completion time. Models and relaxations are collected. Most of these problems are NP-hard in the strong sense or remain open, and therefore approximation algorithms are studied. The review reveals several potential areas worthy of further research.
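As a concrete baseline for this problem class, the single-machine case is solved exactly by Smith's weighted-shortest-processing-time (WSPT) rule; on parallel machines with different speeds the same ordering gives only a simple greedy heuristic, consistent with the NP-hardness the survey notes:

```python
def wspt_schedule(jobs, speeds):
    """Greedy heuristic for total weighted completion time on uniform
    parallel machines: process jobs in Smith-ratio order (p/w ascending),
    each on the machine where it would finish earliest.

    jobs: iterable of (processing_time, weight); speeds: machine speeds.
    Returns the total weighted completion time of the greedy schedule.
    """
    free = [0.0] * len(speeds)                 # time each machine becomes free
    total = 0.0
    for p, w in sorted(jobs, key=lambda jw: jw[0] / jw[1]):
        i = min(range(len(speeds)), key=lambda m: free[m] + p / speeds[m])
        free[i] += p / speeds[i]
        total += w * free[i]
    return total

print(wspt_schedule([(3, 2), (1, 4), (2, 1), (5, 5)], speeds=[1.0, 1.5]))
```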
The standard Grover quantum search algorithm cannot reflect differences in the importance of the search targets when applied to an unsorted quantum database: the probability of obtaining each target is equal. To solve this problem, a Grover search algorithm based on weighted targets is proposed. First, each target is assigned a weight coefficient according to its importance, and using these coefficients the targets are represented as a quantum superposition state. Second, a novel Grover search algorithm based on the quantum superposition of the weighted targets is constructed. Using this algorithm, the probability of obtaining each target approximates its weight coefficient, which shows the flexibility of the algorithm. Finally, the validity of the algorithm is demonstrated by a simple search example.
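The construction is easy to check numerically with a statevector simulation: build the weighted target state as a sum of sqrt(w_i)-amplitude basis states, then alternate reflections about it and the uniform state. The database size, targets and weights below are illustrative:

```python
import numpy as np

N = 64
targets, weights = [5, 17, 42], [0.5, 0.3, 0.2]     # illustrative values
T = np.zeros(N)
T[targets] = np.sqrt(weights)                       # weighted target state |T>
s = np.full(N, 1 / np.sqrt(N))                      # uniform superposition |s>
theta = np.arcsin(float(T @ s))                     # initial overlap angle
psi = s.copy()
for _ in range(int(round(np.pi / (4 * theta) - 0.5))):  # ~optimal iterations
    psi -= 2 * (T @ psi) * T                        # oracle: reflect about |T>
    psi = 2 * (s @ psi) * s - psi                   # diffusion: reflect about |s>
print(psi[targets] ** 2)   # ~[0.5, 0.3, 0.2]: each probability tracks its weight
```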
In this paper, a new full-Newton step primal-dual interior-point algorithm for solving the special weighted linear complementarity problem is designed and analyzed. The algorithm employs a kernel function with a linear growth term to derive the search direction. By introducing new technical results and selecting suitable parameters, we prove that the iteration bound of the algorithm is as good as the best-known polynomial complexity of interior-point methods. Numerical results illustrate the efficiency of the proposed method.
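For context, this is the generic kernel-function machinery of primal-dual interior-point methods; the paper's specific kernel (with its linear growth term) and its weighted-LCP adaptation are not reproduced here:

```latex
% Scaled variables and kernel-based proximity (generic IPM setup):
v_i = \sqrt{\frac{x_i s_i}{\mu}}, \qquad
\Psi(v) = \sum_{i=1}^{n} \psi(v_i), \qquad
d_x + d_s = -\nabla \Psi(v)
% The classic log-barrier kernel \psi(t) = (t^2 - 1)/2 - \ln t grows
% quadratically as t -> \infty; a kernel with a linear growth term instead
% has \psi(t) = O(t), a property exploited in the complexity analysis of
% such methods.
```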
In conventional computed tomography (CT) reconstruction based on a fixed voltage, the projective data often appear overexposed or underexposed, and the reconstruction results are consequently poor. To solve this problem, variable voltage CT reconstruction has been proposed. The effective projective sequences of a structural component are obtained by varying the voltage. The total variation is adjusted and minimized to optimize the reconstruction on the basis of iterative imaging using the algebraic reconstruction technique (ART). In the reconstruction process, the image reconstructed at a low voltage is used as the initial value for the effective projective reconstruction at the adjacent higher voltage, and so on up to the highest voltage, according to a gray weighted algorithm; thereby the complete structural information is reconstructed. Simulation results show that the proposed algorithm can fully recover the information of a complicated structural component, and the pixel values are more stable than those of the conventional method.
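An ART sweep interleaved with a total-variation descent step captures the reconstruction core; the cross-voltage chaining via the gray weighted algorithm is not reproduced here, and the step sizes are illustrative:

```python
import numpy as np

def art_tv(A, b, shape, sweeps=20, lam=0.02, relax=1.0):
    """ART (Kaczmarz) sweeps with an anisotropic-TV subgradient step between
    sweeps; a simplified stand-in for the variable-voltage pipeline.
    A: system matrix, b: projections, shape: 2-D image shape."""
    x = np.zeros(A.shape[1])
    row_norm2 = (A * A).sum(axis=1) + 1e-12
    for _ in range(sweeps):
        for i in range(A.shape[0]):                  # one ART sweep
            x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
        img = x.reshape(shape)                       # TV subgradient step
        g = np.zeros_like(img)
        dx, dy = np.sign(np.diff(img, axis=0)), np.sign(np.diff(img, axis=1))
        g[1:, :] += dx; g[:-1, :] -= dx
        g[:, 1:] += dy; g[:, :-1] -= dy
        x = (img - lam * g).reshape(-1)              # reduce total variation
    return x.reshape(shape)
```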
As data grows in size, search engines face new challenges in extracting the most relevant content for users' searches. A number of retrieval and ranking algorithms have therefore been employed to ensure that results are relevant to the user's requirements. Unfortunately, most existing indexes and ranking algorithms crawl documents and web pages based on a limited set of criteria designed to meet user expectations, making it impossible to deliver exceptionally accurate results. This study therefore investigates and analyses how search engines work and the elements that contribute to higher rankings. The paper addresses the issue of bias by proposing a new ranking algorithm based on PageRank (PR), one of the most widely used page ranking algorithms. We propose weighted PageRank (WPR) algorithms to test the relationship between various measures. The WPR model was used in three distinct trials to compare the rankings of documents and pages based on one or more user-preference criteria. The findings show that ranking final pages using multiple criteria is better than using only one, and that some criteria have a greater impact on ranking results than others.
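The weighted variant changes only how a page divides its score among out-links: proportionally to edge weights rather than uniformly. A minimal power-iteration sketch, with the weighting source (e.g. a user-preference criterion) left abstract:

```python
import numpy as np

def weighted_pagerank(adj_w, d=0.85, tol=1e-10):
    """Power iteration for weighted PageRank on a weighted adjacency matrix:
    adj_w[i, j] is the weight of the link i -> j; dangling pages are treated
    as linking uniformly to every page."""
    W = np.asarray(adj_w, dtype=float)
    n = len(W)
    deg = W.sum(axis=1, keepdims=True)
    # row-stochastic transition matrix: weights normalized per source page
    P = np.divide(W, deg, out=np.full_like(W, 1.0 / n), where=deg > 0)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * (r @ P)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

print(weighted_pagerank([[0, 2, 1], [1, 0, 0], [0, 3, 0]]))
```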
The K-means algorithm is one of the most widely used algorithms in clustering analysis. To deal with the problems caused by the random selection of initial center points in the traditional algorithm, this paper proposes an improved K-means algorithm based on a similarity matrix. The improved algorithm avoids the random selection of initial center points, providing effective initial points for the clustering process and reducing the fluctuation of clustering results caused by initial point selection, so a better clustering quality can be obtained. Experimental results show that the F-measure of the improved K-means algorithm is greatly improved and the clustering results are more stable.
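One plausible concretization of similarity-matrix initialization, not necessarily the paper's exact rule: build a Gaussian similarity matrix, take the densest point as the first center, then repeatedly take the densest point dissimilar to all chosen centers. The result is deterministic, which is what removes the run-to-run fluctuation:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

def similarity_init_kmeans(X, k):
    """Deterministic centers from a Gaussian similarity matrix, then K-means."""
    D = np.linalg.norm(X[:, None] - X[None], axis=2)     # pairwise distances
    S = np.exp(-((D / np.median(D)) ** 2))               # similarity matrix
    density = S.sum(axis=1)
    centers = [int(density.argmax())]                    # densest point first
    for _ in range(k - 1):
        # next: a dense point dissimilar to every already-chosen center
        score = density * (1 - S[:, centers].max(axis=1))
        centers.append(int(score.argmax()))
    return KMeans(n_clusters=k, init=X[centers], n_init=1).fit(X)

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
print(similarity_init_kmeans(X, 4).cluster_centers_)
```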
The maximum weighted matching problem in bipartite graphs is one of the classic combinatorial optimization problems and arises in many different applications. Ordered binary decision diagrams (OBDDs), algebraic decision diagrams (ADDs) and variants thereof provide canonical forms to represent and manipulate Boolean functions and pseudo-Boolean functions efficiently. ADD- and OBDD-based symbolic algorithms give improved results for large-scale combinatorial optimization problems by searching nodes and edges implicitly. We present a novel symbolic ADD formulation and algorithm for maximum weighted matching in bipartite graphs. The symbolic algorithm implements the Hungarian algorithm in the context of ADD and OBDD formulation and manipulation. It begins by setting feasible labelings of nodes and then iterates through a sequence of phases, each divided into two stages: building the equality bipartite graph, and finding a maximum cardinality matching in that graph. The second stage iterates through the following steps: greedily searching for an initial matching, building the layered network, backward-traversing node-disjoint augmenting paths, updating the cardinality matching and building the residual network. The symbolic algorithm does not require explicit enumeration of nodes and edges, and can therefore handle many complex executions in each step. Simulation experiments indicate that the symbolic algorithm is competitive with traditional algorithms.
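As an explicit-matrix reference point for the same problem, SciPy's Hungarian-method solver handles maximum weighted bipartite matching directly (without, of course, the implicit ADD/OBDD representation that is the paper's contribution):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy weight matrix: w[i, j] is the weight of edge (left i, right j)
w = np.array([[4, 1, 3],
              [2, 0, 5],
              [3, 2, 2]])
rows, cols = linear_sum_assignment(w, maximize=True)   # Hungarian method
print(list(zip(rows, cols)), w[rows, cols].sum())      # matching and its weight
```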
A high-precision nominal flight profile, reflecting controllers' intentions, is critical for 4D trajectory estimation in modern automatic air traffic control systems. We propose a novel method to effectively improve the accuracy of the nominal flight profile, including both the nominal altitude profile and the speed profile. First, considering the characteristics of trajectory data, we developed an improved K-means algorithm that measures the similarity between altitude profiles with the space warp edit distance, thereby obtaining several fitted nominal flight altitude profiles; this breaks the constraints of traditional K-means algorithms. Second, to eliminate the influence of meteorological factors, we introduced historical gridded binary data to determine the en-route wind speed and temperature via inverse distance weighted interpolation. Finally, we combined the true airspeed determined by speed triangle relationships with the calibrated airspeed determined by the aircraft data model to extract a more accurate nominal speed profile from each cluster, describing the airspeed profiles above and below the airspeed transition altitude, respectively. Experimental results show that the proposed method obtains a highly accurate nominal flight profile that reflects the actual aircraft flight status.
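The meteorological step is standard inverse distance weighting; a self-contained sketch with made-up grid coordinates and wind values:

```python
import numpy as np

def idw(points, values, query, power=2.0):
    """Inverse distance weighted interpolation, as used to estimate en-route
    wind and temperature at an aircraft position from gridded data points."""
    d = np.linalg.norm(points - query, axis=1)
    if (d < 1e-12).any():                 # query coincides with a sample point
        return values[d.argmin()]
    w = 1.0 / d ** power
    return (w * values).sum() / w.sum()

# e.g., wind speed at an aircraft position from four surrounding grid nodes
grid = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
wind = np.array([12.0, 15.0, 11.0, 14.0])
print(idw(grid, wind, np.array([0.3, 0.6])))
```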
With the increasing variety of application software in meteorological satellite ground systems, providing reasonable hardware resources and improving software efficiency are receiving more and more attention. In this paper, a software classification method based on operating characteristics is proposed. The method uses run-time resource consumption to describe the running characteristics of the software. First, principal component analysis (PCA) is used to reduce the dimensionality of the run-time feature data and to interpret the software characteristic information. Then a modified K-means algorithm is used to classify the meteorological data processing software. Finally, the results of the principal component analysis are used to explain the operating characteristics of each software class, serving as the basis for optimizing the allocation of hardware resources and improving software operating efficiency.
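The pipeline is PCA for dimensionality reduction and interpretation followed by K-means; a sketch with hypothetical run-time features in place of real telemetry:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical per-process run-time features: [cpu_pct, mem_mb, disk_mb_s, ...]
X = np.random.default_rng(0).random((200, 8))
Xs = StandardScaler().fit_transform(X)          # PCA is scale-sensitive
pca = PCA(n_components=0.9).fit(Xs)             # keep 90% of the variance
Z = pca.transform(Xs)
labels = KMeans(n_clusters=4, n_init=10).fit_predict(Z)
# component loadings explain what each retained dimension (and class) means
print(pca.explained_variance_ratio_, pca.components_.round(2))
```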