In a crowd density estimation dataset, annotating crowd locations is an extremely laborious task, and the location annotations are not used in the evaluation metrics. In this paper, we aim to reduce the annotation cost of crowd datasets and propose a crowd density estimation method based on weakly-supervised learning that needs no crowd-position supervision: it directly estimates the crowd count using only the number of pedestrians in each image as the supervision signal. For this purpose, we design a new training method that exploits the correlation between global and local image features through incremental learning. Specifically, we design a parent-child network (PC-Net) whose branches focus on the global and local image respectively, and propose a linear feature calibration structure to train the PC-Net simultaneously: the child network learns feature transfer factors and feature bias weights and uses them to linearly calibrate the features extracted by the parent network, improving the convergence of the network by exploiting the local features hidden in crowd images. In addition, we use the pyramid vision transformer as the backbone of the PC-Net to extract crowd features at different levels, and design a global-local feature loss (L2), which we combine with a crowd counting loss (LC) to enhance the network's sensitivity to crowd features during training, effectively improving the accuracy of crowd density estimation. Experimental results show that PC-Net significantly narrows the gap between fully-supervised and weakly-supervised crowd density estimation and outperforms the comparison methods on five datasets: ShanghaiTech Part A, ShanghaiTech Part B, UCF_CC_50, UCF_QNRF, and JHU-CROWD++.
Monitoring sensors in complex engineering environments often record abnormal data, leading to significant positioning errors. To reduce the influence of abnormal arrival times, we introduce an innovative, outlier-robust localization method that integrates kernel density estimation (KDE) with damping linear correction to enhance the precision of microseismic/acoustic emission (MS/AE) source positioning. Our approach systematically addresses abnormal arrival times through a three-step process: initial location by 4-arrival combinations, elimination of outliers based on three-dimensional KDE, and refinement using a linear correction with an adaptive damping factor. We validate our method through lead-breaking experiments, demonstrating over a 23% improvement in positioning accuracy with a maximum error of 9.12 mm (relative error of 15.80%), outperforming 4 existing methods. Simulations under various system errors, outlier scales, and ratios substantiate our method's superior performance. Field blasting experiments also confirm the practical applicability, with an average positioning error of 11.71 m (relative error of 7.59%), compared to 23.56, 66.09, 16.95, and 28.52 m for the other methods. This research is significant as it enhances the robustness of MS/AE source localization when confronted with data anomalies, and it provides a practical solution for real-world engineering and safety monitoring applications.
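The KDE-based outlier screening in the second step of this pipeline can be sketched as follows. The candidate locations, cluster geometry, and the density cutoff below are illustrative assumptions for the sketch, not values from the paper:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical candidate source locations (mm) produced by 4-arrival
# combinations: most cluster near the true source, a few are outliers
# caused by abnormal arrival times.
rng = np.random.default_rng(0)
true_source = np.array([100.0, 50.0, 30.0])
inliers = true_source + rng.normal(0.0, 2.0, size=(40, 3))
outliers = true_source + rng.uniform(50.0, 80.0, size=(5, 3))
candidates = np.vstack([inliers, outliers])

# Fit a 3-D KDE over the candidates and keep only high-density points.
kde = gaussian_kde(candidates.T)
density = kde(candidates.T)
keep = density >= 0.3 * density.max()    # illustrative cutoff
refined = candidates[keep].mean(axis=0)  # location estimate after screening
```

Candidates produced by combinations containing an abnormal arrival land far from the main cluster, so their density under the fitted KDE is low and they are discarded before the linear-correction stage.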
This study explored the application value of iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL-IQ) technology in the early diagnosis of ageing osteoporosis (OP). 172 participants were enrolled and underwent magnetic resonance imaging (MRI) examinations on a 3.0 T scanner. 100 cases were included in the normal group (50 males and 50 females; mean age: 45 years; age range: 20-84 years), 33 cases in the osteopenia group (17 males and 16 females; mean age: 55 years; age range: 43-83 years), and 39 cases in the OP group (19 males and 20 females; mean age: 58 years; age range: 48-82 years). Conventional T1WI and T2WI were first obtained, followed by 3D IDEAL-IQ acquisition. Fat fraction (FF) and apparent transverse relaxation rate (R2*) results were automatically calculated from the IDEAL-IQ images on the console. Based on the T1W and T2W images, 300 ROIs per participant were manually delineated in the L1-L5 vertebral bodies of five middle slices. In each age group of the normal subjects, each parameter was significantly correlated with gender. In male participants from the normal, osteopenia, and OP groups, statistical analysis revealed F values of 11319.292 and 180.130 for comparisons involving FF and R2* values, respectively (all p<0.0001). The sensitivity and specificity of FF values were 0.906 and 0.950, 0.994 and 0.997, and 0.865 and 0.820, respectively; for R2*, they were 0.665 and 0.616, 0.563 and 0.519, and 0.571 and 0.368, respectively. In female participants from the normal, osteopenia, and OP groups, statistical analysis revealed F values of 12461.658 and 548.274 for comparisons involving FF and R2* values, respectively (all p<0.0001). The sensitivity and specificity of FF values were 0.985 and 0.991, 0.996 and 0.996, and 0.581 and 0.678, respectively; for R2*, they were 0.698 and 0.730, 0.603 and 0.665, and 0.622 and 0.525, respectively. Significant differences were found in the quantitative values among the three groups. The FF value performed well, while the R2* value performed poorly, in discriminating the osteopenia and OP groups. Overall, the IDEAL-IQ technique offers specific reference indices that enable noninvasive and quantitative assessment of lumbar vertebrae bone metabolism, thereby providing diagnostic information for OP.
This paper addresses the problem of predicting population density by leveraging cellular station data. As wireless communication devices are in common use, cellular station data has become integral to estimating population figures and studying their movement, with significant implications for urban planning. However, existing research grapples with issues in preprocessing base station data and in modeling population prediction. To address this, we propose methodologies for preprocessing cellular station data to eliminate irregular and redundant records. The preprocessing reveals a distinct cyclical characteristic and high-frequency variation in population shift. Further, we devise a multi-view enhancement model grounded on the Transformer (MVformer), targeting improved accuracy for extended time-series population predictions. Comparative experiments on the above-mentioned population dataset against four alternative Transformer-based models indicate that the proposed MVformer enhances prediction accuracy by approximately 30% for both univariate and multivariate time-series prediction tasks, exhibiting commendable performance on population prediction.
A prediction framework based on the evolution of pattern motion probability density is proposed for the output prediction and estimation problem of non-Newtonian mechanical systems, assuming that the system satisfies the generalized Lipschitz condition. As complex nonlinear systems primarily governed by statistical laws rather than Newtonian mechanics, non-Newtonian mechanical systems have outputs that are difficult to describe through deterministic quantities such as state variables, which makes predicting and estimating the output difficult. In this article, the temporal variation of the system is described by constructing pattern category variables, which are non-deterministic variables. Since pattern category variables have statistical attributes but no operational attributes, operational attributes are assigned to them through the posterior probability density, and a method for analyzing their motion laws using probability density evolution is proposed. Furthermore, a data-driven pattern motion probability density evolution prediction method is designed by incorporating the pseudo partial derivative (PPD), achieving prediction of the probability density that captures the uncertainty of the system's output. On this basis, the final prediction of the system's output value is obtained by minimum variance unbiased estimation. Finally, a corresponding PPD estimation algorithm is designed using an extended state observer (ESO) to estimate the unknown parameters in the proposed prediction method. The effectiveness of the parameter estimation algorithm and the prediction method is demonstrated through theoretical analysis, and the accuracy of the algorithm is verified by two numerical simulation examples.
In real-world applications, datasets frequently contain outliers, which can hinder the generalization ability of machine learning models. Bayesian classifiers, a popular supervised learning method, rely on accurate probability density estimation for classifying continuous datasets. However, achieving precise density estimation on datasets containing outliers poses a significant challenge. This paper introduces a Bayesian classifier that uses optimized robust kernel density estimation to address this issue. Our proposed method enhances the accuracy of the estimated probability density by mitigating the impact of outliers on the training sample's estimated distribution. Unlike the conventional kernel density estimator, our robust estimator can be seen as a weighted sum of kernel mappings, one per sample: each kernel mapping performs an inner product in a reproducing-kernel Hilbert space, so the kernel density estimate can be viewed as a weighted mean of the samples' mappings in that space. M-estimation techniques are used to obtain a robust mean and to solve for the weights. Meanwhile, complete cross-validation is used as the objective function in the search for the optimal bandwidth, which strongly affects the estimator, and the Harris Hawks Optimization algorithm optimizes this objective to improve estimation accuracy. The experimental results show that it outperforms other optimization algorithms in convergence speed and objective value during the bandwidth search, and that the optimal robust kernel density estimator fits the data better than the traditional kernel density estimator when the training data contain outliers. The naïve Bayes classifier equipped with the optimal robust kernel density estimator generalizes better in classification with outliers.
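One way to picture a weighted, outlier-resistant kernel density estimator is the 1-D sketch below. The reweighting rule here is a deliberately simple stand-in for the paper's Hilbert-space M-estimation, and the data are synthetic:

```python
import numpy as np

def gauss_kernel(x, xi, h):
    # 1-D Gaussian kernel with bandwidth h
    return np.exp(-0.5 * ((x - xi) / h) ** 2) / (h * np.sqrt(2 * np.pi))

def robust_kde(samples, h, n_iter=5):
    """Weighted KDE: iteratively downweight samples that lie in
    low-density regions (a simplified sketch of robust reweighting,
    not the paper's exact M-estimator)."""
    n = len(samples)
    w = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        # density of each sample under the current weighted estimate
        dens = np.array([np.sum(w * gauss_kernel(x, samples, h)) for x in samples])
        w = dens / dens.sum()      # low-density (outlying) points shrink
    def f(x):
        return np.sum(w * gauss_kernel(x, samples, h))
    return f, w

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 200), np.array([25.0, 30.0])])
f, w = robust_kde(data, h=0.5)
```

After a few iterations the two planted outliers carry almost no weight, so the estimated density near the true mode is barely distorted by them, unlike an equally-weighted KDE.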
The sixth-generation fighter has superior stealth performance, but for traditional kernel density estimation (KDE), precision requirements are difficult to satisfy when dealing with the fluctuation characteristics of a complex radar cross section (RCS). To solve this problem, this paper studies the KDE algorithm for the F/AXX stealth fighter. Considering the lack of accuracy of existing fixed-bandwidth algorithms, a novel adaptive kernel density estimation (AKDE) algorithm equipped with least-squares cross-validation and the integrated squared error criterion is proposed to optimize the bandwidth, from which an adaptive RCS density estimate is obtained. Simulations verify that the estimation accuracy of the adaptive-bandwidth RCS density estimation algorithm is more than 50% higher than that of the traditional algorithm. Based on the proposed AKDE algorithm, the statistical characteristics of the considered fighter are acquired more accurately, and the significant advantages of the AKDE algorithm in estimating the cumulative distribution function of RCS below 1 m² are analyzed.
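Least-squares cross-validation chooses the bandwidth by minimizing an unbiased estimate of the integrated squared error. A minimal 1-D version, with made-up Gaussian data standing in for RCS samples, might look like this:

```python
import numpy as np

def lscv_score(h, x):
    """Least-squares cross-validation criterion for a 1-D Gaussian-kernel
    KDE: an unbiased estimate of the ISE up to a constant."""
    n = len(x)
    d = x[:, None] - x[None, :]
    # integral of fhat^2 has a closed form: a Gaussian with variance 2h^2
    term1 = np.exp(-d**2 / (4 * h**2)).sum() / (n**2 * 2 * h * np.sqrt(np.pi))
    # leave-one-out cross term (diagonal excluded)
    k = np.exp(-d**2 / (2 * h**2)) / (h * np.sqrt(2 * np.pi))
    loo = (k.sum() - np.trace(k)) / (n * (n - 1))
    return term1 - 2 * loo

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 1000)          # stand-in for RCS samples
grid = np.linspace(0.02, 1.5, 60)       # candidate bandwidths
scores = np.array([lscv_score(h, x) for h in grid])
h_opt = grid[np.argmin(scores)]
```

Minimizing this score over a grid of candidate bandwidths penalizes both undersmoothing and oversmoothing; the adaptive variant in the paper goes further by letting the bandwidth vary with the data.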
One-class support vector machine (OCSVM) and support vector data description (SVDD) are the two main domain-based one-class (kernel) classifiers. To reveal their relationship with density estimation in the case of the Gaussian kernel, OCSVM and SVDD are first unified into the framework of kernel density estimation, and the essential relationship between them is made explicit. It is then shown that the density estimate induced by OCSVM or SVDD agrees with the true density and, moreover, can reduce the integrated squared error (ISE). Finally, experiments on several simulated datasets verify the revealed relationships.
An algorithm to track multiple sharply maneuvering targets without prior knowledge about new target birth is proposed. Such targets, for example drones and agile missiles, are capable of sharp maneuvers within a short period of time. The probability hypothesis density (PHD) filter, which propagates only the first-order statistical moment of the full target posterior, has been shown to be a computationally efficient solution to multitarget tracking problems. However, the standard PHD filter operates on a single dynamic model and requires prior information about the target birth distribution, which limits its practical applications. In this paper, we introduce a nonzero-mean, white-noise turn-rate dynamic model and generalize jump Markov systems to the multitarget case to accommodate sharply maneuvering dynamics. Moreover, to adaptively estimate newborn targets' information, a measurement-driven method based on the recursive random sampling consensus (RANSAC) algorithm is proposed. Simulation results demonstrate that the proposed method achieves significant improvement in tracking multiple sharply maneuvering targets with adaptive birth estimation.
Applying frequency distribution statistics to data provides objective means to assess the nature of the data distribution and the viability of the numerical models used to visualize and interpret the data. Two commonly used tools are kernel density estimation and the reduced chi-squared statistic used in combination with a weighted mean. Given the wide applicability of these tools, we present a Java-based computer application called KDX to facilitate the visualization of data and the use of these numerical tools.
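The two statistics paired with the density plots, the error-weighted mean and the reduced chi-squared, can be computed in a few lines; the measurements and 1-sigma errors below are illustrative:

```python
import numpy as np

def weighted_mean_mswd(values, sigmas):
    """Error-weighted mean, its standard error, and the reduced
    chi-squared (MSWD) of measurements with 1-sigma uncertainties."""
    w = 1.0 / sigmas**2
    mean = np.sum(w * values) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))      # standard error of the weighted mean
    # reduced chi-squared: scatter relative to the quoted uncertainties;
    # values near 1 mean the errors fully explain the dispersion
    mswd = np.sum(w * (values - mean) ** 2) / (len(values) - 1)
    return mean, se, mswd

ages = np.array([100.2, 99.8, 100.5, 99.9, 100.1])  # illustrative data
errs = np.array([0.3, 0.3, 0.4, 0.2, 0.3])
mean, se, mswd = weighted_mean_mswd(ages, errs)
```

A reduced chi-squared well above 1 signals excess scatter (or underestimated errors), which is exactly the kind of diagnostic such a tool is meant to surface alongside the kernel density plot.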
It is common practice to estimate a probability density function, or a matter spatial density function, from statistical samples. Kernel density estimation is a frequently used method, but selecting an optimal bandwidth based entirely on the data samples is a long-standing issue that has not been well settled. Analytic formulae for the optimal kernel bandwidth exist, but they cannot be applied directly to data samples, since they depend on the unknown underlying density function from which the samples are drawn. In this work, we devise an approach to pick out the fully data-based optimal bandwidth. First, we derive correction formulae for the analytic formulae of optimal bandwidth to compute the roughness of the sample's density function. We then substitute the corrections into the analytic formulae and obtain the sample's optimal bandwidth by iteration. Compared with the analytic formulae, our approach gives very good results, with relative differences of only 2%-3% for sample sizes larger than 10^4. The approach also generalizes easily to variable kernel estimation.
Data-driven tools such as principal component analysis (PCA) and independent component analysis (ICA) have been applied to different benchmarks as process monitoring methods. The difference between the two methods is that the components of PCA may still be statistically dependent, while ICA has no orthogonality constraint and its latent variables are independent. Process monitoring with PCA often supposes that the process data or principal components follow a Gaussian distribution, but this assumption cannot be satisfied by several practical processes. To extend the use of PCA, a nonparametric method can be added to overcome the difficulty, and kernel density estimation (KDE) is a good choice. Although ICA is based on non-Gaussian distribution information, KDE can likewise help in the close monitoring of the data. PCA, ICA, PCA with KDE (KPCA), and ICA with KDE (KICA) are demonstrated and compared by applying them to a practical industrial Spheripol-craft polypropylene catalyzer reactor rather than a laboratory emulator.
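The role KDE plays in such monitoring schemes is to set control limits on a monitoring statistic without assuming Gaussianity. A generic sketch, with synthetic in-control scores and an illustrative 99% limit (none of the names or numbers come from the paper):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic in-control monitoring scores (e.g. a T^2-like statistic).
rng = np.random.default_rng(4)
t2_normal = rng.chisquare(df=3, size=1000)
kde = gaussian_kde(t2_normal)

# Numerically invert the KDE's CDF to get the 99th percentile,
# i.e. a distribution-free 99% control limit.
xs = np.linspace(0.0, t2_normal.max() * 1.5, 2000)
cdf = np.cumsum(kde(xs))
cdf /= cdf[-1]
limit = xs[np.searchsorted(cdf, 0.99)]

new_score = 25.0            # a hypothetical faulty observation
alarm = new_score > limit   # raise an alarm if the limit is exceeded
```

Because the limit comes from the empirical density rather than a chi-squared or Gaussian formula, the same recipe works for the non-Gaussian latent variables produced by ICA.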
The probability hypothesis density (PHD) filter has been recognized as a promising technique for tracking an unknown number of targets. The performance of the PHD filter, however, is sensitive to the available knowledge of model parameters such as the measurement noise variance and those associated with changes in the maneuvering target trajectories; if these parameters are unknown in advance, the tracking performance may degrade greatly. To address this, this paper proposes to incorporate the adaptive parameter estimation (APE) method into the PHD filter so that the model parameters, which may be static and/or time-varying, can be estimated jointly with the target states. The resulting APE-PHD algorithm is implemented using the particle filter (PF), which leads to the PF-APE-PHD filter. Simulations show that the newly proposed algorithm can correctly identify unknown measurement noise variances, and that it tracks multiple maneuvering targets with abruptly changing parameters more robustly than multi-model approaches.
An improved method using kernel density estimation (KDE) and confidence levels is presented for model validation with small samples. Decision making is challenging because of input uncertainty, and only small samples can be used due to the high cost of experimental measurements; model validation, however, gives decision makers more confidence while improving prediction accuracy. The confidence level method is introduced, and the optimum sample variance is determined using a new method in kernel density estimation to increase the credibility of model validation. As a numerical example, the static frame model validation challenge problem presented by Sandia National Laboratories is chosen. The optimum bandwidth is selected in kernel density estimation to build the probability model from the calibration data. The model is then assessed using the validation and accreditation experimental data, respectively, based on the probability model. Finally, the target structure prediction is performed using the validated model, and the results are consistent with those obtained by other researchers. The results demonstrate that the method using the improved confidence level and kernel density estimation is an effective approach to the model validation problem with small samples.
In this paper, we consider the limit distribution of the error density function estimator in first-order autoregressive models with negatively associated and positively associated random errors. Under mild regularity assumptions, asymptotic normality results for the residual density estimator are obtained when the autoregressive model is a stationary process and when it is an explosive process. To illustrate these results, simulations such as confidence intervals and mean integrated square errors are provided. They show that the residual density estimator can replace the density "estimator" based on the unobservable errors.
Drag-free satellites are widely used in fundamental science because they enable high-precision measurements in a purely gravitational field. This paper investigates the estimation of the local orbital reference frame (LORF) for drag-free satellites. An approach combining minimum estimation error in the time domain with a power spectral density (PSD) constraint in the frequency domain is proposed. First, the relationship between the eigenvalues of the estimator and its transfer function is built to analyze the suppression and amplification of input signals and to obtain the eigenvalue range. Second, an optimization model for state estimator design with minimum estimation error in the time domain and a PSD constraint in the frequency domain is established and solved by the sequential quadratic programming (SQP) algorithm. Finally, the orbital reference frame estimation of a low-Earth-orbit satellite is taken as an example, and the minimum-variance estimator with PSD constraint is designed and analyzed using the proposed method.
Crowd density is an important factor in crowd stability. Previous crowd density estimation methods are highly dependent on the specific video scene. This paper presents a video-scene-invariant crowd density estimation method that uses Geographic Information Systems (GIS) to monitor crowd size over large areas. The proposed method maps crowd images to GIS; crowd density can then be estimated for each camera in GIS using an estimation model obtained from a single camera. Test results show that a model obtained from one camera in GIS can be adaptively applied to other cameras in outdoor video scenes. A real-time monitoring system for crowd size in large areas based on the scene-invariant model was successfully used at the 2012 Jiangsu Qinhuai Lantern Festival, where it provided early-warning information and a scientific basis for safety and security decision making.
A new algorithm for linear instantaneous independent component analysis is proposed based on maximizing a log-likelihood contrast function, which can be turned into a gradient equation. An iterative method is introduced to solve this equation efficiently. The unknown probability density functions, as well as their first and second derivatives, in the gradient equation are estimated by the kernel density method. Computer simulations on artificially generated signals and gray-scale natural scene images confirm the efficiency and accuracy of the proposed algorithm.
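The density-derivative estimates that enter such a gradient equation can be read directly off a kernel density estimate. A 1-D sketch of the score function -f'(x)/f(x), using synthetic Gaussian data for which the true score is simply x:

```python
import numpy as np

def kde_score(x_eval, samples, h):
    """Estimate the score function -f'(x)/f(x) from samples via a
    Gaussian-kernel KDE (normalizing constants cancel in the ratio)."""
    u = (x_eval[:, None] - samples[None, :]) / h
    phi = np.exp(-0.5 * u**2)
    f = phi.sum(axis=1)                  # proportional to the density
    fp = (-u * phi).sum(axis=1) / h      # proportional to its derivative
    return -fp / f

rng = np.random.default_rng(5)
s = rng.normal(0.0, 1.0, 2000)
pts = np.array([-1.0, 0.0, 1.0])
score = kde_score(pts, s, h=0.3)
# for N(0, 1) the true score at x is x itself (up to smoothing bias)
```

The second derivative needed by the iteration is obtained the same way, by summing the kernel's second derivative; the smoothing bias shrinks as the bandwidth is reduced with growing sample size.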
This study examines a new methodology for predicting the final seismic mortality of earthquakes in China. Most studies establish an association between mortality estimates and seismic intensity without considering the population density; in China, however, such data are not always available, especially in very urgent disaster relief situations, and the population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to analyze the death tolls of earthquakes. The present paper employs the average population density to predict final death tolls using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case, and a typical earthquake that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and it opens the door to final death forecasts with a combined qualitative and quantitative approach. Limitations and future research are also discussed in the conclusion.
We consider n observations from the GARCH-type model Z = UY, where U and Y are independent random variables. We aim to estimate the density function of Y, where Y has a weighted distribution. We determine a sharp upper bound on the associated mean integrated square error. We also make use of the measure of expected true evidence to determine when the model leads to a crisis and causes data to be lost.
Funding: the Humanities and Social Science Fund of the Ministry of Education of China (21YJAZH077).
Funding: the National Key Research and Development Program for Young Scientists (No. 2021YFC2900400), the Postdoctoral Fellowship Program of the China Postdoctoral Science Foundation (CPSF) (No. GZB20230914), the National Natural Science Foundation of China (No. 52304123), the China Postdoctoral Science Foundation (No. 2023M730412), and the Chongqing Outstanding Youth Science Foundation Program (No. CSTB2023NSCQ-JQX0027).
Funding: the Planned Project Grant (Grant No. 3502Z20199064) from the Science and Technology Bureau of Xiamen, China, and the training project (Grant No. 2020GGB067) for young and middle-aged talents of the Fujian Provincial Health Commission, China.
Funding: Supported by the Guangdong Basic and Applied Basic Research Foundation under Grant No. 2024A1515012485, and in part by the Shenzhen Fundamental Research Program under Grant JCYJ20220810112354002.
Abstract: This paper addresses the problem of predicting population density by leveraging cellular station data. As wireless communication devices are commonly used, cellular station data has become integral to estimating population figures and studying their movement, thereby contributing significantly to urban planning. However, existing research grapples with issues in preprocessing base station data and in modeling population prediction. To address this, we propose methodologies for preprocessing cellular station data to eliminate irregular or redundant data. The preprocessing reveals a distinct cyclical characteristic and high-frequency variation in population shift. Further, we devise a multi-view enhancement model grounded on the Transformer (MVformer), targeting improved accuracy of extended time-series population predictions. Comparative experiments, conducted on the above-mentioned population dataset using four alternative Transformer-based models, indicate that our proposed MVformer model enhances prediction accuracy by approximately 30% for both univariate and multivariate time-series prediction tasks. The model's performance on population prediction tasks is commendable.
Abstract: A prediction framework based on the evolution of pattern motion probability density is proposed for the output prediction and estimation problem of non-Newtonian mechanical systems, assuming that the system satisfies the generalized Lipschitz condition. As a complex nonlinear system primarily governed by statistical laws rather than Newtonian mechanics, the output of a non-Newtonian mechanical system is difficult to describe through deterministic variables such as state variables, which makes predicting and estimating the system's output difficult. In this article, the temporal variation of the system is described by constructing pattern category variables, which are non-deterministic variables. Since pattern category variables have statistical attributes but not operational attributes, operational attributes are assigned to them through the posterior probability density, and a method for analyzing their motion laws using probability density evolution is proposed. Furthermore, a data-driven form of the pattern motion probability density evolution prediction method is designed by incorporating the pseudo partial derivative (PPD), achieving prediction of the probability density that captures the uncertainty of the system's output. On this basis, the final prediction of the system's output value is obtained by minimum variance unbiased estimation. Finally, a corresponding PPD estimation algorithm is designed using an extended state observer (ESO) to estimate the unknown parameters in the proposed prediction method. The effectiveness of the parameter estimation algorithm and prediction method is demonstrated through theoretical analysis, and the accuracy of the algorithm is verified by two numerical simulation examples.
Abstract: In real-world applications, datasets frequently contain outliers, which can hinder the generalization ability of machine learning models. Bayesian classifiers, a popular supervised learning method, rely on accurate probability density estimation for classifying continuous datasets. However, achieving precise density estimation with datasets containing outliers poses a significant challenge. This paper introduces a Bayesian classifier that utilizes optimized robust kernel density estimation to address this issue. Our proposed method enhances the accuracy of probability density estimation by mitigating the impact of outliers on the training sample's estimated distribution. Unlike the conventional kernel density estimator, our robust estimator can be seen as a weighted kernel mapping summary for each sample. This kernel mapping performs the inner product in the Hilbert space, allowing the kernel density estimate to be viewed as the average of the samples' mappings in the Hilbert space under a reproducing kernel. M-estimation techniques are used to obtain accurate mean values and solve for the weights. Meanwhile, complete cross-validation is used as the objective function to search for the optimal bandwidth, which affects the estimator. Harris Hawks Optimization optimizes the objective function to improve the estimation accuracy. The experimental results show that it outperforms other optimization algorithms in convergence speed and objective function value during the bandwidth search. The optimal robust kernel density estimator achieves better fitness than the traditional kernel density estimator when the training data contains outliers, and naïve Bayes with optimal robust kernel density estimation improves generalization in classification with outliers.
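The weighted-kernel idea in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the Huber threshold `c`, the fixed bandwidth, and the synthetic contaminated sample are all assumptions, and the paper's bandwidth optimization via Harris Hawks Optimization is omitted.

```python
import numpy as np

def robust_kde(train, grid, bandwidth, n_iter=20, c=1.0):
    """Weighted Gaussian KDE whose sample weights are obtained by a
    Huber-type M-estimation of the mean in the kernel-induced feature
    space (a sketch, not the paper's exact formulation)."""
    n = len(train)
    weights = np.full(n, 1.0 / n)
    # Gram matrix of the (unnormalized) Gaussian kernel between samples
    K = np.exp(-0.5 * ((train[:, None] - train[None, :]) / bandwidth) ** 2)
    for _ in range(n_iter):
        # squared feature-space distance to the weighted mean:
        # ||phi(x_i) - mu||^2 = K_ii - 2 (K w)_i + w^T K w
        Kw = K @ weights
        d2 = np.diag(K) - 2.0 * Kw + weights @ Kw
        d = np.sqrt(np.maximum(d2, 1e-12))
        # Huber weights: samples far from the robust mean are downweighted
        # (c = 1.0 is hand-picked for this sketch)
        w = np.where(d <= c, 1.0, c / d)
        weights = w / w.sum()
    # evaluate the weighted KDE on the grid
    Kg = np.exp(-0.5 * ((grid[:, None] - train[None, :]) / bandwidth) ** 2)
    Kg /= bandwidth * np.sqrt(2.0 * np.pi)
    return Kg @ weights, weights

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 200)
outliers = rng.normal(8.0, 0.2, 10)        # contamination
data = np.concatenate([clean, outliers])
grid = np.linspace(-4.0, 4.0, 200)
dens, w = robust_kde(data, grid, bandwidth=0.4)
# the outlying samples should receive smaller weights than clean ones
print(w[:200].mean() > w[200:].mean())
```

The estimator stays a proper mixture of kernels; only the mixture weights change, so the density still integrates to one while the outliers' bumps shrink.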
Funding: Supported by the National Natural Science Foundation of China (Nos. 61074090 and 60804025).
Abstract: The sixth-generation fighter has superior stealth performance, but with traditional kernel density estimation (KDE), precision requirements are difficult to satisfy when dealing with the fluctuating characteristics of a complex radar cross section (RCS). To solve this problem, this paper studies a KDE algorithm for the F/AXX stealth fighter. Considering the lack of accuracy of existing fixed-bandwidth algorithms, a novel adaptive kernel density estimation (AKDE) algorithm equipped with least-squares cross-validation and an integrated squared error criterion is proposed to optimize the bandwidth. An adaptive RCS density estimate is then obtained from the optimized bandwidth. Finally, simulations verify that the estimation accuracy of the adaptive-bandwidth RCS density estimation algorithm is more than 50% higher than that of the traditional algorithm. Based on the proposed AKDE algorithm, the statistical characteristics of the considered fighter are acquired more accurately, and the significant advantages of the AKDE algorithm in estimating the cumulative distribution function of RCS below 1 m² are then analyzed.
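The least-squares cross-validation step named in this abstract has a closed form for the Gaussian kernel and can be sketched as follows. This is illustrative only: the data are a synthetic stand-in for measured RCS samples, and a simple grid search replaces whatever optimizer the paper uses.

```python
import numpy as np

def lscv_score(h, x):
    """Least-squares cross-validation criterion for a Gaussian KDE:
    LSCV(h) = integral of f_hat^2 - (2/n) * sum_i f_hat_{-i}(x_i),
    with both terms in closed form for the Gaussian kernel."""
    n = len(x)
    d = x[:, None] - x[None, :]
    # integral of f_hat^2: a Gaussian convolved with itself has std h*sqrt(2)
    term1 = np.exp(-d**2 / (4.0 * h**2)).sum() / (n**2 * h * 2.0 * np.sqrt(np.pi))
    # leave-one-out sum over i != j
    off = np.exp(-d**2 / (2.0 * h**2))
    np.fill_diagonal(off, 0.0)
    term2 = 2.0 * off.sum() / (n * (n - 1) * h * np.sqrt(2.0 * np.pi))
    return term1 - term2

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 400)              # stand-in for RCS measurements
hs = np.linspace(0.05, 1.5, 60)
scores = [lscv_score(h, x) for h in hs]
h_opt = hs[int(np.argmin(scores))]
print(round(float(h_opt), 2))
```

Minimizing LSCV(h) is equivalent (up to a constant) to minimizing the integrated squared error between the KDE and the unknown density, which is exactly the ISE criterion the abstract invokes.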
Funding: Supported by the National Natural Science Foundation of China (60603029), the Natural Science Foundation of Jiangsu Province (BK2007074), and the Natural Science Foundation for Colleges and Universities in Jiangsu Province (06KJB520132).
Abstract: One-class support vector machine (OCSVM) and support vector data description (SVDD) are the two main domain-based one-class (kernel) classifiers. To reveal their relationship with density estimation in the case of the Gaussian kernel, OCSVM and SVDD are first unified into the framework of kernel density estimation, and the essential relationship between them is explicitly revealed. The results then prove that the density estimate induced by OCSVM or SVDD agrees with the true density and can also reduce the integrated squared error (ISE). Finally, experiments on several simulated datasets verify the revealed relationships.
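The unification claimed in this abstract can be checked empirically in the regime where it is cleanest: as the OCSVM parameter ν approaches 1, almost every sample becomes a bounded support vector with equal dual weight, so the decision function reduces to a plain Gaussian KDE up to an offset. A sketch on toy 1-D data, where `gamma` and ν are assumptions and scikit-learn stands in for whatever solver the paper used:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, 300).reshape(-1, 1)

# nu near 1 forces (nearly) uniform dual weights, i.e. the KDE limit
ocsvm = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.99).fit(x)
kde = gaussian_kde(x.ravel())

grid = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
svm_score = ocsvm.decision_function(grid)
kde_score = kde(grid.ravel())

# Spearman-style agreement: rank both scores and correlate the ranks
r_svm = np.argsort(np.argsort(svm_score))
r_kde = np.argsort(np.argsort(kde_score))
rho = np.corrcoef(r_svm, r_kde)[0, 1]
print(rho > 0.8)
```

The two scores use different bandwidths (SciPy's Scott rule versus the RBF `gamma`), so their values differ, but their rankings of where the data are dense should agree closely in this limit.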
Funding: Supported by the National Natural Science Foundation of China (61773142).
Abstract: An algorithm to track multiple sharply maneuvering targets without prior knowledge about new target birth is proposed. These targets, such as drones and agile missiles, are capable of achieving sharp maneuvers within a short period of time. The probability hypothesis density (PHD) filter, which propagates only the first-order statistical moment of the full target posterior, has been shown to be a computationally efficient solution to multitarget tracking problems. However, the standard PHD filter operates on a single dynamic model and requires prior information about the target birth distribution, which leads to many limitations in practical applications. In this paper, we introduce a nonzero-mean, white-noise turn-rate dynamic model and generalize jump Markov systems to the multitarget case to accommodate sharply maneuvering dynamics. Moreover, to adaptively estimate newborn targets' information, a measurement-driven method based on the recursive random sample consensus (RANSAC) algorithm is proposed. Simulation results demonstrate that the proposed method achieves significant improvement in tracking multiple sharply maneuvering targets with adaptive birth estimation.
Abstract: The application of frequency distribution statistics to data provides an objective means to assess the nature of the data distribution and the viability of numerical models used to visualize and interpret data. Two commonly used tools are kernel density estimation and the reduced chi-squared statistic used in combination with a weighted mean. Given the wide applicability of these tools, we present a Java-based computer application called KDX to facilitate the visualization of data and the use of these numerical tools.
Funding: Supported by the National Science Foundation of China under Grant No. 11273013 and by the Natural Science Foundation of Jilin Province under Grant No. 20180101228JC.
Abstract: It is common practice to evaluate a probability density function or a matter spatial density function from statistical samples. Kernel density estimation is a frequently used method, but selecting an optimal bandwidth for kernel estimation based entirely on the data is a long-standing issue that has not been well settled so far. Analytic formulae for the optimal kernel bandwidth exist, but they cannot be applied directly to data samples, since they depend on the unknown underlying density functions from which the samples are drawn. In this work, we devise an approach to pick out a fully data-based optimal bandwidth. First, we derive correction formulae for the analytic formulae of optimal bandwidth to compute the roughness of the sample's density function. We then substitute the correction formulae into the analytic formulae for the optimal bandwidth, and obtain the sample's optimal bandwidth through iteration. Compared with the analytic formulae, our approach gives very good results, with relative differences from the analytic formulae of only 2%-3% for sample sizes larger than 10^4. The approach can also be generalized easily to variable kernel estimation.
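The iterate-on-roughness idea can be sketched as follows. This is a simplified illustration, not the paper's correction formulae: the roughness of f'' is estimated with the same bandwidth h being solved for, a bias the paper's corrections are designed to remove.

```python
import numpy as np

def plugin_bandwidth(x, n_iter=8):
    """Data-based bandwidth by fixed-point iteration (a sketch): estimate
    the roughness R(f'') = integral of f''(t)^2 dt from the current KDE,
    plug it into the AMISE-optimal formula
    h = (R(K) / (n * sigma_K^4 * R(f'')))^(1/5), and repeat."""
    n = len(x)
    h = 1.06 * x.std() * n ** (-0.2)               # Silverman starting value
    grid = np.linspace(x.min() - 3.0, x.max() + 3.0, 1024)
    dg = grid[1] - grid[0]
    for _ in range(n_iter):
        u = (grid[:, None] - x[None, :]) / h
        # second derivative of the Gaussian KDE: sum of (u^2-1)*phi(u)/(n h^3)
        f2 = ((u**2 - 1.0) * np.exp(-0.5 * u**2)).sum(axis=1)
        f2 /= n * h**3 * np.sqrt(2.0 * np.pi)
        R = (f2**2).sum() * dg                     # roughness of f''
        # Gaussian kernel: R(K) = 1/(2*sqrt(pi)), sigma_K = 1
        h = (1.0 / (2.0 * np.sqrt(np.pi) * n * R)) ** 0.2
    return h

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 1000)
h = plugin_bandwidth(x)
print(round(float(h), 3))
```

For standard normal data the analytic optimum is about 1.06 n^(-1/5); the uncorrected iteration above lands in the same neighborhood but slightly below it, which is exactly the kind of gap the paper's correction formulae close.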
Funding: Supported by the National Natural Science Foundation of China (No. 60574047) and the Doctorate Foundation of the State Education Ministry of China (No. 20050335018).
Abstract: Data-driven tools such as principal component analysis (PCA) and independent component analysis (ICA) have been applied to different benchmarks as process monitoring methods. The difference between the two methods is that the components of PCA remain dependent, while ICA has no orthogonality constraint and its latent variables are independent. Process monitoring with PCA often assumes that the process data or principal components follow a Gaussian distribution. However, this constraint cannot be satisfied by several practical processes. To extend the use of PCA, a nonparametric method is added to PCA to overcome this difficulty, and kernel density estimation (KDE) is a good choice. Although ICA is based on non-Gaussian distribution information, KDE can still help in closely monitoring the data. PCA, ICA, PCA with KDE (KPCA), and ICA with KDE (KICA) are demonstrated and compared by applying them to a practical industrial Spheripol polypropylene catalyzer reactor instead of a laboratory emulator.
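The PCA-with-KDE monitoring scheme can be sketched as follows, with the control limit of the squared prediction error (SPE, or Q statistic) taken from a KDE of its empirical distribution rather than from a Gaussian formula. Synthetic data only: the reactor variables, the KICA variant, and any real control-limit conventions are not reproduced.

```python
import numpy as np
from scipy.stats import gaussian_kde

# synthetic "normal operation" data: rank-2 latent structure plus noise
rng = np.random.default_rng(3)
latent = rng.normal(size=(500, 2))
X = latent @ rng.normal(size=(2, 6)) + 0.1 * rng.normal(size=(500, 6))
mean = X.mean(axis=0)
Xc = X - mean

# PCA via SVD, retaining 2 principal components
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:2].T                                   # loading matrix (6 x 2)
spe = ((Xc - Xc @ P @ P.T) ** 2).sum(axis=1)   # SPE of each training sample

# 99% control limit read off the KDE-estimated CDF of the SPE values
kde = gaussian_kde(spe)
grid = np.linspace(0.0, spe.max() * 3.0, 4000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]
limit = grid[np.searchsorted(cdf, 0.99)]

# a sample that violates the latent structure should trip the limit
fault = mean + np.array([10.0, 0.0, 0.0, 0.0, 0.0, 0.0])
r = (fault - mean) - (fault - mean) @ P @ P.T
print(float(r @ r) > limit)
```

Because the limit is a quantile of the estimated SPE density, it requires no Gaussian assumption on the process data, which is the point the abstract makes.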
Funding: Supported by the National Natural Science Foundation of China (Nos. 61305017 and 61304264) and the Natural Science Foundation of Jiangsu Province (No. BK20130154).
Abstract: The probability hypothesis density (PHD) filter has been recognized as a promising technique for tracking an unknown number of targets. The performance of the PHD filter, however, is sensitive to the available knowledge of model parameters such as the measurement noise variance and those associated with changes in the maneuvering target trajectories. If these parameters are unknown in advance, the tracking performance may degrade greatly. To address this, this paper proposes to incorporate the adaptive parameter estimation (APE) method into the PHD filter so that the model parameters, which may be static and/or time-varying, can be estimated jointly with the target states. The resulting APE-PHD algorithm is implemented using the particle filter (PF), which leads to the PF-APE-PHD filter. Simulations show that the newly proposed algorithm can correctly identify the unknown measurement noise variances, and it is capable of tracking multiple maneuvering targets with abruptly changing parameters in a more robust manner compared to multi-model approaches.
Funding: Supported by the Jiangsu Innovation Program for Graduate Education (CXZZ11_0193) and NUAA Research Funding (NJ2010009).
Abstract: An improved method using kernel density estimation (KDE) and confidence levels is presented for model validation with small samples. Decision making is challenging because of input uncertainty, and only small samples can be used due to the high cost of experimental measurements. Model validation, however, gives decision makers more confidence while improving prediction accuracy. The confidence level method is introduced, and the optimum sample variance is determined using a new method in kernel density estimation to increase the credibility of model validation. As a numerical example, the static frame model validation challenge problem presented by Sandia National Laboratories is chosen. The optimum bandwidth is selected in kernel density estimation to build a probability model based on the calibration data. The model assessment is then carried out using validation and accreditation experimental data, respectively, based on the probability model. Finally, the target structure prediction is performed using the validated model, and the results are consistent with those obtained by other researchers. The results demonstrate that the method using the improved confidence level and kernel density estimation is an effective approach to the model validation problem with small samples.
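The confidence-level computation can be sketched as follows. Illustrative only: the Sandia calibration data are replaced by a synthetic 25-point error sample, and SciPy's default Scott-rule bandwidth stands in for the paper's optimized sample variance.

```python
import numpy as np
from scipy.stats import gaussian_kde

def validation_confidence(errors, tol):
    """Probability mass of the model-error density inside +/- tol, with
    the density built by KDE from a small calibration sample (a sketch;
    the optimum-bandwidth selection step of the paper is omitted)."""
    kde = gaussian_kde(errors)                 # Scott's rule bandwidth
    return kde.integrate_box_1d(-tol, tol)

rng = np.random.default_rng(4)
small_sample = rng.normal(0.0, 0.5, 25)        # 25 calibration errors
conf = validation_confidence(small_sample, tol=1.0)
print(conf > 0.8)
```

The returned value is directly usable as a confidence level: the model is accepted for the decision at hand only if the mass inside the tolerance band exceeds a preset threshold.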
Funding: Supported by the National Natural Science Foundation of China (12131015, 12071422).
Abstract: In this paper, we consider the limit distribution of the error density function estimator in first-order autoregressive models with negatively associated and positively associated random errors. Under mild regularity assumptions, some asymptotic normality results for the residual density estimator are obtained when the autoregressive models are stationary processes and explosive processes. To illustrate these results, some simulations, such as confidence intervals and mean integrated square errors, are provided in this paper. They show that the residual density estimator can replace the density "estimator" which contains the unobservable errors.
Funding: Co-supported by the Open Fund of the Joint Key Laboratory of Microsatellite of CAS (No. KFKT15SYS1) and the Innovation Foundation of CAS (No. CXJJ-14-Q52).
Abstract: Drag-free satellites are widely used in fundamental science, as they enable high-precision measurements in pure gravity fields. This paper investigates the estimation of the local orbital reference frame (LORF) for drag-free satellites. An approach combining minimum estimation error in the time domain with a power spectral density (PSD) constraint in the frequency domain is proposed. First, the relationship between the eigenvalues of the estimator and the transfer function is established to analyze the suppression and amplification effects on input signals and to obtain the eigenvalue range. Second, an optimization model for state estimator design with minimum estimation error in the time domain and a PSD constraint in the frequency domain is established and solved by the sequential quadratic programming (SQP) algorithm. Finally, the orbital reference frame estimation of a low-Earth-orbit satellite is taken as an example, and a minimum-variance estimator with a PSD constraint is designed and analyzed using the method proposed in this paper.
Funding: The authors would like to thank the reviewers for their detailed reviews and constructive comments. We are also grateful for Sophie Song's help in improving the English. This work was supported in part by the 'Twelfth Five-Year' National Science and Technology Support Program of the Ministry of Science and Technology of China (No. 2012BAH35B02) and the National Natural Science Foundation of China (NSFC) (Nos. 41401107, 41201402, and 41201417).
Abstract: Crowd density is an important factor in crowd stability. Previous crowd density estimation methods are highly dependent on the specific video scene. This paper presents a video-scene-invariant crowd density estimation method that uses Geographic Information Systems (GIS) to monitor crowd size over large areas. The proposed method maps crowd images to GIS. We can then estimate crowd density for each camera in GIS using an estimation model obtained from one camera. Test results show that a model obtained from one camera in GIS can be adaptively applied to other cameras in outdoor video scenes. A real-time monitoring system for crowd size in large areas based on the scene-invariant model was successfully used at the Jiangsu Qinhuai Lantern Festival, 2012. It can provide early warning information and a scientific basis for safety and security decision making.
Abstract: A new algorithm for linear instantaneous independent component analysis is proposed based on maximizing the log-likelihood contrast function, which can be transformed into a gradient equation. An iterative method is introduced to solve this equation efficiently. The unknown probability density functions, as well as their first and second derivatives, in the gradient equation are estimated by the kernel density method. Computer simulations on artificially generated signals and gray-scale natural scene images confirm the efficiency and accuracy of the proposed algorithm.
Funding: Funded by the National Natural Science Foundation of China (Nos. 71271069, 71540015, and 71532004) and the Foundation of Beijing University of Civil Engineering and Architecture (No. ZF15069).
Abstract: This study examines a new methodology to predict the final seismic mortality of earthquakes in China. Most studies establish an association between mortality estimation and seismic intensity without considering population density. In China, however, the data are not always available, especially in very urgent disaster-relief situations, and population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to analyze the death tolls of earthquakes. The present paper employs the average population density to predict final death tolls in earthquakes using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case, and a typical earthquake that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and it opens the door to conducting final death forecasts with a combined qualitative and quantitative approach. Limitations and future research are also discussed in the conclusion.
Abstract: We consider n observations from the GARCH-type model Z = UY, where U and Y are independent random variables. We aim to estimate the density function of Y, where Y has a weighted distribution. We determine a sharp upper bound on the associated mean integrated square error. We also make use of the measure of expected true evidence to determine when the model leads to a crisis and causes data to be lost.