Abstract: The advent of microgrids in modern energy systems heralds a promising era of resilience, sustainability, and efficiency. Within the realm of grid-tied microgrids, the selection of a suitable optimization algorithm is critical for effective energy management, particularly in economic dispatching. This study compares the performance of Particle Swarm Optimization (PSO) and Genetic Algorithms (GA) in microgrid energy management systems, implemented using MATLAB tools. Through a comprehensive review of the literature and simulations conducted in MATLAB, the study analyzes performance metrics, convergence speed, and the overall efficacy of GA and PSO, with a focus on economic dispatching tasks. Notably, a significant distinction emerges between the cost curves the two algorithms generate for microgrid operation, with PSO consistently resulting in lower costs due to its effective economic dispatching. Specifically, the PSO approach could potentially yield substantial savings on the power bill, amounting to approximately $15.30 in this evaluation. The findings provide insights into the strengths and limitations of each algorithm within the complex dynamics of grid-tied microgrids, thereby helping stakeholders and researchers make informed decisions. This study contributes to the discourse on sustainable energy management by offering actionable guidance for the advancement of grid-tied microgrid technologies through MATLAB-implemented optimization algorithms.
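To make the dispatching mechanics concrete, here is a minimal PSO sketch for a toy two-generator economic dispatch problem, written in Python rather than the study's MATLAB; the quadratic cost coefficients, the 100 kW demand, and the swarm parameters are all illustrative assumptions, not the paper's model.

```python
import numpy as np

# Toy economic dispatch: minimize total fuel cost of two generators
# subject to meeting a fixed demand (enforced via a soft penalty).
# All coefficients below are made-up placeholders.
def cost(p):
    fuel = 0.01 * p[0]**2 + 2.0 * p[0] + 0.02 * p[1]**2 + 1.5 * p[1]
    return fuel + 1e3 * abs(p.sum() - 100.0)   # demand-balance penalty

rng = np.random.default_rng(0)
n_particles, iters = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia / acceleration weights
x = rng.uniform(0, 100, (n_particles, 2))      # candidate dispatches (kW)
v = np.zeros_like(x)
pbest = x.copy()
pbest_val = np.array([cost(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 100)
    vals = np.array([cost(p) for p in x])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = x[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("dispatch (kW):", gbest.round(2), "cost:", round(cost(gbest), 2))
```

A GA variant would replace the velocity update with selection, crossover, and mutation over the same cost function.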
Funding: Supported by the National Key Fundamental Research & Development Programs of P. R. China (2001CB309403).
Abstract: A novel method under the interactive multiple model (IMM) filtering framework is presented in this paper, in which the expectation-maximization (EM) algorithm is used to identify the process noise covariance Q online. In the existing IMM filtering theory, the matrix Q is set from design experience, but Q actually changes with the state of the maneuvering target; it is also strongly influenced by the environment around the target, i.e., it varies with time. The experiential covariance Q therefore cannot represent the influence of state noise in the maneuvering process exactly. First, it is assumed that the evolved state and the initial conditions of the system can be modeled using Gaussian distributions, even though the dynamic system has a nonlinear measurement equation, and an EM algorithm based on IMM filtering with online identification of Q is proposed. Second, a truncated error analysis is performed. Finally, Monte Carlo simulation results show that the proposed algorithm outperforms the existing algorithms and effectively improves the tracking precision for maneuvering targets.
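As a toy illustration of the idea (not the paper's IMM-EM derivation), the Python sketch below runs a single 1-D constant-velocity Kalman filter and re-estimates Q from the filtered state increments; this crude M-step drops the smoothing cross-covariance terms of a proper EM update, there is no IMM model bank, and all model matrices are made-up placeholders.

```python
import numpy as np

# Single-model Kalman filter with a crude Q re-estimation step.
# State = [position, velocity]; measurements are noisy positions.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity transition (dt = 1)
H = np.array([[1.0, 0.0]])               # position-only measurement
R = np.array([[4.0]])                    # measurement noise covariance
Q = np.eye(2) * 0.1                      # initial process noise guess

rng = np.random.default_rng(1)
zs = np.cumsum(rng.normal(1.0, 2.0, 200))   # synthetic position track

x, P, incs = np.zeros(2), np.eye(2), []
for z in zs:
    x_prev = x.copy()
    x_pred, P_pred = F @ x, F @ P @ F.T + Q          # predict
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (z - H @ x_pred)                # update
    P = (np.eye(2) - K @ H) @ P_pred
    incs.append(x - F @ x_prev)   # state change beyond the motion model

# Crude "M-step": sample covariance of the increments as the new Q.
incs = np.asarray(incs)
print("re-estimated Q:\n", (incs.T @ incs / len(incs)).round(3))
```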
Funding: This work was supported by the National Science Fund for Distinguished Young Scholars (62325104).
Abstract: The quality of a synthetic aperture radar (SAR) image degrades in the case of multiple imaging projection planes (IPPs) and multiple overlapping ship targets, which in turn affects the performance of target classification and recognition. To address this issue, a method for extracting overlapping ship targets via the expectation-maximization (EM) algorithm is proposed. First, the scatterers of ship targets are obtained via a target detection technique. Then, the EM algorithm is applied to extract the scatterers of a single ship target with a single IPP. Afterwards, a novel image amplitude estimation approach is proposed, with which the radar image of a single target with a single IPP can be generated. The proposed method accomplishes IPP selection and target separation in the image domain, which improves the image quality and preserves the target information as much as possible. Results on simulated and real measured data demonstrate the effectiveness of the proposed method.
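The scatterer-separation step can be illustrated with a plain two-component Gaussian-mixture EM; the synthetic 2-D point clouds below stand in for detected SAR scatterers, and the paper's fuller pipeline (IPP selection, amplitude estimation) is not reproduced.

```python
import numpy as np

# Two overlapping synthetic "targets": each a cloud of 2-D scatterers.
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal([0.0, 0.0], 1.0, (150, 2)),
                 rng.normal([3.0, 1.0], 1.0, (150, 2))])

mu = pts[rng.choice(len(pts), 2, replace=False)]   # random initial means
var = np.ones(2)                                   # spherical variances
pi = np.array([0.5, 0.5])                          # mixture weights

for _ in range(50):
    # E-step: responsibility of each component for each scatterer
    d2 = ((pts[:, None, :] - mu[None]) ** 2).sum(-1)
    log_r = np.log(pi) - d2 / (2 * var) - np.log(var)
    r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances
    n = r.sum(0)
    pi, mu = n / len(pts), (r.T @ pts) / n[:, None]
    var = np.array([(r[:, k] * ((pts - mu[k]) ** 2).sum(1)).sum() / (2 * n[k])
                    for k in range(2)])

labels = r.argmax(1)   # hard assignment of scatterers to targets
print("means:\n", mu.round(2), "\nscatterers per target:", np.bincount(labels))
```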
Funding: Supported by the Fundamental Research Funds for the Central Universities (2014RC042, 2015JBM109).
Abstract: In this article, we consider a lifetime distribution, the Weibull-Logarithmic distribution introduced by [6]. We investigate some new statistical characterizations and properties and develop maximum likelihood inference using the EM algorithm. Asymptotic properties of the MLEs are obtained, and extensive simulations are conducted to assess the performance of the parameter estimation. A numerical example is used to illustrate the application.
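As a rough sketch of the estimation workflow only, the Python snippet below numerically maximizes a plain Weibull log-likelihood; the Weibull-Logarithmic density of [6] is not reproduced here, and the paper's EM algorithm, which treats the latent mixing variable explicitly, differs from this direct optimization.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Simulated lifetimes from a plain Weibull stand-in (shape 1.8, scale 2.0).
rng = np.random.default_rng(3)
data = weibull_min.rvs(c=1.8, scale=2.0, size=500, random_state=rng)

def neg_loglik(theta):
    shape, scale = np.exp(theta)   # log-parameterization keeps both positive
    return -weibull_min.logpdf(data, c=shape, scale=scale).sum()

res = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("MLE (shape, scale):", np.exp(res.x).round(3))
```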
Abstract: Detrital geochronology fundamentally involves the quantification of major age ranges and their weights within an age distribution. This study presents a streamlined approach, modeling the age distribution of detrital zircons using a normal mixture model, and employs the Expectation-Maximization (EM) algorithm for precise estimation. A method is introduced to automatically select appropriate initial mean values for the EM algorithm, enhancing its efficacy in detrital geochronology. The process entails multiple trials with varying numbers of age components, yielding a set of k-component models; the model with the lowest Bayesian Information Criterion (BIC) is identified as the most suitable. For accurate determination of the number of components and their weights, a substantial sample size (n > 200) is advisable. Our findings, based on both synthetic and empirical datasets, confirm that the normal mixture model, refined by the EM algorithm, reliably identifies key age parameters with minimal error. As a probability density estimator, the normal mixture model offers a novel visualization tool for detrital data and an alternative to kernel density estimation (KDE) as a foundation for calculating existing similarity metrics. Another focus of this study is a critical examination of quantitative metrics for comparing detrital zircon age patterns. Through a case study, we demonstrate that metrics based on the empirical cumulative probability distribution (such as the K-S and Kuiper statistics) may lead to erroneous conclusions, and we propose the Kullback-Leibler (KL) divergence, a metric grounded in probability density estimation, instead. Reference critical values, simulated via the Monte Carlo method, provide more objective benchmarks for these quantitative metrics. All methodologies discussed are encapsulated in a series of MATLAB scripts, available as open-source code and as a standalone application, facilitating wider adoption in the field.
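A minimal Python analogue of this model-selection loop (the paper's own implementation is in MATLAB) is sketched below; the three age components and their weights are invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic detrital zircon ages (Ma) drawn from three normal components.
rng = np.random.default_rng(4)
ages = np.concatenate([rng.normal(250, 15, 120),
                       rng.normal(800, 40, 60),
                       rng.normal(1850, 60, 70)]).reshape(-1, 1)

# Fit k-component normal mixtures by EM and keep the lowest-BIC model.
models = [GaussianMixture(n_components=k, n_init=5, random_state=0).fit(ages)
          for k in range(1, 7)]
best = min(models, key=lambda m: m.bic(ages))
print("components:", best.n_components)
print("means (Ma):", best.means_.ravel().round(0))
print("weights:", best.weights_.round(2))
```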
Funding: National Natural Science Foundation of China (79990584).
Abstract: A new parallel expectation-maximization (EM) algorithm is proposed for large databases. Its purpose is to accelerate the operation of the EM algorithm. As a well-known algorithm for estimation in generic statistical problems, the EM algorithm has been widely used in many domains, but it often requires significant computational resources, so more elaborate methods are needed to handle databases with a large number of records or high dimensionality. The parallel EM algorithm is based on partial E-steps and retains the standard convergence guarantee of EM, while taking full advantage of parallel computation. Applied to large databases, the algorithm was confirmed to achieve a speedup of about 2.6 over the standard EM algorithm, and the running time decreases nearly linearly as the number of processors increases.
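A toy Python sketch of the partial E-step idea for a 1-D two-component normal mixture is given below: each worker returns the sufficient statistics of its data chunk and the M-step combines them. The chunking, worker count, and mixture are illustrative; the paper's database-oriented implementation is not reproduced.

```python
import numpy as np
from multiprocessing import Pool

def partial_estep(args):
    # Sufficient statistics of one data chunk for a 2-component 1-D mixture.
    x, pi, mu, var = args
    log_r = np.log(pi) - (x[:, None] - mu) ** 2 / (2 * var) - 0.5 * np.log(var)
    r = np.exp(log_r - log_r.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    return r.sum(0), r.T @ x, r.T @ x**2   # (counts, sums, squared sums)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    data = np.concatenate([rng.normal(-2, 1, 50_000), rng.normal(3, 1, 50_000)])
    pi, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.ones(2)

    with Pool(4) as pool:
        for _ in range(30):
            chunks = [(c, pi, mu, var) for c in np.array_split(data, 4)]
            stats = pool.map(partial_estep, chunks)     # parallel E-step
            n = sum(s[0] for s in stats)                # combine chunk stats
            sx, sxx = sum(s[1] for s in stats), sum(s[2] for s in stats)
            pi, mu = n / n.sum(), sx / n                # M-step
            var = sxx / n - mu**2
    print("means:", mu.round(3), "vars:", var.round(3), "weights:", pi.round(3))
```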
Abstract: Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data, i.e., it sums to a constant such as 100%. The linear regression model is the most widely used technique for identifying relationships between underlying random variables of interest, and maximum likelihood estimation (MLE) is the method of choice for estimating its parameters, which are useful for tasks such as prediction and analyzing the partial effects of independent variables. However, data quality is a significant challenge in machine learning, especially when missing data is present, and recovering missing observations can be costly and time-consuming. To address this issue, the expectation-maximization (EM) algorithm has been suggested for situations involving missing data. The EM algorithm iteratively finds maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models that depend on unobserved latent variables or data: using the current estimate as input, the expectation (E) step constructs the expected log-likelihood function, and the maximization (M) step finds the parameters that maximize it. This study examined how well the EM algorithm performs on a simulated compositional dataset with missing observations, using both robust least squares and ordinary least squares regression. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
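For the textbook Gaussian case only (not the compositional log-ratio setting of the study), an EM sketch for simple linear regression with the response missing at random might look as follows; the data, the 30% missingness rate, and the true slope of 1.5 are simulated assumptions.

```python
import numpy as np

# EM for a bivariate normal (x, y) with some y's missing at random:
# the E-step fills in conditional means/variances of the missing y's,
# the M-step re-estimates the joint mean and covariance, and the
# regression slope and intercept fall out of the fitted covariance.
rng = np.random.default_rng(6)
n = 1000
x = rng.normal(0.0, 1.0, n)
y = 1.5 * x + rng.normal(0.0, 0.5, n)
miss = rng.random(n) < 0.3                 # 30% of responses missing
y_obs = np.where(miss, np.nan, y)

mu = np.array([x.mean(), np.nanmean(y_obs)])
S = np.array([[x.var(), 0.0], [0.0, np.nanvar(y_obs)]])

for _ in range(100):
    # E-step: conditional moments of each missing y given its x
    b = S[0, 1] / S[0, 0]
    cond_mean = mu[1] + b * (x[miss] - mu[0])
    cond_var = S[1, 1] - b * S[0, 1]
    y_fill = y_obs.copy()
    y_fill[miss] = cond_mean
    # M-step: update mean vector and covariance from completed moments
    mu = np.array([x.mean(), y_fill.mean()])
    sxy = ((x - mu[0]) * (y_fill - mu[1])).mean()
    syy = ((y_fill - mu[1]) ** 2).mean() + miss.mean() * cond_var
    S = np.array([[x.var(), sxy], [sxy, syy]])

slope = S[0, 1] / S[0, 0]
print("EM slope:", round(slope, 3), "intercept:", round(mu[1] - slope * mu[0], 3))
```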