This paper presents the calibration of Omori's aftershock occurrence rate model for Turkey and the resulting likelihoods. Aftershock occurrence rate models are used for estimating the probability of an aftershock that exceeds a specific magnitude threshold within a time interval after the mainshock. Critical decisions on the post-earthquake safety of structures directly depend on the aftershock hazard estimated using the occurrence model. It is customary to calibrate models in a region-specific manner. These models depend on rate parameters (a, b, c and p) related to the seismicity characteristics of the investigated region. In this study, the available well-recorded aftershock sequences for a set of Mw ≥ 5.9 mainshock events that were observed in Turkey until 2012 are considered to develop the aftershock occurrence model. Mean estimates of the model parameters identified for Turkey are a = -1.90, b = 1.11, c = 0.05 and p = 1.20. Based on the developed model, aftershock likelihoods are computed for a range of different time intervals and mainshock magnitudes. Also, the sensitivity of aftershock probabilities to the model parameters is investigated. Aftershock occurrence probabilities estimated using the model are expected to be useful for post-earthquake safety evaluations in Turkey.
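The abstract gives the mean Turkey parameters but not the probability formula. A minimal sketch of how such parameters are typically turned into aftershock probabilities, assuming a nonhomogeneous Poisson process with a Reasenberg-Jones-style rate λ(t) = 10^(a + b(Mm − M)) / (t + c)^p (this specific functional form is an assumption, not quoted from the paper):

```python
import math

def aftershock_probability(a, b, c, p, mag_main, mag_min, t1, t2):
    """Probability of at least one aftershock with magnitude >= mag_min in
    the window [t1, t2] days after the mainshock, assuming a nonhomogeneous
    Poisson process with rate lambda(t) = 10**(a + b*(mag_main - mag_min))
    / (t + c)**p (Reasenberg-Jones-style form, used here for illustration)."""
    k = 10.0 ** (a + b * (mag_main - mag_min))
    if abs(p - 1.0) < 1e-12:
        # integral of 1/(t+c) is logarithmic when p = 1
        n = k * (math.log(t2 + c) - math.log(t1 + c))
    else:
        n = k * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
    return 1.0 - math.exp(-n)

# Mean Turkey parameters reported in the abstract
prob = aftershock_probability(a=-1.90, b=1.11, c=0.05, p=1.20,
                              mag_main=7.0, mag_min=5.0, t1=0.0, t2=7.0)
print(f"P(M>=5 aftershock within 7 days of an Mw 7.0 mainshock) = {prob:.3f}")
```

The mainshock magnitude and window here are illustrative inputs, not values from the study.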
The general functional form of composite likelihoods is derived by minimizing the Kullback-Leibler distance under structural constraints associated with low-dimensional densities. Connections with the I-projection and the maximum entropy distributions are shown. Asymptotic properties of composite likelihood inference under the proposed information-theoretical framework are established.
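To make "composite likelihood" concrete: the most common instance is the pairwise likelihood, which sums bivariate log-densities over coordinate pairs instead of evaluating an intractable joint density. A minimal sketch for an equicorrelated standard multivariate normal (this example is generic, not the paper's derivation):

```python
import math
from itertools import combinations

def bivariate_normal_logpdf(x, y, rho):
    """Log-density of a standard bivariate normal with correlation rho."""
    q = (x * x - 2.0 * rho * x * y + y * y) / (1.0 - rho * rho)
    return -math.log(2.0 * math.pi) - 0.5 * math.log(1.0 - rho * rho) - 0.5 * q

def pairwise_log_likelihood(sample, rho):
    """Pairwise composite log-likelihood of one d-dimensional observation:
    sum the tractable bivariate log-densities over all coordinate pairs
    rather than evaluating the full d-dimensional joint density."""
    return sum(bivariate_normal_logpdf(sample[i], sample[j], rho)
               for i, j in combinations(range(len(sample)), 2))

obs = [0.3, -0.1, 0.4]
# Profile the composite likelihood over a coarse grid of correlations
best_rho = max((r / 10.0 for r in range(-9, 10)),
               key=lambda r: pairwise_log_likelihood(obs, r))
print("composite-likelihood grid maximiser:", best_rho)
```

At rho = 0 the pairwise objective reduces to a weighted sum of univariate log-densities, which is a handy sanity check.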
The existing blockwise empirical likelihood (BEL) method blocks the observations or their analogues, which is proven useful under some dependent data settings. In this paper, we introduce a new BEL (NBEL) method by blocking the scoring functions under high-dimensional cases. We study the construction of confidence regions for the parameters in spatial autoregressive models with spatial autoregressive disturbances (SARAR models) with a high dimension of parameters by using the NBEL method. It is shown that the NBEL ratio statistics are asymptotically χ^(2)-type distributed, which are used to obtain the NBEL-based confidence regions for the parameters in SARAR models. A simulation study is conducted to compare the performances of the NBEL and the usual EL methods.
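For readers unfamiliar with empirical likelihood, the following sketches the ordinary (non-blockwise, one-dimensional) EL ratio for a scalar mean, solving the Lagrange multiplier equation by bisection; the NBEL method of the paper blocks scoring functions and is considerably more involved than this baseline:

```python
import math

def el_log_ratio(data, mu):
    """Empirical likelihood ratio statistic -2 log R(mu) for a scalar mean.
    The optimal weights are w_i = 1 / (n * (1 + lam * (x_i - mu))), where
    lam solves sum((x_i - mu) / (1 + lam * (x_i - mu))) = 0."""
    z = [x - mu for x in data]
    if min(z) >= 0 or max(z) <= 0:
        return float("inf")  # mu outside the convex hull of the data
    # 1 + lam * z_i must stay strictly positive for every i
    lo = -1.0 / max(z) + 1e-10
    hi = -1.0 / min(z) - 1e-10
    def g(lam):
        return sum(zi / (1.0 + lam * zi) for zi in z)
    for _ in range(200):          # g is strictly decreasing: bisect its root
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return 2.0 * sum(math.log1p(lam * zi) for zi in z)

data = [1.1, 2.3, 0.8, 1.9, 2.7, 1.4, 2.0, 1.6]
stat = el_log_ratio(data, 1.5)
print("EL ratio statistic at mu=1.5: %.4f ->" % stat,
      "inside 95% region" if stat < 3.84 else "rejected")
```

Comparing the statistic against the χ²(1) quantile 3.84 yields the usual 95% EL confidence region for the mean.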
In this paper, we present a novel particle filter (PF)-based direct position tracking method utilizing multiple distributed observation stations. Traditional passive tracking methods are anchored on repetitive position estimation, where the set of consecutive estimates provides the tracking trajectory, as in two-step and direct position determination methods. However, duplicate estimates can be computationally expensive. In addition, these techniques suffer from data association problems. The PF algorithm is a tracking method that avoids these drawbacks, but the conventional PF algorithm is unable to construct a likelihood function from the received signals of multiple observatories to determine the weights of particles. Therefore, we developed an improved PF algorithm with the likelihood function modified by the projection approximation subspace tracking with deflation (PASTd) algorithm. The proposed algorithm uses the projection subspace and spectral function to replace the likelihood function of the PF. The weights of particles are then calculated jointly by multiple likelihood functions. Finally, the tracking problem of multiple targets is solved by multiple sets of particles. Simulations demonstrate the effectiveness of the proposed method in terms of computational complexity and tracking accuracy.
Blade Tip Timing (BTT) enables non-contact measurements of rotating blades by placing probes strategically. Due to the uneven probe layout, BTT signals exhibit periodic irregularities. While recovering parameters like frequency from such signals is possible, achieving high-precision vibration parameters remains challenging. This paper proposes a novel two-stage off-grid estimation method. It leverages a unique array layout (a coprime array) to obtain a regular augmented covariance matrix. Subsequently, parameters in the matrix are recovered using the sparse iterative covariance-based estimation method based on covariance fitting criteria. Finally, high-precision estimates of the imprecise parameters are obtained using unconditional maximum likelihood estimation, effectively eliminating the effects of basis mismatch. Through substantial numerical and experimental validation, the proposed method demonstrates significantly higher accuracy compared to classical BTT parameter estimation methods, approaching the lower bound of unbiased estimation variance. Furthermore, due to its immunity to frequency gridding, it can track minor frequency deviations, making it more suitable for indicating blade condition.
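The key structural idea, a coprime array whose difference coarray fills a long contiguous run of lags, can be illustrated in a few lines. This sketch uses the textbook coprime construction for the pair (3, 5) and simply enumerates the virtual lags on which an augmented covariance matrix lives; it is not the paper's probe layout:

```python
def coprime_array(m, n):
    """Sensor (probe) positions of a standard coprime array built from the
    coprime pair (m, n): one subarray at n multiples of m, one at 2m
    multiples of n."""
    sub1 = {m * i for i in range(n)}        # 0, m, 2m, ..., (n-1)m
    sub2 = {n * j for j in range(2 * m)}    # 0, n, 2n, ..., (2m-1)n
    return sorted(sub1 | sub2)

def difference_coarray(positions):
    """All pairwise lags p_i - p_j: the virtual array on which the
    augmented covariance matrix is formed."""
    return sorted({a - b for a in positions for b in positions})

pos = coprime_array(3, 5)
lags = difference_coarray(pos)
print("physical probes:", pos)
print("non-negative lags:", [k for k in lags if k >= 0])
```

For (m, n) = (3, 5), ten physical positions generate every lag from 0 up to mn + m - 1 = 17 before the first hole, which is what makes the regularized augmented covariance matrix possible.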
Accelerated life tests play a vital role in reliability analysis, especially as advanced technologies lead to the production of highly reliable products to meet market demands and competition. Among these tests, progressive-stress accelerated life tests (PSALT) allow for continuous changes in applied stress. Additionally, the generalized progressive hybrid censoring (GPHC) scheme has attracted significant attention in reliability and survival analysis, particularly for handling censored data in accelerated testing. It has been applied to various failure models, including competing risks and step-stress models. However, despite its growing relevance, a notable gap remains in the literature regarding the application of GPHC in PSALT models. This paper addresses that gap by studying PSALT under a GPHC scheme with binomial removal. Specifically, it considers lifetimes following the quasi-Xgamma distribution. Model parameters are estimated using both maximum likelihood and Bayesian methods under gamma priors. Interval estimation is provided through approximate confidence intervals, bootstrap methods, and Bayesian credible intervals. Bayesian estimators are derived under squared error and entropy loss functions, using informative priors in simulation and non-informative priors in real data applications. A simulation study is conducted to evaluate various censoring schemes, with coverage probabilities and interval widths assessed via Monte Carlo simulations. Additionally, Bayesian predictive estimates and intervals are presented. The proposed methodology is illustrated through the analysis of two real-world accelerated life test datasets.
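The "binomial removal" mechanism is easy to simulate: after each observed failure, every surviving unit is independently withdrawn with some probability. The following sketch applies it under progressive Type-II censoring, with exponential lifetimes standing in for the quasi-Xgamma model (the function name, removal probability, and sample sizes are all illustrative assumptions):

```python
import random

random.seed(1)

def progressive_censoring_binomial(lifetimes, m, p_remove):
    """Simulate progressive Type-II censoring with binomial removals:
    after each of the first m-1 observed failures, each surviving unit is
    withdrawn independently with probability p_remove; all remaining units
    are censored at the m-th failure.  Returns the observed failure times
    and the removal counts R_1..R_m."""
    alive = sorted(lifetimes)
    observed, removals = [], []
    for i in range(m):
        observed.append(alive.pop(0))        # next (smallest) failure time
        if i < m - 1:
            r = sum(random.random() < p_remove for _ in alive)
            # design constraint: keep at least m-i-1 units for future failures
            r = min(r, len(alive) - (m - i - 1))
            removed = random.sample(alive, r)
        else:
            removed = list(alive)            # terminate: censor all survivors
        for u in removed:
            alive.remove(u)
        removals.append(len(removed))
    return observed, removals

# Exponential lifetimes stand in for the quasi-Xgamma model of the paper
data = [random.expovariate(1.0) for _ in range(20)]
obs, rem = progressive_censoring_binomial(data, m=8, p_remove=0.2)
print("observed failures:", [round(t, 3) for t in obs])
print("removals:", rem, "| total accounted for =", len(obs) + sum(rem))
```

Every unit is accounted for: the m observed failures plus all removals sum to the initial sample size, which is the bookkeeping the likelihood under this scheme is built on.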
In this work, we propose expected Bayesian and hierarchical Bayesian approaches to estimate the shape parameter and hazard rate under a generalized progressive hybrid censoring scheme for the Kumaraswamy distribution. These estimates have been obtained using gamma priors based on various loss functions such as squared error, entropy, weighted balance, and minimum expected loss functions. An investigation is carried out using Monte Carlo simulation to evaluate the effectiveness of the suggested estimators. The simulation provides a quantitative assessment of the estimates' accuracy and efficiency under various conditions by comparing them in terms of mean squared error. Additionally, the monthly water capacity of the Shasta reservoir is examined to offer real-world examples of how the suggested estimates may be used and perform.
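For the complete-sample case (no censoring), the Kumaraswamy likelihood for the second shape parameter b is conjugate to a gamma prior, which makes the expected-Bayesian idea easy to sketch: compute the posterior mean under squared-error loss, then average it over a hyperprior on the prior's rate. The Uniform(0, c) hyperprior and all numbers below are common illustrative choices, not the paper's setup:

```python
import math, random

random.seed(2)

def kumaraswamy_sample(a, b):
    """Inverse-CDF draw from Kumaraswamy(a, b): F(x) = 1 - (1 - x^a)^b."""
    u = random.random()
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def bayes_shape_estimate(data, a, alpha0, beta0):
    """Posterior mean of the second shape b (squared-error loss) with the
    first shape a known and a Gamma(alpha0, beta0) prior: the likelihood
    is proportional to b^n * exp(-b*T) with T = -sum(log(1 - x_i^a)),
    giving a Gamma(n + alpha0, T + beta0) posterior."""
    n = len(data)
    t = -sum(math.log(1.0 - x ** a) for x in data)
    return (n + alpha0) / (t + beta0)

def e_bayes_shape_estimate(data, a, alpha0, c=2.0, grid=100):
    """Expected-Bayesian estimate: average the posterior mean over a
    Uniform(0, c) hyperprior on beta0 (one common hyperprior choice)."""
    betas = [(k + 0.5) * c / grid for k in range(grid)]
    return sum(bayes_shape_estimate(data, a, alpha0, b0) for b0 in betas) / grid

true_a, true_b = 2.0, 3.0
data = [kumaraswamy_sample(true_a, true_b) for _ in range(200)]
print("Bayes estimate of b:   %.3f" % bayes_shape_estimate(data, true_a, 1.0, 1.0))
print("E-Bayes estimate of b: %.3f" % e_bayes_shape_estimate(data, true_a, 1.0))
```

Under generalized progressive hybrid censoring, T is replaced by a censoring-adjusted statistic, but the gamma-posterior structure is the same backbone.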
Parametric survival models are essential for analyzing time-to-event data in fields such as engineering and biomedicine. While the log-logistic distribution is popular for its simplicity and closed-form expressions, it often lacks the flexibility needed to capture complex hazard patterns. In this article, we propose a novel extension of the classical log-logistic distribution, termed the new exponential log-logistic (NExLL) distribution, designed to provide enhanced flexibility in modeling time-to-event data with complex failure behaviors. The NExLL model incorporates a new exponential generator to expand the shape adaptability of the baseline log-logistic distribution, allowing it to capture a wide range of hazard rate shapes, including increasing, decreasing, J-shaped, reversed J-shaped, modified bathtub, and unimodal forms. A key feature of the NExLL distribution is its formulation as a mixture of log-logistic densities, offering both symmetric and asymmetric patterns suitable for diverse real-world reliability scenarios. We establish several theoretical properties of the model, including closed-form expressions for its probability density function, cumulative distribution function, moments, hazard rate function, and quantiles. Parameter estimation is performed using seven classical estimation techniques, with extensive Monte Carlo simulations used to evaluate and compare their performance under various conditions. The practical utility and flexibility of the proposed model are illustrated using two real-world datasets from reliability and engineering applications, where the NExLL model demonstrates superior fit and predictive performance compared to existing log-logistic-based models. This contribution advances the toolbox of parametric survival models, offering a robust alternative for modeling complex aging and failure patterns in reliability, engineering, and other applied domains.
The weighted exponential distribution WED(α, λ) with shape parameter α and scale parameter λ possesses some good properties and can be used as a good fit to survival time data compared to other distributions such as the gamma, Weibull, or generalized exponential distribution. In this article, we proved the existence and uniqueness of the maximum likelihood estimator (MLE) of the parameters of WED(α, λ) in simple random sampling (SRS) and provided explicit expressions for the Fisher information number in SRS. Moreover, we also proved the existence and uniqueness of the MLE of the parameters of WED(α, λ) in ranked set sampling (RSS) and provided explicit expressions for the Fisher information number in RSS. Simulation studies show that these MLEs in RSS can be real competitors for those in SRS.
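The SRS-versus-RSS comparison hinges on how a ranked set sample is drawn: each measured unit is an order statistic from its own independent group. A minimal sketch of the sampling mechanism, with plain exponential draws standing in for WED(α, λ) and perfect judgment ranking assumed:

```python
import random

random.seed(3)

def ranked_set_sample(draw, set_size, cycles):
    """One ranked set sample: in each cycle, draw set_size independent
    groups of set_size units; from group r, measure only the r-th smallest
    (judgment ranking taken as perfect), so every measured unit is an
    order statistic from its own group."""
    sample = []
    for _ in range(cycles):
        for r in range(set_size):
            group = sorted(draw() for _ in range(set_size))
            sample.append(group[r])
    return sample

# Exponential draws stand in for the weighted exponential WED(alpha, lambda)
draw = lambda: random.expovariate(1.0)
rss = ranked_set_sample(draw, set_size=4, cycles=50)
srs = [draw() for _ in range(len(rss))]
mean = lambda xs: sum(xs) / len(xs)
print("n = %d, SRS mean %.3f, RSS mean %.3f" % (len(rss), mean(srs), mean(rss)))
```

Both samples have the same size and the same marginal mean; the gain from RSS shows up as a smaller variance of the resulting estimators, which is what the Fisher-information comparisons in the article quantify exactly.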
Over the past few decades, numerous adaptive Kalman filters (AKFs) have been proposed. However, achieving online estimation with both high estimation accuracy and fast convergence speed is challenging, especially when both the process noise and measurement noise covariance matrices are relatively inaccurate. Maximum likelihood estimation (MLE) possesses the potential to achieve this goal, since its theoretical accuracy is guaranteed by asymptotic optimality and the convergence speed is fast due to weak dependence on accurate state estimation. Unfortunately, the maximum likelihood cost function is so intricate that the existing MLE methods can only simply ignore all historical measurement information to achieve online estimation, which cannot adequately realize the potential of MLE. In order to design online MLE-based AKFs with high estimation accuracy and fast convergence speed, an online exploratory MLE approach is proposed, based on which a mini-batch coordinate descent noise covariance matrix estimation framework is developed. In this framework, the maximum likelihood cost function is simplified for online estimation with fewer and simpler terms which are selected in a mini-batch and calculated with a backtracking method. This maximum likelihood cost function is sidestepped and solved by exploring possible estimated noise covariance matrices adaptively while the historical measurement information is adequately utilized. Furthermore, four specific algorithms are derived under this framework to meet different practical requirements in terms of convergence speed, estimation accuracy, and calculation load. Abundant simulations and experiments are carried out to verify the validity and superiority of the proposed algorithms as compared with existing state-of-the-art AKFs.
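The maximum likelihood cost function the abstract refers to is built from Kalman filter innovations. A minimal scalar sketch of that innovation-form log-likelihood, evaluated for candidate process-noise values (the random-walk model and all numbers are illustrative; this is the baseline cost, not the paper's mini-batch coordinate descent framework):

```python
import math, random

random.seed(4)

def kf_log_likelihood(zs, q, r, x0=0.0, p0=1.0):
    """Innovation-form log-likelihood of a scalar random-walk Kalman filter:
    log L = -0.5 * sum( log(2*pi*S_k) + nu_k**2 / S_k ), where nu_k is the
    innovation and S_k its predicted variance.  Maximising this over (q, r)
    is the noise-covariance MLE that MLE-based AKFs build on."""
    x, p, ll = x0, p0, 0.0
    for z in zs:
        p = p + q                       # predict (state transition = identity)
        s = p + r                       # innovation variance
        nu = z - x                      # innovation
        ll += -0.5 * (math.log(2.0 * math.pi * s) + nu * nu / s)
        k = p / s                       # Kalman gain
        x = x + k * nu                  # update
        p = (1.0 - k) * p
    return ll

# Simulate a random walk with true process noise q=0.1 and measurement noise r=1.0
truth_q, zs, x = 0.1, [], 0.0
for _ in range(400):
    x += random.gauss(0.0, math.sqrt(truth_q))
    zs.append(x + random.gauss(0.0, 1.0))

for q in (0.01, 0.1, 1.0):
    print("q=%.2f  logL=%.1f" % (q, kf_log_likelihood(zs, q, 1.0)))
```

The likelihood peaks near the true q, illustrating why the cost is informative; the difficulty the paper tackles is evaluating and optimizing it online without reprocessing the whole history.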
The phenomenon of fear memory generalization can be defined as the expansion of an individual's originally specific fear responses to a similar yet genuinely harmless stimulus or situation subsequent to the occurrence of a traumatic event [1]. Fear generalization within the normal range represents an adaptive evolutionary mechanism to facilitate prompt reactions to potential threats and to enhance the likelihood of survival.
The existence and uniqueness of the maximum likelihood estimator (MLE) of the parameter for the exponential-Poisson distribution was discussed by Kuş [2007. A new lifetime distribution. Computational Statistics and Data Analysis 51(9): 4497-4509] in simple random sampling (SRS). As an alternative to the MLE in SRS, Joukar et al. [2021. Parameter estimation for the exponential-Poisson distribution based on ranked set samples. Communications in Statistics - Theory and Methods 50(3): 560-581] discussed the MLE of the parameter for this distribution in ranked set sampling (RSS). However, they did not discuss the existence and uniqueness of the MLE in RSS and did not provide explicit expressions for the Fisher information in RSS. In this article, we discuss the existence and uniqueness of the MLE of the parameter in RSS and give explicit expressions for the Fisher information in RSS. The MLEs are compared in terms of asymptotic efficiencies. Numerical studies and a real data application show that these MLEs in RSS can be real competitors for those in SRS.
As vehicular networks become increasingly pervasive, enhancing connectivity and reliability has emerged as a critical objective. Among the enabling technologies for advanced wireless communication, particularly those targeting low latency and high reliability, time synchronization is critical, especially in vehicular networks. However, due to the inherent mobility of vehicular environments, consistently exchanging synchronization packets with a fixed base station or access point is challenging. This issue is further exacerbated in signal-shadowed areas such as urban canyons, tunnels, or large-scale indoor halls where other technologies, such as the global navigation satellite system (GNSS), are unavailable. One-way synchronization techniques offer a feasible approach under such transient connectivity conditions, but they still suffer from long convergence times to reach the required synchronization accuracy. In this paper, we propose a WLAN-based multi-stage clock synchronization scheme (WMC) tailored for vehicular networks. The proposed method comprises an initial hard-update stage to rapidly achieve synchronization, followed by a high-precision stable stage based on maximum likelihood estimation (MLE). By implementing the scheme directly at the network driver, we address key limitations of hard-update mechanisms. Our approach significantly reduces the initial period needed to collect high-quality samples and the offset estimation time to reach sub-50 μs accuracy, and subsequently transitions to a refined MLE-based synchronization stage, achieving stable accuracy at approximately 30 μs. The windowed moving average stabilized (reaching 90% of the baseline) in approximately 35 s, which corresponds to just 5.1% of the baseline time. Finally, the impact of synchronization performance on the localization model was validated using the Simulation of Urban Mobility (SUMO). The results demonstrate that more accurate conditions for position estimation can be supported, with an improvement of about 38.5% in the mean error.
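The two-stage idea can be sketched abstractly: jump the clock to each new offset sample at first (hard update), then switch to a running mean, which is the MLE of a constant offset under i.i.d. Gaussian noise. This is a toy illustration of the stage transition only; the offset value, noise level, and stage length are hypothetical and the paper's WMC scheme operates on real driver-level timestamps:

```python
import random

random.seed(5)

def two_stage_sync(offset_samples, hard_stage=5):
    """Two-stage one-way synchronization sketch: the first few noisy offset
    observations are applied as hard updates (the clock jumps straight to
    the newest sample), after which the stable stage keeps a running mean,
    the MLE of a constant offset under i.i.d. Gaussian noise."""
    estimates = []
    acc = n = 0
    for i, s in enumerate(offset_samples):
        if i < hard_stage:
            est = s                      # hard update: trust the newest sample
        else:
            acc += s
            n += 1
            est = acc / n                # MLE stage: average the samples
        estimates.append(est)
    return estimates

true_offset_us = 120.0                   # hypothetical clock offset, microseconds
samples = [true_offset_us + random.gauss(0.0, 40.0) for _ in range(200)]
est = two_stage_sync(samples)
print("first hard-update estimate: %.1f us, final MLE estimate: %.1f us"
      % (est[0], est[-1]))
```

The hard updates converge immediately but stay as noisy as a single sample, while the averaging stage tightens like 1/sqrt(n), mirroring the fast-then-precise behavior the abstract describes.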
BACKGROUND: Attention deficit hyperactivity disorder (ADHD) is a prevalent neurodevelopmental disorder in adolescents characterized by inattention, hyperactivity, and impulsivity, which impact cognitive, behavioral, and emotional functioning. Resting-state functional magnetic resonance imaging (rs-fMRI) provides critical insights into the functional architecture of the brain in ADHD. Despite extensive research, the specific brain regions consistently affected in ADHD patients during these formative years have not been comprehensively delineated. AIM: To identify consistently vulnerable brain regions in adolescent ADHD patients using rs-fMRI and activation likelihood estimation (ALE) meta-analysis. METHODS: We conducted a comprehensive literature search up to August 31, 2024, to identify studies investigating functional brain alterations in adolescents with ADHD. We utilized regional homogeneity (ReHo), amplitude of low-frequency fluctuations (ALFF), dynamic ALFF (dALFF), and fractional ALFF (fALFF) analyses. We compared the regions of aberrant spontaneous neural activity in adolescents with ADHD with those in healthy controls (HCs) using ALE. RESULTS: Fifteen studies (468 adolescent ADHD patients and 466 HCs) were included. Combining the ReHo and ALFF/fALFF/dALFF data, the results revealed increased activity in the right lingual gyrus [LING, Brodmann Area (BA) 18], left LING (BA 18), and right cuneus (CUN, BA 23) in adolescent ADHD patients compared with HCs (voxel size: 592-32 mm³, P < 0.05). Decreased activity was observed in the left medial frontal gyrus (MFG, BA 9) and left precuneus (PCUN, BA 31) in adolescent ADHD patients compared with HCs (voxel size: 960-456 mm³, P < 0.05). Jackknife sensitivity analyses demonstrated robust reproducibility in 11 of the 13 tests for the right LING, left LING, and right CUN and in 11 of the 14 tests for the left MFG and left PCUN. CONCLUSION: We identified specific brain regions with both increased and decreased activity in adolescent ADHD patients, enhancing our understanding of the neural alterations that occur during this pivotal stage of development.
A new extended distribution called the Odd Exponential Generalized Exponential-Exponential (EOEGE-E) distribution is proposed based on a generalization of the odd generalized exponential family (OEGE-E). The statistical properties of the proposed distribution are derived. The study evaluates the accuracy of six estimation methods under complete samples. The estimation techniques include the maximum likelihood, ordinary least squares, weighted least squares, maximum product of spacings, Cramér-von Mises, and Anderson-Darling methods. Two methods of estimation for the involved parameters are considered based on progressively Type-II censored data (PTIIC): maximum likelihood and maximum product of spacings. The proposed distribution's effectiveness was evaluated using different datasets from various fields. The proposed distribution provides a better fit for these datasets than existing probability distributions.
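Among the listed techniques, maximum product of spacings (MPS) is the least familiar: instead of the density, it maximizes the product of CDF spacings between consecutive order statistics. A minimal sketch for a plain exponential model, which stands in for the EOEGE-E distribution (the data and search bounds are illustrative):

```python
import math

def mps_objective(data, cdf):
    """Maximum-product-of-spacings objective: the sum of log spacings
    D_i = F(x_(i)) - F(x_(i-1)), with F(x_(0)) = 0 and F(x_(n+1)) = 1."""
    xs = sorted(data)
    u = [0.0] + [cdf(x) for x in xs] + [1.0]
    return sum(math.log(max(u[i + 1] - u[i], 1e-300)) for i in range(len(u) - 1))

def mps_exponential_rate(data, lo=0.01, hi=20.0, iters=80):
    """Golden-section search for the exponential rate that maximises the
    spacings objective (exponential stands in for the EOEGE-E model)."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    f = lambda lam: mps_objective(data, lambda x: 1.0 - math.exp(-lam * x))
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    for _ in range(iters):
        if f(c) > f(d):          # maximum lies in [a, d]
            b, d = d, c
            c = b - phi * (b - a)
        else:                    # maximum lies in [c, b]
            a, c = c, d
            d = a + phi * (b - a)
    return 0.5 * (a + b)

data = [0.21, 0.43, 0.55, 0.89, 1.02, 1.37, 1.60, 2.13, 2.74, 3.30]
print("MPS rate estimate: %.3f" % mps_exponential_rate(data))
```

MPS behaves like maximum likelihood asymptotically but remains well defined for heavy-tailed or J-shaped densities where the likelihood can be unbounded, which is why it pairs naturally with flexible families like the one proposed here.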
Air pollution, specifically fine particulate matter (PM2.5), represents a critical environmental and public health concern due to its adverse effects on respiratory and cardiovascular systems. Accurate forecasting of PM2.5 concentrations is essential for mitigating health risks; however, the inherent nonlinearity and dynamic variability of air quality data present significant challenges. This study conducts a systematic evaluation of deep learning algorithms, including the Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and the hybrid CNN-LSTM, as well as statistical models, the AutoRegressive Integrated Moving Average (ARIMA) and Maximum Likelihood Estimation (MLE), for hourly PM2.5 forecasting. Model performance is quantified using the Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Coefficient of Determination (R^(2)) metrics. The comparative analysis identifies optimal predictive approaches for air quality modeling, emphasizing computational efficiency and accuracy. Additionally, CNN classification performance is evaluated using a confusion matrix, accuracy, precision, and F1-score. The results demonstrate that the hybrid CNN-LSTM model outperforms the standalone models, exhibiting lower error rates and higher R^(2) values, thereby highlighting the efficacy of deep learning-based hybrid architectures in achieving robust and precise PM2.5 forecasting. This study underscores the potential of advanced computational techniques in enhancing air quality prediction systems for environmental and public health applications.
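The four regression metrics used for the comparison have standard definitions that fit in a few lines. A self-contained sketch on hypothetical hourly PM2.5 values (the data are invented for illustration, not from the study):

```python
import math

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, MAPE (%), and R^2, the four scores used in the study."""
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mae = sum(abs(e) for e in errs) / n
    mape = 100.0 * sum(abs(e / t) for e, t in zip(errs, y_true)) / n
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errs)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "R2": r2}

# Hypothetical hourly PM2.5 values (ug/m^3) and model predictions
y_true = [35.0, 42.0, 58.0, 61.0, 49.0, 38.0]
y_pred = [33.0, 45.0, 55.0, 64.0, 47.0, 40.0]
m = regression_metrics(y_true, y_pred)
print({k: round(v, 3) for k, v in m.items()})
```

Note that MAPE is undefined when a true value is zero, which matters for pollutant series with clean-air readings; RMSE and MAE do not share that caveat.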
Funding: Supported by the Scientific and Technological Research Council of Turkey (TUBITAK) with Grant No. 213M454.
Funding: Supported by the National Natural Science Foundation of China (12061017, 12361055) and the Research Fund of Guangxi Key Lab of Multi-source Information Mining & Security (22-A-01-01).
Funding: Supported by China NSF Grants (62371225, 62371227) and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX250590).
Funding: Supported by the National Natural Science Foundation of China (Nos. 52105117, 52222504 & 51875433) and the Funds for Distinguished Young Talent of Shaanxi Province, China (No. 2019JC-04).
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-DDRSP2503).
Funding: Funded by Researchers Supporting Project number (RSPD2025R969), King Saud University, Riyadh, Saudi Arabia.
Funding: Supported by the National Natural Science Foundation of China (11901236, 12261036), the Scientific Research Fund of the Hunan Provincial Education Department (21A0328), the Provincial Natural Science Foundation of Hunan (2022JJ30469), the Young Core Teacher Foundation of Hunan Province ([2020]43), and the Provincial Postgraduate Innovation Foundation of Hunan (CX20221113).
Abstract: The weighted exponential distribution WED(α,λ), with shape parameter α and scale parameter λ, possesses some good properties and can fit survival-time data better than distributions such as the gamma, Weibull, or generalized exponential distribution. In this article, we prove the existence and uniqueness of the maximum likelihood estimator (MLE) of the parameters of WED(α,λ) in simple random sampling (SRS) and provide explicit expressions for the Fisher information number in SRS. Moreover, we also prove the existence and uniqueness of the MLE of the parameters of WED(α,λ) in ranked set sampling (RSS) and provide explicit expressions for the Fisher information number in RSS. Simulation studies show that the MLEs in RSS can be real competitors to those in SRS.
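For illustration, a crude grid-search MLE for WED(α,λ) under SRS can be sketched as follows, assuming the Gupta-Kundu parameterization f(x) = ((α+1)/α) λ e^(−λx)(1 − e^(−αλx)) and the representation of a WED(α,λ) variate as the sum of independent Exp(λ) and Exp((1+α)λ) variables; the grid ranges, step sizes, and true parameter values are arbitrary choices for the sketch.

```python
import math, random

def wed_loglik(a, l, xs):
    # Log-likelihood of WED(a, l) with density (Gupta-Kundu form, assumed):
    # f(x) = ((a + 1) / a) * l * exp(-l * x) * (1 - exp(-a * l * x)), x > 0.
    n = len(xs)
    ll = n * math.log((a + 1.0) / a * l) - l * sum(xs)
    ll += sum(math.log(max(1.0 - math.exp(-a * l * x), 1e-300)) for x in xs)
    return ll

rng = random.Random(5)
true_a, true_l = 2.0, 1.0
# WED(a, l) variate = Exp(l) + Exp((1 + a) * l) (their sum has the WED density).
xs = [rng.expovariate(true_l) + rng.expovariate((1 + true_a) * true_l)
      for _ in range(3000)]

# Crude grid-search MLE (illustrative ranges; a real study would use a proper optimizer).
grid = [(a, l) for a in [0.25 * i for i in range(1, 17)]
               for l in [0.05 * j for j in range(10, 41)]]
a_hat, l_hat = max(grid, key=lambda p: wed_loglik(p[0], p[1], xs))
```

With 3000 observations the scale estimate lands close to the true λ = 1; the shape α is estimated less precisely, as is typical for mixture-like shapes.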
Funding: Supported in part by the National Key Research and Development Program of China (2023YFB3906403), the National Natural Science Foundation of China (62373118, 62173105), and the Natural Science Foundation of Heilongjiang Province of China (ZD2023F002).
Abstract: Over the past few decades, numerous adaptive Kalman filters (AKFs) have been proposed. However, achieving online estimation with both high estimation accuracy and fast convergence speed is challenging, especially when both the process noise and measurement noise covariance matrices are relatively inaccurate. Maximum likelihood estimation (MLE) has the potential to achieve this goal, since its theoretical accuracy is guaranteed by asymptotic optimality and its convergence is fast owing to its weak dependence on accurate state estimation. Unfortunately, the maximum likelihood cost function is so intricate that existing MLE methods can achieve online estimation only by simply ignoring all historical measurement information, which cannot adequately realize the potential of MLE. In order to design online MLE-based AKFs with high estimation accuracy and fast convergence speed, an online exploratory MLE approach is proposed, based on which a mini-batch coordinate descent noise covariance matrix estimation framework is developed. In this framework, the maximum likelihood cost function is simplified for online estimation into fewer and simpler terms, which are selected in a mini-batch and calculated with a backtracking method. The cost function is then solved by adaptively exploring candidate noise covariance matrix estimates while the historical measurement information is adequately utilized. Furthermore, four specific algorithms are derived under this framework to meet different practical requirements in terms of convergence speed, estimation accuracy, and computational load. Extensive simulations and experiments verify the validity and superiority of the proposed algorithms compared with existing state-of-the-art AKFs.
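The core idea behind likelihood-based noise covariance selection can be shown in a toy scalar setting: run the filter under each candidate measurement noise variance and keep the candidate that maximizes the Gaussian innovation log-likelihood. This is a generic illustration of that principle, not the paper's mini-batch coordinate descent framework; the noise variances and candidate set are made up for the sketch.

```python
import math, random

def innovation_loglik(zs, q, r, x0=0.0, p0=1.0):
    # Scalar Kalman filter for x_{k+1} = x_k + w, z_k = x_k + v, run with
    # assumed variances (q, r); accumulates the innovation log-likelihood
    # -0.5 * sum(log(2*pi*S_k) + nu_k**2 / S_k).
    x, p, ll = x0, p0, 0.0
    for z in zs:
        p_pred = p + q
        s = p_pred + r            # innovation variance under candidate r
        nu = z - x                # innovation (predicted measurement is x)
        ll += -0.5 * (math.log(2 * math.pi * s) + nu * nu / s)
        k = p_pred / s            # Kalman gain
        x = x + k * nu
        p = (1 - k) * p_pred
    return ll

rng = random.Random(7)
true_q, true_r = 0.01, 4.0
x, zs = 0.0, []
for _ in range(2000):
    x += rng.gauss(0.0, math.sqrt(true_q))
    zs.append(x + rng.gauss(0.0, math.sqrt(true_r)))

candidates = [0.25, 1.0, 4.0, 16.0]
best_r = max(candidates, key=lambda r: innovation_loglik(zs, true_q, r))
```

Because the innovation likelihood is sharply peaked around the true measurement noise level, the selection recovers R = 4 from the widely spaced candidate set.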
Funding: Supported by the Shandong Provincial Natural Science Foundation (ZR2022QH144).
Abstract: Fear memory generalization can be defined as the expansion of an individual's originally specific fear responses to a similar yet genuinely harmless stimulus or situation after a traumatic event [1]. Fear generalization within the normal range is an adaptive evolutionary mechanism that facilitates prompt reactions to potential threats and enhances the likelihood of survival.
Funding: Supported by the National Natural Science Foundation of China (11901236, 12261036), the Scientific Research Fund of the Hunan Provincial Education Department (21A0328), and the Young Core Teacher Foundation of Hunan Province ([2020]43).
Abstract: The existence and uniqueness of the maximum likelihood estimator (MLE) of the parameter of the exponential-Poisson distribution in simple random sampling (SRS) was discussed by Kus [2007. A new lifetime distribution. Computational Statistics and Data Analysis 51(9): 4497-4509]. As an alternative to the MLE in SRS, Joukar et al. [2021. Parameter estimation for the exponential-Poisson distribution based on ranked set samples. Communications in Statistics - Theory and Methods 50(3): 560-581] discussed the MLE of the parameter of this distribution in ranked set sampling (RSS). However, they did not discuss the existence and uniqueness of the MLE in RSS, nor did they provide explicit expressions for the Fisher information in RSS. In this article, we discuss the existence and uniqueness of the MLE of the parameter in RSS and give explicit expressions for the Fisher information in RSS. The MLEs are compared in terms of asymptotic efficiency. Numerical studies and a real data application show that the MLEs in RSS can be real competitors to those in SRS.
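The RSS design compared above can be sketched directly: for set size m, each cycle draws m independent sets of m units, ranks each set, and retains only the i-th order statistic from the i-th set. Perfect ranking is assumed, and the exponential population is used only for illustration, not because the paper's exponential-Poisson model reduces to it.

```python
import random

def ranked_set_sample(draw, m, cycles, rng):
    # One RSS cycle: for each rank i = 1..m, draw a fresh set of m units,
    # sort it (judgment ranking, assumed perfect here), and keep only the
    # i-th smallest.  Repeat for the requested number of cycles.
    sample = []
    for _ in range(cycles):
        for i in range(m):
            units = sorted(draw(rng) for _ in range(m))
            sample.append(units[i])
    return sample

rng = random.Random(1)
# m = 5 ranks, 200 cycles -> 1000 retained observations (5000 units drawn).
rss = ranked_set_sample(lambda r: r.expovariate(1.0), 5, 200, rng)
```

The stratification by rank is what gives RSS-based estimators their efficiency edge over SRS with the same number of measured units.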
基金supported by Korea Institute of Energy Technology Evaluation and Planning(KETEP)grant funded by the Korea government(MOTIE)(No.20224B10300090)supported by the MSIT(Ministry of Science and ICT),Republic of Korea,under the ITRC(Information Technology Research Center)support program(IITP-2025-RS-2021-II211835)supervised by the IITP(Institute of Information&Communications Technology Planning&Evaluation).
Abstract: As vehicular networks become increasingly pervasive, enhancing connectivity and reliability has emerged as a critical objective. Among the enabling technologies for advanced wireless communication, particularly those targeting low latency and high reliability, time synchronization is critical, especially in vehicular networks. However, due to the inherent mobility of vehicular environments, consistently exchanging synchronization packets with a fixed base station or access point is challenging. This issue is further exacerbated in signal-shadowed areas such as urban canyons, tunnels, or large-scale indoor halls, where other technologies, such as the global navigation satellite system (GNSS), are unavailable. One-way synchronization techniques offer a feasible approach under such transient connectivity conditions, but one-way schemes still suffer from long convergence times to reach the required synchronization accuracy in these circumstances. In this paper, we propose a WLAN-based multi-stage clock synchronization scheme (WMC) tailored for vehicular networks. The proposed method comprises an initial hard-update stage to rapidly achieve synchronization, followed by a high-precision stable stage based on maximum likelihood estimation (MLE). By implementing the scheme directly in the network driver, we address key limitations of hard-update mechanisms. Our approach significantly reduces the initial period needed to collect high-quality samples and the offset estimation time to reach sub-50 μs accuracy, and subsequently transitions to a refined MLE-based synchronization stage, achieving stable accuracy at approximately 30 μs. The windowed moving average stabilized (reaching 90% of the baseline) in approximately 35 s, which corresponds to just 5.1% of the baseline time. Finally, the impact of synchronization performance on the localization model was validated using the Simulation of Urban Mobility (SUMO). The results demonstrate that more accurate conditions for position estimation can be supported, with an improvement of about 38.5% in the mean error.
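One-way offset estimation of the kind underlying the MLE stage can be illustrated with a standard textbook model, not the WMC scheme itself: if one-way delays are modeled as shifted-exponential, the location MLE, and hence the clock-offset estimate, is simply the minimum observed send-to-receive timestamp difference. All timing values below are hypothetical.

```python
import random

def estimate_offset(send_ts, recv_ts):
    # One-way model: recv = send + offset + delay, with delay >= 0 random.
    # Under exponentially distributed delays, the sample minimum of the
    # one-way differences is the MLE of the (shifted-exponential) location,
    # i.e., of the clock offset.
    return min(r - s for s, r in zip(send_ts, recv_ts))

rng = random.Random(3)
true_offset = 0.0123                         # seconds; hypothetical value
send = [0.001 * k for k in range(500)]       # one packet per millisecond
recv = [s + true_offset + rng.expovariate(1000.0) for s in send]  # ~1 ms mean delay
off_hat = estimate_offset(send, recv)
```

The estimate is biased upward by the smallest delay in the batch, which shrinks quickly as the number of samples grows; collecting enough high-quality samples fast is exactly the bottleneck the abstract's initial stage targets.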
Funding: Supported by the National Natural Science Foundation of China, No. 82460282; the Guizhou Province Science and Technology Plan Project, No. ZK-2023-195; the Guizhou High-Level Innovative Talent Project, No. gzwjrs2022-013; and Health Commission of Guizhou Province Projects, No. gzwkj2024-475 and No. gzwkj2021-150.
Abstract: BACKGROUND: Attention deficit hyperactivity disorder (ADHD) is a prevalent neurodevelopmental disorder in adolescents characterized by inattention, hyperactivity, and impulsivity, which impact cognitive, behavioral, and emotional functioning. Resting-state functional magnetic resonance imaging (rs-fMRI) provides critical insights into the functional architecture of the brain in ADHD. Despite extensive research, the specific brain regions consistently affected in ADHD patients during these formative years have not been comprehensively delineated. AIM: To identify consistently vulnerable brain regions in adolescent ADHD patients using rs-fMRI and activation likelihood estimation (ALE) meta-analysis. METHODS: We conducted a comprehensive literature search up to August 31, 2024, to identify studies investigating functional brain alterations in adolescents with ADHD. We utilized regional homogeneity (ReHo), amplitude of low-frequency fluctuations (ALFF), dynamic ALFF (dALFF), and fractional ALFF (fALFF) analyses, and compared the regions of aberrant spontaneous neural activity in adolescents with ADHD with those in healthy controls (HCs) using ALE. RESULTS: Fifteen studies (468 adolescent ADHD patients and 466 HCs) were included. Combining the ReHo and ALFF/fALFF/dALFF data, the results revealed increased activity in the right lingual gyrus [LING, Brodmann area (BA) 18], left LING (BA 18), and right cuneus (CUN, BA 23) in adolescent ADHD patients compared with HCs (voxel size: 592-32 mm³, P<0.05). Decreased activity was observed in the left medial frontal gyrus (MFG, BA 9) and left precuneus (PCUN, BA 31) in adolescent ADHD patients compared with HCs (voxel size: 960-456 mm³, P<0.05). Jackknife sensitivity analyses demonstrated robust reproducibility in 11 of the 13 tests for the right LING, left LING, and right CUN, and in 11 of the 14 tests for the left MFG and left PCUN. CONCLUSION: We identified specific brain regions with both increased and decreased activity in adolescent ADHD patients, enhancing our understanding of the neural alterations that occur during this pivotal stage of development.
Abstract: A new extended distribution, called the odd exponential generalized exponential-exponential (EOEGE-E) distribution, is proposed based on a generalization of the odd generalized exponential family (OEGE-E). The statistical properties of the proposed distribution are derived. The study evaluates the accuracy of six estimation methods under complete samples: maximum likelihood, ordinary least squares, weighted least squares, maximum product of spacings, Cramér-von Mises, and Anderson-Darling. Two estimation methods for the involved parameters are also considered based on progressively type-II censored data (PTIIC): maximum likelihood and maximum product of spacings. The effectiveness of the proposed distribution was evaluated using data sets from various fields, for which it provides a better fit than existing probability distributions.
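The maximum product of spacings (MPS) method named above maximizes the mean log of successive CDF gaps over the ordered sample, with F(x_(0)) = 0 and F(x_(n+1)) = 1 appended. A minimal sketch using a plain exponential model rather than the EOEGE-E distribution, with a grid search standing in for a proper optimizer:

```python
import math, random

def mps_objective(lam, xs_sorted):
    # Maximum product of spacings: mean log of successive CDF gaps,
    # F(x) = 1 - exp(-lam * x), with F(x_(0)) = 0 and F(x_(n+1)) = 1.
    F = [0.0] + [1.0 - math.exp(-lam * x) for x in xs_sorted] + [1.0]
    return sum(math.log(max(F[i + 1] - F[i], 1e-300))
               for i in range(len(F) - 1)) / (len(F) - 1)

rng = random.Random(11)
xs = sorted(rng.expovariate(2.0) for _ in range(1000))  # true rate = 2.0

grid = [0.05 * k for k in range(1, 101)]                # lambda in (0, 5]
lam_mps = max(grid, key=lambda l: mps_objective(l, xs))
```

For a regular model like this one, the MPS estimate is asymptotically equivalent to the MLE, so the grid search lands near the true rate of 2.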
Abstract: Air pollution, specifically fine particulate matter (PM2.5), represents a critical environmental and public health concern due to its adverse effects on the respiratory and cardiovascular systems. Accurate forecasting of PM2.5 concentrations is essential for mitigating health risks; however, the inherent nonlinearity and dynamic variability of air quality data present significant challenges. This study conducts a systematic evaluation of deep learning algorithms, including the convolutional neural network (CNN), long short-term memory (LSTM), and the hybrid CNN-LSTM, as well as the statistical models autoregressive integrated moving average (ARIMA) and maximum likelihood estimation (MLE), for hourly PM2.5 forecasting. Model performance is quantified using the root mean squared error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and coefficient of determination (R²) metrics. The comparative analysis identifies optimal predictive approaches for air quality modeling, emphasizing computational efficiency and accuracy. Additionally, CNN classification performance is evaluated using a confusion matrix, accuracy, precision, and F1-score. The results demonstrate that the hybrid CNN-LSTM model outperforms the standalone models, exhibiting lower error rates and higher R² values, thereby highlighting the efficacy of deep learning-based hybrid architectures in achieving robust and precise PM2.5 forecasting. This study underscores the potential of advanced computational techniques in enhancing air quality prediction systems for environmental and public health applications.
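The four error metrics used in the comparison have direct definitions and can be computed without any library; a minimal sketch with hypothetical numbers:

```python
import math

def forecast_metrics(y_true, y_pred):
    # RMSE, MAE, MAPE (in percent), and R^2 for a forecast against truth.
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mae = sum(abs(e) for e in errs) / n
    mape = 100.0 * sum(abs(e / t) for e, t in zip(errs, y_true)) / n
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errs)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "R2": r2}

# Hypothetical hourly PM2.5 values (truth vs forecast), for illustration only.
m = forecast_metrics([10.0, 20.0, 30.0, 40.0], [12.0, 18.0, 33.0, 41.0])
```

Note that MAPE is undefined when a true value is zero, which matters for sparse pollutant series; RMSE penalizes large misses more heavily than MAE, which is why the two are usually reported together.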