Correlation power analysis (CPA) combined with genetic algorithms (GA) now achieves greater attack efficiency and can recover all subkeys simultaneously. However, two issues in GA-based CPA still need to be addressed: key degeneration and slow evolution within populations. These challenges significantly hinder key recovery efforts. This paper proposes a screening correlation power analysis framework combined with a genetic algorithm, named SFGA-CPA, to address these issues. SFGA-CPA introduces three operations designed to exploit CPA characteristics: a propagative operation, constrained crossover, and constrained mutation. Firstly, the propagative operation accelerates population evolution by maximizing the number of correct bytes in each individual. Secondly, the constrained crossover and mutation operations effectively address key degeneration by preventing the compromise of correct bytes. Finally, an intelligent search method is proposed to identify optimal parameters, further improving attack efficiency. Experiments were conducted in both simulated environments and on real power traces collected from the SAKURA-G platform. In simulation, SFGA-CPA reduces the number of traces by 27.3% and 60% compared to CPA based on multiple screening methods (MS-CPA) and CPA based on a simple GA method (SGA-CPA), respectively, when the success rate reaches 90%. Moreover, real experimental results on the SAKURA-G platform demonstrate that our approach outperforms other methods.
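As a concrete illustration of the CPA step that SFGA-CPA builds on, the sketch below ranks key-byte guesses by the Pearson correlation between a Hamming-weight leakage model and power traces. This is a minimal, generic CPA pass, not the SFGA-CPA framework itself; the S-box is an identity placeholder (a real AES attack would substitute the AES S-box), and the simulated traces are illustrative.

```python
import numpy as np

# Placeholder substitution table; a real attack uses the AES S-box here.
SBOX = np.arange(256, dtype=np.uint8)

def hamming_weight(x):
    """Hamming weight of each byte in a uint8 array."""
    return np.unpackbits(np.atleast_1d(x).astype(np.uint8)[:, None], axis=1).sum(axis=1)

def cpa_rank_key_byte(plaintexts, traces):
    """Return the key-byte guess whose leakage model best correlates with the traces."""
    t = traces - traces.mean(axis=0)                 # centre each sample column
    t_norm = np.linalg.norm(t, axis=0) + 1e-12
    best_guess, best_corr = 0, -1.0
    for guess in range(256):
        model = hamming_weight(SBOX[plaintexts ^ guess]).astype(float)
        m = model - model.mean()
        r = (m @ t) / (np.linalg.norm(m) * t_norm)   # Pearson r at every sample
        peak = np.abs(r).max()
        if peak > best_corr:
            best_guess, best_corr = guess, peak
    return best_guess
```

A GA-based variant such as SFGA-CPA replaces the exhaustive 256-guess loop with evolutionary search over whole keys, using correlations like these as the fitness signal.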
Based on the major gene and polygene mixed inheritance model for multiple correlated quantitative traits, the authors propose a new joint segregation analysis method for a major gene controlling multiple correlated quantitative traits, comprising major gene detection and estimation of its effect and variation. The effect and variation of the major gene are estimated by the maximum likelihood method implemented via the expectation-maximization (EM) algorithm. The major gene is tested with the likelihood ratio (LR) test statistic. Extensive simulation studies showed that joint analysis not only increases the statistical power of major gene detection but also improves the precision and accuracy of major gene effect estimates. An example of plant height and tiller number in an F2 population of the rice cross Duonieai x Zhonghua 11 is used as an illustration. The results indicate that the genetic difference in these two traits in this cross is due to a single pleiotropic major gene. The additive and dominance effects of the major gene are estimated as -21.3 and 40.6 cm for plant height, and 22.7 and -25.3 for tiller number, respectively. The major gene shows overdominance for plant height and close to complete dominance for tiller number.
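The EM machinery behind this kind of maximum-likelihood estimation can be illustrated on a toy two-component normal mixture, the simplest analogue of separating genotype classes at a major gene. This is a generic equal-variance sketch, not the paper's joint multi-trait segregation model.

```python
import numpy as np

def em_two_normal(x, iters=200):
    """EM for a two-component normal mixture with a common variance."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread-out initial means
    pi, var = 0.5, x.var()
    for _ in range(iters):
        # E-step: posterior probability that each observation came from component 2
        d1 = np.exp(-(x - mu[0]) ** 2 / (2 * var)) * (1 - pi)
        d2 = np.exp(-(x - mu[1]) ** 2 / (2 * var)) * pi
        w = d2 / (d1 + d2)
        # M-step: update mixing proportion, component means, common variance
        pi = w.mean()
        mu[0] = ((1 - w) @ x) / (1 - w).sum()
        mu[1] = (w @ x) / w.sum()
        var = ((1 - w) * (x - mu[0]) ** 2 + w * (x - mu[1]) ** 2).mean()
    return pi, mu, var
```

The paper's method extends this idea to several correlated traits jointly, which is what raises the power of major gene detection relative to single-trait analysis.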
The Least Squares Residual (LSR) algorithm, one of the classical Receiver Autonomous Integrity Monitoring (RAIM) algorithms for Global Navigation Satellite Systems (GNSS), presents a high Missed Detection Risk (MDR) for a large-slope faulty satellite and a high False Alarm Risk (FAR) for a small-slope faulty satellite. From a theoretical analysis of the causes of the high MDR and FAR, the optimal slope is determined, and the optimal test statistic for fault detection is thereby conceived, which minimizes the FAR while keeping the MDR within its allowable value. To construct a test statistic approximating the optimal one, the Correlation-Weighted LSR (CW-LSR) algorithm is proposed. The CW-LSR test statistic remains the sum of squared pseudorange residuals, but the square for the most potentially faulty satellite, judged by correlation analysis between the pseudorange residuals and observation errors, is weighted with an optimal-slope-based factor. It does not obey the same distribution as the optimal test statistic but shares its noncentrality parameter. The superior performance of the CW-LSR algorithm is verified via simulation: it reduces the FAR for a small-slope faulty satellite with the MDR not exceeding its allowable value, and reduces the MDR for a large-slope faulty satellite at the expense of some FAR increase.
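For reference, the classical LSR statistic that CW-LSR reweights is simply the sum of squared pseudorange residuals left over after a least-squares position solve. A minimal sketch, with a random matrix standing in for the real satellite geometry (line-of-sight unit vectors plus a clock column):

```python
import numpy as np

def lsr_test_statistic(G, rho):
    """Classical LSR fault-detection statistic: the sum of squared
    pseudorange residuals after a least-squares solve of G x = rho."""
    x_hat, *_ = np.linalg.lstsq(G, rho, rcond=None)
    resid = rho - G @ x_hat
    return float(resid @ resid)
```

In an actual RAIM implementation this statistic is compared against a chi-square threshold with (number of satellites minus four) degrees of freedom; CW-LSR's change is to weight the residual square of the suspect satellite before summing.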
Based on a spatio-temporal correlativity analysis method, automatic identification techniques for anomalies in gas-monitoring data from coal mining working faces are presented. The asynchronous correlative characteristics of gas migration in the working face airflow direction are qualitatively analyzed. The calculation method for the asynchronous correlation delay step, together with the prediction and inversion formulas for gas concentration varying with time and space after gas emission in the air return roadway, is provided. By calculating one hundred and fifty groups of gas sensor data series from a coal mine that have theoretical correlativity, the range of correlation coefficient values for eight kinds of data anomaly is obtained. The gas monitoring data anomaly identification algorithm based on spatio-temporal correlativity analysis is then presented accordingly. To improve the efficiency of the analysis, gas sensor coding rules that express the spatial topological relations are suggested. The experiments indicate that the methods presented in this article can effectively compensate for the defects of methods based on monitoring data from a single gas sensor.
Disaster mitigation necessitates scientific and accurate aftershock forecasting during the critical 2 h after an earthquake. However, this task faces immense challenges due to the lack of early post-earthquake data and the unreliability of forecasts. To obtain foundational data on sequence parameters of the land-sea adjacent zone and establish a reliable and operational aftershock forecasting framework, we combined the initial sequence parameters extracted from envelope functions and incorporated small-earthquake information into our model to construct a Bayesian algorithm for the early post-earthquake stage. We performed parameter fitting, early post-earthquake aftershock occurrence rate forecasting, and effectiveness evaluation for 36 earthquake sequences with M ≥ 4.0 in the Bohai Rim region since 2010. According to the results, during the early stage after the mainshock, earthquake sequence parameters exhibited relatively drastic fluctuations with significant errors. The integration of prior information can mitigate the intensity of these changes and reduce errors. The initial and stable sequence parameters generally display advantageous distribution characteristics, with each parameter's distribution being relatively concentrated and showing good symmetry and remarkable consistency. The sequence parameter p-values were relatively small, which indicates the comparatively slow attenuation of significant earthquake events in the Bohai Rim region. A certain positive correlation was observed between earthquake sequence parameters b and p. However, the sequence parameters are unrelated to the mainshock magnitude, which implies that their statistical characteristics and trends are universal.
The Bayesian algorithm revealed good forecasting capability for aftershocks in the early post-earthquake period (2 h) in the Bohai Rim region, with an overall forecasting efficacy rate of 76.39%. The proportion of "too low" failures exceeded that of "too high" failures, and the number of forecasting failures for the next three days was greater than that for the next day.
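The aftershock occurrence rates being forecast are conventionally described by the modified Omori (Omori-Utsu) law, λ(t) = K/(t + c)^p, whose integral over a forecast window gives the expected number of events; the p-value discussed above is the decay exponent of this law. A generic sketch, with parameter values that are illustrative rather than fitted to the Bohai Rim sequences:

```python
import numpy as np

def omori_rate(t, K, c, p):
    """Modified Omori (Omori-Utsu) aftershock rate at time t after the mainshock."""
    return K / (t + c) ** p

def expected_count(t1, t2, K, c, p):
    """Expected number of aftershocks in the window [t1, t2] (closed-form integral)."""
    if abs(p - 1.0) < 1e-9:                    # the p = 1 case integrates to a log
        return K * (np.log(t2 + c) - np.log(t1 + c))
    return K * ((t2 + c) ** (1 - p) - (t1 + c) ** (1 - p)) / (1 - p)
```

In a Bayesian early-stage framework of the kind described, prior distributions over (K, c, p) from historical sequences constrain these parameters while the first hours of data are still sparse.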
Improved picture quality is critical to the effectiveness of object recognition and tracking. The consistency of such images is degraded in night-video systems because of the reduced contrast between high-profile objects and varying atmospheric conditions such as mist, fog, and dust. The images then shift in intensity, colour, polarity, and consistency. A general challenge for computer vision lies in the poor appearance of night images under arbitrary illumination and ambient environments. In recent years, with the exponential growth of computing performance, target recognition techniques based on deep learning and machine learning have become standard algorithms for object detection. However, the identification of objects at night poses further problems because of the distorted backdrop and dim light. The Correlation-aware LSTM-based YOLO (You Only Look Once) classifier method for exact object recognition and determination of object properties under night vision was a major inspiration for this work. To create virtual target sets similar to daytime environments, we employ night images as inputs; a highly enhanced image is obtained using histogram-based enhancement, with an iterative Wiener filter removing noise from the image. Feature extraction and feature selection are performed to elect the potential features using adaptive internal linear embedding (AILE) and uplift linear discriminant analysis (ULDA). The region-of-interest mask is segmented using recurrent phase-level-set segmentation. Finally, we use deep convolutional feature fusion and region-of-interest pooling to integrate a highly sophisticated faster Long Short-Term Memory (LSTM) network with the YOLO method for the object tracking system. A range of experimental findings demonstrate that our technique achieves high average accuracy, with a precision of 99.7% for object detection on SSAN datasets, considerably higher than that of other standard object detection mechanisms. Our approach may therefore satisfy the true demands of night-scene target detection applications. We believe that our method will help future research.
Traditional distribution network planning relies on the professional knowledge of planners, especially when analyzing the correlations between the problems existing in the network and the crucial influencing factors. The inherent laws reflected by the historical data of the distribution network are ignored, which affects the objectivity of the planning scheme. In this study, to improve the efficiency and accuracy of distribution network planning, the characteristics of distribution network data were extracted using a data-mining technique, and correlation knowledge of existing problems in the network was obtained. A data-mining model based on correlation rules was established. The inputs of the model were the electrical characteristic indices screened using the grey correlation method. The Apriori algorithm was used to extract correlation knowledge from the operational data of the distribution network and obtain strong correlation rules. Lift (degree of promotion) and chi-square tests were used to verify the rationality of the strong correlation rules output by the model. In this study, the correlation relationship between heavy-load or overload problems of distribution network feeders in different regions and the related characteristic indices was determined, and the confidence of the correlation rules was obtained. These results can provide an effective basis for the formulation of a distribution network planning scheme.
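The Apriori-style support, confidence, and lift computations can be sketched in a few lines. This toy pass counts only 1- and 2-itemsets and uses made-up feeder-condition transactions, not the paper's distribution network data:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Count 1- and 2-itemsets and keep those whose support (fraction of
    transactions) reaches min_support. A tiny Apriori-style first pass."""
    n = len(transactions)
    counts = {}
    for t in transactions:
        for k in (1, 2):
            for combo in combinations(sorted(t), k):
                counts[combo] = counts.get(combo, 0) + 1
    return {items: c / n for items, c in counts.items() if c / n >= min_support}

def confidence_and_lift(freq, antecedent, consequent):
    """Confidence and lift of the rule antecedent -> consequent."""
    rule_supp = freq[tuple(sorted(antecedent + consequent))]
    conf = rule_supp / freq[tuple(sorted(antecedent))]
    lift = conf / freq[tuple(sorted(consequent))]
    return conf, lift
```

A lift above 1 indicates a genuinely promoting association, which is the role the "degree of promotion" test plays in validating the mined rules.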
In most real operational conditions, only response data are measurable while the actual excitations are unknown, so modal parameters must be extracted from responses alone. This paper gives a theoretical formulation for the cross-correlation functions and cross-power spectra between the outputs under the assumption of white-noise excitation. It widens the field of modal analysis under ambient excitation because many classical methods based on impulse response functions or frequency response functions can easily be used for modal analysis under unknown excitation. The Polyreference Complex Exponential method and the Eigensystem Realization Algorithm, using cross-correlation functions in the time domain, and the Orthogonal Polynomial method, using cross-power spectra in the frequency domain, are applied to a steel frame to extract modal parameters under operational conditions. The modal properties of the steel frame from these three methods are compared with those from frequency response function analysis. The results show that the modal analysis methods using cross-correlation functions or cross-power spectra presented in this paper can extract modal parameters efficiently under unknown excitation.
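The cross-correlation functions that these output-only methods operate on are estimated directly from the response records. A minimal sketch of the standard biased estimator (dividing by the full record length at every lag):

```python
import numpy as np

def cross_correlation(y1, y2, max_lag):
    """Biased sample estimate of the cross-correlation function R12(tau)
    between two response channels, for lags 0..max_lag."""
    n = len(y1)
    return np.array([np.dot(y1[:n - k], y2[k:]) / n for k in range(max_lag + 1)])
```

Under white-noise excitation these estimates play the role that impulse response functions play in classical modal testing, which is why time-domain identifiers such as the Eigensystem Realization Algorithm can be applied to them directly.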
In a Digital Particle Image Velocimetry (DPIV) system, the correlation of digital images is normally used to acquire the displacement information of particles and give estimates of the flow field. The accuracy and robustness of the correlation algorithm directly affect the validity of the analysis result. In this article, an improved algorithm for the correlation analysis is proposed that can be used to optimize the selection and determination of the correlation window, analysis area, and search path. This algorithm not only greatly reduces the amount of calculation, but also effectively improves the accuracy and reliability of the correlation analysis. The algorithm was demonstrated to be accurate and efficient in the measurement of the velocity field in a flocculation pool.
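A standard DPIV building block (not the paper's improved algorithm) is estimating the integer-pixel displacement of an interrogation window from the peak of an FFT-based cross-correlation. A sketch under the usual circular-correlation convention:

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Estimate the integer-pixel shift between two interrogation windows by
    locating the peak of their FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(win_a).conj() * np.fft.fft2(win_b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped peak indices to signed shifts
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

Practical DPIV codes refine this integer estimate with sub-pixel peak fitting and, as in the article, tune the window size, analysis area, and search path to balance cost against reliability.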
In this paper, two approaches are developed for directly identifying single-rate models of dual-rate stochastic systems in which the input updating frequency is an integer multiple of the output sampling frequency. The first is the generalized Yule-Walker algorithm and the second is a two-stage algorithm based on the correlation technique. The basic idea is to directly identify the parameters of the underlying single-rate models, instead of the lifted models of the dual-rate systems, from the dual-rate input-output data, assuming that the measurement data are stationary and ergodic. An example is given.
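For background, the classical single-rate Yule-Walker idea, solving a linear system built from sample autocovariances for the AR coefficients, can be sketched as follows; the paper's generalized dual-rate version is more involved:

```python
import numpy as np

def yule_walker_ar(y, order):
    """Solve the Yule-Walker equations for AR coefficients using biased
    sample autocovariances of the (mean-removed) series y."""
    n = len(y)
    y = y - y.mean()
    r = np.array([np.dot(y[:n - k], y[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1 : order + 1])
```

The correlation-based flavour of both of the paper's approaches is visible here: only second-order statistics of the data enter the estimate, which is what makes the stationarity and ergodicity assumptions essential.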
In this paper, the simple generation algorithms are improved. According to the geometric meaning of the structural reliability index, a method is proposed to deal with the variables in the standard normal space. With consideration of the variable distributions, the correlation coefficients of the variables, and the fuzzy reliability index, the feasibility and reliability of the algorithms are proved with an example of structural reliability analysis and optimization.
In the field of nonlinear filtering (NLF), it is well known that the unnormalized conditional density of the states satisfies the Zakai equation. The splitting-up algorithm was first studied in the independent-noise case by Bensoussan, et al. (1990). In this paper, the authors extend the convergence analysis of the splitting-up algorithm to the correlated-noise case. Given a time discretization, one splits the solution of the Zakai equation into two interlacing processes (with possible computational advantage). These two processes correspond to prediction and updating, respectively. Under certain conditions, the authors show that both processes tend to the solution of the Zakai equation as the time step goes to zero. The authors specify the conditions imposed on the splitting to guarantee convergence. The major technical difficulty in the correlated-noise case, compared with the independent case, is to control the gradient of the second process in a suitable sense. To illustrate the potential computational advantage of schemes based on splitting-up, the authors experiment on a toy NLF model using the feedback particle filter (FPF) developed from the splitting-up method, with sampling importance resampling (SIR) as a comparison. The FPF outperforms in both accuracy and efficiency.
In recent years, as airspace restrictions and passenger traffic volumes have increased, the rate of flight delays has risen rapidly and the resulting conflicts have become prominent. A flight delay not only wastes passengers' time but also imposes substantial costs on airlines, so scientific and reasonable guidance is necessary to reduce the effects of delays. This paper first establishes a method to assess the degree of airport delay and quantifies all factors that cause flight delays; ultimately, we offer a proposal for dealing with the flow factor, which is the principal cause of flight delays.
To address the large errors in current employment quality evaluation, an employment quality evaluation model based on the grey correlation degree method and fuzzy C-means (FCM) clustering is proposed. Firstly, related research on employment quality evaluation is analyzed, an employment quality evaluation index system is established, and the index data are collected and normalized. Then, the weight of each employment quality evaluation index is determined by grey relational analysis, and some unimportant indices are removed. Finally, the employment quality evaluation model is established using the fuzzy clustering algorithm and compared with other employment quality evaluation models. The test results show that the evaluation accuracy of the designed model exceeds 93%, its evaluation error meets the requirements of practical application, and its evaluation effect is much better than that of the comparison models. The comparison test verifies the superiority of the model.
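The grey relational weighting step can be sketched as follows: each comparison sequence (here, one normalized index series per row) receives a relational degree against the reference (ideal) sequence, with ρ the distinguishing coefficient. The data and index layout are illustrative, not the paper's employment indicators.

```python
import numpy as np

def grey_relational_degrees(reference, comparisons, rho=0.5):
    """Grey relational analysis sketch: degree of relation between a reference
    sequence (shape (n,)) and each comparison sequence (shape (m, n)).
    Assumes the maximum absolute difference is positive."""
    diffs = np.abs(comparisons - reference)          # shape (m, n)
    dmin, dmax = diffs.min(), diffs.max()
    coeff = (dmin + rho * dmax) / (diffs + rho * dmax)
    return coeff.mean(axis=1)                        # one degree per sequence
```

Indices whose relational degree falls below a chosen cutoff would be the "unimportant" ones removed before the FCM clustering stage.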
Funding: supported by the Hunan Provincial Natural Science Foundation of China (2022JJ30103), the "14th Five-Year" Key Disciplines and Application-Oriented Special Disciplines of Hunan Province (Xiangjiaotong [2022], 351), and the Science and Technology Innovation Program of Hunan Province (2016TP1020).
Funding: This research was supported by the National Natural Science Foundation of China to Xu Chenwu (39900080, 30270724 and 30370758).
Funding: co-supported by the National Natural Science Foundation of China (Nos. 41804024, 41804026) and the Open Fund of the Shaanxi Key Laboratory of Integrated and Intelligent Navigation of China (No. SKLIIN-20190205).
Funding: Supported by the National Natural Science Foundation of China (40971275, 50811120111).
Funding: supported by the Natural Science Foundation of Tianjin (No. 22JCQNJC01070), the National Natural Science Foundation of China (No. 42404079), and the Key Project of the Tianjin Earthquake Agency (No. Zd202402).
Funding: supported by the Science and Technology Project of China Southern Power Grid (GZHKJXM20210043-080041KK52210002).
Funding: Item of the 9th Five-Year Plan of the Aeronautical Industrial Corporation.
Abstract: In most real operational conditions only response data are measurable while the actual excitations are unknown, so modal parameters must be extracted from responses alone. This paper gives a theoretical formulation for the cross-correlation functions and cross-power spectra between the outputs under the assumption of white-noise excitation. It widens the field of modal analysis under ambient excitation, because many classical methods based on impulse response functions or frequency response functions can then be used for modal analysis under unknown excitation. The Polyreference Complex Exponential method and the Eigensystem Realization Algorithm, using cross-correlation functions in the time domain, and the Orthogonal Polynomial method, using cross-power spectra in the frequency domain, are applied to a steel frame to extract modal parameters under operational conditions. The modal properties of the steel frame obtained from these three methods are compared with those from frequency response function analysis. The results show that the modal analysis methods using cross-correlation functions or cross-power spectra presented in this paper can extract modal parameters efficiently under unknown excitation.
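The core idea — that response correlations under white-noise excitation decay like free-decay responses and therefore carry the modal frequencies — can be checked numerically. The single-degree-of-freedom system, sampling rate, and damping ratio below are assumptions for the sketch, not values from the steel-frame experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 256.0, 1 << 15
dt = 1.0 / fs
fn, zeta = 12.0, 0.02            # assumed natural frequency (Hz) and damping ratio
wn = 2 * np.pi * fn

# Simulate an SDOF oscillator x'' + 2*zeta*wn*x' + wn^2*x = w(t)
# under white-noise forcing (semi-implicit Euler integration).
x = np.zeros(n)
v = 0.0
w = rng.standard_normal(n)
for k in range(1, n):
    a = w[k] - 2 * zeta * wn * v - wn**2 * x[k - 1]
    v += a * dt
    x[k] = x[k - 1] + v * dt

# Correlation function of the response via FFT (Wiener-Khinchin theorem).
X = np.fft.rfft(x - x.mean(), 2 * n)
r = np.fft.irfft(X * np.conj(X))[: n // 4]

# The correlation oscillates at the damped natural frequency:
# read it off the spectrum of the correlation function itself.
R = np.abs(np.fft.rfft(r * np.hanning(len(r))))
freqs = np.fft.rfftfreq(len(r), dt)
f_est = freqs[R.argmax()]        # should land close to fn
```

Methods such as the Polyreference Complex Exponential technique would fit the decaying oscillation in `r` directly, exactly as they would fit an impulse response.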
Abstract: In a digital particle image velocimetry (DPIV) system, the correlation of digital images is normally used to acquire the displacement information of particles and give estimates of the flow field. The accuracy and robustness of the correlation algorithm directly affect the validity of the analysis result. In this article, an improved algorithm for the correlation analysis is proposed that optimizes the selection of the correlation window, analysis area and search path. The algorithm not only greatly reduces the amount of calculation, but also effectively improves the accuracy and reliability of the correlation analysis. The algorithm was demonstrated to be accurate and efficient in the measurement of the velocity field in a flocculation pool.
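The core DPIV step the abstract builds on can be sketched directly: cross-correlate an interrogation window from frame A with the same window in frame B, and read the mean particle displacement off the correlation peak. The synthetic particle field and displacement below are assumptions for illustration; the paper's contribution (optimized window, analysis area and search path) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
frame_a = np.zeros((N, N))
ys, xs = rng.integers(8, N - 8, size=(2, 40))   # random particle positions
frame_a[ys, xs] = 1.0
dy, dx = 3, -2                                  # true displacement (pixels)
frame_b = np.roll(frame_a, (dy, dx), axis=(0, 1))

# Circular cross-correlation via FFT (assumes a periodic window; fine for a sketch).
A = np.fft.fft2(frame_a - frame_a.mean())
B = np.fft.fft2(frame_b - frame_b.mean())
corr = np.fft.ifft2(np.conj(A) * B).real
peak = np.unravel_index(corr.argmax(), corr.shape)
# Wrap indices above N/2 back to negative displacements.
est = [int(p) if p <= N // 2 else int(p) - N for p in peak]
```

Real DPIV codes refine the integer peak with sub-pixel interpolation and slide the window across the image to build the full velocity field; the FFT form above is what makes the per-window correlation cheap.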
Funding: This work was supported by the National Natural Science Foundation of China (No. 60574051).
Abstract: In this paper, two approaches are developed for directly identifying single-rate models of dual-rate stochastic systems in which the input updating frequency is an integer multiple of the output sampling frequency. The first is a generalized Yule-Walker algorithm; the second is a two-stage algorithm based on the correlation technique. The basic idea is to identify the parameters of the underlying single-rate models directly from the dual-rate input-output data, instead of identifying the lifted models of the dual-rate systems, assuming that the measurement data are stationary and ergodic. An example is given.
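The Yule-Walker idea underlying the first approach can be illustrated on a plain AR(2) model: the coefficients solve a linear system built from sample autocovariances, so they can be identified from stationary, ergodic output data alone. The model order and coefficients below are assumptions for the sketch; the paper's generalized version handles the dual-rate input-output structure, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
a_true = np.array([0.6, -0.28])   # y[t] = 0.6*y[t-1] - 0.28*y[t-2] + e[t] (stable)
n = 200_000
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = a_true[0] * y[t - 1] + a_true[1] * y[t - 2] + e[t]

def acov(x, lag):
    """Sample autocovariance of x at the given lag."""
    x = x - x.mean()
    return (x[: len(x) - lag] * x[lag:]).mean()

r = np.array([acov(y, k) for k in range(3)])
# Yule-Walker equations: R a = [r(1), r(2)], with R the Toeplitz
# autocovariance matrix of the output.
R = np.array([[r[0], r[1]],
              [r[1], r[0]]])
a_hat = np.linalg.solve(R, r[1:])   # should recover a_true
```

With 200,000 samples the estimate is accurate to a few thousandths, consistent with the stationarity and ergodicity assumptions stated in the abstract.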
Funding: This work was financially supported by the National Science Foundation of China.
Abstract: In this paper, simple generation algorithms are improved. Based on the geometric meaning of the structural reliability index, a method is proposed to handle the variables in standard normal space. Taking into account the variable distributions, the correlation coefficients of the variables, and the fuzzy reliability index, the feasibility and reliability of the algorithms are demonstrated with an example of structural reliability analysis and optimization.
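The geometric meaning referred to above is that, for a linear limit state in standard normal space, the reliability index is the distance from the origin to the failure surface. A minimal illustration (not the paper's improved algorithm) for a limit state g = R - S with correlated normal variables: the closed form beta = mu_g / sigma_g agrees with a Monte Carlo estimate in the correlated space. All numerical values below are assumptions.

```python
import numpy as np
from math import erfc, sqrt

mR, sR = 200.0, 20.0    # assumed resistance mean / std
mS, sS = 140.0, 25.0    # assumed load mean / std
rho = 0.3               # assumed correlation coefficient between R and S

# Closed-form reliability index for the linear limit state g = R - S.
beta = (mR - mS) / sqrt(sR**2 + sS**2 - 2 * rho * sR * sS)
pf_analytic = 0.5 * erfc(beta / sqrt(2))     # Phi(-beta)

# Monte Carlo check: sample correlated (R, S) via a Cholesky factor,
# i.e. map independent standard normals into the correlated space.
rng = np.random.default_rng(4)
C = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
u = C @ rng.standard_normal((2, 1_000_000))
R = mR + sR * u[0]
S = mS + sS * u[1]
pf_mc = (R - S < 0).mean()                   # empirical failure probability
```

For nonlinear limit states the distance interpretation still applies but the design point must be found iteratively, which is where generation algorithms of the kind the paper improves come in.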
基金financially supported by the National Key R&D Program of China under Grant No.2022YFA1005103National Natural Science Foundation of China under Grant Nos. 12271019, 11871003,12201376, 11961141005the Fundamental Research Funds for the Central Universities under Grant Nos.GK202103002, YWF-22-L-640
Abstract: In the field of nonlinear filtering (NLF), it is well known that the unnormalized conditional density of the states satisfies the Zakai equation. The splitting-up algorithm was first studied in the independent-noise case by Bensoussan, et al. (1990). In this paper, the authors extend the convergence analysis of the splitting-up algorithm to the correlated-noise case. Given a time discretization, the solution of the Zakai equation is split into two interlacing processes (with possible computational advantage), corresponding respectively to prediction and updating. Under certain conditions, the authors show that both processes tend to the solution of the Zakai equation as the time step goes to zero, and they specify the conditions imposed on the splitting to guarantee convergence. The major technical difficulty in the correlated-noise case, compared with the independent case, is to control the gradient of the second process in a suitable sense. To illustrate the potential computational advantage of schemes based on splitting-up, the authors experiment on a toy NLF model using a feedback particle filter (FPF) developed from the splitting-up method, with sampling importance resampling (SIR) as the comparison. The FPF outperforms SIR in both accuracy and efficiency.
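The prediction/updating split can be seen concretely in a bootstrap (SIR-style) particle filter, where each time step alternates a prediction pass (propagate particles through the dynamics) and an updating pass (reweight by the observation likelihood, then resample). This is only a loose discrete-time analogue on an assumed toy 1D model, not the paper's Zakai-equation scheme or its FPF.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_p = 50, 2000
q, r = 0.1, 0.2          # assumed process / observation noise std

x_true = 0.0
particles = rng.standard_normal(n_p)
estimates, truths = [], []
for t in range(T):
    x_true = 0.95 * x_true + q * rng.standard_normal()      # hidden state
    y = x_true + r * rng.standard_normal()                  # observation
    # --- prediction step (first half of the split) ---
    particles = 0.95 * particles + q * rng.standard_normal(n_p)
    # --- updating step (second half of the split) ---
    logw = -0.5 * ((y - particles) / r) ** 2                # Gaussian likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates.append(np.dot(w, particles))
    # Multinomial resampling keeps the particle cloud from degenerating.
    particles = rng.choice(particles, size=n_p, p=w)
    truths.append(x_true)

rmse = np.sqrt(np.mean((np.array(estimates) - np.array(truths)) ** 2))
```

In the continuous-time setting of the paper, the prediction half advances the density by the forward (Fokker-Planck-type) operator and the updating half applies the observation term of the Zakai equation; the correlated-noise case couples the two halves, which is the source of the gradient-control difficulty mentioned above.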
Abstract: In recent years, as airspace restrictions and passenger traffic volumes have increased, the rate of flight delays has risen rapidly and the resulting conflicts have become prominent. A flight delay not only wastes passengers' time but also imposes heavy costs on airlines, so scientific and reasonable guidance is needed to reduce the effects of delays. This paper first establishes a method to assess the degree of airport delay and to quantify all the factors that cause flight delays, and ultimately offers a proposal to deal with the flow factor, which is the principal cause of flight delays.
基金supported by the project of science and technology of Henan province under Grant No.222102240024 and 202102210269the Key Scientific Research projects in Colleges and Universities in Henan Grant No.22A460013 and No.22B413004.
Abstract: To address the large errors of current employment quality evaluation, an employment quality evaluation model based on the grey correlation degree method and fuzzy C-means (FCM) clustering is proposed. First, related research on employment quality evaluation is analyzed, an employment quality evaluation index system is established, and the index data are collected and normalized. Then, the weight of each evaluation index is determined by grey relational analysis, and some unimportant indices are removed. Finally, the employment quality evaluation model is built using the fuzzy clustering algorithm and compared with other evaluation models. The test results show that the evaluation accuracy of the proposed model exceeds 93%, its evaluation error meets the requirements of practical application, and its evaluation effect is much better than that of the comparison models, verifying the superiority of the model.
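The grey relational analysis step used for index weighting above can be sketched as follows: score each candidate index by its grey relational degree to a reference (ideal) sequence, then normalize the degrees into weights; low-scoring indices can be dropped before clustering. The index matrix and the conventional distinguishing coefficient of 0.5 are assumptions for illustration, not data from the study.

```python
import numpy as np

X = np.array([              # rows: 3 hypothetical candidate indices, cols: 5 samples
    [0.90, 0.80, 0.85, 0.70, 0.95],
    [0.20, 0.90, 0.30, 0.80, 0.10],
    [0.88, 0.75, 0.80, 0.72, 0.90],
])
ref = X.max(axis=0)         # reference sequence: best value per sample

rho = 0.5                   # distinguishing coefficient (conventional choice)
d = np.abs(X - ref)         # absolute deviations from the reference
dmin, dmax = d.min(), d.max()
# Grey relational coefficient of each entry, then the per-index degree.
xi = (dmin + rho * dmax) / (d + rho * dmax)
degree = xi.mean(axis=1)          # grey relational degree per candidate index
weights = degree / degree.sum()   # normalized index weights
```

Indices whose degree falls below a chosen threshold would be removed before the FCM clustering stage; here the erratic second index scores visibly lower than the other two.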