The uniaxial compressive strength (UCS) of rocks is a vital geomechanical parameter widely used for rock mass classification, stability analysis, and engineering design in rock engineering. Various UCS testing methods and apparatuses have been proposed over the past few decades. The objective of the present study is to summarize the status and development of the theories, test apparatuses, and data processing of the existing testing methods for UCS measurement. It starts by elaborating the theories of these test methods. The test apparatuses and development trends for UCS measurement are then summarized, followed by a discussion of rock specimens for the test apparatuses and of data processing methods. Next, recommendations are given for selecting a method for UCS measurement. The review reveals that the rock failure mechanisms in the UCS testing methods can be divided into compression-shear, compression-tension, composite, and no obvious failure modes. These apparatuses are trending towards automation, digitization, precision, and multi-modal testing. Two size correction methods are commonly used: one develops an empirical correlation between the measured indices and the specimen size; the other uses a standard specimen to calculate a size correction factor. Three to five input parameters are commonly utilized in soft computing models to predict the UCS of rocks. The test method for UCS measurement can be selected according to the testing scenario and the specimen size. Engineers can thus gain a comprehensive understanding of the UCS testing methods and their potential developments in various rock engineering endeavors.
In this paper, a dynamic linear detecting method is used for data processing in a portable blood sugar analyzer: a non-linearity coefficient NL% is introduced, the non-linearity of the data is estimated continuously and dynamically, and the measurement endpoint is determined when NL% exceeds a reference value (5%). This solves the problem caused by substrate depletion following the redox reaction in the analyzer. In contrast to the conventional end-point method, the dynamic linear detecting method is based on multipoint data collection. Experiments measuring calibration glucose solutions at 8 concentrations from 50 mg/dl to 400 mg/dl were carried out with the analyzer developed by our group. A linear regression curve was obtained with a correlation coefficient of 0.9995 and a residual of 2.8080. The obtained correlation, residual, and computational workload are all suitable for a portable blood sugar analyzer.
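The abstract does not give the exact definition of NL%, so the sketch below assumes one plausible reading: fit a straight line to the multipoint record collected so far, express the RMS deviation from that line as a percentage of the signal span, and stop when it crosses the 5% reference. All function names are hypothetical.

```python
import numpy as np

def nonlinearity_percent(t, y):
    """Fit a line to (t, y) and return the RMS deviation from that line
    as a percentage of the signal span -- one plausible reading of NL%."""
    slope, intercept = np.polyfit(t, y, 1)
    residual = y - (slope * t + intercept)
    span = np.ptp(y) if np.ptp(y) > 0 else 1.0
    return 100.0 * np.sqrt(np.mean(residual**2)) / span

def dynamic_linear_endpoint(t, y, nl_limit=5.0, min_points=4):
    """Scan the multipoint record and stop at the first window whose
    non-linearity exceeds the reference value (5% in the paper)."""
    for n in range(min_points, len(t) + 1):
        if nonlinearity_percent(t[:n], y[:n]) > nl_limit:
            return n - 1          # last index while the data are still linear
    return len(t)                 # whole record remained linear
```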
Current velocity observation by LADCP (Lowered Acoustic Doppler Current Profiler) has the advantages of a large vertical observation range and high operability compared with traditional current measurement methods, and is widely used in ocean observation. Shear and inverse methods are now commonly used by the international marine community to process LADCP data and calculate ocean current profiles. Each method has its advantages and shortcomings: the shear method calculates the current shear more accurately but the absolute value of the current less accurately, while the inverse method calculates the absolute value of the current velocity more accurately but the current shear less accurately. Based on the shear method, this paper proposes a layering shear method that calculates the current velocity profile by "layering averaging", and proposes corresponding current calculation methods for the different types of problems found in several field observation datasets from the western Pacific, forming an independent LADCP data processing system. Comparison results show that the layering shear method achieves the same accuracy as the inverse method in the absolute value of the current velocity, while retaining the advantage of the shear method in the calculation of the current shear.
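The core step of any shear-method calculation is to average point estimates of vertical shear and integrate them into a velocity profile. The sketch below is a minimal version of the "layering averaging" idea under assumed layer boundaries; variable names are hypothetical and the paper's full system also handles data-quality problems not shown here.

```python
import numpy as np

def layered_shear_profile(z, u, layer_edges):
    """Bin point estimates of vertical shear du/dz into layers, average
    within each layer ('layering averaging'), then integrate downward to
    get a baroclinic velocity profile (known only up to a constant offset)."""
    dz = np.diff(z)
    shear = np.diff(u) / dz                     # du/dz between depth bins
    zmid = 0.5 * (z[:-1] + z[1:])
    layer_shear = np.array([
        np.nanmean(shear[(zmid >= z0) & (zmid < z1)])
        for z0, z1 in zip(layer_edges[:-1], layer_edges[1:])
    ])
    thickness = np.diff(layer_edges)
    # cumulative integral of mean shear over layer thickness
    u_rel = np.concatenate([[0.0], np.cumsum(layer_shear * thickness)])
    return layer_edges, u_rel                   # velocity at layer edges
```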
The increasing demand for high-resolution solar observations has driven the development of advanced data processing and enhancement techniques for ground-based solar telescopes. This study focuses on developing a Python-based package (GT-scopy) for data processing and enhancement for giant solar telescopes, with application to the 1.6 m Goode Solar Telescope (GST) at Big Bear Solar Observatory. The objective is to develop modern data processing software that refines existing data acquisition, processing, and enhancement methodologies to achieve atmospheric effect removal and accurate alignment at the sub-pixel level, particularly within processing levels 1.0-1.5. In this research, we implemented an integrated and comprehensive data processing procedure that includes image de-rotation, zone-of-interest selection, coarse alignment, correction for atmospheric distortions, and fine alignment at the sub-pixel level with an advanced algorithm. The results demonstrate a significant improvement in image quality, with enhanced visibility of fine solar structures both in sunspots and quiet-Sun regions. The enhanced data processing package developed in this study significantly improves the utility of data obtained from the GST, paving the way for more precise solar research and contributing to a better understanding of solar dynamics. The package can be adapted for other ground-based solar telescopes, such as the Daniel K. Inouye Solar Telescope (DKIST), the European Solar Telescope (EST), and the 8 m Chinese Giant Solar Telescope, potentially benefiting the broader solar physics community.
To improve our understanding of the formation and evolution of the Moon, one of the payloads onboard the Chang'e-3 (CE-3) rover is the Lunar Penetrating Radar (LPR). This investigation is the first attempt to explore the lunar subsurface structure using high-resolution ground penetrating radar. We have probed the subsurface to a depth of several hundred meters using the LPR. In-orbit testing, data processing, and the preliminary results are presented. These observations have revealed the configuration of the regolith, whose thickness varies from about 4 m to 6 m. In addition, one layer of lunar rock, which is about 330 m deep and might have accumulated during a depositional hiatus of the mare basalts, was detected.
The Extreme Ultraviolet Camera (EUVC) onboard the Chang'e-3 (CE-3) lander is used to observe the structure and dynamics of Earth's plasmasphere from the Moon. By detecting the resonance line emission of helium ions (He+) at 30.4 nm, the EUVC images the entire plasmasphere with a time resolution of 10 min and a spatial resolution of about 0.1 Earth radius (RE) in a single frame. We first present details of the EUVC data processing and the data acquisition in the commissioning phase, and then report some initial results, which reflect the basic features of the plasmasphere well. The photon count and emission intensity of the EUVC are consistent with previous observations and models, indicating that the EUVC works normally and can provide high-quality data for future studies.
The microwave radiometer (MRM) onboard the Chang'E-1 (CE-1) lunar orbiter is a 4-frequency microwave radiometer mainly used to obtain the brightness temperature (TB) of the lunar surface, from which the thickness, temperature, dielectric constant, and other related properties of the lunar regolith can be derived. The working mode of the CE-1 MRM, the ground calibration (including the official calibration coefficients), and the acquisition and processing of the raw data are introduced. Our data analysis shows that TB increases with increasing frequency, decreases towards the lunar poles, and is significantly affected by solar illumination. Our analysis also reveals that the main uncertainty in TB comes from the ground calibration.
Experimental Design and Data Processing is an important core professional basic course for food science majors. The course is both theoretical and practical; it contains many formulas and abstract, difficult-to-understand content, and its teaching suffers from problems such as students' poor interest in learning, insufficient mastery of what they have learned, and an inability to combine theory with practice organically. After analyzing these problems, this paper puts forward reform measures for the teaching mode of Experimental Design and Data Processing using the intelligent teaching tools of the Superstar platform.
Weighted fusion algorithms, which can be applied in the area of multi-sensor data fusion, are developed based on the weighted least squares method. A weighted fusion algorithm that establishes the relationship between the weight coefficients and the measurement noise is proposed, taking the correlation of the measurement noise into account. A simplified weighted fusion algorithm is then deduced under the assumption that the measurement noise is uncorrelated. In addition, an algorithm is presented that adjusts the weight coefficients in the simplified algorithm by estimating the measurement noise from the measurements. Simulation and experiment show that the precision of a multi-sensor system based on these algorithms is better than that of systems based on other algorithms.
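For the simplified, uncorrelated-noise case, the weighted-least-squares solution reduces to classical inverse-variance weighting. A minimal sketch, assuming scalar sensors with known noise variances (function name hypothetical):

```python
import numpy as np

def inverse_variance_fusion(x, var):
    """Fuse sensor readings x with noise variances var, assuming the
    measurement noises are uncorrelated: weight w_i proportional to 1/var_i.
    Returns the fused estimate and its variance."""
    var = np.asarray(var, dtype=float)
    w = 1.0 / var
    w /= w.sum()                       # weights sum to 1
    x_hat = np.dot(w, x)
    var_hat = 1.0 / np.sum(1.0 / var)  # fused variance is never larger
    return x_hat, var_hat

# Example: three sensors measuring the same quantity
x_hat, var_hat = inverse_variance_fusion([10.1, 9.8, 10.4], [0.04, 0.09, 0.25])
```

The fused variance is always smaller than the best single-sensor variance, which is the precision gain the paper's experiments measure.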
Environmental systems, including the atmosphere, the oceans, and biological systems, can be modeled by mathematical equations to estimate their states. These equations can be solved with numerical methods, which require initial and boundary conditions. Prediction and simulation for different case studies are major reasons for the great importance of these models. Satellite data from a wide range of sensors provide observations that indicate the system state. Thus both numerical models and satellite data provide estimates of system states, and the best estimate is sought among the different estimates. Assimilating observations into numerical weather models with data assimilation techniques provides an improved estimate of the system state. In this work, highlights of the mathematical perspective on data assimilation methods are introduced. Least squares estimation techniques are introduced first because they are the basic mathematical building block for data assimilation methods. A stochastic version of least squares is included to handle the error in both model and observations. The three- and four-dimensional variational assimilation methods (3DVar and 4DVar, respectively) are then handled. Kalman filters and their derivatives (KF, EKF, EnKF) and hybrid filters are introduced.
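As a concrete anchor for the variational part, the 3DVar analysis state is the minimizer of the standard cost function that balances a background state x_b against observations y, with B and R the background- and observation-error covariances and H the observation operator:

```latex
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
              + \tfrac{1}{2}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathsf T}\mathbf{R}^{-1}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)
```

4DVar extends the second term to a sum over observation times, with the model propagating x across the assimilation window.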
This paper proposes a class of generalized mixed least squares methods (GMLSM) for the estimation of weights in the analytic hierarchy process and studies their desirable properties, such as invariance under transpose and invariance under change of scale. It also gives a simple convergent iterative algorithm and some numerical examples. The methods are then compared with the well-known eigenvector method (EM). Theoretical analysis and the numerical results show that the GMLSM generally needs fewer iterations than the mixed least squares method (MLSM), and that the GMLSM is preferable to the EM in several important respects.
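For orientation, the plain least squares baseline that these mixed and generalized methods refine estimates the priority weights w from a pairwise comparison matrix A (with a_ij ≈ w_i/w_j) by minimizing the sum of squared deviations. A minimal sketch of that baseline, not the paper's GMLSM:

```python
import numpy as np
from scipy.optimize import minimize

def ls_ahp_weights(A):
    """Estimate AHP priority weights from pairwise comparison matrix A
    by minimizing sum_ij (a_ij - w_i/w_j)^2 subject to w > 0, sum(w) = 1.
    This is the plain least squares baseline, not the paper's GMLSM."""
    n = A.shape[0]
    def cost(v):
        w = np.exp(v)                  # positivity via log parameterization
        w = w / w.sum()
        return np.sum((A - np.outer(w, 1.0 / w))**2)
    res = minimize(cost, np.zeros(n), method="Nelder-Mead")
    w = np.exp(res.x)
    return w / w.sum()

A = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1.0]])
print(ls_ahp_weights(A))
```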
For the accurate extraction of cavity decay time, a data-point selection step is added to the weighted least squares method. We derive the expected precision, accuracy, and computational cost of this improved method, and examine these performances by simulation. Comparing this method with the nonlinear least squares fitting (NLSF) method and the linear regression of the sum (LRS) method in derivations and simulations, we find that it achieves the same or even better precision, comparable accuracy, and lower computational cost. We also test the method on experimental decay signals; the results agree with those obtained from the nonlinear least squares fitting method.
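A common way to cast exponential-decay fitting as weighted least squares is to fit ln(y) against t with weights proportional to y, which compensates for the noise distortion introduced by the log transform. The sketch below adds a simple amplitude-threshold selection step; it illustrates the general technique, not necessarily the paper's exact selection rule, and all names are hypothetical.

```python
import numpy as np

def decay_time_wls(t, y, threshold=0.05):
    """Extract decay time tau from y ~ A*exp(-t/tau) by weighted least
    squares on ln(y). Weights ~ y compensate for the log transform
    (np.polyfit squares the weighted residuals); the selection step keeps
    only points safely above the noise floor."""
    keep = y > threshold * y.max()          # simple data-point selection
    t, y = t[keep], y[keep]
    slope, _ = np.polyfit(t, np.log(y), 1, w=y)
    return -1.0 / slope

# Synthetic check: tau = 2.0 with small additive noise
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
y = np.exp(-t / 2.0) + 0.002 * rng.standard_normal(t.size)
print(decay_time_wls(t, y))   # close to 2.0
```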
In response to the issue of fuzzy matching and association when optical observation data are matched against the orbital elements in a catalog database, this paper proposes a matching and association strategy based on the arc-segment difference method. First, a matching error threshold is set to match the observation data with the known catalog database. Second, the matching results for the same day are sorted on the basis of target identity and observation residuals. Different matching error thresholds and arc-segment dynamic association thresholds are then applied to categorize the observation residuals of the same target across different arc-segments, yielding matching results under various thresholds. Finally, the orbital residual is computed through orbit determination (OD), and the positional error is derived by comparing the OD results with the orbit track from the catalog database. The appropriate matching error threshold is then selected on the basis of these results, leading to the final matching and association of the fuzzy correlation data. Experimental results showed that the correct matching rate for data arc-segments is 92.34% when the matching error threshold is set to 720″, with the arc-segment difference method achieving an average matching rate of 97.62% within 8 days. The remaining 5.28% of the fuzzy correlation data are correctly matched and associated, enabling identification of orbital maneuver targets through further processing and analysis. This method substantially enhances the efficiency and accuracy of space target cataloging, offering robust technical support for dynamic maintenance of the space target database.
With the growth of computational power, there has been an increased focus on data-fitting seismic inversion techniques for high-fidelity seismic velocity models and images, such as full-waveform inversion and least squares migration. However, though more advanced than conventional methods, these data-fitting methods can be very expensive in terms of computational cost. Recently, various techniques have been implemented to optimize these data-fitting seismic inversion problems and meet the industrial need for much improved efficiency. In this study, we propose a general stochastic conjugate gradient method for these data-fitting inverse problems. We first describe the basic theory of our method and then give synthetic examples. Our numerical experiments illustrate the potential of this method for large seismic inversion applications.
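The idea behind stochastic gradient-type inversion is that each iteration evaluates the misfit gradient on a random subset of the data (e.g., a few seismic shots) instead of the full survey. A generic sketch of such a loop with a Fletcher-Reeves conjugate-gradient update, under assumed interfaces (grad_on_batch, the fixed step size, and the batch size are all hypothetical, not the paper's exact scheme):

```python
import numpy as np

def stochastic_cg(grad_on_batch, m0, n_data, batch=8, n_iter=50, step=1e-2):
    """Minimal stochastic conjugate-gradient loop: each iteration draws a
    random subset of the data and uses its gradient with a
    Fletcher-Reeves direction update."""
    rng = np.random.default_rng(0)
    m = m0.copy()
    g = grad_on_batch(m, rng.choice(n_data, batch, replace=False))
    d = -g                                              # initial descent direction
    for _ in range(n_iter):
        m = m + step * d
        g_new = grad_on_batch(m, rng.choice(n_data, batch, replace=False))
        beta = np.dot(g_new, g_new) / max(np.dot(g, g), 1e-30)  # Fletcher-Reeves
        d = -g_new + beta * d
        g = g_new
    return m
```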
Compositional data, i.e., relative information, is a crucial kind of data in machine learning and related fields. It is typically recorded as closed data that sum to a constant, such as 100%. The linear regression model is a commonly used statistical technique for identifying relationships between variables of interest, and when estimating its parameters, which are useful for tasks like future prediction and partial-effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, data quality is a significant challenge in machine learning, especially when data are missing: many datasets contain missing observations, and recovering them can be costly and time-consuming. To address this issue, the expectation-maximization (EM) algorithm has been suggested for situations with missing data. The EM algorithm iteratively finds maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models that depend on unobserved variables or data. Using the current estimate as input, the expectation (E) step constructs the expected log-likelihood function; the maximization (M) step then finds the parameters that maximize this expected log-likelihood. This study examined how well the EM algorithm worked on a simulated compositional dataset with missing observations, using both a robust least squares version and ordinary least squares regression. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
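The E/M alternation for missing values is easiest to see under a multivariate normal model, which underlies regression-based imputation: the E-step replaces each missing entry by its conditional expectation given the observed entries, the M-step re-estimates the mean and covariance from the completed data. A minimal sketch (it omits the conditional-covariance correction term in the M-step, and all names are hypothetical):

```python
import numpy as np

def em_gaussian_impute(X, n_iter=50):
    """EM-style imputation for a multivariate normal with missing entries
    (NaN). E-step: E[x_m | x_o] = mu_m + S_mo S_oo^{-1} (x_o - mu_o).
    M-step: re-estimate mu and S from the completed data."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])   # initial fill
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        S = np.cov(X, rowvar=False)
        for i in range(X.shape[0]):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            S_oo = S[np.ix_(o, o)] + 1e-9 * np.eye(o.sum())
            X[i, m] = mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(S_oo, X[i, o] - mu[o])
    return X, mu, S
```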
The effluent total phosphorus (ETP) is an important parameter for evaluating the performance of a wastewater treatment process (WWTP). In this study, a novel data-derived soft-sensor method is proposed to obtain reliable values of ETP online. First, a partial least squares (PLS) method is introduced to select the secondary variables related to ETP based on experimental data. Second, a radial basis function neural network (RBFNN) is developed to identify the relationship between the related secondary variables and ETP. The RBFNN easily optimizes the model parameters to improve the generalization ability of the soft-sensor. Finally, a monitoring system based on the above PLS and RBFNN, named the PLS-RBFNN-based soft-sensor system, is developed and tested in a real WWTP. Experimental results show that the proposed monitoring system can obtain the values of ETP online and offers better predictive performance than some existing methods.
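A two-stage pipeline of this shape can be sketched as follows: PLS ranks the candidate secondary variables, then a simple RBF network (KMeans-chosen centers, output weights solved by linear least squares) maps them to the target. This is a minimal sketch of the architecture, not the paper's training scheme; names and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.cluster import KMeans

def select_variables_pls(X, y, n_keep=5):
    """Rank candidate secondary variables by the magnitude of their PLS
    regression coefficients and keep the top n_keep."""
    pls = PLSRegression(n_components=min(3, X.shape[1])).fit(X, y)
    order = np.argsort(-np.abs(pls.coef_.ravel()))
    return order[:n_keep]

class SimpleRBFN:
    """RBF network: KMeans picks the centers, the output layer is solved
    by linear least squares."""
    def fit(self, X, y, n_centers=10):
        km = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X)
        self.centers = km.cluster_centers_
        self.sigma = np.mean(np.linalg.norm(X[:, None] - self.centers, axis=2))
        Phi = self._phi(X)
        self.w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self
    def _phi(self, X):
        d2 = ((X[:, None] - self.centers) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * self.sigma ** 2))
    def predict(self, X):
        return self._phi(X) @ self.w
```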
Under certain conditions, ordinary least squares linear fitting assumes that the independent variable contains no error. To make linear data fitting solve the relationship between variables in scientific experiments and engineering practice more accurately, this article analyzes the data error of the common linear data fitting method and proposes an improved least distance squares method based on the least squares method. Finally, the paper discusses the advantages and disadvantages of the two linear data fitting methods through example analysis, and gives reasonable conditions for the application of each.
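If "least distance" is read as orthogonal regression, i.e., minimizing perpendicular rather than vertical distances, the straight-line case has a closed form: the line passes through the centroid along the first principal axis of the centered data. A minimal sketch under that reading (function name hypothetical):

```python
import numpy as np

def orthogonal_line_fit(x, y):
    """Fit a line by minimizing perpendicular (not vertical) distances:
    the line passes through the centroid along the direction of largest
    variance, obtained from the SVD of the centered data."""
    pts = np.column_stack([x, y]).astype(float)
    c = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - c)
    dx, dy = Vt[0]                      # first principal direction
    slope = dy / dx
    intercept = c[1] - slope * c[0]
    return slope, intercept
```

Ordinary least squares is appropriate when only y carries error; the orthogonal fit is the natural choice when both coordinates do, which matches the article's discussion of application conditions.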
The issue of strong noise has increasingly become a bottleneck restricting the precision and application space of electromagnetic exploration methods. Suppressing noise and extracting the effective electromagnetic response under a strong noise background is a crucial scientific task. To solve the noise suppression problem of the controlled-source electromagnetic method in strong-interference areas, we propose a data processing approach based on 2D k-means clustering in the complex plane. Exploiting the stability of the controlled-source signal response, clustering analysis is applied to classify the spectra of different sources and noises across multiple time segments. Identifying the power spectra with controlled-source characteristics helps to improve the quality of the extracted controlled-source response. This paper presents the principle and workflow of the proposed algorithm, and demonstrates its feasibility and effectiveness through synthetic and real data examples. The results show that, compared with the conventional robust denoising method, the clustering algorithm suppresses common noise more strongly, can identify high-quality signals, and improves the quality of the preprocessed controlled-source electromagnetic data.
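The key observation is that a stable transmitted signal produces spectral values that cluster tightly in the complex plane across time segments, while noise scatters. A minimal sketch of that idea, assuming segmented time series, a known source frequency f0, and hypothetical names; the paper's workflow involves more than this:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_cs_spectra(segments, fs, f0, n_clusters=2):
    """FFT each time segment, take the spectral value at the
    controlled-source frequency f0, and k-means cluster the (Re, Im)
    points. The tightest cluster is taken as the stable
    controlled-source response."""
    vals = []
    for seg in segments:
        spec = np.fft.rfft(seg)
        freqs = np.fft.rfftfreq(len(seg), d=1.0 / fs)
        vals.append(spec[np.argmin(np.abs(freqs - f0))])
    pts = np.column_stack([np.real(vals), np.imag(vals)])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pts)
    # keep the cluster with the smallest internal scatter
    best = min(range(n_clusters),
               key=lambda k: pts[labels == k].std(axis=0).sum())
    return np.array(vals)[labels == best].mean()   # averaged signal estimate
```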
We study the problem of parameter estimation for the mean-reverting α-stable motion dX_t = (a_0 − θ_0 X_t)dt + dZ_t, observed at discrete time instants. A least squares estimator is obtained and its asymptotics are discussed in the singular case (a_0, θ_0) = (0, 0). If a_0 = 0, the mean-reverting α-stable motion becomes an Ornstein-Uhlenbeck process, which is studied in [7] in the ergodic case θ_0 > 0. For the Ornstein-Uhlenbeck process, the asymptotics of the least squares estimators are completely different in the singular case (θ_0 = 0) and in the ergodic case (θ_0 > 0).
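For discrete observations X_{t_i} at t_i = ih, least squares estimators of this type are usually defined as minimizers of the discretized-drift contrast below; this is the standard construction, though the paper's exact normalization may differ:

```latex
(\hat a_n, \hat\theta_n) \;=\; \arg\min_{a,\theta}\; \sum_{i=1}^{n}
\bigl|\, X_{t_i} - X_{t_{i-1}} - \bigl(a - \theta X_{t_{i-1}}\bigr)\,h \,\bigr|^{2}
```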
A linear-correction least-squares (LCLS) estimation procedure is proposed for geolocation using frequency difference of arrival (FDOA) measurements only. We first analyze the FDOA measurements and derive the Cramer-Rao lower bound (CRLB) of geolocation using FDOA measurements. Since the localization model is a nonlinear least squares (LS) estimator with a nonlinear constraint, a linearizing method is used to convert it to a linear least squares estimator with a nonlinear constraint. The Gauss-Newton iteration method is developed to solve the source localization problem. From the analysis of the Lagrange multiplier, the algorithm is shown to be a generalization of the linear-correction least squares estimation procedure to geolocation using FDOA measurements only. The algorithm is compared with common least squares estimation: comparisons of their estimation accuracy against the CRLB show that the proposed method attains the CRLB. Simulation results are included to corroborate the theoretical development.
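The Gauss-Newton step for a nonlinear least squares problem is θ ← θ + (JᵀJ)⁻¹Jᵀr, with r the residual vector (measured minus modeled FDOA) and J the Jacobian of the measurement model. A generic sketch of the unconstrained core only; the paper's LCLS additionally handles the nonlinear constraint through a Lagrange multiplier, and all names here are hypothetical:

```python
import numpy as np

def gauss_newton(residual, jacobian, theta0, n_iter=20):
    """Generic Gauss-Newton for nonlinear least squares.
    residual(theta): measured minus modeled values, shape (m,)
    jacobian(theta): Jacobian of the model, shape (m, n)"""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = residual(theta)
        J = jacobian(theta)
        # solve the linearized LS subproblem min ||J*step - r||^2
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        theta = theta + step
    return theta
```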