The uniaxial compressive strength (UCS) of rocks is a vital geomechanical parameter widely used for rock mass classification, stability analysis, and engineering design in rock engineering. Various UCS testing methods and apparatuses have been proposed over the past few decades. The objective of the present study is to summarize the status of and developments in the theories, test apparatuses, and data processing of the existing testing methods for UCS measurement. It starts by elaborating the theories behind these test methods. The test apparatuses and development trends for UCS measurement are then summarized, followed by a discussion of rock specimens for these apparatuses and of data processing methods. Next, recommendations are given for selecting a method for UCS measurement. The review reveals that the rock failure mechanisms in UCS testing can be divided into compression-shear, compression-tension, composite, and no-obvious-failure modes. The apparatuses are trending towards automation, digitization, precision, and multi-modal testing. Two size correction methods are commonly used: one develops an empirical correlation between the measured indices and the specimen size; the other uses a standard specimen to calculate a size correction factor. Three to five input parameters are commonly utilized in soft computing models to predict the UCS of rocks. The test method for UCS measurement can be selected according to the testing scenario and the specimen size. Engineers can gain a comprehensive understanding of UCS testing methods and their potential developments in various rock engineering endeavors.
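The second size-correction approach admits a compact illustration. The sketch below, a minimal example rather than a method taken from this review, normalizes a measured UCS to the standard 50 mm specimen using the widely cited Hoek-Brown style relation UCS_50 = UCS_d (d/50)^0.18; the 0.18 exponent is an empirical constant from that relation and should be calibrated per site.

```python
def ucs_corrected_to_50mm(ucs_measured_mpa, diameter_mm, exponent=0.18):
    """Normalize a UCS measured on a core of arbitrary diameter to the
    equivalent value for a standard 50 mm specimen, via the Hoek-Brown
    style relation UCS_50 = UCS_d * (d / 50) ** exponent."""
    return ucs_measured_mpa * (diameter_mm / 50.0) ** exponent

# Example: 85 MPa measured on a 38 mm core maps to roughly 81 MPa at 50 mm
print(ucs_corrected_to_50mm(85.0, 38.0))
```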
Weighted fusion algorithms, which can be applied in the area of multi-sensor data fusion, are developed based on the weighted least squares method. A weighted fusion algorithm, in which the relationship between weight coefficients and measurement noise is established, is proposed by giving attention to the correlation of the measurement noise. A simplified weighted fusion algorithm is then deduced on the assumption that the measurement noise is uncorrelated. In addition, an algorithm is presented that can adjust the weight coefficients in the simplified algorithm by estimating the measurement noise from the measurements themselves. Simulation and experiment show that the precision of a multi-sensor system based on these algorithms is better than that of a multi-sensor system based on other algorithms.
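For the simplified (uncorrelated-noise) case described above, the least-squares-optimal weights are inversely proportional to each sensor's noise variance. The sketch below illustrates that case only; the correlated-noise algorithm and the adaptive variant in the paper additionally require the noise covariances, which are omitted here.

```python
import numpy as np

def fuse_uncorrelated(measurements, noise_variances):
    """Weighted fusion of scalar measurements from independent sensors.
    With uncorrelated noise, the optimal weight of each sensor is
    proportional to the inverse of its noise variance."""
    z = np.asarray(measurements, dtype=float)
    var = np.asarray(noise_variances, dtype=float)
    w = (1.0 / var) / np.sum(1.0 / var)        # weights sum to 1
    fused = np.dot(w, z)
    fused_var = 1.0 / np.sum(1.0 / var)        # variance of the fused estimate
    return fused, fused_var

# Three sensors observing the same quantity with different noise levels
print(fuse_uncorrelated([10.2, 9.8, 10.5], [0.04, 0.09, 0.25]))
```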
In this paper, a dynamic linear detecting method is used for data processing in a portable blood sugar analyzer: a non-linear coefficient NL% is introduced, the non-linearity of the data is estimated continuously and dynamically, and a determination is made when NL% exceeds a reference value (5%). This solves the problem caused by substrate depletion following the redox reaction in the analyzer. In contrast to the conventional end-point method, the dynamic linear detecting method is based on multipoint data collection. Experiments measuring calibration glucose solutions at 8 concentrations from 50 mg/dl to 400 mg/dl were carried out with the analyzer developed by our group. The resulting linear regression curve had a correlation of 0.9995 and a residual of 2.8080. The obtained correlation, residual, and computation workload are all suitable for a portable blood sugar analyzer.
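A minimal sketch of the dynamic linear detection idea follows: the fitting window grows point by point and stops once the non-linearity coefficient exceeds the 5% reference. Since the abstract does not define NL% explicitly, it is taken here, as an assumption, to be the RMS residual of a least-squares line expressed as a percentage of the signal span.

```python
import numpy as np

def detect_linear_window(t, y, nl_limit=5.0):
    """Grow the fitting window point by point; stop once NL% exceeds
    nl_limit. NL% is taken here as the RMS residual of a least-squares
    line, as a percentage of the signal span (an assumed definition)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    coeffs = np.polyfit(t[:3], y[:3], 1)
    for n in range(3, len(t) + 1):
        fit = np.polyfit(t[:n], y[:n], 1)              # slope, intercept
        resid = y[:n] - np.polyval(fit, t[:n])
        nl = 100.0 * np.sqrt(np.mean(resid ** 2)) / (np.ptp(y[:n]) or 1.0)
        if nl > nl_limit:
            return n - 1, coeffs                       # last window still linear
        coeffs = fit
    return len(t), coeffs                              # data linear throughout
```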
Current velocity observation with LADCP (Lowered Acoustic Doppler Current Profiler) has the advantages of a large vertical observation range and high operability compared with traditional current measurement methods, and is widely used in ocean observation. Shear and inverse methods are now commonly used by the international marine community to process LADCP data and calculate current profiles. The two methods have complementary strengths and shortcomings: the shear method calculates the current shear more accurately, but its accuracy in the absolute value of the current is lower; the inverse method calculates the absolute value of the current velocity more accurately, but its current shear is less accurate. Based on the shear method, this paper proposes a layering shear method that calculates the current velocity profile by “layering averaging”, and proposes corresponding current calculation methods for the different types of problems found in several field observation datasets from the western Pacific, forming an independent LADCP data processing system. Comparison results show that the layering shear method achieves the same effect as the inverse method in calculating the absolute value of the current velocity, while retaining the advantage of the shear method in calculating the current shear.
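The baseline shear-method step that the layering shear method builds on can be sketched as follows: measured vertical shear is averaged in depth layers and then integrated downward, with the absolute level supplied by an external reference. This is a simplified illustration; the paper's layering-averaging scheme and its problem-specific corrections are not reproduced here.

```python
import numpy as np

def shear_method_profile(depths, du_dz, v_ref=0.0, dz=10.0):
    """Baseline shear-method step: average the measured vertical shear in
    regular depth layers, then integrate downward to obtain velocity.
    The absolute level comes from v_ref (e.g., a GPS- or bottom-track-
    referenced velocity), which is where the plain shear method is weakest.
    Assumes every layer contains at least one sample."""
    grid = np.arange(depths.min(), depths.max(), dz)
    layer_shear = np.array(
        [du_dz[(depths >= z) & (depths < z + dz)].mean() for z in grid]
    )
    return grid, v_ref + np.cumsum(layer_shear * dz)
```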
To improve our understanding of the formation and evolution of the Moon, one of the payloads onboard the Chang'e-3 (CE-3) rover is the Lunar Penetrating Radar (LPR). This investigation is the first attempt to explore the lunar subsurface structure using ground penetrating radar with high resolution. We have probed the subsurface to a depth of several hundred meters using LPR. In-orbit testing, data processing and the preliminary results are presented. These observations have revealed the configuration of the regolith, whose thickness varies from about 4 m to 6 m. In addition, one layer of lunar rock, which is about 330 m deep and might have accumulated during the depositional hiatus of mare basalts, was detected.
The Extreme Ultraviolet Camera (EUVC) onboard the Chang'e-3 (CE-3) lander is used to observe the structure and dynamics of Earth's plasmasphere from the Moon. By detecting the resonance line emission of helium ions (He+) at 30.4 nm, the EUVC images the entire plasmasphere with a time resolution of 10 min and a spatial resolution of about 0.1 Earth radius (RE) in a single frame. We first present details about the data processing for the EUVC and the data acquisition in the commissioning phase, and then report some initial results, which reflect the basic features of the plasmasphere well. The photon count and emission intensity of the EUVC are consistent with previous observations and models, which indicates that the EUVC works normally and can provide high quality data for future studies.
The microwave radiometer (MRM) onboard the Chang'E-1 (CE-1) lunar orbiter is a 4-frequency microwave radiometer, mainly used to obtain the brightness temperature (TB) of the lunar surface, from which the thickness, temperature, dielectric constant and other related properties of the lunar regolith can be derived. The working mode of the CE-1 MRM, the ground calibration (including the official calibration coefficients), and the acquisition and processing of the raw data are introduced. Our data analysis shows that TB increases with increasing frequency, decreases towards the lunar poles, and is significantly affected by solar illumination. Our analysis also reveals that the main uncertainty in TB comes from ground calibration.
Experimental Design and Data Processing is an important core professional basic course for food science majors. The course is both theoretical and practical, and it involves many formulas and abstract content that is difficult to understand. There are also problems in the teaching process, such as students' poor interest in learning, insufficient mastery of what they have learned, and an inability to combine theory with practice organically. After analyzing these existing problems, this paper puts forward reform measures for the teaching mode of Experimental Design and Data Processing using the intelligent teaching features of the Superstar platform.
In response to the issue of fuzzy matching and association when optical observation data are matched with the orbital elements in a catalog database, this paper proposes a matching and association strategy based on the arc-segment difference method. First, a matching error threshold is set to match the observation data with the known catalog database. Second, the matching results for the same day are sorted on the basis of target identity and observation residuals. Different matching error thresholds and arc-segment dynamic association thresholds are then applied to categorize the observation residuals of the same target across different arc-segments, yielding matching results under various thresholds. Finally, the orbital residual is computed through orbit determination (OD), and the positional error is derived by comparing the OD results with the orbit track from the catalog database. The appropriate matching error threshold is then selected on the basis of these results, leading to the final matching and association of the fuzzy correlation data. Experimental results showed that the correct matching rate for data arc-segments is 92.34% when the matching error threshold is set to 720″, with the arc-segment difference method achieving an average matching rate of 97.62% within 8 days. The remaining 5.28% of the fuzzy correlation data are correctly matched and associated, enabling identification of orbital maneuver targets through further processing and analysis. This method substantially enhances the efficiency and accuracy of space target cataloging, offering robust technical support for dynamic maintenance of the space target database.
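The first step of the strategy, thresholded matching of observations against catalog predictions, can be sketched as below. The angular separation uses a flat-sky approximation for brevity (a real pipeline would use proper spherical geometry), and the later steps (residual sorting, arc-segment association, and the OD check) are omitted.

```python
import numpy as np

def match_observations(obs_radec, predicted_radec, threshold_arcsec=720.0):
    """Match each optical observation (ra, dec in degrees) to the nearest
    catalog prediction, accepting the pair only if the angular residual
    falls below the matching error threshold."""
    matches = []
    for i, (ra, dec) in enumerate(obs_radec):
        d_ra = (predicted_radec[:, 0] - ra) * np.cos(np.radians(dec))
        d_dec = predicted_radec[:, 1] - dec
        sep = 3600.0 * np.hypot(d_ra, d_dec)        # degrees -> arcseconds
        j = int(np.argmin(sep))
        if sep[j] < threshold_arcsec:
            matches.append((i, j, float(sep[j])))   # obs index, target index, residual
    return matches
```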
Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data, i.e., data that sums to a constant such as 100%. The statistical linear model is the most used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data is present. The linear regression model is a commonly used statistical modeling technique for finding relationships between variables of interest. When estimating linear regression parameters, which are useful for tasks like future prediction and partial effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, which can lead to costly and time-consuming data recovery. To address this issue, the expectation-maximization (EM) algorithm has been suggested as a solution for situations involving missing data. The EM algorithm iteratively finds the best estimates of parameters in statistical models that depend on unobserved variables or data, yielding the maximum likelihood or maximum a posteriori (MAP) estimate. Using the current estimate as input, the expectation (E) step constructs the expected log-likelihood function; the maximization (M) step then finds the parameters that maximize it. This study examined how well the EM algorithm performed on a synthetic compositional dataset with missing observations, using both a robust least squares variant and ordinary least squares regression. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
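A minimal sketch of the E/M alternation in the regression setting follows. For brevity it treats the simpler case of missing responses (the paper's study instead imputes missing compositional observations and also uses a robust fit): the E-step replaces each missing y with its expected value under the current fit, and the M-step re-estimates the coefficients by least squares.

```python
import numpy as np

def em_regression_missing_y(X, y, n_iter=50):
    """EM-style estimation of OLS coefficients when some responses are NaN.
    E-step: replace missing y values with their expected values under the
    current fit. M-step: re-estimate coefficients by least squares.
    Illustrates the E/M alternation only."""
    X1 = np.column_stack([np.ones(len(X)), X])           # add intercept column
    miss = np.isnan(y)
    y_work = np.where(miss, np.nanmean(y), y)            # crude initialization
    for _ in range(n_iter):
        beta, *_ = np.linalg.lstsq(X1, y_work, rcond=None)   # M-step
        y_work[miss] = X1[miss] @ beta                       # E-step
    return beta
```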
Environmental systems, including our atmosphere, oceans, and biological systems, can be modeled by mathematical equations to estimate their states. These equations can be solved with numerical methods, which require initial and boundary conditions. Prediction and simulation of different case studies are major reasons for the great importance of these models. Satellite data from wide ranges of sensors provide observations that indicate system state. Both numerical models and satellite data thus provide estimates of system states, and from these different estimates the best estimate of the system state is required. Assimilation of observations into numerical weather models with data assimilation techniques provides an improved estimate of system states. In this work, highlights of the mathematical perspective on data assimilation methods are introduced. Least squares estimation techniques are introduced because they are the basic mathematical building block for data assimilation methods. A stochastic version of least squares is included to handle the error in both model and observations. Then three- and four-dimensional variational assimilation (3DVar and 4DVar, respectively) are handled. Kalman filters and their derivatives (KF, EKF, EnKF) and hybrid filters are introduced.
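The variational and Kalman-filter families share one core piece of algebra. As a worked example, the sketch below solves the linear 3DVar problem in closed form: it minimizes J(x) = 0.5 (x - xb)^T B^-1 (x - xb) + 0.5 (y - Hx)^T R^-1 (y - Hx), whose minimizer xa = xb + K (y - H xb), with gain K = B H^T (H B H^T + R)^-1, is the same update the Kalman filter applies.

```python
import numpy as np

def threedvar_analysis(xb, B, y, H, R):
    """Closed-form 3DVar analysis for a linear observation operator H:
    xa = xb + K (y - H xb) with K = B H' (H B H' + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return xb + K @ (y - H @ xb)

# Tiny example: 2-element state, one observation of the first element
xb = np.array([1.0, 2.0])                  # background state
B = np.eye(2)                              # background error covariance
H = np.array([[1.0, 0.0]])                 # observation operator
R = np.array([[0.5]])                      # observation error covariance
y = np.array([1.6])                        # observation
print(threedvar_analysis(xb, B, y, H, R))  # analysis pulled toward y
```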
The increasing demand for high-resolution solar observations has driven the development of advanced data processing and enhancement techniques for ground-based solar telescopes. This study focuses on developing a Python-based package (GT-scopy) for data processing and enhancement for giant solar telescopes, with application to the 1.6 m Goode Solar Telescope (GST) at Big Bear Solar Observatory. The objective is to develop modern data processing software that refines existing data acquisition, processing, and enhancement methodologies to achieve atmospheric effect removal and accurate alignment at the sub-pixel level, particularly within processing levels 1.0-1.5. In this research, we implemented an integrated and comprehensive data processing procedure that includes image de-rotation, zone-of-interest selection, coarse alignment, correction for atmospheric distortions, and fine alignment at the sub-pixel level with an advanced algorithm. The results demonstrate a significant improvement in image quality, with enhanced visibility of fine solar structures both in sunspots and quiet-Sun regions. The enhanced data processing package developed in this study significantly improves the utility of data obtained from the GST, paving the way for more precise solar research and contributing to a better understanding of solar dynamics. The package can be adapted for other ground-based solar telescopes, such as the Daniel K. Inouye Solar Telescope (DKIST), the European Solar Telescope (EST), and the 8 m Chinese Giant Solar Telescope, potentially benefiting the broader solar physics community.
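Sub-pixel fine alignment of the kind described can be sketched with a standard technique, upsampled phase cross-correlation. Note this is an illustrative stand-in: the abstract does not name the specific algorithm GT-scopy implements.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def fine_align(reference, image, upsample_factor=100):
    """Align one frame onto a reference at the sub-pixel level using
    upsampled phase cross-correlation, then apply the measured shift
    with cubic-spline interpolation."""
    shift, error, _ = phase_cross_correlation(
        reference, image, upsample_factor=upsample_factor
    )
    return nd_shift(image, shift, order=3, mode="nearest"), shift
```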
This paper proposes a class of generalized mixed least square methods (GMLSM) for the estimation of weights in the analytic hierarchy process and studies their good properties, such as invariance under transpose and invariance under change of scale, and also gives a simple convergent iterative algorithm and some numerical examples. The well-known eigenvector method (EM) is then compared. Theoretical analysis and the numerical results show that the number of iterations required by the GMLSM is generally smaller than that of the MLSM, and that the GMLSM are preferable to the EM in several important respects.
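For orientation, the plain (un-generalized) least squares estimation of AHP weights can be written as a small constrained linear problem: minimize the sum over i, j of (a_ij w_j - w_i)^2 subject to the weights summing to 1. The sketch below solves it directly with a heavily weighted constraint row; it illustrates the LSM baseline, not the paper's generalized mixed form or its iterative algorithm.

```python
import numpy as np

def ahp_weights_lsm(A):
    """Least-squares weights for an AHP pairwise comparison matrix A:
    minimize sum_ij (a_ij * w_j - w_i)^2 subject to sum(w) = 1, posed as
    one linear least-squares system with a penalized constraint row."""
    n = A.shape[0]
    rows = []
    for i in range(n):
        for j in range(n):
            r = np.zeros(n)
            r[j] += A[i, j]
            r[i] -= 1.0
            rows.append(r)
    M = np.vstack(rows + [1e3 * np.ones((1, n))])   # heavily weighted sum-to-1 row
    b = np.concatenate([np.zeros(n * n), [1e3]])
    w, *_ = np.linalg.lstsq(M, b, rcond=None)
    return w / w.sum()

# Consistent 3x3 example with true weights 0.5, 0.3, 0.2
A = np.array([[1.0, 5 / 3, 2.5], [0.6, 1.0, 1.5], [0.4, 2 / 3, 1.0]])
print(ahp_weights_lsm(A))
```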
For the accurate extraction of cavity decay time, a data point selection step is added to the weighted least squares method. We derive the expected precision, accuracy and computation cost of this improved method, and examine these performances by simulation. By comparing this method with the nonlinear least squares fitting (NLSF) method and the linear regression of the sum (LRS) method in derivations and simulations, we find that it can achieve the same or even better precision, comparable accuracy, and lower computation cost. We test this method on experimental decay signals. The results agree with those obtained from the nonlinear least squares fitting method.
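The underlying weighted least squares extraction can be sketched as follows: for y(t) = A exp(-t/tau), the log-transform makes the model linear, and weighting each point by y_i^2 is the standard first-order correction for the noise distortion that the transform introduces. The paper's added data point selection step is omitted here.

```python
import numpy as np

def decay_time_wls(t, y):
    """Extract the decay time tau from y ~ A * exp(-t / tau) by weighted
    least squares on the log-transformed signal, with weights y_i^2."""
    X = np.column_stack([np.ones_like(t), t])
    W = np.diag(y ** 2)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ np.log(y))
    return -1.0 / beta[1]              # slope of log(y) is -1/tau

t = np.linspace(0, 5e-6, 200)
y = 2.0 * np.exp(-t / 1e-6)
print(decay_time_wls(t, y))            # prints ~1e-06 for this noiseless signal
```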
Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves for various reasons, so preprocessing is required to remove them and obtain high quality light curves. Through statistical analysis, the causes of outliers can be categorized into two main types: first, the brightness of the object significantly increases due to the passage of a star nearby, referred to as “stellar contamination,” and second, the brightness markedly decreases due to cloud cover, referred to as “cloudy contamination.” The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive. We instead propose the use of machine learning methods: Convolutional Neural Networks and SVMs are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods, such as ResNet-18 and the Light Gradient Boosting Machine, and conduct comparative analyses of the results.
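A minimal sketch of the SVM side follows. The summary features are illustrative assumptions (the abstract does not list the inputs used): a light curve with a strong upward brightness excursion suggests stellar contamination, while a deep dip suggests cloudy contamination.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def curve_features(flux):
    """Summary features for one light curve (an assumed, illustrative set)."""
    return [np.std(flux), np.ptp(flux),
            np.max(flux) - np.median(flux),   # upward excursion (stellar?)
            np.median(flux) - np.min(flux)]   # downward dip (cloudy?)

def train_classifier(curves, labels):
    """Fit an RBF-kernel SVM on the extracted features.
    labels: 0 = clean, 1 = stellar contamination, 2 = cloudy contamination."""
    X = np.array([curve_features(c) for c in curves])
    return make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)
```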
In recent decades, control performance monitoring (CPM) has experienced remarkable progress in research and industrial applications. While CPM research has been investigated using various benchmarks, the historical data benchmark (HIS) has garnered the most attention due to its practicality and effectiveness. However, existing CPM reviews usually focus on the theoretical benchmark, and there is a lack of an in-depth review that thoroughly explores HIS-based methods. In this article, a comprehensive overview of HIS-based CPM is provided. First, we provide a novel static-dynamic perspective on the data-level manifestations of control performance underlying typical controller capacities (regulation and servo): static and dynamic properties. The static property portrays time-independent variability in the system output, and the dynamic property describes temporal behavior driven by closed-loop feedback. Accordingly, existing HIS-based CPM approaches and their intrinsic motivations are classified and analyzed from these two perspectives. Specifically, two mainstream solutions for CPM methods are summarized, static analysis and dynamic analysis, which match data-driven techniques with actual controller behavior. Furthermore, this paper also points out various opportunities and challenges faced by CPM in modern industry and provides promising directions in the context of artificial intelligence to inspire future research.
With the development of computational power, there has been an increased focus on data-fitting seismic inversion techniques for high fidelity seismic velocity models and images, such as full-waveform inversion and least squares migration. However, though more advanced than conventional methods, these data-fitting methods can be very expensive in terms of computational cost. Recently, various techniques to optimize these data-fitting seismic inversion problems have been implemented to cater to the industrial need for much improved efficiency. In this study, we propose a general stochastic conjugate gradient method for these data-fitting related inverse problems. We first present the basic theory of our method and then give synthetic examples. Our numerical experiments illustrate the potential of this method for large-size seismic inversion applications.
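In spirit, the method can be sketched for a generic linear data-fitting problem min ||Ax - b||^2: each iteration draws a random subset of the rows (in seismic terms, a random subset of shots) to form the gradient and takes a conjugate step. The step-size rule and the Fletcher-Reeves update below are common choices, not necessarily the paper's exact scheme.

```python
import numpy as np

def stochastic_cg(A, b, x0, n_iter=100, batch=0.3, seed=0):
    """Stochastic conjugate-gradient loop for min ||Ax - b||^2: each
    iteration subsamples the rows to form the gradient, combines it with
    the previous direction via a Fletcher-Reeves coefficient, and takes
    the exact minimizing step on the sampled subproblem."""
    rng = np.random.default_rng(seed)
    x, d, g_old = x0.copy(), None, None
    m = A.shape[0]
    for _ in range(n_iter):
        idx = rng.choice(m, size=max(1, int(batch * m)), replace=False)
        As, bs = A[idx], b[idx]
        g = As.T @ (As @ x - bs)                    # subsampled gradient
        if d is None:
            d = -g                                  # first step: steepest descent
        else:
            beta = (g @ g) / (g_old @ g_old)        # Fletcher-Reeves coefficient
            d = -g + beta * d
        Ad = As @ d
        alpha = -(g @ d) / (Ad @ Ad + 1e-12)        # exact step on the batch
        x = x + alpha * d
        g_old = g
    return x
```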
The 21 cm radiation of neutral hydrogen provides crucial information for studying the early universe and its evolution. To advance this research, countries have made significant investments in constructing large low-frequency radio telescope arrays, such as the Low Frequency Array and the Square Kilometre Array Phase 1 Low Frequency. These instruments are pivotal for radio astronomy research. However, challenges such as ionospheric plasma interference, ambient radio noise, and instrument-related effects have become increasingly prominent, posing major obstacles in cosmology research. To address these issues, this paper proposes an efficient signal processing method that combines the wavelet transform and mathematical morphology. The method involves the following steps: background subtraction, in which background interference in the radio observation signal is eliminated; wavelet transform, in which the background-subtracted signal undergoes a two-dimensional discrete wavelet transform and threshold processing is applied to the wavelet coefficients to remove interference components; wavelet inversion, in which the processed signal is reconstructed; and mathematical morphology, in which the reconstructed signal is further optimized to refine the results. Experimental verification was conducted using solar observation data from the Xinjiang Observatory and the Yunnan Observatory. The results demonstrate that this method successfully removes interference signals while preserving useful signals, thus improving the accuracy of radio astronomy observations and reducing the impact of radio frequency interference.
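A compact sketch of such a processing chain on a time-frequency array is given below, using the PyWavelets and SciPy libraries. The background model, the MAD-based threshold rule, and the final replacement of flagged pixels are plausible concrete choices for illustration, not necessarily the paper's.

```python
import numpy as np
import pywt
from scipy.ndimage import binary_closing, binary_opening

def clean_dynamic_spectrum(data, wavelet="db4", level=2, k=3.0):
    """Wavelet + morphology chain on a 2D time-frequency array: subtract a
    per-channel background, soft-threshold the 2D DWT detail coefficients,
    reconstruct an interference estimate, then tidy the flag mask with
    morphological opening and closing before replacing flagged pixels."""
    resid = data - np.median(data, axis=0, keepdims=True)   # background subtraction
    coeffs = pywt.wavedec2(resid, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    new_details = []
    for (cH, cV, cD) in details:
        out = []
        for c in (cH, cV, cD):
            sigma = np.median(np.abs(c)) / 0.6745           # MAD noise estimate
            out.append(pywt.threshold(c, k * sigma, mode="soft"))
        new_details.append(tuple(out))
    rfi = pywt.waverec2([approx] + new_details, wavelet)
    rfi = rfi[: data.shape[0], : data.shape[1]]             # trim padding
    mask = np.abs(rfi) > k * np.std(rfi)
    mask = binary_closing(binary_opening(mask))             # morphological cleanup
    return np.where(mask, np.median(data), data), mask
```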
As a pathfinder of the SiTian project, the Mini-SiTian (MST) Array, which employs three commercial CMOS cameras, represents a next-generation, cost-effective optical time-domain survey project. This paper focuses primarily on the precise data processing pipeline designed for wide-field, CMOS-based devices, including the removal of instrumental effects, astrometry, photometry, and flux calibration. When this pipeline is applied to approximately 3000 observations taken in the Field 02 (f02) region by MST, the results demonstrate a remarkable astrometric precision of approximately 70-80 mas (about 0.1 pixel), an impressive calibration accuracy of approximately 1 mmag in the MST zero points, and a photometric accuracy of about 4 mmag for bright stars. Our studies demonstrate that the MST CMOS cameras can achieve photometric accuracy comparable to that of CCDs, highlighting the feasibility of large-scale CMOS-based optical time-domain surveys and their potential for cost optimization in future large-scale time-domain surveys like the SiTian project.
Lunar wrinkle ridges are an important stress-related geological structure on the Moon, reflecting the stress state and geological activity of the Moon. They provide important insights into the evolution of the Moon and are key factors influencing future lunar activity, such as the choice of landing sites. However, automatic extraction of lunar wrinkle ridges is a challenging task due to their complex morphology and ambiguous features, and traditional manual extraction methods are time-consuming and labor-intensive. To achieve automated and detailed detection of lunar wrinkle ridges, we have constructed a lunar wrinkle ridge data set, incorporating previously unused aspect data to provide edge information, and proposed a Dual-Branch Ridge Detection Network (DBR-Net) based on deep learning technology. This method employs a dual-branch architecture and an Attention Complementary Feature Fusion module to address the issue of insufficient lunar wrinkle ridge features. Comparisons with the results of various deep learning approaches demonstrate that the proposed method exhibits superior detection performance. Furthermore, the trained model was applied to lunar mare regions, generating a distribution map of lunar mare wrinkle ridges; a significant linear relationship between the length and area of the lunar wrinkle ridges was obtained through statistical analysis, and six previously unrecorded potential lunar wrinkle ridges were detected. The proposed method upgrades the automated extraction of lunar wrinkle ridges to pixel-level precision and verifies the effectiveness of DBR-Net in lunar wrinkle ridge detection.
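As an illustration of the dual-branch fusion idea, the sketch below mixes an image-feature branch with an aspect-derived edge-feature branch through a learned per-pixel gate. This module is hypothetical: the abstract does not specify the internals of DBR-Net's Attention Complementary Feature Fusion module, so this stands in only for the general pattern.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Hypothetical attention-based fusion of two feature branches (image
    features and aspect-derived edge features): a 1x1 convolution over the
    concatenated maps produces a per-pixel gate that mixes the branches
    complementarily."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_img, feat_aspect):
        a = self.gate(torch.cat([feat_img, feat_aspect], dim=1))  # per-pixel weights
        return a * feat_img + (1.0 - a) * feat_aspect             # complementary mix

# Two 64-channel feature maps fused into one
fuse = AttentionFusion(64)
out = fuse(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
print(out.shape)   # torch.Size([1, 64, 32, 32])
```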