The issue of strong noise has increasingly become a bottleneck restricting the precision and application space of electromagnetic exploration methods. Noise suppression and the extraction of effective electromagnetic response information under a strong-noise background is a crucial scientific task. To solve the noise suppression problem of the controlled-source electromagnetic method in strong-interference areas, we propose a data-processing approach based on complex-plane 2D k-means clustering. Exploiting the stability of the controlled-source signal response, clustering analysis is applied to classify the spectra of different sources and noises across multiple time segments. Identifying the power spectra with controlled-source characteristics improves the quality of the extracted controlled-source response. This paper presents the principle and workflow of the proposed algorithm and demonstrates its feasibility and effectiveness through synthetic and real data examples. The results show that, compared with the conventional robust denoising method, the clustering algorithm suppresses common noise more strongly, can identify high-quality signals, and improves the quality of preprocessed controlled-source electromagnetic data.
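A minimal sketch of the clustering idea, assuming (hypothetically) that each time segment yields one complex spectral estimate at the transmitter frequency; the stable controlled-source response then forms a tight cluster in the (Re, Im) plane, while noise-dominated segments scatter. The function name and data are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_signal_segments(spectra, n_clusters=2, random_state=0):
    """Cluster per-segment complex spectral estimates in the (Re, Im) plane.

    spectra : complex array (n_segments,), the Fourier coefficient at the
              transmitter frequency for each time segment (hypothetical input).
    Returns indices of the segments in the tightest cluster, taken here as
    the stable controlled-source response."""
    X = np.column_stack([spectra.real, spectra.imag])   # 2D points in the complex plane
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=random_state).fit_predict(X)
    # The controlled-source response is stable, so its cluster has the
    # smallest internal scatter; noise-dominated segments spread widely.
    scatters = [X[labels == k].std() for k in range(n_clusters)]
    best = int(np.argmin(scatters))
    return np.where(labels == best)[0]

# Usage: average only the selected segments to estimate the response.
rng = np.random.default_rng(1)
signal = 1.0 + 0.5j + 0.02 * (rng.standard_normal(80) + 1j * rng.standard_normal(80))
noise = 2.0 * (rng.standard_normal(20) + 1j * rng.standard_normal(20))
spectra = np.concatenate([signal, noise])
good = select_signal_segments(spectra)
print(spectra[good].mean())   # close to the true response 1.0 + 0.5j
```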
Experimental and theoretical studies of the mechanisms of vibration stimulation of oil recovery in watered fields lead to the conclusion that resonance oscillations develop in fractured-block formations. These oscillations, caused by weak but long-lasting and frequency-stable influences, create the conditions for the generation of ultrasonic waves in the layers, which are capable of destroying thickened oil membranes in reservoir cracks. For fractured-porous reservoirs exploited by high-pressure water displacement of oil, the possibility of intensifying ultrasonic vibrations can have important technological significance. Even very weak ultrasound can, over a long period of time, destroy the viscous oil membranes formed in the cracks between blocks; these membranes lower the permeability of the layers, so their destruction increases oil recovery. To describe these effects, it is necessary to consider the wave process in a hierarchically blocky medium and to theoretically simulate the mechanism of the appearance of self-oscillations under the action of relaxation shear stresses. For the analysis of the seismo-acoustic response in time at fixed intervals along the borehole, an algorithm based on phase diagrams of the state of the multiphase medium is suggested.
Mitigating increasing cyberattack incidents may require strategies such as reinforcing organizations' networks with Honeypots and effectively analyzing attack traffic to detect zero-day attacks and vulnerabilities. To effectively detect and mitigate cyberattacks, both computerized and visual analyses are typically required. However, most security analysts are not adequately trained in the visualization principles and methods required for effective visual perception of the useful attack information hidden in attack data. Additionally, although Honeypots have proven useful in cyberattack research, no studies have comprehensively investigated visualization practices in the field. In this paper, we reviewed the visualization practices and methods commonly used in the discovery and communication of attack patterns based on Honeypot network traffic data. Using the PRISMA methodology, we identified and screened 218 papers and evaluated only the 37 high-impact papers among them. Most Honeypot papers computed summary statistics of Honeypot data based on static data metrics such as IP address, port, and packet size. They visually analyzed Honeypot attack data using simple graphical methods (such as line, bar, and pie charts) that tend to hide useful attack information. Furthermore, only a few papers conducted extended attack analysis, commonly visualizing attack data using scatter and linear plots. Papers rarely included simple yet sophisticated graphical methods, such as box plots and histograms, which allow for critical evaluation of analysis results. While a significant number of automated visualization tools incorporate visualization standards by default, constructing effective and expressive graphical methods for easy pattern discovery and explainable insights still requires applied knowledge and skill in visualization principles and tools, and occasionally interdisciplinary collaboration with peers. We therefore suggest the need, going forward, for non-classical graphical methods for visualizing attack patterns and communicating analysis results. We also recommend training investigators in visualization principles and standards for effective visual perception and presentation.
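As a small illustration of the "simple yet sophisticated" graphical methods the review recommends, the following sketch draws a box plot and a histogram of packet sizes grouped by port; the data and grouping are invented for demonstration only.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic packet sizes per destination port (illustrative only).
rng = np.random.default_rng(0)
sizes = {22: rng.lognormal(5.5, 0.6, 400),
         80: rng.lognormal(6.5, 0.9, 400),
         445: rng.lognormal(5.0, 0.4, 400)}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))
ax1.boxplot(list(sizes.values()))               # medians, quartiles, outliers at a glance
ax1.set_xticklabels([f"port {p}" for p in sizes])
ax1.set_ylabel("packet size (bytes)")
ax2.hist(np.concatenate(list(sizes.values())), bins=50)   # overall distribution shape
ax2.set_xlabel("packet size (bytes)")
ax2.set_ylabel("count")
fig.tight_layout()
plt.show()
```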
This paper establishes the phase space in the light of spatial series data, discusses the fractal structure of geological data in terms of correlation functions, and studies the chaos of these data. In addition, it introduces R/S analysis, developed for time series, into spatial series in order to calculate the structural fractal dimensions of ranges and standard deviations for spatial series data, and to establish the fractal dimension matrix and the procedure for plotting the fractal dimension anomaly diagram with vector distances of fractal dimensions. Finally, examples of its application are given.
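The R/S step can be sketched as follows; the window sizes and the D = 2 − H conversion for a one-dimensional profile are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def rescaled_range(series, window):
    """Average R/S statistic over non-overlapping windows of a given length."""
    series = np.asarray(series, dtype=float)
    rs = []
    for i in range(len(series) // window):
        chunk = series[i * window:(i + 1) * window]
        dev = np.cumsum(chunk - chunk.mean())   # mean-adjusted cumulative deviations
        r = dev.max() - dev.min()               # range
        s = chunk.std()                         # standard deviation
        if s > 0:
            rs.append(r / s)
    return np.mean(rs)

def hurst_exponent(series, windows=(8, 16, 32, 64, 128)):
    """Slope of log(R/S) against log(window) estimates the Hurst exponent H;
    a 1D profile then has fractal dimension D = 2 - H."""
    logs = np.log([rescaled_range(series, w) for w in windows])
    slope, _ = np.polyfit(np.log(windows), logs, 1)
    return slope

rng = np.random.default_rng(0)
print(hurst_exponent(rng.standard_normal(4096)))   # ~0.5 for uncorrelated noise
```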
In recent decades, control performance monitoring (CPM) has experienced remarkable progress in research and industrial applications. While CPM research has been investigated using various benchmarks, the historical data benchmark (HIS) has garnered the most attention due to its practicality and effectiveness. However, existing CPM reviews usually focus on the theoretical benchmark, and an in-depth review that thoroughly explores HIS-based methods is lacking. In this article, a comprehensive overview of HIS-based CPM is provided. First, we provide a novel static-dynamic perspective on the data-level manifestations of control performance underlying typical controller capacities, including regulation and servo behavior: the static property portrays time-independent variability in system output, and the dynamic property describes temporal behavior driven by closed-loop feedback. Accordingly, existing HIS-based CPM approaches and their intrinsic motivations are classified and analyzed from these two perspectives. Specifically, two mainstream solutions for CPM methods are summarized, static analysis and dynamic analysis, which match data-driven techniques with actual controller behavior. Furthermore, this paper also points out various opportunities and challenges faced by CPM in modern industry and provides promising directions in the context of artificial intelligence to inspire future research.
This paper introduces the basic theory and algorithm of the surrogate data method, which provides a rigorous way to test whether the seemingly stochastic characteristics of a system are in fact deterministic. Gaussian data and Rössler data are used to show the availability and effectiveness of the method. Analysis of short-circuiting current signals recorded under the same voltage but different wire feed speeds demonstrates that the electrical signal time series exhibit apparent randomness when the welding parameters do not match, whereas they are deterministic when a match is found. The stability of the short-circuiting transfer process can thus be judged exactly by the surrogate data method.
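A common realization of the surrogate data method is the phase-randomized (FT) surrogate; the sketch below, with an illustrative time-reversal-asymmetry statistic, is one standard variant and not necessarily the exact algorithm used in the paper.

```python
import numpy as np

def phase_surrogate(x, rng):
    """FT surrogate: keep the amplitude spectrum, randomize the phases.
    Preserves the linear autocorrelation but destroys any determinism."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = 0.0                       # keep the mean (DC term) real
    if n % 2 == 0:
        phases[-1] = 0.0                  # keep the Nyquist term real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)

def surrogate_test(x, stat, n_surr=99, seed=0):
    """Compare a discriminating statistic on the data with its distribution
    over surrogates; a data value outside the surrogate range rejects the
    null hypothesis of a linear stochastic process."""
    rng = np.random.default_rng(seed)
    surr_stats = [stat(phase_surrogate(x, rng)) for _ in range(n_surr)]
    return stat(x), (min(surr_stats), max(surr_stats))

# Toy statistic: time-reversal asymmetry, ~zero for linear processes.
trev = lambda x: np.mean((x[1:] - x[:-1]) ** 3)
rng = np.random.default_rng(1)
print(surrogate_test(rng.standard_normal(1000), trev))
```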
Tunnel deformation monitoring is a crucial task for evaluating tunnel stability during the metro operation period. Terrestrial Laser Scanning (TLS), an innovative technique, can collect high-density and high-accuracy point cloud data in a few minutes, which provides promising applications in tunnel deformation monitoring. Here, an efficient method for extracting tunnel cross-sections and performing convergence analysis using dense TLS point cloud data is proposed. First, the tunnel orientation is determined using principal component analysis (PCA) in the Euclidean plane. Two control points are introduced to detect and remove unsuitable points by point cloud division, and the ground points are then removed by defining an elevation band of 0.5 m. Next, a z-score method is introduced to detect and remove the outliers. Because the standard shape of the tunnel cross-section is round, circle fitting is implemented using the least-squares method. Afterward, the convergence analysis is made at angles of 0°, 30° and 150°. The feasibility of the proposed approach is tested on a TLS point cloud of a Nanjing subway tunnel acquired using a FARO X330 laser scanner. The results indicate that the proposed methodology achieves an overall accuracy of 1.34 mm, in agreement with measurements acquired by a total station instrument. The proposed methodology provides new insights and references for the application of TLS in tunnel deformation monitoring and can also be extended to other engineering applications.
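Two of the steps, z-score outlier removal and least-squares circle fitting, can be sketched compactly. The Kåsa linearization below is one standard least-squares circle fit, shown on synthetic cross-section points rather than the authors' data.

```python
import numpy as np

def zscore_filter(points, k=3.0):
    """Drop points whose radial distance from the centroid deviates by
    more than k standard scores (a simple z-score outlier test)."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    z = (d - d.mean()) / d.std()
    return points[np.abs(z) < k]

def fit_circle(points):
    """Least-squares (Kasa) circle fit: x^2 + y^2 = 2ax + 2by + c is linear
    in (a, b, c), so one lstsq call gives center (a, b) and radius."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    (a, b, c), *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    return (a, b), np.sqrt(c + a ** 2 + b ** 2)

# Noisy points on a 2.7 m-radius cross-section (illustrative values).
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 500)
pts = np.column_stack([2.7 * np.cos(t), 2.7 * np.sin(t)]) \
      + 0.002 * rng.standard_normal((500, 2))
center, radius = fit_circle(zscore_filter(pts))
print(center, radius)   # ~(0, 0), ~2.7
```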
Ancient stellar observations are a valuable cultural heritage, profoundly influencing both cultural domains and modern astronomical research. Shi's Star Catalog (石氏星经), the oldest extant star catalog in China, faces controversy regarding its observational epoch. Determining this epoch via precession assumes accurate ancient coordinates and correspondence with contemporary stars, posing significant challenges. This study introduces a novel method using the Generalized Hough Transform to ascertain the catalog's observational epoch. This approach statistically accommodates errors in ancient coordinates and discrepancies between ancient and modern stars, addressing limitations of prior methods. Our findings date Shi's Star Catalog to the 4th century BCE, with 2nd-century CE adjustments. In comparison, the Western tradition's oldest known catalog, the Ptolemaic Star Catalog (2nd century CE), likely derives from the Hipparchus Star Catalog (2nd century BCE). Thus, Shi's Star Catalog is identified as the world's oldest known star catalog. Beyond establishing its observation period, this study aims to consolidate and digitize these cultural artifacts.
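A heavily simplified, hypothetical sketch of the voting idea behind such an epoch search: precess modern ecliptic longitudes back to each candidate epoch at the general precession rate (~50.29″ yr⁻¹) and accumulate votes for catalog records that match within a tolerance. Real dating must handle full spherical coordinates, proper motion and star identification; everything below is a toy.

```python
import numpy as np

P = 50.29 / 3600.0   # general precession in ecliptic longitude, deg/yr

def vote_epoch(ancient_lon, modern_lon, epochs, tol=1.0):
    """Hough-style accumulator over candidate epochs: precess modern
    longitudes back to each epoch and count ancient records that fall
    within `tol` degrees of any precessed star."""
    votes = []
    for epoch in epochs:
        lon_t = (modern_lon - P * (2000.0 - epoch)) % 360.0
        diff = np.abs(ancient_lon[:, None] - lon_t[None, :])
        diff = np.minimum(diff, 360.0 - diff)        # wrap-around separation
        votes.append(int(np.sum(diff.min(axis=1) < tol)))
    return np.asarray(votes)

# Toy catalog "observed" around 350 BCE with 0.3 deg recording noise.
rng = np.random.default_rng(0)
modern = rng.uniform(0.0, 360.0, 120)
ancient = (modern - P * (2000.0 - (-350.0)) + 0.3 * rng.standard_normal(120)) % 360.0
epochs = np.arange(-800, 400, 25)
votes = vote_epoch(ancient, modern, epochs)
print(epochs[votes.argmax()])   # near -350, i.e., the 4th century BCE
```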
The clock difference between the ensemble pulsar timescale (PT) and International Atomic Time (TAI), PT−TAI, derived from the International Pulsar Timing Array (IPTA) data set, shows a variation trend very similar to that of Terrestrial Time, TT(BIPMXXXX)−TAI, but PT has larger measurement errors. In this paper, we discuss smoothing PT with a combined smoothing filter and compare the results with those from other filters. The clock difference sequence PT−TAI and the first time derivative series of TT(BIPMXXXX)−TAI can be combined by the filter to yield two smooth curves tied by the constraint that the latter is the derivative of the former. To demonstrate the properties of the smoothed results, the ensemble pulsar time IPTA2016 with respect to TAI published by G. Hobbs et al. and the first time derivative series of TT(BIPM2017)−TAI, with quadratic polynomial terms removed, are processed by the combined smoothing filter. The correct estimation of the two smoothing coefficients is described, and the output of the filter is analyzed. The results show that the combined smoothing method efficiently removes the high-frequency noise of the two input series, and that the smoothed PT−TAI data combine the long-term fractional frequency stability of the pulsar time with the frequency accuracy of the terrestrial time. Fractional frequency stability analysis indicates that both the short- and medium-interval stability of the smoothed PT−TAI is improved while its original long-term frequency stability is preserved. The combined smoothing filter is therefore more suitable for smoothing observational pulsar timescale data than any filter that smooths only a single pulsar time series. The pulsar time smoothed by the combined filter is a combined pulsar-atomic timescale, which can also be used as terrestrial time.
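One plausible way to realize such a combined smoother, not necessarily the authors' filter, is a joint regularized least-squares problem in which a single smooth curve must fit PT−TAI while its first difference fits the TT-derived derivative series:

```python
import numpy as np

def combined_smooth(p, q, alpha=50.0, beta=500.0):
    """Jointly smooth a clock-difference series p (PT - TAI) and a measured
    derivative series q (d/dt of TT - TAI, same sampling) by minimizing
        ||x - p||^2 + alpha * ||D x - q||^2 + beta * ||D2 x||^2,
    so the smoothed curve and its finite difference stay tied together.
    alpha, beta play the role of the two smoothing coefficients."""
    n = len(p)
    D = np.diff(np.eye(n), axis=0)        # first-difference operator, (n-1, n)
    D2 = np.diff(np.eye(n), 2, axis=0)    # second-difference roughness penalty
    A = np.eye(n) + alpha * D.T @ D + beta * D2.T @ D2
    b = p + alpha * D.T @ q
    x = np.linalg.solve(A, b)             # closed-form normal equations
    return x, D @ x                       # smoothed curve and its derivative

# Toy data: slow quadratic drift observed with noise, plus its noisy derivative.
rng = np.random.default_rng(0)
t = np.arange(200.0)
truth = 1e-4 * (t - 100) ** 2
p = truth + 0.5 * rng.standard_normal(200)
q = np.diff(truth) + 0.05 * rng.standard_normal(199)
x, dx = combined_smooth(p, q)
```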
In this paper, principal component analysis is performed using eight seismicity parameters: earthquake frequency N (ML ≥ 3.0), b-value, η-value, A(b)-value, Mf-value, Ac-value, C-value and D-value, which reflect the characteristics of the magnitude, time and space distribution of seismicity from different respects. The principal component analysis yields a synthesis parameter W that reflects the anomalous features of the magnitude, time and space distribution of earthquakes. In general there is some correlation among the eight parameters, but their variations differ across periods, and earthquake prediction based on the individual parameters performs poorly. However, the synthesis parameter W showed obvious anomalies before 13 earthquakes (MS ≥ 5.8) that occurred in North China, which indicates that W better reflects the anomalous characteristics of the magnitude, time and space distribution of seismicity. Other problems related to the conclusions drawn by the principal component analysis method are also discussed.
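A minimal sketch of extracting such a synthesis parameter W as the first principal component of the standardized parameter matrix; the data layout is a hypothetical (time × 8 parameters) array, not the study's catalog.

```python
import numpy as np

def synthesis_parameter(X):
    """First principal component score of standardized seismicity parameters.

    X : array (n_windows, 8) holding N, b, eta, A(b), Mf, Ac, C and D values
        per time window (hypothetical layout). Returns the score series W."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize each parameter
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    w1 = Vt[0]                                  # loadings of the 1st component
    return Z @ w1                               # synthesis parameter W(t)

# Toy check: 8 parameters sharing one anomaly signal plus noise.
rng = np.random.default_rng(0)
common = rng.standard_normal(100)
X = common[:, None] * rng.uniform(0.5, 1.5, 8) + 0.3 * rng.standard_normal((100, 8))
W = synthesis_parameter(X)
print(np.corrcoef(W, common)[0, 1])             # |r| close to 1
```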
Seeing is an important index for evaluating the quality of an astronomical site. To estimate seeing at the Muztagh-Ata site quantitatively as a function of height and time, the European Centre for Medium-Range Weather Forecasts reanalysis database (ERA5) is used. Seeing calculated from ERA5 agrees well with the Differential Image Motion Monitor seeing at a height of 12 m. Results show that seeing decays exponentially with height at the Muztagh-Ata site; in 2021 it decayed with height fastest in fall and most slowly in summer. The seeing condition is better in fall than in summer. The median seeing at 12 m is 0.89 arcsec, with a maximum of 1.21 arcsec in August and a minimum of 0.66 arcsec in October; the median is 0.72 arcsec in the nighttime and 1.08 arcsec in the daytime. Seeing is a combination of annual and roughly biannual variations with the same phase as temperature and wind speed, indicating that its variation with time is influenced by temperature and wind speed. The Richardson number Ri is used to analyze atmospheric stability, and the variations of seeing are consistent with Ri between layers. These quantitative results can provide an important reference for telescopic observation strategy.
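For reference, the gradient Richardson number used in the stability analysis can be computed from ERA5-style profiles as sketched below; the profile values are illustrative, and the formula is the standard Ri = (g/θ)(∂θ/∂z) / ((∂u/∂z)² + (∂v/∂z)²).

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def richardson_number(z, theta, u, v):
    """Gradient Richardson number on model levels, with vertical derivatives
    from centered differences (np.gradient). Ri above ~0.25 indicates a
    dynamically stable, low-turbulence layer."""
    dth = np.gradient(theta, z)
    du, dv = np.gradient(u, z), np.gradient(v, z)
    shear2 = du ** 2 + dv ** 2
    return (G / theta) * dth / np.maximum(shear2, 1e-12)  # guard zero shear

# Illustrative profile: stable stratification with weak shear aloft.
z = np.linspace(0, 3000, 31)            # height above ground, m
theta = 290 + 0.004 * z                 # potential temperature, K
u, v = 5 + 0.003 * z, 1 + 0.001 * z     # wind components, m/s
print(richardson_number(z, theta, u, v)[:3])
```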
Noise is a significant component of a millimeter-wave molecular line datacube. Analyzing the noise improves our understanding of its characteristics and further contributes to scientific discoveries. We measured the noise level of a single datacube from MWISP and performed statistical analyses. We identified the major factors that increase the noise level of a single datacube, including bad channels, edge effects, baseline distortion and line contamination. Cleaning algorithms were applied to remove or reduce these noise components. As a result, we obtained a cleaned datacube in which the noise follows a positively skewed normal distribution. We further analyzed the noise structure of a 3D mosaicked datacube in the range l = 40°.7 to 43°.3 and b = −2°.3 to 0°.3 and found that the noise in the final mosaicked datacube is mainly characterized by noise fluctuations among the cells.
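A hedged sketch of the distributional step: fitting a skew-normal model to per-cell noise values with SciPy. The numbers are mock values, not MWISP measurements.

```python
from scipy import stats

# Mock per-cell RMS noise values (in K); a positively skewed normal is
# assumed, as reported for the cleaned datacube in the abstract above.
noise = stats.skewnorm.rvs(a=4.0, loc=0.45, scale=0.12, size=5000, random_state=0)

# Maximum-likelihood fit of shape (skewness), location and scale.
a, loc, scale = stats.skewnorm.fit(noise)
print(f"shape a = {a:.2f}, loc = {loc:.3f} K, scale = {scale:.3f} K")

# Goodness of fit: Kolmogorov-Smirnov test against the fitted distribution.
print(stats.kstest(noise, "skewnorm", args=(a, loc, scale)).pvalue)
```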
This paper improves the slacks-based method for estimating inefficiency, derives criteria for selecting the weights of output and input inefficiencies in the objective function, and creates a new nonparametric method for growth accounting. Based on this method, the paper estimates the sources of China's economic growth from 1978 to 2013. Our findings suggest that factor input, and especially capital, is a major source of economic growth for China as a whole and for its major regions, and that economic growth in recent years has become increasingly dependent on capital. For a rather long period before 2005, China's northeast, central and western regions lagged behind the eastern region in economic growth, and total factor productivity (TFP) and factor input are the major reasons behind these regional growth disparities. Although other regions have since narrowed their disparities with, and even overtaken, the eastern region in growth terms, the key driver has been a rapid increase in the contribution of factor input. The advanced technologies of the eastern region should be utilized to promote TFP progress in other regions, which is vital to economic growth both in those regions and in China as a whole.
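For orientation, a standard weighted additive (slacks-based) DEA model under constant returns to scale can be posed as a linear program; this is a generic textbook formulation, not the paper's improved estimator, and the example data are invented.

```python
import numpy as np
from scipy.optimize import linprog

def additive_dea(X, Y, o, w_in=1.0, w_out=1.0):
    """Weighted additive (slacks-based) DEA under constant returns to scale.

    X : (m, n) inputs, Y : (q, n) outputs for n decision-making units.
    Maximizes w_in*sum(s_minus) + w_out*sum(s_plus) subject to
        X @ lam + s_minus = X[:, o],  Y @ lam - s_plus = Y[:, o],  vars >= 0.
    Zero optimal slacks mean unit o is efficient."""
    m, n = X.shape
    q = Y.shape[0]
    # Decision vector: [lam (n), s_minus (m), s_plus (q)]; linprog minimizes,
    # so negate the slack weights to maximize total inefficiency slack.
    c = np.concatenate([np.zeros(n), -w_in * np.ones(m), -w_out * np.ones(q)])
    A_eq = np.block([[X, np.eye(m), np.zeros((m, q))],
                     [Y, np.zeros((q, m)), -np.eye(q)]])
    b_eq = np.concatenate([X[:, o], Y[:, o]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.x[n:n + m], res.x[n + m:], res.x[:n]  # input/output slacks, lam

# Three units, two inputs (capital, labor), one output.
X = np.array([[2.0, 4.0, 8.0], [3.0, 2.0, 6.0]])
Y = np.array([[1.0, 1.0, 1.0]])
s_in, s_out, _ = additive_dea(X, Y, o=2)
print(s_in, s_out)   # unit 2 needs more inputs for the same output -> positive slacks
```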
Ordinary least-squares linear fitting assumes that, under certain conditions, the data contain no errors in the independent variable. To make linear data fitting recover more accurately the relationship between quantities in scientific experiments and engineering practice, this article analyzes the data errors of the common linear fitting method and proposes an improved least-distance-squares procedure based on the least-squares method. Finally, the paper discusses the advantages and disadvantages of the two linear fitting methods through worked examples and gives reasonable conditions for their application.
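The contrast between the two fits can be made concrete: ordinary least squares minimizes vertical offsets, while the least-distance (total least squares) fit minimizes perpendicular offsets and behaves better when both variables carry errors. A sketch with synthetic data:

```python
import numpy as np

def ols_line(x, y):
    """Ordinary least squares: minimizes vertical offsets, so it assumes
    all measurement error lives in y."""
    return np.polyfit(x, y, 1)                    # slope, intercept

def tls_line(x, y):
    """Least-distance (total least squares) fit: minimizes perpendicular
    offsets. The line direction is the leading right-singular vector of
    the centered data."""
    xm, ym = x.mean(), y.mean()
    _, _, Vt = np.linalg.svd(np.column_stack([x - xm, y - ym]))
    dx, dy = Vt[0]                                # direction of maximum variance
    slope = dy / dx
    return slope, ym - slope * xm

# Errors in both coordinates: TLS recovers the slope with less bias.
rng = np.random.default_rng(0)
x_true = np.linspace(0, 10, 200)
x = x_true + 0.5 * rng.standard_normal(200)
y = 2.0 * x_true + 1.0 + 0.5 * rng.standard_normal(200)
print(ols_line(x, y))   # slope biased toward zero (attenuation)
print(tls_line(x, y))   # slope close to 2
```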
Objective: To establish a data warehouse on acupuncture-moxibustion (acup-mox) methods, to explore valuable laws about the research and clinical application of acup-mox in a large body of literature by means of data mining techniques, and to promote acup-mox research and the effective treatment of diseases. Methods: According to the different types of acup-mox literature information, different subjects of the acup-mox literature are determined and the relevant databases are established. On top of the continuously enriched subject databases, a data warehouse catering to multiple subjects and multiple dimensions is set up, providing a platform for wider application of acup-mox literature information. Results: Based on the characteristics of the acup-mox literature, many subject databases, such as needling with filiform needles, moxibustion, etc., were established, and clinical treatment laws of acup-mox were revealed by applying data mining methods to the established databases. Conclusion: The acup-mox literature warehouse provides a standard data expression model, rich attributes and relations between different pieces of literature information for the study of acup-mox literature with more effective techniques, and a rich, standardized data basis for acup-mox research.
It is not reasonable that only the adjoint of the model can be used in data assimilation. A simulated numerical experiment shows that, for the tidal model, the result of the adjoint of the equation is almost the same as that of the adjoint of the model: the averaged absolute difference between observed and simulated amplitude is less than 5.0 cm, and that of the phase lag is less than 5.0°. Both results are in good agreement with the observed M2 tide in the Bohai Sea and the Yellow Sea. For comparison, traditional methods were also used to simulate the M2 tide in these seas: initial guesses of the boundary conditions are given first and then adjusted to bring the simulation as close as possible to the observations. As the boundary conditions contain 72 values, deciding which of them to adjust, and how, can only be partially solved by repeated manual adjustment, and satisfactory results are hard to obtain even with enormous effort. Here, the treatment of the open boundary conditions is automated. The method is unique and superior to the traditional methods. It is emphasized that using the adjoint of the equation avoids tedious and complicated mathematical derivation; the adjoint of the equation therefore deserves much attention.
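The gain from an adjoint can be illustrated with a deliberately tiny variational toy, far simpler than a tidal model: if a linear "model" maps boundary values to observations, its adjoint (here just a matrix transpose) supplies the misfit gradient for automatic boundary-condition optimization. All quantities below are hypothetical.

```python
import numpy as np

# Toy variational assimilation: 6 open-boundary values b map linearly to
# 40 interior observations, y = M b. The adjoint of this "equation" is M^T,
# which gives the gradient of the misfit for free at each iteration.
rng = np.random.default_rng(0)
M = rng.standard_normal((40, 6))
b_true = np.array([1.0, -0.5, 0.3, 0.8, -1.2, 0.4])
obs = M @ b_true + 0.01 * rng.standard_normal(40)

b = np.zeros(6)
for _ in range(500):            # steepest descent with the adjoint gradient
    residual = M @ b - obs      # forward run
    grad = M.T @ residual       # adjoint run: gradient of 0.5 * ||residual||^2
    b -= 0.01 * grad
print(np.round(b, 2))           # converges to b_true, no manual tuning of 6 values
```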
Properties of the Schwabe cycles in solar activity are investigated using the wavelet transform. We study the main range of the Schwabe cycles of solar activity recorded by relative sunspot numbers and find that it spans periods from 8 to 14 years. We compare the phase of the 11-year cycle between relative sunspot numbers and sunspot group numbers. The results show some difference between the two phases for the interval from 1710 to 1810, while the two phases are almost the same for the interval from 1810 to 1990.
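A minimal continuous-wavelet sketch of such a period analysis using PyWavelets on a synthetic sunspot-like record; the study used actual sunspot series, so treat this purely as a usage pattern.

```python
import numpy as np
import pywt

# Synthetic "sunspot-like" record with an 11-yr cycle (illustrative only).
t = np.arange(0, 300, 1.0)                         # years
sn = 60 + 50 * np.sin(2 * np.pi * t / 11)

# Continuous wavelet transform with a Morlet wavelet; pick scales so the
# corresponding periods cover the 8-14 yr Schwabe band and beyond.
scales = np.arange(2, 64)
coef, freqs = pywt.cwt(sn, scales, "morl", sampling_period=1.0)
periods = 1.0 / freqs

power = np.abs(coef) ** 2                          # wavelet power spectrum
dominant = periods[power.mean(axis=1).argmax()]    # period of peak mean power
print(f"dominant period ~ {dominant:.1f} yr")      # close to 11 yr
```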
The 10.7 cm solar radio flux (F10.7), the solar radio emission flux density at a wavelength of 10.7 cm, is a useful index of solar activity and a proxy for solar extreme ultraviolet radiation. Accurate predictions of F10.7 are meaningful and important for both long-term (months to years) and short-term (days) forecasting, and they are often used as inputs to space weather models. This study applies support vector regression (SVR), a kernel-based machine learning technique, to forecasting daily values of F10.7, with the aim of examining the feasibility of SVR for short-term F10.7 forecasting. The SVR approach reduces the dimension of the feature space in the training process through a kernel-based learning algorithm, so the computation becomes cheaper and a small amount of training data suffices. The time series of F10.7 from 2002 to 2006 is employed as the data set. The performance of the approach is estimated by calculating the normalized mean square error and the mean absolute percentage error. It is shown that our approach can perform well using fewer training data points than a traditional neural network.
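A hedged sketch of the forecasting setup: embed the daily series with lagged values and regress with an RBF-kernel SVR. The lag length, hyperparameters and synthetic series below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def make_lagged(series, n_lags=27):
    """Embed a daily series: predict day t from the previous n_lags days."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

# Synthetic F10.7-like series (27-day rotational modulation plus noise);
# replace with real daily F10.7 values in practice.
rng = np.random.default_rng(0)
t = np.arange(1500.0)
f107 = 120 + 40 * np.sin(2 * np.pi * t / 27) + 5 * rng.standard_normal(1500)

X, y = make_lagged(f107)
split = len(y) - 200                       # hold out the last 200 days
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

mape = 100 * np.mean(np.abs(pred - y[split:]) / y[split:])
print(f"MAPE = {mape:.2f}%")
```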
A technique for timescale analysis of spectral lags performed directly in the time domain is developed. Simulation studies are made to compare the time-domain technique with Fourier frequency analysis of spectral time lags. The technique is applied to studying the rapid variability of X-ray binaries and γ-ray bursts. The results indicate that, in comparison with Fourier analysis, the timescale analysis technique is more powerful for the study of spectral lags in rapid variability on short time scales and in short-duration flaring phenomena.
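In its simplest time-domain form, a spectral lag can be estimated as the shift that maximizes the cross-correlation between two energy-band light curves; the sketch below is this elementary estimator, not the paper's full timescale-resolved technique.

```python
import numpy as np

def spectral_lag(soft, hard, dt):
    """Time-domain lag estimate: the shift that maximizes the
    cross-correlation between two energy-band light curves.
    A positive result means the hard band leads the soft band."""
    s = soft - soft.mean()
    h = hard - hard.mean()
    ccf = np.correlate(s, h, mode="full")
    lags = np.arange(-len(s) + 1, len(s)) * dt
    return lags[ccf.argmax()]

# Toy light curves: the soft band is the hard band delayed by 0.3 s.
rng = np.random.default_rng(0)
dt = 0.01
hard = np.convolve(rng.standard_normal(4000), np.ones(50) / 50, mode="same")
soft = np.roll(hard, int(0.3 / dt)) + 0.05 * rng.standard_normal(4000)
print(spectral_lag(soft, hard, dt))   # ~0.3 s
```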