When used for separating multi-component non-stationary signals, the adaptive time-varying filter (ATF) based on multi-scale chirplet sparse signal decomposition (MCSSD) generates phase shift and signal distortion. To overcome this drawback, the zero phase filter is introduced into the mentioned filter, and a fault diagnosis method for speed-changing gearboxes is proposed. Firstly, the gear meshing frequency of each gearbox is estimated by chirplet path pursuit. Then, according to the estimated gear meshing frequencies, an adaptive zero phase time-varying filter (AZPTF) is designed to filter the original signal. Finally, the basis for fault diagnosis is acquired by envelope order analysis of the filtered signal. A signal consisting of two time-varying amplitude modulation and frequency modulation (AM-FM) signals is analyzed by both ATF and AZPTF based on MCSSD. The simulation results show that the variances between the original signals and the filtered signals yielded by AZPTF based on MCSSD are 13.67 and 41.14, far less than the variances (323.45 and 482.86) between the original signals and the filtered signals obtained by ATF based on MCSSD. The experimental results on the vibration signals of gearboxes indicate that the vibration signals of two speed-changing gearboxes installed on one foundation bed can be separated effectively by AZPTF. Based on the demodulation information of the vibration signal of each gearbox, fault diagnosis can be implemented. Both simulation and experimental examples prove that the proposed filter can extract a mono-component time-varying AM-FM signal from a multi-component time-varying AM-FM signal without distortion.
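The phase-cancellation idea behind a zero-phase filter can be sketched with a fixed moving-average filter (a simple stand-in for the paper's time-varying AZPTF, whose MCSSD-derived pass bands are not reproduced here): running the same causal filter forward and then backward over the signal cancels its group delay.

```python
import numpy as np

def causal_ma(x, k):
    """Causal k-point moving average: each output lags the input."""
    return np.convolve(x, np.ones(k) / k, mode="full")[:len(x)]

def zero_phase_ma(x, k):
    """Forward pass, then the same filter on the reversed signal:
    the two group delays cancel, leaving zero net phase shift."""
    y = causal_ma(x, k)
    return causal_ma(y[::-1], k)[::-1]

def centroid(x):
    """Energy centre of a signal, used here to measure delay."""
    i = np.arange(len(x))
    return float(np.sum(i * x) / np.sum(x))

x = np.zeros(101)
x[50] = 1.0                                   # unit impulse at n = 50
print(round(centroid(causal_ma(x, 5)), 6))     # -> 52.0 (one pass delays by (k-1)/2)
print(round(centroid(zero_phase_ma(x, 5)), 6))  # -> 50.0 (two passes: no shift)
```

This is the same principle behind standard "filtfilt"-style zero-phase filtering; the AZPTF additionally lets the pass band track the estimated meshing frequencies over time.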
The relationships between soil total nitrogen (STN) and its influencing factors are scale-dependent. The objective of this study was to identify the multi-scale spatial relationships of STN with selected environmental factors (elevation, slope, and topographic wetness index), intrinsic soil factors (soil bulk density, sand content, silt content, and clay content), and combined environmental factors (including the first two principal components (PC1 and PC2) of the Vis-NIR soil spectra) along three sampling transects located at the upstream, midstream, and downstream of Taiyuan Basin on the Chinese Loess Plateau. We separated the multivariate data series of STN and influencing factors at each transect into six intrinsic mode functions (IMFs) and one residue by multivariate empirical mode decomposition (MEMD). Meanwhile, we obtained prediction equations for STN based on MEMD by stepwise multiple linear regression (SMLR). The results indicated that the dominant scales of explained variance in STN were at a scale of 995 m for transect 1, at scales of 956 and 8852 m for transect 2, and at scales of 972, 5716, and 12,317 m for transect 3. Multi-scale correlation coefficients between STN and influencing factors were less significant in transect 3 than in transects 1 and 2. The goodness-of-fit measures, root mean square error (RMSE), normalized root mean square error (NRMSE), and coefficient of determination (R2), indicated that predicting STN at the sampling scale by summing all of the predicted IMFs and the residue was more accurate than predicting by SMLR directly. Therefore, the multi-scale method of MEMD has good potential for characterizing the multi-scale spatial relationships between STN and influencing factors at the basin landscape scale.
On the basis of absolute and relative gravity observations in North China, the spatial dynamic variation of regional gravity fields is obtained. A multi-scale decomposition technique is used to separate anomalies at different depths and to explain gravity variations at different time-space scales. Gravity variation trends in North China are thereby refined. Based on this result and the analysis of the wavelet power spectrum, images of the depth of the wavelet approximation and detail are obtained. The results are of scientific significance for a deep understanding of potential seismic risk in North China inferred from gravity variations at different time-space scales.
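The separation of anomalies at different depths can be illustrated in miniature with a one-level Haar transform (used here purely as an illustration; the study's actual wavelet family is not stated in the abstract): the approximation keeps the smooth, long-wavelength part of a gravity profile and the detail keeps the short-wavelength residual, and the two recombine exactly.

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform: the approximation keeps
    the smooth (deep, long-wavelength) part of the field, the detail
    keeps the short-wavelength residual."""
    a = (x[0::2] + x[1::2]) / 2.0
    d = (x[0::2] - x[1::2]) / 2.0
    return a, d

def haar_inverse(a, d):
    """Exact reconstruction from one approximation/detail pair."""
    x = np.empty(2 * len(a))
    x[0::2] = a + d
    x[1::2] = a - d
    return x

g = np.array([1.0, 3.0, 2.0, 6.0, 5.0, 5.0, 1.0, 3.0])  # toy gravity profile
a, d = haar_step(g)
print(bool(np.allclose(haar_inverse(a, d), g)))  # -> True
```

Repeating `haar_step` on successive approximations yields the multi-scale decomposition whose levels are associated with increasing source depths.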
A nonlinear data analysis algorithm, namely empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is automatically determined by the observed data and is able to implement multi-resolution analysis like the wavelet transform. The algorithm is suitable for analyzing non-stationary data and can effectively remove the correlation in observed data. Then, through discussing the applications of EDD in image compression, the paper presents a two-dimensional data decomposition framework and makes some modifications to the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is more suitable for non-stationary image data compression.
Monte Carlo transport simulations of a full-core reactor with a high-fidelity structure have been made possible by modern-day computing capabilities. Performing transport-burnup calculations of a full-core model typically involves millions of burnup areas requiring hundreds of gigabytes of memory for burnup-related tallies. This paper presents a study of a parallel computing method for full-core Monte Carlo transport-burnup calculations and the development of a thread-level data decomposition method. The proposed method decomposes tally accumulators across different threads and improves the parallel communication pattern and memory access efficiency. A typical pressurized water reactor burnup assembly, along with the benchmark for evaluation and validation of reactor simulations model, was used to test the proposed method. The results indicate that the method effectively reduces memory consumption and maintains high parallel efficiency.
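The thread-level decomposition of tally accumulators can be sketched schematically (hypothetical code, not the paper's implementation): each thread scores into a private array and one reduction merges them, so the scoring loop needs no locks. Note that CPython threads interleave rather than run truly in parallel; the point here is the data layout, not the speedup.

```python
import threading
import numpy as np

def tally_parallel(samples, n_bins, n_threads=4):
    """Each thread scores into its own private tally array; the arrays
    are merged once at the end, so the scoring loop needs no locks."""
    partial = [np.zeros(n_bins) for _ in range(n_threads)]
    chunks = np.array_split(samples, n_threads)

    def worker(tid):
        local = partial[tid]                     # thread-private accumulator
        for bin_idx, score in chunks[tid]:
            local[int(bin_idx)] += score

    threads = [threading.Thread(target=worker, args=(t,))
               for t in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return np.sum(partial, axis=0)               # single reduction step

rng = np.random.default_rng(0)
samples = np.column_stack([rng.integers(0, 8, 1000).astype(float),
                           np.ones(1000)])       # (bin index, score) pairs
tally = tally_parallel(samples, 8)
print(tally.sum())  # -> 1000.0
```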
Noise has traditionally been suppressed or eliminated in seismic data sets by the use of Fourier filters and, to a lesser degree, nonlinear statistical filters. Although these methods are quite useful under specific conditions, they may produce undesirable effects for low signal-to-noise ratio data. In this paper, a new method, the multi-scale ridgelet transform, is used in light of the theory of the ridgelet transform. We employ the wavelet transform to perform sub-band decomposition of the signals and then use nonlinear thresholding in the ridgelet domain for every block. In other words, the method is based on the idea of partitioning: at a sufficiently fine scale, a curving singularity looks straight, so the ridgelet transform works well in such cases. Applications on both synthetic data and actual seismic data from the Sichuan Basin, South China, show that the new method eliminates the noise portion of the signal more efficiently and retains a greater amount of geologic information than other methods; the quality and continuity of seismic events, as well as the quality of the section, are improved markedly.
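Per coefficient, the block-wise nonlinear thresholding reduces to the standard soft-threshold rule; a sketch with the transform itself omitted:

```python
import numpy as np

def soft_threshold(c, t):
    """Shrink each transform coefficient toward zero by t; coefficients
    below the threshold (noise-dominated) are zeroed, large ones
    (signal-dominated) survive with their magnitude reduced by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

c = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])
d = soft_threshold(c, 1.0)
print(np.count_nonzero(d))  # -> 2 (only the two large coefficients remain)
```

In the full method the same rule is applied to the ridgelet coefficients of each block, then the blocks are inverse-transformed and reassembled.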
Recent advances in deep learning have opened new possibilities for fluid flow simulation in petroleum reservoirs. However, the predominant approach in existing research is to train neural networks using high-fidelity numerical simulation data. This presents a significant challenge because the sole source of authentic training data, wellbore production records, is sparse. In response to this challenge, this work introduces a novel architecture called the physics-informed neural network based on domain decomposition (PINN-DD), aiming to effectively utilize the sparse production data of wells for reservoir simulation in large-scale systems. To harness the capabilities of physics-informed neural networks (PINNs) in handling small-scale spatial-temporal domains while addressing the challenges of large-scale systems with sparse labeled data, the computational domain is divided into two distinct sub-domains: the well-containing sub-domain and the well-free sub-domain. Moreover, the two sub-domains and their interface are rigorously constrained by the governing equations, data matching, and boundary conditions. The accuracy of the proposed method is evaluated on two problems, and its performance is compared against state-of-the-art PINNs through numerical analysis as a benchmark. The results demonstrate the superiority of PINN-DD in handling large-scale reservoir simulation with limited data and show its potential to outperform conventional PINNs in such scenarios.
Seismic illumination plays an important role in subsurface imaging. A better image can be expected either by optimizing the acquisition geometry or by introducing more advanced seismic migration and/or tomographic inversion methods involving illumination compensation. The vertical cable survey is a potential replacement for the traditional marine seismic survey owing to its flexibility and data quality. Conventional vertical cable data processing requires separation of primaries and multiples before migration. We propose to use multi-scale full waveform inversion (FWI) to improve the illumination coverage of vertical cable surveys. A deep-water velocity model is built to test the capability of multi-scale FWI in detecting low-velocity anomalies below the seabed. Synthetic results show that multi-scale FWI is an effective model building tool in deep-water exploration. Geometry optimization through target-oriented illumination analysis and multi-scale FWI may help to mitigate the risks of vertical cable surveys. The combination of multi-scale FWI, low-frequency data, and a multi-vertical-cable acquisition system may provide both high-resolution and high-fidelity subsurface models.
The multi-scale expression of enormously complicated laneway data requires differentiation of both the contents and the way the contents are expressed. To accomplish multi-scale expression, laneway data must support multi-scale transformation and have consistent topological relationships. Although the laneway data generated by traverse surveying is non-scale data, it is still impossible to construct a multi-scale spatial database directly from it. In this paper, an algorithm is presented to first calculate the laneway mid-line to support multi-scale transformation, and then to express the topological relationships arising from the data structure; finally, a laneway spatial database is built and multi-scale expression is achieved using the component GIS SuperMap Objects. The research result is of great significance for improving the efficiency of laneway data storage and updating, for ensuring consistency of laneway data expression, and for extending the potential value of a mine spatial database.
Multivariate time series forecasting is widely used in traffic planning, weather forecasting, and energy consumption. Series decomposition algorithms can help models better understand the underlying patterns of the original series and thus improve the forecasting accuracy of multivariate time series. However, the decomposition kernel of previous decomposition-based models is fixed, and these models have not considered the differences in frequency fluctuations between components. These problems make it difficult to analyze the intricate temporal variations of real-world time series. In this paper, we propose a series decomposition-based Mamba model, DecMamba, to capture the intricate temporal dependencies and the dependencies among different variables of multivariate time series. A variable-level adaptive kernel combination search module is designed to exchange information on the different trends and periods between variables. Two backbone structures are proposed to account for the differences in frequency fluctuations between the seasonal and trend components. Mamba, with its superior performance, is used instead of a Transformer in the backbone structures to capture the dependencies among different variables. A new embedding block is designed to better capture the temporal features, especially for the high-frequency seasonal component, whose semantic information is difficult to acquire. A gating mechanism is introduced into the decoder of the seasonal backbone to improve prediction accuracy. A comparison with ten state-of-the-art models on seven real-world datasets demonstrates that DecMamba better models the temporal dependencies and the dependencies among different variables, guaranteeing better prediction performance for multivariate time series.
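The "fixed decomposition kernel" the paper criticizes is typically a single moving average that splits a series into trend and seasonal parts; a sketch of that baseline (DecMamba's adaptive kernel combination search is not reproduced here):

```python
import numpy as np

def series_decomp(x, k):
    """Fixed-kernel series decomposition used by many forecasting models:
    trend = k-point moving average (k assumed odd, ends padded by
    repetition), seasonal = original minus trend."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    trend = np.convolve(xp, np.ones(k) / k, mode="valid")
    seasonal = x - trend
    return trend, seasonal

t = np.arange(200)
x = 0.05 * t + np.sin(2.0 * np.pi * t / 10.0)  # slow trend + fast season
trend, seasonal = series_decomp(x, 25)
print(bool(np.allclose(trend + seasonal, x)))  # -> True
```

Because the kernel size `k` is fixed, a single choice cannot match every variable's dominant period, which is the motivation for searching kernel combinations per variable.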
The Secondary Air System (SAS) plays an important role in the safe operation and performance of aeroengines. The traditional 1D-3D coupling method loses information when used for secondary air systems, which affects the calculation accuracy. In this paper, a Cross-dimensional Data Transmission method (CDT) from 3D to 1D is proposed by introducing flow field uniformity into the data transmission. First, a uniformity index was established to quantify the distribution characteristics of the flow field parameters, and a uniformity index prediction model based on the locally weighted regression method (Lowess) was established to quickly obtain the flow field information. Then, an information selection criterion for 3D-to-1D data transmission was established based on the Spearman rank correlation coefficient between the uniformity index and the accuracy of the coupling calculation, and the calculation method is automatically determined according to the established criterion. Finally, a modification function was obtained by fitting the ratio of the 3D mass-averaged parameters to the analytical solution, and it is then used to modify the selected parameters at the 1D-3D interface. Taking a typical disk cavity air system as an example, the results show that the calculation accuracy of the CDT method is greatly improved, by a relative 53.88%, compared with the traditional 1D-3D coupling method. Furthermore, the CDT method achieves a speedup of 2 to 3 orders of magnitude compared to the 3D calculation.
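As an illustration of the two statistical building blocks, here is a toy uniformity index and a from-scratch Spearman rank correlation (the paper's exact index definition is not given in the abstract, so this definition is hypothetical):

```python
import numpy as np

def uniformity_index(p):
    """Toy non-uniformity measure (1 = perfectly uniform field); the
    paper's actual index definition is not given in the abstract."""
    p = np.asarray(p, dtype=float)
    return float(1.0 - np.std(p) / np.mean(p))

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks
    (no tie handling, which the tie-free data here does not need)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

print(round(spearman([1, 2, 3, 4], [10, 20, 30, 40]), 6))  # -> 1.0
print(round(spearman([1, 2, 3, 4], [40, 30, 20, 10]), 6))  # -> -1.0
print(round(uniformity_index([5.0, 5.0, 5.0]), 6))         # -> 1.0
```

In the paper, the rank correlation between such an index and the coupling-calculation accuracy is what drives the automatic choice of transmission method.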
Purpose: We propose InParTen2, a multi-aspect parallel factor analysis three-dimensional tensor decomposition algorithm based on the Apache Spark framework. The proposed method reduces re-decomposition cost and can handle large tensors.
Design/methodology/approach: Considering that tensor addition increases the size of a given tensor along all axes, the proposed method decomposes incoming tensors using existing decomposition results without generating sub-tensors. Additionally, InParTen2 avoids the calculation of Khatri–Rao products and minimizes shuffling by using the Apache Spark platform.
Findings: The performance of InParTen2 is evaluated by comparing its execution time and accuracy with those of existing distributed tensor decomposition methods on various datasets. The results confirm that InParTen2 can process large tensors and reduce the re-calculation cost of tensor decomposition. Consequently, the proposed method is faster than existing tensor decomposition algorithms and can significantly reduce re-decomposition cost.
Research limitations: There are several Hadoop-based distributed tensor decomposition algorithms as well as MATLAB-based decomposition methods. However, the former require longer iteration times, and therefore their execution time cannot be fairly compared with that of Spark-based algorithms, whereas the latter run on a single machine, limiting their ability to handle large data.
Practical implications: The proposed algorithm can reduce re-decomposition cost when tensors are added to a given tensor, by decomposing them based on existing decomposition results without re-decomposing the entire tensor.
Originality/value: The proposed method can handle large tensors and is fast within the limited-memory framework of Apache Spark. Moreover, InParTen2 can handle static as well as incremental tensor decomposition.
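The Khatri–Rao product that InParTen2 avoids materializing is the column-wise Kronecker product appearing in ALS updates for PARAFAC; a direct dense version shows why it is expensive:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product: column r of the result is
    kron(A[:, r], B[:, r]).  Its I*J rows are what make materializing
    it costly in PARAFAC/ALS updates on large tensors."""
    I, R = A.shape
    J, R2 = B.shape
    assert R == R2, "factor matrices must share the column count"
    return np.einsum("ir,jr->ijr", A, B).reshape(I * J, R)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
KR = khatri_rao(A, B)
print(KR.shape)  # -> (4, 2)
print(bool(np.allclose(KR[:, 0], np.kron(A[:, 0], B[:, 0]))))  # -> True
```

For factor matrices with millions of rows, the I*J-row intermediate dominates memory and shuffle traffic, which is why distributed algorithms reorganize the computation to avoid forming it.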
The complex nonlinear and non-stationary features exhibited in hydrologic sequences make hydrological analysis and forecasting difficult. Currently, some hydrologists employ the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) method, a new time-frequency analysis method based on the empirical mode decomposition (EMD) algorithm, to decompose non-stationary raw data in order to obtain relatively stationary components for further study. However, the endpoint effect in CEEMDAN is often neglected, which can lead to decomposition errors that reduce the accuracy of the research results. In this study, we processed an original runoff sequence using the radial basis function neural network (RBFNN) technique to obtain an extension sequence before applying CEEMDAN decomposition. Then, we compared the decomposition results of the original sequence, the RBFNN extension sequence, and the standard sequence to investigate the influence of the endpoint effect and the RBFNN extension on the CEEMDAN method. The results indicated that the RBFNN extension technique effectively reduced the error of the medium- and low-frequency components caused by the endpoint effect. At both ends of the components, the extension sequence more accurately reflected the true fluctuation characteristics and variation trends. These advances are of great significance to subsequent studies in hydrology. Therefore, the CEEMDAN method, combined with an appropriate extension of the original runoff series, can more precisely determine multi-time-scale characteristics and provide a credible basis for the analysis of hydrologic time series and hydrological forecasting.
Due to the conflict between the huge amount of map data and limited network bandwidth, rapid transmission of vector map data over the Internet has become a bottleneck of spatial data delivery in web-based environments. This paper proposes an approach to organizing and transmitting multi-scale vector river network data via the Internet progressively. The approach takes account of two levels of importance, i.e., the importance of river branches and the importance of the points belonging to each river branch, and forms data packages according to these. Our experiments have shown that the proposed approach can reduce 90% of the original data while preserving the river structure well.
With the development of the smart grid and energy internet, there is a significant increase in the amount of data transmitted in real time. Due to the mismatch with communication networks that were not designed to carry high-speed, real-time data, data losses and data quality degradation may happen constantly. To address this problem, according to the strong spatial and temporal correlation of electricity data, which is generated by human actions and habits, we build a low-rank electricity data matrix where the rows are time and the columns are users. Inspired by matrix decomposition, we factor the low-rank electricity data matrix into the product of two small matrices and use the known data to approximate the low-rank electricity data matrix and recover the missing electrical data. Based on real electricity data, we analyze the low-rankness of the electricity data matrix and apply the matrix decomposition-based method to the real data. The experimental results verify the effectiveness and efficiency of the proposed scheme.
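The factorization idea can be sketched as alternating least squares on the observed entries (an illustrative stand-in; the paper's exact recovery algorithm is not specified in the abstract):

```python
import numpy as np

def recover_low_rank(M_obs, mask, rank=1, iters=100, lam=1e-8):
    """Alternating least squares on M ≈ U @ V.T, fitted only to the
    observed entries (mask == 1), then used to fill in the missing ones."""
    n, m = M_obs.shape
    rng = np.random.default_rng(1)
    U = rng.standard_normal((n, rank))
    V = rng.standard_normal((m, rank))
    for _ in range(iters):
        for i in range(n):                       # re-fit each row factor
            w = mask[i] == 1
            G = V[w].T @ V[w] + lam * np.eye(rank)
            U[i] = np.linalg.solve(G, V[w].T @ M_obs[i, w])
        for j in range(m):                       # re-fit each column factor
            w = mask[:, j] == 1
            G = U[w].T @ U[w] + lam * np.eye(rank)
            V[j] = np.linalg.solve(G, U[w].T @ M_obs[w, j])
    return U @ V.T

# Rank-1 "time x user" load matrix with three lost readings.
M = np.outer([1.0, 2.0, 3.0, 4.0], [2.0, 1.0, 3.0, 2.0])
mask = np.ones_like(M)
mask[0, 1] = mask[2, 3] = mask[3, 0] = 0
R = recover_low_rank(M * mask, mask, rank=1)
print(bool(np.max(np.abs(R - M)) < 1e-4))  # -> True
```

The missing entries are simply read off the completed product `U @ V.T`; the low-rank assumption is what makes this well-posed.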
A novel gappy technology, the gappy autoencoder with proper orthogonal decomposition (Gappy POD-AE), is proposed for reconstructing physical fields from sparse data. High-dimensional data are reduced via proper orthogonal decomposition (POD), and the low-dimensional data are used to train an autoencoder (AE). By integrating the POD operator with the decoder, a nonlinear solution form is established and incorporated into a new maximum-a-posteriori (MAP)-based objective for online reconstruction. The numerical results on the two-dimensional (2D) Bhatnagar-Gross-Krook-Boltzmann (BGK-Boltzmann) equation, the wave equation, the shallow-water equation, and satellite data show that Gappy POD-AE achieves higher accuracy than gappy proper orthogonal decomposition (Gappy POD), especially for data with slowly decaying singular values, and is more efficient in training than the gappy autoencoder (Gappy AE). The MAP-based formulation and the new gappy procedure further enhance the reconstruction accuracy.
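The Gappy POD baseline that the paper builds on is a least-squares fit of a few POD modes to the observed entries; a compact sketch:

```python
import numpy as np

def gappy_pod(snapshots, x_obs, mask, r=2):
    """Classical Gappy POD (the linear baseline the paper builds on):
    least-squares fit of the r leading POD modes to the observed
    entries of x, then evaluation of the modes on the full grid."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    Ur = U[:, :r]
    coeffs, *_ = np.linalg.lstsq(Ur[mask], x_obs[mask], rcond=None)
    return Ur @ coeffs

n = 50
grid = np.linspace(0.0, 2.0 * np.pi, n)
modes = np.column_stack([np.sin(grid), np.cos(2.0 * grid)])
snapshots = modes @ np.random.default_rng(0).standard_normal((2, 20))
x_true = modes @ np.array([1.5, -0.7])           # field to reconstruct
mask = np.zeros(n, dtype=bool)
mask[::5] = True                                 # only every 5th sensor reports
x_rec = gappy_pod(snapshots, x_true * mask, mask)
print(bool(np.allclose(x_rec, x_true, atol=1e-6)))  # -> True
```

Gappy POD-AE replaces the purely linear modal expansion with a decoder composed with the POD operator, which is what helps when the singular values decay slowly.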
A modified multiple-component scattering power decomposition for analyzing polarimetric synthetic aperture radar (PolSAR) data is proposed. The modified decomposition involves two distinct steps. Firstly, the eigenvectors of the coherency matrix are used to modify the scattering models. Secondly, the entropy and anisotropy of targets are used to improve the volume scattering power. While guaranteeing high double-bounce scattering power in urban areas, the proposed algorithm effectively improves the volume scattering power of vegetated areas. The efficacy of the modified multiple-component scattering power decomposition is validated using actual AIRSAR PolSAR data. The scattering power obtained by decomposing the original coherency matrix and the coherency matrix after orientation angle compensation is compared with that of three other algorithms. Results from the experiment demonstrate that the proposed decomposition yields more effective scattering power for different PolSAR data sets.
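The entropy and anisotropy used in the second step are standard functions of the coherency-matrix eigenvalues; a sketch:

```python
import numpy as np

def entropy_anisotropy(eigvals):
    """H and A from the eigenvalues of the 3x3 coherency matrix:
    H (base-3 entropy of the normalized eigenvalues) measures scattering
    randomness; A compares the two weaker scattering mechanisms."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    p = lam / lam.sum()
    H = float(-np.sum(p * np.log(p)) / np.log(3.0))
    A = float((lam[1] - lam[2]) / (lam[1] + lam[2]))
    return H, A

H, A = entropy_anisotropy([1.0, 1.0, 1.0])   # fully depolarized target
print(round(H, 6), round(A, 6))              # -> 1.0 0.0
```

High H with moderate A is typical of vegetation canopies, which is why these two parameters are natural knobs for adjusting the volume scattering power.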
How can we efficiently store and mine dynamically generated dense tensors for modeling the behavior of multidimensional dynamic data? Much of the multidimensional dynamic data in the real world is generated in the form of time-growing tensors. For example, air quality tensor data consist of multiple sensor values gathered from many locations over a long time. Such data, accumulated over time, are redundant and consume a lot of memory in their raw form. We need a way to efficiently store dynamically generated tensor data that grow over time and to model their behavior on demand between arbitrary time blocks. To this end, we propose a Block Incremental Dense Tucker Decomposition (BID-Tucker) method for efficient storage and on-demand modeling of multidimensional spatiotemporal data. Assuming that tensors come in unit blocks where only the time domain changes, our proposed BID-Tucker first slices the blocks into matrices and decomposes them via singular value decomposition (SVD). The SVDs of the time-by-space sliced matrices are stored instead of the raw tensor blocks to save space. When modeling from the data is required at particular time blocks, the SVDs of the corresponding time blocks are retrieved and incremented to be used for Tucker decomposition. The factor matrices and core tensor of the decomposed results can then be used for further data analysis. We compared our proposed BID-Tucker with D-Tucker, which our method extends, and with vanilla Tucker decomposition. We show that BID-Tucker is faster than both D-Tucker and vanilla Tucker decomposition and uses less memory for storage, with a comparable reconstruction error. We applied BID-Tucker to model the spatial and temporal trends of air quality data collected in South Korea from 2018 to 2022; we were able to model those trends and to verify unusual events, such as chronic ozone alerts and large fire events.
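The space-saving step, storing the SVD of each time-by-space slice instead of the raw block, can be sketched as follows (a simplified, single-block version of BID-Tucker's storage phase):

```python
import numpy as np

def store_block(block, r):
    """Keep only the rank-r SVD factors of a time x space slice instead
    of the raw block (the storage idea behind BID-Tucker, simplified
    to a single block)."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return U[:, :r], s[:r], Vt[:r]

def restore_block(U, s, Vt):
    return (U * s) @ Vt

rng = np.random.default_rng(0)
block = rng.standard_normal((24, 3)) @ rng.standard_normal((3, 100))  # rank 3
U, s, Vt = store_block(block, 3)
saved = U.size + s.size + Vt.size
print(saved, block.size)                                   # -> 375 2400
print(bool(np.allclose(restore_block(U, s, Vt), block)))   # -> True
```

In the full method, the stored per-block factors are what get retrieved and incrementally combined into a Tucker decomposition when a time range is queried.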
Two methods of computer data processing, linear fitting and nonlinear fitting, are applied to compute the rate constant of the hydrogen peroxide decomposition reaction. The results indicate that not only do the new methods work without the necessity of measuring the final oxygen volume, but the fitting errors also decrease evidently.
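For first-order kinetics V(t) = V_inf(1 - exp(-kt)), the classical linear route fits ln(V_inf - V) against t; the nonlinear route alluded to above additionally estimates V_inf itself, which is why the final oxygen volume need not be measured. A sketch of the linear fit on synthetic data (illustrative numbers, not the paper's measurements):

```python
import numpy as np

def rate_constant_linear(t, V, V_inf):
    """First-order kinetics: V(t) = V_inf * (1 - exp(-k t)), hence
    ln(V_inf - V) is linear in t with slope -k."""
    slope, _ = np.polyfit(t, np.log(V_inf - V), 1)
    return float(-slope)

# Synthetic oxygen-volume readings with k = 0.12 min^-1 (made-up numbers).
k_true, V_inf = 0.12, 60.0
t = np.linspace(0.0, 20.0, 15)
V = V_inf * (1.0 - np.exp(-k_true * t))
print(round(rate_constant_linear(t, V, V_inf), 6))  # -> 0.12
```

A nonlinear least-squares variant would fit both k and V_inf to the raw (t, V) pairs directly, removing the need for the asymptotic measurement.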
Seismic data reconstruction can provide high-density sampling and regular input data for inversion and imaging, playing a crucial role in seismic data processing. In seismic data reconstruction, a common scenario involves a significant distance between the source and the first receiver, which makes it impossible to acquire near-offset data. A new workflow for seismic data extrapolation, based on a multi-scale dynamic time warping (MS-DTW) algorithm, is proposed to address this issue. MS-DTW can accurately calculate the time shift between two time series and is a robust method for predicting time-offset (t-x) domain data. Using the time shift calculated by MS-DTW as the basic input, the two-way traveltime (TWT) of other traces is predicted from the TWT of the reference trace. Autoregressive polynomial fitting is performed on the TWT, and the TWT is extrapolated based on the fitted polynomial coefficients. Amplitude information is then extracted along the TWT curve, the amplitude curve is fitted, and the amplitude is extrapolated using the polynomial coefficients. The proposed workflow does not necessitate converting the data to other domains and does not require prior knowledge of underground geological information. It applies to both isotropic and anisotropic media. The effectiveness of the workflow was verified on both synthetic and field data. The results show that, compared with the method of predictive painting based on local slope, this approach can accurately predict missing near-offset seismic signals and demonstrates good robustness to noise.
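The time-shift computation at the core of MS-DTW rests on ordinary dynamic time warping; a baseline O(nm) implementation (the multi-scale coarse-to-fine refinement is omitted):

```python
import numpy as np

def dtw(a, b):
    """Plain O(n*m) dynamic time warping distance; the multi-scale
    variant refines a warping path found on coarsened copies of the
    traces to cut this cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

a = np.sin(np.linspace(0.0, 2.0 * np.pi, 40))
b = np.sin(np.linspace(0.0, 2.0 * np.pi, 40) + 0.3)   # time-shifted trace
print(dtw(a, a))                                      # -> 0.0
print(dtw(a, b) <= float(np.sum(np.abs(a - b))))      # -> True
```

Backtracking through `D` recovers the optimal warping path, whose deviation from the diagonal is the trace-to-trace time shift used to propagate traveltimes between neighboring traces.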
Funding: Supported by the National Natural Science Foundation of China (Grant No. 71271078), the National Hi-tech Research and Development Program of China (863 Program, Grant No. 2009AA04Z414), and the Integration of Industry, Education and Research of Guangdong Province and Ministry of Education of China (Grant No. 2009B090300312).
Funding: Financially supported by the Research Project of Shanxi Scholarship Council of China (2017-075), the Natural Science Foundation for Young Scientists of Shanxi Province (201801D221103), and the Innovation Grant of Shanxi Agricultural University (2017ZZ07).
Funding: funded by the Special Fund for Earthquake Scientific Research of China (201308004, 201308009).
Abstract: On the basis of absolute and relative gravity observations in North China, the spatial dynamic variation of regional gravity fields is obtained. A multi-scale decomposition technique is used to separate anomalies at different depths and to interpret gravity variations at different time-space scales, improving estimates of gravity variation trends in North China. Based on this result and wavelet power spectrum analysis, images of the depths corresponding to the wavelet approximations and details are obtained. These results are of scientific significance for a deeper understanding, from gravity variations at different time-space scales, of the potential seismic risk in North China.
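One level of a wavelet-style multi-scale split, sketched with the Haar transform in plain numpy: the approximation carries the smooth, large-scale part of the field and the detail carries the short-scale part; gravity studies stack several such levels. The signal values are illustrative.

```python
import numpy as np

def haar_step(x):
    """Split an even-length signal into approximation and detail coefficients."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)   # smooth, large-scale content
    detail = (even - odd) / np.sqrt(2)   # short-scale fluctuations
    return approx, detail

def haar_inverse(approx, detail):
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_step(x)
x_rec = haar_inverse(a, d)
print(np.allclose(x_rec, x))  # True: the decomposition is perfectly invertible
```

Recursing on the approximation yields the multi-level decomposition whose levels are associated with increasing source depths in the abstract.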
Funding: This project was supported by the National Natural Science Foundation of China (60532060), the Hainan Education Bureau Research Project (Hjkj200602), and the Hainan Natural Science Foundation (80551).
Abstract: A nonlinear data analysis algorithm, empirical data decomposition (EDD), is proposed, which performs adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is determined automatically from the observed data and can implement multi-resolution analysis like the wavelet transform. The algorithm is suitable for analyzing non-stationary data and can effectively remove correlation within the observed data. The paper then discusses applications of EDD in image compression, presents a two-dimensional data decomposition framework, and modifies the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is better suited to compressing non-stationary image data.
Funding: supported by the Innovation Foundation of the Chinese Academy of Sciences (No. CXJJ-16Q231), the National Natural Science Foundation of China (No. 11305203), the Special Program for Informatization of the Chinese Academy of Sciences (No. XXH12504-1-09), the Anhui Provincial Special Project for High Technology Industry, the Special Project of the Youth Innovation Promotion Association of the Chinese Academy of Sciences, and the Industrialization Fund.
Abstract: Monte Carlo transport simulations of a full-core reactor with a high-fidelity structure have been made possible by modern computing capabilities. Performing transport-burnup calculations of a full-core model typically involves millions of burnup regions, requiring hundreds of gigabytes of memory for burnup-related tallies. This paper presents a parallel computing method for full-core Monte Carlo transport-burnup calculations and the development of a thread-level data decomposition method. The proposed method decomposes tally accumulators across different threads and improves the parallel communication pattern and memory access efficiency. A typical pressurized water reactor burnup assembly, along with the Benchmark for Evaluation and Validation of Reactor Simulations model, was used to test the proposed method. The results indicate that the method effectively reduces memory consumption while maintaining high parallel efficiency.
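A toy illustration of the tally-decomposition pattern: each worker scores into its own private accumulator, and the partial tallies are merged in a final reduction, avoiding contention on one shared array. All names and the "regions" data are illustrative; the paper's implementation lives inside a Monte Carlo transport code.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

N_REGIONS = 16
samples = np.random.default_rng(1).integers(0, N_REGIONS, size=10_000)

def tally_chunk(chunk):
    local = np.zeros(N_REGIONS)   # thread-private tally accumulator
    for region in chunk:
        local[region] += 1.0      # score the event into the local tally
    return local

chunks = np.array_split(samples, 4)
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(tally_chunk, chunks))

total = np.sum(partials, axis=0)  # reduction over per-thread tallies
print(total.sum() == samples.size)  # True: no samples lost in the merge
```

The per-thread accumulators trade a small amount of memory for a contention-free scoring loop, which is the memory/efficiency trade the abstract describes.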
Funding: supported by a China Petrochemical key project of the 11th Five-Year Plan and the Doctorate Fund of the Ministry of Education of China (No. 20050491504).
Abstract: Noise has traditionally been suppressed or eliminated in seismic data sets using Fourier filters and, to a lesser degree, nonlinear statistical filters. Although these methods are quite useful under specific conditions, they may produce undesirable effects on data with a low signal-to-noise ratio. In this paper, a new method, the multi-scale ridgelet transform, is used in light of ridgelet transform theory. We employ the wavelet transform to perform sub-band decomposition of the signals and then apply nonlinear thresholding in the ridgelet domain for every block. In other words, the method is based on the idea of partitioning: at a sufficiently fine scale, a curved singularity looks straight, so the ridgelet transform works well in such cases. Applications to both synthetic data and actual seismic data from the Sichuan basin, South China, show that the new method eliminates the noise portion of the signal more efficiently and retains more geologic information than other methods; the quality and continuity of seismic events, as well as the overall section quality, are clearly improved.
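The nonlinear thresholding applied per block in the ridgelet domain is typically soft thresholding: coefficients below the threshold (mostly noise) are zeroed, and larger ones are shrunk toward zero. A minimal version, with illustrative coefficient values:

```python
import numpy as np

def soft_threshold(coeffs, tau):
    """Zero coefficients with magnitude below tau; shrink the rest by tau."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)

c = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
print(soft_threshold(c, 1.0))  # small entries -> 0, large entries shrunk by 1
```

In the denoising pipeline the same shrinkage is applied to each block's ridgelet coefficients before the inverse transform.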
Funding: funded by the National Natural Science Foundation of China (Grant No. 52274048), the Beijing Natural Science Foundation (Grant No. 3222037), the CNPC 14th Five-Year Perspective Fundamental Research Project (Grant No. 2021DJ2104), and the Science Foundation of China University of Petroleum-Beijing (No. 2462021YXZZ010).
Abstract: Recent advances in deep learning have opened new possibilities for fluid flow simulation in petroleum reservoirs. However, the predominant approach in existing research is to train neural networks on high-fidelity numerical simulation data. This presents a significant challenge because the sole source of authentic wellbore production data for training is sparse. In response to this challenge, this work introduces a novel architecture called the physics-informed neural network based on domain decomposition (PINN-DD), which aims to effectively utilize the sparse production data of wells for reservoir simulation of large-scale systems. To harness the capability of physics-informed neural networks (PINNs) on small spatial-temporal domains while addressing the challenges of large-scale systems with sparse labeled data, the computational domain is divided into two distinct sub-domains: a well-containing sub-domain and a well-free sub-domain. The two sub-domains and their interface are rigorously constrained by the governing equations, data matching, and boundary conditions. The accuracy of the proposed method is evaluated on two problems, and its performance is compared against state-of-the-art PINNs through numerical analysis as a benchmark. The results demonstrate the superiority of PINN-DD in handling large-scale reservoir simulation with limited data and show its potential to outperform conventional PINNs in such scenarios.
Funding: financially supported by the National Natural Science Foundation of China (Nos. 41304109 and 41230318) and the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan) (Nos. CUG130103 and CUG110803).
Abstract: Seismic illumination plays an important role in subsurface imaging. A better image can be expected either by optimizing the acquisition geometry or by introducing more advanced seismic migration and/or tomographic inversion methods involving illumination compensation. The vertical cable survey is a potential replacement for the traditional marine seismic survey because of its flexibility and data quality. Conventional vertical cable data processing requires separating primaries and multiples before migration. We propose using multi-scale full waveform inversion (FWI) to improve the illumination coverage of vertical cable surveys. A deep-water velocity model is built to test the capability of multi-scale FWI to detect low-velocity anomalies below the seabed. Synthetic results show that multi-scale FWI is an effective model-building tool in deep-water exploration. Geometry optimization through target-oriented illumination analysis and multi-scale FWI may help mitigate the risks of vertical cable surveys. The combination of multi-scale FWI, low-frequency data, and a multi-vertical-cable acquisition system may provide both high-resolution and high-fidelity subsurface models.
Funding: Project 2005B018, supported by the Science Foundation of China University of Mining and Technology.
Abstract: The multi-scale expression of enormously complicated laneway data requires differentiating both the contents and the way the contents are expressed. To accomplish multi-scale expression, laneway data must support multi-scale transformation and have consistent topological relationships. Although the laneway data generated by traverse surveying are non-scale data, it is still impossible to construct a multi-scale spatial database directly from them. In this paper, an algorithm is presented that first calculates the laneway mid-line to support multi-scale transformation and then expresses the topological relationships arising from the data structure; finally, a laneway spatial database is built and multi-scale expression is achieved using the GIS component SuperMap Objects. The results are of great significance for improving the efficiency of laneway data storage and updating, for ensuring the consistency of laneway data expression, and for extending the potential value of a mine spatial database.
Funding: supported in part by the Interdisciplinary Project of Dalian University (DLUXK-2023-ZD-001).
Abstract: Multivariate time series forecasting is widely used in traffic planning, weather forecasting, and energy consumption. Series decomposition algorithms can help models better understand the underlying patterns of the original series and thereby improve forecasting accuracy for multivariate time series. However, the decomposition kernel of previous decomposition-based models is fixed, and these models do not consider the differences in frequency fluctuations between components. These problems make it difficult to analyze the intricate temporal variations of real-world time series. In this paper, we propose a series decomposition-based Mamba model, DecMamba, to capture the intricate temporal dependencies and the dependencies among different variables of multivariate time series. A variable-level adaptive kernel combination search module is designed to exchange information on different trends and periods between variables. Two backbone structures are proposed to emphasize the differences in the frequency fluctuations of the seasonal and trend components. Mamba, with its superior performance, is used instead of a Transformer in the backbone structures to capture the dependencies among different variables. A new embedding block is designed to better capture temporal features, especially for the high-frequency seasonal component, whose semantic information is difficult to acquire. A gating mechanism is introduced into the decoder of the seasonal backbone to improve prediction accuracy. A comparison with ten state-of-the-art models on seven real-world datasets demonstrates that DecMamba better models the temporal dependencies and the dependencies among different variables, guaranteeing better prediction performance for multivariate time series.
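The basic series-decomposition step this family of models builds on: a moving average extracts the trend, and the remainder is taken as the seasonal part, so trend plus seasonal reconstructs the series exactly. The fixed kernel size here is what DecMamba replaces with a variable-level adaptive kernel combination search.

```python
import numpy as np

def decompose(x, kernel=5):
    """Moving-average trend/seasonal split with edge padding."""
    pad = kernel // 2
    padded = np.concatenate([np.full(pad, x[0]), x, np.full(pad, x[-1])])
    trend = np.convolve(padded, np.ones(kernel) / kernel, mode="valid")
    seasonal = x - trend          # the high-frequency remainder
    return trend, seasonal

t = np.linspace(0, 4 * np.pi, 200)
x = 0.05 * t**2 + np.sin(6 * t)   # slow trend plus fast oscillation
trend, seasonal = decompose(x, kernel=9)
print(np.allclose(trend + seasonal, x))  # True: additive by construction
```

The two components have very different frequency content, which is why the abstract argues for separate backbones for the seasonal and trend parts.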
Funding: supported by the National Science and Technology Major Project, China (No. 2017-III-0010-0036).
Abstract: The secondary air system (SAS) plays an important role in the safe operation and performance of aeroengines. The traditional 1D-3D coupling method loses information when used for secondary air systems, which affects calculation accuracy. In this paper, a cross-dimensional data transmission method (CDT) from 3D to 1D is proposed by introducing flow-field uniformity into the data transmission. First, a uniformity index was established to quantify the distribution characteristics of the flow-field parameters, and a uniformity index prediction model based on locally weighted regression (Lowess) was established to quickly obtain the flow-field information. Then, an information selection criterion for 3D-to-1D data transmission was established based on the Spearman rank correlation coefficient between the uniformity index and the accuracy of the coupling calculation, and the calculation method is determined automatically according to this criterion. Finally, a correction function was obtained by fitting the ratio of the 3D mass-averaged parameters to the analytical solution, and it is used to correct the selected parameters at the 1D-3D interface. Taking a typical disk cavity air system as an example, the results show that the calculation accuracy of the CDT method is greatly improved, by a relative 53.88%, compared with the traditional 1D-3D coupling method. Furthermore, the CDT method achieves a speedup of two to three orders of magnitude compared with the 3D calculation.
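The Spearman rank correlation used to build the selection criterion can be computed from scratch: correlate the ranks rather than the raw values. The data below are illustrative (no ties assumed; tied values would need average ranks).

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation for tie-free samples."""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float(np.sum(rx * ry) / np.sqrt(np.sum(rx**2) * np.sum(ry**2)))

uniformity = np.array([0.91, 0.72, 0.55, 0.83, 0.40])
accuracy   = np.array([0.97, 0.88, 0.70, 0.93, 0.61])  # monotone in uniformity
print(spearman(uniformity, accuracy))  # 1.0: identical rankings
```

A high rank correlation between the uniformity index and coupling accuracy is what justifies using the index to choose the transmission method automatically.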
Funding: supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2016R1D1A1B03931529).
Abstract: Purpose: We propose InParTen2, a multi-aspect parallel factor analysis three-dimensional tensor decomposition algorithm based on the Apache Spark framework. The proposed method reduces re-decomposition cost and can handle large tensors. Design/methodology/approach: Considering that tensor addition increases the size of a given tensor along all axes, the proposed method decomposes incoming tensors using existing decomposition results without generating sub-tensors. Additionally, InParTen2 avoids the calculation of Khatri-Rao products and minimizes shuffling by using the Apache Spark platform. Findings: The performance of InParTen2 is evaluated by comparing its execution time and accuracy with those of existing distributed tensor decomposition methods on various datasets. The results confirm that InParTen2 can process large tensors and reduce the re-calculation cost of tensor decomposition. Consequently, the proposed method is faster than existing tensor decomposition algorithms and can significantly reduce re-decomposition cost. Research limitations: There are several Hadoop-based distributed tensor decomposition algorithms as well as MATLAB-based decomposition methods. However, the former require longer iteration times, and therefore their execution time cannot be compared with that of Spark-based algorithms, whereas the latter run on a single machine, limiting their ability to handle large data. Practical implications: The proposed algorithm can reduce re-decomposition cost when tensors are added to a given tensor, by decomposing them based on existing decomposition results without re-decomposing the entire tensor. Originality/value: The proposed method can handle large tensors and is fast within the limited-memory framework of Apache Spark. Moreover, InParTen2 can handle both static and incremental tensor decomposition.
Funding: supported by the National Key R&D Program of China (Grant No. 2018YFC0406501), the Outstanding Young Talent Research Fund of Zhengzhou University (Grant No. 1521323002), the Program for Innovative Talents (in Science and Technology) at Universities of Henan Province (Grant No. 18HASTIT014), the State Key Laboratory of Hydraulic Engineering Simulation and Safety, Tianjin University (Grant No. HESS-1717), and the Foundation for University Youth Key Teachers of Henan Province (Grant No. 2017GGJS006).
Abstract: The complex nonlinear and non-stationary features exhibited by hydrologic sequences make hydrological analysis and forecasting difficult. Currently, some hydrologists employ the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) method, a time-frequency analysis method based on the empirical mode decomposition (EMD) algorithm, to decompose non-stationary raw data into relatively stationary components for further study. However, the endpoint effect in CEEMDAN is often neglected, which can lead to decomposition errors that reduce the accuracy of the results. In this study, we processed an original runoff sequence using the radial basis function neural network (RBFNN) technique to obtain an extension sequence before applying CEEMDAN decomposition. We then compared the decomposition results of the original sequence, the RBFNN extension sequence, and a standard sequence to investigate the influence of the endpoint effect and the RBFNN extension on the CEEMDAN method. The results indicated that the RBFNN extension technique effectively reduced the error in the medium- and low-frequency components caused by the endpoint effect. At both ends of the components, the extension sequence more accurately reflected the true fluctuation characteristics and variation trends. These advances are of great significance to subsequent hydrological studies. Therefore, the CEEMDAN method, combined with an appropriate extension of the original runoff series, can determine multi-time-scale characteristics more precisely and provide a credible basis for hydrologic time series analysis and hydrological forecasting.
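A minimal Gaussian RBF least-squares fit of the kind used to extend the runoff series past its endpoints before decomposition. The centers, kernel width, and synthetic "runoff" (built to be exactly a sum of two Gaussians, so the fit recovers the weights) are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rbf_design(t, centers, width=1.0):
    """Design matrix of Gaussian basis functions evaluated at times t."""
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width**2))

t = np.linspace(0, 10, 50)
centers = np.array([3.0, 7.0])
y = rbf_design(t, centers) @ np.array([2.0, -1.0])   # synthetic "runoff"

# Fit the basis weights by least squares.
w, *_ = np.linalg.lstsq(rbf_design(t, centers), y, rcond=None)
print(np.round(w, 6))  # recovers the weights 2 and -1

# Extrapolate a short extension past the endpoint, as done before CEEMDAN.
t_ext = np.array([10.2, 10.4])
y_ext = rbf_design(t_ext, centers) @ w
```

The fitted model supplies plausible values beyond both endpoints, which is what suppresses the endpoint effect in the subsequent CEEMDAN decomposition.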
Abstract: Due to the conflict between the huge amount of map data and limited network bandwidth, rapid transmission of vector map data over the Internet has become a bottleneck of spatial data delivery in web-based environments. This paper proposes an approach to organizing and transmitting multi-scale vector river network data over the Internet progressively. The approach takes account of two levels of importance, i.e., the importance of river branches and the importance of the points belonging to each river branch, and forms data packages accordingly. Our experiments have shown that the proposed approach can reduce the original data volume by 90% while preserving the river structure well.
Abstract: With the development of the smart grid and the energy internet, the amount of data transmitted in real time has increased significantly. Because of the mismatch with communication networks that were not designed to carry high-speed, real-time data, data losses and data quality degradation may happen constantly. To address this problem, and exploiting the strong spatial and temporal correlation of electricity data, which is generated by human actions and habits, we build a low-rank electricity data matrix whose rows are time and whose columns are users. Inspired by matrix decomposition, we factor the low-rank electricity data matrix into the product of two small matrices and use the known data to approximate the low-rank electricity data matrix and recover the missing electrical data. Based on real electricity data, we analyze the low-rankness of the electricity data matrix and apply the matrix-decomposition-based method to the real data. The experimental results verify the effectiveness and efficiency of the proposed scheme.
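A sketch of the recovery idea: approximate the (time x user) matrix M as U @ V using only the observed entries, via alternating least squares, then read the missing entries from U @ V. The rank, iteration count, and synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_user, rank = 30, 20, 2
M_true = rng.normal(size=(n_time, rank)) @ rng.normal(size=(rank, n_user))
mask = rng.random((n_time, n_user)) > 0.3   # ~70% of entries observed

U = rng.normal(size=(n_time, rank))
V = rng.normal(size=(rank, n_user))
for _ in range(100):  # alternating least squares on observed entries only
    for i in range(n_time):
        o = mask[i]
        U[i] = np.linalg.lstsq(V[:, o].T, M_true[i, o], rcond=None)[0]
    for j in range(n_user):
        o = mask[:, j]
        V[:, j] = np.linalg.lstsq(U[o], M_true[o, j], rcond=None)[0]

M_hat = U @ V
err = np.max(np.abs(M_hat[~mask] - M_true[~mask]))
print(err)  # small: the missing entries follow from the low-rank structure
```

The same factorization applies when the matrix is only approximately low-rank, as the paper's real electricity data are; the recovered entries are then estimates rather than exact values.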
Funding: supported by the National Natural Science Foundation of China (No. 12472197).
Abstract: A novel gappy technology, the gappy autoencoder with proper orthogonal decomposition (Gappy POD-AE), is proposed for reconstructing physical fields from sparse data. High-dimensional data are reduced via proper orthogonal decomposition (POD), and the low-dimensional data are used to train an autoencoder (AE). By integrating the POD operator with the decoder, a nonlinear solution form is established and incorporated into a new maximum-a-posteriori (MAP)-based objective for online reconstruction. Numerical results on the two-dimensional (2D) Bhatnagar-Gross-Krook-Boltzmann (BGK-Boltzmann) equation, the wave equation, the shallow-water equation, and satellite data show that Gappy POD-AE achieves higher accuracy than gappy proper orthogonal decomposition (Gappy POD), especially for data with slowly decaying singular values, and is more efficient to train than the gappy autoencoder (Gappy AE). The MAP-based formulation and the new gappy procedure further enhance reconstruction accuracy.
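The baseline the paper improves on, Gappy POD, in a few lines: build a POD basis from snapshots via the SVD, then recover a full field from a handful of point samples by least squares on the basis coefficients. The snapshot data and sensor locations are illustrative; the paper replaces this linear solution form with a decoder network inside a MAP objective.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 100)
# Snapshot matrix: each column is a field built from two spatial modes.
coeffs = rng.normal(size=(2, 40))
snapshots = np.column_stack([np.sin(x), np.sin(2 * x)]) @ coeffs

# POD basis = leading left singular vectors of the snapshot matrix.
Phi = np.linalg.svd(snapshots, full_matrices=False)[0][:, :2]

# A new field observed only at a few "gappy" sensor locations.
field = 1.3 * np.sin(x) - 0.7 * np.sin(2 * x)
sensors = np.array([5, 23, 48, 77, 90])
a = np.linalg.lstsq(Phi[sensors], field[sensors], rcond=None)[0]
field_rec = Phi @ a

print(np.max(np.abs(field_rec - field)) < 1e-8)  # True: full field recovered
```

The recovery is exact here because the field lies in the span of the POD modes; when the singular values decay slowly, the truncated linear basis loses accuracy, which is where the autoencoder variant is claimed to help.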
Funding: supported by the National Natural Science Foundation of China (62376214), the Natural Science Basic Research Program of Shaanxi (2023-JC-YB-533), and the Foundation of the Ministry of Education Key Laboratory of Cognitive Radio and Information Processing (Guilin University of Electronic Technology) (CRKL200203).
Abstract: A modified multiple-component scattering power decomposition for analyzing polarimetric synthetic aperture radar (PolSAR) data is proposed. The modified decomposition involves two distinct steps. First, the eigenvectors of the coherency matrix are used to modify the scattering models. Second, the entropy and anisotropy of targets are used to improve the volume scattering power. While guaranteeing high double-bounce scattering power in urban areas, the proposed algorithm effectively improves the volume scattering power of vegetated areas. The efficacy of the modified multiple-component scattering power decomposition is validated using actual AIRSAR PolSAR data. The scattering powers obtained by decomposing the original coherency matrix and the coherency matrix after orientation angle compensation are compared with those of three other algorithms. The experimental results demonstrate that the proposed decomposition yields more effective scattering powers for different PolSAR data sets.
Funding: supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2022-0-00369) and by National Research Foundation of Korea grants funded by the Korean government (2018R1A5A1060031, 2022R1F1A1065664).
Abstract: How can we efficiently store and mine dynamically generated dense tensors to model the behavior of multidimensional dynamic data? Much of the multidimensional dynamic data in the real world is generated in the form of time-growing tensors. For example, air quality tensor data consist of multiple sensory values gathered from many locations over a long time. Such data, accumulated over time, are redundant and consume a lot of memory in raw form. We need a way to efficiently store dynamically generated tensor data that grow over time and to model their behavior on demand between arbitrary time blocks. To this end, we propose the Block Incremental Dense Tucker Decomposition (BID-Tucker) method for efficient storage and on-demand modeling of multidimensional spatiotemporal data. Assuming that tensors arrive in unit blocks where only the time domain changes, the proposed BID-Tucker first slices the blocks into matrices and decomposes them via singular value decomposition (SVD). The SVDs of the time-by-space sliced matrices are stored instead of the raw tensor blocks to save space. When modeling is required for particular time blocks, the SVDs of the corresponding time blocks are retrieved and incremented for use in the Tucker decomposition. The factor matrices and core tensor of the decomposed results can then be used for further data analysis. We compared the proposed BID-Tucker with D-Tucker, which our method extends, and with vanilla Tucker decomposition. We show that BID-Tucker is faster than both D-Tucker and vanilla Tucker decomposition and uses less memory for storage with a comparable reconstruction error. We applied BID-Tucker to model the spatial and temporal trends of air quality data collected in South Korea from 2018 to 2022. We were able to model the spatial and temporal air quality trends and to verify unusual events, such as chronic ozone alerts and large fire events.
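A minimal HOSVD-style Tucker decomposition in plain numpy, to make the factor-matrices-plus-core-tensor output concrete: factor matrices come from SVDs of the mode-n unfoldings, and the core from projecting the tensor onto them. This is the vanilla baseline; BID-Tucker instead stores SVDs of time-sliced blocks and updates them incrementally.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibers become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def tucker(T, ranks):
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_dot(core, U.T, m)   # project onto each factor's column space
    return core, factors

def reconstruct(core, factors):
    T = core
    for m, U in enumerate(factors):
        T = mode_dot(T, U, m)
    return T

rng = np.random.default_rng(0)
# Build a tensor of exact multilinear rank (2, 2, 2) so Tucker recovers it.
G = rng.normal(size=(2, 2, 2))
Us = [np.linalg.qr(rng.normal(size=(n, 2)))[0] for n in (6, 5, 4)]
T = reconstruct(G, Us)

core, factors = tucker(T, (2, 2, 2))
print(np.allclose(reconstruct(core, factors), T))  # True: exact at the true rank
```

Storing the small core and thin factor matrices instead of the raw tensor is the source of the memory savings the abstract reports.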
Abstract: Two computational data-processing methods, linear fitting and nonlinear fitting, are applied to compute the rate constant of the hydrogen peroxide decomposition reaction. The results indicate that the new methods not only remove the need to measure the final oxygen volume but also evidently decrease the fitting errors.
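The linear-fitting route in one short sketch: for a first-order reaction, ln C(t) = ln C0 - k t, so the rate constant is the negative slope of a straight-line fit of ln C against t. The rate constant and concentrations below are synthetic, noiseless values for illustration.

```python
import numpy as np

k_true, C0 = 0.042, 1.0               # illustrative rate constant and C(0)
t = np.arange(0.0, 60.0, 5.0)          # minutes
C = C0 * np.exp(-k_true * t)           # first-order decay

slope, intercept = np.polyfit(t, np.log(C), 1)
k_fit = -slope
print(round(k_fit, 6))  # 0.042
```

A nonlinear fit of C = C0 * exp(-k t) directly (e.g. with a least-squares curve fitter) avoids the log transform's reweighting of measurement errors, which is the substance of the linear-versus-nonlinear comparison in the abstract.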
基金the National Natural Science Foundation of China(42374133)the Beijing Nova Program(2022056)for their funding of this research。
Abstract: Seismic data reconstruction can provide high-density sampling and regular input data for inversion and imaging, and plays a crucial role in seismic data processing. A common scenario in seismic data reconstruction involves a significant distance between the source and the first receiver, which makes near-offset data unattainable. A new workflow for seismic data extrapolation is proposed to address this issue, based on a multi-scale dynamic time warping (MS-DTW) algorithm. MS-DTW can accurately calculate the time shift between two time series and is a robust method for predicting time-offset (t-x) domain data. Using the time shift calculated by MS-DTW as the basic input, the two-way traveltime (TWT) of other traces is predicted from the TWT of the reference trace. Autoregressive polynomial fitting is performed on the TWT, and the TWT is extrapolated from the fitted polynomial coefficients. Amplitude information is extracted along the TWT curve, the amplitude curve is fitted, and the amplitude is extrapolated using the polynomial coefficients. The proposed workflow does not require transforming the data to other domains and needs no prior knowledge of subsurface geology. It applies to both isotropic and anisotropic media. The effectiveness of the workflow was verified on synthetic and field data. The results show that, compared with predictive painting based on local slope, this approach can accurately predict missing near-offset seismic signals and demonstrates good robustness to noise.
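The single-scale building block that MS-DTW applies at multiple scales: classic dynamic-programming DTW, which measures how well two traces align under time warping. The toy series below are illustrative, not seismic traces.

```python
import numpy as np

def dtw(x, y):
    """DTW distance between two 1-D series (absolute-difference cost)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
b = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])  # same shape, delayed one sample
print(dtw(a, a), dtw(a, b))  # 0.0 0.0: warping absorbs the time shift
```

Backtracking through the cost matrix D yields the warping path, i.e. the per-sample time shifts that the workflow feeds into the TWT prediction.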