In order to carry out numerical simulation using geologic structural data obtained from Landmark (a seismic interpretation system), underground geological structures are abstracted into mechanical models that reflect actual conditions and facilitate computation and analysis. Given the importance of model building, further processing methods for traditional seismic interpretation results from Landmark should be studied so that the processed results can be used directly in numerical simulation computations. Through this data conversion procedure, Landmark and FLAC (a widely used stress-analysis code) are seamlessly connected. Thus, the format conversion between the two systems and the pre- and post-processing in simulation computation are realized. A practical application indicates that this method offers simple operation, high accuracy of element subdivision and high speed, and satisfies the actual needs of floor grid cutting.
Weather forecasts from numerical weather prediction models play a central role in solar energy forecasting, where a cascade of physics-based models is used in a model chain approach to convert forecasts of solar irradiance to solar power production. Ensemble simulations from such weather models aim to quantify uncertainty in the future development of the weather, and can be used to propagate this uncertainty through the model chain to generate probabilistic solar energy predictions. However, ensemble prediction systems are known to exhibit systematic errors, and thus require post-processing to obtain accurate and reliable probabilistic forecasts. The overarching aim of our study is to systematically evaluate different strategies for applying post-processing in model chain approaches with a specific focus on solar energy: not applying any post-processing at all; post-processing only the irradiance predictions before the conversion; post-processing only the solar power predictions obtained from the model chain; or applying post-processing in both steps. In a case study based on a benchmark dataset for the Jacumba solar plant in the U.S., we develop statistical and machine learning methods for post-processing ensemble predictions of global horizontal irradiance (GHI) and solar power generation. Further, we propose a neural-network-based model for direct solar power forecasting that bypasses the model chain. Our results indicate that post-processing substantially improves the solar power generation forecasts, in particular when post-processing is applied to the power predictions. The machine learning methods for post-processing slightly outperform the statistical methods, and the direct forecasting approach performs comparably to the post-processing strategies.
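To make the idea of statistical post-processing concrete, the sketch below fits a simple linear bias correction of the ensemble mean against observations and shifts every ensemble member accordingly. This is only a minimal stand-in for the statistical methods the abstract refers to; the function names and toy numbers are illustrative, not from the study.

```python
# Minimal sketch of one statistical post-processing step: learn a linear
# mean correction obs = a + b * ensemble_mean, then shift ensemble members
# so their mean matches the corrected value (spread is left unchanged).

def fit_linear_correction(ens_means, observations):
    """Least-squares fit of obs = a + b * ensemble_mean."""
    n = len(ens_means)
    mx = sum(ens_means) / n
    my = sum(observations) / n
    sxx = sum((x - mx) ** 2 for x in ens_means)
    sxy = sum((x - mx) * (y - my) for x, y in zip(ens_means, observations))
    b = sxy / sxx
    a = my - b * mx
    return a, b

def apply_correction(a, b, ens_members):
    """Shift every ensemble member by the fitted mean correction."""
    m = sum(ens_members) / len(ens_members)
    corrected_mean = a + b * m
    return [x - m + corrected_mean for x in ens_members]

# Toy training data with a systematic bias: obs = 5 + 2 * forecast
train_fc = [1.0, 2.0, 3.0, 4.0]
train_obs = [7.0, 9.0, 11.0, 13.0]
a, b = fit_linear_correction(train_fc, train_obs)
corrected = apply_correction(a, b, [240.0, 250.0, 260.0])
```

In a real application the same pattern would be applied per lead time, and richer predictors and distributional corrections would replace the single linear term.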
In the present computational fluid dynamics (CFD) community, post-processing is regarded as a procedure to view parameter distributions, detect characteristic structures and reveal the physical mechanisms of fluid flow based on computational or experimental results. Field plots by contours, iso-surfaces, streamlines, vectors and others are traditional post-processing techniques. However, the shock wave, an important and critical flow structure in many aerodynamic problems, can hardly be detected or distinguished directly by these traditional methods, owing to possible confusion with other, similar discontinuous flow structures such as slip lines and contact discontinuities. Therefore, methods for automatic detection of shock waves in post-processing are of great importance for both academic research and engineering applications. In this paper, the current status of methodologies developed for shock wave detection and their implementations in post-processing platforms is reviewed, together with discussions of the advantages and limitations of existing methods and proposals for further study of shock wave detection. We also develop an advanced post-processing software platform with improved shock detection.
Quantum random number generators adopting single-photon detection have been restricted by the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free and ready to use, and their randomness is verified by using the National Institute of Standards and Technology statistical test suite. The random bit generation efficiency is as high as 32.8%, and the potential generation rate adopting a 32×32 APD array is up to tens of Gbit/s.
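The comparison step described above can be illustrated with a von Neumann-style pairing rule: compare a detector's responses to two consecutive optical pulses and keep a bit only when the two outcomes differ, which removes bias from an asymmetric click probability. This is a schematic sketch of the comparison idea, not the paper's exact extraction circuit.

```python
# Sketch: derive unbiased random bits by comparing detection outcomes
# (0 = no click, 1 = click) for consecutive optical pulses.
# (1, 0) -> bit 1, (0, 1) -> bit 0, equal pairs are discarded.

def bits_from_responses(responses):
    """responses: sequence of 0/1 detection outcomes, one per pulse."""
    bits = []
    for first, second in zip(responses[0::2], responses[1::2]):
        if first != second:
            bits.append(1 if first == 1 else 0)
    return bits

# Example: five pulse pairs, two of which are discarded as equal.
example = bits_from_responses([1, 0, 0, 1, 1, 1, 0, 0, 1, 0])
```

Because each kept bit comes from one (1,0) or (0,1) pair of identically distributed outcomes, the output is balanced even if the click probability per pulse is far from one half.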
Regular expression matching plays an important role in deep inspection. The rapid development of SDN and NFV makes the network more dynamic, bringing serious challenges to traditional deep inspection matching engines. State-of-the-art matching methods often require a significant amount of pre-processing time and hence are not suitable for this fast-updating scenario. In this paper, a novel matching engine called BFA is proposed to achieve high-speed regular expression matching with fast pre-processing. Experiments demonstrate that BFA achieves 5 to 20 times faster updates than existing regular expression matching methods, and scales well on multi-core platforms.
The travel time data collection method is used to assist congestion management. The use of traditional sensors (e.g. inductive loops, AVI sensors) or more recent Bluetooth sensors installed on major roads for collecting data is not sufficient because of their limited coverage and the expensive costs of installation and maintenance. Application of the Global Positioning System (GPS) to travel time and delay data collection has proven efficient in terms of accuracy, level of detail, and the man-power required for data collection. While data collection automation is improved by the GPS technique, human errors can easily find their way into the post-processing phase, and therefore data post-processing remains a challenge, especially in big projects with large amounts of data. This paper introduces a stand-alone post-processing tool called GPS Calculator, which provides an easy-to-use environment for data post-processing. It is a Visual Basic application that processes the data files obtained in the field and integrates them into Geographic Information Systems (GIS) for analysis and representation. The results show that this tool obtains results similar to the currently used data post-processing method, reduces the post-processing effort, and also eliminates the need for a second person during data collection.
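One representative computation such a post-processing tool performs is turning a timestamped GPS trace into per-segment travel times, interpolating the crossing time at each segment boundary. The sketch below assumes a simplified track layout of (time, cumulative distance) pairs; the data format is illustrative, not that of GPS Calculator.

```python
# Sketch: per-segment travel times from a GPS track given as
# (t_seconds, cumulative_distance_m) pairs sorted by distance.

def time_at_distance(track, d):
    """Linearly interpolate the time at which cumulative distance d is reached."""
    for (t0, d0), (t1, d1) in zip(track, track[1:]):
        if d0 <= d <= d1:
            frac = (d - d0) / (d1 - d0)
            return t0 + frac * (t1 - t0)
    raise ValueError("distance outside track")

def segment_travel_times(track, boundaries):
    """Travel time between each pair of consecutive boundary distances."""
    times = [time_at_distance(track, d) for d in boundaries]
    return [t1 - t0 for t0, t1 in zip(times, times[1:])]

# Two 500 m segments: the second is driven at half the speed of the first.
track = [(0.0, 0.0), (60.0, 500.0), (180.0, 1000.0)]
per_segment = segment_travel_times(track, [0.0, 500.0, 1000.0])
```

Delay per segment then follows by subtracting the free-flow travel time, which is where most of the manual checking effort arises in practice.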
As castings become more complicated and the demands for precision of numerical simulation become higher, the numerical data of casting simulation become more massive. On a general personal computer, these massive numerical data may exceed the capacity of available memory, resulting in failure of rendering. Based on the out-of-core technique, this paper proposes a method to effectively utilize external storage and reduce memory usage dramatically, so as to solve the problem of insufficient memory for massive data rendering on general personal computers. Based on this method, a new post-processor is developed. It is capable of illustrating the filling and solidification processes of casting, as well as thermal stress. The new post-processor also provides fast interaction with simulation results. Theoretical analysis and several practical examples show that the memory usage and loading time of the post-processor are independent of the size of the relevant files and depend only on the proportion of cells on the surface. Meanwhile, rendering and mouse-driven value picking are fast enough that the demands of real-time interaction are satisfied.
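The claim that memory scales with the number of surface cells rather than total cells rests on the observation that only cells visible from outside need to be resident for rendering. A minimal sketch of identifying surface cells in a voxel model (the set-based grid representation is an assumption for illustration):

```python
# Sketch: a cell of a voxel model is a surface cell if at least one of its
# six face neighbours is empty. Only these cells need to stay in memory
# for rendering; interior cells can remain out of core.

def surface_cells(filled):
    """filled: set of (i, j, k) occupied cell indices."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    surf = set()
    for (i, j, k) in filled:
        for di, dj, dk in offsets:
            if (i + di, j + dj, k + dk) not in filled:
                surf.add((i, j, k))
                break
    return surf

# A solid 3x3x3 block: every cell except the centre is on the surface.
cube = {(i, j, k) for i in range(3) for j in range(3) for k in range(3)}
surf = surface_cells(cube)
```

For a dense casting mesh the surface fraction shrinks as the cube grows, which is why the post-processor's footprint can stay nearly constant as file size increases.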
The Low Earth Orbit (LEO) remote sensing satellite mega-constellation is characterized by large numbers of satellites of various types, which gives it unique advantages in carrying out concurrent multiple tasks. However, the complexity of resource allocation is increased by the large number of tasks and satellites. Therefore, the primary problem in implementing concurrent multiple tasks via a LEO mega-constellation is to pre-process tasks and observation resources. To address this challenge, we propose a pre-processing algorithm for the mega-constellation based on highly Dynamic Spatio-Temporal Grids (DSTG). First, this paper describes the management model of the mega-constellation and the multiple tasks. Then, the coding method of DSTG is proposed, based on which the description of complex mega-constellation observation resources is realized. Third, the DSTG algorithm is used to process concurrent multiple tasks at multiple levels, such as task space attributes, time attributes and grid task importance evaluation. Finally, simulation results for the constellation are given to verify the effectiveness of concurrent multi-task pre-processing based on DSTG. The autonomous processing of task decomposition, task fusion and mapping to grids, and the convenient indexing of time windows are verified.
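To illustrate what a spatio-temporal grid code can look like, the sketch below quantizes latitude, longitude and time into cell indices and packs them into one integer key that supports cheap indexing. The cell sizes and bit layout are assumptions for illustration only; they are not the DSTG encoding defined in the paper.

```python
# Sketch: pack quantized (lat, lon, time) cells into a single integer key.
# 9 bits each for the latitude and longitude cell indices (1-degree cells
# give at most 181 x 361 cells, which fits), time cells in the high bits.

def grid_code(lat, lon, t_seconds, cell_deg=1.0, cell_seconds=600):
    i = int((lat + 90.0) // cell_deg)    # latitude cell index
    j = int((lon + 180.0) // cell_deg)   # longitude cell index
    k = int(t_seconds // cell_seconds)   # time cell index
    return (k << 18) | (i << 9) | j

def decode(code):
    return (code >> 18), (code >> 9) & 0x1FF, code & 0x1FF

code = grid_code(30.5, 120.5, 1200)
```

Because the key is a plain integer, tasks mapped to the same cell collide on the same key, which makes grid-based task fusion and time-window lookup a dictionary operation.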
To meet the demands for high transmission rates and high service quality in broadband wireless communication systems, orthogonal frequency division multiplexing (OFDM) has been adopted in some standards. However, inter-block interference (IBI) and inter-carrier interference (ICI) in an OFDM system degrade its performance. To mitigate IBI and ICI, some pre-processing approaches have been proposed based on full channel state information (CSI), which improve system performance. Here, a pre-processing filter based on partial CSI at the transmitter is designed and investigated. The filter coefficients are obtained by optimization, the symbol error rate (SER) is tested, and the computational complexity of the proposed scheme is analyzed. Computer simulation results show that the proposed pre-processing filter can effectively mitigate IBI and ICI and improve performance. Compared with pre-processing approaches at the transmitter based on full CSI, the proposed scheme has high spectral efficiency, limited CSI feedback and low computational complexity.
To improve the detection of underwater targets in strong wideband interference environments, an efficient method of line spectrum extraction is proposed, which exploits the feature of target spectra that intense, stable line spectra are superimposed on a wide continuous spectrum. The method modifies the traditional beamforming algorithm by calculating and fusing the beamforming results over multiple frequency bands and azimuth intervals, providing an effective way to extract the line spectrum when the interference and the target are not in the same azimuth interval. The statistical efficiency of the estimated azimuth variance and the corresponding power of the line spectrum band depends on the line spectra ratio (LSR). The variation of the output signal-to-noise ratio (SNR) with the LSR, the input SNR, the integration time and the filtering bandwidth of the different algorithms leads to a selection principle for the critical LSR. On this basis, the detection gains of wideband energy integration and the narrowband line spectrum algorithm are theoretically analyzed, and the simulated detection gain matches the theoretical model well. The application conditions of all methods are verified by receiver operating characteristic (ROC) curves and experimental data from Qiandao Lake. In practice, combining the two methods for target detection reduces the missed detection rate. The proposed two-dimensional post-processing method, with a Kalman filter in the time dimension and a background equalization algorithm in the azimuth dimension, exploits the strong correlation between adjacent frames and can further remove background fluctuations and improve the display quality.
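The core premise, a narrow line standing above a wide continuous spectrum, can be sketched with a simple detector: estimate the continuous background with a local median and flag bins that exceed it by a factor. This is a toy stand-in for the beam-fused extraction in the abstract; the threshold and window are illustrative.

```python
# Sketch: flag spectral bins whose power stands well above a local-median
# estimate of the continuous background (line-spectrum candidates).

def detect_lines(power, half_window=3, factor=4.0):
    """Return indices of bins with power > factor x local median."""
    lines = []
    n = len(power)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        neigh = sorted(power[lo:hi])
        median = neigh[len(neigh) // 2]
        if power[i] > factor * median:
            lines.append(i)
    return lines

# Flat continuous spectrum with a single strong line at bin 7.
spectrum = [1.0] * 20
spectrum[7] = 10.0
found = detect_lines(spectrum)
```

A median background estimate is robust to the line itself, which a plain moving average is not; the same idea generalizes to the per-beam spectra produced by the modified beamforming.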
Laser additive manufacturing (LAM) of titanium (Ti) alloys has emerged as a transformative technology with vast potential across multiple industries. To recap the state of the art, Ti alloys processed by two essential LAM techniques (i.e., laser powder bed fusion and laser-directed energy deposition) are reviewed, covering processes, materials and post-processing. The impacts of process parameters and strategies for optimizing them are elucidated. The various types of Ti alloys processed by LAM, including α-Ti, (α+β)-Ti, and β-Ti alloys, are overviewed in terms of microstructures and benchmark properties. Furthermore, the post-processing methods for improving the performance of LAM-processed Ti alloys, including conventional and novel heat treatment, hot isostatic pressing, and surface processing (e.g., ultrasonic and laser shot peening), are systematically reviewed and discussed. The review summarizes the process windows, properties and performance envelopes and benchmarks the research achievements in LAM of Ti alloys. Outlooks on further trends in LAM of Ti alloys are also highlighted at the end of the review. This comprehensive review could serve as a valuable resource for researchers and practitioners, promoting further advancements in LAM-built Ti alloys and their applications.
The Chang'e-3 (CE-3) mission is China's first exploration mission on the surface of the Moon that uses a lander and a rover. The eight instruments that form the scientific payloads have the following objectives: (1) investigate the morphological features and geological structures at the landing site; (2) perform integrated in-situ analysis of minerals and chemical compositions; (3) explore the structure of the lunar interior; (4) explore the lunar-terrestrial space environment and the lunar surface environment, and acquire Moon-based ultraviolet astronomical observations. The Ground Research and Application System (GRAS) is in charge of data acquisition and pre-processing, management of the payloads in orbit, and managing the data products and their applications. The Data Pre-processing Subsystem (DPS) is a part of GRAS. The task of DPS is the pre-processing of raw data from the eight CE-3 instruments, including channel processing, unpacking, package sorting, calibration and correction, identification of geographical location, calculation of the probe azimuth and zenith angles and the solar azimuth and zenith angles, and quality checks. These processes produce Level 0, Level 1 and Level 2 data. The computing platform of this subsystem is a high-performance computing cluster, including a real-time subsystem used for processing Level 0 data and a non-real-time subsystem for generating Level 1 and Level 2 data. This paper describes the CE-3 data pre-processing method, the data pre-processing subsystem, data classification, data validity and the data products that are used for scientific studies.
This paper proposes improvements to a low bit rate parametric audio coder with a sinusoidal model as its kernel. First, we propose a new method to effectively order and select the perceptually most important sinusoids: the sinusoid that contributes most to the reduction of the overall noise-to-mask ratio (NMR) is chosen. Combined with our improved parametric psychoacoustic model and advanced peak riddling techniques, the number of sinusoids required can be greatly reduced and the coding efficiency greatly enhanced. A lightweight version is also given that reduces the amount of computation with only a small sacrifice in performance. Second, we propose two enhancement techniques for sinusoid synthesis: bandwidth enhancement and line enhancement. With little overhead, the effective bandwidth can be extended by one more octave, and the timbre tends to sound brighter, thicker and more pleasant.
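The greedy ordering idea can be sketched as repeatedly picking the candidate sinusoid with the largest ratio of energy to masking threshold, a crude proxy for the NMR reduction the abstract describes. The data layout and the proxy objective are assumptions; the paper's psychoacoustic model is far more detailed.

```python
# Sketch: greedy selection of k sinusoids by energy-to-mask ratio,
# a stand-in for "contributes most to the reduction of overall NMR".

def select_sinusoids(candidates, k):
    """candidates: list of (freq_hz, energy, mask_threshold) tuples."""
    remaining = list(candidates)
    chosen = []
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda c: c[1] / c[2])  # energy / mask
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Three candidates; the 200 Hz and 300 Hz lines are the most audible.
cands = [(100, 1.0, 0.5), (200, 4.0, 1.0), (300, 0.3, 0.1)]
picked = select_sinusoids(cands, 2)
```

A full implementation would re-evaluate the masking threshold after each pick, since selecting one sinusoid changes how audible its neighbours are.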
High-resolution ice core records covering long time spans enable reconstruction of past climatic and environmental conditions, allowing investigation of the earth system's evolution. Pre-processing of ice cores has a direct impact on data quality control for further analysis, since conventional ice core processing is time-consuming, produces only qualitative data, causes ice mass loss, and risks potential secondary pollution. Over the past several decades, however, pre-processing of ice cores has received less attention than improvements in ice drilling, the analytical methodology of various indices, and research on the climatic and environmental significance of ice core records. Therefore, this paper reviews the development of processing systems for ice cores, including framework, design and materials, and analyzes the technical advantages and disadvantages of the different systems. In the past, continuous flow analysis (CFA) has been successfully applied to process polar ice cores. However, it is not suitable for ice cores outside the polar regions because of their high particle load, the memory effect between samples, and the need for filtration before injection. Ice core processing is a subtle and professional operation owing to the fragility of the non-metallic materials and the random distribution of particles and air bubbles in ice cores, which aggravates measurement uncertainty. Future developments of CFA are discussed with respect to pre-processing, the memory effect, the challenge of brittle ice, coupling with real-time analysis, and optimization of CFA in the field. Furthermore, non-polluting cutters with different configurations could be designed to cut and scrape in multiple directions and to separate the inner and outer portions of the core. Such a system also needs to be coupled with streamlined packaging, coding and stacking that can be implemented at high resolution and rate, avoiding manual intervention. At the same time, information on the longitudinal sections could be scanned, identified and classified to obtain quantitative data. In addition, irregular ice volume and weight could be obtained accurately. These improvements would be recorded automatically via user-friendly interfaces, and the innovations may be applied to other paleoclimate media with similar features and needs.
Mathematical morphology is widely applied in digital image processing, and various morphological constructions and algorithms have been developed for different image processing tasks. The basic idea of mathematical morphology is to use a structuring element to measure image morphology in order to solve image-understanding problems. This article presents an advanced cellular neural network that forms a mathematical morphological cellular neural network (MMCNN) equation suited to mathematical morphology filtering. It gives theories of the MMCNN dynamic extent and stable states, and shows that the mathematical morphology filter is attained through a steady dynamic process under definite conditions.
Low contrast of Magnetic Resonance (MR) images limits the visibility of subtle structures and adversely affects the outcome of both subjective and automated diagnosis. State-of-the-art contrast boosting techniques intolerably alter the inherent features of MR images. Drastic changes in brightness features induced by post-processing are not appreciated in medical imaging, as the grey level values have certain diagnostic meanings. To overcome these issues, this paper proposes an algorithm that enhances the contrast of MR images while preserving the underlying features. This method, termed Power-law and Logarithmic Modification-based Histogram Equalization (PLMHE), partitions the histogram of the image into two sub-histograms after a power-law transformation and a log compression. After a modification intended to improve the dispersion of the sub-histograms and subsequent normalization, cumulative histograms are computed, and enhanced grey level values are obtained from the resultant cumulative histograms. The performance of the PLMHE algorithm is compared with traditional histogram equalization based algorithms, and the results show that PLMHE can boost image contrast without causing dynamic range compression, a significant change in mean brightness, or contrast overshoot.
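The brightness-preserving core of such methods is the two-branch split: divide the histogram at the mean and equalize each part into its own output range, so the mean cannot drift far. The sketch below shows only this split-and-equalize step; PLMHE's power-law and log transformations and the dispersion modification are omitted, so this is a simplified illustration, not the proposed algorithm.

```python
# Sketch: bi-histogram equalization. Pixels at or below the mean map into
# [0, mean]; pixels above it map into [mean+1, 255], preserving brightness.

def equalize_range(values, out_lo, out_hi):
    """Histogram-equalize `values` into [out_lo, out_hi] via the CDF."""
    hist = {}
    for v in values:
        hist[v] = hist.get(v, 0) + 1
    cdf, total, mapping = 0, len(values), {}
    for lvl in sorted(hist):
        cdf += hist[lvl]
        mapping[lvl] = out_lo + (out_hi - out_lo) * cdf / total
    return mapping

def split_equalize(pixels):
    mean = sum(pixels) / len(pixels)
    low = [p for p in pixels if p <= mean]
    high = [p for p in pixels if p > mean]
    m_low = equalize_range(low, 0, int(mean))
    m_high = equalize_range(high, int(mean) + 1, 255)
    return [m_low[p] if p <= mean else m_high[p] for p in pixels]

res = split_equalize([10, 10, 20, 200, 220])
```

Each sub-range mapping is monotone, so grey-level ordering (and hence relative tissue intensity) is preserved, which is the diagnostic property the abstract emphasizes.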
In the analysis of high-rise buildings, traditional displacement-based plane elements are often used to obtain the in-plane internal forces of shear walls by stress integration. Limited by the singularity problem produced by wall holes and the loss of precision induced by using the differential method to derive strains, the displacement-based elements cannot always provide sufficient accuracy for design. In this paper, a hybrid post-processing procedure based on the Hellinger-Reissner variational principle is used to improve the stress precision of two quadrilateral plane elements. In order to find the best stress field, three different forms are assumed for the displacement-based plane elements with drilling DOF. Numerical results show that, by using the proposed method, the accuracy of the stress solutions of these two displacement-based plane elements can be improved.
There is a large amount of dirty data in the observation data sets derived from an integrated ocean observing network system. The data must therefore be carefully and reasonably processed before being used for forecasting or analysis. This paper proposes a data pre-processing model based on intelligent algorithms. First, we introduce the integrated network platform of ocean observation. Next, the pre-processing model of the data is presented and an intelligent data cleaning model is proposed. Based on fuzzy clustering, the Kohonen clustering network is improved to perform the parallel calculation of fuzzy c-means clustering. The proposed dynamic algorithm can automatically find the new clustering centre with the updated sample data. The rapid and dynamic performance of the model makes it suitable for real-time calculation, and the efficiency and accuracy of the model are proved by test results on observation data analysis.
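At the heart of the cleaning model is the fuzzy c-means iteration: alternate between updating soft memberships and recomputing weighted cluster centres until the centres settle. The one-dimensional toy below (fuzzifier m = 2, fixed iteration count) is a minimal sketch of that update, not the paper's improved Kohonen-network parallelization.

```python
# Sketch: fuzzy c-means on 1-D data. Membership of point x in cluster j is
# u_j = 1 / sum_k (d_j / d_k)^(2/(m-1)); centres are u^m-weighted means.

def fuzzy_c_means(data, centres, iters=50, m=2.0):
    for _ in range(iters):
        u = []
        for x in data:
            dists = [abs(x - c) + 1e-12 for c in centres]  # avoid /0
            u.append([1.0 / sum((d / dk) ** (2 / (m - 1)) for dk in dists)
                      for d in dists])
        centres = []
        for j in range(len(u[0])):
            num = sum((u[i][j] ** m) * data[i] for i in range(len(data)))
            den = sum(u[i][j] ** m for i in range(len(data)))
            centres.append(num / den)
    return centres

# Two well-separated groups of observations.
data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
centres = sorted(fuzzy_c_means(data, [0.0, 5.0]))
```

Outliers receive low membership in every cluster, which is what makes the soft assignment useful for flagging dirty observations before they reach forecasting models.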
A signal pre-processing method based on optimal variational mode decomposition (OVMD) is proposed to improve the efficiency and accuracy of local data filtering and analysis at edge nodes in distributed electromechanical systems. First, singular points in the original signals are eliminated effectively by the first-order difference method. Then the OVMD method is applied for signal mode decomposition. Furthermore, correlation analysis is conducted to determine the degree of correlation between each mode and the original signal, so as to accurately separate the real operating signal from the noise signal. On the basis of theoretical analysis and simulation, an edge node pre-processing system for distributed electromechanical systems is designed. Finally, the signal pre-processing effect is evaluated by the signal-to-noise ratio (SNR) and root-mean-square error (RMSE) indicators. The experimental results show that the OVMD-based edge node pre-processing system can extract signals with different characteristics and improve the SNR of the reconstructed signals. Due to its high fidelity and reliability, the system can also provide data quality assurance for subsequent system health monitoring and fault diagnosis.
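The first step above, removing singular points by a first-order difference test, can be sketched directly: a sample whose differences to both neighbours exceed a threshold is treated as a spike and replaced by the neighbour mean. The threshold and toy signal are illustrative; the VMD stage itself is beyond a short sketch.

```python
# Sketch: first-order-difference despiking. A point is "singular" if it
# jumps away from the previous sample and back again by more than the
# threshold; it is replaced by the mean of its neighbours.

def remove_singular_points(signal, threshold):
    cleaned = list(signal)
    for i in range(1, len(cleaned) - 1):
        if (abs(cleaned[i] - cleaned[i - 1]) > threshold and
                abs(cleaned[i + 1] - cleaned[i]) > threshold):
            cleaned[i] = 0.5 * (cleaned[i - 1] + cleaned[i + 1])
    return cleaned

# A slowly rising signal with one spike at index 2.
out = remove_singular_points([1.0, 1.1, 9.0, 1.2, 1.3], 2.0)
```

Running this before decomposition matters because a single spike spreads energy across all variational modes and would otherwise distort the correlation-based mode selection.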
The solution of systems of linear equations is applied in oil exploration, structural vibration analysis, computational fluid dynamics, and other fields. For in-depth analysis of large or very large complicated structures, parallel algorithms on high-performance computers must be used to solve such problems. This paper introduces the implementation of a parallel solver for sparse systems of linear equations.
Funding: Projects 50221402, 50490271 and 50025413 supported by the National Natural Science Foundation of China; the National Basic Research Program of China (2009CB219603, 2009CB724601, 2006CB202209 and 2005CB221500); the Key Project of the Ministry of Education (306002); the Program for Changjiang Scholars and Innovative Research Teams in Universities of MOE (IRT0408).
Funding: the Young Investigator Group "Artificial Intelligence for Probabilistic Weather Forecasting" funded by the Vector Stiftung; funding from the Federal Ministry of Education and Research (BMBF) and the Baden-Württemberg Ministry of Science as part of the Excellence Strategy of the German Federal and State Governments.
Funding: Supported by the Chinese Academy of Sciences Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics, Shanghai Branch, University of Science and Technology of China, and the National Natural Science Foundation of China under Grant No. 11405172.
Abstract: Quantum random number generators adopting single-photon detection have been restricted by the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free and ready to use, and their randomness is verified by using the National Institute of Standards and Technology statistical test suite. The random bit generation efficiency is as high as 32.8% and the potential generation rate adopting the 32×32 APD array is up to tens of Gbit/s.
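The comparison of detector responses to consecutive pulses can be illustrated with a von-Neumann-style extractor: unequal outcomes for a pulse pair yield one bit, equal outcomes are discarded. This is a hedged sketch of the general idea, not the exact scheme of the paper.

```python
def bits_from_pulse_pairs(clicks):
    """Von-Neumann-style extraction: compare the detector's response to two
    consecutive optical pulses; unequal outcomes yield one unbiased bit,
    equal outcomes are discarded. Illustrative sketch only."""
    bits = []
    for a, b in zip(clicks[0::2], clicks[1::2]):
        if a != b:
            bits.append(1 if a else 0)
    return bits

# Example: 1 = click, 0 = no click, for five consecutive pulse pairs.
clicks = [1, 0, 0, 0, 0, 1, 1, 1, 1, 0]
bits = bits_from_pulse_pairs(clicks)
```

The discarding of equal pairs is what removes bias even when the click probability is far from one half.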
Funding: Supported by the National Key Technology R&D Program of China under Grant No. 2015BAK34B00 and the National Key Research and Development Program of China under Grant No. 2016YFB1000102.
Abstract: Regular expression matching plays an important role in deep inspection. The rapid development of SDN and NFV makes the network more dynamic, bringing serious challenges to traditional deep inspection matching engines. However, state-of-the-art matching methods often require a significant amount of pre-processing time and hence are not suitable for this fast-updating scenario. In this paper, a novel matching engine called BFA is proposed to achieve high-speed regular expression matching with fast pre-processing. Experiments demonstrate that BFA achieves 5 to 20 times faster updates than existing regular expression matching methods, and scales well on multi-core platforms.
Abstract: Travel time data collection is used to assist congestion management. The use of traditional sensors (e.g. inductive loops, AVI sensors) or more recent Bluetooth sensors installed on major roads for collecting data is not sufficient because of their limited coverage and the expensive costs of installation and maintenance. Application of the Global Positioning System (GPS) to travel time and delay data collection has proven efficient in terms of accuracy, level of detail and data-collection man-power. While data collection automation is improved by the GPS technique, human errors can easily find their way into the post-processing phase, and therefore data post-processing remains a challenge, especially in large projects with high amounts of data. This paper introduces a stand-alone post-processing tool called GPS Calculator, which provides an easy-to-use environment for data post-processing. It is a Visual Basic application that processes the data files obtained in the field and integrates them into Geographic Information Systems (GIS) for analysis and representation. The results show that this tool obtains results similar to the currently used data post-processing method, reduces the post-processing effort, and also eliminates the need for a second person during data collection.
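The core computation in GPS-based travel time post-processing is distance and speed between consecutive fixes. A minimal sketch (not the GPS Calculator tool itself) might look like this:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def segment_speeds(fixes):
    """Speed (m/s) between consecutive (t_seconds, lat, lon) GPS fixes."""
    out = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        d = haversine_m(la0, lo0, la1, lo1)
        out.append(d / (t1 - t0))
    return out

# Example: two fixes one degree of longitude apart on the equator, 100 s apart.
fixes = [(0.0, 0.0, 0.0), (100.0, 0.0, 1.0)]
v = segment_speeds(fixes)
```

Real post-processing would additionally filter outliers and snap fixes to the road network.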
Funding: Supported by the New Century Excellent Talents in University program (NCET-09-0396), the National Science & Technology Key Projects of Numerical Control (2012ZX04014-031), the Natural Science Foundation of Hubei Province (2011CDB279), and the Foundation for Innovative Research Groups of the Natural Science Foundation of Hubei Province, China (2010CDA067).
Abstract: As castings become more complicated and the demands for precision of numerical simulation become higher, the data produced by casting numerical simulation become more massive. On a general personal computer, these massive numerical data may exceed the capacity of available memory, resulting in failure of rendering. Based on the out-of-core technique, this paper proposes a method to effectively utilize external storage and reduce memory usage dramatically, so as to solve the problem of insufficient memory for massive data rendering on general personal computers. Based on this method, a new post-processor is developed. It can illustrate the filling and solidification processes of casting, as well as thermal stress, and provides fast interaction with simulation results. Theoretical analysis as well as several practical examples prove that the memory usage and loading time of the post-processor are independent of the size of the relevant files and depend only on the proportion of cells on the surface. Meanwhile, the speed of rendering and of fetching values at the mouse position is appreciable, and the demands of real-time interaction are satisfied.
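The out-of-core idea, reading only the surface cells actually needed for rendering instead of loading the whole result file, can be sketched with a memory-mapped array. The file layout and index set below are invented for illustration.

```python
import numpy as np
import os
import tempfile

# Write a large per-cell value array to disk, then map it without loading
# it all into RAM; only the pages holding the surface cells are read.
path = os.path.join(tempfile.mkdtemp(), "cells.dat")
values = np.arange(1_000_000, dtype=np.float32)
values.tofile(path)

mm = np.memmap(path, dtype=np.float32, mode="r")  # no full read happens here
surface_idx = np.array([0, 10, 999_999])          # hypothetical surface cells
surface_vals = np.asarray(mm[surface_idx])        # only these pages are touched
```

This is why the post-processor's memory footprint scales with the number of surface cells rather than with file size.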
Funding: Supported by the National Natural Science Foundation of China (Nos. 62003115 and 11972130), the Shenzhen Science and Technology Program, China (JCYJ20220818102207015), and the Heilongjiang Touyan Team Program, China.
Abstract: Low Earth Orbit (LEO) remote sensing satellite mega-constellations, with their large numbers of satellites of various types, have unique advantages for carrying out concurrent multiple tasks. However, the complexity of resource allocation is increased by the large number of tasks and satellites. Therefore, the primary problem in implementing concurrent multiple tasks via a LEO mega-constellation is to pre-process tasks and observation resources. To address this challenge, we propose a pre-processing algorithm for the mega-constellation based on highly Dynamic Spatio-Temporal Grids (DSTG). First, this paper describes the management model of the mega-constellation and the multiple tasks. Then, the coding method of DSTG is proposed, based on which the description of complex mega-constellation observation resources is realized. Third, the DSTG algorithm is used to process concurrent multiple tasks at multiple levels, such as task space attributes, time attributes and grid task importance evaluation. Finally, simulation results for the constellation are given to verify the effectiveness of concurrent multi-task pre-processing based on DSTG. The autonomous processing of task decomposition, task fusion and mapping to grids, and the convenient indexing of time windows are verified.
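A minimal illustration of spatio-temporal grid coding is to map a (latitude, longitude, time) point to a discrete cell index; the actual DSTG coding described in the paper is considerably more elaborate, so the function below is only a toy.

```python
def dstg_code(lat, lon, t_hours, dlat=1.0, dlon=1.0, dt=1.0):
    """Encode a (lat, lon, time) point into a discrete spatio-temporal grid
    cell index. A minimal illustration of grid coding, not the paper's
    DSTG scheme."""
    i = int((lat + 90.0) // dlat)    # latitude band
    j = int((lon + 180.0) // dlon)   # longitude band
    k = int(t_hours // dt)           # time slot
    return (i, j, k)
```

Tasks and observation windows that land in the same cell tuple can then be fused or indexed together.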
Funding: Supported by the National Natural Science Foundation of China (60902045) and the National High-Tech Research and Development Program of China (863 Program) (2011AA01A105).
Abstract: To meet the demands for high transmission rates and high service quality in broadband wireless communication systems, orthogonal frequency division multiplexing (OFDM) has been adopted in some standards. However, inter-block interference (IBI) and inter-carrier interference (ICI) in an OFDM system degrade its performance. To mitigate IBI and ICI, some pre-processing approaches have been proposed based on full channel state information (CSI), which improved system performance. Here, a pre-processing filter based on partial CSI at the transmitter is designed and investigated. The filter coefficients are obtained by optimization, the symbol error rate (SER) is tested, and the computational complexity of the proposed scheme is analyzed. Computer simulation results show that the proposed pre-processing filter can effectively mitigate IBI and ICI and improve performance. Compared with pre-processing approaches at the transmitter based on full CSI, the proposed scheme has high spectral efficiency, limited CSI feedback and low computational complexity.
Funding: Supported by the National Natural Science Foundation of China (51875535) and the Natural Science Foundation for Young Scientists of Shanxi Province (201701D221017, 201901D211242).
Abstract: To improve the ability to detect underwater targets in a strong wideband interference environment, an efficient method of line spectrum extraction is proposed, which fully utilizes a feature of the target spectrum: the intense and stable line spectrum is superimposed on a wide continuous spectrum. This method modifies the traditional beamforming algorithm by calculating and fusing the beamforming results over multiple frequency bands and azimuth intervals, providing an excellent way to extract the line spectrum when the interference and the target are not in the same azimuth interval. The statistical efficiency of the estimated azimuth variance and the corresponding power of the line spectrum band depend on the line spectra ratio (LSR). How the output signal-to-noise ratio (SNR) varies with the LSR, the input SNR, the integration time and the filtering bandwidth of the different algorithms yields a selection principle for the critical LSR. On this basis, the detection gains of wideband energy integration and of the narrowband line spectrum algorithm are theoretically analyzed. The simulated detection gain matches the theoretical model well. The application conditions of all methods are verified by receiver operating characteristic (ROC) curves and experimental data from Qiandao Lake. In fact, combining the two methods for target detection reduces the missed detection rate. The proposed two-dimensional post-processing method, with a Kalman filter in the time dimension and a background equalization algorithm in the azimuth dimension, makes use of the strong correlation between adjacent frames, and can further remove background fluctuation and improve the display effect.
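One simple way to separate a line spectrum from the continuous spectrum, in the spirit of the premise above, is to estimate the smooth continuum with a running median and flag bins standing well above it. This generic sketch omits the paper's multi-band, multi-azimuth beamforming entirely.

```python
import numpy as np

def extract_lines(psd, win=9, k=4.0):
    """Separate line-spectrum bins from the continuous spectrum: estimate
    the smooth continuum with a running median and flag bins that exceed
    it by a factor k. A generic sketch, not the paper's method."""
    n = len(psd)
    half = win // 2
    baseline = np.array([np.median(psd[max(0, i - half):i + half + 1])
                         for i in range(n)])
    return np.flatnonzero(psd > k * baseline)

# Example: a flat continuum with one strong tonal component at bin 20.
psd = np.ones(50)
psd[20] = 100.0
lines = extract_lines(psd)
```

The median is used instead of the mean precisely so that the strong line itself does not inflate the continuum estimate.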
Funding: Financially supported by the 2022 MTC Young Individual Research Grants under the Singapore Research, Innovation and Enterprise (RIE) 2025 Plan (No. M22K3c0097), the U.S. National Science Foundation (No. DMR-2104933), and the sponsorship of the China Scholarship Council (No. 202106130051).
Abstract: Laser additive manufacturing (LAM) of titanium (Ti) alloys has emerged as a transformative technology with vast potential across multiple industries. To recap the state of the art, Ti alloys processed by two essential LAM techniques (i.e., laser powder bed fusion and laser-directed energy deposition) are reviewed, covering processes, materials and post-processing. The impacts of process parameters and strategies for optimizing them are elucidated. The various types of Ti alloys processed by LAM, including α-Ti, (α+β)-Ti and β-Ti alloys, are overviewed in terms of microstructures and benchmark properties. Furthermore, the post-processing methods for improving the performance of LAM-processed Ti alloys, including conventional and novel heat treatment, hot isostatic pressing, and surface processing (e.g., ultrasonic and laser shot peening), are systematically reviewed and discussed. The review summarizes the process windows, properties and performance envelopes and benchmarks the research achievements in LAM of Ti alloys. Outlooks on further trends in LAM of Ti alloys are also highlighted at the end of the review. This comprehensive review could serve as a valuable resource for researchers and practitioners, promoting further advancements in LAM-built Ti alloys and their applications.
Abstract: The Chang'e-3 (CE-3) mission is China's first exploration mission on the surface of the Moon that uses a lander and a rover. The eight instruments that form the scientific payloads have the following objectives: (1) investigate the morphological features and geological structures at the landing site; (2) perform integrated in-situ analysis of minerals and chemical compositions; (3) carry out integrated exploration of the structure of the lunar interior; (4) explore the lunar-terrestrial space environment and the lunar surface environment, and acquire Moon-based ultraviolet astronomical observations. The Ground Research and Application System (GRAS) is in charge of data acquisition and pre-processing, management of the payloads in orbit, and management of the data products and their applications. The Data Pre-processing Subsystem (DPS) is part of GRAS. The task of the DPS is the pre-processing of raw data from the eight CE-3 instruments, including channel processing, unpacking, package sorting, calibration and correction, identification of geographical location, calculation of the probe azimuth and zenith angles and the solar azimuth and zenith angles, and quality checks. These processes produce Level 0, Level 1 and Level 2 data. The computing platform of this subsystem comprises a high-performance computing cluster, including a real-time subsystem used for processing Level 0 data and a post-time subsystem for generating Level 1 and Level 2 data. This paper describes the CE-3 data pre-processing method, the data pre-processing subsystem, data classification, data validity and the data products that are used for scientific studies.
Abstract: This paper proposes improvements to a low-bit-rate parametric audio coder with a sinusoid model as its kernel. First, we propose a new method to effectively order and select the perceptually most important sinusoids: the sinusoid that contributes most to the reduction of the overall noise-to-mask ratio (NMR) is chosen. Combined with our improved parametric psychoacoustic model and advanced peak riddling techniques, the number of sinusoids required can be greatly reduced and the coding efficiency greatly enhanced. A lightweight version is also given to reduce the amount of computation with only a small sacrifice in performance. Second, we propose two enhancement techniques for sinusoid synthesis: bandwidth enhancement and line enhancement. With little overhead, the effective bandwidth can be extended by one more octave; the timbre tends to sound much brighter, thicker and more pleasant.
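Greedy selection of the most important sinusoids can be sketched by ranking spectral peaks. Plain FFT magnitude is used here as a crude, perceptually unweighted proxy for the NMR-reduction criterion described above, so this is an assumption-laden simplification, not the paper's selector.

```python
import numpy as np

def select_sinusoids(signal, sr, n_pick):
    """Rank FFT bins by magnitude and return the n_pick strongest as
    (frequency_Hz, magnitude) pairs. A simplified, perceptually
    unweighted proxy for NMR-based sinusoid selection."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    mags = np.abs(spec)
    order = np.argsort(mags)[::-1]        # strongest first
    return [(freqs[k], mags[k]) for k in order[:n_pick]]

# Example: a 50 Hz partial twice as strong as a 120 Hz partial.
sr = 1000
t = np.arange(sr) / sr
x = 2.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
picks = select_sinusoids(x, sr, 2)
```

A true NMR-driven selector would re-evaluate masking after each pick, which is what makes the greedy ordering in the paper perceptually meaningful.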
Funding: Supported by the National Natural Science Foundation of China (Grant No. 41630754), the State Key Laboratory of Cryospheric Science (SKLCS-ZZ-2017), the CAS Key Technology Talent Program, and the Open Foundation of the State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering (2017490711).
Abstract: High-resolution ice core records covering long time spans enable reconstruction of past climatic and environmental conditions, allowing investigation of the earth system's evolution. Pre-processing of ice cores directly affects data quality control for further analysis, since conventional ice core processing is time-consuming, produces only qualitative data, leads to ice mass loss, and risks potential secondary pollution. However, over the past several decades, pre-processing of ice cores has received less attention than improvements in ice drilling, the analytical methodology of various indices, and research on the climatic and environmental significance of ice core records. This paper therefore reviews the development of ice core processing, including framework, design and materials, and analyzes the technical advantages and disadvantages of the different systems. In the past, continuous flow analysis (CFA) has been successfully applied to process polar ice cores. However, it is not suitable for ice cores outside the polar regions because of the high level of particles, the memory effect between samples, and the filtration required before injection. Ice core processing is a subtle and professional operation due to the fragility of the non-metallic materials and the random distribution of particles and air bubbles in ice cores, which aggravates uncertainty in the measurements. Future developments of CFA are discussed with respect to pre-processing, the memory effect, the challenge of brittle ice, coupling with real-time analysis, and optimization of CFA in the field. Furthermore, non-polluting cutters with many different configurations could be designed to cut and scrape in multiple directions and to separate the inner and outer portions of the core. Such a system also needs to be coupled with a streamlined operation of packaging, coding and stacking that can be implemented at high resolution and rate, avoiding manual intervention. At the same time, information on the longitudinal sections could be scanned, identified and then classified to obtain quantitative data. In addition, irregular ice volume and weight could also be obtained accurately. These improvements would be recorded automatically via user-friendly interfaces. These innovations may be applied to other paleo-media with similar features and needs.
Abstract: Mathematical morphology is widely applied in digital image processing, and various morphological constructions and algorithms have been developed for different image processing tasks. The basic idea of mathematical morphology is to measure image morphology with structuring elements in order to solve image understanding problems. This article presents an advanced cellular neural network that forms a mathematical morphological cellular neural network (MMCNN) equation suited to mathematical morphology filtering. It gives theories on the dynamic extent and stable states of the MMCNN. It is shown that the mathematical morphology filter is attained through the steady state of a dynamic process under definite conditions.
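The basic operation of measuring an image with a structuring element can be illustrated with binary erosion (origin taken at the top-left corner of the element, for simplicity). This is generic morphology, not the MMCNN formulation of the paper.

```python
import numpy as np

def erode(img, se):
    """Binary erosion: the output is 1 where the structuring element fits
    entirely inside the foreground. Minimal sketch with the element origin
    at its top-left corner."""
    H, W = img.shape
    h, w = se.shape
    out = np.zeros_like(img)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            # Everywhere the element is 1, the image window must be 1.
            if np.all(img[i:i + h, j:j + w] >= se):
                out[i, j] = 1
    return out

# Example: a 3x3 foreground eroded by a 2x2 square element.
img = np.ones((3, 3), dtype=int)
se = np.ones((2, 2), dtype=int)
out = erode(img, se)
```

Dilation, opening and closing follow the same windowed pattern with the fit test replaced by a hit test.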
Funding: This work was supported by Taif University Researchers Supporting Project Number (TURSP-2020/114), Taif University, Taif, Saudi Arabia.
Abstract: The low contrast of Magnetic Resonance (MR) images limits the visibility of subtle structures and adversely affects the outcome of both subjective and automated diagnosis. State-of-the-art contrast boosting techniques intolerably alter the inherent features of MR images. Drastic changes in brightness features induced by post-processing are not appreciated in medical imaging, as the grey level values have certain diagnostic meanings. To overcome these issues, this paper proposes an algorithm that enhances the contrast of MR images while preserving the underlying features. This method, termed Power-law and Logarithmic Modification-based Histogram Equalization (PLMHE), partitions the histogram of the image into two sub-histograms after a power-law transformation and a log compression. After a modification intended to improve the dispersion of the sub-histograms and subsequent normalization, cumulative histograms are computed. Enhanced grey level values are computed from the resultant cumulative histograms. The performance of the PLMHE algorithm is compared with traditional histogram-equalization-based algorithms, and it has been observed from the results that PLMHE can boost the image contrast without causing dynamic range compression, a significant change in mean brightness, or contrast overshoot.
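A simplified sketch in the spirit of PLMHE: a power-law transform followed by a brightness-preserving split of the histogram at the mean, with each sub-histogram equalized within its own grey-level range. The real PLMHE additionally applies log compression and a dispersion modification, both omitted here, so this is only an assumption-laden approximation.

```python
import numpy as np

def _equalize_range(vals, lo, hi):
    """Histogram-equalize values within [lo, hi], mapped back into [lo, hi]."""
    hist, _ = np.histogram(vals, bins=256, range=(lo, hi))
    cdf = hist.cumsum() / max(vals.size, 1)
    idx = np.clip(((vals - lo) / max(hi - lo, 1e-9) * 255).astype(int), 0, 255)
    return lo + cdf[idx] * (hi - lo)

def plmhe_sketch(img, gamma=0.8):
    """Power-law transform, then split the histogram at the mean grey level
    and equalize each sub-histogram within its own range. Keeping each half
    inside its range limits the mean-brightness shift."""
    x = (img.astype(np.float64) / 255.0) ** gamma * 255.0
    m = x.mean()
    out = np.empty_like(x)
    lo_mask = x <= m
    out[lo_mask] = _equalize_range(x[lo_mask], 0.0, m)
    out[~lo_mask] = _equalize_range(x[~lo_mask], m, 255.0)
    return out

# Example: a tiny 2x2 test image.
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
out = plmhe_sketch(img)
```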
Abstract: In the analysis of high-rise buildings, traditional displacement-based plane elements are often used to obtain the in-plane internal forces of shear walls by stress integration. Limited by the singularity problem produced by wall holes and the loss of precision induced by using the differential method to derive strains, the displacement-based elements cannot always provide sufficient accuracy for design. In this paper, a hybrid post-processing procedure based on the Hellinger-Reissner variational principle is used to improve the stress precision of two quadrilateral plane elements. In order to find the best stress field, three different forms are assumed for the displacement-based plane elements with drilling DOF. Numerical results show that, by using the proposed method, the accuracy of the stress solutions of these two displacement-based plane elements can be improved.
Funding: Supported by the Key Science and Technology Project of the Shanghai Committee of Science and Technology, China (No. 06dz1200921), the Major Basic Research Project of the Shanghai Committee of Science and Technology (No. 08JC1400100), the Shanghai Talent Developing Foundation, China (No. 001), and the Specialized Foundation for Excellent Talent of Shanghai, China.
Abstract: There is a quantity of dirty data in the observation data sets derived from an integrated ocean observing network system. Thus, the data must be carefully and reasonably processed before they are used for forecasting or analysis. This paper proposes a data pre-processing model based on intelligent algorithms. First, we introduce the integrated network platform of ocean observation. Next, the pre-processing model of the data is presented, and an intelligent cleaning model for the data is proposed. Based on fuzzy clustering, the Kohonen clustering network is improved to fulfill the parallel calculation of fuzzy c-means clustering. The proposed dynamic algorithm can automatically find the new clustering center from the updated sample data. The rapid and dynamic performance of the model makes it suitable for real-time calculation, and the efficiency and accuracy of the model are proved by test results on observation data analysis.
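The fuzzy c-means step at the heart of the cleaning model can be sketched with the standard alternating updates of memberships and centres. This generic FCM is not the paper's Kohonen-network-accelerated variant, and the toy data below are invented.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: alternate centre and membership updates.
    U[i, k] is the degree to which sample i belongs to cluster k."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = 1.0 / d ** (2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Example: two well-separated point clouds.
X = np.array([[0, 0], [0, 1], [1, 0],
              [10, 10], [10, 11], [11, 10]], dtype=float)
centers, U = fuzzy_cmeans(X)
```

Samples whose maximum membership stays low in every cluster are natural candidates for flagging as dirty data.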
Funding: Supported by the National Natural Science Foundation of China (No. 61903291) and the Industrialization Project of the Shaanxi Provincial Department of Education (No. 18JC018).
Abstract: A signal pre-processing method based on optimal variational mode decomposition (OVMD) is proposed to improve the efficiency and accuracy of local data filtering and analysis at the edge nodes of distributed electromechanical systems. First, singular points in the original signals are eliminated effectively by using the first-order difference method. Then the OVMD method is applied for signal modal decomposition. Furthermore, correlation analysis is conducted to determine the degree of correlation between each mode and the original signal, so as to accurately separate the real operating signal from the noise signal. On the basis of theoretical analysis and simulation, an edge node pre-processing system for distributed electromechanical systems is designed. Finally, the signal pre-processing effect is evaluated by means of the signal-to-noise ratio (SNR) and root-mean-square error (RMSE) indicators. The experimental results show that the OVMD-based edge node pre-processing system can extract signals with different characteristics and improve the SNR of reconstructed signals. Due to its high fidelity and reliability, this system can also provide data quality assurance for subsequent system health monitoring and fault diagnosis.
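The first and third stages, singular-point elimination by first-order differences and correlation-based mode screening, can be sketched as follows. The (O)VMD decomposition itself is omitted, and the modes in the example are assumed to be given.

```python
import numpy as np

def remove_spikes(x, k=3.0):
    """Flag singular points where the first-order difference exceeds
    k standard deviations, and replace them by linear interpolation."""
    d = np.abs(np.diff(x, prepend=x[0]))
    bad = d > d.mean() + k * d.std()
    good = ~bad
    x = x.copy()
    x[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(good), x[good])
    return x

def select_modes(modes, signal, thresh=0.3):
    """Keep only decomposition modes whose correlation with the original
    signal exceeds a threshold; the rest are treated as noise."""
    keep = [m for m in modes
            if abs(np.corrcoef(m, signal)[0, 1]) > thresh]
    return np.sum(keep, axis=0)

# Example: a 5 Hz operating signal with one spike and a weak 40 Hz "noise mode".
t = np.linspace(0.0, 1.0, 200)
s = np.sin(2 * np.pi * 5 * t)
x = s.copy()
x[50] += 10.0                       # one singular point
y = remove_spikes(x)
n = 0.1 * np.sin(2 * np.pi * 40 * t)
rec = select_modes([s, n], s + n)   # the weak mode is screened out
```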
Abstract: The solution of linear equation groups can be applied in oil exploration, structural vibration analysis, computational fluid dynamics, and other fields. For in-depth analysis of large or very large complicated structures, parallel algorithms running on high-performance computers must be used to solve such complex problems. This paper introduces the implementation of a parallel solution for sparse linear equation groups.
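A classic example of a parallelizable solver for sparse linear equation groups is Jacobi iteration, whose component updates are mutually independent and can therefore be distributed across processors. The COO storage and solver below are a generic illustration under the assumption of a diagonally dominant matrix, not the paper's implementation.

```python
import numpy as np

def jacobi_sparse(rows, cols, vals, b, iters=200):
    """Jacobi iteration on a sparse matrix given in COO form. Each
    component update depends only on the previous iterate, which is what
    makes the method easy to parallelize. Assumes a nonzero diagonal and
    convergence (e.g. diagonal dominance)."""
    n = len(b)
    x = np.zeros(n)
    diag = np.zeros(n)
    for r, c, v in zip(rows, cols, vals):
        if r == c:
            diag[r] = v
    for _ in range(iters):
        ax = np.zeros(n)
        for r, c, v in zip(rows, cols, vals):
            if r != c:
                ax[r] += v * x[c]      # off-diagonal contributions only
        x = (b - ax) / diag            # independent per-component update
    return x

# Example: tridiagonal system A x = b with exact solution x = (1, 1, 1).
rows = [0, 0, 1, 1, 1, 2, 2]
cols = [0, 1, 0, 1, 2, 1, 2]
vals = [4.0, 1.0, 1.0, 4.0, 1.0, 1.0, 4.0]
b = np.array([5.0, 6.0, 5.0])
x = jacobi_sparse(rows, cols, vals, b)
```

Production codes would instead use Krylov methods with preconditioning, but the independence of the Jacobi update is the clearest illustration of the parallelism the abstract refers to.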