Journal Articles
28,561 articles found
1. Enhancing the data processing speed of a deep-learning-based three-dimensional single molecule localization algorithm (FD-DeepLoc) with a combination of feature compression and pipeline programming
Authors: Shuhao Guo, Jiaxun Lin (+1 more), Yingjun Zhang, Zhen-Li Huang. Journal of Innovative Optical Health Sciences, 2025, No. 2, pp. 150-160 (11 pages)
Three-dimensional (3D) single molecule localization microscopy (SMLM) plays an important role in biomedical applications, but its data processing is very complicated. Deep learning is a potential tool for solving this problem. The recently reported FD-DeepLoc algorithm, the state-of-the-art deep-learning-based 3D super-resolution localization algorithm, has greatly improved data processing throughput, yet it still falls short of the goal of online image processing. In this paper, a new algorithm, Lite-FD-DeepLoc, is developed on the basis of FD-DeepLoc to meet the online image processing requirements of 3D SMLM. The new algorithm uses feature compression to reduce the number of model parameters and combines it with pipeline programming to accelerate the inference process of the deep learning model. Results on simulated data show that Lite-FD-DeepLoc is about twice as fast as FD-DeepLoc with only a slight decrease in localization accuracy, enabling real-time processing of 256×256 pixel images. Results on biological experimental data imply that Lite-FD-DeepLoc can successfully analyze data based on astigmatism and saddle-point engineering, and the global resolution of the reconstructed image is equivalent to or even better than that of FD-DeepLoc.
Keywords: real-time data processing; feature compression; pipeline programming
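Where the pipeline-programming idea benefits from a picture: inference can overlap the network forward pass for one camera frame with pre- and post-processing of neighboring frames. A minimal producer-consumer sketch (generic Python threading; the stage functions are hypothetical stand-ins, not Lite-FD-DeepLoc's code):

```python
import queue
import threading

def run_pipeline(frames, preprocess, infer, postprocess, depth=8):
    """Overlap three stages so the inference stage rarely waits.

    preprocess/infer/postprocess are placeholders for camera-frame
    normalization, the network forward pass, and localization extraction.
    """
    q_in, q_out = queue.Queue(depth), queue.Queue(depth)

    def stage_pre():
        for f in frames:
            q_in.put(preprocess(f))
        q_in.put(None)                      # sentinel: no more frames

    def stage_infer():
        while (x := q_in.get()) is not None:
            q_out.put(infer(x))
        q_out.put(None)

    threading.Thread(target=stage_pre, daemon=True).start()
    threading.Thread(target=stage_infer, daemon=True).start()

    results = []
    while (y := q_out.get()) is not None:   # final stage runs inline
        results.append(postprocess(y))
    return results
```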
2. IDCE: Integrated Data Compression and Encryption for Enhanced Security and Efficiency
Authors: Muhammad Usama, Arshad Aziz (+2 more), Suliman A. Alsuhibany, Imtiaz Hassan, Farrukh Yuldashev. Computer Modeling in Engineering & Sciences, 2025, No. 4, pp. 1029-1048 (20 pages)
Data compression plays a vital role in data management and information theory by reducing redundancy. However, it lacks built-in security features such as secret keys or password-based access control, leaving sensitive data vulnerable to unauthorized access and misuse. With the exponential growth of digital data, robust security measures are essential. Data encryption, a widely used approach, ensures data confidentiality by making data unreadable and unalterable under secret key control. Despite their individual benefits, both techniques require significant computational resources, and performing them separately on the same data increases complexity and processing time. Recognizing the need for integrated approaches that balance compression ratio and security level, this research proposes an integrated data compression and encryption algorithm, named IDCE, for enhanced security and efficiency. The algorithm operates on 128-bit blocks with a 256-bit secret key. It combines Huffman coding for compression with a tent map for encryption, and an iterative Arnold cat map further enhances cryptographic confusion properties. Experimental analysis validates the effectiveness of the proposed algorithm, showing competitive performance in compression ratio, security, and overall efficiency compared with prior algorithms in the field.
Keywords: chaotic maps; security; data compression; data encryption; integrated compression and encryption
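For orientation, the tent map named above is the piecewise-linear chaotic map x_{n+1} = r·x_n for x_n < 0.5, and r·(1 − x_n) otherwise. A minimal keystream sketch (illustrative only; how IDCE derives the map parameters from the 256-bit key and couples the map to the Huffman-coded blocks is specified in the paper, not here):

```python
def tent_keystream(x0: float, r: float, n: int) -> bytes:
    """Iterate the tent map and quantize each state to one keystream byte.

    x0 in (0, 1) and r near 2.0 act as key material here; a real design
    would derive them from the 256-bit secret key, which is not modeled.
    """
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x if x < 0.5 else r * (1.0 - x)
        out.append(int(x * 256) & 0xFF)     # quantize chaotic state to a byte
    return bytes(out)

def xor_encrypt(data: bytes, x0: float = 0.413, r: float = 1.9999) -> bytes:
    """XOR already-compressed data with the chaotic keystream."""
    ks = tent_keystream(x0, r, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```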
3. Battery pack capacity prediction using deep learning and data compression technique: A method for real-world vehicles
Authors: Yi Yang, Jibin Yang (+4 more), Xiaohua Wu, Liyue Fu, Xinmei Gao, Xiandong Xie, Quan Ouyang. Journal of Energy Chemistry, 2025, No. 7, pp. 553-564 (12 pages)
The accurate prediction of battery pack capacity in electric vehicles (EVs) is crucial for ensuring safety and optimizing performance. Despite extensive research on predicting cell capacity from laboratory data, predicting the capacity of onboard battery packs from field data remains challenging due to complex operating conditions and irregular EV usage in real-world settings. Most existing methods rely on extracting health feature parameters from raw data; however, selecting specific parameters often discards critical information, which reduces prediction accuracy. To this end, this paper introduces a novel framework combining deep learning and data compression techniques to accurately predict battery pack capacity onboard. The proposed data compression method converts monthly EV charging data into feature maps, which preserve essential data characteristics while reducing the volume of raw data. To address missing capacity labels in field data, a capacity labeling method is proposed that calculates monthly battery capacity by transforming the ampere-hour integration formula and applying linear regression. A deep learning model then predicts the battery capacity of future months from the feature maps of historical months. The proposed framework, evaluated on field data from 20 EVs, achieves a mean absolute error of 0.79 Ah, a mean absolute percentage error of 0.65%, and a root mean square error of 1.02 Ah, highlighting its potential for real-world EV applications.
Keywords: lithium-ion battery; capacity prediction; real-world vehicle data; data compression; deep learning
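The capacity labeling step rests on rearranging the ampere-hour integration relation Q·ΔSOC = ∫I dt: over many partial charging segments, regressing integrated charge against the SOC change yields the pack capacity Q as a slope. A sketch under that reading (the paper's field-data cleaning and exact regression setup are not reproduced):

```python
import numpy as np

def monthly_capacity(soc_start, soc_end, currents, dt_s=1.0):
    """Estimate pack capacity (Ah) from partial charging segments.

    Ampere-hour integration gives, per segment,
        Q * (SOC_end - SOC_start) = integral of I dt,
    so regressing integrated charge on delta-SOC (through the origin)
    yields Q as the slope. soc_* are fractions in [0, 1]; currents is a
    list of per-segment current arrays (A) sampled every dt_s seconds.
    """
    delta_soc = np.asarray(soc_end) - np.asarray(soc_start)
    # rectangle-rule ampere-hour integration for each segment
    charge_ah = np.array([i.sum() * dt_s / 3600.0 for i in currents])
    # least-squares slope through the origin: Q = sum(x*y) / sum(x*x)
    return float(delta_soc @ charge_ah / (delta_soc @ delta_soc))
```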
4. A review of test methods for uniaxial compressive strength of rocks: Theory, apparatus and data processing
Authors: Wei-Qiang Xie, Xiao-Li Liu (+2 more), Xiao-Ping Zhang, Quan-Sheng Liu, En-Zhi Wang. Journal of Rock Mechanics and Geotechnical Engineering, 2025, No. 3, pp. 1889-1905 (17 pages)
The uniaxial compressive strength (UCS) of rocks is a vital geomechanical parameter widely used for rock mass classification, stability analysis, and engineering design in rock engineering. Various UCS testing methods and apparatuses have been proposed over the past few decades. The objective of the present study is to summarize the status of and developments in the theories, test apparatuses, and data processing of existing methods for UCS measurement. It starts by elaborating the theories of these test methods. The test apparatuses and their development trends are then summarized, followed by a discussion of rock specimens and data processing methods, and a recommendation on method selection for UCS measurement. The review reveals that rock failure mechanisms in UCS testing can be divided into compression-shear, compression-tension, composite, and no obvious failure modes. The apparatuses are trending towards automation, digitization, precision, and multi-modal testing. Two size correction methods are commonly used: one develops an empirical correlation between the measured indices and the specimen size, and the other uses a standard specimen to calculate a size correction factor. Soft computing models for predicting the UCS of rocks commonly use three to five input parameters. The test method for UCS measurement can be selected according to the testing scenario and the specimen size. Engineers can gain from this review a comprehensive understanding of UCS testing methods and their potential developments in various rock engineering endeavors.
Keywords: uniaxial compressive strength (UCS); UCS testing methods; test apparatus; data processing
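As a concrete instance of the first size-correction approach, many references use a Hoek-Brown style normalization to a 50 mm diameter specimen; the sketch below uses the commonly quoted 0.18 exponent as an assumption, not a value taken from this review:

```python
def ucs_50mm(ucs_measured_mpa: float, diameter_mm: float, k: float = 0.18) -> float:
    """Normalize a measured UCS to the 50 mm reference diameter.

    Implements the commonly cited empirical relation
        UCS_50 = UCS_d * (d / 50) ** k
    with k ~ 0.18 (Hoek and Brown); an assumption, not this review's formula.
    """
    return ucs_measured_mpa * (diameter_mm / 50.0) ** k

# A 30 mm core reading scaled to the 50 mm basis (smaller cores test stronger):
print(ucs_50mm(120.0, 30.0))
```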
5. Basic processing of the InSight seismic data from Mars for further seismological research
Authors: Shuguang Wang, Shuoxian Ning (+4 more), Zhixiang Yao, Jiaqi Li, Wanbo Xiao, Tianfan Yan, Feng Xu. Earthquake Science, 2025, No. 5, pp. 450-460 (11 pages)
The InSight mission has obtained seismic data from Mars, offering new insights into the planet's internal structure and seismic activity. However, the raw data released to the public contain various sources of noise, such as ticks and glitches, which hamper further seismological studies. This paper presents step-by-step processing of InSight's Very Broad Band seismic data, focusing on the suppression and removal of non-seismic noise. The processing stages include tick noise removal, glitch signal suppression, multi-component synchronization, instrument response correction, and rotation of orthogonal components. The processed datasets and associated codes are openly accessible and will support ongoing efforts to explore the geophysical properties of Mars and contribute to the broader field of planetary seismology.
Keywords: Mars; InSight; seismology; data processing; seismic noise
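Two of the listed stages, instrument response correction and component rotation, have standard ObsPy counterparts; the snippet below is generic ObsPy usage with placeholder file names, not the authors' released codes:

```python
from obspy import read, read_inventory

# Placeholder file names; real VBB data and metadata come from the
# InSight data archives.
st = read("insight_vbb_raw.mseed")           # raw VBB waveforms
inv = read_inventory("insight_station.xml")  # StationXML metadata

st.detrend("linear")
# Deconvolve the instrument response to ground velocity, band-limited
# by a cosine pre-filter to keep the deconvolution stable.
st.remove_response(inventory=inv, output="VEL",
                   pre_filt=(0.005, 0.01, 8.0, 10.0))
# Rotate the oblique U/V/W sensor axes onto geographic Z/N/E.
st.rotate(method="->ZNE", inventory=inv)
```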
6. Modeling and Performance Evaluation of Streaming Data Processing System in IoT Architecture
Authors: Feng Zhu, Kailin Wu, Jie Ding. Computers, Materials & Continua, 2025, No. 5, pp. 2573-2598 (26 pages)
With the widespread application of Internet of Things (IoT) technology, the processing of massive real-time streaming data poses significant challenges to the computational and data-processing capabilities of systems. Although distributed streaming data processing frameworks such as Apache Flink and Apache Spark Streaming provide solutions, meeting stringent response time requirements while ensuring high throughput and resource utilization remains an urgent problem. To address this, the study proposes a formal modeling approach based on Performance Evaluation Process Algebra (PEPA), which abstracts the core components and interactions of cloud-based distributed streaming data processing systems. Additionally, a generic service flow generation algorithm is introduced, enabling the automatic extraction of service flows from the PEPA model and the computation of key performance metrics, including response time, throughput, and resource utilization. The novelty of this work lies in the integration of PEPA-based formal modeling with the service flow generation algorithm, bridging the gap between formal modeling and practical performance evaluation for IoT systems. Simulation experiments demonstrate that optimizing the execution efficiency of components can significantly improve system performance. For instance, increasing the task execution rate from 10 to 100 improves system performance by 9.53%, and further increasing it to 200 yields a 21.58% improvement; diminishing returns appear at an execution rate of 500, with only a 0.42% gain. Similarly, increasing the number of TaskManagers from 10 to 20 improves response time by 18.49%, but the improvement slows to 6.06% when increasing from 20 to 50, highlighting the importance of co-optimizing component efficiency and resource management to achieve substantial performance gains. This study provides a systematic framework for analyzing and optimizing the performance of IoT systems for large-scale real-time streaming data processing. The proposed approach not only identifies performance bottlenecks but also offers insights into improving system efficiency under different configurations and workloads.
Keywords: system modeling; performance evaluation; streaming data processing; IoT system; PEPA
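PEPA models are evaluated by solving the underlying continuous-time Markov chain, and the diminishing returns reported above are the familiar queueing-theory pattern. As a back-of-envelope illustration only, plainly an M/M/c stand-in rather than the paper's PEPA model, response time flattens as the service rate grows:

```python
from math import factorial

def mmc_response_time(arrival, service, c):
    """Mean response time of an M/M/c queue via the Erlang-C formula."""
    rho = arrival / (c * service)
    assert rho < 1, "system must be stable"
    a = arrival / service
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    wait_prob = a**c / (factorial(c) * (1 - rho)) * p0
    return wait_prob / (c * service - arrival) + 1.0 / service

# Faster task execution helps a lot at first, then flattens out:
for mu in (10, 100, 200, 500):
    print(mu, round(mmc_response_time(arrival=80, service=mu, c=10), 6))
```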
7. Automation and parallelization scheme to accelerate pulsar observation data processing
Authors: Xingnan Zhang, Minghui Li. Astronomical Techniques and Instruments, 2025, No. 4, pp. 226-238 (13 pages)
Previous studies aiming to accelerate data processing have focused on enhancement algorithms, on using the graphics processing unit (GPU) to speed up programs, and on thread-level parallelism. These methods overlook maximizing the utilization of existing central processing unit (CPU) resources and reducing human and computational time costs via process automation. Accordingly, this paper proposes a scheme, called SSM, that combines the Srun job submission mode, the Sbatch job submission mode, and a monitor function. The SSM scheme includes three main modules: data management, command management, and resource management. Its core innovations are command splitting and parallel execution. The results show that this method effectively improves CPU utilization and reduces the time required for data processing. The average CPU utilization of the scheme is 89%, whereas the Srun and Sbatch job submission modes achieve significantly lower averages of 43% and 52%, respectively. In terms of data-processing time, SSM testing on Five-hundred-meter Aperture Spherical radio Telescope (FAST) data requires only 5.5 h, compared with 8 h for the Srun job submission mode and 14 h for the Sbatch job submission mode. In addition, tests on the FAST and Parkes datasets demonstrate the universality of the SSM scheme, which can process data from different telescopes. The compatibility of the SSM scheme for pulsar searches is verified using 2 days of observational data from the globular cluster M2, with the scheme successfully discovering all published pulsars in M2.
Keywords: astronomical data; parallel processing; PulsaR Exploration and Search TOolkit (PRESTO); CPU; FAST; Parkes
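The core ideas, command splitting and parallel execution under Slurm, can be pictured with a generic sketch; the file layout and the PRESTO-style command are hypothetical, not the SSM implementation:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def split_commands(fits_files, workdir):
    """One search command per data file (command splitting)."""
    return [f"prepsubband -o {workdir}/{f.stem} {f}" for f in fits_files]

def submit(cmd: str) -> int:
    # srun launches each piece as its own Slurm job step so the steps
    # run concurrently and keep all allocated CPU cores busy.
    return subprocess.run(
        ["srun", "--exclusive", "-n1", "bash", "-c", cmd]).returncode

files = sorted(Path("obs/M2").glob("*.fits"))     # placeholder path
cmds = split_commands(files, "work")
with ThreadPoolExecutor(max_workers=32) as pool:  # parallel execution
    codes = list(pool.map(submit, cmds))
print("failed steps:", sum(c != 0 for c in codes))
```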
8. GT-scopy: A Data Processing and Enhancing Package (Level 1.0-1.5) for Ground Solar Telescopes, Based on the 1.6 m Goode Solar Telescope
Authors: Ding Yuan, Wei Wu (+4 more), Song Feng, Libo Fu, Wenda Cao, Jianchuan Zheng, Lin Mei. Research in Astronomy and Astrophysics, 2025, No. 11, pp. 191-197 (7 pages)
The increasing demand for high-resolution solar observations has driven the development of advanced data processing and enhancement techniques for ground-based solar telescopes. This study develops a Python-based package (GT-scopy) for data processing and enhancement for giant solar telescopes, with application to the 1.6 m Goode Solar Telescope (GST) at Big Bear Solar Observatory. The objective is to build modern data processing software that refines existing data acquisition, processing, and enhancement methodologies to achieve atmospheric effect removal and accurate alignment at the sub-pixel level, particularly within processing levels 1.0-1.5. We implemented an integrated and comprehensive data processing procedure that includes image de-rotation, zone-of-interest selection, coarse alignment, correction for atmospheric distortions, and fine alignment at the sub-pixel level with an advanced algorithm. The results demonstrate a significant improvement in image quality, with enhanced visibility of fine solar structures both in sunspots and in quiet-Sun regions. The package significantly improves the utility of data obtained from the GST, paving the way for more precise solar research and contributing to a better understanding of solar dynamics. It can be adapted for other ground-based solar telescopes, such as the Daniel K. Inouye Solar Telescope (DKIST), the European Solar Telescope (EST), and the 8 m Chinese Giant Solar Telescope, potentially benefiting the broader solar physics community.
Keywords: techniques: image processing; methods: data analysis; Astronomical Instrumentation, Methods and Techniques
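Sub-pixel fine alignment of this kind is commonly implemented with upsampled phase cross-correlation; the sketch below uses scikit-image as an assumed stand-in for GT-scopy's actual routine:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_to_reference(ref: np.ndarray, frames: list) -> list:
    """Register each frame to `ref` with roughly 1/100 pixel precision."""
    aligned = []
    for img in frames:
        # upsample_factor=100 refines the correlation peak to 0.01 px
        dy_dx, _, _ = phase_cross_correlation(ref, img, upsample_factor=100)
        aligned.append(nd_shift(img, dy_dx))  # resample onto ref's grid
    return aligned
```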
9. Multi-scale intelligent fusion and dynamic validation for high-resolution seismic data processing in drilling
Authors: YUAN Sanyi, XU Yanwu (+2 more), XIE Renjun, CHEN Shuai, YUAN Junliang. Petroleum Exploration and Development, 2025, No. 3, pp. 680-691 (12 pages)
During drilling operations, the low resolution of seismic data often limits the accurate characterization of small-scale geological bodies near the borehole and ahead of the drill bit. This study investigates high-resolution seismic data processing technologies and methods tailored for drilling scenarios. The high-resolution processing of seismic data is divided into three stages: pre-drilling processing, post-drilling correction, and while-drilling updating. By integrating seismic data from different stages, spatial ranges, and frequencies, together with information from drilled wells and while-drilling data, and applying artificial intelligence modeling techniques, a progressive high-resolution processing technology based on multi-source information fusion is developed, which performs simple and efficient seismic information updates during drilling. Case studies show that, with the gradual integration of multi-source information, the resolution and accuracy of seismic data are significantly improved, and thin-bed weak reflections are more clearly imaged. The seismic information updated while drilling demonstrates high value in predicting geological bodies ahead of the drill bit. Validation using logging, mud logging, and drilling engineering data ensures the fidelity of the high-resolution processing results. This provides clearer and more accurate stratigraphic information for drilling operations, enhancing both drilling safety and efficiency.
Keywords: high-resolution seismic data processing; while-drilling update; while-drilling logging; multi-source information fusion; thin-bed weak reflection; artificial intelligence modeling
10. Application of Grey System Theory to Processing of Measuring Data in Reverse Engineering (cited by 3)
Authors: 平雪良, 周儒荣, 安鲁陵. Transactions of Nanjing University of Aeronautics and Astronautics (EI), 2003, No. 1, pp. 36-41 (6 pages)
The processing of measuring data plays an important role in reverse engineering. Based on grey system theory, we propose several methods for processing measuring data in reverse engineering. Measured data usually contain some abnormalities, and when the abnormal data are eliminated by filtering, blanks are created in the sequence. Grey generation and the GM(1,1) model are used to create new data for these blanks. For an uneven data sequence caused by measuring error, mean generation is used to smooth it, and stepwise and smooth generations are then applied to improve the data sequence.
Keywords: reverse engineering; grey system theory; digitization; data processing; grey generation
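GM(1,1), used above to fill the blanks left by filtering, fits a first-order grey differential equation to the accumulated sequence and forecasts from its exponential time response. A standard textbook sketch (not necessarily the paper's exact variant):

```python
import numpy as np

def gm11_forecast(x0: np.ndarray, steps: int = 1) -> np.ndarray:
    """Classic GM(1,1) grey model: fit on sequence x0, forecast `steps` ahead."""
    n = len(x0)
    x1 = np.cumsum(x0)                         # accumulated generation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response
    x0_hat = np.diff(x1_hat)                   # inverse AGO
    return x0_hat[n - 1:]                      # forecasts beyond the sample

print(gm11_forecast(np.array([2.87, 3.28, 3.34, 3.57, 3.72]), steps=2))
```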
11. Semantic-based query processing for relational data integration (cited by 1)
Authors: 苗壮, 张亚非 (+2 more), 王进鹏, 陆建江, 周波. Journal of Southeast University (English Edition) (EI/CAS), 2011, No. 1, pp. 22-25 (4 pages)
To solve the query processing correctness problem for semantic-based relational data integration, the semantics of SPARQL (simple protocol and RDF query language) queries is defined. In the course of query rewriting, all relevant tables are found and decomposed into minimal connectable units, which are then joined according to the semantic queries to produce semantically correct query plans. Algorithms for query rewriting and transforming are presented, and their computational complexity is discussed: in the worst case, the query decomposing algorithm finishes in O(n²) time and the query rewriting algorithm requires O(nm) time. The performance of the algorithms is verified by experiments, and the results show that when the length of a query is less than 8, the query processing algorithms provide satisfactory performance.
Keywords: data integration; relational database; simple protocol and RDF query language (SPARQL); minimal connectable unit; query processing
12. Advanced Frequency-Directed Run-Length Based Coding Scheme on Test Data Compression for System-on-Chip (cited by 1)
Authors: 张颖, 吴宁, 葛芬. Transactions of Nanjing University of Aeronautics and Astronautics (EI), 2012, No. 1, pp. 77-83 (7 pages)
Test data compression and test resource partitioning (TRP) are essential to reduce the amount of test data in system-on-chip testing. A novel variable-to-variable-length compression code, advanced frequency-directed run-length (AFDR) coding, is designed. Unlike frequency-directed run-length (FDR) codes, AFDR encodes both 0-runs and 1-runs and assigns the same codes to runs of equal length. It also modifies the codes for 00 and 11 to improve compression performance. Experimental results on the ISCAS 89 benchmark circuits show that AFDR codes achieve a higher compression ratio than FDR and other compression codes.
Keywords: test data compression; FDR codes; test resource partitioning; system-on-chip
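The distinguishing feature, encoding 0-runs and 1-runs alike, starts from splitting a test set into alternating runs and grouping run lengths the way FDR-family codes do. A front-end sketch (the AFDR codeword tables are in the paper and not reproduced; the grouping shown is the commonly described FDR one, stated here as an assumption):

```python
from itertools import groupby

def runs(bits: str):
    """Split a test vector into (symbol, length) runs, covering both
    0-runs and 1-runs, as AFDR-style schemes require."""
    return [(sym, len(list(grp))) for sym, grp in groupby(bits)]

def fdr_group(run_len: int) -> int:
    """FDR-style grouping: group j covers run lengths 2^j-2 .. 2^(j+1)-3,
    so frequent short runs land in low groups with short codewords."""
    j = 1
    while run_len > (1 << (j + 1)) - 3:
        j += 1
    return j

vector = "0001111000000011"
print([(s, l, fdr_group(l)) for s, l in runs(vector)])
```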
13. An Efficient Test Data Compression Technique Based on Codes
Authors: 方建平, 郝跃 (+1 more), 刘红侠, 李康. Journal of Semiconductors (EI/CAS/CSCD/PKU Core), 2005, No. 11, pp. 2062-2068 (7 pages)
This paper presents a new test data compression/decompression method for SoC testing, called hybrid run-length codes. The method makes a full analysis of the factors that influence test parameters: compression ratio, test application time, and area overhead. To improve the compression ratio, the new method is based on variable-to-variable run-length codes, and a novel algorithm is proposed to reorder the test vectors and fill the unspecified bits in the pre-processing step. With a novel on-chip decoder, low test application time and low area overhead are obtained by the hybrid run-length codes. Finally, an experimental comparison on the ISCAS 89 benchmark circuits validates the proposed method.
Keywords: test data compression; unspecified bits assignment; system-on-a-chip test; hybrid run-length codes
14. Data processing of small samples based on grey distance information approach (cited by 14)
Authors: Ke Hongfa, Chen Yongguang, Liu Yi (1. College of Electronic Science and Engineering, National University of Defense Technology, Changsha 410073, P. R. China; 2. Unit 63880, Luoyang 471003, P. R. China). Journal of Systems Engineering and Electronics (SCIE/EI/CSCD), 2007, No. 2, pp. 281-289 (9 pages)
Data processing of small samples is an important and valuable research problem in electronic equipment testing. Because it is difficult and complex to determine the probability distribution of small samples, it is difficult to use traditional probability theory to process the samples and assess the degree of uncertainty. Using grey relational theory and norm theory, this article proposes the grey distance information approach, which is based on the grey distance information quantity of a sample and the average grey distance information quantity of the samples. The definitions of these two quantities, together with their characteristics and algorithms, are introduced. The correlative problems, including the algorithm of the estimated value, the standard deviation, and the acceptance and rejection criteria for the samples and estimated results, are also addressed. Moreover, the information whitening ratio is introduced to select the weight algorithm and to compare different samples. Several examples demonstrate the application of the proposed approach and show that it is feasible and effective, with no demand on the probability distribution of the small samples.
Keywords: data processing; grey theory; norm theory; small samples; uncertainty assessments; grey distance measure; information whitening ratio
15. The development of data acquisition and processing application system for RF ion source (cited by 3)
Authors: Xiaodan ZHANG, Xiaoying WANG (+3 more), Chundong HU, Caichao JIANG, Yahong XIE, Yuanzhe ZHAO. Plasma Science and Technology (SCIE/EI/CAS/CSCD), 2017, No. 7, pp. 124-129 (6 pages)
As the key ion source component of nuclear fusion auxiliary heating devices, the radio frequency (RF) ion source is being developed and gradually applied to provide a source plasma with the advantages of ease of control and high reliability, and it easily achieves long-pulse steady-state operation. During the development and testing of the RF ion source, a large amount of original experimental data is generated, so it is necessary to develop a stable and reliable computer data acquisition and processing application system for data acquisition, storage, access, and real-time monitoring. In this paper, the development of such a system for the RF ion source is presented. The hardware platform is based on the PXI system and the software is programmed in the LabVIEW development environment. The key technologies used in the software implementation include long-pulse data acquisition, multi-threading, the transmission control protocol (TCP), and the Lempel-Ziv-Oberhumer (LZO) data compression algorithm. The design has been tested and applied on the RF ion source, and the test results show that it works reliably and steadily. With the help of this design, stable plasma discharge data of the RF ion source are collected, stored, accessed, and monitored in real time, which has practical significance for RF experiments.
Keywords: RF ion source; data acquisition; data processing; TCP; LZO algorithm
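The compress-then-stream pattern behind the TCP and LZO choices is easy to sketch; zlib stands in below for LZO (which is a separate third-party module in Python), and the length-prefixed framing is an illustrative convention, not the system's wire format:

```python
import socket
import struct
import zlib

def send_block(sock: socket.socket, samples: bytes) -> None:
    """Compress an acquisition block and send it with a length header.

    zlib at level 1 stands in for the fast LZO codec used on the real
    system; the framing (length prefix + compressed payload) is generic.
    """
    payload = zlib.compress(samples, 1)
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_block(sock: socket.socket) -> bytes:
    """Read one length-prefixed block and decompress it."""
    header = sock.recv(4, socket.MSG_WAITALL)
    (n,) = struct.unpack("!I", header)
    return zlib.decompress(sock.recv(n, socket.MSG_WAITALL))
```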
16. Magnetic field data processing methods of the China Seismo-Electromagnetic Satellite (cited by 15)
Authors: Bin Zhou, YanYan Yang (+4 more), YiTeng Zhang, XiaoChen Gou, BingJun Cheng, JinDong Wang, Lei Li. Earth and Planetary Physics, 2018, No. 6, pp. 455-461 (7 pages)
The High Precision Magnetometer (HPM) on board the China Seismo-Electromagnetic Satellite (CSES) allows highly accurate measurement of the geomagnetic field; it includes FGM (Fluxgate Magnetometer) and CDSM (Coupled Dark State Magnetometer) probes. This article introduces the main processing methods, algorithms, and procedures for the HPM data. First, the FGM and CDSM probes are calibrated using ground sensor data. Then the FGM linear parameters are corrected in orbit by applying an absolute vector magnetic field correction algorithm based on the CDSM data, while the magnetic interference of the satellite is eliminated according to ground-satellite magnetic test results. Finally, according to the characteristics of the magnetic field direction in the low-latitude region, the transformation matrix between the FGM probe and the star sensor is calibrated in orbit to determine the correct direction of the magnetic field. Comparing the magnetic field data of the CSES and Swarm satellites over five consecutive geomagnetically quiet days, the difference in the vector magnetic field measurements is about 10 nT, which is within the uncertainty interval of geomagnetic disturbance.
Keywords: China Seismo-Electromagnetic Satellite (CSES); High Precision Magnetometer (HPM); fluxgate magnetometer; CPT magnetometer; data processing
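In-orbit correction of a fluxgate's linear parameters against an absolute scalar magnetometer is conventionally posed as a least-squares fit of scales and offsets so that the corrected vector magnitude matches the scalar reading. A simplified six-parameter sketch (an assumption about the general approach; HPM's published algorithm also handles non-orthogonality and other terms):

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_fgm(raw_xyz, b_scalar):
    """Fit per-axis scale and offset so |corrected FGM| matches CDSM |B|.

    raw_xyz: (N, 3) fluxgate readings; b_scalar: (N,) CDSM magnitudes (nT).
    """
    def residual(p):
        scale, offset = p[:3], p[3:]
        corrected = scale * (raw_xyz - offset)
        return np.linalg.norm(corrected, axis=1) - b_scalar

    p0 = np.concatenate([np.ones(3), np.zeros(3)])  # ideal sensor guess
    fit = least_squares(residual, p0)
    return fit.x[:3], fit.x[3:]                     # scales, offsets
```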
17. Empirical data decomposition and its applications in image compression (cited by 2)
Authors: Deng Jiaxian, Wu Xiaoqin. Journal of Systems Engineering and Electronics (SCIE/EI/CSCD), 2007, No. 1, pp. 164-170 (7 pages)
A nonlinear data analysis algorithm, namely empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is determined automatically by the observed data and is able to implement multi-resolution analysis like the wavelet transform. The algorithm is suitable for analyzing non-stationary data and can effectively decorrelate the observed data. Discussing the applications of EDD in image compression, the paper then presents a two-dimensional data decomposition framework and makes some modifications to the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is more suitable for non-stationary image data compression.
Keywords: image processing; image compression; empirical data decomposition; non-stationary; nonlinear; data decomposition framework
18. Efficient Compression of Vector Data Map Based on a Clustering Model (cited by 5)
Authors: YANG Bisheng, LI Qingquan. Geo-Spatial Information Science, 2009, No. 1, pp. 13-17 (5 pages)
This paper proposes a new method for the compression of vector data maps. Three key steps are encompassed in the proposed method, namely, the simplification of the vector data map via the elimination of vertices, the compression of removed vertices based on a clustering model, and the decoding of the compressed vector data map. The proposed compression method was implemented and applied to compress vector data maps to investigate its performance in terms of compression ratio and distortion of geometric shapes. The results show that the proposed method provides a feasible and efficient solution for the compression of vector data maps and is able to achieve a promising compression ratio while maintaining the main shape characteristics of the spatial objects within the compressed vector data map.
Keywords: spatial data decoding; spatial data compression; error evaluation
19. A machine learning framework for low-field NMR data processing (cited by 5)
Authors: Si-Hui Luo, Li-Zhi Xiao (+4 more), Yan Jin, Guang-Zhi Liao, Bin-Sen Xu, Jun Zhou, Can Liang. Petroleum Science (SCIE/CAS/CSCD), 2022, No. 2, pp. 581-593 (13 pages)
Low-field nuclear magnetic resonance (NMR) has been widely used in the petroleum industry, for example in well logging and laboratory rock core analysis. However, the signal-to-noise ratio is low due to the low magnetic field strength of NMR tools and the complex petrophysical properties of the detected samples. Suppressing the noise and highlighting the available NMR signals is very important for subsequent data processing. Most denoising methods are based on fixed mathematical transformations or hand-designed feature selectors to suppress noise characteristics, which may not perform well because they cannot adapt to different noisy signals. In this paper, we propose a data processing framework to improve the quality of low-field NMR echo data based on dictionary learning, a machine learning method built on redundancy and sparse representation theory. Available information in noisy NMR echo data can be adaptively extracted and reconstructed by dictionary learning. The advantages and effectiveness of the proposed method were verified with numerical simulations, NMR core data analyses, and NMR logging data processing. The results show that dictionary learning can significantly improve the quality of NMR echo data with high noise levels and effectively improve the accuracy and reliability of inversion results.
Keywords: dictionary learning; low-field NMR; denoising; data processing; T2 distribution
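As a generic illustration of the dictionary-learning idea, overlapping windows of a noisy echo train can be sparsely coded over a learned dictionary and averaged back together; window size, atom count, and sparsity below are illustrative choices, not the paper's tuned settings:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def denoise_echoes(noisy, win=32, n_atoms=64, k=4):
    """Dictionary-learning denoising of a 1-D NMR echo train (sketch)."""
    # slice the echo train into overlapping windows (stride 1)
    X = np.lib.stride_tricks.sliding_window_view(noisy, win).astype(float)
    dico = MiniBatchDictionaryLearning(
        n_components=n_atoms, transform_algorithm="omp",
        transform_n_nonzero_coefs=k, random_state=0)
    code = dico.fit_transform(X)               # sparse codes (OMP, k atoms)
    patches = code @ dico.components_          # sparse reconstruction
    # average overlapping reconstructions back into one signal
    out = np.zeros(len(noisy))
    cnt = np.zeros(len(noisy))
    for i, p in enumerate(patches):
        out[i:i + win] += p
        cnt[i:i + win] += 1
    return out / cnt
```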
20. GF-3 data real-time processing method based on multi-satellite distributed data processing system (cited by 7)
Authors: YANG Jun, CAO Yan-dong (+2 more), SUN Guang-cai, XING Meng-dao, GUO Liang. Journal of Central South University (SCIE/EI/CAS/CSCD), 2020, No. 3, pp. 842-852 (11 pages)
Due to the limited scenes that synthetic aperture radar (SAR) satellites can detect, the full-track utilization rate is not high, and the computing and storage limitations of a single satellite make it difficult to process the large data volumes of spaceborne SAR. A new method of networked satellite data processing is proposed for improving the efficiency of data processing. A multi-satellite distributed SAR real-time processing method based on the Chirp Scaling (CS) imaging algorithm is studied in this paper, and a distributed data processing system is built with field programmable gate array (FPGA) chips as the kernel. Unlike traditional CS processing, the system divides data processing into three stages and reasonably allocates the computing tasks of each stage to different data processing units (i.e., satellites). The method effectively saves the computing and storage resources of the satellites, improves the utilization rate of a single satellite, and shortens the data processing time. Gaofen-3 (GF-3) SAR raw data are processed by the system, verifying the performance of the method.
Keywords: synthetic aperture radar; full-track utilization rate; distributed data processing; CS imaging algorithm; field programmable gate array; Gaofen-3