Journal Articles
3,518 articles found
1. IDCE: Integrated Data Compression and Encryption for Enhanced Security and Efficiency
Authors: Muhammad Usama, Arshad Aziz, Suliman A. Alsuhibany, Imtiaz Hassan, Farrukh Yuldashev. Computer Modeling in Engineering & Sciences, 2025, Issue 4, pp. 1029-1048 (20 pages).
Data compression plays a vital role in data management and information theory by reducing redundancy. However, it lacks built-in security features such as secret keys or password-based access control, leaving sensitive data vulnerable to unauthorized access and misuse. With the exponential growth of digital data, robust security measures are essential. Data encryption, a widely used approach, ensures data confidentiality by making it unreadable and unalterable through secret key control. Despite their individual benefits, both require significant computational resources. Additionally, performing them separately for the same data increases complexity and processing time. Recognizing the need for integrated approaches that balance compression ratios and security levels, this research proposes an integrated data compression and encryption algorithm, named IDCE, for enhanced security and efficiency. The algorithm operates on 128-bit block sizes and a 256-bit secret key length. It combines Huffman coding for compression and a Tent map for encryption. Additionally, an iterative Arnold cat map further enhances cryptographic confusion properties. Experimental analysis validates the effectiveness of the proposed algorithm, showcasing competitive performance in terms of compression ratio, security, and overall efficiency when compared to prior algorithms in the field.
Keywords: chaotic maps; security; data compression; data encryption; integrated compression and encryption
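The abstract above names two chaotic components, a Tent map keystream and an iterative Arnold cat map permutation. A minimal Python sketch of those two primitives is given below; the block size, key handling, and Huffman stage of IDCE are not reproduced, and the parameter values are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def tent_keystream(x0: float, mu: float, n: int) -> bytes:
    """Generate n keystream bytes by iterating the tent map
    x_{k+1} = mu*x_k if x_k < 0.5 else mu*(1 - x_k)."""
    x, out = x0, bytearray()
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        out.append(int(x * 256) & 0xFF)   # map the chaotic state to one byte
    return bytes(out)

def arnold_cat(block: np.ndarray, rounds: int) -> np.ndarray:
    """Permute an N x N block with the Arnold cat map
    (x, y) -> ((x + y) mod N, (x + 2y) mod N), applied `rounds` times."""
    n = block.shape[0]
    out = block.copy()
    for _ in range(rounds):
        shuffled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                shuffled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = shuffled
    return out

# Toy usage: XOR compressed bytes with the keystream, then scramble a 4x4 block.
data = bytes(range(16))
ks = tent_keystream(x0=0.3711, mu=1.99, n=len(data))
cipher = bytes(d ^ k for d, k in zip(data, ks))
scrambled = arnold_cat(np.frombuffer(cipher, dtype=np.uint8).reshape(4, 4), rounds=3)
```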
2. Battery pack capacity prediction using deep learning and data compression technique: A method for real-world vehicles
Authors: Yi Yang, Jibin Yang, Xiaohua Wu, Liyue Fu, Xinmei Gao, Xiandong Xie, Quan Ouyang. Journal of Energy Chemistry, 2025, Issue 7, pp. 553-564 (12 pages).
The accurate prediction of battery pack capacity in electric vehicles (EVs) is crucial for ensuring safety and optimizing performance. Despite extensive research on predicting cell capacity using laboratory data, predicting the capacity of onboard battery packs from field data remains challenging due to complex operating conditions and irregular EV usage in real-world settings. Most existing methods rely on extracting health feature parameters from raw data for capacity prediction of onboard battery packs; however, selecting specific parameters often results in a loss of critical information, which reduces prediction accuracy. To this end, this paper introduces a novel framework combining deep learning and data compression techniques to accurately predict battery pack capacity onboard. The proposed data compression method converts monthly EV charging data into feature maps, which preserve essential data characteristics while reducing the volume of raw data. To address missing capacity labels in field data, a capacity labeling method is proposed, which calculates monthly battery capacity by transforming the ampere-hour integration formula and applying linear regression. Subsequently, a deep learning model is built for capacity prediction, using feature maps from historical months to predict the battery capacity of future months, thus facilitating accurate forecasts. The proposed framework, evaluated using field data from 20 EVs, achieves a mean absolute error of 0.79 Ah, a mean absolute percentage error of 0.65%, and a root mean square error of 1.02 Ah, highlighting its potential for real-world EV applications.
Keywords: lithium-ion battery; capacity prediction; real-world vehicle data; data compression; deep learning
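The capacity-labeling idea described above rests on the ampere-hour integration relation, roughly C ≈ ΔAh / ΔSOC, fitted by linear regression over a month of charging segments. The sketch below shows one plausible reading of that step; the variable names, the SOC source, and the per-month grouping are assumptions, not the authors' code.

```python
import numpy as np

def monthly_capacity(ah_delta, soc_delta) -> float:
    """Estimate pack capacity (Ah) from the charging segments of one month.

    Each segment i contributes accumulated charge ah_delta[i] (Ah) over a
    SOC change soc_delta[i] (0..1). Rearranging the ampere-hour integral
    ah_delta = C * soc_delta and fitting C by least squares through the
    origin gives the monthly capacity label.
    """
    soc = np.asarray(soc_delta, dtype=float)
    ah = np.asarray(ah_delta, dtype=float)
    # Least-squares slope of ah vs. soc with zero intercept.
    return float(np.dot(soc, ah) / np.dot(soc, soc))

# Toy month: three charging events of a nominally ~102 Ah pack.
print(monthly_capacity(ah_delta=[51.2, 30.5, 71.8],
                       soc_delta=[0.50, 0.30, 0.70]))
```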
3. Enhancing the data processing speed of a deep-learning-based three-dimensional single molecule localization algorithm (FD-DeepLoc) with a combination of feature compression and pipeline programming
Authors: Shuhao Guo, Jiaxun Lin, Yingjun Zhang, Zhen-Li Huang. Journal of Innovative Optical Health Sciences, 2025, Issue 2, pp. 150-160 (11 pages).
Three-dimensional (3D) single molecule localization microscopy (SMLM) plays an important role in biomedical applications, but its data processing is very complicated. Deep learning is a potential tool to solve this problem. As the state-of-the-art deep-learning-based 3D super-resolution localization algorithm, the recently reported FD-DeepLoc algorithm still falls short of the goal of online image processing, even though it has greatly improved data processing throughput. In this paper, a new algorithm, Lite-FD-DeepLoc, is developed on the basis of FD-DeepLoc to meet the online image processing requirements of 3D SMLM. The new algorithm uses feature compression to reduce the parameters of the model and combines it with pipeline programming to accelerate the inference process of the deep learning model. Simulated data processing results show that the image processing speed of Lite-FD-DeepLoc is about twice that of FD-DeepLoc with a slight decrease in localization accuracy, enabling real-time processing of 256×256 pixel images. The results of biological experimental data processing imply that Lite-FD-DeepLoc can successfully analyze data based on astigmatism and saddle-point engineering, and the global resolution of the reconstructed image is equivalent to or even better than that of the FD-DeepLoc algorithm.
Keywords: real-time data processing; feature compression; pipeline programming
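The speed-up route sketched above pairs a smaller model (feature compression) with pipeline programming, i.e. overlapping data preparation with model inference. The toy Python pipeline below illustrates only the overlap idea with a thread and a queue; the stage functions are placeholders, not the Lite-FD-DeepLoc implementation.

```python
import queue
import threading

def preprocess(frame):
    """Placeholder CPU stage: crop/normalize a raw camera frame."""
    return frame * 0.5

def infer(batch):
    """Placeholder GPU stage: run the localization network on a batch."""
    return sum(batch)

def pipeline(frames, batch_size=4):
    q = queue.Queue(maxsize=8)

    def producer():
        for f in frames:
            q.put(preprocess(f))      # stage 1 runs ahead of stage 2
        q.put(None)                   # sentinel: no more frames

    threading.Thread(target=producer, daemon=True).start()

    results, batch = [], []
    while True:
        item = q.get()
        if item is None:
            break
        batch.append(item)
        if len(batch) == batch_size:
            results.append(infer(batch))   # stage 2 consumes while stage 1 keeps producing
            batch = []
    if batch:
        results.append(infer(batch))
    return results

print(pipeline(list(range(10))))
```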
4. A review of test methods for uniaxial compressive strength of rocks: Theory, apparatus and data processing
Authors: Wei-Qiang Xie, Xiao-Li Liu, Xiao-Ping Zhang, Quan-Sheng Liu, En-Zhi Wang. Journal of Rock Mechanics and Geotechnical Engineering, 2025, Issue 3, pp. 1889-1905 (17 pages).
The uniaxial compressive strength (UCS) of rocks is a vital geomechanical parameter widely used for rock mass classification, stability analysis, and engineering design in rock engineering. Various UCS testing methods and apparatuses have been proposed over the past few decades. The objective of the present study is to summarize the status and development of the theories, test apparatus, and data processing of the existing methods for UCS measurement. It starts by elaborating the theories of these test methods. Then the test apparatus and development trends for UCS measurement are summarized, followed by a discussion of rock specimens for the test apparatus and of data processing methods. Next, recommendations are given for selecting a UCS measurement method. The review reveals that the rock failure mechanism in UCS testing methods can be divided into compression-shear, compression-tension, composite failure mode, and no obvious failure mode. The trend of these apparatuses is towards automation, digitization, precision, and multi-modal testing. Two size correction methods are commonly used: one develops an empirical correlation between the measured indices and the specimen size, the other uses a standard specimen to calculate a size correction factor. Three to five input parameters are commonly utilized in soft computing models to predict the UCS of rocks. The test method for UCS measurement can be selected according to the testing scenario and the specimen size. Engineers can thereby gain a comprehensive understanding of UCS testing methods and their potential developments in various rock engineering endeavors.
Keywords: uniaxial compressive strength (UCS); UCS testing methods; test apparatus; data processing
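One of the size-correction routes mentioned above normalizes a measured strength to a standard specimen diameter. A small sketch is shown below using the widely quoted Hoek-Brown scaling sigma_c(d) = sigma_c(50)·(50/d)^0.18 as an assumed example of such an empirical correlation; the review itself covers several alternatives, so this is illustrative only.

```python
def ucs_to_50mm(ucs_measured_mpa: float, diameter_mm: float, exponent: float = 0.18) -> float:
    """Convert a UCS measured on a specimen of the given diameter to the
    equivalent value for a standard 50 mm specimen, using the empirical
    scaling sigma_c(d) = sigma_c(50) * (50/d)**exponent (Hoek-Brown form)."""
    return ucs_measured_mpa / (50.0 / diameter_mm) ** exponent

# A 38 mm core that measured 120 MPa maps to a slightly lower 50 mm value.
print(round(ucs_to_50mm(120.0, 38.0), 1))
```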
5. Quantitative Comparative Study of the Performance of Lossless Compression Methods Based on a Text Data Model
Authors: Namogo Silué, Sié Ouattara, Mouhamadou Dosso, Alain Clément. Open Journal of Applied Sciences, 2024, Issue 7, pp. 1944-1962 (19 pages).
Data compression plays a key role in optimizing the use of memory storage space and also in reducing latency in data transmission. In this paper, we are interested in lossless compression techniques because their performance is exploited alongside lossy compression techniques for images and videos, generally in a mixed approach. To achieve our intended objective, which is to study the performance of lossless compression methods, we first carried out a literature review, a summary of which enabled us to select the most relevant methods, namely arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab in order to test their performance. Finally, following the tests conducted on relevant data that we constructed according to a deliberate model, the results show that these methods, listed in order of performance, are very satisfactory: LZW, arithmetic coding, the Tunstall algorithm, and BWT + RLE. Likewise, it appears that the performance of certain techniques relative to others is strongly linked, on the one hand, to the sequencing and/or recurrence of the symbols that make up the message and, on the other hand, to the cumulative time of encoding and decoding.
Keywords: arithmetic coding; BWT; compression ratio; comparative study; compression techniques; Shannon-Fano; Huffman; lossless compression; LZW; performance; redundancy; RLE; text data; Tunstall
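As a concrete illustration of the kind of comparison described above, the short Python sketch below measures the compression ratio of a simple run-length encoder on a repetitive text sample; it is a stand-in for the Matlab scripts the authors used, and the ratio definition (compressed size divided by original size) is an assumption.

```python
def rle_encode(text: str) -> list:
    """Run-length encode a string into (symbol, run-length) pairs."""
    runs = []
    for ch in text:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def compression_ratio(text: str) -> float:
    """Ratio of encoded size to original size, counting 2 bytes per run
    (1 symbol byte + 1 count byte), a deliberately crude cost model."""
    return (2 * len(rle_encode(text))) / len(text)

sample = "AAAAABBBCCCCCCDD" * 100      # repetitive text favours RLE
print(f"ratio = {compression_ratio(sample):.3f}")   # well below 1.0
```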
6. ADVANCED FREQUENCY-DIRECTED RUN-LENGTH BASED CODING SCHEME ON TEST DATA COMPRESSION FOR SYSTEM-ON-CHIP (cited: 1)
Authors: Zhang Ying, Wu Ning, Ge Fen. Transactions of Nanjing University of Aeronautics and Astronautics (EI), 2012, Issue 1, pp. 77-83 (7 pages).
Test data compression and test resource partitioning (TRP) are essential to reduce the amount of test data in system-on-chip testing. A novel variable-to-variable-length compression code, advanced frequency-directed run-length (AFDR) coding, is designed. Unlike frequency-directed run-length (FDR) codes, AFDR encodes both 0-runs and 1-runs and assigns the same codewords to runs of equal length. It also modifies the codes for 00 and 11 to improve the compression performance. Experimental results for ISCAS 89 benchmark circuits show that AFDR codes achieve a higher compression ratio than FDR and other compression codes.
Keywords: test data compression; FDR codes; test resource partitioning; system-on-chip
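AFDR, as summarized above, differs from FDR by coding both 0-runs and 1-runs. The sketch below shows only that first step, splitting a test vector into alternating runs; the actual AFDR codeword tables and the 00/11 modification are not reproduced here.

```python
from itertools import groupby

def alternating_runs(test_vector: str) -> list:
    """Split a binary test vector into (bit, run-length) pairs, so that
    both 0-runs and 1-runs are available for encoding (as in AFDR),
    rather than 0-runs only (as in classic FDR)."""
    return [(bit, len(list(group))) for bit, group in groupby(test_vector)]

print(alternating_runs("0001101111000011"))
# [('0', 3), ('1', 2), ('0', 1), ('1', 4), ('0', 4), ('1', 2)]
```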
7. An Efficient Test Data Compression Technique Based on Codes
Authors: Fang Jianping, Hao Yue, Liu Hongxia, Li Kang. Journal of Semiconductors (EI, CAS, CSCD, PKU Core), 2005, Issue 11, pp. 2062-2068 (7 pages).
This paper presents a new test data compression/decompression method for SoC testing, called hybrid run-length codes. The method makes a full analysis of the factors which influence the test parameters: compression ratio, test application time, and area overhead. To improve the compression ratio, the new method is based on variable-to-variable run-length codes, and a novel algorithm is proposed to reorder the test vectors and fill the unspecified bits in the pre-processing step. With a novel on-chip decoder, low test application time and low area overhead are obtained by the hybrid run-length codes. Finally, an experimental comparison on ISCAS 89 benchmark circuits validates the proposed method.
Keywords: test data compression; unspecified bits assignment; system-on-a-chip test; hybrid run-length codes
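The pre-processing step mentioned above fills unspecified ('X') bits in the test set before encoding. A minimal, assumed strategy is shown below: each X simply copies the previous specified bit so that runs grow longer; the authors' reordering and filling algorithm is more elaborate.

```python
def fill_unspecified(cube: str) -> str:
    """Replace don't-care bits ('X') by repeating the last specified bit,
    which tends to lengthen runs and helps run-length coding.
    Leading X's default to '0'."""
    out, last = [], "0"
    for bit in cube:
        if bit == "X":
            out.append(last)
        else:
            out.append(bit)
            last = bit
    return "".join(out)

print(fill_unspecified("XX10XXX01XX"))   # -> '00100000111'
```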
8. Empirical data decomposition and its applications in image compression (cited: 2)
Authors: Deng Jiaxian, Wu Xiaoqin. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2007, Issue 1, pp. 164-170 (7 pages).
A nonlinear data analysis algorithm, namely empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is automatically determined by the observed data, and is able to implement multi-resolution analysis as the wavelet transform does. The algorithm is suitable for analyzing non-stationary data and can effectively remove the correlation of observed data. Then, through discussing the applications of EDD in image compression, the paper presents a 2-dimensional data decomposition framework and makes some modifications to the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is more suitable for non-stationary image data compression.
Keywords: image processing; image compression; empirical data decomposition; non-stationary; nonlinear; data decomposition framework
9. Efficient Compression of Vector Data Map Based on a Clustering Model (cited: 5)
Authors: YANG Bisheng, LI Qingquan. Geo-Spatial Information Science, 2009, Issue 1, pp. 13-17 (5 pages).
This paper proposes a new method for the compression of vector data maps. Three key steps are encompassed in the proposed method, namely, the simplification of the vector data map via the elimination of vertices, the compression of removed vertices based on a clustering model, and the decoding of the compressed vector data map. The proposed compression method was implemented and applied to compress vector data maps to investigate its performance in terms of the compression ratio and the distortion of geometric shapes. The results show that the proposed method provides a feasible and efficient solution for the compression of vector data maps and is able to achieve a promising compression ratio while maintaining the main shape characteristics of the spatial objects within the compressed vector data map.
Keywords: spatial data decoding; spatial data compression; error evaluation
10. RESEARCH ON ADAPTIVE DATA COMPRESSION METHOD FOR TRIANGULATED SURFACES (cited: 2)
Authors: Wang Wen, Wu Shixiong, Chen Zichen (Department of Mechanical Engineering, Zhejiang University, Hangzhou 310027, China). Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2004, Issue 2, pp. 189-192 (4 pages).
NC code or an STL file can be generated directly from measuring data in a fast reverse-engineering mode. Compressing the massive data from the laser scanner is the key to the new mode. An adaptive compression method based on a triangulated-surfaces model is put forward. Normal-vector angles between triangles are computed to find prime vertices for removal. A ring data structure is adopted to store the massive data effectively; it allows the efficient retrieval of all neighboring vertices and triangles of a given vertex. To avoid long and thin triangles, a new re-triangulation approach based on the normalized minimum vertex distance is proposed, in which the vertex distance and the interior angle of the triangle are considered. Results indicate that the compression method has high efficiency and reliable precision. The method can be applied in fast reverse engineering to acquire an optimal subset of the original massive data.
Keywords: data compression; reverse engineering; triangulated surfaces
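The vertex-removal criterion above is driven by the angle between the normal vectors of adjacent triangles (a nearly flat neighbourhood is a candidate for decimation). The sketch below computes that angle for two triangles sharing an edge; the thresholds and the ring data structure are outside its scope, and the function names are assumptions.

```python
import numpy as np

def triangle_normal(p0, p1, p2):
    """Unit normal of the triangle (p0, p1, p2)."""
    n = np.cross(np.asarray(p1) - np.asarray(p0), np.asarray(p2) - np.asarray(p0))
    return n / np.linalg.norm(n)

def normal_angle_deg(tri_a, tri_b) -> float:
    """Angle (degrees) between the normals of two triangles; small values
    mean the local surface is nearly flat, so shared vertices are good
    candidates for removal during compression."""
    na, nb = triangle_normal(*tri_a), triangle_normal(*tri_b)
    cos = float(np.clip(np.dot(na, nb), -1.0, 1.0))
    return float(np.degrees(np.arccos(cos)))

a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
b = [(1, 0, 0), (1, 1, 0.1), (0, 1, 0)]   # slightly tilted neighbour
print(round(normal_angle_deg(a, b), 2))
```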
11. Design of quantum VQ iteration and quantum VQ encoding algorithm taking O(√N) steps for data compression (cited: 2)
Authors: Pang Chaoyang, Zhou Zhengwei, Chen Pingxing, Guo Guangcan. Chinese Physics B (SCIE, EI, CAS, CSCD), 2006, Issue 3, pp. 618-623 (6 pages).
Vector quantization (VQ) is an important data compression method. The key step of VQ encoding is to find the closest vector among N codebook vectors for a given feature vector, and many classical linear search algorithms take O(N) distance computations between two vectors. A quantum VQ iteration and a corresponding quantum VQ encoding algorithm that take O(√N) steps are presented in this paper. The unitary operation of distance computation can be performed on a number of vectors simultaneously because a quantum state can exist in a superposition of states. The quantum VQ iteration comprises three oracles, whereas many quantum algorithms, such as Shor's factorization algorithm and Grover's algorithm, have only one oracle. An entangled state is generated and used, in contrast to Grover's algorithm, where the state is not entangled. The quantum VQ iteration is a rotation over a subspace, whereas the Grover iteration is a rotation over the global space. The quantum VQ iteration thus extends the Grover iteration to more complex searches that require more oracles, and the method is universal.
Keywords: data compression; vector quantization; Grover's algorithm; quantum VQ iteration
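For reference, the classical O(N) encoding step that the quantum iteration above is designed to beat is simply a nearest-codeword search. The sketch below shows that baseline in plain Python/NumPy; it is not the quantum algorithm, only the operation whose cost it reduces to O(√N).

```python
import numpy as np

def vq_encode(feature: np.ndarray, codebook: np.ndarray) -> int:
    """Classical VQ encoding: return the index of the codebook vector
    closest (Euclidean distance) to `feature`. This scans all N entries,
    i.e. O(N) distance computations."""
    dists = np.linalg.norm(codebook - feature, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 8))   # N = 256 codewords of dimension 8
feature = rng.normal(size=8)
print(vq_encode(feature, codebook))
```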
12. A New Vector Data Compression Approach for WebGIS (cited: 2)
Authors: LI Yunjin, ZHONG Ershun. Geo-Spatial Information Science, 2011, Issue 1, pp. 48-53 (6 pages).
A high compression ratio, high decoding performance, and progressive data transmission are the most important requirements of vector data compression algorithms for WebGIS. To meet these requirements, we present a new compression approach. This paper begins with the generation of multiscale data by converting float coordinates to integer coordinates. It is proved that the distance between the converted point and the original point on screen is within 2 pixels, and therefore our approach is suitable for the visualization of vector data on the client side. The integer coordinates are passed to an integer wavelet transformer, and the high-frequency coefficients produced by the transformer are encoded with canonical Huffman codes. The experimental results on river data and road data demonstrate the effectiveness of the proposed approach: the compression ratio can reach 10% for river data and 20% for road data, respectively. We conclude that more attention needs to be paid to the correlation between curves that contain few points.
Keywords: vector data compression; WebGIS; progressive data transmission
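The first stage described above maps float map coordinates to integers tied to the screen grid so that the rounding error stays within a couple of pixels. The sketch below shows one plausible form of that quantization step; the scale handling, the integer wavelet transform, and the canonical Huffman coder are not included, and the function is an assumption rather than the paper's code.

```python
def quantize_coords(points, origin, pixel_size):
    """Convert float map coordinates to integer pixel-grid coordinates.

    `origin` is the map coordinate of the grid origin and `pixel_size`
    is the map length of one screen pixel at the current scale; rounding
    to the nearest grid node keeps the on-screen displacement sub-pixel.
    """
    ox, oy = origin
    return [(round((x - ox) / pixel_size), round((y - oy) / pixel_size))
            for x, y in points]

river = [(12.3456, 45.6789), (12.3461, 45.6790), (12.3470, 45.6801)]
print(quantize_coords(river, origin=(12.3450, 45.6780), pixel_size=0.0002))
```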
13. Improved SDT Process Data Compression Algorithm (cited: 3)
Authors: Feng Xiaodong, Cheng Changling, Liu Changling, Shao Huihe. High Technology Letters (EI, CAS), 2003, Issue 2, pp. 91-96 (6 pages).
Process data compression and trending are essential for improving control system performance. The Swing Door Trending (SDT) algorithm is well designed to adapt to the process trend while retaining the merit of simplicity, but it cannot handle outliers or adapt to the fluctuations of actual data. An Improved SDT (ISDT) algorithm is proposed in this paper. The effectiveness and applicability of the ISDT algorithm are demonstrated by computations on both synthetic and real process data. By applying an adaptive recording limit as well as outlier-detecting rules, a higher compression ratio is achieved and outliers are identified and eliminated. The fidelity of the algorithm is also improved. It can be used in both online and batch mode, and integrated into existing software packages without change.
Keywords: SDT; data compression; process data treatment
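For context, the baseline that ISDT improves on is the classic swinging-door test: a sample is archived only when the "doors" pivoting at the last stored point (offset by the compression deviation E) can no longer stay open in parallel. A minimal Python sketch of that baseline under the usual formulation is shown below; the adaptive recording limit and the outlier rules of ISDT are not included.

```python
def swinging_door(times, values, dev):
    """Classic swing-door trending: keep a point only when the corridor of
    width 2*dev around the segment from the last archived point can no
    longer contain the incoming samples."""
    kept = [(times[0], values[0])]
    slope_up, slope_low = float("-inf"), float("inf")
    for i in range(1, len(values)):
        t0, v0 = kept[-1]
        dt = times[i] - t0
        slope_up = max(slope_up, (values[i] - (v0 + dev)) / dt)
        slope_low = min(slope_low, (values[i] - (v0 - dev)) / dt)
        if slope_up > slope_low:              # doors crossed: archive previous sample
            kept.append((times[i - 1], values[i - 1]))
            t0, v0 = kept[-1]
            dt = times[i] - t0
            slope_up = (values[i] - (v0 + dev)) / dt
            slope_low = (values[i] - (v0 - dev)) / dt
    kept.append((times[-1], values[-1]))
    return kept

t = list(range(10))
y = [0.0, 0.1, 0.2, 0.3, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5]   # a step change
print(swinging_door(t, y, dev=0.5))   # keeps only a handful of points
```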
14. Design of real-time data compression wireless sensor network based on LZW algorithm (cited: 2)
Authors: CHENG Ya-li, LI Jin-ming, CHENG Nai-peng. Journal of Measurement Science and Instrumentation (CAS, CSCD), 2019, Issue 2, pp. 191-198 (8 pages).
A real-time data compression wireless sensor network based on the Lempel-Ziv-Welch (LZW) encoding algorithm is designed for the increasing data volume of terminal nodes when using ZigBee for long-distance wireless communication. The system consists of a terminal node, a router, a coordinator, and an upper computer. The terminal node is responsible for storing and sending the collected data after it has been compressed with the LZW algorithm; the router is responsible for relaying data in the wireless network; the coordinator is responsible for sending the received data to the upper computer. In terms of network functions, the development and configuration of the CC2530 chips on the terminal, router, and coordinator nodes are completed using the Z-Stack protocol stack, and the network is successfully organized. Simulation analysis and test verification show that the system realizes the wireless acquisition and storage of remote data and reduces the network occupancy rate through data compression, which gives it practical value and application prospects.
Keywords: wireless sensor network; ZigBee; LZW algorithm; data compression
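Since the node design above centres on LZW, a compact reference encoder is sketched below in Python to make the dictionary-growing idea concrete; the on-node C implementation for the CC2530 would differ, so treat this purely as an illustration of the algorithm.

```python
def lzw_encode(data: bytes) -> list:
    """Textbook LZW: grow a dictionary of byte strings and emit the code
    of the longest known prefix at each step."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    codes = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            codes.append(dictionary[w])
            dictionary[wc] = next_code
            next_code += 1
            w = bytes([byte])
    if w:
        codes.append(dictionary[w])
    return codes

sample = b"TOBEORNOTTOBEORTOBEORNOT"
print(lzw_encode(sample))   # repeated substrings collapse to single codes
```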
15. A Hybrid Method for Compression of Solar Radiation Data Using Neural Networks (cited: 1)
Authors: Bharath Chandra Mummadisetty, Astha Puri, Ershad Sharifahmadian, Shahram Latifi. International Journal of Communications, Network and System Sciences, 2015, Issue 6, pp. 217-228 (12 pages).
The prediction of solar radiation is important for several applications in renewable energy research. A number of geographical variables affect solar radiation prediction, and identifying these variables is essential for accurate prediction. This paper presents a hybrid method for the compression of solar radiation data using predictive analysis. The prediction of minute-wise solar radiation is performed using different artificial neural network (ANN) models, namely the multi-layer perceptron neural network (MLPNN), cascade feed-forward back propagation (CFNN), and Elman back propagation (ELMNN). The root mean square error (RMSE) is used to evaluate the prediction accuracy of the three ANN models. The information and knowledge gained from the present study could improve the accuracy of analysis in climate studies and help in congestion control.
Keywords: data compression; predictive analysis; artificial neural network; compression ratio; machine learning; climate data prediction
16. A Path-Based Approach for Data Aggregation in Grid-Based Wireless Sensor Networks (cited: 1)
Authors: Neng-Chung Wang, Yung-Kuei Chiang, Chih-Hung Hsieh. Journal of Electronic Science and Technology (CAS), 2014, Issue 3, pp. 313-317 (5 pages).
Sensor nodes in a wireless sensor network (WSN) are typically powered by batteries, so their energy is constrained. Our design goal is to efficiently utilize the energy of each sensor node to extend its lifetime, and thereby prolong the lifetime of the whole WSN. In this paper, we propose a path-based data aggregation scheme (PBDAS) for grid-based wireless sensor networks. In order to extend the lifetime of a WSN, we construct a grid infrastructure by partitioning the whole sensor field into a grid of cells. Each cell has a head responsible for aggregating its own data with the data sensed by the other nodes in the same cell and then transmitting it out. In order to efficiently and rapidly transmit the data to the base station (BS), we link the cell heads to form a chain. Each cell head on the chain takes a turn becoming the chain leader responsible for transmitting data to the BS. Aggregated data moves from head to head along the chain, and finally the chain leader transmits it to the BS. In PBDAS, only the cell heads need to transmit data toward the BS, so the number of transmissions to the BS decreases substantially. Besides, the cell heads and the chain leader are designated in turn according to their energy level, so that the energy depletion of the nodes is evenly distributed. Simulation results show that the proposed PBDAS extends the lifetime of the sensor nodes, and thus of the whole network.
Keywords: base station; cell head; data aggregation; grid-based wireless sensor networks
17. A Complexity Analysis and Entropy for Different Data Compression Algorithms on Text Files (cited: 1)
Authors: Mohammad Hjouj Btoush, Ziad E. Dawahdeh. Journal of Computer and Communications, 2018, Issue 1, pp. 301-315 (15 pages).
In this paper, we analyze the complexity and entropy of different data compression algorithms: LZW, Huffman, fixed-length code (FLC), and Huffman after using fixed-length code (HFLC). We test these algorithms on files of different sizes and conclude that LZW is the best across all compression scales tested, especially on large files, followed by Huffman, HFLC, and FLC, respectively. Data compression is still an important research topic with many applications. We therefore suggest continuing research in this field, for example by combining two techniques to reach a better one, or by using another source mapping (Hamming, such as embedding a linear array into a hypercube) together with good techniques like Huffman, in order to reach good results.
Keywords: text files; data compression; Huffman coding; LZW; Hamming; entropy; complexity
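The entropy figure used in comparisons like the one above is the per-symbol Shannon entropy H = -Σ p_i log2 p_i of the file's symbol distribution, which lower-bounds what any lossless coder can achieve. A short sketch of that measurement follows; it reflects the standard definition rather than the paper's specific tooling.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Per-symbol Shannon entropy (bits/byte) of a byte string:
    H = -sum(p_i * log2(p_i)) over the observed byte frequencies."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

text = b"abracadabra abracadabra"
print(f"{shannon_entropy(text):.3f} bits/symbol")   # low entropy: repetitive text
```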
18. Sea Route Monitoring System Using Wireless Sensor Network Based on the Data Compression Algorithm (cited: 1)
Authors: LI Yang, ZHANG Zhongshan, HUANGFU Wei, CHAI Xiaomeng, ZHU Xinpeng, ZHU Hongliang. China Communications (SCIE, CSCD), 2014, Issue A01, pp. 179-186 (8 pages).
The wireless sensor network (WSN) plays an important role in monitoring the environment near a harbor in order to keep nearby ships out of danger and to optimize the utilization of limited sea routes. Based on the historical data collected by buoys with sensing capacities, a novel data compression algorithm called adaptive time piecewise constant vector quantization (ATPCVQ) is proposed to utilize the principal components. The proposed system is capable of lowering the budget of wireless communication and enhancing the lifetime of sensor nodes subject to a constraint on data precision. Furthermore, the proposed algorithm is verified using practical data from the Port of Qinhuangdao, China.
Keywords: wireless sensor network; sea route monitoring; data compression; principal component analysis
19. Technology of Optimal Data Compression
Authors: Yang Feng, Peng Suping, Zheng Yu, Liang Chunqing. International Journal of Mining Science and Technology (SCIE, EI), 2000, Issue 1, pp. 22-25 (4 pages).
A data compression method using orthogonal transforms is introduced so as to ensure minimal distortion in signal restoration. Based on the transformation, the method can compress the data according to the required precision, and the achievable compression ratio is closely related to that precision. The results show the method to be favorable for different kinds of data compression.
Keywords: orthogonal transformation; data compression; optimization; correlation
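The idea summarized above, transform the data and keep only the coefficients needed for a given precision, can be illustrated with any orthogonal transform. The sketch below uses an orthonormal DCT-II and a coefficient threshold as assumed examples; the paper does not specify this particular transform or thresholding rule.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix (one example of an orthogonal transform)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * m + 1) / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def compress(signal: np.ndarray, tol: float):
    """Keep only transform coefficients larger than `tol`; the tolerance
    plays the role of the required precision and drives the ratio."""
    c = dct_matrix(len(signal))
    coeffs = c @ signal
    mask = np.abs(coeffs) > tol
    return coeffs[mask], np.flatnonzero(mask), c

def restore(kept, idx, c, n):
    full = np.zeros(n)
    full[idx] = kept
    return c.T @ full                      # inverse = transpose (orthogonal matrix)

x = np.sin(np.linspace(0, 4 * np.pi, 64)) + 0.01 * np.random.default_rng(1).normal(size=64)
kept, idx, c = compress(x, tol=0.05)
print(len(kept), "of", len(x), "coefficients kept;",
      "max error:", np.max(np.abs(x - restore(kept, idx, c, len(x)))))
```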
20. DPCM-based vibration sensor data compression and its effect on structural system identification
Authors: Zhang Yunfeng, Li Jian. Earthquake Engineering and Engineering Vibration (SCIE, EI, CSCD), 2005, Issue 1, pp. 153-163 (11 pages).
Due to the large scale and complexity of civil infrastructure, structural health monitoring typically requires a substantial number of sensors, which consequently generate huge volumes of sensor data. Innovative sensor data compression techniques are highly desired to facilitate efficient data storage and remote retrieval of sensor data. This paper presents a vibration sensor data compression algorithm based on the Differential Pulse Code Modulation (DPCM) method and considers the effects of signal distortion due to lossy data compression on structural system identification. The DPCM system concerned consists of two primary components: a linear predictor and a quantizer. For the DPCM system considered in this study, the least-squares method is used to derive the linear predictor coefficients and a Jayant quantizer is used for scalar quantization. A 5-DOF model structure is used as the prototype structure in the numerical study. Numerical simulation was carried out to study the performance of the proposed DPCM-based data compression algorithm as well as its effect on the accuracy of structural identification, including modal parameters and second-order structural parameters such as stiffness and damping coefficients. It is found that the DPCM-based sensor data compression method is capable of reducing the raw sensor data size to a significant extent while having only a minor effect on the modal parameters and the second-order structural parameters identified from the reconstructed sensor data.
Keywords: data compression; instrumentation; linear predictor; modal parameters; sensor; system identification; vibration
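The two DPCM components named above, a least-squares linear predictor and a scalar quantizer, can be sketched briefly. The Python below fits predictor coefficients by least squares and quantizes the prediction residual with a fixed (non-adaptive) uniform quantizer; the Jayant step-size adaptation used in the paper is omitted, and all parameter choices are assumptions.

```python
import numpy as np

def fit_predictor(x: np.ndarray, order: int) -> np.ndarray:
    """Least-squares coefficients a so that x[n] ~= sum_k a[k] * x[n-1-k]."""
    rows = np.array([x[n - order:n][::-1] for n in range(order, len(x))])
    targets = x[order:]
    coeffs, *_ = np.linalg.lstsq(rows, targets, rcond=None)
    return coeffs

def dpcm_encode(x: np.ndarray, coeffs: np.ndarray, step: float) -> np.ndarray:
    """Quantize the prediction residual e[n] = x[n] - x_hat[n] with a
    uniform mid-tread quantizer of the given step size."""
    order = len(coeffs)
    recon = list(x[:order])               # decoder state (first samples sent raw)
    codes = []
    for n in range(order, len(x)):
        pred = float(np.dot(coeffs, recon[-1:-order - 1:-1]))
        q = int(round((x[n] - pred) / step))
        codes.append(q)
        recon.append(pred + q * step)     # track what the decoder reconstructs
    return np.array(codes)

t = np.linspace(0, 1, 400)
signal = np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
a = fit_predictor(signal, order=3)
codes = dpcm_encode(signal, a, step=0.02)
print("residual codes span:", codes.min(), "to", codes.max())   # small residuals need few bits
```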