Journal Articles
3,714 articles found
AN EFFICIENT ECG DATA COMPRESSION METHOD
1
Authors: Chongxun Zheng, Xiangguo Yan (Institute of Biomedical Engineering, Xi'an Jiaotong University, Xi'an, 710049, China). Chinese Journal of Biomedical Engineering (English Edition), 1997, No. 4, pp. 234-239 (6 pages)
An efficient ECG (electrocardiogram) data compression algorithm called KPDEC (key point detection and error compensation) is presented in this paper. With the KPDEC method, only the key points (KPs) of ECG signals are saved, making the compression more efficient. These KPs can be extracted from ECG samples by calculating the second-order central differences. An error pre-correcting technique then gives each saved sample a reasonable compensation before it is stored; this technique noticeably reduces the PRD (percentage root-mean-square difference). The paper describes an optimal coding scheme for obtaining a higher compression rate. Furthermore, an adaptive filtering technique is designed for reconstructed ECG signals to obtain better-fidelity waves. The algorithm is able to compress ECG data to 168 bits per second with a PRD of less than 3%.
Keywords: ECG, data compression
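The key-point extraction described above rests on the second-order central difference x[i-1] - 2x[i] + x[i+1]. A minimal sketch; the threshold value and the endpoint handling are illustrative assumptions, not taken from the paper:

```python
def key_points(x, threshold=0.5):
    """Flag key points (KPs) of a sampled signal where the magnitude of the
    second-order central difference exceeds a threshold.

    The threshold default is an illustrative assumption, not from the paper.
    """
    kps = [0, len(x) - 1]  # always keep the endpoints
    for i in range(1, len(x) - 1):
        d2 = x[i - 1] - 2 * x[i] + x[i + 1]  # second-order central difference
        if abs(d2) > threshold:
            kps.append(i)
    return sorted(set(kps))
```

A flat signal keeps only its endpoints, while a spike and its shoulders are retained as key points.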
IDCE:Integrated Data Compression and Encryption for Enhanced Security and Efficiency
2
Authors: Muhammad Usama, Arshad Aziz, Suliman A. Alsuhibany, Imtiaz Hassan, Farrukh Yuldashev. Computer Modeling in Engineering & Sciences, 2025, No. 4, pp. 1029-1048 (20 pages)
Data compression plays a vital role in data management and information theory by reducing redundancy. However, it lacks built-in security features such as secret keys or password-based access control, leaving sensitive data vulnerable to unauthorized access and misuse. With the exponential growth of digital data, robust security measures are essential. Data encryption, a widely used approach, ensures data confidentiality by making data unreadable and unalterable through secret key control. Despite their individual benefits, both compression and encryption require significant computational resources, and performing them separately on the same data increases complexity and processing time. Recognizing the need for integrated approaches that balance compression ratios and security levels, this research proposes an integrated data compression and encryption algorithm, named IDCE, for enhanced security and efficiency. The algorithm operates on 128-bit blocks with a 256-bit secret key. It combines Huffman coding for compression with a Tent map for encryption, and an iterative Arnold cat map further enhances cryptographic confusion properties. Experimental analysis validates the effectiveness of the proposed algorithm, showcasing competitive performance in terms of compression ratio, security, and overall efficiency compared to prior algorithms in the field.
Keywords: chaotic maps, security, data compression, data encryption, integrated compression and encryption
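The Tent-map encryption stage named in the abstract can be sketched as a chaotic keystream XORed into the compressed bytes. Everything here (the parameter names `x0` and `mu`, the byte quantization, the XOR combiner) is an illustrative assumption; the paper's 128-bit block construction and Arnold cat map stage are not reproduced:

```python
def tent_map_keystream(x0, mu, n):
    """Generate n pseudo-random bytes by iterating the Tent map
    x_{k+1} = mu*x if x < 0.5 else mu*(1-x); x0 in (0,1) acts as the key.
    Byte extraction via quantization is an illustrative choice."""
    x = x0
    out = []
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        out.append(int(x * 256) % 256)  # quantize the chaotic state to one byte
    return bytes(out)

def xor_encrypt(data, x0=0.3141, mu=1.9999):
    """XOR each data byte with the keystream; applying it twice decrypts."""
    ks = tent_map_keystream(x0, mu, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))
```

Because XOR is its own inverse, decryption is simply a second call with the same key parameters.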
ADVANCED FREQUENCY-DIRECTED RUN-LENGTH BASED CODING SCHEME ON TEST DATA COMPRESSION FOR SYSTEM-ON-CHIP (Cited by 1)
3
Authors: Zhang Ying, Wu Ning, Ge Fen. Transactions of Nanjing University of Aeronautics and Astronautics, EI, 2012, No. 1, pp. 77-83 (7 pages)
Test data compression and test resource partitioning (TRP) are essential to reduce the amount of test data in system-on-chip testing. A novel variable-to-variable-length compression code, advanced frequency-directed run-length (AFDR) codes, is designed. Unlike frequency-directed run-length (FDR) codes, AFDR encodes both 0- and 1-runs and uses the same codes for equal-length runs. It also modifies the codes for 00 and 11 to improve compression performance. Experimental results for ISCAS 89 benchmark circuits show that AFDR codes achieve a higher compression ratio than FDR and other compression codes.
Keywords: test data compression, FDR codes, test resource partitioning, system-on-chip
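AFDR's starting point, splitting a test stream into alternating 0- and 1-runs so that a 0-run and a 1-run of equal length can share a codeword, can be sketched as follows; the actual AFDR codeword table from the paper is not reproduced here:

```python
def runs(bits):
    """Split a 0/1 test stream into maximal runs of identical symbols.
    AFDR would then assign the same codeword to a 0-run and a 1-run of
    equal length (codeword table omitted in this sketch)."""
    out = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1  # extend the current run
        out.append((bits[i], j - i))  # (symbol, run length)
        i = j
    return out
```

For example, the stream `0001100` decomposes into a 0-run of 3, a 1-run of 2, and a 0-run of 2.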
Battery pack capacity prediction using deep learning and data compression technique: A method for real-world vehicles
4
Authors: Yi Yang, Jibin Yang, Xiaohua Wu, Liyue Fu, Xinmei Gao, Xiandong Xie, Quan Ouyang. Journal of Energy Chemistry, 2025, No. 7, pp. 553-564 (12 pages)
The accurate prediction of battery pack capacity in electric vehicles (EVs) is crucial for ensuring safety and optimizing performance. Despite extensive research on predicting cell capacity using laboratory data, predicting the capacity of onboard battery packs from field data remains challenging due to complex operating conditions and irregular EV usage in real-world settings. Most existing methods rely on extracting health feature parameters from raw data for capacity prediction of onboard battery packs; however, selecting specific parameters often results in a loss of critical information, which reduces prediction accuracy. To this end, this paper introduces a novel framework combining deep learning and data compression techniques to accurately predict battery pack capacity onboard. The proposed data compression method converts monthly EV charging data into feature maps, which preserve essential data characteristics while reducing the volume of raw data. To address missing capacity labels in field data, a capacity labeling method is proposed, which calculates monthly battery capacity by transforming the ampere-hour integration formula and applying linear regression. Subsequently, a deep learning model is proposed to build a capacity prediction model, using feature maps from historical months to predict the battery capacity of future months, thus facilitating accurate forecasts. The proposed framework, evaluated using field data from 20 EVs, achieves a mean absolute error of 0.79 Ah, a mean absolute percentage error of 0.65%, and a root mean square error of 1.02 Ah, highlighting its potential for real-world EV applications.
Keywords: lithium-ion battery, capacity prediction, real-world vehicle data, data compression, deep learning
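The capacity-labeling step, "transforming the ampere-hour integration formula", presumably amounts to dividing the integrated charge of a charging segment by its SOC swing: C = ∫I dt / (SOC_end − SOC_start). A hedged sketch; the trapezoidal integration and the SOC inputs are assumptions, not the paper's exact procedure:

```python
def capacity_from_charge_segment(current_a, dt_s, soc_start, soc_end):
    """Estimate pack capacity (Ah) from one charging segment via the
    ampere-hour integration formula C = integral(I dt) / (SOC_end - SOC_start).
    current_a: current samples in amperes at a fixed sampling period dt_s (s).
    The trapezoidal rule used here is an illustrative choice."""
    # charge throughput of the segment in ampere-hours
    ah = sum((current_a[i] + current_a[i + 1]) / 2 * dt_s / 3600
             for i in range(len(current_a) - 1))
    return ah / (soc_end - soc_start)
```

A constant 10 A over one hour while the SOC rises by 10% yields an estimated capacity of 100 Ah.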
An Efficient Test Data Compression Technique Based on Codes
5
Authors: Fang Jianping, Hao Yue, Liu Hongxia, Li Kang. Journal of Semiconductors, EI CAS CSCD (PKU Core), 2005, No. 11, pp. 2062-2068 (7 pages)
This paper presents a new test data compression/decompression method for SoC testing, called hybrid run-length codes. The method makes a full analysis of the factors which influence test parameters: compression ratio, test application time, and area overhead. To improve the compression ratio, the new method is based on variable-to-variable run-length codes, and a novel algorithm is proposed to reorder the test vectors and fill the unspecified bits in the pre-processing step. With a novel on-chip decoder, low test application time and low area overhead are obtained by hybrid run-length codes. Finally, an experimental comparison on ISCAS 89 benchmark circuits validates the proposed method.
Keywords: test data compression, unspecified bits assignment, system-on-a-chip test, hybrid run-length codes
Compression of ECG Signal Based on Compressive Sensing and the Extraction of Significant Features (Cited by 2)
6
Authors: Mohammed M. Abo-Zahhad, Aziza I. Hussein, Abdelfatah M. Mohamed. International Journal of Communications, Network and System Sciences, 2015, No. 5, pp. 97-117 (21 pages)
Diagnoses of heart diseases can be done effectively on long-term recordings of ECG signals that preserve the signals' morphologies. In these cases, the volume of the ECG data produced by the monitoring systems grows significantly. To make mobile healthcare possible, the need for efficient ECG signal compression algorithms to store and/or transmit the signal efficiently has been rising exponentially. Currently, the ECG signal is acquired at the Nyquist rate or higher, thus introducing redundancies between adjacent heartbeats due to its quasi-periodic structure. Existing compression methods remove these redundancies to achieve compression and facilitate transmission of the patient's imperative information. Based on the fact that these signals can be approximated by a linear combination of a few coefficients taken from different bases, an alternative new compression scheme based on Compressive Sensing (CS) has been proposed. CS provides a new approach to signal compression and recovery by exploiting the fact that an ECG signal can be reconstructed from a relatively small number of samples acquired in "sparse" domains through well-developed optimization procedures. In this paper, a single-lead ECG compression method is proposed based on improving the signal sparsity through the extraction of the signal's significant features. The proposed method starts with a preprocessing stage that detects the peaks and periods of the Q, R and S waves of each beat. Then, the QRS-complex for each signal beat is estimated. The estimated QRS-complexes are subtracted from the original ECG signal, and the resulting error signal is compressed using the CS technique. Throughout this process, DWT sparsifying dictionaries are adopted. The performance of the proposed algorithm, in terms of reconstructed signal quality and compression ratio, is evaluated by adopting a DWT spatial-domain basis applied to ECG records extracted from the MIT-BIH Arrhythmia Database. The results indicate that an average compression ratio of 11:1 with PRD1 = 1.2% is obtained. Moreover, the quality of the retrieved signal is guaranteed, and the compression ratio achieved is an improvement over those obtained by previously reported algorithms. Simulation results suggest that CS should be considered an acceptable methodology for ECG compression.
Keywords: compressed sensing, ECG signal compression, sparsity, coherence, spatial domain
A New Vector Data Compression Approach for WebGIS (Cited by 2)
7
Authors: LI Yunjin, ZHONG Ershun. Geo-Spatial Information Science, 2011, No. 1, pp. 48-53 (6 pages)
High compression ratio, high decoding performance, and progressive data transmission are the most important requirements of vector data compression algorithms for WebGIS. To meet these requirements, we present a new compression approach. This paper begins with the generation of multiscale data by converting float coordinates to integer coordinates. It is proved that the distance between the converted point and the original point on screen is within 2 pixels, and therefore our approach is suitable for the visualization of vector data on the client side. Integer coordinates are passed to an integer wavelet transformer, and the high-frequency coefficients produced by the transformer are encoded by canonical Huffman codes. The experimental results on river data and road data demonstrate the effectiveness of the proposed approach: the compression ratio can reach 10% for river data and 20% for road data, respectively. We conclude that more attention needs to be paid to correlation between curves that contain few points.
Keywords: vector data compression, WebGIS, progressive data transmission
Compression of ECG signal using video codec technology-like scheme
8
Authors: Dihu Chen, Sheng Yang. Journal of Biomedical Science and Engineering, 2008, No. 1, pp. 22-26 (5 pages)
In this paper, we present a method using video codec technology to compress ECG signals. This method exploits both intra-beat and inter-beat correlations of the ECG signals to achieve high compression ratios (CR) and a low percent root-mean-square difference (PRD). Since ECG signals have both intra-beat and inter-beat redundancies, like video signals, which have both intra-frame and inter-frame correlation, video codec technology can be used for ECG compression. Some pre-processing is needed to do this: the ECG signals are first segmented and normalized into a sequence of beat cycles with the same length, and then these beat cycles can be treated as picture frames and compressed with video codec technology. We have used records from the MIT-BIH arrhythmia database to evaluate our algorithm. Results show that, besides compressing efficiently, this algorithm has the advantages of adjustable resolution, random access, and flexibility for irregular periods and QRS false detection.
Keywords: ECG compression, video codec, QRS detection, arithmetic coding
ECG compression and LabVIEW implementation
9
Authors: Tatiparti Padma, M. Madhavi Latha, Abrar Ahmed. Journal of Biomedical Science and Engineering, 2009, No. 3, pp. 177-183 (7 pages)
It is often very difficult for the patient to tell the difference between angina symptoms and heart attack symptoms, so it is very important to recognize the signs of heart attack and immediately seek medical attention. A practical case of this type of remote consultation is examined in this paper. To deal with the huge amount of electrocardiogram (ECG) data for analysis, storage and transmission, an efficient ECG compression technique is needed to reduce the amount of data as much as possible while preserving the clinically significant signal for cardiac diagnosis. Here the ECG signal is analyzed for various parameters such as heart rate, QRS width, etc. Then the various parameters and the compressed signal can be transmitted with less channel capacity. Comparing various ECG compression techniques (turning point, AZTEC, CORTES, FFT, and DCT), it was found that DCT is the most suitable compression technique, with a compression ratio of about 100:1. In addition, different techniques are available for implementing hardware components for signal pickup; a virtual implementation with LabVIEW is also used for analysis of various cardiac parameters and to identify abnormalities like tachycardia, bradycardia, AV block, etc. Both hardware and virtual implementations are detailed in this context.
Keywords: ECG compression, LabVIEW implementation
Compression of ECG Signals Based on DWT and Exploiting the Correlation between ECG Signal Samples
10
Authors: Mohammed M. Abo-Zahhad, Tarik K. Abdel-Hamid, Abdelfatah M. Mohamed. International Journal of Communications, Network and System Sciences, 2014, No. 1, pp. 53-70 (18 pages)
This paper presents a hybrid technique for the compression of ECG signals based on DWT and exploiting the correlation between signal samples. It incorporates Discrete Wavelet Transform (DWT), Differential Pulse Code Modulation (DPCM), and run-length coding techniques for the compression of different parts of the signal, where lossless compression is adopted in clinically relevant parts and lossy compression is used in those parts that are not clinically relevant. The proposed compression algorithm begins by segmenting the ECG signal into its main components (P-waves, QRS-complexes, T-waves, U-waves and the isoelectric waves). The resulting waves are grouped into Region of Interest (RoI) and Non Region of Interest (NonRoI) parts. Consequently, lossless and lossy compression schemes are applied to the RoI and NonRoI parts, respectively. Ideally we would like to compress the signal losslessly, but in many applications this is not an option. Thus, given a fixed bit budget, it makes sense to spend more bits to represent those parts of the signal that belong to a specific RoI and, thus, reconstruct them with higher fidelity, while allowing other parts to suffer larger distortion. For this purpose, the correlation between the successive samples of the RoI part is utilized by adopting the DPCM approach, while the NonRoI part is compressed using DWT, thresholding and coding techniques. The wavelet transformation is used for concentrating the signal energy into a small number of transform coefficients. Compression is then achieved by selecting a subset of the most relevant coefficients, which afterwards are efficiently coded. Illustrative examples are given to demonstrate thresholding based on an energy-packing-efficiency strategy, coding of DWT coefficients, and data packetizing. The performance of the proposed algorithm is tested in terms of the compression ratio and the PRD distortion metric on 10 seconds of data extracted from records 100 and 117 of the MIT-BIH database. The obtained results revealed that the proposed technique possesses higher compression ratios and lower PRD compared to other wavelet transformation techniques. The principal advantages of the proposed approach are: 1) the deployment of different compression schemes to compress different ECG parts to reduce the correlation between consecutive signal samples; and 2) obtaining high compression ratios with acceptable reconstruction signal quality compared to recently published results.
Keywords: ECG signal segmentation, lossless and lossy compression techniques, discrete wavelet transform, energy packing efficiency, run-length coding
New ECG Signal Compression Model Based on Set Theory Applied to Images
11
Authors: Ivan Basile Kabiena, Eric Michel Deussom Djomadji, Emmanuel Tonye. Journal of Computer and Communications, 2023, No. 8, pp. 29-43 (15 pages)
Cardiovascular diseases are among the leading causes of death worldwide. They require practitioners to use optimal diagnostic methods, such as telemedicine, in order to quickly detect anomalies for the daily care and monitoring of patients. The electrocardiogram (ECG) is an examination that can detect abnormal functioning of the heart and generates a large amount of digital data, which can be stored or transmitted for further analysis. For storage or transmission purposes, one of the challenges is to reduce the space occupied by the ECG signal, and for that it is important to offer ever more efficient algorithms capable of achieving high compression rates while offering good reconstruction quality in a relatively short time. We propose in this paper a new ECG compression scheme based on subset signal splitting and 2D processing, the wavelet transform (DWT), and SPIHT coding, which have proved their worth in the field of signal processing and compression. They are exploited for decorrelation and coding of the signal. The results obtained are significant and offer many perspectives.
Keywords: compression, ECG, DWT, subset, 2D
System-on-Chip Test Data Compression Based on Split-Data Variable Length (SDV) Code
12
Authors: J. Robert Theivadas, V. Ranganathan, J. Raja Paul Perinbam. Circuits and Systems, 2016, No. 8, pp. 1213-1223 (11 pages)
System-on-a-chips with intellectual property cores need a large volume of data for testing. The large volume of test data requires a long testing time and large test data memory. Therefore, new techniques are needed to optimize the test data volume, decrease the testing time, and conquer the ATE memory limitation for SoC designs. This paper presents a new compression method for testing intellectual-property-core-based system-on-chips. The proposed method is based on new split-data variable length (SDV) codes that are designed using the split options along with identification bits in a string of test data. This paper analyses the reduction of test data volume, testing time, run time, and size of memory required in ATE, as well as the improvement of the compression ratio. Experimental results for ISCAS 85 and ISCAS 89 benchmark circuits show that SDV codes outperform other compression methods with the best compression ratio for test data compression. The decompression architecture for SDV codes is also presented for decoding the compressed bits. The proposed scheme shows that SDV codes can accommodate any variation in the input test data stream.
Keywords: test data compression, SDV codes, SOC, ATE, benchmark circuits
Bidirectional Recurrent Nets for ECG Signal Compression
13
Authors: Eman AL-Saidi, Khalil El Hindi. Journal of Computer Science Research, 2022, No. 4, pp. 15-25 (11 pages)
The electrocardiogram (ECG) is a commonly used tool in the biological diagnosis of heart diseases. ECG allows the representation of the electrical signals which cause heart muscles to contract and relax. Recently, accurate deep learning methods have been developed to overcome manual diagnosis in terms of time and effort. However, most current automatic medical diagnosis uses long ECG signals to inspect different types of heart arrhythmia. Therefore, ECG signal files tend to require large storage and may cause significant overhead when exchanged over a computer network. This raises the need for effective compression methods for ECG signals. In this work, the authors investigate using the BERT (Bidirectional Encoder Representations from Transformers) model, a bidirectional neural network originally designed for natural language. The authors evaluate the model with respect to its compression ratio and information preservation, and measure information preservation in terms of the accuracy of a convolutional neural network in classifying the decompressed signal. The results show that the method can achieve up to 83% saving in storage, and the classification accuracy of the decompressed signals is around 92.41%. Furthermore, the method enables the user to balance the compression ratio against the required accuracy of the CNN classifiers.
Keywords: BERT model, convolutional neural networks (CNN), data compression, deep learning, ECG diagnosis
RESEARCH ON ADAPTIVE DATA COMPRESSION METHOD FOR TRIANGULATED SURFACES (Cited by 2)
14
Authors: Wang Wen, Wu Shixiong, Chen Zichen (Department of Mechanical Engineering, Zhejiang University, Hangzhou 310027, China). Chinese Journal of Mechanical Engineering, SCIE EI CAS CSCD, 2004, No. 2, pp. 189-192 (4 pages)
NC code or STL files can be generated directly from measured data in a fast reverse-engineering mode. Compressing the massive data from laser scanners is the key to the new mode. An adaptive compression method based on a triangulated-surface model is put forward. Normal-vector angles between triangles are computed to find prime vertices for removal. A ring data structure is adopted to store the massive data effectively; it allows the efficient retrieval of all neighboring vertices and triangles of a given vertex. To avoid long, thin triangles, a new re-triangulation approach based on normalized minimum vertex distance is proposed, in which the vertex distance and interior angles of triangles are considered. Results indicate that the compression method has high efficiency and reliable precision. The method can be applied in fast reverse engineering to acquire an optimal subset of the original massive data.
Keywords: data compression, reverse engineering, triangulated surfaces
Design of quantum VQ iteration and quantum VQ encoding algorithm taking O(√N) steps for data compression (Cited by 2)
15
Authors: Pang Chaoyang, Zhou Zhengwei, Chen Pingxing, Guo Guangcan. Chinese Physics B, SCIE EI CAS CSCD, 2006, No. 3, pp. 618-623 (6 pages)
Vector quantization (VQ) is an important data compression method. The key to VQ encoding is to find the closest vector among N vectors for a given feature vector. Many classical linear search algorithms take O(N) steps of distance computation between two vectors. The quantum VQ iteration and a corresponding quantum VQ encoding algorithm that takes O(√N) steps are presented in this paper. The unitary operation of distance computation can be performed on a number of vectors simultaneously because the quantum state exists in a superposition of states. The quantum VQ iteration comprises three oracles; by contrast, many quantum algorithms have only one oracle, such as Shor's factorization algorithm and Grover's algorithm. An entangled state is generated and used, whereas the state in Grover's algorithm is not entangled. The quantum VQ iteration is a rotation over a subspace, whereas the Grover iteration is a rotation over the global space. The quantum VQ iteration extends the Grover iteration to more complex searches that require more oracles, and the method is universal.
Keywords: data compression, vector quantization, Grover's algorithm, quantum VQ iteration
Improved SDT Process Data Compression Algorithm (Cited by 3)
16
Authors: Feng Xiaodong, Cheng Changling, Liu Changling, Shao Huihe. High Technology Letters, EI CAS, 2003, No. 2, pp. 91-96 (6 pages)
Process data compression and trending are essential for improving control system performance. The Swing Door Trending (SDT) algorithm is well designed to adapt to the process trend while retaining the merit of simplicity, but it cannot handle outliers or adapt to the fluctuations of actual data. An Improved SDT (ISDT) algorithm is proposed in this paper. The effectiveness and applicability of the ISDT algorithm are demonstrated by computations on both synthetic and real process data. By applying an adaptive recording limit as well as outlier-detection rules, a higher compression ratio is achieved and outliers are identified and eliminated; the fidelity of the algorithm is also improved. It can be used in both online and batch modes, and can be integrated into existing software packages without change.
Keywords: SDT, data compression, process data treatment
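The baseline that the ISDT algorithm improves on, classic Swing Door Trending, can be sketched as follows. This is a simplified variant under stated assumptions: it keeps only the indices of retained samples, and it omits the adaptive recording limit and outlier rules that are the paper's contributions:

```python
def sdt_compress(t, y, e):
    """Simplified Swing Door Trending with compression deviation e.
    Track the steepest upper and lowest lower door slopes from the last
    stored point; when the doors cross, the previous sample must be stored.
    Returns the indices of retained samples."""
    kept = [0]
    up, low = float("-inf"), float("inf")
    for i in range(1, len(t)):
        a = kept[-1]
        dt = t[i] - t[a]
        up = max(up, (y[i] - (y[a] + e)) / dt)    # door pivoting at y[a]+e
        low = min(low, (y[i] - (y[a] - e)) / dt)  # door pivoting at y[a]-e
        if up > low:              # doors no longer parallel-izable: store i-1
            kept.append(i - 1)
            a = i - 1
            dt = t[i] - t[a]
            up = (y[i] - (y[a] + e)) / dt         # reset doors from new anchor
            low = (y[i] - (y[a] - e)) / dt
    kept.append(len(t) - 1)       # always store the final sample
    return kept
```

On a perfectly linear trend, only the two endpoints survive; a step change forces the samples around the corner to be stored.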
Design of real-time data compression wireless sensor network based on LZW algorithm (Cited by 2)
17
Authors: CHENG Ya-li, LI Jin-ming, CHENG Nai-peng. Journal of Measurement Science and Instrumentation, CAS CSCD, 2019, No. 2, pp. 191-198 (8 pages)
A real-time data compression wireless sensor network based on the Lempel-Ziv-Welch (LZW) encoding algorithm is designed for the increasing data volume of terminal nodes when using ZigBee for long-distance wireless communication. The system consists of a terminal node, a router, a coordinator, and an upper computer. The terminal node is responsible for storing and sending the collected data after it is compressed by the LZW algorithm; the router is responsible for relaying data in the wireless network; and the coordinator is responsible for sending the received data to the upper computer. For the network functions, the development and configuration of the CC2530 chips on the terminal, router, and coordinator nodes are completed using the Z-Stack protocol stack, and the network is successfully organized. Final simulation analysis and test verification show that the system realizes the wireless acquisition and storage of remote data and reduces the network occupancy rate through data compression, which has practical value and application prospects.
Keywords: wireless sensor network, ZigBee, LZW algorithm, data compression
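The LZW encoding run on the terminal node can be illustrated with the textbook dictionary-growing scheme; this is only a sketch of the algorithm the abstract names, and the CC2530/Z-Stack buffering details are omitted:

```python
def lzw_compress(data):
    """Textbook LZW: grow a dictionary of seen byte strings and emit the
    code of the longest match. Codes 0-255 are the single bytes; new
    phrases get the next free code."""
    table = {bytes([i]): i for i in range(256)}
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                     # extend the current match
        else:
            out.append(table[w])       # emit code for the longest match
            table[wc] = len(table)     # register the new phrase
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out
```

Repetitive sensor readings compress well: `b"ABABAB"` (6 bytes) encodes to 4 codes, the last two reusing the dictionary entry for `AB`.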
A Complexity Analysis and Entropy for Different Data Compression Algorithms on Text Files (Cited by 1)
18
Authors: Mohammad Hjouj Btoush, Ziad E. Dawahdeh. Journal of Computer and Communications, 2018, No. 1, pp. 301-315 (15 pages)
In this paper, we analyze the complexity and entropy of different data compression algorithms: LZW, Huffman, fixed-length code (FLC), and Huffman after using fixed-length code (HFLC). We test these algorithms on files of different sizes and conclude that LZW performs best at all compression scales tested, especially on large files, followed by Huffman, HFLC, and FLC, respectively. Data compression is still an important research topic with many applications. We therefore suggest continuing research in this field, for example by combining two techniques to reach a better one, or by using another source mapping (Hamming), such as embedding a linear array into a hypercube, together with proven techniques like Huffman coding.
Keywords: text files, data compression, Huffman coding, LZW, Hamming, entropy, complexity
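The entropy baseline used in such comparisons is the Shannon entropy of the file's symbol distribution, which for a memoryless source lower-bounds the average bits per symbol any of the tested coders can reach:

```python
from math import log2
from collections import Counter

def shannon_entropy(data):
    """Shannon entropy in bits per symbol: H = -sum(p_i * log2(p_i)).
    A symbol-by-symbol coder such as Huffman cannot beat H on average
    for a memoryless source."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * log2(c / n) for c in counts.values())
```

A file of one repeated symbol has entropy 0 (maximally compressible), while four equiprobable symbols need 2 bits each.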
Sea Route Monitoring System Using Wireless Sensor Network Based on the Data Compression Algorithm (Cited by 1)
19
Authors: LI Yang, ZHANG Zhongshan, HUANGFU Wei, CHAI Xiaomeng, ZHU Xinpeng, ZHU Hongliang. China Communications, SCIE CSCD, 2014, No. A01, pp. 179-186 (8 pages)
The wireless sensor network (WSN) plays an important role in monitoring the environment near a harbor in order to keep nearby ships out of danger and to optimize the utilization of limited sea routes. Based on the historical data collected by buoys with sensing capabilities, a novel data compression algorithm called adaptive time piecewise constant vector quantization (ATPCVQ) is proposed to exploit the principal components. The proposed system is capable of lowering the wireless communication budget and enhancing the lifetime of sensor nodes subject to a constraint on data precision. Furthermore, the proposed algorithm is verified using practical data from the Port of Qinhuangdao, China.
Keywords: wireless sensor network, sea route monitoring, data compression, principal component analysis
DPCM-based vibration sensor data compression and its effect on structural system identification
20
Authors: Zhang Yunfeng, Li Jian. Earthquake Engineering and Engineering Vibration, SCIE EI CSCD, 2005, No. 1, pp. 153-163 (11 pages)
Due to the large scale and complexity of civil infrastructures, structural health monitoring typically requires a substantial number of sensors, which consequently generate huge volumes of sensor data. Innovative sensor data compression techniques are highly desired to facilitate efficient data storage and remote retrieval of sensor data. This paper presents a vibration sensor data compression algorithm based on the Differential Pulse Code Modulation (DPCM) method and considers the effects of signal distortion due to lossy data compression on structural system identification. The DPCM system concerned consists of two primary components: a linear predictor and a quantizer. For the DPCM system considered in this study, the least-squares method is used to derive the linear predictor coefficients, and a Jayant quantizer is used for scalar quantization. A 5-DOF model structure is used as the prototype structure in the numerical study. Numerical simulation was carried out to study the performance of the proposed DPCM-based data compression algorithm as well as its effect on the accuracy of structural identification, including modal parameters and second-order structural parameters such as stiffness and damping coefficients. It is found that the DPCM-based sensor data compression method is capable of reducing the raw sensor data size to a significant extent while having only a minor effect on the modal parameters and second-order structural parameters identified from the reconstructed sensor data.
Keywords: data compression, instrumentation, linear predictor, modal parameters, sensor, system identification, vibration
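The DPCM system described, a linear predictor followed by a quantizer, can be sketched with a first-order predictor and a fixed-step uniform quantizer. The paper's least-squares predictor coefficients and Jayant adaptive quantizer are simplified away here, so this is only an illustrative reduction, not the authors' method:

```python
def dpcm_encode(x, step=0.1):
    """First-order DPCM sketch: predict each sample by the previous
    *reconstructed* sample and quantize the prediction error with a
    uniform quantizer of the given step.

    Returns (codes, recon): the integer codes to transmit and the
    reconstruction the decoder would produce from them."""
    codes, recon = [], []
    pred = 0.0
    for s in x:
        q = round((s - pred) / step)  # quantized prediction error (the code)
        codes.append(q)
        pred = pred + q * step        # decoder-side reconstruction
        recon.append(pred)
    return codes, recon
```

Predicting from the reconstructed (not the raw) sample keeps encoder and decoder in lockstep, so the reconstruction error stays bounded by half a quantizer step per sample instead of accumulating.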