An efficient ECG (electrocardiogram) data compression algorithm called KPDEC (key point detection and error compensation) is presented in this paper. With the KPDEC method, only the key points (KPs) of ECG signals are saved, which makes the compression more efficient. These KPs are extracted from the ECG samples by computing second-order central differences. An error pre-correcting technique then applies a reasonable compensation to each saved sample before it is stored; this technique noticeably reduces the PRD (percentage root-mean-square difference). The paper also describes an optimal coding scheme for achieving a higher compression rate. Furthermore, an adaptive filtering technique is designed for the reconstructed ECG signals to obtain waveforms with better fidelity. The algorithm is able to compress ECG data to 168 bits per second with a PRD below 3%.
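A minimal sketch of the key-point step, assuming a simple selection rule the abstract does not spell out: a sample is kept wherever the magnitude of its second-order central difference exceeds a threshold. The function name and threshold value are illustrative, not from the paper.

```python
import numpy as np

def detect_key_points(x, threshold):
    """Flag samples whose second-order central difference is large, i.e.
    where the waveform curvature changes quickly (QRS edges, peaks)."""
    d2 = np.abs(x[:-2] - 2.0 * x[1:-1] + x[2:])     # second-order central difference
    idx = np.where(d2 > threshold)[0] + 1           # +1: d2[k] is centred on x[k+1]
    idx = np.concatenate(([0], idx, [len(x) - 1]))  # always keep the end points
    return np.unique(idx)

# toy usage: a slow wave plus a sharp spike standing in for a QRS complex
t = np.linspace(0, 1, 200)
ecg = np.sin(2 * np.pi * t) + np.exp(-((t - 0.5) ** 2) / 1e-4)
kp = detect_key_points(ecg, threshold=0.05)
print(f"kept {len(kp)} of {len(ecg)} samples")      # only curvature-rich samples survive
```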
Data compression plays a vital role in data management and information theory by reducing redundancy. However, it lacks built-in security features such as secret keys or password-based access control, leaving sensitive data vulnerable to unauthorized access and misuse. With the exponential growth of digital data, robust security measures are essential. Data encryption, a widely used approach, ensures data confidentiality by making it unreadable and unalterable through secret key control. Despite their individual benefits, both compression and encryption require significant computational resources, and performing them separately on the same data increases complexity and processing time. Recognizing the need for integrated approaches that balance compression ratios and security levels, this research proposes an integrated data compression and encryption algorithm, named IDCE, for enhanced security and efficiency. The algorithm operates on 128-bit block sizes with a 256-bit secret key. It combines Huffman coding for compression with a Tent map for encryption, and an iterative Arnold cat map further enhances the cryptographic confusion properties. Experimental analysis validates the effectiveness of the proposed algorithm, showcasing competitive performance in terms of compression ratio, security, and overall efficiency when compared to prior algorithms in the field.
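A hedged illustration of the encryption side: the sketch below derives a keystream from the tent map and XORs it with an already-compressed payload. IDCE's actual 128-bit block structure, key schedule, and Arnold cat map stage are not reproduced; `x0` and `r` stand in for state that would be derived from the 256-bit key.

```python
def tent_keystream(x0, r, n):
    """Generate n keystream bytes from the tent map:
    x_{k+1} = r*x_k if x_k < 0.5 else r*(1 - x_k)."""
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x if x < 0.5 else r * (1.0 - x)
        out.append(int(x * 256) & 0xFF)        # quantize the chaotic state to a byte
    return bytes(out)

def xor_encrypt(data, x0=0.37, r=1.9999):
    ks = tent_keystream(x0, r, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

compressed = b"huffman-coded payload..."       # stands in for the Huffman stage
cipher = xor_encrypt(compressed)
assert xor_encrypt(cipher) == compressed       # an XOR stream cipher is its own inverse
```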
Test data compression and test resource partitioning (TRP) are essential to reduce the amount of test data in system-on-chip testing. A novel variable-to-variable-length compression code, advanced frequency-directed run-length (AFDR) codes, is designed. Different from frequency-directed run-length (FDR) codes, AFDR encodes both 0-runs and 1-runs and assigns the same codes to runs of equal length. It also modifies the codes for 00 and 11 to improve compression performance. Experimental results for the ISCAS 89 benchmark circuits show that AFDR codes achieve a higher compression ratio than FDR and other compression codes.
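To make the coding idea concrete, the sketch below extracts both 0-runs and 1-runs and encodes them with a single length-indexed codebook, as AFDR does; the codebook itself is a toy prefix code for illustration, not the published AFDR table.

```python
def runs(bits):
    """Split a test-data bit string into maximal runs of identical symbols."""
    out, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        out.append((bits[i], j - i))   # (symbol, run length)
        i = j
    return out

# hypothetical variable-length codewords indexed by run length only, so
# equal-length 0-runs and 1-runs share one codeword, as in AFDR; a decoder
# only needs the first run's symbol, since runs alternate thereafter
CODE = {1: "00", 2: "01", 3: "100", 4: "101", 5: "1100"}

bits = "000111100011"
encoded = "".join(CODE[n] for (_, n) in runs(bits))
print(runs(bits), "->", encoded)
```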
The accurate prediction of battery pack capacity in electric vehicles (EVs) is crucial for ensuring safety and optimizing performance. Despite extensive research on predicting cell capacity using laboratory data, predicting the capacity of onboard battery packs from field data remains challenging due to complex operating conditions and irregular EV usage in real-world settings. Most existing methods rely on extracting health feature parameters from raw data for capacity prediction of onboard battery packs; however, selecting specific parameters often results in a loss of critical information, which reduces prediction accuracy. To this end, this paper introduces a novel framework combining deep learning and data compression techniques to accurately predict battery pack capacity onboard. The proposed data compression method converts monthly EV charging data into feature maps, which preserve essential data characteristics while reducing the volume of raw data. To address missing capacity labels in field data, a capacity labeling method is proposed, which calculates monthly battery capacity by transforming the ampere-hour integration formula and applying linear regression. Subsequently, a deep learning model is proposed to build a capacity prediction model, using feature maps from historical months to predict the battery capacity of future months, thus facilitating accurate forecasts. The proposed framework, evaluated using field data from 20 EVs, achieves a mean absolute error of 0.79 Ah, a mean absolute percentage error of 0.65%, and a root mean square error of 1.02 Ah, highlighting its potential for real-world EV applications.
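A sketch of one plausible reading of the labeling step: since the ampere-hour integral gives accumulated charge Ah = Q * dSOC for a pack of capacity Q, regressing per-snippet ampere-hours against SOC change over a month yields Q as the slope. Snippet selection and filtering from the paper are assumed to be done upstream; all names and numbers below are illustrative.

```python
import numpy as np

def monthly_capacity(currents, dt_s, socs):
    """Estimate pack capacity (Ah) from one month of charging snippets
    by fitting accumulated Ah = Q * dSOC with least squares."""
    ah, dsoc = [], []
    for I, soc in zip(currents, socs):
        ah.append(I.sum() * dt_s / 3600.0)   # rectangle-rule ampere-hour integral
        dsoc.append(soc[-1] - soc[0])        # SOC change over the snippet (0..1)
    slope, _ = np.polyfit(np.asarray(dsoc), np.asarray(ah), 1)
    return slope                             # the slope is the capacity estimate

# toy usage: three 50 A charging snippets of a ~100 Ah pack, sampled at 1 s
durs = [1800, 3600, 5400]
snippets = [np.full(n, 50.0) for n in durs]
socs = [(0.1, 0.1 + 50.0 * n / 3600.0 / 100.0) for n in durs]
print(f"estimated capacity = {monthly_capacity(snippets, 1.0, socs):.1f} Ah")  # ~100
```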
This paper presents a new test data compression/decompression method for SoC testing, called hybrid run-length codes. The method makes a full analysis of the factors that influence the test parameters: compression ratio, test application time, and area overhead. To improve the compression ratio, the new method is based on variable-to-variable run-length codes, and a novel algorithm is proposed to reorder the test vectors and fill the unspecified bits in the pre-processing step. With a novel on-chip decoder, low test application time and low area overhead are obtained by the hybrid run-length codes. Finally, an experimental comparison on the ISCAS 89 benchmark circuits validates the proposed method.
Diagnoses of heart diseases can be done effectively on long-term recordings of ECG signals that preserve the signals' morphologies. In these cases, the volume of the ECG data produced by the monitoring systems grows significantly. To make mobile healthcare possible, the need for efficient ECG signal compression algorithms to store and/or transmit the signal efficiently has been rising exponentially. Currently, the ECG signal is acquired at the Nyquist rate or higher, thus introducing redundancies between adjacent heartbeats due to its quasi-periodic structure. Existing compression methods remove these redundancies to achieve compression and facilitate transmission of the patient's imperative information. Based on the fact that these signals can be approximated by a linear combination of a few coefficients taken from different bases, an alternative compression scheme based on Compressive Sensing (CS) has been proposed. CS provides a new approach to signal compression and recovery by exploiting the fact that the ECG signal can be reconstructed from a relatively small number of samples acquired in "sparse" domains through well-developed optimization procedures. In this paper, a single-lead ECG compression method is proposed based on improving the signal sparsity through the extraction of the signal's significant features. The proposed method starts with a preprocessing stage that detects the peaks and periods of the Q, R and S waves of each beat. Then the QRS-complex for each signal beat is estimated. The estimated QRS-complexes are subtracted from the original ECG signal, and the resulting error signal is compressed using the CS technique. Throughout this process, DWT sparsifying dictionaries are adopted. The performance of the proposed algorithm, in terms of reconstructed signal quality and compression ratio, is evaluated by adopting a DWT spatial-domain basis applied to ECG records extracted from the MIT-BIH Arrhythmia Database. The results indicate that an average compression ratio of 11:1 with PRD1 = 1.2% is obtained. Moreover, the quality of the retrieved signal is guaranteed, and the compression ratio achieved is an improvement over those obtained by previously reported algorithms. Simulation results suggest that CS should be considered an acceptable methodology for ECG compression.
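As a concrete stand-in for the CS recovery stage, the sketch below measures a synthetic sparse "error signal" with a random Gaussian matrix and reconstructs it with Orthogonal Matching Pursuit. The paper's DWT sparsifying dictionaries and its particular optimization procedure are not reproduced here.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick the column most correlated
    with the residual, then re-fit the selected support by least squares."""
    r, idx, coef = y.copy(), [], None
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(Phi.T @ r))))
        A = Phi[:, idx]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

rng = np.random.default_rng(3)
n, m, k = 256, 80, 8                                      # length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)  # sparse "error signal"
Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))               # random measurement matrix
x_hat = omp(Phi, Phi @ x, k)
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))      # near-zero recovery error
```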
High compression ratio, high decoding performance, and progressive data transmission are the most important requirements of vector data compression algorithms for WebGIS. To meet these requirements, we present a new compression approach. This paper begins with the generation of multiscale data by converting float coordinates to integer coordinates. It is proved that the distance between a converted point and the original point on screen is within 2 pixels; therefore, our approach is suitable for the visualization of vector data on the client side. The integer coordinates are passed to an integer wavelet transformer, and the high-frequency coefficients produced by the transformer are encoded with canonical Huffman codes. Experimental results on river data and road data demonstrate the effectiveness of the proposed approach: the compression ratio can reach 10% for river data and 20% for road data, respectively. We conclude that more attention needs to be paid to the correlation between curves that contain few points.
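A sketch of the float-to-integer conversion at a single zoom level (the paper's full multiscale pipeline is not reproduced): rounding moves each coordinate at most half an integer unit per axis, which is the mechanism behind the screen-distance bound.

```python
import numpy as np

def to_screen_ints(coords, bbox, screen_px):
    """Quantize float map coordinates so one integer unit is one pixel
    at the target zoom level; names and the setup are illustrative."""
    xmin, ymin, xmax, ymax = bbox
    scale = np.array([screen_px / (xmax - xmin), screen_px / (ymax - ymin)])
    ints = np.round((coords - [xmin, ymin]) * scale).astype(np.int32)
    return ints, scale

river = np.array([[116.3912, 39.9075], [116.3931, 39.9088], [116.3977, 39.9101]])
bbox = (116.38, 39.90, 116.41, 39.92)
ints, scale = to_screen_ints(river, bbox, screen_px=1024)
recon = ints / scale + [bbox[0], bbox[1]]
err_px = np.abs(recon - river) * scale
print(ints, err_px.max())   # integer coords; error stays below 0.5 pixel per axis
```

Successive integer coordinates along a curve differ by small amounts, which is what keeps the integer-wavelet high-frequency coefficients small and easy to Huffman-code.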
In this paper, we present a method that uses video codec technology to compress ECG signals. This method exploits both the intra-beat and inter-beat correlations of ECG signals to achieve high compression ratios (CR) and a low percent root-mean-square difference (PRD). Since ECG signals have both intra-beat and inter-beat redundancies, like video signals with their intra-frame and inter-frame correlation, video codec technology can be used for ECG compression. Some pre-processing is needed to do this: the ECG signal is first segmented and normalized into a sequence of beat cycles of the same length, and these beat cycles can then be treated as picture frames and compressed with video codec technology. We have used records from the MIT-BIH arrhythmia database to evaluate our algorithm. Results show that, besides compressing efficiently, this algorithm has the advantages of adjustable resolution, random access, and flexibility with respect to irregular periods and false QRS detection.
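A sketch of the pre-processing the abstract describes, with R-peak locations assumed given: beats are cut at the peaks and resampled to a common length, so the rows of the result line up like frames of a video.

```python
import numpy as np

def beats_to_frames(ecg, r_peaks, frame_len=256):
    """Cut an ECG trace into beat cycles and resample each cycle to a common
    length, yielding a 2-D array a video codec can treat as frames.
    R-peak detection itself is assumed done upstream."""
    frames = []
    for a, b in zip(r_peaks[:-1], r_peaks[1:]):
        beat = ecg[a:b]
        # resample the variable-length beat onto frame_len points
        frames.append(np.interp(np.linspace(0, len(beat) - 1, frame_len),
                                np.arange(len(beat)), beat))
    return np.vstack(frames)   # shape: (num_beats, frame_len)

rng = np.random.default_rng(1)
ecg = np.tile(np.sin(np.linspace(0, 2 * np.pi, 300)), 5) + rng.normal(0, 0.01, 1500)
frames = beats_to_frames(ecg, r_peaks=[0, 300, 600, 900, 1200, 1499])
print(frames.shape)   # (5, 256): rows are aligned, highly correlated "frames"
```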
It is often very difficult for a patient to tell the difference between angina symptoms and heart attack symptoms, so it is very important to recognize the signs of a heart attack and immediately seek medical attention. A practical case of this type of remote consultation is examined in this paper. To deal with the huge amount of electrocardiogram (ECG) data for analysis, storage, and transmission, an efficient ECG compression technique is needed to reduce the amount of data as much as possible while preserving the clinically significant signal for cardiac diagnosis. Here the ECG signal is analyzed for various parameters such as heart rate, QRS width, etc., and then these parameters and the compressed signal can be transmitted with less channel capacity. Comparing various ECG compression techniques (Turning Point, AZTEC, CORTES, FFT and DCT), it was found that DCT is the most suitable compression technique, with a compression ratio of about 100:1. In addition, different techniques are available for implementing the hardware components for signal pickup; a virtual implementation with LabVIEW is also used to analyze various cardiac parameters and to identify abnormalities such as tachycardia, bradycardia, AV block, etc. Both the hardware and virtual implementations are detailed in this context.
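A minimal sketch of DCT-based compression of the kind the comparison favours: keep only the largest-magnitude DCT coefficients and invert. The quoted ~100:1 ratio additionally depends on quantization and entropy coding of the surviving coefficients, which are omitted here.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_compress(x, keep_ratio=0.05):
    """Zero out all but the largest-magnitude DCT coefficients."""
    c = dct(x, norm="ortho")
    k = max(1, int(len(c) * keep_ratio))
    idx = np.argsort(np.abs(c))[-k:]   # indices of the k largest coefficients
    sparse = np.zeros_like(c)
    sparse[idx] = c[idx]
    return sparse

x = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.1 * np.sin(np.linspace(0, 64 * np.pi, 1024))
rec = idct(dct_compress(x), norm="ortho")
prd = 100 * np.linalg.norm(x - rec) / np.linalg.norm(x)
print(f"PRD = {prd:.2f}%")   # small distortion from only 5% of the coefficients
```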
This paper presents a hybrid technique for the compression of ECG signals based on the DWT and on exploiting the correlation between signal samples. It incorporates Discrete Wavelet Transform (DWT), Differential Pulse Code Modulation (DPCM), and run-length coding techniques for the compression of different parts of the signal, where lossless compression is adopted in clinically relevant parts and lossy compression is used in those parts that are not clinically relevant. The proposed compression algorithm begins by segmenting the ECG signal into its main components (P-waves, QRS-complexes, T-waves, U-waves and the isoelectric waves). The resulting waves are grouped into Region of Interest (RoI) and Non Region of Interest (NonRoI) parts. Consequently, lossless and lossy compression schemes are applied to the RoI and NonRoI parts respectively. Ideally we would like to compress the signal losslessly, but in many applications this is not an option. Thus, given a fixed bit budget, it makes sense to spend more bits to represent those parts of the signal that belong to a specific RoI and, thus, reconstruct them with higher fidelity, while allowing other parts to suffer larger distortion. For this purpose, the correlation between the successive samples of the RoI part is utilized by adopting the DPCM approach. The NonRoI part, however, is compressed using DWT, thresholding and coding techniques. The wavelet transformation is used for concentrating the signal energy into a small number of transform coefficients. Compression is then achieved by selecting a subset of the most relevant coefficients, which afterwards are efficiently coded. Illustrative examples are given to demonstrate thresholding based on an energy packing efficiency strategy, coding of DWT coefficients and data packetizing. The performance of the proposed algorithm is tested in terms of the compression ratio and the PRD distortion metrics for the compression of 10 seconds of data extracted from records 100 and 117 of the MIT-BIH database. The obtained results reveal that the proposed technique achieves higher compression ratios and lower PRD compared to the other wavelet transformation techniques. The principal advantages of the proposed approach are: 1) the deployment of different compression schemes to compress different ECG parts to reduce the correlation between consecutive signal samples; and 2) high compression ratios with acceptable reconstruction signal quality compared to recently published results.
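For the RoI path, a sketch of first-order DPCM with in-loop quantization (the paper's exact predictor order and residual coder are not specified here): only small integer residuals are stored, and the reconstruction error per sample stays within half a quantization step.

```python
import numpy as np

def dpcm_encode(x, q_step):
    """Predict each sample from the previous *reconstructed* sample and store
    the quantized prediction error; closed-loop prediction stops error
    accumulation, so per-sample error is bounded by q_step/2."""
    residuals, pred = [], 0.0
    for s in x:
        r = int(round((s - pred) / q_step))   # quantized prediction error
        residuals.append(r)
        pred = pred + r * q_step              # mirror the decoder's state
    return residuals

def dpcm_decode(residuals, q_step):
    out, pred = [], 0.0
    for r in residuals:
        pred = pred + r * q_step
        out.append(pred)
    return np.array(out)

x = np.cumsum(np.random.default_rng(2).normal(0, 1, 500))   # smooth-ish signal
res = dpcm_encode(x, q_step=0.05)
rec = dpcm_decode(res, q_step=0.05)
print(np.abs(x - rec).max())   # bounded by q_step/2; residuals suit run-length coding
```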
Cardiovascular diseases are among the leading causes of death worldwide. They require practitioners to adopt optimal diagnostic methods, such as telemedicine, in order to quickly detect anomalies for the daily care and monitoring of patients. The electrocardiogram (ECG) is an examination that can detect abnormal functioning of the heart; it generates a large amount of digital data which can be stored or transmitted for further analysis. For storage or transmission purposes, one of the challenges is to reduce the space occupied by the ECG signal, and for that it is important to offer ever more efficient algorithms capable of achieving high compression rates while offering good reconstruction quality in a relatively short time. We propose in this paper a new ECG compression scheme based on signal splitting and 2-D processing, the discrete wavelet transform (DWT) and SPIHT coding, which have proved their worth in the field of signal processing and compression; they are exploited for decorrelation and coding of the signal. The results obtained are significant and offer many perspectives.
System-on-a-chip designs with intellectual property cores need a large volume of data for testing. The large volume of test data requires a long testing time and a large test data memory. Therefore, new techniques are needed to optimize the test data volume, decrease the testing time, and overcome the ATE memory limitation for SOC designs. This paper presents a new test compression method for intellectual property core-based system-on-chip. The proposed method is based on new split-data variable length (SDV) codes that are designed using split-options along with identification bits in a string of test data. The paper analyses the reduction of test data volume, testing time, run time, and size of memory required in the ATE, as well as the improvement of the compression ratio. Experimental results for the ISCAS 85 and ISCAS 89 benchmark circuits show that SDV codes outperform other compression methods, with the best compression ratio for test data compression. The decompression architecture for SDV codes is also presented for decoding the compressed bits. The proposed scheme shows that SDV codes are adaptable to any variation in the input test data stream.
The electrocardiogram (ECG) is a commonly used tool in the biological diagnosis of heart diseases. An ECG represents the electrical signals that cause the heart muscles to contract and relax. Recently, accurate deep learning methods have been developed to overcome manual diagnosis in terms of time and effort. However, most current automatic medical diagnosis methods use long ECG signals to inspect different types of heart arrhythmia. ECG signal files therefore tend to require large storage and may cause significant overhead when exchanged over a computer network. This raises the need for effective compression methods for ECG signals. In this work, the authors investigate using the BERT (Bidirectional Encoder Representations from Transformers) model, a bidirectional neural network originally designed for natural language. The authors evaluate the model with respect to its compression ratio and information preservation, measuring information preservation in terms of the accuracy of a convolutional neural network in classifying the decompressed signal. The results show that the method can achieve up to 83% savings in storage, and the classification accuracy on the decompressed signals is around 92.41%. Furthermore, the method enables the user to balance the compression ratio against the required accuracy of the CNN classifiers.
NC code or an STL file can be generated directly from measured data in a fast reverse-engineering mode. Compressing the massive data from a laser scanner is the key to the new mode. An adaptive compression method based on a triangulated-surfaces model is put forward. Normal-vector angles between triangles are computed to find prime vertices for removal. A ring data structure is adopted to store the massive data effectively; it allows the efficient retrieval of all neighboring vertices and triangles of a given vertex. To avoid long, thin triangles, a new re-triangulation approach based on a normalized minimum vertex distance is proposed, in which the vertex distance and the interior angles of the triangle are considered. Results indicate that the compression method has high efficiency and reliable precision. The method can be applied in fast reverse engineering to acquire an optimal subset of the original massive data.
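A sketch of the flatness test behind prime-vertex selection: the angle between the unit normals of two adjacent triangles. Vertices whose surrounding triangles all meet at small normal angles lie on nearly planar patches and are candidates for removal; the paper's actual angle threshold is not given here.

```python
import numpy as np

def unit_normal(tri):
    """Unit normal of a triangle given as a (3, 3) array of vertices."""
    n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
    return n / np.linalg.norm(n)

def normal_angle_deg(tri_a, tri_b):
    """Angle between the normals of two adjacent triangles, in degrees."""
    c = np.clip(np.dot(unit_normal(tri_a), unit_normal(tri_b)), -1.0, 1.0)
    return np.degrees(np.arccos(c))

flat_a = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
flat_b = np.array([[1, 0, 0], [1, 1, 0.02], [0, 1, 0]], float)  # almost coplanar
print(f"{normal_angle_deg(flat_a, flat_b):.2f} degrees")        # small angle -> removable
```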
Vector quantization (VQ) is an important data compression method. The key step in VQ encoding is to find the closest vector among N vectors for a given feature vector. Many classical linear search algorithms take O(N) distance computations between two vectors. A quantum VQ iteration and a corresponding quantum VQ encoding algorithm that take O(√N) steps are presented in this paper. The unitary operation of distance computing can be performed on a number of vectors simultaneously because a quantum state exists in a superposition of states. The quantum VQ iteration comprises three oracles; by contrast, many quantum algorithms, such as Shor's factorization algorithm and Grover's algorithm, have only one oracle. An entangled state is generated and used; by contrast, the state in Grover's algorithm is not entangled. The quantum VQ iteration is a rotation over a subspace, whereas the Grover iteration is a rotation over the global space. The quantum VQ iteration thus extends the Grover iteration to more complex searches that require more oracles, and the method is universal.
Process data compression and trending are essential for improving control system performance. The Swing Door Trending (SDT) algorithm is well designed to follow the process trend while retaining the merit of simplicity, but it cannot handle outliers or adapt to the fluctuations of actual data. An improved SDT (ISDT) algorithm is proposed in this paper. The effectiveness and applicability of the ISDT algorithm are demonstrated by computations on both synthetic and real process data. By applying an adaptive recording limit as well as outlier-detecting rules, a higher compression ratio is achieved and outliers are identified and eliminated; the fidelity of the algorithm is also improved. It can be used in both online and batch modes, and can be integrated into existing software packages without change.
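For reference, a sketch of the classic SDT baseline that ISDT improves on: a point is archived only when the two swinging "doors" (the tightest lower and upper slope bounds seen so far) cross, guaranteeing every skipped point lies within `dev` of the reconstructed segment. The ISDT additions (adaptive recording limit, outlier rules) are not reproduced.

```python
def sdt_compress(t, y, dev):
    """Classic Swing Door Trending: keep indices of archived points."""
    kept = [0]
    s_min, s_max = float("-inf"), float("inf")
    i = 1
    while i < len(t):
        a = kept[-1]
        dt = t[i] - t[a]
        s_min = max(s_min, (y[i] - y[a] - dev) / dt)  # lower door slope bound
        s_max = min(s_max, (y[i] - y[a] + dev) / dt)  # upper door slope bound
        if s_min > s_max:          # no single segment fits all points: archive
            kept.append(i - 1)
            s_min, s_max = float("-inf"), float("inf")
            continue               # re-test point i against the new anchor
        i += 1
    kept.append(len(t) - 1)
    return kept

t = list(range(100))
y = [0.1 * x if x <= 50 else 5 - 0.2 * (x - 50) for x in t]  # two linear trends
idx = sdt_compress(t, y, dev=0.05)
print(f"kept {len(idx)}/{len(t)} points at indices {idx}")   # only the trend breaks
```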
A real-time data compression wireless sensor network based on the Lempel-Ziv-Welch (LZW) encoding algorithm is designed to cope with the increasing data volume of terminal nodes when ZigBee is used for long-distance wireless communication. The system consists of a terminal node, a router, a coordinator, and an upper computer. The terminal node is responsible for storing and sending the collected data after it has been compressed with the LZW algorithm; the router is responsible for relaying data in the wireless network; and the coordinator is responsible for sending the received data to the upper computer. For the network functions, the development and configuration of the CC2530 chips on the terminal, router, and coordinator nodes are completed using the Z-Stack protocol stack, and the network is successfully formed. Simulation analysis and test verification show that the system realizes wireless acquisition and storage of remote data and reduces the network occupancy rate through data compression, giving it practical value and application prospects.
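A textbook LZW encoder of the kind the terminal node would run; fixed-width code packing and the CC2530's memory limits are not modelled here.

```python
def lzw_compress(data: bytes):
    """LZW: the dictionary starts with all 256 byte values and grows with
    each new phrase; output is a list of integer codes."""
    table = {bytes([b]): b for b in range(256)}
    w, out = b"", []
    for b in data:
        wb = w + bytes([b])
        if wb in table:
            w = wb                   # keep extending the current phrase
        else:
            out.append(table[w])     # emit code for the longest known phrase
            table[wb] = len(table)   # learn the new phrase
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

data = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_compress(data)
print(f"{len(data)} bytes -> {len(codes)} codes")   # repeated phrases shrink the output
```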
In this paper, we analyze the complexity and entropy of different data compression algorithms: LZW, Huffman, fixed-length code (FLC), and Huffman after using fixed-length code (HFLC). We test these algorithms on files of different sizes and conclude that LZW is the best across all compression scales we tested, especially on large files, followed by Huffman, HFLC, and FLC, respectively. Data compression is still an important research topic with many applications. We therefore suggest continuing research in this field, for example by combining two techniques to reach a better one, or by using another source mapping (Hamming), such as embedding a linear array into a hypercube, together with good techniques like Huffman coding.
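The entropy referred to here can be made concrete with order-0 Shannon entropy, which lower-bounds the average code length of any symbol-by-symbol coder such as Huffman or FLC.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Order-0 Shannon entropy in bits per symbol: H = -sum(p_i * log2(p_i))."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

sample = b"abracadabra" * 100        # 5 distinct symbols, skewed frequencies
h = shannon_entropy(sample)
print(f"H = {h:.3f} bits/symbol vs {math.ceil(math.log2(5))} bits for a fixed-length code")
```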
The wireless sensor network (WSN) plays an important role in monitoring the environment near a harbor, keeping nearby ships out of danger and optimizing the utilization of limited sea routes. Based on historical data collected by buoys with sensing capabilities, a novel data compression algorithm called adaptive time piecewise constant vector quantization (ATPCVQ) is proposed to exploit the principal components. The proposed system is capable of lowering the wireless communication budget and enhancing the lifetime of sensor nodes subject to a data precision constraint. Furthermore, the proposed algorithm is verified using practical data from the Port of Qinhuangdao, China.
Due to the large scale and complexity of civil infrastructures, structural health monitoring typically requires a substantial number of sensors, which consequently generate huge volumes of sensor data. Innovative sensor data compression techniques are highly desirable to facilitate efficient data storage and remote retrieval of sensor data. This paper presents a vibration sensor data compression algorithm based on the Differential Pulse Code Modulation (DPCM) method, with consideration of the effects of signal distortion due to lossy data compression on structural system identification. The DPCM system concerned consists of two primary components: a linear predictor and a quantizer. For the DPCM system considered in this study, the least-squares method is used to derive the linear predictor coefficients, and a Jayant quantizer is used for scalar quantization. A 5-DOF model structure is used as the prototype structure in the numerical study. Numerical simulation was carried out to study the performance of the proposed DPCM-based data compression algorithm as well as its effect on the accuracy of structural identification, including modal parameters and second-order structural parameters such as stiffness and damping coefficients. It is found that the DPCM-based sensor data compression method is capable of reducing the raw sensor data size to a significant extent while having only a minor effect on the modal parameters as well as the second-order structural parameters identified from the reconstructed sensor data.
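A sketch of the least-squares predictor fit (the Jayant quantizer that follows it is omitted): solve min ||X a - y||^2, where each row of X holds the samples preceding the corresponding target sample. The toy record and predictor order are illustrative, not the paper's.

```python
import numpy as np

def ls_predictor(x, order):
    """Fit linear-predictor coefficients by least squares: row k of X holds
    the `order` samples preceding x[k], most recent first."""
    rows = [x[k - order:k][::-1] for k in range(order, len(x))]
    X, y = np.array(rows), x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# toy vibration record: two decaying/steady modes, as from a small model structure
t = np.linspace(0, 10, 2000)
acc = np.exp(-0.05 * t) * np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 3.4 * t)
a = ls_predictor(acc, order=4)
pred = np.convolve(acc, np.r_[0, a], mode="full")[: len(acc)]  # one-step prediction
resid = acc - pred
print(np.std(resid) / np.std(acc))   # residuals are far smaller than the signal
```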