Journal Articles
953 articles found
1. Data partitioning based on sampling for power load streams
Authors: 王永利, 徐宏炳, 董逸生, 钱江波, 刘学军. Journal of Southeast University (English Edition), EI/CAS, 2005, No. 3, pp. 293-298 (6 pages).
A novel data streams partitioning method is proposed to resolve problems of range-aggregation continuous queries over parallel streams for the power industry. The first step of this method is to sample the data in parallel, which is implemented as an extended reservoir-sampling algorithm. A skip factor based on the change ratio of data values is introduced to describe the distribution characteristics of data values adaptively. The second step of this method is to partition the fluxes of data streams evenly, which is implemented with two alternative equal-depth histogram generating algorithms that fit different cases: one for incremental maintenance based on heuristics and the other for periodical updates to generate an approximate partition vector. The experimental results on actual data prove that the method is efficient, practical and suitable for time-varying data streams processing.
Keywords: data streams; continuous queries; parallel processing; sampling; data partitioning
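The two-step approach in this abstract (adaptive reservoir sampling with a skip factor, then an equal-depth partition vector) can be illustrated with a minimal Python sketch. The 5% change-ratio threshold, the skip-update rule and the function names are illustrative assumptions, not the authors' exact algorithm.

```python
import random

def reservoir_sample_with_skip(stream, k, base_skip=1):
    """Reservoir sampling with a data-value-driven skip factor: stretches of
    the stream whose values barely change are sampled less densely.
    The 5% threshold and the skip update rule are illustrative assumptions."""
    sample, seen, prev, skip = [], 0, None, base_skip
    for value in stream:
        seen += 1
        if prev not in (None, 0):
            change_ratio = abs(value - prev) / abs(prev)
            # small change -> grow the skip, large change -> reset it (assumed rule)
            skip = base_skip if change_ratio > 0.05 else min(skip + 1, 16)
        prev = value
        if seen % skip != 0:          # skipped element is not considered at all
            continue
        if len(sample) < k:
            sample.append(value)
        else:
            j = random.randrange(seen)
            if j < k:
                sample[j] = value
    return sample

def equal_depth_partition_vector(sample, num_partitions):
    """Approximate partition boundaries so each value range holds roughly the
    same number of sampled tuples (an equal-depth histogram)."""
    s = sorted(sample)
    return [s[len(s) * i // num_partitions] for i in range(1, num_partitions)]

# toy power-load stream
load = [100 + random.gauss(0, 5) for _ in range(10_000)]
print(equal_depth_partition_vector(reservoir_sample_with_skip(load, 500), 4))
```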
2. An Improved Hilbert Curve for Parallel Spatial Data Partitioning (Cited by: 7)
Authors: MENG Lingkui, HUANG Changqing, ZHAO Chunyu, LIN Zhiyong. Geo-Spatial Information Science, 2007, No. 4, pp. 282-286 (5 pages).
A novel Hilbert curve is introduced for parallel spatial data partitioning, with consideration of the huge-amount property of spatial information and the variable-length characteristic of vector data items. Based on the improved Hilbert curve, the algorithm can be designed to achieve almost-uniform spatial data partitioning among multiple disks in parallel spatial databases. Thus, the phenomenon of data imbalance can be significantly avoided and search and query efficiency can be enhanced.
Keywords: parallel spatial database; spatial data partitioning; data imbalance; Hilbert curve
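Below is a rough sketch of Hilbert-curve-based partitioning for variable-length spatial items: the standard Hilbert index (not the paper's improved curve) orders features by their grid cell, and runs of roughly equal total byte size are assigned to disks. The balancing rule and the data layout are assumptions.

```python
import random

def hilbert_index(order, x, y):
    """Classic Hilbert-curve index of cell (x, y) on a 2**order x 2**order grid
    (the standard curve, not the paper's improved variant)."""
    n = 1 << order
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate the quadrant so the sub-curve stays consistently oriented
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d

def partition_by_hilbert(features, num_disks, order=10):
    """Sort variable-length features by the Hilbert index of their centroid,
    then greedily cut the ordering into runs of roughly equal total byte size
    (the balancing rule here is an assumption for illustration)."""
    ranked = sorted(features, key=lambda f: hilbert_index(order, f["cx"], f["cy"]))
    total = sum(f["size"] for f in ranked)
    target = total / num_disks
    disks, current, acc = [[] for _ in range(num_disks)], 0, 0
    for f in ranked:
        if acc >= target and current < num_disks - 1:
            current, acc = current + 1, 0
        disks[current].append(f)
        acc += f["size"]
    return disks

feats = [{"cx": random.randrange(1024), "cy": random.randrange(1024),
          "size": random.randrange(100, 5000)} for _ in range(1000)]
print([sum(f["size"] for f in d) for d in partition_by_hilbert(feats, 4)])
```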
3. Clustering method based on data division and partition (Cited by: 1)
Authors: 卢志茂, 刘晨, S. Massinanke, 张春祥, 王蕾. Journal of Central South University, SCIE/EI/CAS, 2014, No. 1, pp. 213-222 (10 pages).
Many classical clustering algorithms perform well under their prerequisites but do not scale well when applied to very large data sets (VLDS). In this work, a novel division and partition clustering method (DP) is proposed to solve the problem. DP cuts the source data set into data blocks and extracts the eigenvector of each data block to form the local feature set. The local feature set is used in a second round of characteristics polymerization over the source data to find the global eigenvector. Ultimately, according to the global eigenvector, the data set is assigned by the criterion of minimum distance. The experimental results show that it is more robust than conventional clustering methods. Its insensitivity to data dimensionality, distribution and the number of natural clusters gives it a wide range of applications in clustering VLDS.
Keywords: clustering; division; partition; very large data sets (VLDS)
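A simplified sketch of the divide-then-aggregate idea: summarize each data block by a local representative, cluster the representatives to obtain global centers, then assign every point by minimum distance. The paper's eigenvector extraction is replaced here by block means purely for illustration.

```python
import numpy as np

def dp_style_clustering(data, block_size, k, iters=20):
    """Sketch of divide-then-aggregate clustering: local representatives per
    block (here: block means, as a stand-in for the paper's eigenvectors),
    a small k-means over those representatives, then a final minimum-distance
    assignment of every point to the global centers."""
    rng = np.random.default_rng(0)
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    local = np.array([b.mean(axis=0) for b in blocks])          # local feature set
    centers = local[rng.choice(len(local), size=k, replace=False)]
    for _ in range(iters):                                      # k-means on local features
        labels = np.argmin(np.linalg.norm(local[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([local[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    # final assignment of the full data set by minimum distance
    return np.argmin(np.linalg.norm(data[:, None] - centers[None], axis=2), axis=1)

data = np.vstack([np.random.randn(500, 2) + off for off in ([0, 0], [6, 6], [0, 6])])
print(np.bincount(dp_style_clustering(data, block_size=100, k=3)))
```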
4. A rate-distortion optimized rate shaping scheme for H.264 data partitioned video bitstream
Author: 张锦锋. High Technology Letters, EI/CAS, 2009, No. 1, pp. 65-69 (5 pages).
To enable quality scalability and further improve the reconstructed video quality in rate shaping, a rate-distortion optimized packet dropping scheme for H.264 data partitioned video bitstreams is proposed in this paper. Some side information is generated for each video bitstream in advance; while streaming, this side information is exploited by a greedy algorithm to drop partitions in a rate-distortion optimized way. Quality scalability is supported by adopting the data partition instead of the whole frame as the dropping unit. Simulation results show that the proposed scheme achieves a great gain in reconstructed video quality over two typical frame dropping schemes, with the help of the fine granularity of the dropping unit as well as rate-distortion optimization.
Keywords: rate shaping; frame dropping; rate-distortion optimization; data partition; H.264
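The greedy, rate-distortion-driven dropping described above might be sketched as follows, assuming per-partition bit cost and distortion contribution are available as the precomputed side information; the selection rule (smallest distortion per bit first) is an assumption.

```python
def greedy_drop(partitions, target_bits):
    """Sketch of rate-distortion optimized dropping: repeatedly drop the data
    partition with the smallest distortion-per-bit ratio until the bit budget
    is met. Per-partition 'bits' and 'distortion' are assumed precomputed."""
    kept = sorted(partitions, key=lambda p: p["distortion"] / p["bits"])
    total = sum(p["bits"] for p in kept)
    dropped = []
    while total > target_bits and kept:
        victim = kept.pop(0)              # cheapest distortion per bit
        dropped.append(victim)
        total -= victim["bits"]
    return kept, dropped

parts = [{"id": i, "bits": b, "distortion": d}
         for i, (b, d) in enumerate([(1200, 4.0), (800, 9.5), (500, 1.2), (700, 2.5)])]
print(greedy_drop(parts, target_bits=2500)[1])   # partitions chosen for dropping
```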
5. A Partition Checkpoint Strategy Based on Data Segment Priority
Authors: LIANG Ping, LIU Yunsheng. Wuhan University Journal of Natural Sciences, CAS, 2012, No. 2, pp. 109-113 (5 pages).
A partition checkpoint strategy based on data segment priority is presented to meet the timing constraints of the data and the transactions in embedded real-time main memory database systems (ERTMMDBS), as well as to reduce the number of transactions missing their deadlines and the recovery time. The partition checkpoint strategy takes into account the characteristics of the data and the transactions associated with it; moreover, it partitions the database according to data segment priority and sets a corresponding checkpoint frequency for each partition for independent checkpoint operation. The simulation results show that the partition checkpoint strategy decreases the ratio of transactions missing their deadlines.
Keywords: embedded real-time main memory database systems; database recovery; partition checkpoint; data segment priority
6. Highly Available Hypercube Tokenized Sequential Matrix Partitioned Data Sharing in Large P2P Networks
Authors: C. G. Ravichandran, J. Lourdu Xavier. Circuits and Systems, 2016, No. 9, pp. 2109-2119 (11 pages).
Peer-to-peer (P2P) networking is a distributed architecture that partitions tasks or data between peer nodes. In this paper, an efficient Hypercube Sequential Matrix Partition (HS-MP) for efficient data sharing in P2P networks using a tokenizer method is proposed to resolve the problems of larger P2P networks. The availability of data is first measured by the tokenizer using the Dynamic Hypercube Organization, which efficiently coordinates and assists the peers in the P2P network, ensuring data availability at many locations. Each data item in a peer is then assigned a valid ID by the tokenizer using the Sequential Self-Organizing (SSO) ID generation model. This ensures data sharing with other nodes in a large P2P network within a minimum time interval, which is obtained through the proximity of data availability. To validate the HS-MP framework, its performance is evaluated using traffic traces collected from data sharing applications. Simulations conducted using Network Simulator-2 show that the proposed framework outperforms conventional streaming models. The performance of the proposed system is analyzed using energy consumption, average latency and average data availability rate with respect to the number of peer nodes, data size, amount of data shared and execution time. The proposed method reduces energy consumption by 43.35% for transpose traffic, 35.29% for bitrev traffic and 25% for bitcomp traffic patterns.
Keywords: Peer-to-Peer (P2P); video-on-demand; hypercube; sequential matrix partition; data mapping; data availability
7. Data-Aware Partitioning Schema in MapReduce
Authors: Liang Junjie, Liu Qiongni, Yin Li, Yu Dunhui. 国际计算机前沿大会会议论文集, 2015, No. 1, pp. 28-29 (2 pages).
With the advantages of the MapReduce programming model in parallel computing and processing of data and tasks on large-scale clusters, a data-aware partitioning schema in MapReduce for large-scale high-dimensional data is proposed. It optimizes the partition method so that data blocks contribute equally to the computation in MapReduce. Using a two-stage data partitioning strategy, the data are uniformly distributed into data blocks by clustering and partitioning. The experiments show that the data-aware partitioning schema is very effective and extensible for improving the query efficiency of high-dimensional data.
Keywords: cloud computing; MapReduce; high-dimensional data; data-aware partitioning
8. A HEVC Video Steganalysis Algorithm Based on PU Partition Modes (Cited by: 3)
Authors: Zhonghao Li, Laijing Meng, Shutong Xu, Zhaohong Li, Yunqing Shi, Yuanchang Liang. Computers, Materials & Continua, SCIE/EI, 2019, No. 5, pp. 563-574 (12 pages).
Steganalysis is a technique used for detecting the existence of secret information embedded into cover media such as images and videos. Currently, with the higher speed of the Internet, videos have become one of the main media for transferring information. The latest video coding standard, High Efficiency Video Coding (HEVC), shows better coding performance than the earlier H.264/AVC standard. Therefore, since HEVC was published, HEVC videos have been widely used as carriers of hidden information. In this paper, a steganalysis algorithm is proposed to detect the latest HEVC video steganography method, which is based on the modification of Prediction Unit (PU) partition modes. To detect the embedded data, all the PU partition modes are extracted from P pictures, and the probability of each PU partition mode in cover videos and stego videos is adopted as the classification feature. Furthermore, feature optimization is applied so that the 25-dimensional steganalysis feature is reduced to a 3-dimensional feature. Then the Support Vector Machine (SVM) is used to identify stego videos. Experimental results demonstrate that the proposed steganalysis algorithm can effectively detect stego videos, and much higher classification accuracy is achieved compared with state-of-the-art work.
Keywords: video steganalysis; PU partition modes; data hiding; HEVC videos
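The occurrence-frequency feature described in the abstract (probability of each PU partition mode over the P pictures of a video) can be sketched as a normalized histogram; the four-mode vocabulary below is an illustrative subset, not the paper's 25-dimensional feature or its 3-D reduction.

```python
from collections import Counter

def pu_mode_feature(pu_modes, mode_vocab):
    """Turn the list of PU partition modes extracted from a video's P pictures
    into a probability vector over a fixed mode vocabulary (an occurrence-
    frequency feature of the kind the abstract describes)."""
    counts = Counter(pu_modes)
    total = max(len(pu_modes), 1)
    return [counts.get(m, 0) / total for m in mode_vocab]

vocab = ["2Nx2N", "2NxN", "Nx2N", "NxN"]            # illustrative subset of modes
feature = pu_mode_feature(["2Nx2N", "2NxN", "2Nx2N", "NxN"], vocab)
print(feature)   # -> [0.5, 0.25, 0.0, 0.25], ready to feed to an SVM classifier
```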
9. A Partitioning Methodology That Optimizes the Communication Cost for Reconfigurable Computing Systems
Authors: Ramzi Ayadi, Bouraoui Ouni, Abdellatif Mtibaa. International Journal of Automation and Computing, EI, 2012, No. 3, pp. 280-287 (8 pages).
This paper focuses on the design process for reconfigurable architectures. Our contribution focuses on introducing a new temporal partitioning algorithm. Our algorithm is based on a typical mathematical flow to solve the temporal partitioning problem. This algorithm optimizes the transfer of data required between design partitions and the reconfiguration overhead. Results show that our algorithm considerably decreases the communication cost and the latency compared with other well-known algorithms.
Keywords: temporal partitioning; data flow graph; communication cost; reconfigurable computing systems; field-programmable gate array (FPGA)
10. A Study on Associated Rules and Fuzzy Partitions for Classification
Authors: Yeu-Shiang Huang, Jyi-Feng Yao. Intelligent Information Management, 2012, No. 5, pp. 217-224 (8 pages).
The amount of data for decision making has increased tremendously in the age of the digital economy. Decision makers who fail to proficiently manipulate the data produced may make incorrect decisions and therefore harm their business. Thus, the task of extracting and classifying the useful information efficiently and effectively from huge amounts of computational data is of special importance. In this paper, we consider that the attributes of data could be both crisp and fuzzy. By examining the suitable partial data, segments with different classes are formed, then a multithreaded computation is performed to generate crisp rules (if possible), and finally, the fuzzy partition technique is employed to deal with the fuzzy attributes for classification. The rules generated in classifying the overall data can be used to gain more knowledge from the data collected.
Keywords: data mining; fuzzy partition; partial classification; association rule; knowledge discovery
11. Hybrid Graph Partitioning with OLB Approach in Distributed Transactions
Authors: Rajesh Bharati, Vahida Attar. Intelligent Automation & Soft Computing, SCIE, 2023, No. 7, pp. 763-775 (13 pages).
Online Transaction Processing (OLTP) gets support from data partitioning to achieve better performance and scalability. The primary objective of database and application developers is to provide scalable and reliable database systems. This research presents a novel method for data partitioning and load balancing for scalable transactions. Data is efficiently partitioned using the hybrid graph partitioning method. The optimized load balancing (OLB) approach is applied to calculate the weight factor, average workload, and partition efficiency. The presented approach is appropriate for various online data transaction applications. The quality of the proposed approach is examined using an OLTP database benchmark. The proposed methodology significantly outperformed existing approaches with respect to metrics such as throughput, response time, and CPU utilization.
Keywords: data partitioning; scalability; optimization; throughput
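A small sketch of the load-balancing bookkeeping mentioned above, under the assumption that the weight factor is each partition's share of the weighted load, the average workload is the mean weighted load, and partition efficiency is average load over peak load; the paper's exact definitions may differ.

```python
def olb_metrics(partition_loads, weights=None):
    """Assumed bookkeeping for an optimized-load-balancing step:
    weight factor  = each partition's share of the weighted total load,
    avg_workload   = mean weighted load,
    efficiency     = average load divided by the heaviest load (1.0 = perfectly balanced)."""
    weights = weights or [1.0] * len(partition_loads)
    weighted = [load * w for load, w in zip(partition_loads, weights)]
    total = sum(weighted)
    avg_workload = total / len(weighted)
    weight_factors = [wl / total for wl in weighted]
    partition_efficiency = avg_workload / max(weighted)
    return weight_factors, avg_workload, partition_efficiency

print(olb_metrics([120, 80, 100, 100]))
```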
12. An Animated GIF Steganography Using Variable Block Partition Scheme
Authors: Maram Abdullah M. Alyahya, Arshiya S. Ansari, Mohammad Sajid Mohammadi. Computer Systems Science & Engineering, SCIE/EI, 2022, No. 12, pp. 897-914 (18 pages).
The paper presents a novel Graphics Interchange Format (GIF) steganography system. The algorithm applies a secured and variable image partition scheme for data embedding to an animated GIF file format video. The secret data could be any character text, any image, an audio file, or a video file that is converted into the form of bits. The proposed method uses a variable partition scheme structure for data embedding in the GIF file format video. The algorithm estimates the capacity of the cover GIF image frames to embed data bits. Our method builds variable partition blocks in an empty frame separately and incorporates it with randomly selected GIF frames. This way, the GIF frame is divided into variable blocks in the same way as in the empty frame. Then the algorithm embeds secret data on appropriate pixels of the GIF frame. Each partition block selected for data embedding can store a different number of data bits based on the block size. Intruders could never come to know the exact position of the secret data in this stego frame. All the GIF frames are rebuilt to make the animated stego GIF video. The performance of the proposed GIF algorithm has been experimentally evaluated based on different input parameters, such as Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) values. The results are compared with some existing methods, and our method shows promising results.
Keywords: GIF steganography; frame partition; variable data insertion; data encapsulation
13. An Improved FP-Growth Algorithm Based on SOM Partition
Authors: Kuikui Jia, Haibin Liu. 国际计算机前沿大会会议论文集, 2017, No. 1, pp. 42-44 (3 pages).
The FP-growth algorithm is an algorithm for mining association rules without generating candidate sets. It has high practical value in many fields. However, it is a memory-resident algorithm, can only handle small data sets, and is ineffective when dealing with massive data sets. This paper improves the FP-growth algorithm. The core idea of the improved algorithm is to partition a massive data set into small data sets, which are dealt with separately. Firstly, systematic sampling methods are used to extract representative samples from the large data set, and these samples are used for SOM (Self-Organizing Map) cluster analysis. Then, the large data set is partitioned into several subsets according to the cluster results. Lastly, the FP-growth algorithm is executed on each subset, and association rules are mined. The experimental results show that the improved algorithm reduces memory consumption and shortens the time of data mining. The processing capacity and efficiency for massive data are enhanced by the improved algorithm.
Keywords: FP-growth; SOM; data mining; cluster; partition
14. Graph Based Two-Phase Procedure for Phasor Data Concentrator Planning in Wide Area Measurement System of Smart Grid
Authors: Ma Hailong, Duan Tong, Yi Peng, Jiang Yiming, Zhang Jin. China Communications, 2025, No. 11, pp. 291-304 (14 pages).
The phasor data concentrator placement (PDCP) in wide area measurement systems (WAMS) is an optimization problem in communication network planning for the power grid. Instead of using the traditional integer linear programming (ILP) based modeling and solution schemes that ignore the graph-related features of WAMS, in this work the PDCP problem is solved through a heuristic graph-based two-phase procedure (TPP): topology partitioning, and phasor data concentrator (PDC) provisioning. Based on existing minimum k-section algorithms in graph theory, the k-base topology partitioning algorithm is proposed. To improve performance, the "center-node-last" pre-partitioning algorithm is proposed to give an initial partition before the k-base partitioning algorithm is applied. Then, the PDC provisioning algorithm is proposed to locate PDCs into the decomposed sub-graphs. The proposed TPP was evaluated on five different IEEE benchmark test power systems, and the overall communication performance achieved compared to the ILP-based schemes shows the validity and efficiency of the proposed method.
Keywords: industrial Internet; minimum k-section; phasor data concentrator placement; phasor measurement unit; smart grid; topology partitioning; wide area measurement system
15. ADVANCED FREQUENCY-DIRECTED RUN-LENGTH BASED CODING SCHEME ON TEST DATA COMPRESSION FOR SYSTEM-ON-CHIP (Cited by: 1)
Authors: 张颖, 吴宁, 葛芬. Transactions of Nanjing University of Aeronautics and Astronautics, EI, 2012, No. 1, pp. 77-83 (7 pages).
Test data compression and test resource partitioning (TRP) are essential to reduce the amount of test data in system-on-chip testing. A novel variable-to-variable-length compression code, advanced frequency-directed run-length (AFDR) coding, is designed. Different from frequency-directed run-length (FDR) codes, AFDR encodes both 0- and 1-runs and uses the same codes for runs of equal length. It also modifies the codes for 00 and 11 to improve compression performance. Experimental results for ISCAS 89 benchmark circuits show that AFDR codes achieve a higher compression ratio than FDR and other compression codes.
Keywords: test data compression; FDR codes; test resource partitioning; system-on-chip
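A toy illustration of the run-based idea: split a test vector into alternating 0-runs and 1-runs and give equal-length runs the same variable-length codeword. The codeword generator below is an Elias-gamma-style stand-in, not the actual FDR/AFDR code tables.

```python
def runs(bits: str):
    """Split a test vector into alternating 0-runs and 1-runs,
    e.g. '0001100' -> [('0', 3), ('1', 2), ('0', 2)]."""
    out, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        out.append((bits[i], j - i))
        i = j
    return out

def toy_codeword(run_length: int) -> str:
    """Stand-in variable-length prefix code (Elias-gamma style): shorter runs
    get shorter codewords, and equal-length runs share one codeword whether
    they are 0-runs or 1-runs. NOT the exact FDR/AFDR construction."""
    b = bin(run_length)[2:]
    return "1" * (len(b) - 1) + "0" + b[1:]

encoded = "".join(toy_codeword(length) for _, length in runs("000110000001111"))
print(encoded)   # a decoder would also need the first bit value of the vector
```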
16. Tailored Partitioning for Healthcare Big Data: A Novel Technique for Efficient Data Management and Hash Retrieval in RDBMS Relational Architectures
Authors: Ehsan Soltanmohammadi, Neset Hikmet, Dilek Akgun. Journal of Data Analysis and Information Processing, 2025, No. 1, pp. 46-65 (20 pages).
Efficient data management in healthcare is essential for providing timely and accurate patient care, yet traditional partitioning methods in relational databases often struggle with the high volume, heterogeneity, and regulatory complexity of healthcare data. This research introduces a tailored partitioning strategy leveraging the MD5 hashing algorithm to enhance data insertion, query performance, and load balancing in healthcare systems. By applying a consistent hash function to patient IDs, our approach achieves uniform distribution of records across partitions, optimizing retrieval paths and reducing access latency while ensuring data integrity and compliance. We evaluated the method through experiments focusing on partitioning efficiency, scalability, and fault tolerance. The partitioning efficiency analysis compared our MD5-based approach with standard round-robin methods, measuring insertion times, query latency, and data distribution balance. Scalability tests assessed system performance across increasing dataset sizes and varying partition counts, while fault tolerance experiments examined data integrity and retrieval performance under simulated partition failures. The experimental results demonstrate that the MD5-based partitioning strategy significantly reduces query retrieval times by optimizing data access patterns, achieving up to X% better performance compared to round-robin methods. It also scales effectively with larger datasets, maintaining low latency and ensuring robust resilience under failure scenarios. This novel approach offers a scalable, efficient, and fault-tolerant solution for healthcare systems, facilitating faster clinical decision-making and improved patient care in complex data environments.
Keywords: healthcare data partitioning; Relational Database Management Systems (RDBMS); big data management; load balance; query performance improvement; data integrity and fault tolerance; efficient big data in healthcare; dynamic data distribution; healthcare information systems; partitioning algorithms; performance evaluation in databases
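The core of the MD5-based strategy — hashing a patient ID and taking the result modulo the partition count so records spread uniformly — can be sketched directly; the key format and partition count below are illustrative, not the system's actual configuration.

```python
import hashlib
from collections import Counter

def md5_partition(patient_id: str, num_partitions: int) -> int:
    """Map a patient ID to a partition via MD5 so records spread uniformly
    across partitions (illustrative key format and partition count)."""
    digest = hashlib.md5(patient_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

# check balance on synthetic IDs
counts = Counter(md5_partition(f"patient-{i}", 8) for i in range(100_000))
print(sorted(counts.values()))   # roughly equal bucket sizes
```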
17. LRP: learned robust data partitioning for efficient processing of large dynamic queries
Authors: Pengju LIU, Pan CAI, Kai ZHONG, Cuiping LI, Hong CHEN. Frontiers of Computer Science, 2025, No. 9, pp. 43-60 (18 pages).
The interconnection between query processing and data partitioning is pivotal for the acceleration of massive data processing during query execution, primarily by minimizing the number of scanned block files. Existing partitioning techniques predominantly focus on query accesses on numeric columns for constructing partitions, often overlooking non-numeric columns and thus limiting optimization potential. Additionally, these techniques, despite creating fine-grained partitions from representative queries to enhance system performance, experience notable performance declines due to unpredictable fluctuations in future queries. To tackle these issues, we introduce LRP, a learned robust partitioning system for dynamic query processing. LRP first proposes a method for data and query encoding that captures comprehensive column access patterns from historical queries. It then employs Multi-Layer Perceptron and Long Short-Term Memory networks to predict shifts in the distribution of historical queries. To create high-quality, robust partitions based on these predictions, LRP adopts a greedy beam search algorithm for optimal partition division and implements a data redundancy mechanism to share frequently accessed data across partitions. Experimental evaluations reveal that LRP yields partitions with more stable performance under incoming queries and significantly surpasses state-of-the-art partitioning methods.
Keywords: data partitioning; data encoding; query prediction; beam search; data redundancy
18. Local and global approaches of affinity propagation clustering for large scale data (Cited by: 15)
Authors: Ding-yin XIA, Fei WU, Xu-qing ZHAN, Yue-ting ZHUANG. Journal of Zhejiang University-Science A (Applied Physics & Engineering), SCIE/EI/CAS/CSCD, 2008, No. 10, pp. 1373-1381 (9 pages).
Recently a new clustering algorithm called 'affinity propagation' (AP) has been proposed, which efficiently clusters sparsely related data by passing messages between data points. However, we want to cluster large scale data where the similarities are not sparse in many cases. This paper presents two variants of AP for grouping large scale data with a dense similarity matrix. The local approach is partition affinity propagation (PAP) and the global method is landmark affinity propagation (LAP). PAP passes messages in the subsets of data first and then merges them as the number of initial steps of iterations; it can effectively reduce the number of iterations of clustering. LAP passes messages between the landmark data points first and then clusters non-landmark data points; it is a large global approximation method to speed up clustering. Experiments are conducted on many datasets, such as random data points, manifold subspaces, images of faces and Chinese calligraphy, and the results demonstrate that the two approaches are feasible and practicable.
Keywords: clustering; affinity propagation; large scale data; partition affinity propagation; landmark affinity propagation
19. Fast Computation of Sparse Data Cubes with Constraints (Cited by: 2)
Authors: Feng Yu-cai, Chen Chang-qing, Feng Jian-lin, Xiang Long-gang. Wuhan University Journal of Natural Sciences, EI/CAS, 2004, No. 2, pp. 167-172 (6 pages).
For a data cube there are always constraints between dimensions or among attributes in a dimension, such as functional dependencies. We introduce the problem that, when there are functional dependencies, how to use them to speed up the computation of sparse data cubes. A new algorithm, CFD (Computation by Functional Dependencies), is presented to satisfy this demand. CFD determines the order of dimensions by considering the cardinalities of dimensions and the functional dependencies between dimensions together, thus reducing the number of partitions for such dimensions. CFD also combines partitioning from bottom to up and aggregate computation from top to bottom to speed up the computation further. CFD can efficiently compute a data cube with hierarchies in a dimension from the smallest granularity to the coarsest one.
Keywords: sparse data cube; functional dependency; dimension; partition; CFD
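One way to picture how functional dependencies prune the partitioning work: if dimension A determines B, partitioning by A already groups B, so B can be skipped in the dimension order. The descending-cardinality rule below is an assumption, not necessarily CFD's exact ordering.

```python
def dimension_order(cardinalities, fds):
    """Hypothetical sketch: order dimensions for cube partitioning by
    descending cardinality, skipping dimensions that are functionally
    determined by an earlier-chosen dimension (A -> B means partitioning
    by A already groups B, so B needs no separate partitioning pass)."""
    determined, order = set(), []
    for dim in sorted(cardinalities, key=cardinalities.get, reverse=True):
        if dim in determined:
            continue
        order.append(dim)
        determined |= fds.get(dim, set())
    return order

print(dimension_order(
    {"store": 1000, "city": 50, "product": 500, "category": 20},
    {"store": {"city"}, "product": {"category"}},
))   # -> ['store', 'product']: city and category are already determined
```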
20. Reversible Data Hiding Based on Pixel-Value-Ordering and Pixel Block Merging Strategy (Cited by: 1)
Authors: Wengui Su, Xiang Wang, Yulong Shen. Computers, Materials & Continua, SCIE/EI, 2019, No. 6, pp. 925-941 (17 pages).
With reversible data hiding methods based on pixel-value-ordering, data are embedded through the modification of the maximum and minimum values of a block. A significant relationship exists between the embedding performance and the block size. Traditional pixel-value-ordering methods utilize pixel blocks with a fixed size to embed data; the smaller the pixel blocks, the greater the embedding capacity. However, this tends to result in deterioration of the quality of the marked image. Herein, a novel reversible data hiding method is proposed by incorporating a block merging strategy into Li et al.'s pixel-value-ordering method, which realizes dynamic control of the block size by considering the image texture. First, the cover image is divided into non-overlapping 2×2 pixel blocks. Subsequently, according to their complexity, similarity and thresholds, these blocks are employed for data embedding through the pixel-value-ordering method directly or after being merged into 2×4, 4×2, or 4×4 sized blocks. Hence, smaller blocks can be used in smooth regions to create a high embedding capacity and larger blocks in texture regions to maintain a high peak signal-to-noise ratio. Experimental results prove that the proposed method is superior to three other advanced methods. It achieves a high embedding capacity while maintaining low distortion and improves the embedding performance of the pixel-value-ordering algorithm.
Keywords: reversible data hiding; pixel-value-ordering; prediction error expansion; dynamic block partition
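A sketch of the block-merging decision: start from small blocks and keep larger blocks only where the region is textured, so smooth areas retain small blocks (capacity) and rough areas get large blocks (image quality). The complexity measure (standard deviation), the threshold, and the 2×2-vs-4×4 simplification are assumptions; the embedding step itself is omitted.

```python
import numpy as np

def block_complexity(block: np.ndarray) -> float:
    """A simple texture measure (standard deviation of the pixels); the
    paper's exact complexity/similarity criteria are not reproduced."""
    return float(block.std())

def choose_block_partition(image: np.ndarray, threshold: float = 8.0):
    """Sketch of the merging idea: keep a whole 4x4 block where the region is
    textured (complexity above the threshold), otherwise split it into four
    2x2 blocks so smooth areas contribute more embedding units."""
    h, w = (image.shape[0] // 4) * 4, (image.shape[1] // 4) * 4
    blocks = []
    for r in range(0, h, 4):
        for c in range(0, w, 4):
            region = image[r:r + 4, c:c + 4]
            if block_complexity(region) > threshold:
                blocks.append(((r, c), region))                       # one 4x4 block
            else:
                for dr in (0, 2):
                    for dc in (0, 2):
                        blocks.append(((r + dr, c + dc),
                                       region[dr:dr + 2, dc:dc + 2]))  # four 2x2 blocks
    return blocks

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(len(choose_block_partition(img)))   # number of embedding blocks chosen
```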