Journal Articles
281 articles found
1. Hybrid cloud approach for block-level deduplication and searchable encryption in large universe
Authors: Liu Zhenhua, Kang Yaqian, Li Chen, Fan Yaqing. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2017, No. 5, pp. 23-34 (12 pages)
Ciphertext-policy attribute-based searchable encryption (CP-ABSE) can achieve fine-grained access control for data sharing and retrieval, and secure deduplication can save storage space by eliminating duplicate copies. However, few schemes support both searchable encryption and secure deduplication. In this paper, a large-universe CP-ABSE scheme supporting secure block-level deduplication is proposed under a hybrid cloud mechanism. In the proposed scheme, after the ciphertext is inserted into a Bloom filter tree (BFT), the private cloud can perform fine-grained deduplication efficiently by matching tags, and the public cloud can search efficiently using a homomorphic searchable method and keyword matching. Finally, the proposed scheme achieves block-level privacy under chosen-distribution attacks (PRV-CDA-B) for secure deduplication and match-concealing (MC) searchable security. Compared with existing schemes, the proposed scheme has the advantage of supporting fine-grained access control, block-level deduplication, and efficient search simultaneously.
Keywords: block-level deduplication; searchable encryption; large universe; BFT
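A minimal Python sketch of the block-level deduplication check described above, using a flat Bloom filter as a fast negative test before exact tag matching. The paper's Bloom filter tree (BFT) and its attribute-based encryption layer are not reproduced; tag derivation and filter parameters are illustrative assumptions.

```python
# Private-cloud-style block-level dedup check: Bloom filter pre-test + exact tag match.
import hashlib

class BlockDedupIndex:
    def __init__(self, m_bits=1 << 20, k=4):
        self.bits = bytearray(m_bits // 8)
        self.m, self.k = m_bits, k
        self.tags = set()              # exact block tags already stored

    def _positions(self, tag: bytes):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + tag).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def _bloom_add(self, tag):
        for p in self._positions(tag):
            self.bits[p // 8] |= 1 << (p % 8)

    def _bloom_maybe(self, tag):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(tag))

    def insert(self, block: bytes) -> bool:
        """Return True if the block is a duplicate and need not be stored."""
        tag = hashlib.sha256(block).digest()   # content-derived block tag
        if self._bloom_maybe(tag) and tag in self.tags:
            return True                        # duplicate: matched by tag
        self._bloom_add(tag)
        self.tags.add(tag)
        return False

index = BlockDedupIndex()
print(index.insert(b"block-A"))   # False: first copy is stored
print(index.insert(b"block-A"))   # True: block-level duplicate detected
```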
2. Updatable block-level deduplication of encrypted data with efficient auditing in cloud storage (cited: 1)
Authors: Dang Qianlong, Xie Ying, Li Donghao, Hu Gongcheng. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2019, No. 3, pp. 56-72 (17 pages)
Updatable block-level message-locked encryption (MLE) can efficiently update encrypted data, and public auditing can verify the integrity of cloud storage data by utilizing a third-party auditor (TPA). However, few schemes support both updatable block-level deduplication and public auditing. In this paper, an updatable block-level deduplication scheme with efficient auditing is proposed based on a tree-based authenticated structure. In the proposed scheme, the cloud server (CS) performs block-level deduplication, and the TPA carries out the integrity auditing tasks. When a data block is updated, the ciphertext and auditing tags can be updated efficiently. The security analysis demonstrates that the proposed scheme achieves privacy under chosen-distribution attacks for secure deduplication and resists uncheatable chosen-distribution attacks (UNC-CDA) in proof of ownership (PoW). Furthermore, the integrity auditing process is proven secure under adaptive chosen-message attacks. Compared with previous relevant schemes, the proposed scheme achieves better functionality and higher efficiency.
Keywords: data update operation; block-level deduplication; efficient auditing; tree-based authenticated structure; proof of ownership
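A minimal sketch of block-level message-locked encryption, the building block named above: the key is derived from the block content, so identical plaintext blocks yield identical ciphertexts and can be deduplicated without the server seeing the plaintext. A toy XOR keystream stands in for a real block cipher, and the tag derivation is an assumption; the paper's tree-based authenticated structure is not shown.

```python
import hashlib

def mle_key(block: bytes) -> bytes:
    return hashlib.sha256(b"key|" + block).digest()        # content-derived key

def keystream(key: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def mle_encrypt(block: bytes):
    k = mle_key(block)
    ct = bytes(a ^ b for a, b in zip(block, keystream(k, len(block))))
    tag = hashlib.sha256(ct).hexdigest()      # dedup/auditing tag over the ciphertext
    return k, ct, tag

# Two users encrypting the same block independently get the same ciphertext and tag,
# so the cloud server can deduplicate; an update only replaces one block's tag.
k1, ct1, tag1 = mle_encrypt(b"block contents v1")
k2, ct2, tag2 = mle_encrypt(b"block contents v1")
assert ct1 == ct2 and tag1 == tag2
```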
3. AR-Dedupe: An Efficient Deduplication Approach for Cluster Deduplication System (cited: 2)
Authors: 邢玉轩, 肖侬, 刘芳, 孙振, 何晚辉. Journal of Shanghai Jiaotong University (Science) (EI), 2015, No. 1, pp. 76-81 (6 pages)
As data grow rapidly in data centers, inline cluster deduplication has been widely used to improve storage efficiency and data reliability. However, cluster deduplication systems face several challenges: the deduplication ratio decreases as the number of deduplication server nodes increases, data routing incurs high communication overhead, and load balancing is needed to improve system throughput. In this paper, we propose a well-performing cluster deduplication system called AR-Dedupe. Experimental results on two real-world datasets demonstrate that, through a new data routing algorithm, AR-Dedupe achieves a high deduplication ratio with low communication overhead while keeping the system load well balanced. In addition, we utilize an application-aware mechanism to speed up handprint indexing in the routing server, which yields a 30% performance improvement.
Keywords: cluster deduplication system; routing algorithm; application-aware
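A minimal sketch of similarity-based data routing in a deduplication cluster, in the spirit of the routing server described above but not AR-Dedupe's exact algorithm: a super-chunk is summarized by a small "handprint" (its k smallest chunk fingerprints) and routed to the node whose stored handprints overlap it most, discounted by node load. The handprint size and load weight are assumptions.

```python
import hashlib

K = 8  # handprint size (illustrative)

def handprint(chunks):
    fps = sorted(hashlib.sha1(c).hexdigest() for c in chunks)
    return set(fps[:K])

def route(super_chunk, node_prints, node_load):
    hp = handprint(super_chunk)
    def score(node):
        overlap = len(hp & node_prints[node])
        return overlap - 0.1 * node_load[node]   # load-balancing weight is an assumption
    best = max(node_prints, key=score)
    node_prints[best] |= hp                      # routing server updates its state
    node_load[best] += 1
    return best

node_prints = {"node0": set(), "node1": set()}
node_load = {"node0": 0, "node1": 0}
batch = [b"chunk-%d" % i for i in range(32)]
print(route(batch, node_prints, node_load))   # no prior state: the tie resolves to node0
print(route(batch, node_prints, node_load))   # identical batch routes to the node holding its handprint
```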
4. Public Auditing for Encrypted Data with Client-Side Deduplication in Cloud Storage (cited: 4)
Authors: HE Kai, HUANG Chuanhe, ZHOU Hao, SHI Jiaoli, WANG Xiaomao, DAN Feng. Wuhan University Journal of Natural Sciences (CAS, CSCD), 2015, No. 4, pp. 291-298 (8 pages)
Storage auditing and client-side deduplication techniques have been proposed to assure data integrity and improve storage efficiency, respectively. Recently, a few schemes have started to consider these two aspects together. However, these schemes either support only plaintext data files or have been proved insecure. In this paper, we propose a public auditing scheme for cloud storage systems in which deduplication of encrypted data and data integrity checking can be achieved within the same framework. The cloud server can correctly check ownership for new owners, and the auditor can correctly check the integrity of deduplicated data. Our scheme supports deduplication of encrypted data by using proxy re-encryption and also achieves deduplication of data tags by aggregating the tags from different owners. The analysis and experimental results show that our scheme is provably secure and efficient.
Keywords: public auditing; data integrity; storage deduplication; cloud storage
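A minimal sketch of client-side deduplication with a proof-of-ownership (PoW) check, the workflow the ownership checking above relies on; the paper's proxy re-encryption and tag aggregation are omitted, and the challenge format is an assumption. The client first sends only a short fingerprint; if the server already holds the file, it challenges the client to prove possession instead of re-uploading.

```python
import hashlib, os

class Server:
    def __init__(self):
        self.store = {}                      # fingerprint -> stored (encrypted) file

    def has(self, fp):
        return fp in self.store

    def upload(self, fp, data):
        self.store[fp] = data

    def pow_challenge(self, fp):
        nonce = os.urandom(16)
        expected = hashlib.sha256(nonce + self.store[fp]).hexdigest()
        return nonce, expected               # server keeps `expected`, sends `nonce`

def client_put(server, data):
    fp = hashlib.sha256(data).hexdigest()
    if not server.has(fp):
        server.upload(fp, data)              # first owner pays the upload cost
        return "uploaded"
    nonce, expected = server.pow_challenge(fp)
    proof = hashlib.sha256(nonce + data).hexdigest()
    return "dedup-ok" if proof == expected else "pow-failed"

srv = Server()
print(client_put(srv, b"encrypted file body"))   # uploaded
print(client_put(srv, b"encrypted file body"))   # dedup-ok: new owner added without upload
```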
5. Using multi-threads to hide deduplication I/O latency with low synchronization overhead (cited: 1)
Authors: 朱锐, 秦磊华, 周敬利, 郑寰. Journal of Central South University (SCIE, EI, CAS), 2013, No. 6, pp. 1582-1591 (10 pages)
Data deduplication, as a compression method, has been widely used in most backup systems to improve bandwidth and space efficiency. As the volume of data to be backed up explodes, the two main challenges in data deduplication are the CPU-intensive chunking and hashing work and the I/O-intensive disk-index access latency. Since the CPU-intensive work has been vastly parallelized and sped up by multi-core and many-core processors, I/O latency is likely to become the bottleneck in data deduplication. To alleviate the challenge of I/O latency in multi-core systems, a multi-threaded deduplication (Multi-Dedup) architecture is proposed. The main idea of Multi-Dedup is to use parallel deduplication threads to hide the I/O latency. A prefix-based concurrent index was designed to maintain the internal consistency of the deduplication index with low synchronization overhead. In addition, a collisionless cache array was designed to preserve locality and similarity within the parallel threads. In experiments on various real-world datasets, Multi-Dedup achieves 3-5 times performance improvements when incorporated with the locality-based ChunkStash and the local-similarity based SiLo methods. Moreover, Multi-Dedup dramatically decreases the synchronization overhead and achieves 1.5-2 times performance improvements compared with traditional lock-based synchronization methods.
Keywords: multi-thread; multi-core; parallel data deduplication
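A minimal sketch of a prefix-partitioned concurrent fingerprint index, illustrating the low-synchronization idea above (not Multi-Dedup's exact design): the index is sharded by the first byte of each fingerprint and every shard has its own lock, so deduplication threads touching different prefixes rarely contend.

```python
import hashlib
import threading

class PrefixIndex:
    def __init__(self, shards: int = 256):
        self.locks = [threading.Lock() for _ in range(shards)]
        self.maps = [dict() for _ in range(shards)]

    def lookup_or_insert(self, chunk: bytes) -> bool:
        fp = hashlib.sha1(chunk).digest()
        shard = fp[0]                          # the fingerprint prefix picks the shard
        with self.locks[shard]:                # only this shard is locked
            if fp in self.maps[shard]:
                return True                    # duplicate chunk
            self.maps[shard][fp] = True
            return False

index = PrefixIndex()
results = []

def worker(chunks):
    # Each thread counts the duplicates it sees; list.append is safe in CPython.
    results.append(sum(index.lookup_or_insert(c) for c in chunks))

data = [b"chunk-%d" % (i % 100) for i in range(1000)]   # 100 unique chunks, each repeated 10x
threads = [threading.Thread(target=worker, args=(data[i::4],)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("duplicates found:", sum(results))                # 900
```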
6. Secured Data Storage Using Deduplication in Cloud Computing Based on Elliptic Curve Cryptography (cited: 1)
Authors: N. Niyaz Ahamed, N. Duraipandian. Computer Systems Science & Engineering (SCIE, EI), 2022, No. 4, pp. 83-94 (12 pages)
Cloud computing and its related technologies have developed at a tremendous pace. However, centralized cloud storage faces challenges such as latency, storage overhead, and packet drops in the network. Cloud storage attracts attention due to its huge data capacity and its promise of keeping secret information secure. Most developments in cloud storage have been positive, apart from better cost models and effectiveness, but data leakage and security remain billion-dollar questions for consumers. Traditional data security techniques are usually based on cryptographic methods, but these approaches may not withstand an attack from inside the cloud server. We therefore suggest a model called multi-layer storage (MLS) whose security is based on elliptic curve cryptography (ECC). The suggested model focuses on cloud storage together with data protection and the removal of duplicates at the initial level. Based on a divide-and-combine methodology, the data are divided into three parts. The first two portions are stored in the local system and in fog nodes to secure the data using encoding and decoding techniques, and the remaining encrypted part is saved in the cloud. The viability of our model has been tested in terms of safety measures and test evaluation, and it is a powerful complement to existing methods in cloud storage.
Keywords: cloud storage; deduplication; fog computing; elliptic curve cryptography
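A minimal sketch of the split-and-place idea described above: deduplicate at ingest, then divide a file into three portions for the local, fog, and cloud tiers. The ECC encryption of the cloud portion is abstracted as a placeholder, and all names are illustrative assumptions rather than the paper's implementation.

```python
import hashlib

seen = set()                                   # fingerprints of files already ingested
local_store, fog_store, cloud_store = {}, {}, {}

def encrypt_for_cloud(part: bytes) -> bytes:
    # Placeholder for ECC-based encryption of the cloud-resident portion.
    return part[::-1]

def ingest(name: str, data: bytes):
    fp = hashlib.sha256(data).hexdigest()
    if fp in seen:
        return "duplicate skipped"             # removed at the initial level
    seen.add(fp)
    third = len(data) // 3
    local_store[name] = data[:third]           # portion 1: local system
    fog_store[name] = data[third:2 * third]    # portion 2: fog node
    cloud_store[name] = encrypt_for_cloud(data[2 * third:])  # portion 3: cloud
    return "stored"

print(ingest("report.bin", b"0123456789abcdefghij"))   # stored
print(ingest("copy.bin",   b"0123456789abcdefghij"))   # duplicate skipped
```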
7. SRSC: Improving Restore Performance for Deduplication-Based Storage Systems
Authors: ZUO Chunxue, WANG Fang, TANG Xiaolan, ZHANG Yucheng, FENG Dan. ZTE Communications, 2019, No. 2, pp. 59-66 (8 pages)
Modern backup systems exploit data deduplication technology to save storage space but suffer from the fragmentation problem caused by deduplication. Fragmentation degrades restore performance because restoring must read chunks that are scattered over many different containers. To improve restore performance, the state-of-the-art History-Aware Rewriting algorithm (HAR) collects fragmented chunks in the last backup and rewrites them in the next backup. However, because it only rewrites fragmented chunks in the next backup, HAR fails to eliminate the internal fragmentation caused by self-referenced chunks (chunks that appear more than twice in a backup) in the current backup, thus degrading restore performance. In this paper, we propose Selectively Rewriting Self-Referenced Chunks (SRSC), a scheme that uses a buffer to simulate a restore cache, identifies internal fragmentation in the cache, and selectively rewrites the affected chunks. Our experimental results on two real-world datasets show that SRSC improves restore performance by 45% with an acceptable sacrifice of the deduplication ratio.
Keywords: data deduplication; fragmentation; restore performance
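A minimal sketch of simulating a restore cache to spot self-referenced chunks that would still miss, which is the core idea behind SRSC in simplified form: replay the backup's chunk sequence through a small LRU buffer; a repeated chunk that is no longer cached when re-referenced is a rewrite candidate. Cache size and the sequence are illustrative.

```python
from collections import OrderedDict

def rewrite_candidates(chunk_sequence, cache_slots=3):
    cache = OrderedDict()          # chunk id -> None, kept in LRU order
    seen = set()
    candidates = set()
    for c in chunk_sequence:
        if c in cache:
            cache.move_to_end(c)   # cache hit: refresh LRU position
        else:
            if c in seen:
                candidates.add(c)  # self-referenced chunk evicted before reuse
            cache[c] = None
            if len(cache) > cache_slots:
                cache.popitem(last=False)
        seen.add(c)
    return candidates

# Chunk "A" repeats only after many distinct chunks, so a small restore cache
# would have evicted it; SRSC would selectively rewrite it near its reuse point.
sequence = ["A", "B", "C", "D", "E", "A"]
print(rewrite_candidates(sequence))   # {'A'}
```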
8. dCACH: Content Aware Clustered and Hierarchical Distributed Deduplication
Authors: Girum Dagnaw, Ke Zhou, Hua Wang. Journal of Software Engineering and Applications, 2019, No. 11, pp. 460-490 (31 pages)
In deduplication, the index-lookup disk bottleneck is a major obstacle that limits the throughput of backup processes. One way to minimize this issue and boost speed is to deduplicate with very coarse-grained chunks, at the cost of low storage savings and limited scalability. Another way is to distribute the deduplication process among multiple nodes, but this approach introduces the storage-node island effect and also incurs high communication cost. In this paper, we explore dCACH, a content-aware clustered and hierarchical deduplication system that implements a hybrid of inline coarse-grained and offline fine-grained distributed deduplication, where routing decisions are made for a set of files instead of single files. It utilizes Bloom filters to detect similarity between a data stream and previous data streams and performs stateful routing, which solves the storage-node island problem. Moreover, it exploits the negligibly small amount of content shared among chunks from different file types to create groups of files and deduplicate each group in its own fingerprint index space. It implements hierarchical deduplication to reduce the size of fingerprint indexes at the global level, where only files and large segments are deduplicated. Locality is created and exploited first through the large segments deduplicated at the global level and second by routing a set of consecutive files together to one storage node. Furthermore, using a Bloom filter for similarity detection between streams has low communication and computation cost while achieving duplicate-elimination performance comparable to single-node deduplication. dCACH is evaluated using a prototype deployed on a server environment distributed over four separate machines. It is shown to have 10x the speed of Extreme_Binn with minimal communication overhead, while its duplicate-elimination effectiveness is on a par with a single-node deduplication system.
Keywords: clustered deduplication; content aware; grouping; hierarchical deduplication; stateful routing; similarity; Bloom filters
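A minimal sketch of Bloom-filter-based similarity detection between data streams, the primitive dCACH's routing builds on (the routing policy itself is omitted): each stream is summarized as a Bloom filter over its chunk fingerprints, and the overlap of set bits gives a cheap similarity estimate. Filter size and hash count are assumptions.

```python
import hashlib

M, K = 4096, 3   # filter size in bits and hash count (illustrative)

def bloom(chunks):
    bits = 0
    for c in chunks:
        for i in range(K):
            h = hashlib.sha256(i.to_bytes(1, "big") + c).digest()
            bits |= 1 << (int.from_bytes(h[:4], "big") % M)
    return bits

def similarity(bf_a, bf_b):
    inter = bin(bf_a & bf_b).count("1")
    union = bin(bf_a | bf_b).count("1")
    return inter / union if union else 0.0     # Jaccard-style estimate on set bits

stream1 = [b"chunk-%d" % i for i in range(100)]
stream2 = [b"chunk-%d" % i for i in range(50, 150)]   # ~50% shared chunks
stream3 = [b"other-%d" % i for i in range(100)]
print(round(similarity(bloom(stream1), bloom(stream2)), 2))  # higher: streams share content
print(round(similarity(bloom(stream1), bloom(stream3)), 2))  # lower: little shared content
```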
9. Privacy-Enhanced Data Deduplication Computational Intelligence Technique for Secure Healthcare Applications
Authors: Jinsu Kim, Sungwook Ryu, Namje Park. Computers, Materials & Continua (SCIE, EI), 2022, No. 2, pp. 4169-4184 (16 pages)
A significant number of cloud storage environments already implement deduplication technology. Due to the nature of the cloud environment, a storage server capable of accommodating large-capacity storage is required, and as storage capacity increases, additional storage solutions are needed. By leveraging deduplication, the cost problem can be fundamentally addressed. However, deduplication poses privacy concerns because of its very structure. In this paper, we point out the privacy infringement problem and propose a new deduplication technique to solve it. In the proposed technique, since the user's map structure and files are not stored on the server, the file uploader list cannot be obtained through analysis of the server's meta-information, so the user's privacy is maintained. In addition, a personal identification number (PIN) can be used to solve the file ownership problem, which provides advantages such as safety against insider breaches and sniffing attacks. The proposed mechanism requires approximately 100 ms of additional time to add an IDRef that distinguishes user-file relations during typical deduplication; for smaller files the additional operations take time comparable to the base operation, but relatively less time as file size grows.
Keywords: computational intelligence; cloud; multimedia; data deduplication
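A minimal, assumption-laden sketch of deduplication without a server-side uploader map, in the spirit of the scheme above: the server keeps only content-addressed blobs, while each user keeps a local IDRef (here derived from a PIN and the fingerprint) to prove ownership later. The derivations differ from the paper and only illustrate the shape of the idea.

```python
import hashlib

server_blobs = {}                      # fingerprint -> stored data (no user list on the server)

def upload(data: bytes, pin: str):
    fp = hashlib.sha256(data).hexdigest()
    if fp not in server_blobs:
        server_blobs[fp] = data        # stored once, regardless of how many users upload it
    id_ref = hashlib.sha256((pin + fp).encode()).hexdigest()
    return fp, id_ref                  # the user stores (fp, id_ref) locally

def download(fp: str, pin: str, id_ref: str):
    # Ownership is checked with the user-held IDRef, not a server-side map.
    if hashlib.sha256((pin + fp).encode()).hexdigest() != id_ref:
        raise PermissionError("ownership proof failed")
    return server_blobs[fp]

fp1, ref1 = upload(b"MRI-scan-bytes", pin="1234")
fp2, ref2 = upload(b"MRI-scan-bytes", pin="9876")   # deduplicated; different IDRef per user
assert fp1 == fp2 and len(server_blobs) == 1
print(download(fp1, "1234", ref1) == b"MRI-scan-bytes")   # True
```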
10. Differentially Authorized Deduplication System Based on Blockchain
Authors: ZHAO Tian, LI Hui, YANG Xin, WANG Han, ZENG Ming, GUO Haisheng, WANG Dezheng. ZTE Communications, 2021, No. 2, pp. 67-76 (10 pages)
In cloud storage architectures, deduplication combined with convergent-key encryption is one of the important data compression technologies and effectively improves the utilization of space and bandwidth. To further refine the usage scenarios for various user permissions and enhance users' data security, we propose a blockchain-based differentially authorized deduplication system. The proposed system optimizes the traditional Proof of Vote (PoV) consensus algorithm and simplifies the existing differential authorization process to realize credible management and dynamic update of authority. Based on the decentralized property of blockchain, we overcome the centralized single-point-of-failure problem of traditional differentially authorized deduplication systems. Besides, the operations of legitimate users are recorded in blocks to ensure the traceability of their behavior.
Keywords: convergent key; deduplication; blockchain; differential authorization
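A minimal sketch of the traceability aspect described above: deduplication and authorization operations appended to a hash-chained ledger so that tampering is detectable. This is a toy stand-in for the paper's PoV blockchain, not an implementation of it, and all record fields are assumptions.

```python
import hashlib, json, time

class Ledger:
    def __init__(self):
        self.blocks = [{"prev": "0" * 64, "ops": [], "ts": 0}]

    def append(self, ops):
        prev_hash = hashlib.sha256(
            json.dumps(self.blocks[-1], sort_keys=True).encode()).hexdigest()
        self.blocks.append({"prev": prev_hash, "ops": ops, "ts": time.time()})

    def verify(self):
        for i in range(1, len(self.blocks)):
            expect = hashlib.sha256(
                json.dumps(self.blocks[i - 1], sort_keys=True).encode()).hexdigest()
            if self.blocks[i]["prev"] != expect:
                return False
        return True

ledger = Ledger()
ledger.append([{"user": "alice", "action": "upload", "tag": "f3ab", "priv": "read-write"}])
ledger.append([{"user": "bob", "action": "dedup-claim", "tag": "f3ab", "priv": "read-only"}])
print(ledger.verify())                      # True: chain intact
ledger.blocks[1]["ops"][0]["user"] = "eve"  # tampering breaks the hash chain
print(ledger.verify())                      # False
```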
11. Homogeneous Batch Memory Deduplication Using Clustering of Virtual Machines
Authors: N. Jagadeeswari, V. Mohan Raj. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 1, pp. 929-943 (15 pages)
Virtualization is the backbone of cloud computing, a developing and widely used paradigm. By finding and merging identical memory pages, memory deduplication improves memory efficiency in virtualized systems. Kernel Same-page Merging (KSM) is a Linux service for sharing memory pages in virtualized environments. Memory deduplication is vulnerable to memory disclosure attacks, which establish covert channels to reveal the contents of other co-located virtual machines. To avoid such attacks, sharing of identical pages within a single user's virtual machines is permitted, but sharing of contents between different users is forbidden. In our proposed approach, virtual machines of active domains on a node that run similar operating systems are recognised and organised into a homogeneous batch, and memory deduplication is performed inside that batch to improve page-sharing efficiency. Compared to memory deduplication applied to the entire host, the implementation shows a significant increase in the number of pages shared when deduplication is applied batch-wise, along with an increase in CPU (central processing unit) consumption.
Keywords: kernel same-page merging; memory deduplication; virtual machine sharing; content-based sharing
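A minimal sketch of the batch-wise merging policy described above: virtual machines are first grouped by operating system, and identical pages are merged only within each homogeneous batch. This is a simplified model of the KSM-style policy, with page contents and grouping keys as assumptions.

```python
import hashlib
from collections import defaultdict

vms = {
    "vm1": {"os": "linux",   "pages": [b"zero" * 1024, b"libc-page", b"app-A"]},
    "vm2": {"os": "linux",   "pages": [b"zero" * 1024, b"libc-page", b"app-B"]},
    "vm3": {"os": "windows", "pages": [b"zero" * 1024, b"dll-page"]},
}

def merge_batchwise(vms):
    batches = defaultdict(list)
    for name, vm in vms.items():
        batches[vm["os"]].append(name)           # one homogeneous batch per OS
    shared = 0
    for os_name, members in batches.items():
        seen = {}                                # page hash -> first owner, per batch
        for m in members:
            for page in vms[m]["pages"]:
                h = hashlib.sha256(page).hexdigest()
                if h in seen:
                    shared += 1                  # page would be merged (copy-on-write)
                else:
                    seen[h] = m
    return shared

# 2: the zero page and the libc page are merged inside the linux batch only;
# the windows VM's zero page is not shared across batches.
print("pages merged batch-wise:", merge_batchwise(vms))
```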
12. Health Data Deduplication Using Window Chunking-Signature Encryption in Cloud
Authors: G. Neelamegam, P. Marikkannu. Intelligent Automation & Soft Computing (SCIE), 2023, No. 4, pp. 1079-1093 (15 pages)
Due to the development of technology in medicine, millions of health-related data items, such as scanned images, are generated, and storing and handling such a massive volume of data is a great challenge. Healthcare data are kept in cloud-fog storage environments. This cloud-fog based health model allows users to obtain health-related data from different sources, but duplicated information is also present in the background, which requires additional storage space, increases data acquisition time, and leads to insecure data replication in the environment. This paper proposes to eliminate duplicate data using a window-size chunking algorithm with a biased-sampling-based Bloom filter and to secure the health data using the Advanced Signature-Based Encryption (ASE) algorithm in the fog-cloud environment (WCA-BF + ASE). WCA-BF + ASE eliminates duplicate copies of the data and minimizes storage space and maintenance cost, while the data are stored efficiently and in a highly secured manner. In terms of security level in the cloud storage environment, the Window Size Chunking Algorithm (WSCA) achieves 86.5%, Two Thresholds Two Divisors (TTTD) 80%, Ordinal in Python (ORD) 84.4%, and Bloom Filter (BF) 82%, whereas the proposed work achieves better storage security of 97%. Moreover, after the deduplication process, the proposed WCA-BF + ASE method requires far less storage space for various file sizes: 10 KB for 200 MB, 22 KB for 400 MB, 35 KB for 600 MB, 38 KB for 800 MB, and 40 KB for 1000 MB.
Keywords: health data; encryption; chunks; cloud; fog; deduplication; Bloom filter
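A minimal sketch of window-based (content-defined) chunking followed by a fingerprint dedup check, the pipeline stage the chunking algorithm above targets. The window hash, divisor, and chunk bounds are illustrative choices, not the parameters of WCA-BF + ASE, and the Bloom filter and ASE encryption steps are omitted.

```python
import hashlib

WINDOW, DIVISOR, MIN_CHUNK, MAX_CHUNK = 16, 64, 128, 1024

def chunk(data: bytes):
    chunks, start = [], 0
    for i in range(len(data)):
        if i - start < MIN_CHUNK:
            continue
        window = data[max(i - WINDOW, start):i]
        boundary = int.from_bytes(hashlib.sha1(window).digest()[:4], "big") % DIVISOR == 0
        if boundary or i - start >= MAX_CHUNK:
            chunks.append(data[start:i])        # cut at a content-defined boundary
            start = i
    chunks.append(data[start:])
    return chunks

index = set()                                   # fingerprints of chunks already stored

def dedup(chunks):
    stored = []
    for c in chunks:
        fp = hashlib.sha256(c).hexdigest()
        if fp not in index:                     # unseen chunk: keep it and record its fingerprint
            index.add(fp)
            stored.append(c)
    return stored

record = b"patient-record " * 200
unique = dedup(chunk(record))
unique_again = dedup(chunk(record))             # second upload of the same record
print(len(unique), len(unique_again))           # some chunks stored the first time, zero the second
```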
13. Hash-Indexing Block-Based Deduplication Algorithm for Reducing Storage in the Cloud
Authors: D. Viji, S. Revathy. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 7, pp. 27-42 (16 pages)
Cloud storage is essential for managing user data stored in and retrieved from distributed data centres, and the storage service is billed on a pay-per-use basis according to the amount of data collected. Because the massive amount of data stored in a data centre contains similar information and file structures kept in multiple copies, duplication increases the required storage space. Existing deduplication systems do not reduce data efficiently because of inaccuracy in finding similar data, which makes it hard to keep storage consumption within cost limits. To resolve this problem, this paper proposes an efficient storage-reduction scheme called Hash-Indexing Block-based Deduplication (HIBD) based on Segmented Bind Linkage (SBL) methods for reducing storage in a cloud environment. Initially, preprocessing is done using a sparse augmentation technique. The preprocessed files are then segmented into blocks to build a hash index. The block contents are compared with other files through Semantic Content Source Deduplication (SCSD), which identifies similar content between files. Based on the content-presence count, Distance Vector Weightage Correlation (DVWC) estimates the document similarity weight, and related files are grouped into a cluster. Finally, segmented bind linkage compares the documents to find duplicate content in the cluster using the similarity weight based on coefficient matching. This implementation identifies data redundancy efficiently and reduces the service cost in distributed cloud storage.
Keywords: cloud computing; deduplication; hash indexing; relational content analysis; document clustering; cloud storage; record linkage
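A minimal sketch of hash-indexed block comparison with a similarity weight used to group documents, a simplified stand-in for the HIBD/SCSD/DVWC pipeline above whose exact weighting is not reproduced; block size and threshold are assumptions.

```python
import hashlib

def block_hashes(text: str, block_size: int = 32):
    data = text.encode()
    return {hashlib.md5(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)}

def similarity_weight(doc_a: str, doc_b: str) -> float:
    a, b = block_hashes(doc_a), block_hashes(doc_b)
    return len(a & b) / len(a | b)               # shared blocks over all blocks

docs = {
    "report_v1": "quarterly results " * 20 + "summary for Q1",
    "report_v2": "quarterly results " * 20 + "summary for Q2",
    "notes":     "unrelated meeting notes " * 10,
}

THRESHOLD = 0.5                                  # grouping threshold (assumption)
base = "report_v1"
cluster = [d for d in docs if similarity_weight(docs[base], docs[d]) >= THRESHOLD]
print(cluster)                                   # near-duplicate reports land in one cluster
```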
14. Implementation and Validation of the Optimized Deduplication Strategy in Federated Cloud Environment
Authors: Nipun Chhabra, Manju Bala, Vrajesh Sharma. Computers, Materials & Continua (SCIE, EI), 2022, No. 4, pp. 2019-2035 (17 pages)
Cloud computing technology is the culmination of technical advancements in computer networks and in hardware and software capabilities that collectively gave rise to computing as a utility. It offers a plethora of utilities to its clients worldwide in a very cost-effective way, which is enticing users and companies to migrate their infrastructure to the cloud platform. Swayed by its gigantic capacity and easy access, clients upload replicated data to the cloud, resulting in an unnecessary crunch of storage in data centers. Many data compression techniques came to the rescue, but none could serve a capacity as large as a cloud's; hence, research has turned to deduplicating the data to reclaim the storage space wasted by duplicate copies. To provide better cloud services through scalable provisioning of resources, interoperability has brought many cloud service providers (CSPs) under one umbrella, termed a cloud federation. Many policies have been devised for private and public cloud deployment models to search for and eradicate replicated copies using hashing techniques, whereas the search for duplicate copies is not restricted to any one type of CSP but spans the set of public and private CSPs contributing to the federation. It was found that, even with advanced deduplication techniques for federated clouds, the differing nature of CSPs can leave a single file stored in both the private and public groups of the same cloud federation, an issue that an optimized deduplication strategy can address. Therefore, this study aims to further optimize a deduplication strategy for the federated cloud environment and suggests a central management agent for the federation. As no directly relevant work was found to exist, this paper implements the concept of a federation agent and uses file-level deduplication to accomplish this approach.
Keywords: federation agent; deduplication in federated cloud; central management agent for cloud federation; interoperability in cloud computing; Bloom filters; cloud computing; cloud data storage
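A minimal sketch of a central federation agent for file-level deduplication across cloud service providers, matching the role described above in outline only; the class name, the single-dict index, and the object-id scheme are illustrative assumptions, not the paper's implementation.

```python
import hashlib

class FederationAgent:
    """Keeps one global file-hash index spanning every CSP in the federation."""
    def __init__(self):
        self.global_index = {}            # file hash -> (csp, object id)

    def check_and_register(self, csp: str, data: bytes):
        h = hashlib.sha256(data).hexdigest()
        if h in self.global_index:
            # Already held somewhere in the federation: report the existing location.
            return ("duplicate", self.global_index[h])
        self.global_index[h] = (csp, f"{csp}/obj-{len(self.global_index)}")
        return ("store", self.global_index[h])

agent = FederationAgent()
print(agent.check_and_register("private-csp-A", b"invoice.pdf bytes"))
# A public CSP in the same federation receives the same file: the agent
# flags it as a duplicate instead of letting it be stored twice.
print(agent.check_and_register("public-csp-B", b"invoice.pdf bytes"))
```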
15. An Intelligent Recognition Method for Crop Biological Density Based on Faster R-CNN (cited: 1)
Authors: 李修华, 李倩, 张瀚文, 丁璐, 王泽平. 《生物工程学报》 (PKU Core), 2025, No. 10, pp. 3828-3839 (12 pages)
Accurately obtaining the number and density of field crops is key not only to on-demand water and fertilizer management but also to guaranteeing crop yield and quality. Unmanned aerial vehicle (UAV) photography can quickly capture distribution images of field crops over large areas, but accurately recognising dense targets of a single type is a major challenge for most recognition algorithms. Taking banana seedlings as an example, this study uses high-altitude UAV images of a banana plantation to investigate an efficient recognition method for dense targets. We propose a "crop-recognise-stitch" strategy and build a counting method based on an improved Faster R-CNN algorithm. The method first crops images containing dense targets into a large number of image tiles at different sizes (simulating different flight heights) and applies contrast limited adaptive histogram equalization (CLAHE) to improve image quality, producing a banana-seedling dataset of 36,000 image tiles. A parameter-optimised Faster R-CNN network is then trained as the seedling recognition model. Finally, the recognition results are stitched back together, and a boundary de-duplication algorithm is designed to correct the final count and reduce the repeated recognition of banana seedlings caused by image cropping. The results show that the parameter-optimised Faster R-CNN reaches a recognition precision of up to 0.99 on banana image datasets of different sizes; the de-duplication algorithm reduces the average counting error on the original aerial images from 1.60% to 0.60%, and the average counting accuracy for banana seedlings reaches 99.4%. The proposed method effectively solves the problem of recognising dense small targets in high-resolution aerial images and provides efficient and reliable technical support for intelligent crop-density monitoring in precision agriculture.
Keywords: orchard counting; banana; Faster R-CNN; deep learning; de-duplication
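A minimal sketch of boundary de-duplication after tile-wise detection, the counting-correction step described above: boxes from each tile are mapped to global image coordinates, and near-identical boxes produced by overlapping tiles are suppressed with an IoU threshold. The threshold and the (x1, y1, x2, y2) box format are illustrative assumptions, not the paper's exact algorithm.

```python
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def deduplicate(tile_detections, tile_offsets, iou_thresh=0.5):
    kept = []
    for boxes, (ox, oy) in zip(tile_detections, tile_offsets):
        for (x1, y1, x2, y2) in boxes:
            g = (x1 + ox, y1 + oy, x2 + ox, y2 + oy)     # tile -> global coordinates
            if all(iou(g, k) < iou_thresh for k in kept):
                kept.append(g)                           # keep only one box per seedling
    return kept

# Two overlapping tiles both detect the same seedling near their shared border.
tiles = [[(90, 40, 110, 60)], [(-10, 40, 10, 60)]]
offsets = [(0, 0), (100, 0)]
print(len(deduplicate(tiles, offsets)))   # 1 seedling counted, not 2
```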
16. Research Progress on Attack and Defense Techniques for Encrypted Data Deduplication in Cloud Storage
Authors: 吴健, 付印金, 方艳梅, 刘垚, 付伟, 操晓春, 肖侬. 《计算机研究与发展》 (PKU Core), 2025, No. 9, pp. 2283-2297 (15 pages)
As an efficient data reduction technique for big data, deduplication has been widely applied in various cloud storage systems and services; to make deduplication compatible with encryption, convergent encryption is usually adopted. However, outsourcing storage to cloud providers in this way, combined with deterministic encryption, leads to a series of data security problems, and encrypted deduplication has become a research hotspot in the cloud storage field. This paper first introduces the concept of deduplication, the basic encrypted-deduplication algorithms, and the security challenges of encrypted deduplication in cloud storage. It then surveys the current research on the security of encrypted deduplication in cloud storage from the perspectives of attack and defense, with attacks falling into three categories: brute-force attacks, frequency attacks, and side-channel attacks. For each attack type, the representative defense schemes are reviewed and their advantages and drawbacks are summarised. Finally, the open problems of current defense schemes are summarised and future research directions are discussed.
Keywords: cloud storage; data deduplication; data encryption; attack methods; defense measures
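A minimal sketch of why deterministic convergent encryption invites the brute-force and frequency attacks surveyed above: anyone can re-derive the content key for a guessed plaintext and compare fingerprints. A toy XOR keystream stands in for a real cipher; this only illustrates the determinism, not any specific scheme or attack from the survey.

```python
import hashlib

def convergent_encrypt(msg: bytes) -> bytes:
    key = hashlib.sha256(msg).digest()                 # key derived from the content itself
    stream = hashlib.sha256(b"stream|" + key).digest() * (len(msg) // 32 + 1)
    return bytes(a ^ b for a, b in zip(msg, stream))

# The cloud holds only a ciphertext fingerprint, yet it is fully determined by the plaintext.
stored_fingerprint = hashlib.sha256(convergent_encrypt(b"salary: 3000")).hexdigest()

# Offline guessing over a small (low-entropy) message space recovers the plaintext.
for guess in (b"salary: %d" % s for s in range(1000, 10000, 500)):
    if hashlib.sha256(convergent_encrypt(guess)).hexdigest() == stored_fingerprint:
        print("guessed plaintext:", guess.decode())    # salary: 3000
        break
```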
17. A Ciphertext Data Deduplication Method for Defense-in-Depth in Wireless Sensor Networks
Authors: 高俊杰, 杨帆. 《传感技术学报》 (PKU Core), 2025, No. 10, pp. 1886-1891 (6 pages)
In wireless sensor networks, duplicate ciphertext data increase the network load and resource consumption during transmission and storage, degrading network performance. This paper therefore proposes a ciphertext data deduplication method for defense-in-depth in wireless sensor networks. First, compressed sensing is used to collect ciphertext data in the wireless sensor network. The ciphertext data are then mapped into a feature space to obtain their features, the similarity between ciphertext data features is computed with dynamic weights over the feature attributes, and duplicate data are removed based on this similarity computation. Simulation results show that as deduplication time grows, the proposed method keeps the residual network energy above 85%, the network throughput above 3.5 Mb/s, and the network space compression ratio above 60%, indicating that the method deduplicates ciphertext data efficiently and enables defense-in-depth for wireless sensor networks.
Keywords: wireless sensor networks; ciphertext data deduplication; network defense; compressed sensing; similarity computation
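A minimal sketch of the similarity-based removal step described above: each ciphertext is mapped to a small feature vector, similarity is a weighted cosine score, and items above a threshold are dropped as duplicates. The feature map, weights, and threshold are illustrative assumptions rather than the paper's method.

```python
import hashlib, math

def features(ciphertext: bytes, dims: int = 8):
    # Toy feature map: bucket the ciphertext's byte histogram into `dims` bins.
    vec = [0.0] * dims
    for b in ciphertext:
        vec[b % dims] += 1.0
    return vec

def weighted_cosine(u, v, weights):
    dot = sum(w * a * b for w, a, b in zip(weights, u, v))
    nu = math.sqrt(sum(w * a * a for w, a in zip(weights, u)))
    nv = math.sqrt(sum(w * b * b for w, b in zip(weights, v)))
    return dot / (nu * nv) if nu and nv else 0.0

def deduplicate(ciphertexts, threshold=0.999):
    weights = [1.0] * 8                          # per-attribute weights (held fixed here)
    kept, kept_features = [], []
    for ct in ciphertexts:
        f = features(ct)
        if all(weighted_cosine(f, g, weights) < threshold for g in kept_features):
            kept.append(ct)
            kept_features.append(f)
    return kept

readings = [b"enc-reading-001", b"enc-reading-001", b"enc-reading-XYZ"]
print(len(deduplicate(readings)))   # 2: the exact repeat is removed
```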
18. Research Progress on Deduplication-Based Tiered Storage Optimization Techniques (cited: 2)
Authors: 姚子路, 付印金, 肖侬. 《计算机科学》 (PKU Core), 2025, No. 1, pp. 120-130 (11 pages)
With the explosive growth of global data volume and the increasing diversity of data, storage systems built on a single storage medium can no longer satisfy users' diverse application requirements. Tiered storage classifies data by importance, access frequency, security requirements, and other characteristics, and places them in storage tiers with different access latencies, capacities, and fault-tolerance capabilities; it has been widely applied in many fields. Deduplication is a data reduction technique for big data that efficiently removes duplicate data from storage systems and maximises storage space utilisation. Unlike the single-tier scenario, applying deduplication to tiered storage not only reduces cross-tier redundancy, further saving storage space and cost, but also improves data I/O performance and the endurance of storage devices. After briefly analysing the principle, workflow, and classification of deduplication-based tiered storage, this paper surveys the research progress of optimization methods around three key steps: storage tier selection, duplicate content identification, and data migration. It then discusses the potential technical challenges of deduplication-based tiered storage and finally looks ahead to its future development trends.
Keywords: deduplication; tiered storage; storage tier selection; duplicate content identification; data migration
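A minimal sketch of the three steps named above (storage tier selection, duplicate content identification, data migration), reduced to an access-frequency policy with a per-tier fingerprint index. The thresholds, tier names, and single-node dictionaries are illustrative assumptions.

```python
import hashlib

tiers = {"ssd": {}, "hdd": {}}          # tier name -> {fingerprint: data}

def select_tier(access_count: int) -> str:
    return "ssd" if access_count >= 10 else "hdd"   # step 1: storage tier selection

def migrate(data: bytes, access_count: int) -> str:
    tier = select_tier(access_count)
    fp = hashlib.sha256(data).hexdigest()
    if fp in tiers[tier]:                            # step 2: duplicate content identification
        return f"{tier}: duplicate, no migration needed"
    tiers[tier][fp] = data                           # step 3: data migration (write once)
    return f"{tier}: migrated"

print(migrate(b"hot log segment", access_count=25))   # ssd: migrated
print(migrate(b"hot log segment", access_count=30))   # ssd: duplicate, no migration needed
print(migrate(b"cold archive blob", access_count=1))  # hdd: migrated
```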
19. Research on a Visual Dynamic Sorting Method for Industrial Robots Based on an Improved Genetic Algorithm (cited: 1)
Authors: 梁存仙, 焦建静, 赵志鹏, 孙爱芹, 王吉岱. 《机械与电子》, 2025, No. 3, pp. 60-65, 73 (7 pages)
To address missed grasps, false grasps, and low precision in the visual dynamic sorting process of industrial robots, a multi-workpiece dynamic sorting method based on an improved genetic algorithm is proposed. Building on an analysis of the workpiece sorting process, the transformation relations among the camera, user, and tool coordinate systems are established. An image de-duplication algorithm is designed that uses time and workpiece position as its criteria and removes duplicate image information by comparison. To improve sorting efficiency, a workpiece dynamic sorting sequence optimization method based on the improved genetic algorithm is proposed, in which a non-uniform mutation operator performs neighborhood mutation to improve the algorithm's convergence and search capability and shorten sorting time. Simulation and experimental results show that the improved genetic algorithm converges quickly, and the proposed visual dynamic sorting method for industrial robots is efficient, stable, and accurate.
Keywords: industrial robot; visual sorting; image de-duplication; genetic algorithm
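A minimal sketch of de-duplicating workpiece detections by time and position, the criterion named above: two detections are treated as the same workpiece if they arrive within a short time window and their belt-compensated positions nearly coincide. The thresholds, belt speed, and conveyor direction are illustrative assumptions.

```python
import math

TIME_WINDOW = 0.5      # seconds
POS_TOLERANCE = 5.0    # millimetres
BELT_SPEED = 120.0     # mm/s along x (assumed conveyor direction)

def is_duplicate(det_a, det_b):
    t_a, (xa, ya) = det_a
    t_b, (xb, yb) = det_b
    if abs(t_b - t_a) > TIME_WINDOW:
        return False
    xa_shifted = xa + BELT_SPEED * (t_b - t_a)       # where det_a should be at time t_b
    return math.hypot(xb - xa_shifted, yb - ya) <= POS_TOLERANCE

def deduplicate(detections):
    kept = []
    for det in detections:
        if not any(is_duplicate(k, det) for k in kept):
            kept.append(det)
    return kept

frames = [
    (0.00, (100.0, 50.0)),   # workpiece seen in frame 1
    (0.10, (112.0, 50.0)),   # same workpiece, carried ~12 mm by the belt
    (0.10, (300.0, 80.0)),   # a different workpiece
]
print(len(deduplicate(frames)))   # 2 distinct workpieces
```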
20. Virtual Reproduction and Invisible Dissemination Behind Cloud Storage (cited: 2)
Author: 崔国斌. 《中国应用法学》, 2025, No. 2, pp. 93-109 (17 pages)
Cloud storage providers now commonly adopt merged (deduplicated) storage: when multiple users attempt to store the same file, the provider does not keep a separate copy for each user but lets multiple users share a single master copy, saving storage space, improving transmission efficiency, and enhancing the user experience. Merged storage poses unprecedented challenges for characterising the conduct of users and providers. Copyright law should presume that a user's "instant save" (virtual reproduction) constitutes "reproduction" within the meaning of copyright law, while allowing interested parties to rebut this conclusion with evidence and restore the true nature of virtual reproduction, which resembles a network sharing link. A user's public sharing based on virtual reproduction should likewise be presumed to be communication over information networks; once the presumption of "reproduction" is rebutted, the user's public dissemination should be treated like framed or aggregated linking. Under merged storage, the provider's act of allowing many users to view the same master copy is no longer private transmission but public dissemination, i.e., communication over information networks. Before learning of a user's specific infringement, the provider does not bear liability merely because of merged storage; after learning of it, the provider must immediately stop the dissemination within the scope of its knowledge or bear direct liability for infringement.
Keywords: cloud storage; merged storage; virtual reproduction; communication over information networks