Optical data storage (ODS) is a low-cost and high-durability counterpart of traditional electronic or magnetic storage. As a means of enhancing ODS capacity, the multiple recording layer (MRL) method is more promising than other approaches such as reducing the recording volume and multiplexing technology. However, current MRL architectures all record data into physical layers with rigid spacing, which leads either to severe interlayer crosstalk or to a finite number of recording layers constrained by the short working distances of the objectives. Here, we propose the concept of hybrid-layer ODS, which records optical information into a physical layer and multiple virtual layers by using high-orthogonality random meta-channels. In the virtual layers, 32 images are experimentally reconstructed through holography, where their holographic phases are encoded into 16 printed images and complementary images in the physical layer, yielding a capacity of 2.5 Tbit cm^(-3). A higher capacity is achievable with more virtual layers, suggesting hybrid-layer ODS as a possible candidate for next-generation ODS.
To accomplish on-site separation, preconcentration and cold storage of highly volatile organic compounds (VOCs) from water samples, as well as their rapid transportation to the laboratory, a high-throughput miniaturized purge-and-trap (μP&T) device integrating semiconductor refrigeration storage was developed in this work. Water samples were poured into the purge vessels and purged with purified air generated by an air pump. The VOCs in the water samples were then separated and preconcentrated with sorbent tubes. After complete separation and preconcentration, the tubes were preserved in the semiconductor refrigeration unit of the μP&T device. Notably, the high integration, small size, light weight, and low power consumption of the device make it easy to hand-carry to the field and to transport by drone from remote locations, significantly enhancing the flexibility of field sampling. The performance of the device was evaluated by comparing analytical figures of merit for the detection of four cyclic volatile methylsiloxanes (cVMSs) in water. Compared with conventional collection and preservation methods, our proposed device preserved the VOCs more consistently in the sorbent tubes, with less than 5% loss of all analytes, and maintained stability for at least 20 days at 4℃. As a proof of concept, 10 municipal wastewater samples were pretreated using this device, with recoveries ranging from 82.5% to 99.9% for the target VOCs.
This paper was motivated by the existing problems of cloud data storage at Imo State University, Nigeria, where outsourced data has led to data loss and to the misuse of customer information by unauthorized users or hackers, leaving customer/client data visible and unprotected. This also exposed clients/customers to enormous risk from defective equipment, bugs, faulty servers, and malicious actions. The aim of this paper, therefore, is to analyze a secure model that uses Unicode Transformation Format (UTF) Base64 algorithms to store data in the cloud securely. The Object-Oriented Hypermedia Analysis and Design Methodology (OOHADM) was adopted. Python was used to develop the security model; role-based access control (RBAC) and multi-factor authentication (MFA) algorithms were integrated to enhance security in the information system, which was developed with HTML5, JavaScript, Cascading Style Sheets (CSS) version 3 and PHP 7. This paper also discusses related concepts such as the development of cloud computing, characteristics of cloud computing, cloud deployment models and cloud service models. The results showed that the proposed enhanced security model for corporate-platform information systems handles multiple authorization and authentication threats: a single login page directs all login requests from the different modules to one Single Sign-On Server (SSOS), which then redirects authenticated users to their requested resources/modules, leveraging geo-location integration for physical-location validation. The newly developed system resolves the shortcomings of the existing systems and reduces the time and resources incurred while using them.
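As a rough sketch of the UTF/Base64 storage encoding described above (and only that step: Base64 is an encoding, not encryption, so in the proposed model it sits alongside the RBAC, MFA and SSO controls), the Python snippet below converts a Unicode record to UTF-8 bytes and then Base64 before upload, and reverses the transformation after retrieval. The function names and sample record are invented for illustration and are not taken from the paper.

```python
import base64

def encode_for_storage(plaintext: str) -> str:
    """Encode a Unicode string as UTF-8 bytes, then Base64, before upload to the cloud."""
    return base64.b64encode(plaintext.encode("utf-8")).decode("ascii")

def decode_from_storage(stored: str) -> str:
    """Reverse the Base64/UTF-8 transformation after retrieval."""
    return base64.b64decode(stored.encode("ascii")).decode("utf-8")

record = "client_id=0001; balance=4200.50"     # hypothetical client record
stored = encode_for_storage(record)
assert decode_from_storage(stored) == record
print(stored)
```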
China's marine data include marine hydrology, marine meteorology, marine biology, marine chemistry, marine substrate, marine geophysics, seabed topography and other categories; the total volume has reached the petabyte (PB) scale and is still increasing. The secure management of marine data storage is the basis for building a Smart Ocean. This paper discusses the current situation of security management of marine data storage in China, analyzes the problems in domestic marine data storage security management, and puts forward suggestions.
DNA molecules are green materials with great potential for high-density and long-term data storage. However, the current data-writing process of DNA data storage via DNA synthesis suffers from high costs and the production of hazardous by-products, limiting its practical applications. Here, we developed a DNA movable-type storage system that can use DNA fragments pre-produced by cell factories for data writing. In this system, these pre-generated DNA fragments, referred to herein as "DNA movable types," are used repetitively as basic writing units. Data writing is achieved by the rapid assembly of these DNA movable types, thereby avoiding the costly and environmentally hazardous process of de novo DNA synthesis. With this system, we successfully encoded 24 bytes of digital information in DNA and read it back accurately by means of high-throughput sequencing and decoding, demonstrating the feasibility of the approach. Through the repetitive usage and biological assembly of DNA movable-type fragments, this system exhibits excellent potential for writing cost reduction, opening up a novel route toward an economical and sustainable digital data-storage technology.
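The "movable type" idea above can be pictured with a toy sketch: a fixed library of pre-made fragments, one per byte value, is reused for every message, so writing becomes fragment selection and assembly rather than de novo synthesis. The 4-base codewords below are invented for this illustration; the paper's actual fragment sequences, lengths and assembly chemistry are not described in the abstract.

```python
# Build a reusable "type case": 256 pre-synthesized fragments, one per byte value.
BASES = "ACGT"
LIBRARY = {
    value: "".join(BASES[(value >> shift) & 0b11] for shift in (6, 4, 2, 0))
    for value in range(256)
}

def write_with_movable_types(message: bytes) -> list:
    """Writing is fragment selection and assembly, not new synthesis."""
    return [LIBRARY[b] for b in message]

print(write_with_movable_types(b"Hi"))   # ['CAGA', 'CGGC'] for bytes 0x48 and 0x69
```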
Long-term optical data storage (ODS) technology is essential to break the bottleneck of high energy consumption for information storage in the current era of big data. Here, ODS with an ultralong lifetime of 2×10^(7) years is attained through single ultrafast laser pulse induced reduction of Eu^(3+) ions and tailoring of optical properties inside Eu-doped aluminosilicate glasses. We demonstrate that the induced local modifications in the glass can withstand temperatures of up to 970 K and strong ultraviolet light irradiation with a power density of 100 kW/cm^(2). Furthermore, the active Eu^(2+) ions exhibit strong and broadband emission with a full width at half maximum reaching 190 nm, and the photoluminescence (PL) is flexibly tunable across the whole visible region by regulating the alkaline-earth metal ions in the glasses. The developed technology and materials will be of great significance in photonic applications such as long-term ODS.
Encoding information in light polarization is of great importance for optical data storage (ODS), both for information security and for escalating data storage capacity. However, despite recent advances in nanophotonic techniques that have vastly enhanced the feasibility of applying polarization channels, the data fidelity of reconstructed bits has been constrained by severe crosstalk between different polarization angles during data recording and reading, which has gravely hindered the practical use of this technique. In this paper, we demonstrate an ultra-low-crosstalk polarization-encoding multilayer ODS technique for high-fidelity data recording and retrieval by utilizing a nanofibre-based nanocomposite film containing highly aligned gold nanorods (GNRs). By parallelizing the gold nanorods in the recording medium, this information-carrier configuration minimizes miswriting and misreading possibilities for information input and output, respectively, compared with randomly self-assembled counterparts. The enhanced data accuracy significantly improves bit recall fidelity, quantified by a correlation coefficient higher than 0.99. It is anticipated that the demonstrated technique can facilitate the development of multiplexing ODS for a greener future.
To increase the storage capacity in holographic data storage (HDS), the information to be stored is encoded into a complex amplitude. Fast and accurate retrieval of amplitude and phase from the reconstructed beam is necessary during data readout in HDS. In this study, we proposed a complex amplitude demodulation method based on deep learning from a single-shot diffraction intensity image and verified it with a non-interferometric lensless experiment demodulating four-level amplitude and four-level phase. By analyzing the correlation between the diffraction intensity features and the amplitude- and phase-encoded data pages, the inverse problem was decomposed into two backward operators, represented by two convolutional neural networks (CNNs), to demodulate amplitude and phase respectively. The experimental system is simple, stable, and robust, and it needs only a single diffraction image to realize the direct demodulation of both amplitude and phase. To our knowledge, this is the first time in HDS that multilevel complex amplitude demodulation has been achieved experimentally from one diffraction intensity image without iterations.
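To make the two-backward-operator idea concrete, the sketch below defines one small convolutional network per operator: each takes the single-shot diffraction intensity image and outputs a per-pixel choice among four levels (amplitude for one network, phase for the other). The layer sizes, and the omitted training loop, are assumptions for illustration; the abstract does not specify the actual CNN architecture.

```python
import torch
import torch.nn as nn

class DemodCNN(nn.Module):
    """Toy backward operator: diffraction intensity image -> per-pixel 4-level code."""
    def __init__(self, levels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, levels, 1),        # per-pixel logits over the four levels
        )

    def forward(self, intensity):
        return self.net(intensity)

amp_net, phase_net = DemodCNN(), DemodCNN()        # one CNN per backward operator
intensity = torch.rand(1, 1, 64, 64)               # stand-in single-shot diffraction image
amplitude_page = amp_net(intensity).argmax(dim=1)  # predicted four-level amplitude page
phase_page = phase_net(intensity).argmax(dim=1)    # predicted four-level phase page
print(amplitude_page.shape, phase_page.shape)
```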
The yearly growing quantities of dataflow create a pressing demand for advanced data storage methods. Luminescent materials, which possess adjustable parameters such as intensity, emission center, lifetime, and polarization, can be used to enable multi-dimensional optical data storage (ODS) with higher capacity, longer lifetime, and lower energy consumption. Multiplexed storage based on luminescent materials can be easily manipulated by lasers and has been considered a feasible option for breaking through the limits of ODS density. Substantial progress in laser-modified luminescence based ODS has been made during the past decade. In this review, we recapitulate recent advances in laser-modified luminescence based ODS, focusing on defect-related regulation, nucleation, dissociation, photoreduction, ablation, etc. We conclude by discussing the current challenges in laser-modified luminescence based ODS and proposing perspectives for future development.
The exponential growth of data necessitates an effective data storage scheme, which helps to effectively manage the large quantity of data. To accomplish this, a Deoxyribonucleic Acid (DNA) digital data storage process can be employed, which encodes and decodes binary data to and from synthesized strands of DNA. Vector quantization (VQ) is a commonly employed scheme for image compression, and optimal codebook generation is an effective process to reach maximum compression efficiency. This article introduces a new DNA Computing with Water Strider Algorithm based Vector Quantization (DNAC-WSAVQ) technique for data storage systems. The proposed DNAC-WSAVQ technique enables encoding data using DNA computing and then compresses it for effective data storage. The DNAC-WSAVQ model initially performs DNA encoding on the input images to generate a binary encoded form. In addition, a Water Strider Algorithm with Linde-Buzo-Gray (WSA-LBG) model is applied for the compression process, whereby the storage area can be considerably minimized. In order to generate the optimal codebook for LBG, the WSA is applied to it. The performance validation of the DNAC-WSAVQ model is carried out and the results are inspected under several measures. The comparative study highlighted the improved outcomes of the DNAC-WSAVQ model over existing methods.
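The LBG step that the WSA is said to optimize can be sketched as follows: a codebook is grown by splitting and then refined with Lloyd iterations over the training vectors (flattened image blocks). The Water Strider optimization itself is not reproduced here, and the block size and codebook size are arbitrary choices for the example.

```python
import numpy as np

def lbg_codebook(vectors: np.ndarray, size: int, eps: float = 1e-3, iters: int = 20) -> np.ndarray:
    """Baseline Linde-Buzo-Gray codebook generation (splitting + Lloyd refinement)."""
    codebook = vectors.mean(axis=0, keepdims=True)
    while codebook.shape[0] < size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])  # split every codeword
        for _ in range(iters):                                              # Lloyd refinement
            dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
            nearest = dists.argmin(axis=1)
            for k in range(codebook.shape[0]):
                members = vectors[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

rng = np.random.default_rng(0)
training = rng.random((1024, 16))              # stand-in for 4x4 image blocks flattened to 16-D
print(lbg_codebook(training, size=16).shape)   # (16, 16): 16 codewords of dimension 16
```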
In this paper, we study mass structured data storage and sorting algorithms and methodology for SQL databases in the big data environment. As the data storage market develops, storage is shifting from a server-centric model to a data-centric model. Traditionally, storage simply keeps a series of data, and the management system and storage devices rarely consider the intrinsic value of the stored data. The prosperity of the Internet has changed the world of data storage and brought many new applications with it. Theoretically, the proposed algorithm has the ability to deal with massive data; numerically, it enhances processing accuracy and speed, which is meaningful.
Objectives: The aim of this study was to investigate and develop a data storage and exchange format for the process of automatic systematic reviews (ASR) of traditional Chinese medicine (TCM). Methods: A lightweight and commonly used data format, JavaScript Object Notation (JSON), was introduced in this study. We designed a fully described data structure to collect TCM clinical trial information based on the JSON syntax. Results: A smart and powerful data format, JSON-ASR, was developed. JSON-ASR uses a plain-text data format in the form of key/value pairs and consists of six sections and more than 80 preset pairs. JSON-ASR adopts extensible structured arrays to support multi-group and multi-outcome situations. Conclusion: JSON-ASR is lightweight, flexible, and highly scalable, which makes it suitable for the complex data of clinical evidence.
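Since the abstract describes JSON-ASR only at a high level, the record below is an invented illustration of the general shape it implies: plain-text key/value pairs grouped into sections, with extensible arrays so that multiple arms and multiple outcomes can be added without changing the schema. The section and field names are assumptions, not the format's actual preset pairs.

```python
import json

trial_record = {
    "metadata": {"title": "Example TCM trial", "year": 2020},
    "design": {"type": "randomized controlled trial", "blinding": "double"},
    "participants": {"sample_size": 120, "condition": "chronic insomnia"},
    "interventions": [                       # extensible array: one entry per group
        {"group": "treatment", "therapy": "herbal decoction", "n": 60},
        {"group": "control", "therapy": "placebo", "n": 60},
    ],
    "outcomes": [                            # extensible array: multi-outcome support
        {"name": "PSQI score", "timepoint": "8 weeks", "effect": -2.1},
    ],
    "provenance": {"extracted_by": "reviewer_1", "source": "journal article"},
}
print(json.dumps(trial_record, indent=2, ensure_ascii=False))
```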
In recent years, optically controlled phase-change memory has drawn intensive attention owing to advanced applications including integrated all-optical nonvolatile memory, in-memory computing, and neuromorphic computing. The light-induced phase transition is the key to this technology. The traditional understanding of the role of light is the heating effect, and the RESET operation of phase-change memory is generally believed to be a melt-quenching amorphization process. However, recent experimental and theoretical investigations have revealed that ultrafast lasers can manipulate the structures of phase-change materials through non-thermal effects and induce unconventional phase transitions, including solid-to-solid amorphization and order-to-order phase transitions. Compared with conventional thermal amorphization, these transitions have potential advantages such as faster speed, better endurance, and lower power consumption. This article summarizes recent progress in experimental observations and theoretical analyses of these unconventional phase transitions. The discussion mainly focuses on the physical mechanisms at the atomic scale, to provide guidance for controlling the phase transitions for optical storage. An outlook on possible applications of non-thermal phase transitions in new types of devices is also presented.
In the digital era, the electronic medical record (EMR) has become a major way for hospitals to store patients' medical data. Traditional centralized medical systems and semi-trusted cloud storage struggle to achieve a dynamic balance between privacy protection and data sharing. Moreover, the storage capacity of a blockchain is limited, and single-blockchain schemes have poor scalability and low throughput. To address these issues, we propose a secure and efficient medical data storage and sharing scheme based on a double blockchain. In our scheme, we encrypt the original EMR and store it in the cloud. The storage blockchain stores the index of the complete EMR, and the shared blockchain stores the index of the shared part of the EMR. Users with different attributes can make requests to different blockchains to share different parts according to their own permissions. Experiments showed that cloud storage combined with blockchain not only solves the problem of the blockchain's limited storage capacity but also greatly reduces the risk of leakage of the original EMR. Content Extraction Signature (CES) combined with the double-blockchain technology realizes the separation of the private part and the shared part of the original EMR. Symmetric encryption combined with Ciphertext-Policy Attribute-Based Encryption (CP-ABE) not only ensures the safe storage of data in the cloud but also achieves consistency and convenience of data updates, avoiding redundant data backups. Security analysis and performance analysis verified the feasibility and effectiveness of our scheme.
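A stripped-down sketch of the storage flow described above: the EMR is symmetrically encrypted and kept off-chain in the cloud, while the two chains record only indices of the full EMR and of its shareable part. CP-ABE, Content Extraction Signatures and the actual chain logic are omitted; the dictionaries and field names below are placeholders, and the third-party `cryptography` package supplies the symmetric cipher.

```python
import hashlib
from cryptography.fernet import Fernet   # third-party package: cryptography

key = Fernet.generate_key()
cipher = Fernet(key)

emr = b"patient: alice; diagnosis: ...; billing: ..."
shared_part = b"diagnosis: ..."            # the extractable, shareable fragment of the EMR

cloud_storage = {"emr_blob": cipher.encrypt(emr)}                     # off-chain ciphertext
storage_chain = [{"index": hashlib.sha256(emr).hexdigest()}]          # index of the complete EMR
shared_chain = [{"index": hashlib.sha256(shared_part).hexdigest()}]   # index of the shared part

# An authorized reader checks the cloud copy against the on-chain index after decryption.
recovered = cipher.decrypt(cloud_storage["emr_blob"])
assert hashlib.sha256(recovered).hexdigest() == storage_chain[0]["index"]
```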
Multi-level searching is called drill-down search. At present, no drill-down search feature is available in existing search engines such as Google, Yahoo, Bing and Baidu. Drill-down search is very useful for the end user to find the exact results among huge paginated search results. A higher level of drill-down search with a category-based search feature yields the most accurate results, but it increases the number and size of the files in the file system. The purpose of this manuscript is to implement a big-data storage-reduction binary file system model for a category-based drill-down search engine that offers fast multi-level filtering capability. The basic methodology of the proposed model stores the search engine data in a binary file system. To verify the effectiveness of the proposed file system model, 5 million unique keyword records are stored in a binary file and the proposed file system is analysed for efficiency. Experimental results based on real data show the speed and superiority of our storage model. Experiments demonstrated that our file system's expansion ratio is constant, that it reduces disk storage space by up to 30% compared with a conventional database/file system, and that it also increases search performance for any level of search. The paper starts with a short introduction to drill-down search, followed by a detailed discussion of the important technologies used to implement the big-data storage-reduction system.
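One plausible way to picture the binary file model is a flat file of length-prefixed keyword records, as in the sketch below; the paper's actual record layout, index structure and compression tricks are not given in the abstract, so the format here is only an assumption used to show the general idea.

```python
import struct

def write_keywords(path: str, keywords: list) -> None:
    """Append keywords to a binary file as 2-byte length-prefixed UTF-8 records."""
    with open(path, "wb") as f:
        for kw in keywords:
            raw = kw.encode("utf-8")
            f.write(struct.pack("<H", len(raw)))   # little-endian length prefix
            f.write(raw)

def read_keywords(path: str) -> list:
    """Scan the binary file and recover every keyword."""
    out = []
    with open(path, "rb") as f:
        while header := f.read(2):
            (length,) = struct.unpack("<H", header)
            out.append(f.read(length).decode("utf-8"))
    return out

write_keywords("keywords.bin", ["data storage", "cloud security", "drill down search"])
print(read_keywords("keywords.bin"))
```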
The tremendous growth of cloud computing environments requires a new architecture for security services. Cloud computing is the utilization of many servers/data centers or cloud data storages (CDSs) housed in many different locations and interconnected by high-speed networks. CDS, like any other emerging technology, is experiencing growing pains: it is immature, it is fragmented, and it lacks standardization. Although security issues are delaying its fast adoption, cloud computing is an unstoppable force and we need to provide security mechanisms to ensure its secure adoption. In this paper, a comprehensive security framework based on a Multi-Agent System (MAS) architecture for CDS is proposed to facilitate confidentiality, correctness assurance, availability and integrity of users' data in the cloud. Our security framework consists of two main layers: an agent layer and a CDS layer. Our proposed MAS architecture includes five main types of agents: Cloud Service Provider Agent (CSPA), Cloud Data Confidentiality Agent (CDConA), Cloud Data Correctness Agent (CDCorA), Cloud Data Availability Agent (CDAA) and Cloud Data Integrity Agent (CDIA). To verify our proposed security framework based on the MAS architecture, a pilot study was conducted using a questionnaire survey. Rasch methodology was used to analyze the pilot data. Item reliability was found to be poor, and a few respondents and items were identified as misfits with distorted measurements. As a result, some problematic questions were revised and some predictably easy questions were excluded from the questionnaire. A prototype of the system was implemented using Java. To simulate the agents, Oracle database packages and triggers were used to implement agent functions, and Oracle jobs were utilized to create agents.
With the development of cloud computing, mutual understandability among distributed data access controls has become an important issue in the security field of cloud computing. To ensure security, confidentiality and fine-grained data access control in the Cloud Data Storage (CDS) environment, we propose a Multi-Agent System (MAS) architecture. This architecture consists of two agents: the Cloud Service Provider Agent (CSPA) and the Cloud Data Confidentiality Agent (CDConA). CSPA provides a graphical interface to the cloud user that facilitates access to the services offered by the system. CDConA provides each cloud user with the definition and enforcement of an expressive and flexible access structure, expressed as a logic formula over cloud data file attributes. This new access control is named Formula-Based Cloud Data Access Control (FCDAC). Our proposed FCDAC based on the MAS architecture consists of four layers: an interface layer, an existing access control layer, the proposed FCDAC layer and a CDS layer, as well as four types of entities: the Cloud Service Provider (CSP), cloud users, a knowledge base and confidentiality policy roles. FCDAC is an access policy determined by our MAS architecture, not by the CSPs. A prototype of our proposed FCDAC scheme is implemented using the Java Agent Development Framework Security (JADE-S). Our results in the practical scenario defined formally in this paper show the Round Trip Time (RTT) for an agent to travel in our system, measured as the time required for an agent to travel around different numbers of cloud users before and after implementing FCDAC.
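The phrase "a logic formula over cloud data file attributes" can be illustrated with a toy policy function: access is granted only when the file's attribute assignment satisfies the boolean formula. The attribute names and the policy itself are invented for the example and are not the paper's actual FCDAC policies.

```python
def fcdac_policy(attrs: dict) -> bool:
    """Toy access formula: (department = finance AND classification != secret) OR owner."""
    return (attrs.get("department") == "finance" and attrs.get("classification") != "secret") \
        or attrs.get("owner") is True

file_attrs = {"department": "finance", "classification": "internal", "owner": False}
print("access granted" if fcdac_policy(file_attrs) else "access denied")
```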
In the era of information explosion, the demand for data storage has increased dramatically. Holographic data storage is one of the most promising next-generation data storage technologies due to its high storage density, fast data transfer rate, long data lifetime and low energy consumption. Collinear holographic data storage is the typical implementation of the technology, offering a more compact, compatible and practical system. This paper gives a brief review of holographic data storage, introduces collinear holographic data storage technology and discusses the phase modulation techniques used in holographic data storage systems to achieve higher storage density and higher data transfer rates.
The wide application of intelligent terminals in microgrids has fueled a surge in data volume in recent years. In real-world scenarios, microgrids must store large amounts of data efficiently while also being able to withstand malicious cyberattacks. To meet the high hardware resource requirements and to address the vulnerability to network attacks and poor reliability of traditional centralized data storage schemes, this paper proposes a secure storage management method for microgrid data that considers node trust and a directed acyclic graph (DAG) consensus mechanism. Firstly, the microgrid data storage model is designed based on edge computing technology. The blockchain, deployed on the edge computing server and combined with cloud storage, ensures reliable data storage in the microgrid. Secondly, a blockchain consensus algorithm based on a directed acyclic graph data structure is proposed to effectively improve data storage timeliness and avoid the disadvantages of traditional blockchain topologies, such as long chain-construction time and low consensus efficiency. Finally, considering the differences in tolerance to network attacks among the candidate chain-building nodes, a hash value update mechanism for the blockchain header with node trust identification is proposed to ensure data storage security. Experimental results from the microgrid data storage platform show that the proposed method can achieve a private key update time of less than 5 milliseconds. When the number of blockchain nodes is less than 25, blockchain construction takes no more than 80 minutes and the data throughput is close to 300 kbps. Compared with traditional chain-topology-based consensus methods that do not consider node trust, the proposed method has higher data storage efficiency and better resistance to network attacks.
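The DAG flavour of the consensus step can be pictured with a toy ledger in which each new storage record references several earlier vertices instead of a single predecessor, and a node-trust score is folded into the header before hashing. The trust values and the way they enter the hash are assumptions made for illustration; the paper's actual trust identification and key-update mechanism are not reproduced here.

```python
import hashlib
import json

def vertex_hash(payload: dict, parent_hashes: list, node_trust: float) -> str:
    """Hash a DAG vertex header binding the payload, its parent vertices, and the node's trust score."""
    header = {"payload": payload, "parents": sorted(parent_hashes), "trust": round(node_trust, 3)}
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

genesis = vertex_hash({"data": "genesis"}, [], 1.0)
v1 = vertex_hash({"meter": "MG-01", "kWh": 12.4}, [genesis], 0.92)
v2 = vertex_hash({"meter": "MG-02", "kWh": 8.7}, [genesis], 0.88)
tip = vertex_hash({"meter": "MG-03", "kWh": 15.1}, [v1, v2], 0.95)   # one vertex confirms two tips
print(tip)
```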
Cloud computing is a new paradigm of computing and is considered to be the next generation of information technology infrastructure for the enterprise. The distributed architecture of cloud data storage allows customers to benefit from higher-quality storage while minimizing operating cost. The technology also brings numerous potential threats to data confidentiality, integrity and availability. A homomorphic-based storage model is proposed, which enables the customer and a third-party auditor to authenticate data stored on the cloud storage. This model verifies the integrity and availability of huge files with low consumption of computation, storage and communication resources. The proposed model also supports public verifiability and dynamic data recovery.
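A generic sketch of the kind of homomorphic integrity check the abstract alludes to (not the paper's specific construction): the owner tags each block with a secret linear function, and the server later answers a random challenge with two aggregates that the auditor can verify with a single equation, without downloading the blocks. The prime, key and challenge sizes below are arbitrary illustration choices.

```python
import secrets

p = (1 << 127) - 1                                         # prime modulus for the tag arithmetic
alpha, beta = secrets.randbelow(p), secrets.randbelow(p)   # verifier's secret key

blocks = [int.from_bytes(f"file block {i}".encode(), "big") % p for i in range(8)]
tags = [(alpha * m + beta * i) % p for i, m in enumerate(blocks)]   # stored alongside the data

challenge = {i: secrets.randbelow(p) for i in (1, 4, 6)}   # random coefficients for a random subset

# Server side: aggregate the challenged blocks and tags homomorphically.
mu = sum(nu * blocks[i] for i, nu in challenge.items()) % p
sigma = sum(nu * tags[i] for i, nu in challenge.items()) % p

# Auditor side: one equation confirms possession of every challenged block.
expected = (alpha * mu + beta * sum(nu * i for i, nu in challenge.items())) % p
print("verified:", sigma == expected)
```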