Optical data storage (ODS) is a low-cost and high-durability counterpart of traditional electronic or magnetic storage. As a means of enhancing ODS capacity, the multiple recording layer (MRL) method is more promising than other approaches such as reducing the recording volume and multiplexing technology. However, current MRL architectures all record data into physical layers with rigid spacing, which leads either to severe interlayer crosstalk or to a finite number of recording layers constrained by the short working distances of the objectives. Here, we propose the concept of hybrid-layer ODS, which can record optical information into a physical layer and multiple virtual layers by using high-orthogonality random meta-channels. In the virtual layers, 32 images are experimentally reconstructed through holography, where their holographic phases are encoded into 16 printed images and their complementary images in the physical layer, yielding a capacity of 2.5 Tbit cm^(-3). A higher capacity is achievable with more virtual layers, suggesting hybrid-layer ODS as a possible candidate for next-generation ODS.
DNA molecules are green materials with great potential for high-density and long-term data storage. However, the current data-writing process of DNA data storage via DNA synthesis suffers from high costs and the production of hazardous byproducts, limiting its practical applications. Here, we developed a DNA movable-type storage system that utilizes DNA fragments pre-produced by cell factories for data writing. In this system, these pre-generated DNA fragments, referred to herein as "DNA movable types," are used repeatedly as basic writing units. Data writing is achieved by the rapid assembly of these DNA movable types, thereby avoiding the costly and environmentally hazardous process of de novo DNA synthesis. With this system, we successfully encoded 24 bytes of digital information in DNA and read it back accurately by means of high-throughput sequencing and decoding, demonstrating the feasibility of the approach. Through the repeated use and biological assembly of DNA movable-type fragments, this system shows excellent potential for reducing writing costs, opening a novel route toward an economical and sustainable digital data-storage technology.
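The movable-type idea above can be illustrated with a toy sketch: each byte value maps to a pre-built DNA fragment, so "writing" is just selecting and assembling fragments from a fixed library rather than synthesizing new sequences. The 4-nt fragment sequences below are hypothetical placeholders; real movable types would carry assembly adapters and be far longer.

```python
# Toy sketch of movable-type DNA writing: each byte value indexes a
# pre-made DNA fragment ("movable type"); writing data is assembly,
# not de novo synthesis. Fragment sequences here are hypothetical.
import itertools

BASES = "ACGT"
# 256 distinct 4-nt fragments, one per byte value (illustrative only).
CODEBOOK = ["".join(p) for p in itertools.product(BASES, repeat=4)]

def write(data: bytes) -> list[str]:
    """Select one pre-made fragment per byte (the 'typesetting' step)."""
    return [CODEBOOK[b] for b in data]

def read(fragments: list[str]) -> bytes:
    """Decode by looking each fragment up in the shared codebook."""
    index = {frag: i for i, frag in enumerate(CODEBOOK)}
    return bytes(index[f] for f in fragments)

msg = b"DNA!"
assembled = write(msg)
assert read(assembled) == msg
```

The same 256-fragment library is reused for every message, which is what amortizes the synthesis cost in the movable-type scheme.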
Long-term optical data storage (ODS) technology is essential to break the bottleneck of high energy consumption for information storage in the current era of big data. Here, ODS with an ultralong lifetime of 2×10^(7) years is attained through single-ultrafast-laser-pulse-induced reduction of Eu^(3+) ions and tailoring of optical properties inside Eu-doped aluminosilicate glasses. We demonstrate that the induced local modifications in the glass can withstand temperatures of up to 970 K and strong ultraviolet irradiation with a power density of 100 kW/cm^(2). Furthermore, the active Eu^(2+) ions exhibit strong and broadband emission with a full width at half maximum reaching 190 nm, and the photoluminescence (PL) is flexibly tunable across the whole visible region by regulating the alkaline-earth-metal ions in the glasses. The developed technology and materials will be of great significance in photonic applications such as long-term ODS.
Encoding information in light polarization is of great importance in facilitating optical data storage (ODS) for information security and for escalating data storage capacity. However, despite recent advances in nanophotonic techniques that vastly enhance the feasibility of applying polarization channels, the fidelity of reconstructed bits has been constrained by severe crosstalk between varied polarization angles during the data recording and reading processes, which has gravely hindered the practical utilization of this technique. In this paper, we demonstrate an ultra-low-crosstalk polarization-encoding multilayer ODS technique for high-fidelity data recording and retrieval by utilizing a nanofibre-based nanocomposite film containing highly aligned gold nanorods (GNRs). By parallelizing the gold nanorods in the recording medium, this information-carrier configuration minimizes the possibility of miswriting and misreading during information input and output, respectively, compared with its randomly self-assembled counterparts. The enhanced data accuracy significantly improves the bit recall fidelity, quantified by a correlation coefficient higher than 0.99. It is anticipated that the demonstrated technique can facilitate the development of multiplexing ODS for a greener future.
To increase the storage capacity in holographic data storage (HDS), the information to be stored is encoded into a complex amplitude. Fast and accurate retrieval of amplitude and phase from the reconstructed beam is necessary during data readout in HDS. In this study, we proposed a complex amplitude demodulation method based on deep learning from a single-shot diffraction intensity image and verified it in a non-interferometric lensless experiment demodulating four-level amplitude and four-level phase. By analyzing the correlation between the diffraction intensity features and the amplitude- and phase-encoded data pages, the inverse problem was decomposed into two backward operators, represented by two convolutional neural networks (CNNs), to demodulate amplitude and phase respectively. The experimental system is simple, stable, and robust, and it needs only a single diffraction image to realize the direct demodulation of both amplitude and phase. To the best of our knowledge, this is the first time in HDS that multilevel complex amplitude demodulation has been achieved experimentally from one diffraction intensity image without iterations.
The yearly growing quantity of dataflow creates a pressing demand for advanced data storage methods. Luminescent materials, which possess adjustable parameters such as intensity, emission center, lifetime, and polarization, can be used to enable multi-dimensional optical data storage (ODS) with higher capacity, longer lifetime, and lower energy consumption. Multiplexed storage based on luminescent materials can be easily manipulated by lasers and has been considered a feasible option to break through the limits of ODS density. Substantial progress in laser-modified luminescence-based ODS has been made during the past decade. In this review, we recapitulate recent advances in laser-modified luminescence-based ODS, focusing on defect-related regulation, nucleation, dissociation, photoreduction, ablation, etc. We conclude by discussing the current challenges in laser-modified luminescence-based ODS and proposing perspectives for future development.
The exponential growth of data necessitates an effective data storage scheme to manage large quantities of data. To accomplish this, deoxyribonucleic acid (DNA) digital data storage can be employed, which encodes and decodes binary data to and from synthesized strands of DNA. Vector quantization (VQ) is a commonly employed scheme for image compression, and optimal codebook generation is an effective way to reach maximum compression efficiency. This article introduces a new DNA Computing with Water Strider Algorithm based Vector Quantization (DNAC-WSAVQ) technique for data storage systems. The proposed DNAC-WSAVQ technique encodes data using DNA computing and then compresses it for effective data storage. The DNAC-WSAVQ model initially performs DNA encoding on the input images to generate a binary encoded form. In addition, a Water Strider Algorithm with Linde-Buzo-Gray (WSA-LBG) model is applied for the compression process, so the storage area can be considerably reduced. To generate an optimal codebook for LBG, the WSA is applied to it. The performance of the DNAC-WSAVQ model is validated and the results are inspected under several measures. The comparative study highlights the improved outcomes of the DNAC-WSAVQ model over existing methods.
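Two building blocks mentioned in this abstract can be sketched concretely: a scalar Linde-Buzo-Gray (Lloyd-style) codebook refinement, which is the baseline that the paper's Water Strider Algorithm optimizes, and the common 2-bits-per-nucleotide mapping used to express codeword indices as DNA. This is a plain illustration under those standard definitions, not the DNAC-WSAVQ method itself.

```python
# Minimal scalar LBG (Lloyd) codebook training plus a 2-bit-per-base
# DNA index encoding (00=A, 01=C, 10=G, 11=T). Illustrative only.

def train_codebook(samples, k, iters=50):
    """Iteratively move k codewords toward the centroids of their cells."""
    lo, hi = min(samples), max(samples)
    codebook = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        cells = [[] for _ in range(k)]
        for x in samples:
            nearest = min(range(k), key=lambda i: abs(x - codebook[i]))
            cells[nearest].append(x)
        # Empty cells keep their previous codeword.
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    return codebook

def index_to_dna(idx, length=4):
    """Encode a codeword index as DNA, two bits per nucleotide."""
    bases = "ACGT"
    return "".join(bases[(idx >> (2 * (length - 1 - p))) & 3]
                   for p in range(length))

samples = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95, 0.5, 0.55]
cb = train_codebook(samples, k=4)
label = min(range(4), key=lambda i: abs(0.93 - cb[i]))
dna = index_to_dna(label)
```

In a full VQ pipeline each image block would be replaced by its nearest codeword's index, and the DNA string of indices is what gets stored.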
This paper was motivated by the existing problems of cloud data storage at Imo State University, Nigeria, such as outsourced data causing data loss and the misuse of customer information by unauthorized users or hackers, thereby leaving customer/client data visible and unprotected. This also exposed clients/customers to enormous risk from defective equipment, bugs, faulty servers, and malicious actions. The aim of this paper, therefore, is to analyze a secure model using Unicode Transformation Format (UTF) Base64 algorithms for storing data securely in the cloud. The Object-Oriented Hypermedia Analysis and Design Methodology (OOHADM) was adopted. Python was used to develop the security model; role-based access control (RBAC) and multi-factor authentication (MFA) were integrated to enhance the security of the information system, which was developed with HTML 5, JavaScript, Cascading Style Sheets (CSS) version 3, and PHP 7. This paper also discusses concepts including the development of cloud computing, characteristics of cloud computing, cloud deployment models, and cloud service models. The results showed that the proposed enhanced security model for corporate-platform information systems handles multiple authorization and authentication threats: a single login page directs all login requests from the different modules to one Single Sign-On Server (SSOS), which in turn redirects authenticated users to their requested resources/modules, leveraging geo-location integration for physical-location validation. The newly developed system addresses the shortcomings of the existing systems and reduces the time and resources incurred while using them.
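The UTF-8 + Base64 transformation step described above can be sketched in a few lines: client text is UTF-8 encoded and then Base64-wrapped before upload. Note that Base64 is an encoding, not encryption, which is why the paper's model layers RBAC and MFA on top of it.

```python
# Minimal sketch of the UTF-8 + Base64 storage transformation.
# Base64 only obscures the payload; access control must come from
# the surrounding RBAC/MFA layers.
import base64

def to_storage_form(text: str) -> str:
    """UTF-8 encode, then Base64-wrap for transport/storage."""
    return base64.b64encode(text.encode("utf-8")).decode("ascii")

def from_storage_form(blob: str) -> str:
    """Reverse the transformation on retrieval."""
    return base64.b64decode(blob).decode("utf-8")

record = "customer: Ada; balance: 1024"
stored = to_storage_form(record)
assert from_storage_form(stored) == record
```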
In this paper, we study mass structured data storage and sorting algorithms for SQL databases in the big data environment. As the data storage market develops, storage is shifting from a server-centric model to a data-centric model. Traditionally, storage was treated as a facility that merely keeps a series of data, and the management system and storage devices rarely considered the intrinsic value of the stored data. The prosperity of the Internet has changed how the world stores data and has brought many new applications. Theoretically, the proposed algorithm can deal with massive data, and numerically it enhances processing accuracy and speed, which is meaningful.
Objectives: The aim of this study was to investigate and develop a data storage and exchange format for the process of automatic systematic reviews (ASR) of traditional Chinese medicine (TCM). Methods: A lightweight and commonly used data format, namely JavaScript Object Notation (JSON), was introduced in this study. We designed a fully described data structure to collect TCM clinical trial information based on JSON syntax. Results: A smart and powerful data format, JSON-ASR, was developed. JSON-ASR uses a plain-text data format in the form of key/value pairs and consists of six sections and more than 80 preset pairs. JSON-ASR adopts extensible structured arrays to support multi-group and multi-outcome situations. Conclusion: JSON-ASR is lightweight, flexible, and scalable, making it suitable for the complex data of clinical evidence.
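The key/value structure with extensible arrays described above can be illustrated as follows. The field names here are hypothetical stand-ins, not the paper's 80+ preset pairs; they only show how multiple arms and outcomes fit into structured arrays.

```python
# Illustrative JSON-ASR-style record (field names are hypothetical).
# Extensible arrays hold one entry per group and per outcome, which is
# how the format supports multi-group, multi-outcome trials.
import json

trial = {
    "study": {"title": "TCM trial example", "design": "RCT"},
    "groups": [                      # extensible array: one entry per arm
        {"name": "treatment", "n": 60},
        {"name": "control", "n": 60},
    ],
    "outcomes": [                    # extensible array: one entry per outcome
        {"measure": "symptom score", "timepoint_weeks": 8},
    ],
}

blob = json.dumps(trial)             # plain-text serialization for exchange
restored = json.loads(blob)
assert restored["groups"][1]["name"] == "control"
```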
The wide application of intelligent terminals in microgrids has fueled a surge in data volume in recent years. In real-world scenarios, microgrids must store large amounts of data efficiently while also withstanding malicious cyberattacks. To address the high hardware resource requirements, vulnerability to network attacks, and poor reliability of traditional centralized data storage schemes, this paper proposes a secure storage management method for microgrid data that considers node trust and a directed acyclic graph (DAG) consensus mechanism. First, the microgrid data storage model is designed based on edge computing technology. The blockchain, deployed on the edge computing server and combined with cloud storage, ensures reliable data storage in the microgrid. Second, a blockchain consensus algorithm based on a directed acyclic graph data structure is proposed to effectively improve data storage timeliness and avoid the disadvantages of traditional blockchain topologies, such as long chain-construction time and low consensus efficiency. Finally, considering the differing tolerance of candidate chain-building nodes to network attacks, a hash-value update mechanism for the blockchain header with node trust identification is proposed to ensure data storage security. Experimental results from the microgrid data storage platform show that the proposed method achieves a private-key update time of less than 5 milliseconds. When the number of blockchain nodes is less than 25, blockchain construction takes no more than 80 minutes, and the data throughput is close to 300 kbps. Compared with traditional chain-topology-based consensus methods that do not consider node trust, the proposed method has higher data storage efficiency and better resistance to network attacks.
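A header-hash update that binds a node-trust value into the block header, in the spirit of the mechanism described above, can be sketched as follows. The field layout is hypothetical; the point is only that any tampering with the trust value (or any other header field) changes the hash and is therefore detectable.

```python
# Toy sketch of a block-header hash that mixes in a node-trust value.
# Field names and layout are hypothetical, not the paper's scheme.
import hashlib
import json

def header_hash(prev_hash: str, payload: str, node_id: str, trust: float) -> str:
    """Deterministically hash a header that includes the node's trust."""
    header = json.dumps(
        {"prev": prev_hash, "data": payload,
         "node": node_id, "trust": round(trust, 3)},
        sort_keys=True,  # canonical ordering so the hash is reproducible
    )
    return hashlib.sha256(header.encode()).hexdigest()

h1 = header_hash("0" * 64, "meter readings batch 7", "edge-node-3", 0.92)
h2 = header_hash("0" * 64, "meter readings batch 7", "edge-node-3", 0.80)
# Any change to the trust value yields a different header hash.
assert h1 != h2
```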
DNA storage, characterized by its durability, data density, and cost-effectiveness, is a promising solution for managing the increasing data volumes in healthcare. This review explores state-of-the-art DNA storage technologies and provides insights into designing a DNA storage system tailored for medical cold data. We anticipate that a practical approach for medical cold-data storage will involve establishing regional, in vitro DNA storage centers that can serve multiple hospitals. The immediacy of DNA storage for medical data hinges on the development of novel, high-density, specialized coding methods. Established commercial techniques, such as DNA chemical synthesis and next-generation sequencing (NGS), along with mixed drying with alkaline salts and refined polymerase chain reaction (PCR), potentially represent the optimal options for data writing, reading, storage, and access, respectively. Data security could be ensured by integrating traditional digital encryption with DNA steganography. Although breakthrough developments such as artificial nucleotides and DNA nanostructures show potential, they remain in the laboratory research phase. In conclusion, DNA storage is a viable preservation strategy for medical cold data in the near future.
In the modern era of 5th-generation (5G) networks, the data generated by User Equipment (UE) has increased significantly, with file sizes varying from modest sensor logs to enormous multimedia files. In modern telecommunications networks, the need for high-end security and efficient management of these large data files is a great challenge for network designers. The proposed model provides efficient real-time virtual storage of UE data files (light and heavy) using the MinIO object storage system, whose built-in Software Development Kits (SDKs) are compatible with the Amazon S3 Application Program Interface (API), making operations such as file uploading and data retrieval extremely efficient compared with legacy virtual storage systems that require low-level HTTP requests for data management. To provide integrity, authenticity, and confidentiality (integrity checking via an authentication tag) for UE data files, the 256-bit Advanced Encryption Standard (AES-256) in Galois/Counter Mode (GCM) is used in combination with MinIO. The AES-GCM-based MinIO approach is more secure and faster than older modes such as Cipher Block Chaining (CBC). The performance of the proposed model is analyzed using the iperf utility through teletraffic parametric analysis (bandwidth, throughput, latency, and transmission delay) for three cases: (a) light UE traffic (uploading and retrieval), (b) heavy UE traffic (uploading and retrieval), and (c) comparison of the teletraffic parameters bandwidth (B_(ava)), throughput (T_(put)), data transfer (D_(Trans)), latency (L_(ms)), and transmission delay (T_(Delay)) obtained from the proposed method against legacy virtual storage methods. The results show that the suggested MinIO-based system outperforms conventional systems in terms of latency, encryption efficiency, and performance under varying data-load conditions.
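The AES-256-GCM step described above, in which an authentication tag provides the integrity check, can be sketched with the `cryptography` package's AESGCM helper; the MinIO upload itself is omitted here, and the object name used as associated data is an illustrative choice.

```python
# Sketch of 256-bit AES-GCM protection for a UE file before upload.
# Requires the third-party `cryptography` package. The associated data
# (AAD) binds the ciphertext to its object name without being secret.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key, as in the paper
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce, recommended for GCM

plaintext = b"UE sensor log 0001"
aad = b"object-name:ue-0001.log"            # hypothetical object identifier
ciphertext = aesgcm.encrypt(nonce, plaintext, aad)   # auth tag is appended

# Decryption verifies the tag; tampering with ciphertext or AAD raises.
assert aesgcm.decrypt(nonce, ciphertext, aad) == plaintext
```

Unlike CBC, GCM authenticates as well as encrypts, so a separate MAC pass is unnecessary, which is one source of the speed advantage the abstract reports.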
The ongoing quest for higher data storage density has led to a plethora of innovations in the field of optical data storage. This review provides a comprehensive overview of recent advancements in next-generation optical data storage, offering insights into various technological roadmaps. We pay particular attention to multidimensional and superresolution approaches, each of which uniquely addresses the challenge of dense storage. The multidimensional approach exploits multiple parameters of light, allowing multiple bits of information to be stored within a single voxel while still adhering to the diffraction limit. Alternatively, superresolution approaches leverage the photoexcitation and photoinhibition properties of materials to create diffraction-unlimited data voxels. We conclude by summarizing the immense opportunities these approaches present, while also outlining the formidable challenges they face in the transition to industrial applications.
The advance of nanophotonics has provided a variety of avenues for light-matter interaction at the nanometer scale through the enriched mechanisms for physical and chemical reactions induced by nanometer-confined optical probes in nanocomposite materials. These emerging nanophotonic devices and materials have enabled researchers to develop disruptive methods for tremendously increasing the storage capacity of current optical memory. In this paper, we present a review of recent advancements in nanophotonics-enabled optical storage techniques. In particular, we offer our perspective on using them as optical storage arrays for next-generation exabyte data centers.
There is a great thrust in industry toward the development of more feasible and viable tools for storing the fast-growing volume, velocity, and diversity of data termed 'big data'. The structural shift of the storage mechanism from traditional data management systems to NoSQL technology is due to the intention of fulfilling big data storage requirements. However, the available big data storage technologies are inefficient in providing consistent, scalable, and available solutions for continuously growing heterogeneous data. Storage is the preliminary process of big data analytics for real-world applications such as scientific experiments, healthcare, social networks, and e-business. So far, Amazon, Google, and Apache are some of the industry standards in providing big data storage solutions, yet the literature does not report an in-depth survey of the storage technologies available for big data that investigates their performance and magnitude gains. The primary objective of this paper is to conduct a comprehensive investigation of state-of-the-art storage technologies available for big data. A well-defined taxonomy of big data storage technologies is presented to assist data analysts and researchers in understanding and selecting a storage mechanism that better fits their needs. To evaluate the performance of different storage architectures, we compare and analyze the existing approaches using Brewer's CAP theorem. The significance and applications of storage technologies and their support for other categories are discussed. Several future research challenges are highlighted with the intention of expediting the deployment of a reliable and scalable storage system.
To address the storage and management problems of massive remote sensing data, this paper gives a comprehensive analysis of the characteristics and advantages of thirteen data storage centers or systems in China and abroad. They include NASA EOS, World Wind, Google Earth, Google Maps, Bing Maps, Microsoft TerraServer, ESA, the Earth Simulator, GeoEye, Map World, the China Centre for Resources Satellite Data and Application, the National Satellite Meteorological Centre, and the National Satellite Ocean Application Service. By summarizing the practical data storage and management technologies in terms of remote sensing data storage organization and storage architecture, this survey helps identify more suitable techniques and methods for massive remote sensing data storage and management.
The possibility of achieving unprecedented multiplexing of light-matter interaction at the nanoscale is of vital importance from both fundamental-science and practical-application points of view. Cylindrical vector beams (CVBs), manifested as polarization vortices, represent a robust and emerging degree of freedom for information multiplexing with increased capacity. Here, we propose and demonstrate massively encoded optical data storage (ODS) by harnessing the spatially variant electric fields mediated by segmented CVBs. By tightly focusing polychromatic segmented CVBs onto plasmonic nanoparticle aggregates, a record-high number of ODS multiplexing channels, realized through different combinations of polarization states and wavelengths, has been experimentally demonstrated with a low error rate. Our result not only offers new insight into tailoring light-matter interactions using structured light but also opens a new prospect for ultra-high-capacity optical memory with minimal system complexity by exploiting CVBs' compatibility with fiber optics.
The holographic data storage system (HDSS) has been a good candidate for volumetric recording technology owing to its large storage capacity and high transfer rate, and it has been researched for decades since the principle of holography was first proposed. However, these systems, known as conventional two-axis holography, still face essential obstacles to commercialization. Collinear HDSS, in which the information and reference beams are modulated coaxially by the same spatial light modulator (SLM), is a very promising new read/write method for HDSS. With this unique configuration, the optical pickup can be designed as small as that of a DVD and can be placed on one side of the recording medium (disc). In the disc structure, a preformatted reflective layer is used for the focus/tracking servo and for reading address information, and a dichroic mirror layer is used for detecting holographically recorded information without interfering with the preformatted information. A two-dimensional digital page data format is used, and the shift-multiplexing method is employed to increase recording density. Because servo technologies are introduced to keep the objective lens precisely positioned relative to the disc during recording and reconstruction, a vibration isolator is no longer necessary. Collinear holography can produce a small, practical HDSS more easily than conventional two-axis holography. In this paper, we introduce the principle of collinear holography and the structure of its disc media. Experimental and theoretical studies suggest that it is a very effective method. We also discuss methods to increase the recording density and data transfer rate of collinear holography.
In the era of big data, sensor networks have been pervasively deployed, producing large amounts of data for various applications. However, because sensor networks are usually placed in hostile environments, managing this huge volume of data is a very challenging issue. In this study, we mainly focus on the data storage reliability problem in heterogeneous wireless sensor networks, where robust storage nodes are deployed and data redundancy is provided through coding techniques. To minimize data delivery and data storage costs, we design an algorithm to jointly optimize data routing and storage node deployment. The problem can be formulated as a binary nonlinear combinatorial optimization problem, and due to its NP-hardness, designing approximation algorithms is highly nontrivial. By leveraging the Markov approximation framework, we elaborately design an efficient algorithm driven by a continuous-time Markov chain to schedule the deployment of the storage nodes and the corresponding routing strategy. We also perform extensive simulations to verify the efficacy of our algorithm.
基金the National Key Research and Development Program of China(2022YFB3607300)the National Natural Science Foundation of China(62322512 and 12134013)+3 种基金the Chinese Acad-emy of Sciences Project for Young Scientists in Basic Research(YSBR-049)support from the University of Science and Technology of China’s Center for Micro and Nanoscale Research and Fabricationsupported by the China Postdoctoral Science Foundation(2023M743364)supercomputing system in Hefei Advanced Computing Center and the Supercomputing Center of University of Science and Technology of China.
文摘Optical data storage(ODS)is a low-cost and high-durability counterpart of traditional electronic or mag-netic storage.As a means of enhancing ODS capacity,the multiple recording layer(MRL)method is more promising than other approaches such as reducing the recording volume and multiplexing technology.However,the architecture of current MRLs is identical to that of recording data into physical layers with rigid space,which leads to either severe interlayer crosstalk or finite recording layers constrained by the short working distances of the objectives.Here,we propose the concept of hybrid-layer ODS,which can record optical information into a physical layer and multiple virtual layers by using high-orthogonality random meta-channels.In the virtual layer,32 images are experimentally reconstructed through holog-raphy,where their holographic phases are encoded into 16 printed images and complementary images in the physical layer,yielding a capacity of 2.5 Tbit cm^(-3).A higher capacity is achievable with more virtual layers,suggesting hybrid-layer ODS as a possible candidate for next-generation ODS.
基金supported by the National Key Research and Development Program of China(2018YFA0900100)the Natural Science Foundation of Tianjin,China(19JCJQJC63300)Tianjin University。
文摘DNA molecules are green materials with great potential for high-density and long-term data storage.However,the current data-writing process of DNA data storage via DNA synthesis suffers from high costs and the production of hazards,limiting its practical applications.Here,we developed a DNA movable-type storage system that can utilize DNA fragments pre-produced by cell factories for data writing.In this system,these pre-generated DNA fragments,referred to herein as“DNA movable types,”are used as basic writing units in a repetitive way.The process of data writing is achieved by the rapid assembly of these DNA movable types,thereby avoiding the costly and environmentally hazardous process of de novo DNA synthesis.With this system,we successfully encoded 24 bytes of digital information in DNA and read it back accurately by means of high-throughput sequencing and decoding,thereby demonstrating the feasibility of this system.Through its repetitive usage and biological assembly of DNA movable-type fragments,this system exhibits excellent potential for writing cost reduction,opening up a novel route toward an economical and sustainable digital data-storage technology.
基金supports from the National Key R&D Program of China (No. 2021YFB2802000 and 2021YFB2800500)the National Natural Science Foundation of China (Grant Nos. U20A20211, 51902286, 61775192, 61905215, and 62005164)+2 种基金Key Research Project of Zhejiang Labthe State Key Laboratory of High Field Laser Physics (Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences)China Postdoctoral Science Foundation (2021M702799)。
文摘Abstract: Long-term optical data storage (ODS) technology is essential to break the bottleneck of high energy consumption for information storage in the current era of big data. Here, ODS with an ultralong lifetime of 2×10^(7) years is attained through single ultrafast laser pulse induced reduction of Eu^(3+) ions and tailoring of the optical properties inside Eu-doped aluminosilicate glasses. We demonstrate that the induced local modifications in the glass can withstand temperatures of up to 970 K and strong ultraviolet irradiation with a power density of 100 kW/cm^(2). Furthermore, the active Eu^(2+) ions exhibit strong and broadband emission with a full width at half maximum reaching 190 nm, and the photoluminescence (PL) is flexibly tunable across the whole visible region by regulating the alkaline earth metal ions in the glasses. The developed technology and materials will be of great significance in photonic applications such as long-term ODS.
基金Funding: Financial support from the National Natural Science Foundation of China (Grant Nos. 62174073, 61875073, 11674130, 91750110 and 61522504), the National Key R&D Program of China (Grant No. 2018YFB1107200), the Guangdong Provincial Innovation and Entrepreneurship Project (Grant No. 2016ZT06D081), the Natural Science Foundation of Guangdong Province, China (Grant Nos. 2016A030306016 and 2016TQ03X981), the Pearl River Nova Program of Guangzhou (Grant No. 201806010040), and the Technology Innovation and Development Plan of Yantai (Grant No. 2020XDRH095).
文摘Abstract: Encoding information in light polarization is of great importance for facilitating optical data storage (ODS), both for information security and for escalating data storage capacity. However, despite recent advances in nanophotonic techniques that have vastly enhanced the feasibility of applying polarization channels, the fidelity of reconstructed bits has been constrained by severe crosstalk between different polarization angles during data recording and reading, which has gravely hindered the practical use of this technique. In this paper, we demonstrate an ultra-low-crosstalk polarization-encoding multilayer ODS technique for high-fidelity data recording and retrieval that utilizes a nanofibre-based nanocomposite film containing highly aligned gold nanorods (GNRs). By parallelizing the gold nanorods in the recording medium, this information-carrier configuration minimizes miswriting and misreading during information input and output, respectively, compared with randomly self-assembled counterparts. The enhanced data accuracy significantly improves bit recall fidelity, quantified by a correlation coefficient higher than 0.99. We anticipate that the demonstrated technique can facilitate the development of multiplexed ODS for a greener future.
基金Funding: We are grateful for financial support from the National Key Research and Development Program of China (2018YFA0701800), the Major Science and Technology Project of Fujian Province (2020HZ01012), the Natural Science Foundation of Fujian Province (2021J01160), and the National Natural Science Foundation of China (62061136005).
文摘Abstract: To increase the storage capacity of holographic data storage (HDS), the information to be stored is encoded into a complex amplitude. Fast and accurate retrieval of amplitude and phase from the reconstructed beam is necessary during data readout in HDS. In this study, we propose a complex amplitude demodulation method based on deep learning from a single-shot diffraction intensity image and verify it in a non-interferometric lensless experiment demodulating four-level amplitude and four-level phase. By analyzing the correlation between the diffraction intensity features and the amplitude- and phase-encoding data pages, the inverse problem is decomposed into two backward operators, implemented by two convolutional neural networks (CNNs) that demodulate amplitude and phase respectively. The experimental system is simple, stable, and robust, and it needs only a single diffraction image to directly demodulate both amplitude and phase. To the best of our knowledge, this is the first time in HDS that multilevel complex amplitude demodulation has been achieved experimentally from one diffraction intensity image without iterations.
基金Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 61774034 and 12104090).
文摘Abstract: The yearly growing volume of dataflow creates a pressing demand for advanced data storage methods. Luminescent materials, which possess adjustable parameters such as intensity, emission center, lifetime, and polarization, can be used to enable multi-dimensional optical data storage (ODS) with higher capacity, longer lifetime, and lower energy consumption. Multiplexed storage based on luminescent materials can be easily manipulated by lasers and has been considered a feasible option for breaking through the limits of ODS density. Substantial progress in laser-modified luminescence based ODS has been made during the past decade. In this review, we recapitulate recent advances in laser-modified luminescence based ODS, focusing on defect-related regulation, nucleation, dissociation, photoreduction, ablation, etc. We conclude by discussing the current challenges in laser-modified luminescence based ODS and proposing perspectives for future development.
基金Funding: This research was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2021R1A6A1A03039493), in part by the NRF grant funded by the Korea government (MSIT) (NRF-2022R1A2C1004401), and in part by the 2022 Yeungnam University Research Grant.
文摘Abstract: The exponential growth of data necessitates an effective data storage scheme that helps to manage large quantities of data. To accomplish this, a Deoxyribonucleic Acid (DNA) digital data storage process can be employed, which encodes and decodes binary data to and from synthesized strands of DNA. Vector quantization (VQ) is a commonly employed scheme for image compression, and optimal codebook generation is an effective way to reach maximum compression efficiency. This article introduces a new DNA Computing with Water Strider Algorithm based Vector Quantization (DNAC-WSAVQ) technique for data storage systems. The proposed DNAC-WSAVQ technique encodes data using DNA computing and then compresses it for effective data storage. The DNAC-WSAVQ model first performs DNA encoding on the input images to generate a binary encoded form. In addition, a Water Strider Algorithm with Linde-Buzo-Gray (WSA-LBG) model is applied for the compression process, whereby the storage area can be considerably minimized. To generate an optimal codebook for LBG, the WSA is applied to it. The performance validation of the DNAC-WSAVQ model is carried out and the results are inspected under several measures. The comparative study highlights the improved outcomes of the DNAC-WSAVQ model over existing methods.
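The LBG (Linde-Buzo-Gray) codebook step referenced above is essentially iterative codeword splitting followed by Lloyd (k-means-style) refinement. A minimal pure-Python sketch, without the Water Strider optimizer the paper layers on top, might look like this; the toy 2-D data at the end is invented for the example:

```python
def lbg_codebook(vectors, codebook_size, iters=20, eps=1e-9):
    """Minimal Linde-Buzo-Gray codebook training via codeword splitting.
    `vectors` is a list of equal-length tuples of numbers."""
    dim = len(vectors[0])
    # Start from the global centroid, then split until the target size.
    centroid = tuple(sum(v[d] for v in vectors) / len(vectors) for d in range(dim))
    codebook = [centroid]
    while len(codebook) < codebook_size:
        # Split every codeword into a slightly perturbed pair.
        codebook = [tuple(c * (1 + s) for c in cw) for cw in codebook for s in (eps, -eps)]
        codebook = codebook[:codebook_size]
        for _ in range(iters):  # Lloyd refinement of the enlarged codebook
            clusters = [[] for _ in codebook]
            for v in vectors:
                best = min(range(len(codebook)),
                           key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))
                clusters[best].append(v)
            codebook = [
                tuple(sum(v[d] for v in cl) / len(cl) for d in range(dim)) if cl else cw
                for cw, cl in zip(codebook, clusters)
            ]
    return codebook

# Quantize toy 2-D sample vectors with a 4-word codebook.
data = [(0, 0), (0, 1), (10, 10), (10, 11), (20, 0), (21, 0), (30, 30), (31, 31)]
book = lbg_codebook(data, 4)
assert len(book) == 4
```

In the paper's pipeline, a metaheuristic (the Water Strider Algorithm) searches for better codebook initializations than the plain splitting rule shown here, which is how the "optimal codebook generation" claim is pursued.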
文摘Abstract: This paper was motivated by existing problems with cloud data storage at Imo State University, Nigeria, where outsourced data led to data loss and the misuse of customer information by unauthorized users or hackers, leaving customer/client data visible and unprotected. This also exposed clients/customers to enormous risk from defective equipment, bugs, faulty servers, and malicious actions. The aim of this paper is therefore to analyze a secure model using Unicode Transformation Format (UTF) base64 algorithms for storing data securely in the cloud. The Object Oriented Hypermedia Analysis and Design Methodology (OOHADM) was adopted. Python was used to develop the security model; role-based access control (RBAC) and multi-factor authentication (MFA) were integrated to enhance the security algorithm in an information system built with HTML5, JavaScript, Cascading Style Sheets (CSS) version 3, and PHP 7. The paper also discusses related concepts, including the development of cloud computing, its characteristics, cloud deployment models, and cloud service models. The results showed that the proposed enhanced security model for corporate-platform information systems handled multiple authorization and authentication threats: a single login page directs all login requests from the different modules to one Single Sign-On Server (SSOS), which redirects authenticated users to their requested resources/modules, leveraging geo-location integration for physical-location validation. The newly developed system addresses the shortcomings of the existing systems and reduces the time and resources incurred in using them.
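The UTF/Base64 step the model relies on is available directly in Python's standard library. A minimal sketch of encoding UTF-8 text for storage and decoding it on retrieval follows; note that Base64 is an encoding, not encryption, so in a system like the one described it must be combined with the RBAC/MFA access controls (the sample record string is invented for the example):

```python
import base64

def encode_for_storage(text: str) -> bytes:
    """UTF-8 encode the text, then Base64 encode it for safe storage/transport."""
    return base64.b64encode(text.encode("utf-8"))

def decode_from_storage(blob: bytes) -> str:
    """Reverse the Base64 and UTF-8 steps on retrieval."""
    return base64.b64decode(blob).decode("utf-8")

record = "student_id=IMSU/2024/001"  # hypothetical record for illustration
stored = encode_for_storage(record)
assert decode_from_storage(stored) == record
```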
文摘Abstract: In this paper, we study mass structured data storage and sorting algorithms for SQL databases in the big data environment. As the data storage market develops, storage is shifting from a server-centric model to a data-centric model. Traditionally, storage was considered merely a place to keep a series of data, and management systems and storage devices rarely considered the intrinsic value of the stored data. The prosperity of the Internet has changed data storage worldwide and brought many new applications with it. Theoretically, the proposed algorithm is capable of handling massive data; numerically, it enhances processing accuracy and speed, which is meaningful.
基金Funding: The National Key R&D Program of China (Grant No. 2019YFC1709803) and the National Natural Science Foundation of China (Grant No. 81873183).
文摘Abstract: Objectives: The aim of this study was to investigate and develop a data storage and exchange format for the automatic systematic review (ASR) process in traditional Chinese medicine (TCM). Methods: A lightweight and commonly used data format, JavaScript Object Notation (JSON), was adopted. We designed a fully described data structure to collect TCM clinical trial information based on JSON syntax. Results: A compact and powerful data format, JSON-ASR, was developed. JSON-ASR uses a plain-text format in the form of key/value pairs and consists of six sections and more than 80 preset pairs. JSON-ASR adopts extensible structured arrays to support multi-group and multi-outcome situations. Conclusion: JSON-ASR is lightweight, flexible, and scalable, making it suitable for the complex data of clinical evidence.
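The key/value style with extensible arrays described above can be illustrated with Python's standard `json` module. The section and field names below are invented for the example; the actual JSON-ASR specification defines its own six sections and more than 80 preset pairs:

```python
import json

# Hypothetical record in the spirit of JSON-ASR: plain-text key/value
# pairs, with arrays so multiple groups and outcomes can coexist.
record = {
    "study": {"title": "Example TCM trial", "year": 2020},
    "groups": [
        {"name": "intervention", "n": 60},
        {"name": "control", "n": 58},
    ],
    "outcomes": [
        {"measure": "response rate", "group": "intervention", "value": 0.82},
        {"measure": "response rate", "group": "control", "value": 0.64},
    ],
}

text = json.dumps(record, indent=2)              # serialize for storage/exchange
assert json.loads(text)["groups"][0]["n"] == 60  # round-trip parse
```

The structured arrays (`groups`, `outcomes`) are what give the format its extensibility: adding a third arm or another endpoint means appending an element, not redefining the schema.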
文摘Abstract: The wide application of intelligent terminals in microgrids has fueled a surge in data volume in recent years. In real-world scenarios, microgrids must store large amounts of data efficiently while also withstanding malicious cyberattacks. To meet the high hardware resource requirements and to address the vulnerability to network attacks and the poor reliability of traditional centralized data storage schemes, this paper proposes a secure storage management method for microgrid data that considers node trust and a directed acyclic graph (DAG) consensus mechanism. First, a microgrid data storage model is designed based on edge computing technology: a blockchain deployed on the edge computing server, combined with cloud storage, ensures reliable data storage in the microgrid. Second, a blockchain consensus algorithm based on a directed acyclic graph data structure is proposed to improve data storage timeliness and avoid the disadvantages of traditional blockchain topologies, such as long chain-construction times and low consensus efficiency. Finally, considering the differing tolerance of candidate chain-building nodes to network attacks, a hash-value update mechanism for the blockchain header with node trust identification is proposed to ensure data storage security. Experimental results from the microgrid data storage platform show that the proposed method achieves a private key update time of less than 5 milliseconds. When the number of blockchain nodes is less than 25, blockchain construction takes no more than 80 minutes and the data throughput is close to 300 kbps. Compared with traditional chain-topology-based consensus methods that do not consider node trust, the proposed method stores data more efficiently and better resists network attacks.
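The idea of a header hash that binds both DAG parent references and a node-trust value can be sketched with the standard library. The header layout and trust encoding below are our assumptions for illustration only; the paper's actual update mechanism is more involved:

```python
import hashlib
import json

def header_hash(prev_hashes, payload_digest, node_id, node_trust):
    """Hash a block header that cites several DAG parents and embeds the
    proposing node's trust score (hypothetical layout for illustration)."""
    header = {
        "parents": sorted(prev_hashes),   # a DAG block may reference many parents
        "payload": payload_digest,
        "node": node_id,
        "trust": round(node_trust, 4),    # trust identification baked into the hash
    }
    raw = json.dumps(header, sort_keys=True).encode("utf-8")
    return hashlib.sha256(raw).hexdigest()

genesis = header_hash([], hashlib.sha256(b"meter data").hexdigest(), "edge-01", 1.0)
child = header_hash([genesis], hashlib.sha256(b"more data").hexdigest(), "edge-02", 0.9)
# Any change to the trust value changes the header hash.
assert child != header_hash([genesis], hashlib.sha256(b"more data").hexdigest(), "edge-02", 0.8)
```

Because the trust score participates in the digest, a node cannot silently claim a different trust level without invalidating every descendant block that references its header.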
文摘Abstract: DNA storage, characterized by its durability, data density, and cost-effectiveness, is a promising solution for managing the increasing data volumes in healthcare. This review explores state-of-the-art DNA storage technologies and provides insights into designing a DNA storage system tailored to medical cold data. We anticipate that a practical approach to medical cold data storage will involve establishing regional, in vitro DNA storage centers that serve multiple hospitals. The immediacy of DNA storage for medical data hinges on the development of novel, high-density, specialized coding methods. Established commercial techniques such as DNA chemical synthesis and next-generation sequencing (NGS), along with mixed drying with alkaline salts and refined polymerase chain reaction (PCR), potentially represent the optimal options for data writing, reading, storage, and access, respectively. Data security could be ensured by integrating traditional digital encryption with DNA steganography. Although breakthrough developments such as artificial nucleotides and DNA nanostructures show potential, they remain in the laboratory research phase. In conclusion, DNA storage is a viable preservation strategy for medical cold data in the near future.
文摘Abstract: In the modern era of 5th-generation (5G) networks, the data generated by User Equipment (UE) has increased significantly, with file sizes varying from modest sensor logs to enormous multimedia files. In modern telecommunications networks, high-end security and efficient management of these large data files pose a great challenge for network designers. The proposed model provides efficient real-time virtual storage of UE data files (light and heavy) using the MinIO object storage system, whose built-in Software Development Kits (SDKs), compatible with the Amazon S3 Application Program Interface (API), make operations such as file uploading and data retrieval extremely efficient compared with legacy virtual storage systems that require low-level HTTP requests for data management. To provide integrity, authenticity, and confidentiality (with integrity checking via an authentication tag) for UE data files, the 256-bit Advanced Encryption Standard (AES-256) in Galois/Counter Mode (GCM) is used in combination with MinIO. AES-GCM-based MinIO yields a more secure and faster approach than older modes such as Cipher Block Chaining (CBC). The performance of the proposed model is analyzed using the iperf utility through teletraffic parametric analysis (bandwidth, throughput, latency, and transmission delay) for three cases: (a) light UE traffic (uploading and retrieval), (b) heavy UE traffic (uploading and retrieval), and (c) comparison of the teletraffic parameters, namely bandwidth (B_ava), throughput (T_put), data transfer (D_Trans), latency (L_ms), and transmission delay (T_Delay), obtained with the proposed method against legacy virtual storage methods. The results show that the suggested MinIO-based system outperforms conventional systems in terms of latency, encryption efficiency, and performance under varying data-load conditions.
基金Funding: Supported by the National Key Research and Development Program of China (No. 2022YFB2804300), the Creative Research Group Project of the NSFC (No. 61821003), the Innovation Fund of the Wuhan National Laboratory for Optoelectronics, the Program for HUST Academic Frontier Youth Team, and the Innovation Project of Optics Valley Laboratory.
文摘Abstract: The ongoing quest for higher data storage density has led to a plethora of innovations in the field of optical data storage. This review provides a comprehensive overview of recent advancements in next-generation optical data storage, offering insights into the various technological roadmaps. We pay particular attention to multidimensional and superresolution approaches, each of which addresses the challenge of dense storage in its own way. The multidimensional approach exploits multiple parameters of light, allowing multiple bits of information to be stored within a single voxel while still adhering to the diffraction limit. Alternatively, superresolution approaches leverage the photoexcitation and photoinhibition properties of materials to create diffraction-unlimited data voxels. We conclude by summarizing the immense opportunities these approaches present, while also outlining the formidable challenges they face in the transition to industrial applications.
基金Funding: The authors thank the Australian Research Council for its support through the Laureate Fellowship project (FL100100099).
文摘Abstract: The advance of nanophotonics has provided a variety of avenues for light-matter interaction at the nanometer scale, through enriched mechanisms for physical and chemical reactions induced by nanometer-confined optical probes in nanocomposite materials. These emerging nanophotonic devices and materials have enabled researchers to develop disruptive methods for tremendously increasing the storage capacity of current optical memory. In this paper, we review recent advancements in nanophotonics-enabled optical storage techniques. In particular, we offer our perspective on using them as optical storage arrays for next-generation exabyte data centers.
文摘Abstract: There is a great thrust in industry toward the development of more feasible and viable tools for storing the fast-growing volume, velocity, and diversity of data, termed 'big data'. The structural shift of the storage mechanism from traditional data management systems to NoSQL technology stems from the intention of fulfilling big data storage requirements. However, the available big data storage technologies are inefficient at providing consistent, scalable, and available solutions for continuously growing heterogeneous data. Storage is the preliminary process of big data analytics for real-world applications such as scientific experiments, healthcare, social networks, and e-business. So far, Amazon, Google, and Apache are among the industry standards in providing big data storage solutions, yet the literature does not report an in-depth survey of the storage technologies available for big data that investigates their performance and magnitude gains. The primary objective of this paper is to conduct a comprehensive investigation of state-of-the-art storage technologies available for big data. A well-defined taxonomy of big data storage technologies is presented to assist data analysts and researchers in understanding and selecting a storage mechanism that better fits their needs. To evaluate the performance of different storage architectures, we compare and analyze the existing approaches using Brewer's CAP theorem. The significance and applications of the storage technologies and their support for other categories are discussed. Several future research challenges are highlighted with the intention of expediting the deployment of reliable and scalable storage systems.
基金Funding: Supported by the National Basic Research Program of China ("973" Program) (Grant No. 61399).
文摘Abstract: Aiming at the storage and management problems of massive remote sensing data, this paper gives a comprehensive analysis of the characteristics and advantages of thirteen data storage centers or systems at home and abroad. They mainly include NASA EOS, World Wind, Google Earth, Google Maps, Bing Maps, Microsoft TerraServer, ESA, Earth Simulator, GeoEye, Map World, the China Centre for Resources Satellite Data and Application, the National Satellite Meteorological Centre, and the National Satellite Ocean Application Service. By summing up the practical data storage and management technologies in terms of remote sensing data storage organization and storage architecture, this survey should help identify more suitable techniques and methods for massive remote sensing data storage and management.
基金Funding: Financial support from the National Key R&D Program of China (2018YFB1107200), the National Natural Science Foundation of China (91750110, 11674130, 61605061, 11674110 and 11874020), the Guangdong Provincial Innovation and Entrepreneurship Project (2016ZT06D081), the Natural Science Foundation of Guangdong Province (2016A030306016, 2016TQ03X981 and 2016A030308010), and the Pearl River S&T Nova Program of Guangzhou (201806010040).
文摘Abstract: The possibility of achieving unprecedented multiplexing of light-matter interaction at the nanoscale is of vital importance from both fundamental science and practical application points of view. Cylindrical vector beams (CVBs), manifested as polarization vortices, represent a robust and emerging degree of freedom for information multiplexing with increased capacity. Here, we propose and demonstrate massively encoded optical data storage (ODS) by harnessing the spatially variant electric fields mediated by segmented CVBs. By tightly focusing polychromatic segmented CVBs onto plasmonic nanoparticle aggregates, a record-high number of ODS multiplexing channels, realized through different combinations of polarization states and wavelengths, has been experimentally demonstrated with a low error rate. Our result not only casts new perceptions for tailoring light-matter interactions using structured light but also opens a new prospect for ultra-high-capacity optical memory with minimal system complexity by exploiting the compatibility of CVBs with fiber optics.
文摘Abstract: Holographic data storage systems (HDSS) have been a good candidate for volumetric recording technology because of their large storage capacities and high transfer rates, and they have been researched for decades since the principle of holography was first proposed. However, these systems, known as conventional 2-axis holography, still face essential issues blocking the commercialization of products. Collinear HDSS, in which the information and reference beams are modulated coaxially by the same spatial light modulator (SLM), is a very promising new read/write method for HDSS. With this unique configuration, the optical pickup can be made as small as a DVD's and can be placed on one side of the recording medium (disc). In the disc structure, a preformatted reflective layer is used for the focus/tracking servo and for reading address information, and a dichroic mirror layer is used for detecting the holographically recorded information without interference from the preformatted information. A 2-dimensional digital page data format is used, and the shift-multiplexing method is employed to increase recording density. Because servo technologies keep the objective lens precisely positioned relative to the disc during recording and reconstruction, a vibration isolator is no longer necessary. Collinear holography can therefore produce a small, practical HDSS more easily than conventional 2-axis holography. In this paper, we introduce the principle of collinear holography and the structure of its disc media. Results of experimental and theoretical studies suggest that it is a very effective method. We also discuss methods to increase the recording density and data transfer rates of collinear holography.
基金Funding: Partially supported by the Shandong Provincial Natural Science Foundation (No. ZR2017QF005), the National Natural Science Foundation of China (Nos. 61702304, 61971269, 61832012, 61602195, 61672321, 61771289, and 61602269), and the China Postdoctoral Science Foundation (No. 2017M622136).
文摘Abstract: In the era of big data, sensor networks have been pervasively deployed, producing large amounts of data for various applications. However, because sensor networks are usually placed in hostile environments, managing this huge volume of data is very challenging. In this study, we focus on the data storage reliability problem in heterogeneous wireless sensor networks, where robust storage nodes are deployed in the network and data redundancy is obtained through coding techniques. To minimize data delivery and data storage costs, we design an algorithm that jointly optimizes data routing and storage node deployment. The problem can be formulated as a binary nonlinear combinatorial optimization problem, and due to its NP-hardness, designing approximation algorithms is highly nontrivial. By leveraging the Markov approximation framework, we design an efficient algorithm driven by a continuous-time Markov chain to schedule storage node deployment and the corresponding routing strategy. Extensive simulations verify the efficacy of our algorithm.
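The Markov approximation idea, exploring configurations with a Markov chain whose stationary distribution concentrates on low-cost placements, can be caricatured with a simple Metropolis-style sampler. The cost function and move set below are invented for illustration; the paper's chain is continuous-time and tailored to the joint routing/placement problem:

```python
import math
import random

def search_placement(num_sites, num_storage, cost, beta=2.0, steps=5000, seed=0):
    """Metropolis-style search over storage-node placements.

    A placement is a set of site indices hosting storage nodes; each move
    swaps one chosen site for an unchosen one and is accepted with
    probability min(1, exp(-beta * (new_cost - old_cost))), so the chain
    favors low-cost placements (a discrete-time stand-in for the paper's
    continuous-time Markov chain)."""
    rng = random.Random(seed)
    current = set(rng.sample(range(num_sites), num_storage))
    best, best_cost = set(current), cost(current)
    for _ in range(steps):
        out = rng.choice(sorted(current))
        inn = rng.choice([s for s in range(num_sites) if s not in current])
        candidate = (current - {out}) | {inn}
        delta = cost(candidate) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-beta * delta):
            current = candidate
            if cost(current) < best_cost:
                best, best_cost = set(current), cost(current)
    return best, best_cost

# Toy cost: sensors on a line, each paying the distance to its nearest storage node.
sensors = list(range(12))
def total_cost(placement):
    return sum(min(abs(s - p) for p in placement) for s in sensors)

placement, c = search_placement(num_sites=12, num_storage=3, cost=total_cost)
assert c <= total_cost({0, 1, 2})  # no worse than a naive corner placement
```

Raising `beta` makes the chain greedier (closer to the optimum but more prone to getting stuck), which mirrors the approximation-gap/convergence trade-off the Markov approximation framework quantifies.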