Funding: Supported in part by the National Key Laboratory of Science and Technology on Space Microwave (Grant Nos. HTKJ2022KL504009 and HTKJ2022KL5040010).
Abstract: With the increase in the quantity and scale of Static Random-Access Memory Field Programmable Gate Arrays (SRAM-based FPGAs) for aerospace applications, the volume of FPGA configuration bit files that must be stored has increased dramatically. The use of compression techniques for these bitstream files is emerging as a key strategy to alleviate the burden on storage resources. Due to the severe resource constraints of space-based electronics and the unique application environment, the simplicity, efficiency and robustness of the decompression circuitry are also key design considerations. Through comparative analysis of current bitstream file compression technologies, this research suggests that the Lempel-Ziv-Oberhumer (LZO) compression algorithm is more suitable for satellite applications. This paper also delves into the compression process and format of the LZO compression algorithm, as well as the inherent characteristics of configuration bitstream files. We propose an improved algorithm based on LZO for bitstream file compression, which optimises the compression process by refining the format and reducing the offset. Furthermore, a low-cost, robust decompression hardware architecture is proposed based on this method. Experimental results show that the compression speed of the improved LZO algorithm is increased by 3%, the decompression hardware cost is reduced by approximately 60%, and the compression ratio is slightly reduced by 0.47%.
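To make the offset-reduction idea concrete, the sketch below implements a toy LZ77-style compressor and decompressor whose match offsets are capped by a small window, which is the property that lets a hardware decompressor hold a smaller match buffer. The window size, token format and match-length limits are illustrative assumptions, not the paper's actual improved LZO format.

```python
# Minimal sketch of an LZ77-style compressor with a restricted offset window,
# illustrating (not reproducing) the idea of reducing the maximum offset so
# the decompressor needs a smaller history buffer. WINDOW, MIN_MATCH and the
# token format are assumptions for illustration only.

WINDOW = 2048       # assumed reduced offset range
MIN_MATCH = 3
MAX_MATCH = 255

def compress(data: bytes) -> list:
    """Return a list of tokens: ('lit', byte) or ('match', offset, length)."""
    tokens, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        start = max(0, i - WINDOW)
        # naive search for the longest match inside the reduced window
        for j in range(start, i):
            length = 0
            while (length < MAX_MATCH and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        if best_len >= MIN_MATCH:
            tokens.append(('match', best_off, best_len))
            i += best_len
        else:
            tokens.append(('lit', data[i]))
            i += 1
    return tokens

def decompress(tokens) -> bytes:
    out = bytearray()
    for t in tokens:
        if t[0] == 'lit':
            out.append(t[1])
        else:
            _, off, length = t
            for _ in range(length):      # byte-by-byte copy handles overlapping matches
                out.append(out[-off])
    return bytes(out)

if __name__ == "__main__":
    sample = b"\x00\xFF" * 64 + bytes(range(32)) * 4   # bitstream-like repetitive data
    assert decompress(compress(sample)) == sample
```

Shrinking WINDOW trades a slightly lower compression ratio for a smaller, simpler decompression buffer, which mirrors the trade-off reported in the abstract.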
Funding: Supported by the Fifth Batch of Innovation Teams of Wuzhou Meteorological Bureau ("Wuzhou Innovation Team for Enhancing the Comprehensive Meteorological Observation Ability through Digitization and Intelligence") and the Wuzhou Science and Technology Planning Project (202402122, 202402119).
Abstract: [Objective] In response to the issue of insufficient integrity in hourly routine meteorological element data files, this paper aims to improve the availability and reliability of data files and to provide high-quality data file support for meteorological forecasting and services. [Method] An efficient and accurate method for data file quality control and fusion processing is developed. By locating the missing measurement times, data are extracted from the "AWZ.db" database and the minute routine meteorological element data file, and merged into the hourly routine meteorological element data file. [Result] Data processing efficiency and accuracy are significantly improved, and the problem of incomplete hourly routine meteorological element data files is solved. At the same time, the paper emphasizes the importance of ensuring the accuracy of the files used and of carefully checking and verifying the fusion results, and proposes strategies to improve data quality. [Conclusion] This method provides convenience for observation personnel and effectively improves the integrity and accuracy of data files. In the future, it is expected to provide more reliable data support for meteorological forecasting and services.
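As a rough illustration of the locate-and-merge workflow, the sketch below finds the hours missing from an hourly record set and back-fills them from an auxiliary store. It assumes the "AWZ.db" store can be queried through SQL; the table name (hourly_elements), the column names and the in-memory record layout are hypothetical, and the filled records would still need the manual verification the paper calls for.

```python
# Sketch of quality control and fusion: locate hours missing from the hourly
# element file, then back-fill them from an auxiliary database. Table and
# column names are hypothetical stand-ins for the real AWZ.db schema.

import sqlite3
from datetime import datetime, timedelta

def missing_hours(records: dict, day: datetime) -> list:
    """Return the hourly timestamps of a day that have no record."""
    return [day + timedelta(hours=h) for h in range(24)
            if (day + timedelta(hours=h)) not in records]

def backfill(records: dict, gaps: list, db_path: str = "AWZ.db") -> dict:
    """Fill gaps from the database; unfilled gaps are left for manual checking."""
    con = sqlite3.connect(db_path)
    cur = con.cursor()
    for ts in gaps:
        cur.execute(
            "SELECT temperature, pressure, humidity "
            "FROM hourly_elements WHERE obs_time = ?",
            (ts.strftime("%Y-%m-%d %H:00:00"),))
        row = cur.fetchone()
        if row is not None:              # verify before merging, as the paper stresses
            records[ts] = {"temperature": row[0],
                           "pressure": row[1],
                           "humidity": row[2]}
    con.close()
    return records
```

For example, missing_hours(records, datetime(2024, 5, 1)) lists the hours of 1 May 2024 absent from records, and backfill then attempts to fill exactly those timestamps.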
Funding: Supported by the fund from the Tianjin Municipal Science and Technology Bureau (22JCYBJC01390).
Abstract: At present, the polymerase chain reaction (PCR) amplification-based file retrieval method is the most commonly used and effective means of DNA file retrieval. The number of orthogonal primers limits the number of files that can be accurately accessed, which in turn affects the density in a single oligo pool of digital DNA storage. In this paper, a multi-mode DNA sequence design method based on PCR file retrieval in a single oligonucleotide pool is proposed for high-capacity DNA data storage. Firstly, by analyzing the maximum number of orthogonal primers at each predicted primer length, it was found that the relationship between primer length and the maximum available primer number does not increase linearly, and the maximum number of orthogonal primers is on the order of 10^4. Next, this paper analyzes the maximum address space capacity of DNA sequences with different types of primer binding sites for file mapping. In the case where the capacity of the primer library is R (where R is even), the number of address spaces that can be mapped by the single-primer DNA sequence design scheme proposed in this paper is four times that of the previous one, and the two-level primer DNA sequence design scheme can reach [R/2·(R/2-1)]^2 times. Finally, a multi-mode DNA sequence generation method is designed based on the number of files to be stored in the oligonucleotide pool, in order to meet the requirements of the random retrieval of target files in an oligonucleotide pool with large-scale file numbers. The performance of the primers generated by the orthogonal primer library generator proposed in this paper is verified, and the average Gibbs free energy of the most stable heterodimer formed between the produced orthogonal primers is −1 kcal·(mol·L^(−1))^(−1) (1 kcal = 4.184 kJ). At the same time, by selectively PCR-amplifying the DNA sequences of the two-level primer binding sites for random access, the target sequence can be accurately read with a minimum of 10^3 reads when the primer binding site sequences at different positions are mutually different. This paper provides a pipeline for orthogonal primer library generation and multi-mode mapping schemes between files and primers, which can help achieve precise random access to files in large-scale DNA oligo pools.
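The snippet below simply evaluates the quoted address-space formula for the two-level primer scheme, [R/2·(R/2-1)]^2, at a few primer-library sizes up to the ~10^4 scale reported for orthogonal primers. The baseline shown for comparison (one forward/reverse primer pair per file, i.e. (R/2)^2 combinations) is an assumption made here for illustration, not a figure from the paper.

```python
# Evaluate the two-level primer address-space capacity quoted in the abstract
# against an assumed classic one-pair-per-file baseline.

def two_level_capacity(R: int) -> int:
    assert R % 2 == 0, "R is assumed even, as in the paper"
    half = R // 2
    return (half * (half - 1)) ** 2

def classic_pair_capacity(R: int) -> int:      # assumed baseline: R/2 forward x R/2 reverse pairs
    half = R // 2
    return half * half

for R in (100, 1_000, 10_000):                 # primer libraries up to the ~10^4 scale
    print(f"R={R:>6}: two-level={two_level_capacity(R):.3e}, "
          f"baseline pairs={classic_pair_capacity(R):.3e}")
```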
Funding: Supported by the Ongoing Research Funding program (ORF-2025-636), King Saud University, Riyadh, Saudi Arabia.
Abstract: The healthcare sector involves many steps to ensure efficient care for patients, such as appointment scheduling, consultation plans, online follow-up, and more. However, existing healthcare mechanisms are unable to accommodate a large number of patients, as these systems are centralized and hence vulnerable to various issues, including single points of failure, performance bottlenecks, and substantial monetary costs. Furthermore, these mechanisms are unable to provide an efficient means of protecting data against unauthorized access. To address these issues, this study proposes a blockchain-based authentication mechanism that authenticates all healthcare stakeholders based on their credentials. It also utilizes the capabilities of the InterPlanetary File System (IPFS) to store Electronic Health Records (EHRs) in a distributed way. The IPFS platform addresses not only the issue of high data storage costs on the blockchain but also the single point of failure of the traditional centralized data storage model. The simulation results demonstrate that our model outperforms the benchmark schemes and provides an efficient mechanism for managing healthcare sector operations. The results show that it takes approximately 3.5 s for the smart contract to authenticate a node and provide it with the decryption key, which is ultimately used to access the data. The simulation results also show that the proposed model outperforms existing solutions in terms of execution time and scalability: our smart contract executes around 9000 transactions in just 6.5 s, while benchmark schemes require approximately 7 s for the same number of transactions.
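The self-contained sketch below illustrates the off-chain/on-chain split described above: the EHR body is content-addressed (as IPFS would do) and only the identifier plus access metadata is kept in an on-chain record. The dict standing in for the smart contract, the authorization check, and the SHA-256 "content id" are illustrative stand-ins, not the paper's actual contract logic or the real IPFS CID format.

```python
# Stand-in model of blockchain-authenticated access to IPFS-stored EHRs:
# the record body lives off-chain under a content hash, while the hash and
# the list of authorized parties live in the (simulated) contract storage.

import hashlib, json, time

off_chain_store = {}        # stands in for IPFS: content id -> EHR bytes
on_chain_record = {}        # stands in for the smart contract's storage

def store_ehr(patient_id: str, ehr: dict, authorized: set) -> str:
    blob = json.dumps(ehr, sort_keys=True).encode()   # would be encrypted before upload
    cid = hashlib.sha256(blob).hexdigest()            # IPFS would return a real CID here
    off_chain_store[cid] = blob
    on_chain_record[patient_id] = {"cid": cid,
                                   "authorized": authorized,
                                   "timestamp": time.time()}
    return cid

def fetch_ehr(patient_id: str, requester: str) -> dict:
    record = on_chain_record[patient_id]
    if requester not in record["authorized"]:          # credential check done by the contract
        raise PermissionError("requester not authorized for this EHR")
    blob = off_chain_store[record["cid"]]
    assert hashlib.sha256(blob).hexdigest() == record["cid"]  # integrity via content hash
    return json.loads(blob)

cid = store_ehr("patient-42", {"diagnosis": "hypertension"}, {"dr-lee"})
print(fetch_ehr("patient-42", "dr-lee"))
```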
Funding: Supported in part by the Natural Science Foundation of Hubei Province of China under Grant 2023AFB016, the 2022 Opening Fund for Hubei Key Laboratory of Intelligent Vision Based Monitoring for Hydroelectric Engineering under Grant 2022SDSJ02, and the Construction Fund for Hubei Key Laboratory of Intelligent Vision Based Monitoring for Hydroelectric Engineering under Grant 2019ZYYD007.
Abstract: Images and videos play an increasingly vital role in daily life and are widely utilized as key evidentiary sources in judicial investigations and forensic analysis. Simultaneously, advancements in image and video processing technologies have made powerful editing tools, such as Deepfakes, widely available, enabling anyone to easily create manipulated or fake visual content, which poses an enormous threat to social security and public trust. To verify the authenticity and integrity of images and videos, numerous approaches have been proposed; these are primarily based on content analysis, and their effectiveness is susceptible to interference from various image or video post-processing operations. Recent research has highlighted the potential of file container analysis as a promising forensic approach that offers efficient and interpretable results. However, there is still a lack of review articles on this kind of approach. In order to fill this gap, we present a comprehensive review of file container-based image and video forensics in this paper. Specifically, we categorize the existing methods into two distinct stages, qualitative analysis and quantitative analysis. In addition, an overall framework is proposed to organize the existing approaches. Then, the advantages and disadvantages of the schemes used across different forensic tasks are discussed. Finally, we outline the trends in this research area, aiming to provide valuable insights and technical guidance for future research.
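To show what container-level (rather than content-level) analysis looks like in practice, the sketch below lists the top-level boxes ("atoms") of an ISO-BMFF/MP4 file; box presence, order and duplication are typical qualitative cues in file-container forensics. This is a generic parser sketch following the published MP4 box layout, not a method from any specific paper covered by the review.

```python
# List the top-level boxes of an MP4/MOV file: each box starts with a 4-byte
# big-endian size and a 4-byte type; size 1 means a 64-bit size follows, and
# size 0 means the box extends to the end of the file.

import struct
import sys

def top_level_boxes(path: str):
    """Yield (box_type, size) for the top-level boxes of an MP4/MOV file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            if size == 1:                        # 64-bit extended size follows
                size = struct.unpack(">Q", f.read(8))[0]
                payload = size - 16
            elif size == 0:                      # box extends to end of file
                yield box_type.decode("latin-1"), None
                break
            else:
                payload = size - 8
            yield box_type.decode("latin-1"), size
            f.seek(payload, 1)                   # skip the box body

if __name__ == "__main__":
    # An original camera file typically shows a stable box order such as
    # ftyp, moov, mdat, while re-encoded or edited files often differ.
    for name, size in top_level_boxes(sys.argv[1]):
        print(f"{name:<6} {size}")
```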
Funding: Supported by the National High Technology Research and Development Program of China (863 Program) (2015AA016006) and the National Natural Science Foundation of China (60903220).
Abstract: The large scale and distribution of cloud computing storage have become the major challenges for file extraction in cloud forensics. Current disk forensic methods do not adapt well to cloud computing, and forensic research on distributed file systems is inadequate. To address these problems, this paper uses the Hadoop Distributed File System (HDFS) as a case study and proposes a forensic method for efficient file extraction based on three-level (3L) mapping. First, HDFS is analyzed from its overall architecture down to the local file system. Second, the 3L mapping of an HDFS file from the HDFS namespace to data blocks on the local file system is established, and a recovery method for deleted files based on 3L mapping is presented. Third, a multi-node Hadoop framework is set up on a Xen virtualization platform to test the performance of the method. The results indicate that the proposed method succeeds in efficiently locating large files stored across data nodes, makes selective images of disk data, and achieves a high recovery rate for deleted files.
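A minimal sketch of the three-level mapping idea is given below: (1) HDFS path to block IDs (as recorded in the namenode's fsimage metadata), (2) block ID to hosting data node, and (3) block ID to the block file on that node's local file system. The search for block files named "blk_<id>" under the data node's data directory reflects the common Hadoop on-disk layout, but the two metadata dictionaries are illustrative inputs, not output from the paper's actual tool.

```python
# Resolve an HDFS path to the local block files that hold its data,
# following the 3L (namespace -> data node -> local block file) mapping.

import os

# level 1: namespace metadata (would be parsed from fsimage / edit logs)
path_to_blocks = {"/user/alice/evidence.bin": [1073741825, 1073741826]}
# level 2: block reports (block id -> data node data directory)
block_to_datadir = {1073741825: "/data/dn1/current",
                    1073741826: "/data/dn2/current"}

def locate_block_file(block_id: int) -> str | None:
    """Level 3: find the local block file for a block id on its data node."""
    root = block_to_datadir[block_id]
    target = f"blk_{block_id}"
    for dirpath, _dirs, files in os.walk(root):
        if target in files:
            return os.path.join(dirpath, target)
    return None                       # block deleted or node image incomplete

def locate_file(hdfs_path: str) -> list:
    """Map an HDFS path to the list of local block files that make it up."""
    return [locate_block_file(b) for b in path_to_blocks[hdfs_path]]

if __name__ == "__main__":
    print(locate_file("/user/alice/evidence.bin"))
```

Selective imaging then only needs to copy the located block files (and, for deleted files, the still-recoverable blocks) instead of imaging every disk in full.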
Abstract: This paper discusses in depth the design method of a File Transfer System (FTS) based on the File Transfer, Access and Management (FTAM) protocol standard, and probes into the construction principle of the Virtual Filestore (VFS). Finally, we introduce the implementation and the key technologies of the FTS system.