With the increase in the quantity and scale of Static Random-Access Memory Field Programmable Gate Arrays (SRAM-based FPGAs) for aerospace applications, the volume of FPGA configuration bit files that must be stored has increased dramatically. The use of compression techniques for these bitstream files is emerging as a key strategy to alleviate the burden on storage resources. Due to the severe resource constraints of space-based electronics and the unique application environment, the simplicity, efficiency and robustness of the decompression circuitry is also a key design consideration. Through comparative analysis of current bitstream file compression technologies, this research suggests that the Lempel-Ziv-Oberhumer (LZO) compression algorithm is more suitable for satellite applications. This paper also delves into the compression process and format of the LZO compression algorithm, as well as the inherent characteristics of configuration bitstream files. We propose an improved algorithm based on LZO for bitstream file compression, which optimises the compression process by refining the format and reducing the offset. Furthermore, a low-cost, robust decompression hardware architecture is proposed based on this method. Experimental results show that the compression speed of the improved LZO algorithm is increased by 3%, the decompression hardware cost is reduced by approximately 60%, and the compression ratio is slightly reduced by 0.47%.
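The core of an LZO-class decoder is a single pass over a stream of literal and back-reference tokens, which is what keeps the decompression hardware small. Below is a minimal software sketch of that copy loop, using a simplified token format invented here for illustration (the real LZO1X encoding packs offsets and lengths into bit fields):

```python
def decompress(tokens):
    """Decode a simplified LZ77-style token stream.

    A token is either ('lit', data) -- literal bytes copied verbatim --
    or ('match', offset, length) -- copy `length` bytes starting
    `offset` bytes back in the already-decoded output. Copies are done
    byte by byte so that overlapping matches (offset < length), which
    LZO-family formats rely on for run-like data, work correctly.
    """
    out = bytearray()
    for tok in tokens:
        if tok[0] == 'lit':
            out += tok[1]
        else:
            _, offset, length = tok
            start = len(out) - offset
            for i in range(length):
                out.append(out[start + i])
    return bytes(out)
```

A match may reach back into bytes it has itself just produced, which is why a hardware decompressor for such a format needs little more than an output buffer and a copy engine.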
The healthcare sector involves many steps to ensure efficient care for patients, such as appointment scheduling, consultation plans, online follow-up, and more. However, existing healthcare mechanisms are unable to facilitate a large number of patients, as these systems are centralized and hence vulnerable to various issues, including single points of failure, performance bottlenecks, and substantial monetary costs. Furthermore, these mechanisms are unable to provide an efficient way of protecting data against unauthorized access. To address these issues, this study proposes a blockchain-based authentication mechanism that authenticates all healthcare stakeholders based on their credentials. It also utilizes the capabilities of the InterPlanetary File System (IPFS) to store Electronic Health Records (EHRs) in a distributed way. The IPFS platform addresses not only the issue of high data storage costs on the blockchain but also the single point of failure in the traditional centralized data storage model. The simulation results demonstrate that our model outperforms the benchmark schemes and provides an efficient mechanism for managing healthcare sector operations: it takes approximately 3.5 s for the smart contract to authenticate a node and provide it with the decryption key that is ultimately used to access the data. The results also show that our proposed model outperforms existing solutions in terms of execution time and scalability: our smart contract executes around 9000 transactions in just 6.5 s, while benchmark schemes require approximately 7 s for the same number of transactions.
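The division of labour described above, with bulky EHR data off-chain and only a short digest on-chain, can be sketched with a toy content-addressed store. `OffChainStore`, the `ledger` dict, and the record format are illustrative stand-ins, not the paper's actual smart contract or the real IPFS API:

```python
import hashlib

class OffChainStore:
    """Toy content-addressed store in the spirit of IPFS: records are
    keyed by the hash of their content, so the chain only needs to keep
    the short digest while the bulky EHR lives off-chain."""

    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()  # stand-in for a real IPFS CID
        self._blobs[cid] = data
        return cid

    def get(self, cid: str) -> bytes:
        return self._blobs[cid]

# The on-chain side keeps only a small (patient_id -> cid) mapping.
ledger = {}
store = OffChainStore()
cid = store.put(b'{"patient": "p1", "record": "..."}')
ledger["p1"] = cid
```

Any tampering with the off-chain record changes its hash, so the on-chain digest doubles as an integrity check when the record is retrieved.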
[Objective] In response to the issue of insufficient integrity in hourly routine meteorological element data files, this paper aims to improve the availability and reliability of data files and provide high-quality data file support for meteorological forecasting and services. [Method] An efficient and accurate method for data file quality control and fusion processing is developed. By locating the missed measurement times, data are extracted from the "AWZ.db" database and the minute routine meteorological element data file, and merged into the hourly routine meteorological element data file. [Result] Data processing efficiency and accuracy are significantly improved, and the problem of incomplete hourly routine meteorological element data files is solved. The paper also emphasizes the importance of ensuring the accuracy of the files used and of carefully checking and verifying the fusion results, and proposes strategies to improve data quality. [Conclusion] This method provides convenience for observation personnel and effectively improves the integrity and accuracy of data files. In the future, it is expected to provide more reliable data support for meteorological forecasting and services.
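The fusion step, locating a missed hourly measurement and back-filling it from minute-level records, can be sketched as follows. The data layout is a hypothetical simplification for illustration, not the actual AWZ.db schema:

```python
def fill_missing_hours(hourly, minute_records):
    """Fill gaps in an hourly series from minute-level records.

    `hourly` maps hour -> value, with None marking a missed
    measurement; `minute_records` maps (hour, minute) -> value.
    A missing hourly value is taken from the on-the-hour minute
    record when one is available, otherwise the gap is kept.
    """
    filled = dict(hourly)
    for hour, value in hourly.items():
        if value is None and (hour, 0) in minute_records:
            filled[hour] = minute_records[(hour, 0)]
    return filled
```

A production version would also verify the source file's own quality flags before trusting a back-filled value, in line with the checking the abstract emphasizes.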
At present, the polymerase chain reaction (PCR) amplification-based file retrieval method is the most commonly used and effective means of DNA file retrieval. The number of orthogonal primers limits the number of files that can be accurately accessed, which in turn affects the density of digital DNA storage in a single oligo pool. In this paper, a multi-mode DNA sequence design method based on PCR file retrieval in a single oligonucleotide pool is proposed for high-capacity DNA data storage. First, by analyzing the maximum number of orthogonal primers at each predicted primer length, it was found that the relationship between primer length and the maximum available primer number does not increase linearly, and the maximum number of orthogonal primers is on the order of 10^(4). Next, this paper analyzes the maximum address space capacity of DNA sequences with different types of primer binding sites for file mapping. In the case where the capacity of the primer library is R (where R is even), the number of address spaces that can be mapped by the single-primer DNA sequence design scheme proposed in this paper is four times that of the previous one, and the two-level primer DNA sequence design scheme can reach [R/2·(R/2−1)]^(2) times. Finally, a multi-mode DNA sequence generation method is designed based on the number of files to be stored in the oligonucleotide pool, in order to meet the requirements of the random retrieval of target files in an oligonucleotide pool with large-scale file numbers. The performance of the primers generated by the proposed orthogonal primer library generator is verified, and the average Gibbs free energy of the most stable heterodimer formed between the orthogonal primers produced is −1 kcal·mol^(−1) (1 kcal = 4.184 kJ). At the same time, by selectively PCR-amplifying the DNA sequences of the two-level primer binding sites for random access, the target sequence can be accurately read with a minimum of 10^(3) reads, when the primer binding site sequences at different positions are mutually different. This paper provides a pipeline for orthogonal primer library generation and multi-mode mapping schemes between files and primers, which can help achieve precise random access to files in large-scale DNA oligo pools.
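The address-space multipliers quoted above can be checked numerically. This helper simply evaluates the stated formulas for an even primer-library capacity R; it is an illustration of the claimed capacity gains, not the paper's sequence-design procedure:

```python
def address_space_gains(R):
    """Address-space multipliers stated in the abstract for a primer
    library of even capacity R: the single-primer design maps 4x the
    address spaces of the conventional primer-pair scheme, and the
    two-level design reaches (R/2 * (R/2 - 1))**2 times as many.
    """
    assert R % 2 == 0, "R is assumed to be even"
    half = R // 2
    single_primer_gain = 4
    two_level_gain = (half * (half - 1)) ** 2
    return single_primer_gain, two_level_gain
```

With R on the order of 10^4, as the primer-length analysis suggests, the quadratic two-level term dominates, which is why the two-level scheme supports random access over very large file counts.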
Images and videos play an increasingly vital role in daily life and are widely utilized as key evidentiary sources in judicial investigations and forensic analysis. Simultaneously, advancements in image and video processing technologies have facilitated the widespread availability of powerful editing tools, such as Deepfakes, enabling anyone to easily create manipulated or fake visual content, which poses an enormous threat to social security and public trust. To verify the authenticity and integrity of images and videos, numerous approaches have been proposed; these are primarily based on content analysis, and their effectiveness is susceptible to interference from various image or video post-processing operations. Recent research has highlighted the potential of file container analysis as a promising forensic approach that offers efficient and interpretable results. However, there is still a lack of review articles on this kind of approach. To fill this gap, we present a comprehensive review of file container-based image and video forensics in this paper. Specifically, we categorize the existing methods into two distinct stages: qualitative analysis and quantitative analysis. In addition, an overall framework is proposed to organize the existing approaches. Then, the advantages and disadvantages of the schemes used across different forensic tasks are discussed. Finally, we outline the trends in this research area, aiming to provide valuable insights and technical guidance for future research.
This paper presents a new framework for object-based classification of high-resolution hyperspectral data. This multi-step framework is based on the multi-resolution segmentation (MRS) and Random Forest classifier (RFC) algorithms. The first step is to determine the weights of the input features when using the object-based approach with MRS to process such images. Given the high number of input features, an automatic method is needed to estimate this parameter. We used the Variable Importance (VI), one of the outputs of the RFC, to determine the importance of each image band. Then, based on this parameter and other required parameters, the image is segmented into homogeneous regions. Finally, the RFC is applied to the characteristics of the segments to convert them into meaningful objects. The proposed method, as well as the conventional pixel-based RFC and Support Vector Machine (SVM) methods, was applied to three different hyperspectral datasets with various spectral and spatial characteristics. These data were acquired by the HyMap, the Airborne Prism Experiment (APEX), and the Compact Airborne Spectrographic Imager (CASI) hyperspectral sensors. The experimental results show that the proposed method is more consistent for land cover mapping in various areas. The overall classification accuracy (OA) obtained by the proposed method was 95.48, 86.57, and 84.29% for the HyMap, APEX, and CASI datasets, respectively. Moreover, this method showed better efficiency than the spectral-based classifications, as its OA was 5.67 and 3.75% higher than that of the conventional RFC and SVM classifiers, respectively.
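Using Variable Importance to weight the input bands for segmentation amounts to normalising the per-band VI scores into relative weights. A minimal sketch with made-up scores follows; in the actual framework the scores come from training the Random Forest, not from hand-entered values:

```python
def band_weights(importances):
    """Normalize per-band variable-importance scores (e.g. the VI
    output of a trained Random Forest) into segmentation weights that
    sum to 1, so more informative bands influence the multi-resolution
    segmentation more strongly. The scores here are illustrative.
    """
    total = sum(importances.values())
    return {band: score / total for band, score in importances.items()}
```

This automates a parameter that would otherwise require manual tuning for every dataset, which matters when the number of hyperspectral input features is large.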
Accurate crop distribution mapping is required for crop yield prediction and field management. Due to rapid progress in remote sensing technology, fine spatial resolution (FSR) remotely sensed imagery now offers great opportunities for mapping crop types in great detail. However, within-class variance can hamper attempts to discriminate crop classes at fine resolutions. Multi-temporal FSR remotely sensed imagery provides a means of increasing crop classification accuracy, although current methods do not exploit the available information fully. In this research, a novel Temporal Sequence Object-based Convolutional Neural Network (TS-OCNN) was proposed to classify agricultural crop type from FSR image time-series. An object-based CNN (OCNN) model was adopted in the TS-OCNN to classify images at the object level (i.e., segmented objects or crop parcels), thus maintaining the precise boundary information of crop parcels. The combination of image time-series was first utilized as the input to the OCNN model to produce an 'original' or baseline classification. Then the single-date images were fed automatically into the deep learning model scene-by-scene, in order of image acquisition date, to successively increase the crop classification accuracy. By doing so, the joint information in the FSR multi-temporal observations and the unique individual information from the single-date images were exploited comprehensively for crop classification. The effectiveness of the proposed approach was investigated using multi-temporal SAR and optical imagery, respectively, over two heterogeneous agricultural areas.
The experimental results demonstrated that the newly proposed TS-OCNN approach consistently increased crop classification accuracy, and achieved the greatest accuracies (82.68% and 87.40%) in comparison with state-of-the-art benchmark methods, including the object-based CNN (OCNN) (81.63% and 85.88%), object-based image analysis (OBIA) (78.21% and 84.83%), and the standard pixel-wise CNN (79.18% and 82.90%). The proposed approach is the first known attempt to explore simultaneously the joint information from image time-series and the unique information from single-date images for crop classification using a deep learning framework. The TS-OCNN, therefore, represents a new approach for agricultural landscape classification from multi-temporal FSR imagery. Moreover, it is readily generalizable to other landscapes (e.g., forest landscapes), with wide application prospects.
Efficient and accurate access to coastal land cover information is of great significance for marine disaster prevention and mitigation. Although the popular and common sensors of land resource satellites provide free and valuable images for mapping land cover, coastal areas often encounter significant cloud cover, especially in the tropics, which makes classification in those areas non-ideal. To solve this problem, we proposed a framework combining medium-resolution optical images and synthetic aperture radar (SAR) data with the recently popular object-based image analysis (OBIA) method, and used the Landsat Operational Land Imager (OLI) and Phased Array type L-band Synthetic Aperture Radar (PALSAR) images acquired in Singapore in 2017 as a case study. We designed experiments to confirm two critical factors of this framework: one is the segmentation scale, which determines the average object size, and the other is the classification feature. Accuracy assessments of the land cover indicated that the optimal segmentation scale was between 40 and 80, and that the combination of OLI and SAR features resulted in higher accuracy than any individual features, especially in areas with cloud cover. Based on the land cover generated by this framework, we assessed the vulnerability of Singapore to marine disasters in 2008 and 2017 and found that the high-vulnerability areas were mainly located in the southeast and increased by 118.97 km² over the past decade. To clarify the disaster response plan for different geographical environments, we classified risk based on altitude and distance from shore. The newly increased high-vulnerability regions within 4 km offshore and below 30 m above sea level are at high risk; these regions may need to focus on strengthening disaster prevention construction. This study serves as a typical example of using remote sensing techniques for the vulnerability assessment of marine disasters, especially in cloudy coastal areas.
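The risk rule stated above, that high-vulnerability land within 4 km offshore and below 30 m above sea level is high risk, can be expressed directly as a classification function. The "medium" and "low" fallback labels below are assumptions added for illustration; the abstract only defines the high-risk case:

```python
def risk_class(distance_km, altitude_m, high_vulnerability):
    """Classify risk following the paper's stated rule: land mapped as
    high-vulnerability that lies within 4 km of the shore and below
    30 m above sea level is high risk. Other label names are
    illustrative assumptions, not taken from the study.
    """
    if high_vulnerability and distance_km <= 4 and altitude_m < 30:
        return "high"
    if high_vulnerability:
        return "medium"
    return "low"
```

Such a rule is easy to apply per land cover polygon once distance-to-shore and elevation layers are available, which is how the framework turns a land cover map into a response-planning map.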
Gully feature mapping is an indispensable prerequisite for the monitoring and control of gully erosion, which is a widespread natural hazard. The increasing availability of high-resolution Digital Elevation Models (DEMs) and remote sensing imagery, combined with developed object-based methods, enables automatic gully feature mapping. However, few studies have specifically focused on gully feature mapping at different scales. In this study, an object-based approach to two-level gully feature mapping, covering gully-affected areas and bank gullies, was developed and tested on a 1-m DEM and Worldview-3 imagery of a catchment in the Chinese Loess Plateau. The methodology includes a sequence of data preparation, image segmentation, metric calculation, and random forest based classification. The results of the two-level mapping were based on a random forest model after investigating the effects of feature selection and the class-imbalance problem. Results show that the segmentation strategy adopted in this paper, which considers topographic information and an optimal parameter combination, can improve the segmentation results. The distribution of the gully-affected area is closely related to topographic information; for bank gully mapping, however, the spectral features are more dominant. The highest overall accuracy of the gully-affected area mapping was 93.06% with four topographic features. The highest overall accuracy of bank gully mapping was 78.5% when all features were adopted. The proposed approach is a creditable option for hierarchical mapping of gully feature information and is suitable for application in the hilly Loess Plateau region.
The Baltic Sea is a brackish, mediterranean sea located in the middle latitudes of Europe. It is seasonally covered with ice. The areas covered by ice during a typical winter are the Bothnian Bay, the Gulf of Finland and the Gulf of Riga. Sea ice plays an important role in dynamic and thermodynamic processes and also has a strong impact on the heat budget of the sea. A large part of transport also goes by sea, and ice charts are needed to make marine transport safe. Because of high cloudiness in the winter season and the small amount of light in the northern part of the Baltic Sea, radar data are the most important remote sensing source of sea ice information. The main goal of this study is classification of the Baltic sea ice cover using radar data. The ENVISAT ASAR (Advanced Synthetic Aperture Radar) acquires data in five different modes; in this study, ASAR Wide Swath Mode data were used. The Wide Swath Mode, using the ScanSAR technique, provides medium resolution images (150 m) over a swath of 405 km, at HH or VV polarization. Data from February 13th, February 24th and April 6th, 2011, representing three different sea ice situations, were chosen. OBIA (object-based image analysis) methods and texture parameters were used to create sea ice extent and sea ice concentration charts. With object-based methods, single sea ice floes can be separated within the ice pack and sea ice concentration can be calculated more accurately.
The detection of impervious surface (IS) in heterogeneous urban areas is one of the most challenging tasks in urban remote sensing. One of the limitations in IS detection at the parcel level is the lack of sufficient training data. In this study, a generic model of the spatial distribution of roof materials is considered to overcome this limitation. A generic model is proposed that is based on the spectral, spatial and textural information extracted from the available training data. An object-based approach is used to extract the information inherent in the image. Furthermore, linear discriminant analysis is used for dimensionality reduction and to discriminate between different spatial, spectral and textural attributes. The generic model is composed of a discriminant function based on linear combinations of the predictor variables that provide the best discrimination among the groups. The discriminant analysis result shows that of the 54 attributes extracted from the WorldView-2 image, only 13 attributes related to spatial, spectral and textural information are useful for discriminating different roof materials. Finally, this model is applied to different WorldView-2 images from different areas, demonstrating that it has good potential to predict roof materials from WorldView-2 images without using training data.
An object-based approach is proposed for land cover classification using optimal polarimetric parameters. The ability to identify targets is effectively enhanced by the integration of SAR and optical images. The innovation of the presented method can be summarized in two main points: ① polarimetric parameters (H-A-Alpha decomposition) are estimated using the optical image as a driver; ② a multi-resolution segmentation based on the optical image only is deployed to refine the classification results. The proposed method is verified using Sentinel-1/2 datasets over the Bakersfield area, California. The results are compared against those from pixel-based SVM classification using ground truth from the National Land Cover Database (NLCD). A detailed accuracy assessment over seven classes shows that the proposed method outperforms the conventional approach by around 10%, with an overall accuracy of 92.6% over regions with rich texture.
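The entropy component of the H-A-Alpha decomposition follows a standard formula over the eigenvalues of the polarimetric coherency matrix. A minimal sketch is given below; this is the textbook Cloude-Pottier definition, not the paper's optical-driven estimation step:

```python
import math

def polarimetric_entropy(eigenvalues):
    """Scattering entropy H from the eigenvalues of the 3x3 coherency
    matrix (Cloude-Pottier H-A-Alpha decomposition):
    p_i = lam_i / sum(lam), H = -sum(p_i * log3(p_i)), so H lies in
    [0, 1]: 0 for a single dominant scattering mechanism, 1 for fully
    random scattering.
    """
    total = sum(eigenvalues)
    h = 0.0
    for lam in eigenvalues:
        p = lam / total
        if p > 0:
            h -= p * math.log(p, 3)
    return h
```

Because H depends only on the eigenvalue spectrum, any procedure that improves the eigenvalue estimates, such as the optical-image-driven averaging the abstract describes, directly improves the H-A-Alpha parameters fed to the classifier.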
Many studies have compared object-based classification (OBC) and pixel-based classification (PBC), particularly for classifying high-resolution satellite images. VNREDSat-1 is the first optical remote sensing satellite of Vietnam, with resolutions of 2.5 m (panchromatic) and 10 m (multispectral). The objective of this research is to compare the two classification approaches using VNREDSat-1 imagery for mapping mangrove forest in Vien An Dong commune, Ngoc Hien district, Ca Mau province. The ISODATA algorithm (for PBC) and a membership function classifier (for OBC) were chosen to classify the same image. The results show that the overall accuracies of OBC and PBC are 73% and 62.16%, respectively, and that OBC also solved the "salt and pepper" effect, which is the main issue of PBC. Therefore, OBC appears to be the better approach for classifying VNREDSat-1 imagery for mapping mangrove forest in Ngoc Hien district.
With the deterioration of the environment, it is imperative to protect coastal wetlands. Using multi-source remote sensing data and object-based hierarchical classification to classify coastal wetlands is an effective method. The object-based hierarchical classification using remote sensing indices (OBH-RSI) for coastal wetland is proposed to achieve fine classification of coastal wetland. First, the original categories are divided into four groups according to the category characteristics. Second, the training and test maps of each group are extracted according to the remote sensing indices. Third, the four groups are passed through the classifier in order. Finally, the results of the four groups are combined to get the final classification result map. The experimental results demonstrate that the overall accuracy, average accuracy and kappa coefficient of the proposed strategy are over 94% using the Yellow River Delta dataset.
Forests are of great significance for overall development, and sustainability planning focuses heavily on them, so it is urgent to obtain their full distribution, stock volume and other related information; the forest inventory program is therefore on our schedule. Aiming at the problem of extracting dominant tree species, we tested the currently popular method of object-based analysis. Based on ALOS image data, we combined multi-resolution segmentation in the eCognition software with a fuzzy classification algorithm. Through analyzing the segmentation results, we extracted the spruce, pine, birch and oak of the study area. Both spectral and spatial characteristics were derived from those objects, and with the help of the GLCM we obtained the differences between the species. We used a confusion matrix to assess classification accuracy against the actual ground data, and the method showed a comparatively good precision of 87%, with a kappa coefficient of 0.837.
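The reported figures (overall precision 87%, kappa 0.837) come from a confusion matrix. Cohen's kappa can be computed from such a matrix as follows; the example matrix in the test is made up for illustration, not the study's data:

```python
def kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows = reference
    classes, columns = classified classes):
    kappa = (po - pe) / (1 - pe), where po is the observed agreement
    (diagonal fraction) and pe is the chance agreement expected from
    the row and column marginals.
    """
    n = sum(sum(row) for row in confusion)
    po = sum(confusion[i][i] for i in range(len(confusion))) / n
    pe = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / (n * n)
    return (po - pe) / (1 - pe)
```

Kappa discounts the agreement that would occur by chance alone, which is why it is reported alongside overall accuracy in classification assessments like this one.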
Mapping regional spatial patterns of coral reef geomorphology provides the primary information needed to understand the constructive processes in the reef ecosystem. However, this work is challenged by the comparatively low accuracy of pixel-based image classification methods. In this paper, an object-based image analysis (OBIA) method was presented to map the intra-reef geomorphology of coral reefs in the Xisha Islands, China using Landsat 8 satellite imagery. Following the work of the Millennium Coral Reef Mapping Project, a regional reef class hierarchy with ten geomorphic classes was first defined. Then, incorporating the hierarchical concept and integrating the spectral information with additional spatial information such as context, shape and contextual relationships, a large-scale geomorphic map was produced by OBIA with accuracies generally above 80%. Although the robustness of OBIA has been validated in applications of coral reef mapping from individual reefs to a reef system in this paper, further work is still required to improve its transferability.
The majority of the population and economic activity of the northern half of Vietnam is clustered in the Red River Delta, and about half of the country's rice production takes place here. There are significant problems associated with its geographical position and the intensive exploitation of resources by an overabundant population (population density of 962 inhabitants/km²). Some thirty years after the economic liberalization and the opening of the country to international markets, agricultural land use patterns in the Red River Delta, particularly in the coastal area, have undergone many changes. Remote sensing is a particularly powerful tool for processing and providing spatial information for monitoring land use changes. The main methodological objective is to find a solution for processing the many heterogeneous coastal land use parameters, so as to describe coastal land use in all its complexity, specifically by making use of the latest European satellite data (Sentinel-2). This complexity is due to local variations in ecological conditions, but also to anthropogenic factors that directly and indirectly influence land use dynamics. The methodological objective was to develop a new Geographic Object-based Image Analysis (GEOBIA) approach for mapping coastal areas using Sentinel-2 and Landsat 8 data. By developing a new segmentation accuracy measure, this study determined that segmentation accuracies decrease with increasing segmentation scales and that the negative impact of under-segmentation errors significantly increases at large scales. An Estimation of Scale Parameter (ESP) tool was then used to determine the optimal segmentation parameter values. A popular machine learning algorithm (Random Forests, RF) is used.
For all classification algorithms, an increase in overall accuracy was observed with the full synergistic combination of the available data sets.
This paper proposes an unequal error protection (UEP) coding method based on expanding window fountain (EWF) codes to improve the transmission performance of three-dimensional (3D) audio. Unlike transmission schemes that apply equal error protection (EEP) to all 3D audio objects, the proposed approach extracts the important audio object and gives it more protection, while the normal audio objects receive comparatively less. Objective and subjective experiments have shown that the proposed UEP method achieves better performance than the EEP method: the bit error rate (BER) of the important audio object decreases from 10^(–3) to 10^(–4), and the subjective quality of UEP is better than that of EEP by 14%.
Funding: supported in part by the National Key Laboratory of Science and Technology on Space Microwave (Grant Nos. HTKJ2022KL504009 and HTKJ2022KL5040010).
Funding: supported by the Ongoing Research Funding program (ORF-2025-636), King Saud University, Riyadh, Saudi Arabia.
Funding: Supported by the Fifth Batch of Innovation Teams of Wuzhou Meteorological Bureau, "Wuzhou Innovation Team for Enhancing the Comprehensive Meteorological Observation Ability through Digitization and Intelligence", and the Wuzhou Science and Technology Planning Project (202402122, 202402119).
Abstract: [Objective] In response to the issue of insufficient integrity in hourly routine meteorological element data files, this paper aims to improve the availability and reliability of data files and provide high-quality data file support for meteorological forecasting and services. [Method] An efficient and accurate method for data file quality control and fusion processing is developed. By locating the times of missing measurements, data are extracted from the "AWZ.db" database and the minute routine meteorological element data file, and merged into the hourly routine meteorological element data file. [Result] Data processing efficiency and accuracy are significantly improved, and the problem of incomplete hourly routine meteorological element data files is solved. At the same time, the paper emphasizes the importance of ensuring the accuracy of the files used and of carefully checking and verifying the fusion results, and proposes strategies to improve data quality. [Conclusion] This method provides convenience for observation personnel and effectively improves the integrity and accuracy of data files. In the future, it is expected to provide more reliable data support for meteorological forecasting and services.
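The locate-and-merge step can be sketched as follows. The record structure is hypothetical (a dict keyed by timestamp), since the actual AWZ.db schema is not described in the abstract:

```python
from datetime import datetime, timedelta

def missing_hours(records, start, end):
    """Locate hourly timestamps absent from the hourly element file."""
    have = set(records)
    t, gaps = start, []
    while t <= end:
        if t not in have:
            gaps.append(t)
        t += timedelta(hours=1)
    return gaps

def fuse(hourly, minute_data, start, end):
    """Fill each missing hour from the corresponding on-the-hour minute
    record, mirroring the merge of database/minute files into the hourly
    routine element file."""
    fused = dict(hourly)
    for t in missing_hours(hourly, start, end):
        if t in minute_data:
            fused[t] = minute_data[t]
    return fused
```

As the abstract stresses, a real pipeline would follow this merge with verification of the fused values against quality-control limits before the file is released.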
Funding: Supported by a fund from the Tianjin Municipal Science and Technology Bureau (22JCYBJC01390).
Abstract: At present, the polymerase chain reaction (PCR) amplification-based file retrieval method is the most commonly used and effective means of DNA file retrieval. The number of orthogonal primers limits the number of files that can be accurately accessed, which in turn affects the density of a single oligo pool in digital DNA storage. In this paper, a multi-mode DNA sequence design method based on PCR file retrieval in a single oligonucleotide pool is proposed for high-capacity DNA data storage. Firstly, by analyzing the maximum number of orthogonal primers at each predicted primer length, it was found that the relationship between primer length and the maximum available primer number does not increase linearly, and the maximum number of orthogonal primers is on the order of 10^4. Next, this paper analyzes the maximum address space capacity of DNA sequences with different types of primer binding sites for file mapping. Where the capacity of the primer library is R (R even), the number of address spaces that can be mapped by the single-primer DNA sequence design scheme proposed in this paper is four times that of the previous one, and the two-level primer DNA sequence design scheme can reach [R/2·(R/2−1)]^2 times. Finally, a multi-mode DNA sequence generation method is designed based on the number of files to be stored in the oligonucleotide pool, in order to meet the requirements of random retrieval of target files in an oligonucleotide pool with a large number of files. The performance of the primers generated by the proposed orthogonal primer library generator is verified: the average Gibbs free energy of the most stable heterodimer formed between the generated orthogonal primers is −1 kcal·mol^−1 (1 kcal = 4.184 kJ). At the same time, by selectively PCR-amplifying the DNA sequences of the two-level primer binding sites for random access, the target sequence can be accurately read with a minimum of 10^3 reads when the primer binding site sequences at different positions are mutually distinct. This paper provides a pipeline for orthogonal primer library generation and multi-mode mapping schemes between files and primers, which can help achieve precise random access to files in large-scale DNA oligo pools.
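The capacity figures quoted above can be checked with a little arithmetic. The baseline used here (one dedicated forward/reverse primer pair per file, giving R/2 addressable files) is our reading of the "previous" scheme, so treat it as an assumption:

```python
def baseline_files(R: int) -> int:
    # Assumed conventional retrieval: each file consumes a dedicated
    # forward/reverse primer pair, so an R-primer library addresses R/2 files.
    assert R % 2 == 0
    return R // 2

def single_primer_files(R: int) -> int:
    # The abstract states the single-primer scheme maps four times as many
    # address spaces as the previous one.
    return 4 * baseline_files(R)

def two_level_files(R: int) -> int:
    # Two-level primer scheme from the abstract: [R/2 * (R/2 - 1)]^2.
    half = R // 2
    return (half * (half - 1)) ** 2
```

With R on the order of 10^4 orthogonal primers, the two-level count is astronomically larger than the linear baseline, which is the point of the multi-mode design.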
Funding: Supported in part by the Natural Science Foundation of Hubei Province of China under Grant 2023AFB016, the 2022 Opening Fund for the Hubei Key Laboratory of Intelligent Vision Based Monitoring for Hydroelectric Engineering under Grant 2022SDSJ02, and the Construction Fund for the Hubei Key Laboratory of Intelligent Vision Based Monitoring for Hydroelectric Engineering under Grant 2019ZYYD007.
Abstract: Images and videos play an increasingly vital role in daily life and are widely utilized as key evidentiary sources in judicial investigations and forensic analysis. Simultaneously, advancements in image and video processing technologies have made powerful editing tools such as Deepfakes widely available, enabling anyone to easily create manipulated or fake visual content, which poses an enormous threat to social security and public trust. To verify the authenticity and integrity of images and videos, numerous approaches have been proposed; these are primarily based on content analysis, and their effectiveness is susceptible to interference from various image or video post-processing operations. Recent research has highlighted the potential of file-container analysis as a promising forensic approach that offers efficient and interpretable results. However, there is still a lack of review articles on this kind of approach. To fill this gap, we present a comprehensive review of file-container-based image and video forensics in this paper. Specifically, we categorize the existing methods into two distinct stages: qualitative analysis and quantitative analysis. In addition, an overall framework is proposed to organize the existing approaches. Then, the advantages and disadvantages of the schemes used across different forensic tasks are discussed. Finally, we outline the trends in this research area, aiming to provide valuable insights and technical guidance for future research.
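At its simplest, the qualitative side of container analysis inspects the file's structure rather than its pixels: for instance, the signature ("magic") bytes at the start of the stream, which a visual post-processing operation leaves untouched. A minimal sniffer, far short of a real forensic tool that parses the full container tree:

```python
# Signature bytes of a few common image/video containers. A real container
# forensic tool parses the full structure (boxes, atoms, metadata ordering);
# this only shows the structural, content-independent flavour of the approach.
SIGNATURES = {
    b"\xff\xd8\xff": "JPEG",
    b"\x89PNG\r\n\x1a\n": "PNG",
    b"RIFF": "RIFF (AVI/WAV)",
}

def sniff_container(blob: bytes) -> str:
    # ISO-BMFF (MP4/MOV) is recognised by the 'ftyp' box type at offset 4.
    if blob[4:8] == b"ftyp":
        return "ISO-BMFF (MP4/MOV)"
    for magic, name in SIGNATURES.items():
        if blob.startswith(magic):
            return name
    return "unknown"
```

Quantitative analysis would go further, e.g. measuring the order and size statistics of container fields to fingerprint the device or editing software that wrote the file.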
Abstract: This paper presents a new framework for object-based classification of high-resolution hyperspectral data. This multi-step framework is based on the multi-resolution segmentation (MRS) and Random Forest classifier (RFC) algorithms. The first step is to determine the weights of the input features when using the object-based approach with MRS to process such images. Given the high number of input features, an automatic method is needed to estimate this parameter. We used the Variable Importance (VI), one of the outputs of the RFC, to determine the importance of each image band. Then, based on this parameter and other required parameters, the image is segmented into homogeneous regions. Finally, the RFC is applied to the characteristics of the segments to convert them into meaningful objects. The proposed method, as well as the conventional pixel-based RFC and Support Vector Machine (SVM) methods, was applied to three different hyperspectral datasets with various spectral and spatial characteristics. These data were acquired by the HyMap, the Airborne Prism Experiment (APEX), and the Compact Airborne Spectrographic Imager (CASI) hyperspectral sensors. The experimental results show that the proposed method is more consistent for land cover mapping across various areas. The overall classification accuracy (OA) obtained by the proposed method was 95.48%, 86.57%, and 84.29% for the HyMap, APEX, and CASI datasets, respectively. Moreover, this method showed better efficiency than the spectral-based classifications, as its OA was 5.67% and 3.75% higher than that of the conventional RFC and SVM classifiers, respectively.
Funding: Supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA28070503), the National Key Research and Development Program of China (2021YFD1500100), the Open Fund of the State Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University (20R04), the Land Observation Satellite Supporting Platform of the National Civil Space Infrastructure Project (CASPLOS-CCSI), and a PhD studentship "Deep Learning in massive area, multi-scale resolution remotely sensed imagery" (EAA7369), sponsored by Lancaster University and Ordnance Survey (the national mapping agency of Great Britain).
Abstract: Accurate crop distribution mapping is required for crop yield prediction and field management. Due to rapid progress in remote sensing technology, fine spatial resolution (FSR) remotely sensed imagery now offers great opportunities for mapping crop types in great detail. However, within-class variance can hamper attempts to discriminate crop classes at fine resolutions. Multi-temporal FSR remotely sensed imagery provides a means of improving crop classification from FSR imagery, although current methods do not fully exploit the available information. In this research, a novel Temporal Sequence Object-based Convolutional Neural Network (TS-OCNN) was proposed to classify agricultural crop types from FSR image time-series. An object-based CNN (OCNN) model was adopted in the TS-OCNN to classify images at the object level (i.e., segmented objects or crop parcels), thus maintaining the precise boundary information of crop parcels. The combined image time-series was first used as input to the OCNN model to produce an 'original' or baseline classification. Then the single-date images were fed automatically into the deep learning model scene-by-scene, in order of image acquisition date, to successively increase the crop classification accuracy. By doing so, the joint information in the FSR multi-temporal observations and the unique individual information from the single-date images were exploited comprehensively for crop classification. The effectiveness of the proposed approach was investigated using multi-temporal SAR and optical imagery, respectively, over two heterogeneous agricultural areas.
The experimental results demonstrated that the newly proposed TS-OCNN approach consistently increased crop classification accuracy, achieving the greatest accuracies (82.68% and 87.40%) in comparison with state-of-the-art benchmark methods, including the object-based CNN (OCNN) (81.63% and 85.88%), object-based image analysis (OBIA) (78.21% and 84.83%), and the standard pixel-wise CNN (79.18% and 82.90%). The proposed approach is the first known attempt to explore simultaneously the joint information from image time-series and the unique information from single-date images for crop classification using a deep learning framework. The TS-OCNN therefore represents a new approach for agricultural landscape classification from multi-temporal FSR imagery. Moreover, it is readily generalizable to other landscapes (e.g., forest landscapes), with wide application prospects.
Funding: Supported by the National Key Research and Development Program of China (No. 2016YFC1402003), the CAS Earth Big Data Science Project (No. XDA19060303), and the Innovation Project of the State Key Laboratory of Resources and Environmental Information System (No. O88RAA01YA).
Abstract: Efficient and accurate access to coastal land cover information is of great significance for marine disaster prevention and mitigation. Although the popular and common sensors of land resource satellites provide free and valuable images with which to map land cover, coastal areas often encounter significant cloud cover, especially in the tropics, which makes classification in those areas non-ideal. To solve this problem, we proposed a framework combining medium-resolution optical images and synthetic aperture radar (SAR) data with the recently popular object-based image analysis (OBIA) method, and used the Landsat Operational Land Imager (OLI) and Phased Array type L-band Synthetic Aperture Radar (PALSAR) images acquired in Singapore in 2017 as a case study. We designed experiments to confirm two critical factors of this framework: one is the segmentation scale, which determines the average object size, and the other is the classification feature. Accuracy assessments of the land cover indicated that the optimal segmentation scale was between 40 and 80, and that the combination of OLI and SAR features resulted in higher accuracy than any individual features, especially in areas with cloud cover. Based on the land cover generated by this framework, we assessed the vulnerability of Singapore to marine disasters in 2008 and 2017 and found that the high-vulnerability areas are mainly located in the southeast and increased by 118.97 km² over the past decade. To clarify the disaster response plan for different geographical environments, we classified risk based on altitude and distance from shore. The newly increased high-vulnerability regions within 4 km offshore and below 30 m above sea level are at high risk; these regions may need to focus on strengthening disaster prevention construction. This study serves as a typical example of using remote sensing techniques for the vulnerability assessment of marine disasters, especially in cloudy coastal areas.
Funding: Under the auspices of the Priority Academic Program Development of Jiangsu Higher Education Institutions and the National Natural Science Foundation of China (No. 41271438, 41471316, 41401440, 41671389).
Abstract: Gully feature mapping is an indispensable prerequisite for the monitoring and control of gully erosion, which is a widespread natural hazard. The increasing availability of high-resolution Digital Elevation Models (DEMs) and remote sensing imagery, combined with developed object-based methods, enables automatic gully feature mapping. But still few studies have specifically focused on gully feature mapping at different scales. In this study, an object-based approach to two-level gully feature mapping, covering gully-affected areas and bank gullies, was developed and tested on a 1-m DEM and Worldview-3 imagery of a catchment in the Chinese Loess Plateau. The methodology comprises a sequence of data preparation, image segmentation, metric calculation, and random forest based classification. The results of the two-level mapping were based on a random forest model after investigating the effects of feature selection and the class-imbalance problem. Results show that the segmentation strategy adopted in this paper, which considers topographic information and an optimal parameter combination, can improve the segmentation results. The distribution of the gully-affected area is closely related to topographic information; however, the spectral features are more dominant for bank gully mapping. The highest overall accuracy of the gully-affected area mapping was 93.06% with four topographic features. The highest overall accuracy of bank gully mapping is 78.5% when all features are adopted. The proposed approach is a creditable option for hierarchical mapping of gully feature information, which is suitable for application in the hilly Loess Plateau region.
Abstract: The Baltic Sea is a brackish, mediterranean sea located in the middle latitudes of Europe. It is seasonally covered with ice. The areas covered by ice during a typical winter are the Bothnian Bay, the Gulf of Finland and the Gulf of Riga. Sea ice plays an important role in dynamic and thermodynamic processes and also has a strong impact on the heat budget of the sea. A large part of transport also goes by sea, so ice charts are needed to make marine transport safe. Because of high cloudiness in the winter season and the small amount of light in the northern part of the Baltic Sea, radar data are the most important remote sensing source of sea ice information. The main goal of these studies is the classification of the Baltic Sea ice cover using radar data. The ENVISAT ASAR (Advanced Synthetic Aperture Radar) acquires data in five different modes; in these studies, ASAR Wide Swath Mode data were used. The Wide Swath Mode, using the ScanSAR technique, provides medium-resolution images (150 m) over a swath of 405 km, at HH or VV polarization. Data from February 13th, February 24th and April 6th, 2011, representing three different sea ice situations, were chosen. OBIA (object-based image analysis) methods and texture parameters were used to create sea ice extent and sea ice concentration charts. With object-based methods, single sea ice floes can be separated within the ice pack and sea ice concentration can be calculated more accurately.
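The texture parameters mentioned above are typically GLCM (grey-level co-occurrence matrix) statistics. A minimal pure-Python version for a single pixel offset is sketched below; real OBIA toolchains compute these per object over quantized SAR backscatter rather than over a whole array:

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Normalized grey-level co-occurrence matrix for one (dx, dy) offset.
    `img` is a 2-D list of integer grey levels in [0, levels)."""
    m = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    n = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r][c]][img[r2][c2]] += 1
                n += 1
    return [[v / n for v in row] for row in m]

def contrast(p):
    # High for rough texture (e.g. deformed ice), 0 for uniform areas.
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))

def homogeneity(p):
    # Near 1 for smooth areas such as level ice or calm open water.
    return sum(p[i][j] / (1 + abs(i - j))
               for i in range(len(p)) for j in range(len(p)))
```

Statistics like these, computed per segment, are what let an object-based classifier distinguish ice types whose mean backscatter alone is ambiguous.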
Abstract: The detection of impervious surfaces (IS) in heterogeneous urban areas is one of the most challenging tasks in urban remote sensing. One of the limitations of IS detection at the parcel level is the lack of sufficient training data. In this study, a generic model of the spatial distribution of roof materials is considered to overcome this limitation. A generic model based on spectral, spatial and textural information extracted from the available training data is proposed. An object-based approach is used to extract the information inherent in the image. Furthermore, linear discriminant analysis is used for dimensionality reduction and to discriminate between different spatial, spectral and textural attributes. The generic model is composed of a discriminant function based on linear combinations of the predictor variables that provide the best discrimination among the groups. The discriminant analysis result shows that of the 54 attributes extracted from the WorldView-2 image, only 13 attributes related to spatial, spectral and textural information are useful for discriminating different roof materials. Finally, this model is applied to different WorldView-2 images from different areas and proves to have good potential to predict roof materials from WorldView-2 images without using training data.
Funding: Supported by the National Key Research and Development Program of China (No. 2018YFC0407900), the National Natural Science Foundation of China (No. 41774003), the Natural Science Foundation of Jiangsu Province (No. BK20171432), and the Fundamental Research Funds for the Central Universities (No. 2018B17714, 2019B60714).
Abstract: An object-based approach is proposed for land cover classification using optimal polarimetric parameters. The ability to identify targets is effectively enhanced by the integration of SAR and optical images. The innovation of the presented method can be summarized in two main points: ① polarimetric parameters (H-A-Alpha decomposition) are estimated with the optical image as a driver; ② a multi-resolution segmentation based on the optical image only is deployed to refine the classification results. The proposed method is verified using Sentinel-1/2 datasets over the Bakersfield area, California. The results are compared against those from pixel-based SVM classification using ground truth from the National Land Cover Database (NLCD). A detailed accuracy assessment compiled for seven classes shows that the proposed method outperforms the conventional approach by around 10%, with an overall accuracy of 92.6% over regions with rich texture.
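Once the 3×3 coherency matrix has been eigen-decomposed, the H-A-Alpha parameters reduce to simple formulas over the eigenvalues. A sketch of that last step only (the eigen-decomposition and the optical-driven estimation of the coherency matrix are omitted), using the standard Cloude-Pottier definitions:

```python
import math

def h_a_alpha(eigvals, alphas):
    """Entropy H, anisotropy A and mean alpha angle from the eigenvalues of
    the coherency matrix (sorted descending) and the alpha angle of each
    corresponding eigenvector. Cloude-Pottier definitions:
    p_i = l_i / sum(l),  H = -sum p_i * log3(p_i),
    A = (l2 - l3) / (l2 + l3),  mean_alpha = sum p_i * alpha_i."""
    l1, l2, l3 = eigvals
    assert l1 >= l2 >= l3 >= 0, "eigenvalues must be sorted descending"
    total = l1 + l2 + l3
    p = [l / total for l in eigvals]           # pseudo-probabilities
    H = -sum(pi * math.log(pi, 3) for pi in p if pi > 0)
    A = (l2 - l3) / (l2 + l3) if (l2 + l3) > 0 else 0.0
    mean_alpha = sum(pi * a for pi, a in zip(p, alphas))
    return H, A, mean_alpha
```

H near 0 indicates a single dominant scattering mechanism, H near 1 fully random scattering; these are the "optimal polarimetric parameters" the classifier consumes per object.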
Abstract: Many studies have compared object-based classification (OBC) and pixel-based classification (PBC), particularly for classifying high-resolution satellite images. VNREDSat-1 is the first optical remote sensing satellite of Vietnam, with resolutions of 2.5 m (panchromatic) and 10 m (multispectral). The objective of this research is to compare the two classification approaches using VNREDSat-1 imagery for mapping mangrove forest in Vien An Dong commune, Ngoc Hien district, Ca Mau province. The ISODATA algorithm (for PBC) and the membership function classifier (for OBC) were chosen to classify the same image. The results show that the overall accuracies of OBC and PBC are 73% and 62.16% respectively, and that OBC also solved the "salt and pepper" effect, which is the main issue with PBC. Therefore, OBC appears to be the better approach for classifying VNREDSat-1 imagery for mapping mangrove forest in Ngoc Hien district.
Funding: Supported by the Beijing Natural Science Foundation (No. JQ20021), the National Natural Science Foundation of China (Nos. 61922013, 61421001 and U1833203), and the Remote Sensing Monitoring Project of Geographical Elements in the Shandong Yellow River Delta National Nature Reserve.
Abstract: With the deterioration of the environment, it is imperative to protect coastal wetlands. Classifying coastal wetlands using multi-source remote sensing data and object-based hierarchical classification is an effective approach. Object-based hierarchical classification using remote sensing indices (OBH-RSI) is proposed to achieve fine classification of coastal wetland. First, the original categories are divided into four groups according to their characteristics. Second, the training and test maps of each group are extracted according to the remote sensing indices. Third, the four groups are passed through the classifier in order. Finally, the results of the four groups are combined to produce the final classification map. The experimental results demonstrate that the overall accuracy, average accuracy and kappa coefficient of the proposed strategy are over 94% on the Yellow River Delta dataset.
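The first level of such an index-driven hierarchy can be sketched as a simple rule set. The index definitions (NDVI, NDWI) are standard; the thresholds and the three coarse groups below are illustrative assumptions, not the tuned rules of the OBH-RSI strategy:

```python
def ndvi(nir: float, red: float) -> float:
    # Normalized Difference Vegetation Index: high for green vegetation.
    return (nir - red) / (nir + red)

def ndwi(green: float, nir: float) -> float:
    # Normalized Difference Water Index: positive for open water.
    return (green - nir) / (green + nir)

def coarse_group(nir: float, red: float, green: float) -> str:
    """First hierarchical split of a pixel/object into water, vegetation or
    other. The thresholds (0.0, 0.3) are illustrative placeholders."""
    if ndwi(green, nir) > 0.0:
        return "water"
    if ndvi(nir, red) > 0.3:
        return "vegetation"
    return "other"
```

Each coarse group would then be refined by a second-level classifier, which is what lets the hierarchy separate spectrally similar wetland sub-classes one group at a time.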
Abstract: Forests are of great significance for overall development, and sustainable development plans place a strong focus on them, so there is an urgent need for information on their distribution, stock volume and other related attributes; a forest inventory program is therefore on our schedule. To deal with the problem of extracting dominant tree species, we tested the currently popular method of object-based analysis. Based on ALOS image data, we combined multi-resolution segmentation in the eCognition software with a fuzzy classification algorithm. By analyzing the segmentation results, we extracted the spruce, pine, birch and oak of the study area. Both spectral and spatial characteristics were derived from these objects, and with the help of the GLCM, we obtained the differences between the species. We used a confusion matrix for the classification accuracy assessment, comparing against actual ground data, and the method showed comparatively good precision, with an accuracy of 87% and a kappa coefficient of 0.837.
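The two figures quoted (87% overall accuracy, kappa 0.837) come straight from the confusion matrix; the standard definitions are easy to compute:

```python
def overall_accuracy(cm):
    """Fraction of correctly classified samples: trace / total."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def kappa(cm):
    """Cohen's kappa: agreement corrected for chance.
    p_e is the expected agreement from row/column marginals."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    po = overall_accuracy(cm)
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm)
             for i in range(n)) / total ** 2
    return (po - pe) / (1 - pe)
```

Kappa below overall accuracy, as reported here (0.837 vs 0.87), is the usual pattern: the correction removes the agreement that class proportions alone would produce.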
Funding: Supported by the National Natural Science Foundation of China under contract No. 41201328 and the Science Foundation for Young Scholars of China's State Oceanic Administration under contract No. 2013415.
Abstract: Mapping regional spatial patterns of coral reef geomorphology provides the primary information needed to understand the constructive processes in the reef ecosystem. However, this work is challenged by the comparatively low accuracy of pixel-based image classification methods. In this paper, an object-based image analysis (OBIA) method is presented to map the intra-reef geomorphology of coral reefs in the Xisha Islands, China, using Landsat 8 satellite imagery. Following the work of the Millennium Coral Reef Mapping Project, a regional reef class hierarchy with ten geomorphic classes was first defined. Then, incorporating the hierarchical concept and integrating the spectral information with additional spatial information such as context, shape and contextual relationships, a large-scale geomorphic map was produced by OBIA with accuracies generally above 80%. Although the robustness of OBIA has been validated in applications of coral reef mapping from individual reefs to the reef system in this paper, further work is still required to improve its transferability.
Abstract: The majority of the population and economic activity of the northern half of Vietnam is clustered in the Red River Delta, and about half of the country's rice production takes place here. There are significant problems associated with its geographical position and the intensive exploitation of resources by an overabundant population (population density of 962 inhabitants/km²). Some thirty years after economic liberalization and the opening of the country to international markets, agricultural land use patterns in the Red River Delta, particularly in the coastal area, have undergone many changes. Remote sensing is a particularly powerful tool for processing and providing spatial information for monitoring land use changes. The main methodological objective is to find a solution for processing the many heterogeneous coastal land use parameters, so as to describe land use in all its complexity, specifically by making use of the latest European satellite data (Sentinel-2). This complexity is due to local variations in ecological conditions, but also to anthropogenic factors that directly and indirectly influence land use dynamics. The methodological objective was to develop a new Geographic Object-based Image Analysis (GEOBIA) approach for mapping coastal areas using Sentinel-2 and Landsat 8 data. By developing a new segmentation accuracy measure, this study determined that segmentation accuracies decrease with increasing segmentation scale and that the negative impact of under-segmentation errors increases significantly at large scales. An Estimation of Scale Parameter (ESP) tool was then used to determine the optimal segmentation parameter values. A popular machine learning algorithm (Random Forest, RF) was used. For all classifications, an increase in overall accuracy was observed with the full synergistic combination of available data sets.
Funding: Supported by the National High Technology Research and Development Program of China (863 Program, 2015AA016306), the National Natural Science Foundation of China (61662010, 61231015, 61471271), the Science and Technology Plan Projects of Shenzhen (ZDSYS2014050916575763), and the Science and Technology Foundation of Guizhou Province (LKS[2011]1).
Abstract: This paper proposes an unequal error protection (UEP) coding method based on expanding window fountain (EWF) codes to improve the transmission performance of three-dimensional (3D) audio. Unlike schemes that apply equal error protection (EEP) to all transmitted 3D audio objects, an approach for extracting the important audio object is presented, and more protection is given to the more important audio object while comparatively less protection is given to the normal audio objects. Objective and subjective experiments have shown that the proposed UEP method achieves better performance than the EEP method: the bit error rate (BER) of the important audio object decreases from 10^(−3) to 10^(−4), and the subjective quality of UEP is better than that of EEP by 14%.
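The expanding-window idea can be sketched with a toy encoder: window 1 contains only the important object's symbols and window 2 contains all symbols, so biasing the choice of encoding window toward window 1 makes the important symbols appear in more output packets. This is a schematic stand-in (integer symbols, XOR combining, uniform in-window degree) rather than the paper's actual EWF degree distributions:

```python
import random

def ew_encode(symbols, important_count, n_packets, p_first=0.6, seed=0):
    """Toy expanding-window fountain encoder. Each packet XORs a random
    subset of symbols drawn from one window: window 1 (the first
    `important_count` symbols) with probability p_first, else window 2
    (all symbols). Returns (index_list, xor_value) pairs."""
    rng = random.Random(seed)
    packets = []
    for _ in range(n_packets):
        window = important_count if rng.random() < p_first else len(symbols)
        degree = rng.randint(1, window)
        idx = rng.sample(range(window), degree)
        value = 0
        for i in idx:
            value ^= symbols[i]
        packets.append((sorted(idx), value))
    return packets

def coverage(packets, important_count):
    """Count packets that involve at least one important symbol vs. packets
    built from normal symbols only."""
    hits = [0, 0]
    for idx, _ in packets:
        if any(i < important_count for i in idx):
            hits[0] += 1
        else:
            hits[1] += 1
    return hits
```

Because every window-1 packet, and most window-2 packets, touch the important symbols, those symbols can be recovered from fewer received packets, which is exactly the UEP effect the BER figures above reflect.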