We propose a Cross-Chain Mapping Blockchain (CCMB) for scalable data management in massive Internet of Things (IoT) networks. Specifically, CCMB aims to improve the scalability of securely storing, tracing, and transmitting IoT behavior and reputation data based on our proposed cross-mapped Behavior Chain (BChain) and Reputation Chain (RChain). To improve off-chain IoT data storage scalability, we show that our lightweight CCMB architecture efficiently utilizes available fog-cloud resources. The scalability of on-chain IoT data tracing is enhanced using our Mapping Smart Contract (MSC) and cross-chain mapping design to perform rapid Reputation-to-Behavior (R2B) traceability queries between BChain and RChain blocks. To maximize off-chain to on-chain throughput, we optimize the CCMB block settings and producers based on a general Poisson Point Process (PPP) network model. The constrained optimization problem is formulated as a Markov Decision Process (MDP) and solved using a dual-network Deep Reinforcement Learning (DRL) algorithm. Simulation results validate CCMB's scalability advantages in storage, traceability, and throughput. In specific massive IoT scenarios, CCMB can reduce the storage footprint by 50% and traceability query time by 90%, while improving system throughput by 55% compared to existing benchmarks.
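The block-producer optimization above is built on a homogeneous Poisson Point Process network model. As an illustrative sketch only (not the paper's actual model or parameters), a homogeneous PPP on a rectangular window can be sampled with the standard two-step construction — draw a Poisson-distributed point count for the window, then place that many points uniformly:

```python
import math
import random

def sample_ppp(intensity, width, height, rng=random.Random(0)):
    """Sample a homogeneous Poisson Point Process on a width x height window.

    The point count is Poisson(intensity * area); each point is then placed
    uniformly at random, which is the standard two-step PPP sampler.
    The intensity value below is purely illustrative.
    """
    mean = intensity * width * height
    # Draw N ~ Poisson(mean) by multiplying uniforms until the product
    # falls below exp(-mean) (Knuth's inversion method).
    n, p, threshold = 0, 1.0, math.exp(-mean)
    while True:
        p *= rng.random()
        if p <= threshold:
            break
        n += 1
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]

# Hypothetical node density of 0.5 nodes per unit area on a 10 x 10 window.
nodes = sample_ppp(intensity=0.5, width=10, height=10)
```

With a fixed seed the draw is reproducible; the expected node count here is intensity × area = 50.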
Most semi-structured data exhibit a certain structural regularity. Once stored as structured data in a relational database (RDB), they can be effectively managed by a database management system (DBMS). Some semi-structured data are difficult to transform due to their irregular structures. We design an efficient algorithm and data structure for ensuring lossless transformation. We bring forward an approach of schema extraction through data mining, in which different kinds of elements are transformed respectively and a lossless mapping from semi-structured data to structured data can be achieved.
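The core idea of a lossless mapping from semi-structured to structured data can be illustrated with a minimal sketch (this is a generic shredding scheme, not the authors' algorithm): flatten a nested document into flat parent-linked rows that a relational table could hold, such that the original document can be rebuilt exactly.

```python
def to_rows(doc):
    """Flatten a nested dict/list structure into flat rows.

    Each row is (node_id, parent_id, key, kind, value); the parent links
    preserve the tree, so the document can be rebuilt exactly -- i.e. the
    mapping is lossless.
    """
    rows = []

    def walk(node, parent, key):
        nid = len(rows)
        if isinstance(node, dict):
            rows.append((nid, parent, key, "dict", None))
            for k, v in node.items():
                walk(v, nid, k)
        elif isinstance(node, list):
            rows.append((nid, parent, key, "list", None))
            for i, v in enumerate(node):
                walk(v, nid, i)
        else:
            rows.append((nid, parent, key, "scalar", node))

    walk(doc, None, None)
    return rows

def from_rows(rows):
    """Rebuild the original document from the flat rows."""
    nodes, root = {}, None
    for nid, parent, key, kind, value in rows:
        nodes[nid] = {} if kind == "dict" else [] if kind == "list" else value
    for nid, parent, key, kind, value in rows:
        if parent is None:
            root = nodes[nid]
        elif isinstance(nodes[parent], dict):
            nodes[parent][key] = nodes[nid]
        else:
            nodes[parent].append(nodes[nid])  # rows are in pre-order, so list order is kept
    return root

doc = {"name": "sensor", "readings": [1, 2, {"t": 3}]}
rows = to_rows(doc)
```

Round-tripping `from_rows(to_rows(doc)) == doc` is exactly the lossless property the abstract refers to.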
By employing the unique phenological feature of winter wheat extracted from the peak before winter (PBW) and the advantages of moderate resolution imaging spectroradiometer (MODIS) data with high temporal resolution and intermediate spatial resolution, a remote sensing-based model for mapping winter wheat on the North China Plain was built through integration with Landsat images and land-use data. First, a phenological window, the PBW, was drawn from time-series MODIS data. Next, feature extraction was performed for the PBW to reduce feature dimension and enhance its information. Finally, a regression model was built to model the relationship between the phenological feature and the sample data. The amount of information of the PBW was evaluated and compared with that of the main peak (MP). The relative precision of the mapping reached up to 92% in comparison to the Landsat sample data, and ranged between 87 and 96% in comparison to the statistical data. These results were sufficient to satisfy the accuracy requirements for winter wheat mapping at a large scale. Moreover, the proposed method has the ability to obtain the distribution information for winter wheat in an earlier period than previous studies. This study could throw light on the monitoring of winter wheat in China by using the unique phenological feature of winter wheat.
Multidatabase systems are designed to achieve schema integration and data interoperation among distributed and heterogeneous database systems, but data model heterogeneity and schema heterogeneity make this a challenging task. First, a multidatabase common data model based on XML, named the XML-based Integration Data Model (XIDM), is introduced, which is suitable for integrating different types of schemas. Then an approach to schema mappings based on XIDM in multidatabase systems is presented. The mappings include global mappings, dealing with horizontal and vertical partitioning between global schemas and export schemas, and local mappings, processing the transformation between export schemas and local schemas. Finally, the illustration and implementation of schema mappings in a multidatabase prototype, the Panorama system, are also discussed. The implementation results demonstrate that the XIDM is an efficient model for managing multiple heterogeneous data sources, and the approaches to schema mapping based on XIDM behave very well when integrating relational and object-oriented database systems as well as other file systems.
With rising population, declining soil productivity and land-based conflicts, the per-capita land availability for cultivation is rapidly decreasing within Benue State, a largely agrarian and small-holder setting. This study attempts a local-level support for the actualisation of Sustainable Development Goal 2 ("end hunger, achieve food security and improved nutrition, and promote sustainable agriculture") by 2030. Using the Multi-Criteria Decision Making (MCDM) method, remote sensing data from the Climate Research Unit (CRU) and in-situ data from the Nigeria Meteorological Agency (NIMET) were analyzed with GIS techniques to map the suitability of rice cultivation in the study area, integrating the Normalized Difference Vegetation Index (NDVI), land cover, slope, temperature, precipitation and soil parameters (cation exchange capacity, pH, bulk density, organic carbon). We apply various statistical parameters, including the mean spatial NDVI and the correlation coefficient, standard deviation and Root Mean Square (RMS) between CRU and NIMET data. Spatial regression trend analysis is conducted between CRU precipitation and NDVI and between CRU temperature and NDVI from 1985 to 2015. The results reveal that NDVI in highly suitable rice planting regions is higher than in marginally suitable regions except in October and November, which shows that the highly suitable regions will yield better than the marginally suitable regions during the dry season. Additionally, NDVI is seasonally bimodal in response to precipitation, meaning that vegetation vigor depends more on precipitation than on temperature. Finally, the correlation coefficient, standard deviation and RMS between CRU and NIMET precipitation data are 0.42, 108, and 110, respectively, while between CRU and NIMET temperature data they are 0.88, 1.60, and 0.86, respectively. In conclusion, the MCDM approach reveals that upland is more suitable for rice cultivation in Benue State when compared with the area provided by the Global Land Cover by National Mapping Organizations (GLCNMO) data.
The relatively rapid recession of glaciers in the Himalayas and the formation of moraine-dammed glacial lakes (MDGLs) in the recent past have increased the risk of glacier lake outburst floods (GLOF) in Nepal and Bhutan and in the mountainous territory of Sikkim in India. As a product of climate change and global warming, such a risk has not only raised the level of threats to the habitation and infrastructure of the region, but has also contributed to the worsening of the balance of the unique ecosystem of this domain, which sustains several of the highest mountain peaks of the world. This study attempts to present an up-to-date mapping of the MDGLs in the central and eastern Himalayan regions using remote sensing data, with the objective of analysing their surface area variations with time from 1990 through 2015, disaggregated over six episodes. The study also includes the evaluation of the susceptibility of MDGLs to GLOF with the least criteria decision analysis (LCDA). Forty-two major MDGLs, each having a lake surface area greater than 0.2 km2, identified in the Himalayan ranges of Nepal, Bhutan, and Sikkim, have been categorized according to their surface area expansion rates in space and time. The lakes are located within the elevation range of 3800 m to 6800 m above mean sea level (amsl). With a total surface area of 37.9 km2, these MDGLs as a whole were observed to have expanded by an astonishing 43.6% in area over the 25-year period of this study. A factor is introduced to numerically sort the lakes in terms of their relative yearly expansion rates, based on the interpretation of their surface area extents from satellite imageries. Verification of predicted GLOF events in the past using this factor, against the limited field data reported in the literature, indicates that the present analysis may be considered a sufficiently reliable and rapid technique for assessing the potential bursting susceptibility of the MDGLs. The analysis also indicates that, as of now, there are eight MDGLs in the region which appear to be in highly vulnerable states and have high chances of causing GLOF events in the near future.
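The abstract does not give the exact definition of the lake-sorting factor, so the sketch below uses one plausible form — expansion per year as a fraction of the initial lake area — purely for illustration; the paper's actual factor may differ:

```python
def relative_yearly_expansion(area_start, area_end, years):
    """Relative expansion per year, as a fraction of the initial surface area.

    One plausible form of the ranking factor described in the abstract;
    the paper's exact definition is not given there and may differ.
    """
    if area_start <= 0 or years <= 0:
        raise ValueError("initial area and period must be positive")
    return (area_end - area_start) / (area_start * years)

# The MDGLs as a whole grew ~43.6% over the 25-year study period,
# i.e. roughly 1.7% of the initial area per year.
overall = relative_yearly_expansion(1.0, 1.436, 25)
```

Sorting lakes by this value in descending order would surface the fastest-expanding, and hence presumably most GLOF-prone, lakes first.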
In this study, we developed a high-resolution (3 arcsec, approximately 90 m) V_(S30) map and associated open-access dataset for the 140 km × 200 km region affected by the January 2025 M6.8 Dingri, Xizang, China earthquake. This map provides a significantly finer resolution compared to existing V_(S30) maps, which typically use a 30 arcsec grid. The V_(S30) values were estimated using the Cokriging-based V_(S30) proxy model (SCK model), which integrates V_(S30) measurements as primary constraints and utilizes topographic slope as a secondary parameter. The findings indicate that the V_(S30) values range from 200 to 250 m/s in the sedimentary deposit areas near the earthquake's epicenter and from 400 to 600 m/s in the surrounding mountainous regions. This study showcases the capability of the SCK model to efficiently generate V_(S30) estimations across various spatial resolutions and demonstrates its effectiveness in producing reliable estimations in data-sparse regions.
The low-pass filtering effect of the Earth results in the absorption and attenuation of the high-frequency components of seismic signals by the stratum during propagation. Hence, seismic data have low resolution. Considering the limitations of traditional high-frequency compensation methods, this paper presents a new method based on the adaptive generalized S transform. Based on a study of the frequency-spectrum attenuation law of seismic signals, the Gauss window function of the adaptive generalized S transform is used to fit the attenuation trend of the seismic signals and seek the optimal Gauss window function. The amplitude spectrum compensation function constructed using the optimal Gauss window function is then used to modify the time-frequency spectrum of the adaptive generalized S transform of the seismic signals and to reconstruct the signals, compensating for high-frequency attenuation. Practical data processing results show that the method can compensate for the high-frequency components that are absorbed and attenuated by the stratum, thereby effectively improving the resolution and quality of seismic data.
Data fusion has shown potential to improve the accuracy of land cover mapping, and selection of the optimal fusion technique remains a challenge. This study investigated the performance of fusing Sentinel-1 (S-1) and Sentinel-2 (S-2) data, using the layer-stacking method at the pixel level and a Dempster-Shafer (D-S) theory-based approach at the decision level, for mapping six land cover classes in Thu Dau Mot City, Vietnam. At the pixel level, S-1 and S-2 bands and their extracted textures and indices were stacked into different single-sensor and multi-sensor datasets (i.e. fused datasets). The datasets were categorized into two groups: one group included the datasets containing only spectral and backscattering bands, and the other included the datasets consisting of these bands and their extracted features. The random forest (RF) classifier was then applied to the datasets within each group. At the decision level, the RF classification outputs of the single-sensor datasets within each group were fused together based on D-S theory. Finally, the accuracy of the mapping results at both levels within each group was compared. The results showed that fusion at the decision level provided the best mapping accuracy compared to the other products within each group. The highest overall accuracy (OA) and Kappa coefficient of the map using D-S theory were 92.67% and 0.91, respectively. The decision-level fusion helped increase the OA of the map by 0.75% to 2.07% compared to that of the corresponding S-2 products in the groups. Meanwhile, data fusion at the pixel level delivered mapping results whose OA was 4.88% to 6.58% lower than that of the corresponding S-2 products in the groups.
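Decision-level fusion here combines the per-class outputs of the single-sensor RF classifiers with Dempster's rule of combination. A minimal sketch of that rule, restricted for simplicity to singleton focal elements (the study's actual mass assignments over land cover classes may be richer):

```python
def dempster_combine(m1, m2):
    """Combine two basic belief assignments using Dempster's rule.

    m1, m2: dicts mapping class label -> mass, each summing to 1.
    With singleton-only focal elements, intersections are non-empty only
    for matching labels; all mismatched pairs contribute to the conflict K,
    and the surviving masses are renormalized by 1 - K.
    """
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            if a == b:
                combined[a] = combined.get(a, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources fully disagree")
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Hypothetical masses: two classifiers both lean "water" -> fused mass sharpens.
fused = dempster_combine({"water": 0.7, "urban": 0.3},
                         {"water": 0.6, "urban": 0.4})
```

With these illustrative inputs the fused belief in "water" rises to 0.42/0.54 ≈ 0.78, showing how agreement between sensors is reinforced.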
With the increasing number of remote sensing satellites, the diversification of observation modalities, and the continuous advancement of artificial intelligence algorithms, historic opportunities have been brought to applications of Earth observation and information retrieval, including climate change monitoring, natural resource investigation, ecological environment protection, and territorial space planning. Over the past decade, artificial intelligence technology, represented by deep learning, has made significant contributions to the field of Earth observation. This review therefore focuses on the bottlenecks and development process of using deep learning methods for land use/land cover mapping of the Earth's surface. First, it introduces the basic framework of semantic segmentation network models for land use/land cover mapping. Next, we summarize the development of semantic segmentation models in the geographical field, focusing on spatial and semantic feature extraction, context relationship perception, multi-scale effects modelling, and the transferability of models under geographical differences. We then review the application of semantic segmentation models in agricultural management, building boundary extraction, single tree segmentation and inter-species classification. Finally, we discuss the future development prospects of deep learning technology in the context of remote sensing big data.
Characterized by lithological diversity and rich mineral resources, Benshangul-Gumuz National Regional State, located in the Asosa Zone, Western Ethiopia, has been investigated for geological mapping and morpho-structural lineament extraction using PALSAR (Phased Array type L-band Synthetic Aperture Radar) Fine Beam Single (FBS) L-HH polarization and Landsat-5 TM (Thematic Mapper) datasets. These data were preprocessed to retrieve ground surface reflectance and backscatter coefficients. To overcome the differing acquisition geometry of the two sensors, the datasets were geometrically and topographically rectified using the ASTER-V2 DEM. Intensity-Hue-Saturation transformation, directional filters and automatic lineament extraction were applied to the datasets for lithological unit discrimination and structural delimitation for potential mineral exploration. The obtained results showed a good relationship among the topographic morphology, rock substrate, structural variation properties, and drainage network. The spectral variations were easily associated with lithological units. Likewise, the morpho-structural information highlighted in the PALSAR image was visible without altering the radiometric integrity of the details in the TM bands through the fusion process. Moreover, predominant lineament directions trending NE-SW, N-S, and NW-SE were identified. The results of this study highlight the importance of PALSAR FBS L-HH mode and TM data fusion to enhance geological features and lithological units for mineral exploration, particularly in tropical zones.
A neutral density surface is a logical study frame for water-mass mixing since water parcels spread along such a surface without doing work against the buoyancy restoring force. Mesoscale eddies are believed to stir and subsequently mix predominantly along such surfaces. Because of the nonlinear nature of the equation of state of seawater, the process of accurately mapping a neutral density surface necessarily involves lateral computation from one conductivity, temperature and depth (CTD) cast to the next in a logical sequence. By contrast, the depth of a potential density surface on any CTD cast is found solely from the data on this cast. The lateral calculation procedure causes a significant inconvenience. In a previous paper by the present author published in this journal (You, 2006), the mapping of neutral density surfaces with regularly gridded data such as Levitus data was introduced. In this note, I present a new method to find the depth of a neutral density surface from a cast without having to specify an integration path in space. An appropriate reference point is required that is on the neutral density surface, and thereafter the neutral density surface can be determined by using the CTD casts in any order. This method is only approximate, and the likely errors can be estimated by plotting a scatter diagram of all the pressures and potential temperatures on the neutral density surfaces. The method assumes that the variations of potential temperature and pressure (with respect to the values at the reference point) on the neutral density surface are proportional. It is important to select the most appropriate reference point in order to approximately satisfy this assumption, and in practice this is found by inspecting the θ-p plot of data on the surface. This may require that the algorithm be used twice. When the straight lines on the θ-p plot, drawn from the reference point to other points on the neutral density surface, enclose an area that is external to the cluster of θ-p points of the neutral density surface, errors will occur, and these errors can be quantified from this diagram. Examples showing the use of the method are presented for each of the world's main oceans.
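The proportionality assumption θ − θ_ref = k·(p − p_ref) turns the surface-finding problem on a single cast into a one-dimensional root search. The sketch below is a hypothetical numeric illustration of that idea (the toy cast, reference values and slope k are all invented; real casts need far more careful handling):

```python
def surface_pressure(p_profile, theta_profile, theta_ref, p_ref, k):
    """Find the pressure on a cast where theta - theta_ref = k*(p - p_ref).

    Scans f(p) = theta(p) - theta_ref - k*(p - p_ref) for a sign change
    between adjacent bottles, then linearly interpolates. Illustrative only.
    """
    f = [t - theta_ref - k * (p - p_ref)
         for p, t in zip(p_profile, theta_profile)]
    for i in range(len(f) - 1):
        if f[i] == 0.0:
            return p_profile[i]
        if f[i] * f[i + 1] < 0:
            w = f[i] / (f[i] - f[i + 1])  # linear interpolation weight
            return p_profile[i] + w * (p_profile[i + 1] - p_profile[i])
    return None  # the surface does not intersect this cast

# Toy cast: potential temperature decreasing with pressure; k < 0 as it
# typically is on a neutral density surface.
p = [0, 500, 1000, 1500, 2000]
theta = [20.0, 10.0, 5.0, 3.0, 2.0]
depth = surface_pressure(p, theta, theta_ref=8.0, p_ref=800.0, k=-0.004)
```

Because the reference point enters only through (θ_ref, p_ref, k), the casts can indeed be processed in any order, which is the convenience the note emphasizes.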
The statistical map is usually used to indicate the quantitative features of various socio-economic phenomena among regions, on a base map of administrative divisions or on other base maps connected with the statistical unit. Making use of geographic information system (GIS) techniques, and supported by AutoCAD software, the author of this paper puts forward a practical method for making statistical maps and has developed a software package (SMT) for the making of small-scale statistical maps using the C language.
In the frame of landslide susceptibility assessment, a spectral library was created to support the identification of materials confined to a particular region using remote sensing images. This library, called the Pakistan spectral library (pklib) version 0.1, contains the analysis data of sixty rock samples taken in the Balakot region in Northern Pakistan. The spectral library is implemented as an SQLite database. Structure and naming are inspired by the convention system of the ASTER Spectral Library. Usability, application and benefit of the pklib were evaluated and depicted taking two approaches, the multivariate and the spectral based. The spectral information was used to create indices. The indices were applied to Landsat and ASTER data to support the spatial delineation of outcropping rock sequences in stratigraphic formations. The application of the indices introduced in this paper helps to identify spots where specific lithological characteristics occur. Especially in areas with sparse or missing detailed geological mapping, spectral discrimination via remote sensing data can speed up the survey. The library can be used not only to support the improvement of factor maps for landslide susceptibility analysis, but also to provide a geoscientific basis to further analyze the lithological spots in numerous regions in the Hindu Kush.
A detailed inspection of roads requires highly detailed spatial data with sufficient precision to deliver an accurate geometry and to describe road defects visually. This paper presents a novel method for the detection of road defects. The input data for road defect detection included point clouds and orthomosaics gathered by mobile mapping technology. The defects were categorized in three major groups with the following geometric primitives: points, lines and polygons. The method suggests the detection of point objects from matched point clouds, panoramic images and orthophotos. Defects were mapped as point, line or polygon geometries, directly derived from orthomosaics and panoramic images. Besides the geometric position of road defects, all objects were assigned a variety of attributes: defect type, surface material, center of gravity, area, length, corresponding image of the defect and degree of damage. A spatial dataset comprising defect values with a matching data type was created to perform the attribute analysis quickly and correctly. The final product is a spatial vector dataset, consisting of points, lines and polygons, which contains attributes with further information and geometry. This paper demonstrates that mobile mapping is well suited to large-scale feature extraction of road infrastructure defects. By its simplicity and flexibility, the presented methodology can easily be adapted to extract further feature types with their attributes. This makes the proposed approach a vital tool for data extraction settings with multiple mobile mapping data analysts, e.g., offline crowdsourcing.
Since the creation of spatial data is a costly and time-consuming process, researchers in this domain in most cases rely on open-source spatial attributes for their specific purpose. Likewise, the present research aims at mapping landslide susceptibility in the metropolitan area of Chittagong district of Bangladesh utilizing obtainable open-source spatial data from various web portals. In this regard, we targeted a study region where rainfall-induced landslides reportedly cause casualties as well as property damage each year. In this study, we employed a multi-criteria evaluation (MCE) technique, i.e., the heuristic, a knowledge-driven approach based on expert opinions from various disciplines, for landslide susceptibility mapping, combining nine causative factors — geomorphology, geology, land use/land cover (LULC), slope, aspect, plan curvature, drainage distance, relative relief and vegetation — in a geographic information system (GIS) environment. The final susceptibility map was divided into five hazard classes, viz., very low, low, moderate, high, and very high, representing 22 km2 (13%), 90 km2 (53%), 24 km2 (15%), 22 km2 (13%) and 10 km2 (6%) of the area, respectively. This particular study might be beneficial to the local authorities and other stakeholders concerned with disaster risk reduction and mitigation activities. Moreover, this study can also be advantageous for risk-sensitive land use planning in the study area.
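The core of a heuristic MCE is a weighted linear combination of expert-scored factor layers, followed by classification into hazard classes. The sketch below uses three hypothetical factors with invented weights and scores — not the study's nine factors or its expert weighting:

```python
def weighted_overlay(factor_scores, weights):
    """Combine per-cell factor scores (each in 0-1) into a susceptibility
    score via a weighted linear sum -- the core of heuristic MCE.

    factor_scores: dict factor_name -> list of per-cell scores
    weights:       dict factor_name -> expert weight (should sum to 1)
    """
    names = list(weights)
    n_cells = len(factor_scores[names[0]])
    return [sum(weights[f] * factor_scores[f][i] for f in names)
            for i in range(n_cells)]

def classify(score):
    """Map a 0-1 score into the five hazard classes used in the study."""
    for upper, label in [(0.2, "very low"), (0.4, "low"), (0.6, "moderate"),
                         (0.8, "high"), (1.01, "very high")]:
        if score < upper:
            return label

# Two hypothetical cells, three hypothetical factors and weights.
scores = weighted_overlay(
    {"slope": [0.9, 0.2], "geology": [0.8, 0.1], "lulc": [0.7, 0.3]},
    {"slope": 0.5, "geology": 0.3, "lulc": 0.2})
labels = [classify(s) for s in scores]
```

The equal-interval class breaks here are illustrative; in practice the breaks (and the weights) come from the expert elicitation the abstract describes.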
Compressive sensing is a powerful method for the reconstruction of sparsely sampled data, based on statistical optimization. It can be applied to a range of flow measurement and visualization data, and in this work we show its usage in groundwater mapping. Due to the scarcity of water in many regions of the world, including the southwestern United States, monitoring and management of groundwater is of utmost importance. A complete mapping of groundwater is difficult since the monitored sites are far from one another, and thus the data sets are considered extremely "sparse". To overcome this difficulty, compressive sensing is an ideal tool, as it bypasses the classical Nyquist criterion. We show that compressive sensing can effectively be used for the reconstruction of groundwater level maps, by validating against data. This approach can have an impact on geographical sensing and information, as effective monitoring and management are enabled without constructing numerous or expensive measurement sites for groundwater.
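The recovery step in compressive sensing solves for a sparse coefficient vector consistent with few measurements. The paper's solver is not specified in the abstract; as one standard stand-in, a small Orthogonal Matching Pursuit sketch in NumPy recovers a synthetic 2-sparse signal from random measurements:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit the selected columns by
    least squares. Recovers an (approximately) `sparsity`-sparse x
    satisfying y = A @ x.
    """
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Synthetic example: 40 random measurements of a 2-sparse signal in R^80.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[5, 17]] = [3.0, -2.0]
y = A @ x_true
x_hat = omp(A, y, sparsity=2)
```

In the groundwater setting, the columns of A would encode a basis (e.g. a spatial transform) in which the level map is sparse, and y the scattered well measurements; those specifics are assumptions here, not details from the paper.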
In this paper, we consider the error estimation of the Ishikawa iteration process for strongly demicontractive (SDC) mappings in real Hilbert spaces (without the Lipschitz condition); some convergence theorems for the Ishikawa iteration process are also obtained. Moreover, we provide data dependence results for SDC mappings in three cases. Some numerical examples are given to verify our results.
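For readers unfamiliar with the scheme, the Ishikawa process iterates x_{n+1} = (1−α_n)x_n + α_n T(y_n) with y_n = (1−β_n)x_n + β_n T(x_n). A toy sketch with constant parameters and a simple scalar map (the paper's SDC setting is far more general; this map and its parameters are illustrative only):

```python
def ishikawa(T, x0, alpha, beta, n_steps):
    """Run the Ishikawa iteration with constant step parameters:
    y_n = (1 - beta) * x_n + beta * T(x_n)
    x_{n+1} = (1 - alpha) * x_n + alpha * T(y_n)
    """
    x = x0
    for _ in range(n_steps):
        y = (1 - beta) * x + beta * T(x)
        x = (1 - alpha) * x + alpha * T(y)
    return x

# Illustrative map: the Newton step for sqrt(2), whose fixed point is sqrt(2).
T = lambda x: 0.5 * (x + 2.0 / x)
x_star = ishikawa(T, x0=3.0, alpha=0.5, beta=0.5, n_steps=60)
```

Starting from x0 = 3, the iterates decrease monotonically toward the fixed point √2, so the run converges to about 1.41421.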
Data warehouses (DW) must integrate information from the different areas and sources of an organization in order to extract knowledge relevant to decision-making. DW development is not an easy task, which is why various design approaches have been put forward. These approaches can be classified into three paradigms according to the origin of the information requirements: supply-driven, demand-driven, and hybrids of these. This article compares the methodologies for the multidimensional design of DW through a systematic mapping as the research methodology. For each paradigm, the study presents the main characteristics of the methodologies, their notations and the problem areas exhibited by each. The results indicate that there is no follow-up of the complete process of implementing a DW in either an academic or industrial environment; however, there is also no evidence of attempts to address the design and development of a DW by applying and comparing the different methodologies existing in the field.
Morphological (e.g. shape, size, and height) and functional (e.g. working, living, and shopping) information about buildings is highly needed for urban planning and management, as well as for other applications such as city-scale building energy use modeling. Due to the limited availability of socio-economic geospatial data, it is more challenging to map building functions than building morphology, especially over large areas. In this study, we proposed an integrated framework to map building functions in 50 U.S. cities by integrating multi-source web-based geospatial data. First, a web crawler was developed to extract Points of Interest (POIs) from Tripadvisor.com, and a map crawler was developed to extract POIs and land use parcels from Google Maps. Second, an unsupervised machine learning algorithm named OneClassSVM was used to identify residential buildings based on landscape features derived from Microsoft building footprints. Third, the type ratio of POIs and the area ratio of land use parcels were used to identify six non-residential functions (i.e. hospital, hotel, school, shop, restaurant, and office). The accuracy assessment indicates that the proposed framework performed well, with an average overall accuracy of 94% and a kappa coefficient of 0.63. With the worldwide coverage of Google Maps and Tripadvisor.com, the proposed framework is transferable to other cities across the world. The data products generated from this study are of great use for quantitative city-scale urban studies, such as building energy use modeling at the single-building level over large areas.
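The "type ratio of POIs" step can be illustrated with a minimal sketch: given the POI type counts that fall within a building footprint, assign the dominant type if its share is large enough. The threshold and the tie-handling below are invented for illustration, not the study's calibrated rule:

```python
def assign_function(poi_counts, min_ratio=0.5):
    """Assign a building function from POI type counts.

    The dominant POI type wins if its share of all POIs in the footprint
    reaches `min_ratio`; otherwise the building is left as mixed/unknown.
    The 0.5 threshold is a hypothetical value, not the study's.
    """
    total = sum(poi_counts.values())
    if total == 0:
        return "unknown"
    top_type, top_count = max(poi_counts.items(), key=lambda kv: kv[1])
    return top_type if top_count / total >= min_ratio else "mixed/unknown"

# Hypothetical footprint containing mostly restaurant POIs.
label = assign_function({"restaurant": 6, "shop": 2, "office": 1})
```

In the study's framework this rule would apply only to buildings the OneClassSVM step did not already label residential; the area ratio of land use parcels provides an analogous signal for parcel-level evidence.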
Funding: supported in part by the National Key Research and Development Program of China under Grant 2023YFB3106900, the National Natural Science Foundation of China under Grant 62171113, and the China Scholarship Council under Grant 202406080100.
Abstract: Most semi-structured data exhibit a certain structural regularity. Once stored as structured data in a relational database (RDB), they can be effectively managed by a database management system (DBMS). Some semi-structured data, however, are difficult to transform due to their irregular structures. We design an efficient algorithm and data structure that ensure lossless transformation. We put forward an approach to schema extraction through data mining, in which different kinds of elements are transformed separately and a lossless mapping from semi-structured data to structured data is achieved.
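The lossless-mapping idea can be illustrated with a minimal sketch (our own encoding, not the paper's algorithm): irregular nested records are flattened into a relational "edge table" of (node_id, parent_id, tag, value) rows, from which the original structure can be rebuilt exactly. The example is restricted to nested dicts with scalar leaves and unique sibling tags.

```python
def to_rows(node, tag="root", parent=None, rows=None, ctr=None):
    """Flatten a nested dict into relational (node_id, parent_id, tag, value) rows."""
    if rows is None:
        rows, ctr = [], [0]
    nid = ctr[0]; ctr[0] += 1
    if isinstance(node, dict):
        rows.append((nid, parent, tag, None))      # interior node, no value
        for k, v in node.items():
            to_rows(v, k, nid, rows, ctr)
    else:
        rows.append((nid, parent, tag, node))      # leaf carries the value
    return rows

def from_rows(rows):
    """Rebuild the nested dict from the edge-table rows (lossless inverse)."""
    nodes, children = {}, {}
    for nid, parent, tag, val in rows:
        nodes[nid] = (tag, val)
        if parent is not None:
            children.setdefault(parent, []).append(nid)
    def build(nid):
        tag, val = nodes[nid]
        if nid in children:
            return {nodes[c][0]: build(c) for c in children[nid]}
        return val
    return build(rows[0][0])

record = {"paper": {"title": "Schema extraction", "year": 2003}, "doi": None}
rows = to_rows(record)
```

The rows can be inserted into an RDB table as-is; `from_rows(rows)` reproduces `record`, which is the lossless round-trip property the abstract refers to.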
Funding: Supported by the Open Research Fund of the Key Laboratory of Agri-informatics, Ministry of Agriculture, and the Fund for Outstanding Agricultural Researchers, Ministry of Agriculture, China.
Abstract: By employing the unique phenological feature of winter wheat extracted from the peak before winter (PBW), and the advantages of Moderate Resolution Imaging Spectroradiometer (MODIS) data with high temporal resolution and intermediate spatial resolution, a remote sensing-based model for mapping winter wheat on the North China Plain was built through integration with Landsat images and land-use data. First, a phenological window, the PBW, was drawn from time-series MODIS data. Next, feature extraction was performed on the PBW to reduce its feature dimension and enhance its information content. Finally, a regression model was built to relate the phenological feature to the sample data. The information content of the PBW was evaluated and compared with that of the main peak (MP). The relative precision of the mapping reached up to 92% in comparison with the Landsat sample data, and ranged between 87% and 96% in comparison with the statistical data. These results satisfy the accuracy requirements for winter wheat mapping at a large scale. Moreover, the proposed method can obtain the distribution of winter wheat earlier than previous studies. This study sheds light on the monitoring of winter wheat in China using its unique phenological feature.
Abstract: Multidatabase systems are designed to achieve schema integration and data interoperation among distributed, heterogeneous database systems, but data model heterogeneity and schema heterogeneity make this a challenging task. A multidatabase common data model based on XML, named the XML-based Integration Data Model (XIDM), is first introduced; it is suitable for integrating different types of schemas. An approach to schema mapping based on XIDM in multidatabase systems is then presented. The mappings include global mappings, which deal with horizontal and vertical partitioning between global schemas and export schemas, and local mappings, which handle the transformation between export schemas and local schemas. Finally, the illustration and implementation of schema mappings in a multidatabase prototype, the Panorama system, are discussed. The implementation results demonstrate that XIDM is an efficient model for managing multiple heterogeneous data sources and that the XIDM-based schema mapping approaches behave very well when integrating relational and object-oriented database systems as well as other file systems.
Abstract: With a rising population, declining soil productivity, and land-based conflicts, per-capita land availability for cultivation is rapidly decreasing in Benue State, a largely agrarian and smallholder setting. This study attempts local-level support for the actualization of Sustainable Development Goal 2 ("end hunger, achieve food security and improved nutrition, and promote sustainable agriculture") by 2030. Using a Multi-Criteria Decision Making (MCDM) method, remote sensing data from the Climate Research Unit (CRU) and in-situ data from the Nigeria Meteorological Agency (NIMET) were analyzed with GIS techniques to map the suitability of rice cultivation in the study area, integrating the Normalized Difference Vegetation Index (NDVI), land cover, slope, temperature, precipitation, and soil parameters (cation exchange capacity, pH, bulk density, organic carbon). We apply several statistical parameters, including mean spatial NDVI and the correlation coefficient, standard deviation, and root mean square (RMS) between the CRU and NIMET data. Spatial regression trend analysis is conducted between CRU precipitation and NDVI and between CRU temperature and NDVI from 1985 to 2015. The results reveal that NDVI in highly suitable rice-planting regions is higher than in marginally suitable regions except in October and November, which shows that the highly suitable regions will yield better than the marginally suitable regions during the dry season. Additionally, NDVI is seasonally bimodal in response to precipitation, meaning that vegetation vigor depends more on precipitation than on temperature. Finally, the correlation coefficient, standard deviation, and RMS between the CRU and NIMET precipitation data are 0.42, 108, and 110, respectively, while the same statistics for the CRU and NIMET temperature data are 0.88, 1.60, and 0.86, respectively. In conclusion, the MCDM approach reveals that upland is more suitable for rice cultivation in Benue State when compared with the area given by the Global Land Cover and National Mappings Organization (GLCNMO) data.
Abstract: The relatively rapid recession of glaciers in the Himalayas and the formation of moraine-dammed glacial lakes (MDGLs) in the recent past have increased the risk of glacier lake outburst floods (GLOFs) in Nepal, Bhutan, and the mountainous territory of Sikkim in India. As a product of climate change and global warming, this risk has not only raised the level of threat to the habitation and infrastructure of the region, but has also contributed to the worsening balance of the unique ecosystem that sustains several of the world's highest mountain peaks. This study presents an up-to-date mapping of the MDGLs in the central and eastern Himalayan regions using remote sensing data, with the objective of analysing their surface-area variations over time from 1990 through 2015, disaggregated over six episodes. The study also evaluates the susceptibility of MDGLs to GLOFs using least criteria decision analysis (LCDA). Forty-two major MDGLs, each with a lake surface area greater than 0.2 km2, identified in the Himalayan ranges of Nepal, Bhutan, and Sikkim, have been categorized according to their surface-area expansion rates in space and time. The lakes lie within the elevation range of 3800 m to 6800 m above mean sea level (amsl). With a total surface area of 37.9 km2, these MDGLs as a whole were observed to have expanded by an astonishing 43.6% in area over the 25-year study period. A factor is introduced to numerically sort the lakes by their relative yearly expansion rates, based on the interpretation of their surface-area extents from satellite imagery. Verification of predicted GLOF events in the past using this factor, with the limited field data reported in the literature, indicates that the present analysis may be considered a sufficiently reliable and rapid technique for assessing the potential bursting susceptibility of MDGLs. The analysis also indicates that, as of now, there are eight MDGLs in the region that appear to be in highly vulnerable states and have a high chance of causing GLOF events in the near future.
Funding: Supported by the National Natural Science Foundation of China (No. 42120104002).
Abstract: In this study, we developed a high-resolution (3 arcsec, approximately 90 m) V_(S30) map and an associated open-access dataset for the 140 km × 200 km region affected by the January 2025 M6.8 Dingri, Xizang, China earthquake. This map provides significantly finer resolution than existing V_(S30) maps, which typically use a 30 arcsec grid. The V_(S30) values were estimated using the Cokriging-based V_(S30) proxy model (SCK model), which integrates V_(S30) measurements as primary constraints and uses topographic slope as a secondary parameter. The findings indicate that V_(S30) values range from 200 to 250 m/s in the sedimentary deposit areas near the earthquake's epicenter and from 400 to 600 m/s in the surrounding mountainous regions. This study showcases the capability of the SCK model to efficiently generate V_(S30) estimates across various spatial resolutions and demonstrates its effectiveness in producing reliable estimates in data-sparse regions.
Funding: This research is supported by the National Science and Technology Major Project of China (No. 2011ZX05024-001-03), the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2021JQ-588), and the Innovation Fund for Graduate Students of Xi'an Shiyou University (No. YCS17111017).
Abstract: The low-pass filtering effect of the Earth results in the absorption and attenuation of the high-frequency components of seismic signals by the stratum during propagation; hence, seismic data have low resolution. Considering the limitations of traditional high-frequency compensation methods, this paper presents a new method based on the adaptive generalized S transform. Based on a study of the frequency-spectrum attenuation law of seismic signals, the Gaussian window function of the adaptive generalized S transform is used to fit the attenuation trend of the signals and seek the optimal Gaussian window. An amplitude-spectrum compensation function constructed from the optimal Gaussian window is then used to modify the time-frequency spectrum of the adaptive generalized S transform and reconstruct the seismic signals, compensating for the high-frequency attenuation. Practical data processing results show that the method can compensate for the high-frequency components absorbed and attenuated by the stratum, thereby effectively improving the resolution and quality of seismic data.
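The core idea, fitting a Gaussian decay to the amplitude spectrum and building a compensation gain from it, can be sketched on a toy spectrum (synthetic data; the gain cap and all numbers are our own choices, not the paper's):

```python
import numpy as np

# Toy amplitude spectrum with a Gaussian high-frequency decay
# (synthetic; sigma_true plays the role of the stratum attenuation).
freqs = np.linspace(0.0, 100.0, 101)
sigma_true = 30.0
spectrum = np.exp(-freqs**2 / (2 * sigma_true**2))

# Fit log|A(f)| = -f^2 / (2 sigma^2) by a least-squares line in f^2;
# the slope is -1 / (2 sigma^2).
mask = spectrum > 1e-6
slope = np.polyfit(freqs[mask]**2, np.log(spectrum[mask]), 1)[0]
sigma_fit = np.sqrt(-1.0 / (2.0 * slope))

# Compensation gain: boost toward a flat reference spectrum, capped
# to avoid amplifying noise at the highest frequencies.
gain = np.minimum(np.exp(freqs**2 / (2 * sigma_fit**2)), 10.0)
compensated = spectrum * gain
```

Where the cap is not active, the compensated spectrum is flat (equal to 1), which is the high-frequency restoration effect the abstract describes; beyond the cap the boost is limited to 10x.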
Funding: Supported by the Hungarian Scientific Research Fund for the ongoing research "Time series analysis of land cover dynamics using medium- and high-resolution satellite images" [grant number NKFIH 124648K] at the Department of Physical Geography and Geoinformatics (the former name of the Department of Geoinformatics, Physical and Environmental Geography), University of Szeged, Szeged, Hungary.
Abstract: Data fusion has shown potential to improve the accuracy of land cover mapping, and selecting the optimal fusion technique remains a challenge. This study investigated the performance of fusing Sentinel-1 (S-1) and Sentinel-2 (S-2) data, using a layer-stacking method at the pixel level and a Dempster-Shafer (D-S) theory-based approach at the decision level, for mapping six land cover classes in Thu Dau Mot City, Vietnam. At the pixel level, S-1 and S-2 bands and their extracted textures and indices were stacked into different single-sensor and multi-sensor (i.e. fused) datasets. The datasets were categorized into two groups: one group included the datasets containing only spectral and backscattering bands, and the other included the datasets consisting of these bands and their extracted features. The random forest (RF) classifier was then applied to the datasets within each group. At the decision level, the RF classification outputs of the single-sensor datasets within each group were fused based on D-S theory. Finally, the accuracy of the mapping results at both levels within each group was compared. The results showed that fusion at the decision level provided the best mapping accuracy within each group. The highest overall accuracy (OA) and kappa coefficient of the map using D-S theory were 92.67% and 0.91, respectively. Decision-level fusion increased the OA of the map by 0.75% to 2.07% compared with the corresponding S-2 products in the groups, whereas pixel-level fusion delivered mapping results with an OA 4.88% to 6.58% lower than that of the corresponding S-2 products.
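The decision-level step rests on Dempster's rule of combination. A minimal sketch (the mass values and two-class frame are invented; the paper fuses per-class RF outputs):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over the same frame;
    focal elements are encoded as frozensets."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to the empty set
    # Normalize by the non-conflicting mass (1 - K)
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two classifiers' beliefs over a toy frame {urban, water}
URBAN, WATER = frozenset({"urban"}), frozenset({"water"})
BOTH = URBAN | WATER                     # full ignorance
m1 = {URBAN: 0.6, BOTH: 0.4}
m2 = {URBAN: 0.7, WATER: 0.1, BOTH: 0.2}
fused = dempster_combine(m1, m2)
```

Here both sources lean toward "urban", so the fused mass on URBAN rises above either input (0.82/0.94 ≈ 0.87), which is exactly how decision-level fusion reinforces agreeing classifiers.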
Funding: National Natural Science Foundation of China (Nos. 42371406, 42071441, 42222106, 61976234).
Abstract: With the increasing number of remote sensing satellites, the diversification of observation modalities, and the continuous advancement of artificial intelligence algorithms, historic opportunities have arisen for Earth observation and information retrieval applications, including climate change monitoring, natural resource investigation, ecological environment protection, and territorial space planning. Over the past decade, artificial intelligence technology, represented by deep learning, has made significant contributions to the field of Earth observation. This review therefore focuses on the bottlenecks and development of deep learning methods for land use/land cover mapping of the Earth's surface. First, it introduces the basic framework of semantic segmentation network models for land use/land cover mapping. It then summarizes the development of semantic segmentation models in the geographical field, focusing on spatial and semantic feature extraction, context relationship perception, multi-scale effect modelling, and the transferability of models across geographical differences. Applications of semantic segmentation models in agricultural management, building boundary extraction, single-tree segmentation, and inter-species classification are then reviewed. Finally, we discuss the future development prospects of deep learning technology in the context of remote sensing big data.
Abstract: Characterized by lithological diversity and rich mineral resources, Benshangul-Gumuz National Regional State, located in the Asosa Zone of Western Ethiopia, has been investigated for geological mapping and morpho-structural lineament extraction using PALSAR (Phased Array type L-band Synthetic Aperture Radar) Fine Beam Single (FBS) L-HH polarization and Landsat-5 TM (Thematic Mapper) datasets. These data were preprocessed to retrieve ground surface reflectance and backscatter coefficients. To overcome the differing acquisition geometries of the two sensors, the data were geometrically and topographically rectified using the ASTER-V2 DEM. Intensity-Hue-Saturation transformation, directional filters, and automatic lineament extraction were applied to the datasets for lithological unit discrimination and structural delineation for potential mineral exploration. The results showed a good relationship among topographic morphology, rock substrate, structural variation properties, and the drainage network. Spectral variations were readily associated with lithological units. Likewise, the morpho-structural information highlighted in the PALSAR image remained visible without altering the radiometric integrity of the details in the TM bands through the fusion process. Moreover, predominant lineament directions trending NE-SW, N-S, and NW-SE were identified. The results of this study highlight the importance of fusing PALSAR FBS L-HH mode and TM data to enhance geological features and lithological units for mineral exploration, particularly in tropical zones.
Abstract: A neutral density surface is a logical study frame for water-mass mixing, since water parcels spread along such a surface without doing work against the buoyancy restoring force. Mesoscale eddies are believed to stir and subsequently mix predominantly along such surfaces. Because of the nonlinear nature of the equation of state of seawater, accurately mapping a neutral density surface necessarily involves lateral computation from one conductivity-temperature-depth (CTD) cast to the next in a logical sequence. By contrast, the depth of a potential density surface on any CTD cast is found solely from the data on that cast. The lateral calculation procedure causes significant inconvenience. In a previous paper by the present author published in this journal (You, 2006), the mapping of neutral density surfaces with regularly gridded data such as the Levitus data was introduced. In this note, I present a new method to find the depth of a neutral density surface from a cast without having to specify an integration path in space. An appropriate reference point on the neutral density surface is required, after which the surface can be determined using the CTD casts in any order. The method is only approximate, and the likely errors can be estimated by plotting a scatter diagram of all the pressures and potential temperatures on the neutral density surface. The method assumes that the variations of potential temperature and pressure (with respect to the values at the reference point) on the neutral density surface are proportional. It is important to select the most appropriate reference point so that this assumption is approximately satisfied, and in practice this is done by inspecting the θ-p plot of data on the surface, which may require running the algorithm twice. When the straight lines on the θ-p plot, drawn from the reference point to other points on the neutral density surface, enclose an area external to the cluster of θ-p points of the surface, errors will occur, and these errors can be quantified from the diagram. Examples showing the use of the method are presented for each of the world's main oceans.
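Under the proportionality assumption, locating the surface on a new cast reduces to finding where the cast's θ(p) profile crosses the line θ = θ_ref + k (p − p_ref). A numerical sketch (the reference point, slope k, and cast profile are invented for illustration):

```python
import numpy as np

# Proportionality assumption on the neutral surface:
#   theta - theta_ref ≈ k * (p - p_ref)
theta_ref, p_ref, k = 5.0, 1000.0, -0.004   # deg C, dbar, deg C per dbar

# A hypothetical CTD cast (pressure in dbar, potential temperature in deg C)
p_cast = np.array([800.0, 900.0, 1000.0, 1100.0, 1200.0])
theta_cast = np.array([6.5, 5.9, 5.3, 4.4, 3.6])

# Root of g(p) = theta_cast(p) - [theta_ref + k (p - p_ref)], found by
# bracketing the sign change and interpolating linearly between samples.
g = theta_cast - (theta_ref + k * (p_cast - p_ref))
i = np.where(np.sign(g[:-1]) != np.sign(g[1:]))[0][0]
p_surface = p_cast[i] - g[i] * (p_cast[i + 1] - p_cast[i]) / (g[i + 1] - g[i])
```

For these numbers the crossing falls at p_surface = 1060 dbar; because only this cast and the reference point are used, the casts can indeed be processed in any order.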
Abstract: A statistical map is usually used to indicate the quantitative features of various socio-economic phenomena among regions, drawn on a base map of administrative divisions or on another base map connected with the statistical unit. Making use of geographic information system (GIS) techniques, and supported by AutoCAD, the author has put forward a practical method for making statistical maps and has developed software (SMT) in the C language for producing small-scale statistical maps.
Abstract: In the frame of landslide susceptibility assessment, a spectral library was created to support the identification of materials confined to a particular region using remote sensing images. This library, called the Pakistan spectral library (pklib) version 0.1, contains the analysis data of sixty rock samples taken in the Balakot region in Northern Pakistan. The spectral library is implemented as an SQLite database; its structure and naming are inspired by the convention system of the ASTER Spectral Library. The usability, application, and benefit of the pklib were evaluated using two approaches, multivariate and spectral-based. The spectral information was used to create indices, which were applied to Landsat and ASTER data to support the spatial delineation of outcropping rock sequences in stratigraphic formations. Applying the indices introduced in this paper helps identify spots where specific lithological characteristics occur. Especially in areas with sparse or missing detailed geological mapping, spectral discrimination via remote sensing data can speed up the survey. The library can be used not only to improve factor maps for landslide susceptibility analysis, but also to provide a geoscientific basis for further analysis of lithological spots in numerous regions of the Hindu Kush.
Funding: The project presented in the paper is published with the kind permission of the contributor. The original data were provided by the DataDEV Company, Novi Sad, Republic of Serbia. The paper presents part of the research realized within the project "Multidisciplinary theoretical and experimental research in education and science in the fields of civil engineering, risk management and fire safety and geodesy" conducted by the Department of Civil Engineering and Geodesy, Faculty of Technical Sciences, University of Novi Sad.
Abstract: A detailed inspection of roads requires highly detailed spatial data with sufficient precision to deliver accurate geometry and to describe road defects visually. This paper presents a novel method for the detection of road defects. The input data included point clouds and orthomosaics gathered by mobile mapping technology. The defects were categorized in three major groups with the following geometric primitives: points, lines, and polygons. The method detects point objects from matched point clouds, panoramic images, and orthophotos. Defects were mapped as point, line, or polygon geometries, directly derived from orthomosaics and panoramic images. Besides the geometric position of road defects, all objects were assigned a variety of attributes: defect type, surface material, center of gravity, area, length, corresponding image of the defect, and degree of damage. A spatial dataset comprising defect values with matching data types was created to perform the attribute analysis quickly and correctly. The final product is a spatial vector dataset, consisting of points, lines, and polygons, whose attributes carry further information and geometry. This paper demonstrates that mobile mapping suits large-scale feature extraction of road infrastructure defects. By its simplicity and flexibility, the presented methodology can easily be adapted to extract further feature types with their attributes. This makes the proposed approach a vital tool for data extraction settings with multiple mobile mapping data analysts, e.g., offline crowdsourcing.
Abstract: Since the creation of spatial data is a costly and time-consuming process, researchers in this domain mostly rely on open-source spatial attributes for their specific purposes. Likewise, the present research aims at mapping landslide susceptibility in the metropolitan area of Chittagong district, Bangladesh, utilizing open-source spatial data obtainable from various web portals. We targeted a study region where rainfall-induced landslides reportedly cause casualties as well as property damage each year. We employed a multi-criteria evaluation (MCE) technique, i.e., a heuristic, knowledge-driven approach based on expert opinions from various disciplines, for landslide susceptibility mapping, combining nine causative factors in a geographic information system (GIS) environment: geomorphology, geology, land use/land cover (LULC), slope, aspect, plan curvature, drainage distance, relative relief, and vegetation. The final susceptibility map was divided into five hazard classes, viz. very low, low, moderate, high, and very high, representing 22 km2 (13%), 90 km2 (53%), 24 km2 (15%), 22 km2 (13%), and 10 km2 (6%) of the area, respectively. This study might benefit local authorities and other stakeholders concerned with disaster risk reduction and mitigation activities, and can also be advantageous for risk-sensitive land use planning in the study area.
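A heuristic MCE of this kind is typically computed as a weighted overlay of the factor rasters followed by classification into hazard classes. A minimal sketch (the weights, scores, and 4x4 raster size are invented; the paper's expert weights are not given in the abstract):

```python
import numpy as np

# Nine causative factors, each scored 1-5 per cell; expert weights sum to 1
# (all weights and scores below are illustrative, not the paper's values).
weights = {"geomorphology": 0.20, "geology": 0.15, "lulc": 0.10,
           "slope": 0.20, "aspect": 0.05, "plan_curvature": 0.05,
           "drainage_distance": 0.10, "relative_relief": 0.10,
           "vegetation": 0.05}

rng = np.random.default_rng(0)
factors = {name: rng.integers(1, 6, size=(4, 4)) for name in weights}

# Susceptibility index = weighted sum of factor scores (stays in [1, 5]),
# then sliced into five equal-interval classes: 1 = very low ... 5 = very high.
index = sum(w * factors[name] for name, w in weights.items())
classes = np.digitize(index, [1.8, 2.6, 3.4, 4.2]) + 1
```

Because the weights sum to 1 and every score lies in [1, 5], the index is bounded in [1, 5], so the five equal-interval bins cover all cells.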
Abstract: Compressive sensing is a powerful method for the reconstruction of sparsely sampled data, based on statistical optimization. It can be applied to a range of flow measurement and visualization data, and in this work we show its use in groundwater mapping. Due to the scarcity of water in many regions of the world, including the southwestern United States, monitoring and management of groundwater is of utmost importance. A complete mapping of groundwater is difficult since the monitored sites are far from one another, and thus the data sets are extremely "sparse". To overcome this difficulty, compressive sensing is an ideal tool, as it bypasses the classical Nyquist criterion. We show that compressive sensing can effectively be used for the reconstruction of groundwater level maps, by validating against data. This approach can have an impact on geographical sensing and information, as effective monitoring and management are enabled without constructing numerous or expensive measurement sites.
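The reconstruction principle can be demonstrated on a toy problem: recovering a sparse signal from far fewer random measurements than unknowns via L1-regularized least squares, solved here with iterative soft-thresholding (ISTA). All data are synthetic, and ISTA is one standard solver, not necessarily the one the paper uses:

```python
import numpy as np

# Sparse ground truth: a few nonzero "anomalies" among n sites
rng = np.random.default_rng(1)
n, m = 120, 50                       # n unknowns, m << n measurements
x_true = np.zeros(n)
x_true[[20, 55, 90]] = [1.5, -2.0, 1.0]

# Random Gaussian sensing matrix and the compressed measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: x <- soft_threshold(x + t * A^T (y - A x), t * lam)
t = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the spectral norm
lam = 0.01                           # sparsity penalty
x = np.zeros(n)
for _ in range(5000):
    z = x + t * A.T @ (y - A @ x)
    x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)
```

Despite having fewer than half as many measurements as unknowns, the three nonzero entries are recovered at the correct locations, which is the sub-Nyquist behavior the abstract refers to.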
Funding: Supported by the National Natural Science Foundation of China (Grant No. 61573192).
Abstract: In this paper, we consider the error estimation of the Ishikawa iteration process for strongly demicontractive (SDC) mappings in real Hilbert spaces (without the Lipschitz condition); some convergence theorems for the Ishikawa iteration process are also obtained. Moreover, we provide data dependence results for SDC mappings in three cases. Numerical examples are given to verify our results.
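For readers unfamiliar with the scheme, the Ishikawa iteration is the two-step process y_n = (1 − β) x_n + β T(x_n), x_{n+1} = (1 − α) x_n + α T(y_n). A minimal numerical sketch (using a simple contraction on the real line for illustration; the paper treats the harder SDC case in Hilbert spaces):

```python
# Ishikawa iteration for a toy mapping T on R with fixed point x* = 2:
#   y_n     = (1 - beta) * x_n + beta * T(x_n)
#   x_{n+1} = (1 - alpha) * x_n + alpha * T(y_n)

def T(x):
    return 0.5 * x + 1.0     # contraction; T(2) = 2

alpha, beta = 0.5, 0.5       # illustrative step parameters in (0, 1)
x = 10.0                     # arbitrary starting point
for _ in range(100):
    y = (1 - beta) * x + beta * T(x)
    x = (1 - alpha) * x + alpha * T(y)
```

For this T the composite map is x -> 0.6875 x + 0.625, a contraction with the same fixed point, so the iterates converge geometrically to 2.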
Abstract: Data warehouses (DWs) must integrate information from the different areas and sources of an organization in order to extract knowledge relevant to decision-making. DW development is not an easy task, which is why various design approaches have been put forward. These approaches can be classified into three paradigms according to the origin of the information requirements: supply-driven, demand-driven, and hybrids of the two. This article compares methodologies for the multidimensional design of DWs through a systematic mapping as the research methodology. For each paradigm, the study presents the main characteristics of the methodologies, their notations, and the problem areas each exhibits. The results indicate that no methodology follows the complete process of implementing a DW in either an academic or an industrial environment; nor is there evidence of attempts to address DW design and development by applying and comparing the different methodologies existing in the field.
Funding: Supported by the National Science Foundation [grant numbers 1854502 and 1855902]. Publication was made possible in part by support from the HKU Libraries Open Access Author Fund sponsored by the HKU Libraries. USDA is an equal opportunity provider and employer. Mention of trade names or commercial products in this publication is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the U.S. Department of Agriculture.
Abstract: Morphological (e.g. shape, size, and height) and functional (e.g. working, living, and shopping) information about buildings is highly needed for urban planning and management, as well as for other applications such as city-scale building energy use modeling. Due to the limited availability of socio-economic geospatial data, mapping building functions is more challenging than mapping building morphology, especially over large areas. In this study, we proposed an integrated framework to map building functions in 50 U.S. cities by integrating multi-source web-based geospatial data. First, a web crawler was developed to extract Points of Interest (POIs) from Tripadvisor.com, and a map crawler was developed to extract POIs and land use parcels from Google Maps. Second, an unsupervised machine learning algorithm, OneClassSVM, was used to identify residential buildings based on landscape features derived from Microsoft building footprints. Third, the type ratio of POIs and the area ratio of land use parcels were used to identify six non-residential functions (i.e. hospital, hotel, school, shop, restaurant, and office). The accuracy assessment indicates that the proposed framework performed well, with an average overall accuracy of 94% and a kappa coefficient of 0.63. Given the worldwide coverage of Google Maps and Tripadvisor.com, the proposed framework is transferable to other cities around the world. The data products generated from this study are of great use for quantitative city-scale urban studies, such as building energy use modeling at the single-building level over large areas.
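The POI type-ratio step can be sketched as follows (the counts, the 0.5 dominance threshold, and the "mixed"/"unknown" fallbacks are invented for illustration; the paper also uses land-use parcel area ratios):

```python
# Illustrative non-residential labeling by POI type ratio: a building is
# assigned the function whose POI type dominates among the POIs linked
# to its footprint.
FUNCTIONS = ("hospital", "hotel", "school", "shop", "restaurant", "office")

def classify_building(poi_counts, min_ratio=0.5):
    """Return the dominant function if its POI share reaches min_ratio."""
    total = sum(poi_counts.get(f, 0) for f in FUNCTIONS)
    if total == 0:
        return "unknown"                 # no POIs linked to this footprint
    best = max(FUNCTIONS, key=lambda f: poi_counts.get(f, 0))
    ratio = poi_counts.get(best, 0) / total
    return best if ratio >= min_ratio else "mixed"

# Hypothetical POI counts within one building footprint
label = classify_building({"restaurant": 6, "shop": 2, "office": 1})
```

With six of nine POIs being restaurants (ratio 0.67), the building is labeled "restaurant"; footprints with no dominant type fall through to "mixed".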