Funding: National Natural Science Foundation of China (No. 42371446); Natural Science Foundation of Hubei Province (No. 2024AFD412); Fundamental Research Funds for National Universities, China University of Geosciences (Wuhan) (No. 2024XLA17).
Abstract: In recent years, Volunteered Geographic Information (VGI) has emerged as a crucial source of mapping data, contributed by users through crowdsourcing platforms such as OpenStreetMap. This paper presents a novel approach that integrates Large Language Models (LLMs) into a fully automated mapping workflow using VGI data. The process leverages prompt engineering, which involves designing and optimizing input instructions to ensure the LLM produces the desired mapping outputs. By constructing precise and detailed prompts, LLM agents are able to accurately interpret mapping requirements and autonomously extract, analyze, and process VGI geospatial data. They dynamically interact with mapping tools to automate the entire mapping process, from data acquisition to map generation. This approach significantly streamlines the creation of high-quality mapping outputs, reducing the time and resources typically required for such tasks. Moreover, the system lowers the barrier for non-expert users, enabling them to generate accurate maps without extensive technical expertise. Through various case studies, we demonstrate the application of LLMs across different mapping scenarios, highlighting their potential to enhance the efficiency, accuracy, and accessibility of map production. The results suggest that LLM-powered mapping systems can not only optimize VGI data processing but also expand the usability of ubiquitous mapping across diverse fields, including urban planning and infrastructure development.
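To make the prompt-engineering pattern concrete, the minimal sketch below pairs a structured prompt template with a stub LLM call and an OSM-based rendering step via the osmnx library. The prompt wording, the `call_llm` stub, and the JSON argument schema are illustrative assumptions, not the authors' implementation.

```python
import json

import osmnx as ox  # OSM (VGI) access and plotting; requires network access

# Hypothetical prompt template: the structure (role, task, strict JSON output)
# is the prompt-engineering part; the exact wording is an assumption.
PROMPT = """You are a mapping agent working with OpenStreetMap (VGI) data.
Task: {request}
Respond ONLY with JSON: {{"place": "<place name>", "network_type": "drive"|"walk"|"bike"}}"""

def call_llm(prompt: str) -> str:
    """Stub for any chat-completion API; a canned reply keeps the sketch offline."""
    return json.dumps({"place": "Delft, Netherlands", "network_type": "drive"})

def make_map(request: str) -> None:
    # 1. The LLM turns a free-form mapping request into structured tool arguments.
    args = json.loads(call_llm(PROMPT.format(request=request)))
    # 2. Dispatch to mapping tools: fetch the street network and render the map.
    graph = ox.graph_from_place(args["place"], network_type=args["network_type"])
    ox.plot_graph(graph)

make_map("Draw the drivable street network of Delft.")
```

A production agent would replace `call_llm` with a real chat-completion client and expose a richer set of mapping tools for the model to invoke.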
Abstract: Monitoring of the Earth's surface has been significantly improved by optical remote sensing from satellites such as SPOT, Landsat, and Sentinel-2, which produce vast datasets. Processing this data, often referred to as Big Data, is essential for decision-making and requires advanced algorithms to analyze changes in land cover. In the age of artificial intelligence, supervised machine learning algorithms are widely used, although their application in urban contexts remains complex. Researchers have to evaluate and tune various algorithms according to their assumptions and experiments, which requires time and resources. This paper presents a meta-modeling approach for urban satellite image classification using model-driven engineering techniques. The aim is to provide urban planners with standardized solutions for geospatial processing, promoting reusability and interoperability. The formalization includes the creation of a knowledge base and the modeling of processing chains to analyze land use.
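As a hedged illustration of the kind of supervised classifier such processing chains would instantiate, the sketch below trains a random forest on synthetic pixel features; the band layout and class labels are placeholders, not the paper's formalized models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled Sentinel-2 pixels: rows are pixels, columns
# are spectral bands (e.g. B2, B3, B4, B8); labels are land-cover classes
# (0=built-up, 1=vegetation, 2=water). Real work would sample these from
# imagery and reference polygons.
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 4))
y = rng.integers(0, 3, size=3000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One of the supervised algorithms typically evaluated and tuned in such experiments.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```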
Abstract: National mapping agencies are responsible for creating and maintaining country-wide geospatial datasets that are highly accurate and homogeneous. The Netherlands' Cadastre, Land Registry and Mapping Agency (in short, the Kadaster) has created a database of information related to solar installations using GeoAI. Deep learning techniques were employed to detect small- and medium-scale solar installations on buildings from very high-resolution aerial images covering the whole of the Netherlands. The impact of data pre-processing and post-processing is addressed and evaluated. The process was automated to handle the enormous data volume, and the method was scaled up nationwide with the help of cloud solutions. To make this information visible, consistent, and usable, we built upon the existing TernausNet, a convolutional neural network (CNN) architecture. Model metrics were evaluated after post-processing. When combined with automated or custom post-processing, the algorithm yields improved results. The precision and recall of the model, evaluated for three different regions, average about 0.93 and 0.92, respectively, after post-processing. Custom post-processing further improves the results by removing at least 50% of the false positives. The final results were compared with the existing national PV register. Overall, the results not only help policymakers take the necessary steps toward energy-transition goals but also serve as a register for infrastructure planning.
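The sketch below illustrates, at pixel level, the two ideas the abstract highlights: a post-processing step that removes small connected components (a simple stand-in for false-positive filtering) and the computation of precision and recall. It is a simplified, assumption-driven example, not the Kadaster pipeline.

```python
import numpy as np
from scipy import ndimage

def filter_small_components(mask: np.ndarray, min_pixels: int) -> np.ndarray:
    """Drop connected components smaller than min_pixels, a simple stand-in
    for post-processing that removes spurious detections."""
    labeled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    keep_labels = 1 + np.flatnonzero(sizes >= min_pixels)
    return np.isin(labeled, keep_labels)

def precision_recall(pred: np.ndarray, truth: np.ndarray) -> tuple:
    """Pixel-level precision and recall for binary masks."""
    tp = np.sum(pred & truth)
    return tp / max(np.sum(pred), 1), tp / max(np.sum(truth), 1)

# Toy ground truth: one 4x4 installation; the prediction adds a 1-pixel artifact.
truth = np.zeros((20, 20), dtype=bool)
truth[2:6, 2:6] = True
pred = truth.copy()
pred[15, 15] = True
pred = filter_small_components(pred, min_pixels=4)
print(precision_recall(pred, truth))  # the lone false positive is removed
```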
Abstract: GIS (Geographic Information Systems) data showcase the locations of earth observations or features, their associated attributes, and the spatial relationships that exist between such observations. Analysis of GIS data varies widely and may include modeling and prediction, which are usually computing-intensive and complicated, especially when large datasets are involved. With advances in computing technologies, techniques such as machine learning (ML) are being suggested as a potential game changer in the analysis of GIS data because of their comparative speed, accuracy, automation, and repeatability. Perhaps the greatest benefit of combining GIS and ML is the ability to transfer results from one database to another. GIS and ML tools have been used extensively in medicine, urban development, and environmental modeling such as landslide susceptibility prediction (LSP). However, data loss during conversion between GIS systems remains a problem in medicine, while in geotechnical areas such as erosion and flood prediction, a lack of data and variability in soil have limited the use of GIS and ML techniques. This paper gives an overview of the current ML methods that have been incorporated into the spatial analysis of data obtained from GIS tools for LSP, health, and urban development. The use of supervised machine learning (SML) algorithms such as decision trees, SVM, KNN, and the perceptron, as well as unsupervised machine learning algorithms such as k-means, the elbow method, and hierarchical clustering, is discussed. Their benefits, as well as their shortcomings as studied by several researchers, are elucidated in this review. Finally, the review also discusses future optimization techniques.
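For the unsupervised side of this toolbox, the sketch below runs k-means over synthetic GIS-derived features and prints the inertia curve used by the elbow method to choose the number of clusters; the feature semantics are assumed for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic 2-D feature matrix standing in for attributes extracted from
# GIS layers (e.g. slope and rainfall at sampled locations), with three
# well-separated groups baked in.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(100, 2)) for c in (0, 5, 10)])

# Elbow method: inertia (within-cluster sum of squares) versus k; the
# "elbow" where the curve flattens suggests a suitable cluster count.
for k in range(1, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1))
```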
Funding: Supported by NSF [CNS 1841520 and ACI 1835507], NASA Goddard [80NSSC19P2033], and the NSF Spatiotemporal I/UCRC IAB members.
Abstract: Advances in sensing technologies, including remote sensing, in situ sensing, social sensing, and health sensing, have tremendously improved our capability to observe and record natural and social phenomena such as natural disasters, presidential elections, and infectious diseases. These observations provide an unprecedented opportunity to better understand and respond to the spatiotemporal dynamics of the environment, urban settings, health and disease propagation, business decisions, and crisis and crime. Spatiotemporal event detection serves as a gateway to such understanding by detecting events that represent the abnormal status of the relevant phenomena. This paper reviews the literature on different sensing capabilities, spatiotemporal event extraction methods, and categories of applications for the detected events. The novelty of this review is to revisit the definition and requirements of event detection and to lay out the overall workflow, from sensing and event extraction methods to the operations and decision-support processes based on the extracted events, as an agenda for future event detection research. Guidance is presented on the current challenges to this research agenda, and future directions are discussed for conducting spatiotemporal event detection in the era of big data, advanced sensing, and artificial intelligence.
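As a minimal example of detecting the "abnormal status" of an observed phenomenon, the sketch below flags bursts in an hourly count series with a rolling z-score; the data are synthetic and the threshold of 3 is an illustrative assumption, far simpler than the extraction methods the review surveys.

```python
import numpy as np
import pandas as pd

# Hourly observation counts for one location (e.g. geotagged posts or sensor
# readings); a burst is injected to mimic an event.
rng = np.random.default_rng(0)
counts = pd.Series(rng.poisson(10, 240),
                   index=pd.date_range("2024-01-01", periods=240, freq="h"))
counts.iloc[100:104] += 40

# Flag timestamps whose count deviates strongly from the recent baseline:
# a rolling z-score over the previous 24 hours, shifted so the current
# observation does not contaminate its own baseline.
baseline = counts.rolling(24, min_periods=24).mean().shift(1)
spread = counts.rolling(24, min_periods=24).std().shift(1)
z = (counts - baseline) / spread
print(counts[z > 3])  # the injected burst is reported as an event
```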
Funding: Supported by the Vilas Associates Competition Award at the University of Wisconsin-Madison (UW-Madison) and the National Science Foundation [grant number 1940091].
Abstract: To find disaster-relevant social media messages, current approaches utilize natural language processing methods or machine learning algorithms that rely on text only; these remain imperfect because of the variability and uncertainty of the language used on social media, and because they ignore the geographic context of the messages when posted. Meanwhile, whether a social media message is disaster-relevant is highly sensitive to its posting location and time. However, few studies have explored which spatial features can aid text classification, and to what extent temporal and especially spatial features can do so. This paper proposes a geographic context-aware text mining method that incorporates spatial and temporal information derived from social media and authoritative datasets, along with the text information, to classify disaster-relevant social media posts. This work designed and demonstrated how diverse types of spatial and temporal features can be derived from spatial data and then used to enhance text mining. A deep learning-based method and commonly used machine learning algorithms were used to assess the accuracy of the enhanced text-mining method. The performance of the classification models generated by various combinations of textual, spatial, and temporal features indicates that the additional spatial and temporal features help improve overall classification accuracy.
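A minimal sketch of fusing textual features with spatial and temporal context is shown below, using a TF-IDF vectorizer alongside scaled numeric features in one scikit-learn pipeline; the toy posts and the two context features (distance to the hazard zone, hours since onset) are assumptions standing in for the paper's richer feature set.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy posts: text plus two context features of the kind the paper derives,
# one spatial (distance to the hazard zone) and one temporal (hours since onset).
df = pd.DataFrame({
    "text": ["streets flooded near the river", "great concert tonight",
             "water rising fast, need help", "flood warning ignored, all dry here"],
    "dist_km": [0.5, 30.0, 1.2, 25.0],
    "hours_since_onset": [2.0, 50.0, 3.0, 40.0],
    "relevant": [1, 0, 1, 0],
})

# Fuse textual and geographic-context features in a single model.
features = ColumnTransformer([
    ("text", TfidfVectorizer(), "text"),
    ("context", StandardScaler(), ["dist_km", "hours_since_onset"]),
])
model = Pipeline([("features", features), ("clf", LogisticRegression())])
cols = ["text", "dist_km", "hours_since_onset"]
model.fit(df[cols], df["relevant"])
print(model.predict(df[cols]))
```

Comparing this combined model against a text-only baseline (dropping the "context" transformer) mirrors the paper's experimental design of evaluating feature combinations.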