Accurate geospatial data are essential for geographic information systems (GIS), environmental monitoring, and urban planning. The deep integration of the open Internet and geographic information technology has led to increasing challenges to the integrity and security of spatial data. In this paper, we treat abnormal spatial data as missing data and focus on their recovery. Existing geospatial data recovery methods require complete datasets for training, resulting in time-consuming recovery and poor generalization. To address these issues, we propose a GAIN-LSTM-based geospatial data recovery method (TGAIN) with two main components: (1) it uses a long short-term memory (LSTM) network as the generator to analyze geospatial temporal data and capture its temporal correlation; (2) it constructs a complete TGAIN network using a cue-masked fusion matrix mechanism to produce data that match the original distribution of the input. Experimental results on two publicly accessible datasets demonstrate that TGAIN surpasses four contemporary and traditional models in terms of mean absolute error (MAE), root mean square error (RMSE), mean square error (MSE), mean absolute percentage error (MAPE), coefficient of determination (R²), and average computation time across various data missing rates. TGAIN also exhibits superior accuracy and robustness compared with existing models, especially at high missing rates. The model improves the integrity of geospatial data and provides data support for practical applications such as urban traffic optimization and prediction and personal mobility analysis.
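The cue-mask mechanism described above is closely related to the hint matrix in the original GAIN formulation. As a loose, hypothetical illustration (not the TGAIN implementation), the sketch below builds a hint matrix that reveals most of the missingness mask to the discriminator while hiding a random remainder:

```python
import random

def hint_matrix(mask, hint_rate=0.9, rng=random.Random(0)):
    """GAIN-style hint: reveal each mask entry (1 = observed, 0 = missing)
    with probability hint_rate; otherwise encode 'unknown' as 0.5."""
    return [[m if rng.random() < hint_rate else 0.5 for m in row]
            for row in mask]

mask = [[1, 0, 1], [0, 1, 1]]
H = hint_matrix(mask)
```

The discriminator then receives the imputed matrix together with H, so it only has to judge the entries the hint leaves ambiguous; the names and the 0.5 encoding follow the GAIN paper's convention, not necessarily TGAIN's.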
Morphological (e.g. shape, size, and height) and functional (e.g. working, living, and shopping) information about buildings is highly needed for urban planning and management, as well as for other applications such as city-scale building energy use modeling. Owing to the limited availability of socio-economic geospatial data, mapping building functions is more challenging than mapping building morphology, especially over large areas. In this study, we proposed an integrated framework to map building functions in 50 U.S. cities by integrating multi-source web-based geospatial data. First, a web crawler was developed to extract Points of Interest (POIs) from Tripadvisor.com, and a map crawler was developed to extract POIs and land use parcels from Google Maps. Second, an unsupervised machine learning algorithm named OneClassSVM was used to identify residential buildings based on landscape features derived from Microsoft building footprints. Third, the type ratio of POIs and the area ratio of land use parcels were used to identify six non-residential functions (i.e. hospital, hotel, school, shop, restaurant, and office). The accuracy assessment indicates that the proposed framework performed well, with an average overall accuracy of 94% and a kappa coefficient of 0.63. Given the worldwide coverage of Google Maps and Tripadvisor.com, the proposed framework is transferable to other cities around the world. The data products generated by this study are of great use for quantitative city-scale urban studies, such as building energy use modeling at the single-building level over large areas.
Iced transmission line galloping poses a significant threat to the safety and reliability of power systems, directly causing line tripping, disconnections, and power outages. Existing early warning methods for iced transmission line galloping suffer from reliance on a single data source, neglect of irregular time series, and lack of attention-based closed-loop feedback, resulting in high rates of missed and false alarms. To address these challenges, we propose an Internet of Things (IoT)-empowered early warning method for transmission line galloping that integrates time series data from optical fiber sensing and weather forecasts. The method first applies a primary adaptive weighted fusion to the IoT-empowered optical fiber real-time sensing data and weather forecast data, followed by a secondary fusion based on a Back Propagation (BP) neural network, and uses the K-medoids algorithm to cluster the fused data. Furthermore, an adaptive irregular time series perception adjustment module is introduced into the traditional Gated Recurrent Unit (GRU) network, and attention-based closed-loop feedback updates the network parameters through gradient feedback of the loss function, enabling closed-loop training and time series prediction with the GRU model. Subsequently, considering the various types of prediction data and the duration of icing, an iced transmission line galloping risk coefficient is established, and warnings are categorized based on this coefficient. Finally, using an IoT-driven realistic dataset of iced transmission line galloping, the effectiveness of the proposed method is validated through multi-dimensional simulation scenarios.
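The clustering stage named above uses K-medoids, which, unlike K-means, restricts cluster centers to actual data points. A minimal pure-Python sketch (illustrative only; the paper's fused-feature distance is replaced here by plain Euclidean distance on invented 2-D points):

```python
import random

def k_medoids(points, k, dist, n_iter=20, seed=1):
    """Plain K-medoids: assign each point to its nearest medoid, then
    move each medoid to the member minimising total in-cluster distance."""
    rng = random.Random(seed)
    medoids = rng.sample(points, k)
    for _ in range(n_iter):
        clusters = {m: [] for m in medoids}
        for p in points:
            nearest = min(medoids, key=lambda c: dist(p, c))
            clusters[nearest].append(p)
        new_medoids = [min(members, key=lambda c: sum(dist(c, q) for q in members))
                       for members in clusters.values() if members]
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return medoids

euclid = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
meds = k_medoids(pts, 2, euclid)
```

Because medoids are real observations, each cluster representative is directly interpretable as an actual sensing record, which is a common reason to prefer K-medoids over K-means for fused sensor data.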
Mangroves are woody plant communities of tropical and subtropical regions, found mainly in intertidal zones along coastlines. Despite their considerable benefits to humans and the surrounding environment, their existence is threatened by anthropogenic activities and natural drivers. It is therefore vital to expand mangrove plantations by identifying suitable locations; such efforts support conservation and plantation practices and lower the mortality rate of seedlings. Identifying ecologically suitable areas for plantation is thus essential to ensure a high success rate. This study aimed to identify suitable locations for mangrove plantations along the southern coastal frontiers of Hormozgan, Iran. To this end, we applied a hybrid Fuzzy-DEMATEL-ANP (FDANP) model as a Multi-Criteria Decision Making (MCDM) approach to determine the relative importance of different criteria, combined with geospatial and remote sensing data. Ten relevant environmental criteria, including meteorological, topographical, and geomorphological factors, were used in the modeling. The statistical evaluation demonstrated the high potential of the developed approach for identifying suitable locations. In the final results, 6.10% and 20.80% of the study area were classified as very-high and very-low suitability areas, respectively. These results can guide decision-makers and managers toward better conservation and plantation planning. Moreover, the use of freely available remote sensing data makes such an approach cost-effective to implement in other regions by interested researchers and governing organizations.
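MCDM weights of the kind FDANP produces ultimately feed a weighted overlay: each criterion layer is normalised to [0, 1] and combined linearly into a suitability score. A toy sketch with invented criterion names and weights (the actual criteria and weights come from the fuzzy DEMATEL-ANP analysis):

```python
def suitability(criteria, weights):
    """Weighted linear combination of normalised criterion scores in [0, 1]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(criteria[k] * w for k, w in weights.items())

# hypothetical criterion scores for one candidate coastal cell
cell = {"tidal_range": 0.8, "slope": 0.9, "temperature": 0.6}
w = {"tidal_range": 0.5, "slope": 0.3, "temperature": 0.2}
score = suitability(cell, w)  # 0.40 + 0.27 + 0.12 = 0.79
```

Applying this per grid cell and binning the scores yields suitability classes such as the very-high and very-low categories reported above.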
Recognition of ship traffic patterns can provide insights into the rules of navigation, maneuvering, and collision avoidance for ships at sea, which is essential for ensuring safe navigation and improving navigational efficiency. With the popularization of the Automatic Identification System (AIS), numerous studies have utilized ship trajectories to identify maritime traffic patterns. However, current research focuses on spatiotemporal behavioral feature clustering of ship trajectory points or segments while neglecting the multiple factors that influence ship behavior, such as static ship attributes and maritime geospatial features, resulting in insufficient precision in traffic pattern recognition. This study proposes a ship traffic pattern recognition method based on multi-attribute trajectory similarity (STPMTS), which considers ship static features, dynamic features, and port geospatial features, as well as the semantic relationships between them. First, a ship trajectory reconstruction method based on grid compression was introduced to eliminate redundant data and improve the efficiency of trajectory similarity measurement. Subsequently, to quantify the similarity of ship trajectories, a similarity measurement method is proposed that combines ship static and dynamic information with port geospatial features. Hierarchical clustering was then applied to the trajectory similarity matrix to divide trajectories into clusters, and the quality of the similarity measurement results was evaluated by a quality criterion to determine the optimal number of traffic patterns. Finally, the effectiveness of the proposed method was verified using actual ship trajectory data from Tianjin Port, China, spanning September to November 2016. Compared with other methods, the proposed method shows significant advantages in identifying the traffic patterns of ships entering and leaving the port in terms of geometric features, dynamic features, and adherence to navigation rules. This study could inspire a comprehensive exploration of maritime transportation knowledge from multiple perspectives.
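The grid-compression step mentioned above can be sketched as follows: snap each AIS fix to a grid cell and keep only the first fix of each run of consecutive fixes falling in the same cell (the cell size and coordinates here are hypothetical, not the paper's parameters):

```python
def grid_compress(track, cell=0.01):
    """Collapse consecutive trajectory fixes that fall in the same
    grid cell, keeping the first fix of each run (order preserved)."""
    out, last = [], None
    for lon, lat in track:
        key = (int(lon // cell), int(lat // cell))
        if key != last:
            out.append((lon, lat))
            last = key
    return out

# three fixes: the first two land in the same ~1 km cell
track = [(117.001, 38.001), (117.002, 38.002), (117.015, 38.001)]
compressed = grid_compress(track)
```

Thinning dense point runs this way shrinks the pairwise similarity computation, which scales with the product of trajectory lengths.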
Viral infectious diseases, characterized by their intricate nature and wide-ranging diversity, pose substantial challenges for data management. The vast volume of data these diseases generate, spanning from molecular mechanisms within cells to large-scale epidemiological patterns, has surpassed the capabilities of traditional analytical methods. In the era of artificial intelligence (AI) and big data, these analytical methods urgently need to be optimized to handle and exploit the information more effectively. Despite the rapid accumulation of data on viral infections, the lack of a comprehensive framework for integrating, selecting, and analyzing these datasets has left many researchers uncertain about which data to select, how to access them, and how to use them most effectively. This review endeavors to fill these gaps by exploring the multifaceted nature of viral infectious diseases and summarizing relevant data across multiple levels, from the molecular details of pathogens to broad epidemiological trends. The scope extends from the micro-scale to the macro-scale, encompassing pathogens, hosts, and vectors. In addition to summarizing data, this review thoroughly surveys dataset sources, traces the historical evolution of data collection in the field, highlighting the progress achieved over time, and evaluates the current limitations that impede data utilization. Furthermore, we propose strategies to surmount these challenges, focusing on the development and application of advanced computational techniques, AI-driven models, and enhanced data integration practices. By providing a comprehensive synthesis of existing knowledge, this review is designed to guide future research and contribute to more informed approaches to the surveillance, prevention, and control of viral infectious diseases, particularly in the context of the expanding big-data landscape.
Since its launch in 2010, the Google Earth Engine (GEE) cloud platform has been widely used, yielding a wealth of valuable information; however, its potential for forest resource management has not been fully exploited. To extract dominant woody plant species, Sentinel-1 (S1) and Sentinel-2 (S2) data were combined in GEE with National Forest Resources Inventory (NFRI) and topographic data, producing a 10 m resolution multimodal geospatial dataset for subtropical forests in southeast China. Spectral and texture features, red-edge bands, and vegetation indices of the S1 and S2 data were computed. A hierarchical model extracted information on forest distribution and area and on the dominant woody plant species. The results suggest that combining S1 winter data with S2 data from the whole year improves the accuracy of forest distribution and area extraction compared with using either source alone. Similarly, for dominant woody species recognition, using S1 winter data together with S2 data from all four seasons was accurate. Including terrain factors and removing spatial correlation from the NFRI sample points further improved recognition accuracy. The optimal forest extraction achieved an overall accuracy (OA) of 97.4% and a map-level image classification efficacy (MICE) of 96.7%; for dominant species extraction, OA and MICE were 83.6% and 80.7%, respectively. These high accuracy and efficacy values indicate that the hierarchical recognition model based on multimodal remote sensing data performs extremely well for extracting dominant woody plant species. Visualizing the results through a GEE application allows an intuitive display of forest and species distribution, offering significant convenience for forest resource monitoring.
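Among the vegetation indices computable from S2 bands, NDVI is the standard example: the normalised difference of near-infrared and red reflectance. A minimal sketch (band values are made up; on Sentinel-2, NIR is band B8 and red is band B4):

```python
def ndvi(nir, red):
    """Normalised Difference Vegetation Index for one pixel:
    (NIR - red) / (NIR + red), in [-1, 1]; dense vegetation scores high."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

# hypothetical surface reflectances: (NIR, red) per pixel
pixels = [(0.42, 0.08), (0.20, 0.18)]
values = [ndvi(n, r) for n, r in pixels]
```

The first pixel (strong NIR reflectance, low red absorption) scores much higher, the spectral signature of healthy canopy versus bare or sparse ground.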
The Intelligent Internet of Things (IIoT) involves real-world things that communicate or interact with each other through networking technologies; data collected from these "things" are combined with intelligent approaches, such as Artificial Intelligence (AI) and machine learning, to make accurate decisions. Data science is the science of dealing with data and its relationships through intelligent approaches. Most state-of-the-art research focuses on either data science or the IIoT independently, rather than exploring their integration. To address this gap, this article provides a comprehensive survey of the advances in, and integration of, data science with the Intelligent IoT, classifying existing IoT-based data science techniques and summarizing their various characteristics. The paper analyzes data science and big data security and privacy features, including network architecture, data protection, and continuous data monitoring, which face challenges in various IoT-based systems. Extensive insights into IoT data security, privacy, and related challenges are presented in the context of data science for the IoT. In addition, this study highlights current opportunities to advance data science and IoT market development. The gaps and challenges in integrating data science and the IoT are comprehensively presented, followed by a future outlook and possible solutions.
Air pollution in China covers a large area and has complex sources and formation mechanisms, making the country a unique place to conduct air pollution and atmospheric chemistry research. The National Natural Science Foundation of China's Major Research Plan entitled "Fundamental Researches on the Formation and Response Mechanism of the Air Pollution Complex in China" (the Plan) has funded 76 research projects to explore the causes of air pollution in China and the key atmospheric-physics and atmospheric-chemistry processes involved. To consolidate the abundant data from the Plan and showcase its long-term impact domestically and internationally, an integration project was tasked with collecting the various types of data generated by the 76 projects. This project has classified and integrated the data into eight categories comprising 258 datasets and 15 technical reports in total. The integration project has also led to the establishment of the China Air Pollution Data Center (CAPDC) platform, which provides storage, retrieval, and download services for the eight categories. The platform features data visualization, querying of related project information, and bilingual services in English and Chinese, allowing rapid searching and downloading of data and providing a solid foundation of data and support for future research. Air pollution control in China, especially over the past decade, is undeniably a global exemplar, and this data center is the first in China to focus on research into the country's air pollution complex.
As a new type of production factor in healthcare, healthcare data elements have been rapidly integrated into various health production processes, such as clinical assistance, health management, biological testing, and operation and supervision [1,2]. Healthcare data elements include biological and clinical data related to disease, environmental health data associated with daily life, and operational and healthcare management data related to healthcare activities (Figure 1). Activities such as the construction of a data value assessment system, the development of a data circulation and sharing platform, and the authorization of data compliance and operation products support the strong growth momentum of the market for healthcare data elements in China [3].
In recent years, Volunteered Geographic Information (VGI) has emerged as a crucial source of mapping data contributed by users through crowdsourcing platforms such as OpenStreetMap. This paper presents a novel approach that integrates Large Language Models (LLMs) into a fully automated mapping workflow utilizing VGI data. The process leverages prompt engineering, which involves designing and optimizing input instructions so that the LLM produces the desired mapping outputs. By constructing precise and detailed prompts, LLM agents can accurately interpret mapping requirements and autonomously extract, analyze, and process VGI geospatial data. They dynamically interact with mapping tools to automate the entire mapping process, from data acquisition to map generation. This approach significantly streamlines the creation of high-quality mapping outputs, reducing the time and resources such tasks typically require. Moreover, the system lowers the barrier for non-expert users, enabling them to generate accurate maps without extensive technical expertise. Through various case studies, we demonstrate the application of LLMs across different mapping scenarios, highlighting their potential to enhance the efficiency, accuracy, and accessibility of map production. The results suggest that LLM-powered mapping systems can not only optimize VGI data processing but also expand the usability of ubiquitous mapping across diverse fields, including urban planning and infrastructure development.
As smart grid technology rapidly advances, the vast amount of user data collected by smart meters presents significant challenges for data security and privacy protection. Current research emphasizes data security and user privacy concerns within smart grids, but existing methods struggle with efficiency and security when processing large-scale data. Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge. This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities. The approach optimizes data preprocessing, integrates Long Short-Term Memory (LSTM) networks for handling time-series data, and employs homomorphic encryption to safeguard user privacy. It also explores the application of Boneh-Lynn-Shacham (BLS) signatures for user authentication. The proposed scheme's efficiency, security, and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
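Standardizing and normalizing heterogeneous data modalities typically begins with something like min-max scaling, so readings with different physical ranges share a common [0, 1] range before aggregation or LSTM input. A generic sketch (not the paper's exact preprocessing):

```python
def min_max(values, lo=0.0, hi=1.0):
    """Min-max normalisation: linearly map values onto [lo, hi]."""
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        return [lo for _ in values]  # constant series: no spread to scale
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]

readings = [230.1, 231.4, 229.8, 232.0]  # e.g. hypothetical voltage samples
norm = min_max(readings)
```

One design note: min-max scaling is sensitive to outliers (a single spike compresses everything else toward 0), which is why z-score standardization is the usual alternative for noisy meter data.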
Earth's internal core and crustal magnetic fields, as measured by geomagnetic satellites such as MSS-1 (Macao Science Satellite-1) and Swarm, are vital for understanding core dynamics and tectonic evolution. To model these internal magnetic fields accurately, data selection based on specific criteria is often employed to minimize the influence of rapidly changing current systems in the ionosphere and magnetosphere. However, the quantitative impact of different data selection criteria on internal geomagnetic field modeling is not well understood. This study addresses this issue and provides a reference for constructing and applying geomagnetic field models. First, we collect the latest MSS-1 and Swarm satellite magnetic data and summarize the data selection criteria widely used in geomagnetic field modeling. Second, we briefly describe the method for co-estimating the core, crustal, and large-scale magnetospheric fields from satellite magnetic data. Finally, we conduct a series of field modeling experiments with different data selection criteria to quantitatively estimate their influence. Our numerical experiments confirm that without selecting data from dark regions and geomagnetically quiet times, the resulting internal field differences at the Earth's surface can range from tens to hundreds of nanotesla (nT). Additionally, we find that the uncertainties introduced into field models by different data selection criteria are significantly larger than the measurement accuracy of modern geomagnetic satellites. These uncertainties should be considered when using the constructed magnetic field models for scientific research and applications.
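A typical data-selection step of the kind summarized above filters records by geomagnetic activity (e.g. the Kp index) and by solar zenith angle to keep night-side, quiet-time data. The thresholds below are of the sort commonly used in field modeling but are hypothetical here, as are the record field names:

```python
def select_quiet(records, kp_max=2.0, require_dark=True):
    """Keep satellite magnetic records taken during geomagnetically
    quiet times (low Kp) and, optionally, on the dark side of Earth."""
    keep = []
    for r in records:
        if r["kp"] > kp_max:
            continue  # geomagnetically disturbed: external fields too strong
        if require_dark and r["sun_zenith_deg"] < 100.0:
            continue  # sun less than ~10 deg below horizon: day side
        keep.append(r)
    return keep

records = [
    {"kp": 1.3, "sun_zenith_deg": 120.0},  # quiet and dark
    {"kp": 4.0, "sun_zenith_deg": 130.0},  # disturbed
    {"kp": 0.7, "sun_zenith_deg": 60.0},   # daylight
]
quiet = select_quiet(records)
```

Varying `kp_max` and the darkness requirement is exactly the kind of experiment the study runs to quantify how selection choices propagate into the modeled internal field.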
Many fields, such as neuroscience, are experiencing a vast proliferation of cellular data, underscoring the need to organize and interpret large datasets. A popular approach partitions data into manageable subsets via hierarchical clustering, but objective methods to determine the appropriate classification granularity are missing. We recently introduced a technique to systematically identify when to stop subdividing clusters, based on the fundamental principle that cells must differ more between clusters than within them. Here we present the corresponding protocol to classify cellular datasets by combining data-driven unsupervised hierarchical clustering with statistical testing. These general-purpose functions are applicable to any cellular dataset that can be organized as a two-dimensional matrix of numerical values, including molecular, physiological, and anatomical datasets. We demonstrate the protocol using cellular data from the Janelia MouseLight project to characterize morphological aspects of neurons.
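The stopping principle, that cells must differ more between clusters than within them, can be sketched as a simple between- versus within-distance comparison. This is only a bare ratio test for illustration; the actual protocol uses proper statistical testing:

```python
from statistics import mean

def should_split(cluster_a, cluster_b, dist, margin=1.0):
    """Accept a proposed split only if the mean between-cluster distance
    exceeds the mean within-cluster distance by the given margin."""
    within = mean([dist(x, y) for c in (cluster_a, cluster_b)
                   for i, x in enumerate(c) for y in c[i + 1:]])
    between = mean([dist(x, y) for x in cluster_a for y in cluster_b])
    return between > margin * within

d = lambda a, b: abs(a - b)  # toy 1-D "cells"
well_separated = should_split([1.0, 1.2], [5.0, 5.3], d)
```

Applied recursively down a dendrogram, a test like this halts subdivision once candidate subclusters stop being meaningfully distinct, which is the granularity criterion the protocol formalizes.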
The accelerated advancement of the Internet of Things (IoT) has generated substantial data, including sensitive and private information, so it is imperative to guarantee the security of data sharing. Ciphertext-Policy Attribute-Based Encryption (CP-ABE) facilitates fine-grained access control and can effectively ensure the confidentiality of shared data. Nevertheless, conventional centralized CP-ABE schemes are plagued by key misuse, key escrow, and heavy computation, which introduce security risks. This paper proposes a lightweight IoT data security sharing scheme that integrates blockchain technology with CP-ABE to address these issues. The integrity and traceability of shared data are guaranteed by using blockchain technology to store and verify access transactions. The encryption and decryption operations of the CP-ABE algorithm are implemented with elliptic curve scalar multiplication to suit lightweight IoT devices, instead of the more computationally expensive bilinear pairings found in traditional CP-ABE algorithms. Additionally, part of the computation is delegated to edge nodes to alleviate the computational burden on users. A distributed key management method is proposed to address key escrow and misuse: it employs an edge blockchain to facilitate the storage and distribution of attribute private keys. Meanwhile, data sharing security is enhanced by combining off-chain and on-chain ciphertext storage. The security and performance analysis indicates that the proposed scheme is both more efficient and more secure.
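Elliptic curve scalar multiplication, the operation the scheme substitutes for bilinear pairings, reduces to repeated point doubling and addition. A textbook double-and-add sketch over the tiny curve y² = x³ + 2x + 2 (mod 17), whose point group has 19 elements generated by (5, 1); these are toy parameters for illustration, while real deployments use standardized curves:

```python
def ec_add(P, Q, a, p):
    """Add two points on y^2 = x^3 + a*x + b (mod p); None is infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # P + (-P) = point at infinity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (m * m - x1 - x2) % p
    return x3, (m * (x1 - x3) - y1) % p

def ec_mul(k, P, a, p):
    """Double-and-add: compute k*P in O(log k) group operations."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

G = (5, 1)  # generator of the 19-element group on this toy curve
```

A pairing evaluation costs many times more than one scalar multiplication, which is why pairing-free CP-ABE variants like the one described are attractive on constrained IoT hardware.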
Investigating and analyzing the current application of metadata standards to health science data can inform the selection of metadata standards for describing health science data in China and the construction of health science data platforms. Through a web-based survey, we examined the health science data management platforms listed in the Registry of Research Data Repositories (re3data), catalogued the metadata standards they apply, analyzed how representative standards are used on these platforms, and summarized their suitability for describing health science data. The health science data platforms in re3data use 14 metadata standards in total, among which DC, DataCite, DDI, and repository-built standards are the most widely used, and most platforms combine several standards. These standards fall into three categories, general-purpose, social-science, and repository-built, which are suited, respectively, to describing the general attributes of health science data; health science data produced by social science research; and specialized, distinctive, or government-released open health science data.
Poverty threatens human development, especially in developing countries, so ending poverty has become one of the most important United Nations Sustainable Development Goals (SDGs). This study explores China's progress in poverty reduction from 2016 to 2019 using time-series multi-source geospatial data and a deep learning model. Poverty reduction efficiency (PRE) is measured as the difference between the out-of-poverty rates (the probability of not being poor) of 2016 and 2019. The study shows that the probability of poverty in all regions of China exhibits an overall decreasing trend (PRE = 0.264), indicating significant progress in poverty reduction during this period. The Hu Huanyong Line (Hu Line) reveals an uneven geographical pattern of out-of-poverty rates between Southeast and Northwest China. From 2016 to 2019, the centroid of China's out-of-poverty rate moved 105.786 km to the northeast, while the standard deviation ellipse of the out-of-poverty rate rotated 3 degrees away from the Hu Line, indicating that regions with high out-of-poverty rates became more concentrated on the east side of the Hu Line. The results imply that the government's future poverty reduction policies should attend to infrastructure construction in poor areas and appropriately increase their population density. This study fills a gap in research on poverty reduction at multiple scales and provides useful implications for the government's poverty reduction policy.
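The centroid movement reported above is a weighted mean centre computed from regional rates; the shift between years is then just the distance between the two centroids. A sketch with invented coordinates and rates (real analyses work in a projected coordinate system rather than raw degrees):

```python
from math import hypot

def centroid(points, weights):
    """Weighted mean centre of regional values, e.g. out-of-poverty rates."""
    tw = sum(weights)
    x = sum(p[0] * w for p, w in zip(points, weights)) / tw
    y = sum(p[1] * w for p, w in zip(points, weights)) / tw
    return x, y

# hypothetical region coordinates (lon, lat) and rates for two years
coords = [(100.0, 30.0), (110.0, 35.0), (120.0, 40.0)]
c2016 = centroid(coords, [0.5, 0.6, 0.7])
c2019 = centroid(coords, [0.6, 0.8, 0.9])
shift = hypot(c2019[0] - c2016[0], c2019[1] - c2016[1])  # in degrees here
```

Because the 2019 rates rise fastest in the northeastern regions of this toy example, the centroid drifts northeast, mirroring the direction of the shift the study reports.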
Open networks and heterogeneous services in the Internet of Vehicles (IoV) can lead to security and privacy challenges. One key requirement for such systems is the preservation of user privacy, ensuring a seamless experience in driving, navigation, and communication. These privacy needs are influenced by various factors, such as data collected at different intervals, trip durations, and user interactions. To address this, the paper proposes a Support Vector Machine (SVM) model designed to process large amounts of aggregated data and recommend privacy-preserving measures. The model analyzes data based on user demands and interactions with service providers or neighboring infrastructure, aiming to minimize privacy risks while ensuring service continuity and sustainability. The SVM model helps validate the system's reliability by constructing a hyperplane that distinguishes between maximum and minimum privacy recommendations. The results demonstrate the effectiveness of the proposed SVM model in enhancing both privacy and service performance.
This article explores the design of a wireless fire alarm system supported by advanced data fusion technology. It discusses the basic design ideas of the system, its hardware design, its software design, and simulation analysis, all supported by data fusion technology. This analysis is intended to provide a reference for the rational application of data fusion technology to meet the actual design and application requirements of such systems.
We combine gradient data from the Macao Science Satellite-1 (MSS-1), CHAllenging Minisatellite Payload (CHAMP), Swarm-A, and Swarm-C satellites to develop a degree-110 lithospheric magnetic field model. We then comprehensively evaluate the model's performance through power spectral comparisons, correlation analyses, sensitivity matrix assessments, and comparisons with existing lithospheric field models. The results show that using near east-west gradient data from MSS-1 significantly enhances the model correlation in the spherical harmonic degree (N) range of 45-60 while also mitigating the decline in correlation at higher degrees (N > 60). Furthermore, the unique orbital characteristics of MSS-1 enable its gradient data to contribute substantially to modeling in the mid- to low-latitude regions. With continued data acquisition from MSS-1 and further optimization of data processing methods, the performance of the model is expected to improve.
Funding: Supported by the National Natural Science Foundation of China (No. 62002144) and the Ministry of Education Chunhui Plan Research Project (Nos. 202200345, HZKY20220125).
Abstract: Accurate geospatial data are essential for geographic information systems (GIS), environmental monitoring, and urban planning. The deep integration of the open Internet and geographic information technology has led to increasing challenges to the integrity and security of spatial data. In this paper, we treat abnormal spatial data as missing data and focus on abnormal spatial data recovery. Existing geospatial data recovery methods require complete datasets for training, resulting in time-consuming data recovery and a lack of generalization. To address these issues, we propose a GAIN-LSTM-based geospatial data recovery method (TGAIN), which comprises two main components: (1) it uses a long short-term memory (LSTM) recurrent neural network as the generator to analyze geospatial temporal data and capture its temporal correlation; (2) it constructs a complete TGAIN network using a cue-masked fusion matrix mechanism to obtain data that match the original distribution of the input data. Experimental results on two publicly accessible datasets demonstrate that our proposed TGAIN approach surpasses four contemporary and traditional models in terms of mean absolute error (MAE), root mean square error (RMSE), mean square error (MSE), mean absolute percentage error (MAPE), coefficient of determination (R²), and average computational time across various data missing rates. TGAIN also exhibits superior accuracy and robustness in data recovery compared with existing models, especially at high missing-data rates. Our model is of great significance for improving the integrity of geospatial data and provides data support for practical applications such as urban traffic optimization prediction and personal mobility analysis.
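The abstract's "cue-masked fusion matrix mechanism" follows the hint-matrix idea from GAIN-style imputation. As an illustration only, here is a minimal NumPy sketch of a hint matrix and mask-based fusion; the function names, the 0.9 hint rate, and the zero-output stand-in for the LSTM generator are assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def hint_matrix(mask, hint_rate=0.9, rng=rng):
    """Reveal the true mask at roughly `hint_rate` of entries; elsewhere use 0.5 ("unknown")."""
    b = (rng.random(mask.shape) < hint_rate).astype(float)
    return b * mask + 0.5 * (1.0 - b)

def fuse(x, x_imputed, mask):
    """Keep observed values, fill missing entries with the generator's output."""
    return mask * x + (1.0 - mask) * x_imputed

x = np.array([[1.0, 2.0], [3.0, 4.0]])
mask = np.array([[1.0, 0.0], [1.0, 1.0]])   # 0 marks a missing entry
x_hat = np.zeros_like(x)                     # stand-in for the LSTM generator output
print(fuse(x, x_hat, mask))                  # observed values survive; missing -> 0.0
```

In the full method, the discriminator receives the fused matrix together with the hint matrix and tries to tell observed entries from imputed ones.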
Funding: Supported by the National Science Foundation [grant numbers 1854502 and 1855902]. Publication was made possible in part by support from the HKU Libraries Open Access Author Fund sponsored by the HKU Libraries. USDA is an equal opportunity provider and employer. Mention of trade names or commercial products in this publication is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the U.S. Department of Agriculture.
Abstract: Morphological (e.g. shape, size, and height) and functional (e.g. working, living, and shopping) information about buildings is highly needed for urban planning and management as well as other applications such as city-scale building energy use modeling. Due to the limited availability of socio-economic geospatial data, it is more challenging to map building functions than building morphology, especially over large areas. In this study, we proposed an integrated framework to map building functions in 50 U.S. cities by integrating multi-source web-based geospatial data. First, a web crawler was developed to extract Points of Interest (POIs) from Tripadvisor.com, and a map crawler was developed to extract POIs and land use parcels from Google Maps. Second, an unsupervised machine learning algorithm, OneClassSVM, was used to identify residential buildings based on landscape features derived from Microsoft building footprints. Third, the type ratio of POIs and the area ratio of land use parcels were used to identify six non-residential functions (i.e. hospital, hotel, school, shop, restaurant, and office). The accuracy assessment indicates that the proposed framework performed well, with an average overall accuracy of 94% and a kappa coefficient of 0.63. Given the worldwide coverage of Google Maps and Tripadvisor.com, the proposed framework is transferable to other cities around the world. The data products generated from this study are of great use for quantitative city-scale urban studies, such as building energy use modeling at the single-building level over large areas.
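The "type ratio of POIs" step can be sketched as a majority-share rule: a building takes the non-residential function whose POI share dominates its surroundings. This is a hedged illustration; the function list comes from the abstract, but the 0.5 threshold and the fallback behavior are assumptions.

```python
from collections import Counter

FUNCTIONS = ["hospital", "hotel", "school", "shop", "restaurant", "office"]

def assign_function(poi_types, min_ratio=0.5):
    """Label a building with the POI type whose share exceeds `min_ratio`;
    return None when no type dominates (threshold value is illustrative)."""
    counts = Counter(t for t in poi_types if t in FUNCTIONS)
    total = sum(counts.values())
    if total == 0:
        return None
    top, n = counts.most_common(1)[0]
    return top if n / total >= min_ratio else None

print(assign_function(["hotel", "hotel", "restaurant"]))  # hotel (2/3 >= 0.5)
```

In the actual framework this rule would be combined with the land-use-parcel area ratio before a final label is assigned.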
Funding: This research was funded by the Science and Technology Project of State Grid Corporation of China under grant number 5200-202319382A-2-3-XG.
Abstract: Iced transmission line galloping poses a significant threat to the safety and reliability of power systems, leading directly to line tripping, disconnections, and power outages. Existing early warning methods for iced transmission line galloping suffer from issues such as reliance on a single data source, neglect of irregular time series, and lack of attention-based closed-loop feedback, resulting in high rates of missed and false alarms. To address these challenges, we propose an Internet of Things (IoT)-empowered early warning method for transmission line galloping that integrates time series data from optical fiber sensing and weather forecasts. Initially, the method applies a primary adaptive weighted fusion to the IoT-empowered optical fiber real-time sensing data and weather forecast data, followed by a secondary fusion based on a Back Propagation (BP) neural network, and uses the K-medoids algorithm to cluster the fused data. Furthermore, an adaptive irregular time series perception adjustment module is introduced into the traditional Gated Recurrent Unit (GRU) network, and closed-loop feedback based on an attention mechanism is employed to update network parameters through gradient feedback of the loss function, enabling closed-loop training and time series prediction by the GRU network model. Subsequently, considering the various types of prediction data and the duration of icing, an iced transmission line galloping risk coefficient is established, and warnings are categorized based on this coefficient. Finally, using an IoT-driven realistic dataset of iced transmission line galloping, the effectiveness of the proposed method is validated through multi-dimensional simulation scenarios.
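The paper does not spell out its "primary adaptive weighted fusion" here; a common choice for fusing two noisy sources is inverse-variance weighting, sketched below as an assumption about what such a step could look like (values are illustrative).

```python
def fuse_measurements(values, variances):
    """Adaptive weighted fusion: weight each source by the inverse of its variance,
    so the more reliable source dominates the fused estimate."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, values)) / total

# The fibre-sensing reading (variance 1.0) is trusted more than the forecast (variance 4.0).
fused = fuse_measurements([10.0, 14.0], [1.0, 4.0])
print(fused)  # 0.8 * 10 + 0.2 * 14 = 10.8
```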
Funding: Funded by the Erasmus+ ICM programme for 3-month and 5-month stays at Lund University, Lund, Sweden; the authors thank the European Union.
Abstract: Mangroves are woody plant communities that appear in tropical and subtropical regions, mainly in intertidal zones along coastlines. Despite their considerable benefits to humans and the surrounding environment, their existence is threatened by anthropogenic activities and natural drivers. Accordingly, it is vital to make efficient efforts to increase mangrove plantations by identifying suitable locations. These efforts are required to support conservation and plantation practices and lower the mortality rate of seedlings. Therefore, identifying ecologically potential areas for plantation practices is mandatory to ensure a higher success rate. This study aimed to identify suitable locations for mangrove plantations along the southern coastal frontiers of Hormozgan, Iran. To this end, we applied a hybrid Fuzzy-DEMATEL-ANP (FDANP) model as a Multi-Criteria Decision Making (MCDM) approach to determine the relative importance of different criteria, combined with geospatial and remote sensing data. Ten relevant environmental criteria, including meteorological, topographical, and geomorphological factors, were used in the modeling. The statistical evaluation demonstrated the high potential of the developed approach for identifying suitable locations. Based on the final results, 6.10% and 20.80% of the study area were classified as very-high and very-low suitability areas, respectively. The obtained values can elucidate the path for decision-makers and managers toward better conservation and plantation planning. Moreover, the use of charge-free remote sensing data allows cost-effective implementation of such an approach in other regions by interested researchers and governing organizations.
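The DEMATEL half of the FDANP model has a standard core computation: normalize the direct-influence matrix D and form the total-relation matrix T = D(I - D)^-1, from which criterion prominence (row sum + column sum) is read off. The sketch below shows that textbook step only; the 3x3 influence scores are invented for illustration and are not the study's criteria.

```python
import numpy as np

def dematel_total_relation(direct):
    """DEMATEL: normalize the direct-influence matrix and compute T = D (I - D)^-1."""
    a = np.asarray(direct, dtype=float)
    s = max(a.sum(axis=1).max(), a.sum(axis=0).max())  # common normalization factor
    d = a / s
    n = d.shape[0]
    return d @ np.linalg.inv(np.eye(n) - d)

direct = [[0, 3, 2], [1, 0, 2], [1, 2, 0]]   # illustrative pairwise influence scores
t = dematel_total_relation(direct)
prominence = t.sum(axis=1) + t.sum(axis=0)    # D + R: overall importance of each criterion
```

In FDANP the resulting relation structure then feeds the ANP supermatrix to produce the final criterion weights.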
Funding: Supported by the National Natural Science Foundation of China [grant number 52371359] and the Dalian Science and Technology Innovation Fund [grant number 2022JJ12GX015].
Abstract: Recognition of ship traffic patterns can provide insights into the rules of navigation, maneuvering, and collision avoidance for ships at sea. This is essential for ensuring safe navigation and improving navigational efficiency. With the popularization of the Automatic Identification System (AIS), numerous studies have utilized ship trajectories to identify maritime traffic patterns. However, current research focuses on the spatiotemporal behavioral feature clustering of ship trajectory points or segments while lacking consideration of the multiple factors that influence ship behavior, such as ship static and maritime geospatial features, resulting in insufficient precision in ship traffic pattern recognition. This study proposes a ship traffic pattern recognition method that considers multi-attribute trajectory similarity (STPMTS), accounting for ship static features, dynamic features, port geospatial features, and the semantic relationships between these features. First, a ship trajectory reconstruction method based on grid compression was introduced to eliminate redundant data and enhance the efficiency of trajectory similarity measurements. Subsequently, to quantify the degree of similarity of ship trajectories, a trajectory similarity measurement method is proposed that combines ship static and dynamic information with port geospatial features. Furthermore, hierarchical trajectory clustering was applied to the trajectory similarity matrix to divide trajectories into different clusters. The quality of the similarity measurement results was evaluated by a quality criterion to determine the optimal number of ship traffic patterns. Finally, the effectiveness of the proposed method was verified using actual port ship trajectory data from the Tianjin Port of China, spanning September to November 2016. Compared with other methods, the proposed method exhibits significant advantages in identifying traffic patterns of ships entering and leaving the port in terms of geometric features, dynamic features, and adherence to navigation rules. This study could serve as an inspiration for a comprehensive exploration of maritime transportation knowledge from multiple perspectives.
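The grid-compression step for trajectory reconstruction can be sketched simply: snap each AIS point to a grid cell and drop consecutive points that fall in the same cell. This is a minimal illustration, not the paper's implementation; the cell size and coordinates are assumptions.

```python
def grid_compress(track, cell=0.01):
    """Keep one AIS point per run of consecutive points sharing a grid cell;
    points are snapped to the nearest cell (cell size is illustrative)."""
    kept, last_cell = [], None
    for lon, lat in track:
        c = (round(lon / cell), round(lat / cell))
        if c != last_cell:
            kept.append((lon, lat))
            last_cell = c
    return kept

track = [(117.70, 38.98), (117.701, 38.981), (117.72, 38.99)]
print(grid_compress(track))  # the middle point shares a cell with the first and is dropped
```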
Funding: Supported by the National Natural Science Foundation of China (32370703), the CAMS Innovation Fund for Medical Sciences (CIFMS) (2022-I2M-1-021, 2021-I2M-1-061), and the Major Project of Guangzhou National Laboratory (GZNL2024A01015).
Abstract: Viral infectious diseases, characterized by their intricate nature and wide-ranging diversity, pose substantial challenges in the domain of data management. The vast volume of data generated by these diseases, spanning from molecular mechanisms within cells to large-scale epidemiological patterns, has surpassed the capabilities of traditional analytical methods. In the era of artificial intelligence (AI) and big data, there is an urgent need to optimize these analytical methods to more effectively handle and utilize the information. Despite the rapid accumulation of data associated with viral infections, the lack of a comprehensive framework for integrating, selecting, and analyzing these datasets has left numerous researchers uncertain about which data to select, how to access them, and how to utilize them most effectively in their research. This review endeavors to fill these gaps by exploring the multifaceted nature of viral infectious diseases and summarizing relevant data across multiple levels, from the molecular details of pathogens to broad epidemiological trends. The scope extends from the micro-scale to the macro-scale, encompassing pathogens, hosts, and vectors. In addition to data summarization, this review thoroughly investigates various dataset sources. It also traces the historical evolution of data collection in the field of viral infectious diseases, highlighting the progress achieved over time. Simultaneously, it evaluates the current limitations that impede data utilization. Furthermore, we propose strategies to surmount these challenges, focusing on the development and application of advanced computational techniques, AI-driven models, and enhanced data integration practices. By providing a comprehensive synthesis of existing knowledge, this review is designed to guide future research and contribute to more informed approaches in the surveillance, prevention, and control of viral infectious diseases, particularly within the context of the expanding big-data landscape.
Funding: Supported by the National Technology Extension Fund of Forestry, Forest Vegetation Carbon Storage Monitoring Technology Based on Watershed Algorithm ([2019]06), and the Fundamental Research Funds for the Central Universities (No. PTYX202107).
Abstract: Since the launch of the Google Earth Engine (GEE) cloud platform in 2010, it has been widely used, producing a wealth of valuable information. However, the potential of GEE for forest resource management has not been fully exploited. To extract dominant woody plant species, GEE combined Sentinel-1 (S1) and Sentinel-2 (S2) data with National Forest Resources Inventory (NFRI) and topographic data, resulting in a 10 m resolution multimodal geospatial dataset for subtropical forests in southeast China. Spectral and texture features, red-edge bands, and vegetation indices of the S1 and S2 data were computed. A hierarchical model obtained information on forest distribution and area and on the dominant woody plant species. The results suggest that combining S1 winter data with S2 yearly data enhances accuracy in forest distribution and area extraction compared with using either data source independently. Similarly, for dominant woody species recognition, using S1 winter data together with S2 data from all four seasons was accurate. Including terrain factors and removing spatial correlation from the NFRI sample points further improved the recognition accuracy. The optimal forest extraction achieved an overall accuracy (OA) of 97.4% and a map-level image classification efficacy (MICE) of 96.7%. OA and MICE were 83.6% and 80.7% for dominant species extraction, respectively. The high accuracy and efficacy values indicate that the hierarchical recognition model based on multimodal remote sensing data performed extremely well for extracting information about dominant woody plant species. Visualizing the results through the GEE application allows an intuitive display of forest and species distribution, offering significant convenience for forest resource monitoring.
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62371181 and the Changzhou Science and Technology International Cooperation Program under Grant CZ20230029; also supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2021R1A2B5B02087169) and under the framework of the international cooperation program managed by the National Research Foundation of Korea (2022K2A9A1A01098051).
Abstract: The Intelligent Internet of Things (IIoT) involves real-world things that communicate or interact with each other through networking technologies; data collected from these "things" are processed with intelligent approaches, such as Artificial Intelligence (AI) and machine learning, to make accurate decisions. Data science is the science of dealing with data and their relationships through intelligent approaches. Most state-of-the-art research focuses independently on either data science or the IIoT, rather than exploring their integration. To address this gap, this article provides a comprehensive survey of the advances in and integration of data science with the IIoT, classifying existing IoT-based data science techniques and summarizing their characteristics. The paper analyzes data science and big data security and privacy features, including network architecture, data protection, and continuous monitoring of data, which face challenges in various IoT-based systems. Extensive insights into IoT data security, privacy, and challenges are presented in the context of data science for IoT. In addition, this study reveals current opportunities to enhance data science and IoT market development. The current gaps and challenges in the integration of data science and IoT are comprehensively presented, followed by a future outlook and possible solutions.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 92044303).
Abstract: Air pollution in China covers a large area with complex sources and formation mechanisms, making it a unique place to conduct air pollution and atmospheric chemistry research. The National Natural Science Foundation of China's Major Research Plan entitled "Fundamental Researches on the Formation and Response Mechanism of the Air Pollution Complex in China" (the Plan) has funded 76 research projects to explore the causes of air pollution in China and the key atmospheric physics and atmospheric chemistry processes of air pollution. To summarize the abundant data from the Plan and extend its long-term impact domestically and internationally, an integration project is responsible for collecting the various types of data generated by the 76 projects of the Plan. This project has classified and integrated these data into eight categories containing 258 datasets and 15 technical reports in total. The integration project has led to the successful establishment of the China Air Pollution Data Center (CAPDC) platform, providing storage, retrieval, and download services for the eight categories. The platform has distinct features including data visualization, related project information querying, and bilingual services in both English and Chinese, which allow rapid searching and downloading of data and provide a solid foundation of data and support for future related research. Air pollution control in China, especially in the past decade, is undeniably a global exemplar, and this data center is the first in China to focus on research into the country's air pollution complex.
Funding: Supported by the National Natural Science Foundation of China (Grants 72474022, 71974011, 72174022, 71972012, 71874009) and the "BIT Think Tank" Promotion Plan of the Science and Technology Innovation Program of Beijing Institute of Technology (Grants 2024CX14017, 2023CX13029).
Abstract: As a new type of production factor in healthcare, healthcare data elements have been rapidly integrated into various health production processes, such as clinical assistance, health management, biological testing, and operation and supervision [1,2]. Healthcare data elements include biological and clinical data related to disease, environmental health data associated with life, and operational and healthcare management data related to healthcare activities (Figure 1). Activities such as the construction of a data value assessment system, the development of a data circulation and sharing platform, and the authorization of data compliance and operation products support the strong growth momentum of the market for healthcare data elements in China [3].
Funding: Supported by the National Natural Science Foundation of China (No. 42371446), the Natural Science Foundation of Hubei Province (No. 2024AFD412), and the Fundamental Research Funds for National Universities, China University of Geosciences (Wuhan) (No. 2024XLA17).
Abstract: In recent years, Volunteered Geographic Information (VGI) has emerged as a crucial source of mapping data, contributed by users through crowdsourcing platforms such as OpenStreetMap. This paper presents a novel approach that integrates Large Language Models (LLMs) into a fully automated mapping workflow utilizing VGI data. The process leverages prompt engineering, which involves designing and optimizing input instructions to ensure the LLM produces the desired mapping outputs. By constructing precise and detailed prompts, LLM agents are able to accurately interpret mapping requirements and autonomously extract, analyze, and process VGI geospatial data. They dynamically interact with mapping tools to automate the entire mapping process, from data acquisition to map generation. This approach significantly streamlines the creation of high-quality mapping outputs, reducing the time and resources typically required for such tasks. Moreover, the system lowers the barrier for non-expert users, enabling them to generate accurate maps without extensive technical expertise. Through various case studies, we demonstrate the application of LLMs across different mapping scenarios, highlighting their potential to enhance the efficiency, accuracy, and accessibility of map production. The results suggest that LLM-powered mapping systems can not only optimize VGI data processing but also expand the usability of ubiquitous mapping across diverse fields, including urban planning and infrastructure development.
Funding: Supported by the National Key R&D Program of China (No. 2023YFB2703700), the National Natural Science Foundation of China (Nos. U21A20465, 62302457, 62402444, 62172292), the Fundamental Research Funds of Zhejiang Sci-Tech University (Nos. 23222092-Y, 22222266-Y), the Program for Leading Innovative Research Team of Zhejiang Province (No. 2023R01001), the Zhejiang Provincial Natural Science Foundation of China (Nos. LQ24F020008, LQ24F020012), the Foundation of the State Key Laboratory of Public Big Data (No. [2022]417), and the "Pioneer" and "Leading Goose" R&D Program of Zhejiang (No. 2023C01119).
Abstract: As smart grid technology rapidly advances, the vast amount of user data collected by smart meters presents significant challenges in data security and privacy protection. Current research emphasizes data security and user privacy concerns within smart grids. However, existing methods struggle with efficiency and security when processing large-scale data. Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge. This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities. The approach optimizes data preprocessing, integrates Long Short-Term Memory (LSTM) networks for handling time-series data, and employs homomorphic encryption to safeguard user privacy. It also explores the application of Boneh-Lynn-Shacham (BLS) signatures for user authentication. The proposed scheme's efficiency, security, and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
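The privacy goal of such aggregation (the operator learns the total consumption, never an individual reading) can be illustrated without the paper's homomorphic encryption. The sketch below uses pairwise additive masking, a deliberately simpler stand-in technique: every pair of meters shares a random mask that one adds and the other subtracts, so the masks cancel in the sum. All names and parameters here are assumptions for illustration.

```python
import random

def masked_reports(readings, modulus=2**32, seed=42):
    """Perturb each meter's reading with pairwise masks that cancel in the sum,
    so the aggregator can recover only the total (stand-in for homomorphic encryption)."""
    rng = random.Random(seed)
    n = len(readings)
    reports = list(readings)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.randrange(modulus)
            reports[i] = (reports[i] + m) % modulus   # meter i adds the shared mask
            reports[j] = (reports[j] - m) % modulus   # meter j subtracts it
    return reports

readings = [120, 340, 75]
reports = masked_reports(readings)
total = sum(reports) % 2**32
print(total)  # 535: the masks cancel, yet each individual report looks random
```

A homomorphic scheme such as Paillier achieves the same aggregate-only property while also tolerating meter dropout, which plain masking does not.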
Funding: Supported by the National Natural Science Foundation of China (42250101) and the Macao Foundation.
Abstract: Earth's internal core and crustal magnetic fields, as measured by geomagnetic satellites such as MSS-1 (Macao Science Satellite-1) and Swarm, are vital for understanding core dynamics and tectonic evolution. To model these internal magnetic fields accurately, data selection based on specific criteria is often employed to minimize the influence of rapidly changing current systems in the ionosphere and magnetosphere. However, the quantitative impact of various data selection criteria on internal geomagnetic field modeling is not well understood. This study aims to address this issue and provide a reference for constructing and applying geomagnetic field models. First, we collect the latest MSS-1 and Swarm satellite magnetic data and summarize the data selection criteria widely used in geomagnetic field modeling. Second, we briefly describe the method used to co-estimate the core, crustal, and large-scale magnetospheric fields from satellite magnetic data. Finally, we conduct a series of field modeling experiments with different data selection criteria to quantitatively estimate their influence. Our numerical experiments confirm that without selecting data from dark regions and geomagnetically quiet times, the resulting internal field differences at the Earth's surface can range from tens to hundreds of nanotesla (nT). Additionally, we find that the uncertainties introduced into field models by different data selection criteria are significantly larger than the measurement accuracy of modern geomagnetic satellites. These uncertainties should be considered when using the constructed magnetic field models for scientific research and applications.
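A quiet-time, dark-region selection step of the kind the abstract describes is usually a record filter on indices such as Kp, the Dst rate of change, and solar elevation. The sketch below shows that pattern; the threshold values are typical of the literature, not the criteria used in this study.

```python
def select_quiet(records, kp_max=2.0, sun_elev_max=-10.0, dst_rate_max=2.0):
    """Keep records from dark regions (sun well below the horizon) during
    geomagnetically quiet times; thresholds are illustrative, not the paper's."""
    return [r for r in records
            if r["kp"] <= kp_max
            and r["sun_elevation"] <= sun_elev_max
            and abs(r["dst_rate"]) <= dst_rate_max]

records = [
    {"kp": 1.0, "sun_elevation": -30.0, "dst_rate": 0.5},   # quiet and dark -> kept
    {"kp": 4.0, "sun_elevation": -30.0, "dst_rate": 0.5},   # disturbed -> dropped
    {"kp": 1.0, "sun_elevation": 20.0, "dst_rate": 0.5},    # sunlit -> dropped
]
print(len(select_quiet(records)))  # 1
```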
Funding: Supported in part by NIH grants R01NS39600, U01MH114829, and RF1MH128693 (to GAA).
Abstract: Many fields, such as neuroscience, are experiencing a vast proliferation of cellular data, underscoring the need for organizing and interpreting large datasets. A popular approach partitions data into manageable subsets via hierarchical clustering, but objective methods to determine the appropriate classification granularity are missing. We recently introduced a technique to systematically identify when to stop subdividing clusters based on the fundamental principle that cells must differ more between than within clusters. Here we present the corresponding protocol to classify cellular datasets by combining data-driven unsupervised hierarchical clustering with statistical testing. These general-purpose functions are applicable to any cellular dataset that can be organized as two-dimensional matrices of numerical values, including molecular, physiological, and anatomical datasets. We demonstrate the protocol using cellular data from the Janelia MouseLight project to characterize morphological aspects of neurons.
Abstract: The accelerated advancement of the Internet of Things (IoT) has generated substantial data, including sensitive and private information. Consequently, it is imperative to guarantee the security of data sharing. While facilitating fine-grained access control, Ciphertext Policy Attribute-Based Encryption (CP-ABE) can effectively ensure the confidentiality of shared data. Nevertheless, the conventional centralized CP-ABE scheme is plagued by the issues of key misuse, key escrow, and heavy computation, which result in security risks. This paper proposes a lightweight IoT data security sharing scheme that integrates blockchain technology and CP-ABE to address the aforementioned issues. The integrity and traceability of shared data are guaranteed by using blockchain technology to store and verify access transactions. The encryption and decryption operations of the CP-ABE algorithm are implemented using elliptic curve scalar multiplication, to accommodate lightweight IoT devices, in place of the more computationally expensive bilinear pairings found in the traditional CP-ABE algorithm. Additionally, a portion of the computation is delegated to edge nodes to alleviate the computational burden on users. A distributed key management method is proposed to address the issues of key escrow and misuse. This method employs the edge blockchain to facilitate the storage and distribution of attribute private keys. Meanwhile, data security sharing is enhanced by combining off-chain and on-chain ciphertext storage. The security and performance analysis indicates that the proposed scheme is more efficient and secure.
Abstract: Investigating and analyzing the current application of metadata standards to health science data can inform the selection of metadata standards for describing health science data in China and the construction of health science data platforms. Health science data management platforms registered in the Registry of Research Data Repositories (re3data) were surveyed online; the metadata standards they apply were catalogued, the use of representative standards on the platforms was analyzed, and their suitability for describing health science data was summarized. The health science data platforms in re3data use 14 metadata standards in total, of which DC, DataCite, DDI, and repository-built standards are the most widely used, and most platforms combine several standards. The standards fall into three categories (general-purpose, social-science, and self-built), which are respectively suited to describing the general attributes of health science data, health science data produced by social science research, and specialized or government open health science data.
Funding: Supported by the National Key Research and Development Program of China [grant number 2019YFB2102903], the National Natural Science Foundation of China [grant number 41801306], the "CUG Scholar" Scientific Research Funds at China University of Geosciences (Wuhan) [grant number 2022034], and a grant from the State Key Laboratory of Resources and Environmental Information System.
Abstract: Poverty threatens human development, especially in developing countries, so ending poverty has become one of the most important United Nations Sustainable Development Goals (SDGs). This study aims to explore China's progress in poverty reduction from 2016 to 2019 through time-series multi-source geospatial data and a deep learning model. Poverty reduction efficiency (PRE) is measured by the difference between the out-of-poverty rates (the probability of not being poor) of 2016 and 2019. The study shows that the probability of poverty in all regions of China exhibited an overall decreasing trend (PRE = 0.264), indicating that progress in poverty reduction during this period was significant. The Hu Huanyong Line (Hu Line) shows an uneven geographical pattern of out-of-poverty rates between Southeast and Northwest China. From 2016 to 2019, the centroid of China's out-of-poverty rate moved 105.786 km to the northeast, while the standard deviational ellipse of the out-of-poverty rate moved 3 degrees away from the Hu Line, indicating that the regions with high out-of-poverty rates became more concentrated on the east side of the Hu Line over this period. The results imply that the government's future poverty reduction policies should pay attention to infrastructure construction in poor areas and appropriately increase the population density there. This study fills the gap in research on poverty reduction across multiple scales and provides useful implications for the government's poverty reduction policy.
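The centroid-shift analysis reduces to a weighted mean centre computed for each year and the distance between the two. The sketch below uses a planar approximation and invented coordinates and weights; the study works on geographic coordinates with projected distances.

```python
import math

def weighted_centroid(points, weights):
    """Weighted mean centre of a set of (x, y) locations."""
    w = sum(weights)
    x = sum(wi * p[0] for wi, p in zip(weights, points)) / w
    y = sum(wi * p[1] for wi, p in zip(weights, points)) / w
    return x, y

def centroid_shift(points, w_start, w_end):
    """Distance the weighted centroid moved between two weightings (planar)."""
    (x1, y1) = weighted_centroid(points, w_start)
    (x2, y2) = weighted_centroid(points, w_end)
    return math.hypot(x2 - x1, y2 - y1)

points = [(0.0, 0.0), (10.0, 0.0)]
print(centroid_shift(points, [1.0, 1.0], [1.0, 3.0]))  # centroid moves from x=5 to x=7.5
```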
Funding: Supported by the Deanship of Graduate Studies and Scientific Research at the University of Bisha through the Promising Program under grant number UB-Promising-33-1445.
Abstract: Open networks and heterogeneous services in the Internet of Vehicles (IoV) can lead to security and privacy challenges. One key requirement for such systems is the preservation of user privacy, ensuring a seamless experience in driving, navigation, and communication. These privacy needs are influenced by various factors, such as data collected at different intervals, trip durations, and user interactions. To address this, the paper proposes a Support Vector Machine (SVM) model designed to process large amounts of aggregated data and recommend privacy-preserving measures. The model analyzes data based on user demands and interactions with service providers or neighboring infrastructure. It aims to minimize privacy risks while ensuring service continuity and sustainability. The SVM model helps validate the system's reliability by creating a hyperplane that distinguishes between maximum and minimum privacy recommendations. The results demonstrate the effectiveness of the proposed SVM model in enhancing both privacy and service performance.
Funding: Chongqing Engineering University Undergraduate Innovation and Entrepreneurship Training Program Project: Wireless Fire Automatic Alarm System (Project No.: CXCY2024017); Chongqing Municipal Education Commission Science and Technology Research Project: Development and Research of Chongqing Wireless Fire Automatic Alarm System (Project No.: KJQN202401906).
Abstract: This article explores the design of a wireless fire alarm system supported by advanced data fusion technology. It includes discussions of the basic design ideas of the wireless fire alarm system, hardware design analysis, software design analysis, and simulation analysis, all supported by data fusion technology. This analysis is intended to provide a reference for the rational application of data fusion technology to meet the actual design and application requirements of such systems.
Funding: The authors acknowledge the support of the National Natural Science Foundation of China (Nos. 42250103, 41974073, and 41404053), the Macao Foundation, the pre-research project of Civil Aerospace Technologies (Nos. D020308 and D020303) funded by China's National Space Administration, and the Specialized Research Fund for State Key Laboratories.
Abstract: We combine gradient data from the Macao Science Satellite-1 (MSS-1), CHAllenging Minisatellite Payload (CHAMP), Swarm-A, and Swarm-C satellites to develop a degree-110 lithospheric magnetic field model. We then comprehensively evaluate the performance of the model through power spectral comparisons, correlation analyses, sensitivity matrix assessments, and comparisons with existing lithospheric field models. The results show that using near east-west gradient data from MSS-1 significantly enhances the model correlation in the spherical harmonic degree (N) range of 45-60 while also mitigating the decline in correlation at higher degrees (N > 60). Furthermore, the unique orbital characteristics of MSS-1 enable its gradient data to contribute substantially to modeling in the mid- to low-latitude regions. With continued data acquisition from MSS-1 and further optimization of data processing methods, the performance of the model is expected to improve.
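The two evaluation tools named in the abstract, power spectra and per-degree correlation, have compact standard definitions: the Lowes-Mauersberger spectrum R_n = (n+1) * sum_m (g_nm^2 + h_nm^2) at the surface, and the cosine similarity of the two models' degree-n coefficient vectors. The sketch below uses a toy coefficient layout (a dict keyed by degree), not a real model format.

```python
import numpy as np

def lowes_spectrum(coeffs):
    """Lowes-Mauersberger spectrum at the surface: R_n = (n+1) * sum of squared
    Gauss coefficients of degree n. `coeffs[n]` holds all degree-n coefficients."""
    return {n: (n + 1) * float(np.sum(np.square(c))) for n, c in coeffs.items()}

def degree_correlation(c1, c2):
    """Per-degree correlation between two models' coefficient vectors."""
    return {n: float(np.dot(c1[n], c2[n]) /
                     (np.linalg.norm(c1[n]) * np.linalg.norm(c2[n])))
            for n in c1}

# Toy two-degree "models": degree 45 coefficients are proportional, degree 46 identical.
c1 = {45: np.array([1.0, 2.0]), 46: np.array([0.5, -1.0])}
c2 = {45: np.array([2.0, 4.0]), 46: np.array([0.5, -1.0])}
corr = degree_correlation(c1, c2)
```

A correlation near 1 at degree N means the two models agree on the field structure at that wavelength, which is how the MSS-1 contribution over degrees 45-60 is quantified.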