Taking the Ming Tombs Forest Farm in Beijing as the research object, this research applied multi-source data fusion and GIS heat-map overlay analysis techniques and systematically collected bird observation point data from the Global Biodiversity Information Facility (GBIF), population distribution data from the Oak Ridge National Laboratory (ORNL) in the United States, as well as information on the composition of tree species in forest areas suitable for birds and the forest geographical information of the Ming Tombs Forest Farm, based on literature research and field investigations. Using GIS technology, spatial processing was carried out on the bird observation points and population distribution data to identify suitable bird-watching areas in different seasons. Then, according to the suitability value range, these areas were classified into different grades (from unsuitable to highly suitable). The research findings indicated that there was significant spatial heterogeneity in the bird-watching suitability of the Ming Tombs Forest Farm. The north side of the reservoir was generally a core area with high suitability in all seasons. The older broad-leaved mixed forests deep within the forest supported the overlapping co-existence of the ecological niches of various bird species, such as Zosterops simplex and Urocissa erythrorhyncha. In contrast, the coniferous pure forests and mixed forests at the shallow forest edge were more suitable for specialized species such as Carduelis sinica. The southern urban area and the core area of the mausoleums had relatively low suitability due to ecological fragmentation or human interference. Based on these results, this paper proposed a three-level protection framework of "core area conservation - buffer zone management - isolation zone construction" and a spatio-temporally coordinated human-bird co-existence strategy. It was also suggested that the human-bird co-existence space could be optimized through measures such as constructing sound and light buffer interfaces, restoring ecological corridors, and integrating cultural heritage elements. This research provides an operational technical approach and decision-making support for the scientific planning of bird-watching sites and the coordination of ecological protection and tourism development.
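For readers who want to reproduce the grading step, the suitability classification described above amounts to reclassifying a continuous suitability surface into discrete grades. The sketch below illustrates this with NumPy on synthetic data; the class breaks, labels, and grid are placeholders, not the values used in the study.

```python
import numpy as np

# Hypothetical continuous bird-watching suitability surface in [0, 1],
# e.g. a normalized overlay of observation density and habitat weights.
rng = np.random.default_rng(0)
suitability = rng.random((100, 100))

# Assumed class breaks and labels (placeholders, not the study's thresholds).
breaks = [0.25, 0.5, 0.75]
labels = ["unsuitable", "low", "moderate", "high"]

grades = np.digitize(suitability, breaks)  # 0..3 grade index per grid cell

for idx, name in enumerate(labels):
    share = (grades == idx).mean() * 100
    print(f"{name:>10s}: {share:5.1f}% of cells")
```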
Due to the development of cloud computing and machine learning, users can upload their data to the cloud for machine learning model training. However, dishonest clouds may infer user data, resulting in user data leakage. Previous schemes have achieved secure outsourced computing, but they suffer from low computational accuracy, difficulty in handling heterogeneous distributions of data from multiple sources, and high computational cost, which result in an extremely poor user experience and expensive cloud computing costs. To address the above problems, we propose a multi-precision, multi-sourced, and multi-key outsourcing neural network training scheme. First, we design a multi-precision functional encryption computation based on Euclidean division. Second, we design the outsourcing model training algorithm based on multi-precision functional encryption with multi-sourced heterogeneity. Finally, we conduct experiments on three datasets. The results indicate that our framework achieves an accuracy improvement of 6% to 30%. Additionally, it offers a memory space optimization of 1.0 × 2^24 times compared to the previous best approach.
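The "multi-precision functional encryption computation based on Euclidean division" suggests splitting a scaled value into bounded digits via repeated Euclidean division so that each digit can be processed (for example, encrypted) at limited precision and later recombined. The following sketch shows only that encoding idea in plain Python; the base, digit count, and fixed-point scale are illustrative assumptions and not the scheme's actual construction.

```python
def to_digits(value, base, n_digits):
    """Split a non-negative integer into base-`base` digits (least significant
    first) by repeated Euclidean division."""
    digits = []
    for _ in range(n_digits):
        value, remainder = divmod(value, base)
        digits.append(remainder)
    return digits

def from_digits(digits, base):
    return sum(d * base**i for i, d in enumerate(digits))

scale = 10**6    # assumed fixed-point scale for a real-valued model weight
base = 2**16     # assumed digit base (each digit stays within 16 bits)
weight = 0.3141592653
encoded = int(round(weight * scale))

digits = to_digits(encoded, base, n_digits=4)  # digits could be encrypted separately
assert from_digits(digits, base) == encoded
print(digits, from_digits(digits, base) / scale)
```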
The power Internet of Things (IoT) is a significant trend in technology and a requirement for national strategic development. With the deepening digital transformation of the power grid, China's power system has initially built a power IoT architecture comprising a perception layer, a network layer, and a platform application layer. However, owing to the structural complexity of the power system, the construction of the power IoT continues to face problems such as complex access management of massive heterogeneous equipment, diverse IoT protocol access methods, high concurrency of network communications, and weak data security protection. To address these issues, this study optimizes the existing architecture of the power IoT and designs an integrated management framework for the access of multi-source heterogeneous data in the power IoT, comprising cloud, pipe, edge, and terminal parts. It further reviews and analyzes the key technologies involved in the power IoT, such as unified management of the physical model, highly concurrent access, multi-protocol access, multi-source heterogeneous data storage management, and data security control, to provide a more flexible, efficient, secure, and easy-to-use solution for multi-source heterogeneous data access in the power IoT.
Iced transmission line galloping poses a significant threat to the safety and reliability of power systems, leading directly to line tripping, disconnections, and power outages. Existing early warning methods for iced transmission line galloping suffer from issues such as reliance on a single data source, neglect of irregular time series, and lack of attention-based closed-loop feedback, resulting in high rates of missed and false alarms. To address these challenges, we propose an Internet of Things (IoT) empowered early warning method for transmission line galloping that integrates time series data from optical fiber sensing and weather forecasts. Initially, the method applies a primary adaptive weighted fusion to the IoT-empowered optical fiber real-time sensing data and weather forecast data, followed by a secondary fusion based on a Back Propagation (BP) neural network, and uses the K-medoids algorithm for clustering the fused data. Furthermore, an adaptive irregular time series perception adjustment module is introduced into the traditional Gated Recurrent Unit (GRU) network, and closed-loop feedback based on an attention mechanism is employed to update network parameters through gradient feedback of the loss function, enabling closed-loop training and time series prediction with the GRU network model. Subsequently, considering the various types of prediction data and the duration of icing, an iced transmission line galloping risk coefficient is established, and warnings are categorized based on this coefficient. Finally, using an IoT-driven realistic dataset of iced transmission line galloping, the effectiveness of the proposed method is validated through multi-dimensional simulation scenarios.
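As an illustration of the primary fusion step, the snippet below fuses a noisy sensing series with a forecast series using inverse-variance weights computed over a sliding window. This is a common heuristic offered as a stand-in; the paper's actual adaptive weighting rule, window length, and noise levels are not specified in the abstract and are assumed here.

```python
import numpy as np

def adaptive_weighted_fusion(sensor, forecast, window=12):
    """Fuse two aligned series with inverse-variance weights over a sliding
    window (a common heuristic; the paper's exact weighting rule is not
    given in the abstract)."""
    sensor, forecast = np.asarray(sensor, float), np.asarray(forecast, float)
    fused = np.empty_like(sensor)
    for t in range(sensor.size):
        lo = max(0, t - window)
        var_s = np.var(sensor[lo:t + 1]) + 1e-6
        var_f = np.var(forecast[lo:t + 1]) + 1e-6
        w_s = (1.0 / var_s) / (1.0 / var_s + 1.0 / var_f)
        fused[t] = w_s * sensor[t] + (1.0 - w_s) * forecast[t]
    return fused

# Toy example: noisy optical-fiber readings versus a smoother forecast signal.
t = np.linspace(0, 10, 200)
truth = np.sin(t)
rng = np.random.default_rng(1)
sensor = truth + rng.normal(0, 0.3, t.size)
forecast = truth + rng.normal(0, 0.1, t.size)
print("fused MSE:", np.mean((adaptive_weighted_fusion(sensor, forecast) - truth) ** 2))
```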
Long runout landslides involve a massive amount of energy and can be extremely hazardous owing to their long movement distance, high mobility, and strong destructive power. Numerical methods have been widely used to predict landslide runout, but a fundamental problem remains: how to determine reliable numerical parameters. This study proposes a framework to predict the runout of potential landslides through multi-source data collaboration and numerical analysis of historical landslide events. Specifically, for the historical landslide cases, the landslide-induced seismic signal, geophysical surveys, and possible in-situ drone/phone videos (multi-source data collaboration) can validate the numerical results in terms of landslide dynamics and deposit features and help calibrate the numerical (rheological) parameters. Subsequently, the calibrated numerical parameters can be used to numerically predict the runout of potential landslides in regions with a geological setting similar to that of the recorded events. Application of the runout prediction approach to the 2020 Jiashanying landslide in Guizhou, China gives reasonable results in comparison to the field observations. The numerical parameters are determined from the multi-source data collaboration analysis of a historical case in the region (the 2019 Shuicheng landslide). The proposed framework for landslide runout prediction can be of great utility for landslide risk assessment and disaster reduction in mountainous regions worldwide.
Rock mass quality serves as a vital index for predicting the stability and safety status of rock tunnel faces. In tunneling practice, rock mass quality is often assessed via a combination of qualitative and quantitative parameters. However, due to harsh on-site construction conditions, it is rather difficult to obtain some of the evaluation parameters that are essential for rock mass quality prediction. In this study, a novel improved Swin Transformer is proposed to detect, segment, and quantify rock mass characteristic parameters such as water leakage, fractures, and weak interlayers. The site experiment results demonstrate that the improved Swin Transformer achieves optimal segmentation results, with accuracies of 92%, 81%, and 86% for water leakage, fractures, and weak interlayers, respectively. A multi-source rock tunnel face characteristic (RTFC) dataset that includes 11 parameters for predicting rock mass quality is established. Considering that incomplete evaluation parameters in this dataset limit predictive performance, a novel tree-augmented naive Bayesian network (BN) is proposed to address the challenge of the incomplete dataset, achieving a prediction accuracy of 88%. In comparison with other commonly used machine learning models, the proposed BN-based approach shows improved performance in predicting rock mass quality with the incomplete dataset. Utilizing the established BN, a further sensitivity analysis is conducted to quantitatively evaluate the importance of the various parameters; the results indicate that the rock strength and fracture parameters exert the most significant influence on rock mass quality.
Air pollution in China covers a large area with complex sources and formation mechanisms, making it a unique place to conduct air pollution and atmospheric chemistry research. The National Natural Science Foundation of China's Major Research Plan entitled "Fundamental Researches on the Formation and Response Mechanism of the Air Pollution Complex in China" (the Plan) has funded 76 research projects to explore the causes of air pollution in China and the key processes of air pollution in atmospheric physics and atmospheric chemistry. In order to summarize the abundant data from the Plan and exhibit the long-term impacts domestically and internationally, an integration project is responsible for collecting the various types of data generated by the 76 projects of the Plan. This project has classified and integrated these data, forming eight categories containing 258 datasets and 15 technical reports in total. The integration project has led to the successful establishment of the China Air Pollution Data Center (CAPDC) platform, providing storage, retrieval, and download services for the eight categories. The platform has distinct features including data visualization, related project information querying, and bilingual services in both English and Chinese, which allow for rapid searching and downloading of data and provide a solid foundation of data and support for future related research. Air pollution control in China, especially in the past decade, is undeniably a global exemplar, and this data center is the first in China to focus on research into the country's air pollution complex.
Viral infectious diseases, characterized by their intricate nature and wide-ranging diversity, pose substantial challenges in the domain of data management. The vast volume of data generated by these diseases, spanning from the molecular mechanisms within cells to large-scale epidemiological patterns, has surpassed the capabilities of traditional analytical methods. In the era of artificial intelligence (AI) and big data, there is an urgent necessity for the optimization of these analytical methods to more effectively handle and utilize the information. Despite the rapid accumulation of data associated with viral infections, the lack of a comprehensive framework for integrating, selecting, and analyzing these datasets has left numerous researchers uncertain about which data to select, how to access it, and how to utilize it most effectively in their research. This review endeavors to fill these gaps by exploring the multifaceted nature of viral infectious diseases and summarizing relevant data across multiple levels, from the molecular details of pathogens to broad epidemiological trends. The scope extends from the micro-scale to the macro-scale, encompassing pathogens, hosts, and vectors. In addition to data summarization, this review thoroughly investigates various dataset sources. It also traces the historical evolution of data collection in the field of viral infectious diseases, highlighting the progress achieved over time. Simultaneously, it evaluates the current limitations that impede data utilization. Furthermore, we propose strategies to surmount these challenges, focusing on the development and application of advanced computational techniques, AI-driven models, and enhanced data integration practices. By providing a comprehensive synthesis of existing knowledge, this review is designed to guide future research and contribute to more informed approaches in the surveillance, prevention, and control of viral infectious diseases, particularly within the context of the expanding big-data landscape.
Morphological (e.g., shape, size, and height) and functional (e.g., working, living, and shopping) information about buildings is highly needed for urban planning and management, as well as for other applications such as city-scale building energy use modeling. Due to the limited availability of socio-economic geospatial data, it is more challenging to map building functions than building morphological information, especially over large areas. In this study, we proposed an integrated framework to map building functions in 50 U.S. cities by integrating multi-source web-based geospatial data. First, a web crawler was developed to extract Points of Interest (POIs) from Tripadvisor.com, and a map crawler was developed to extract POIs and land use parcels from Google Maps. Second, an unsupervised machine learning algorithm named OneClassSVM was used to identify residential buildings based on landscape features derived from Microsoft building footprints. Third, the type ratio of POIs and the area ratio of land use parcels were used to identify six non-residential functions (i.e., hospital, hotel, school, shop, restaurant, and office). The accuracy assessment indicates that the proposed framework performed well, with an average overall accuracy of 94% and a kappa coefficient of 0.63. With the worldwide coverage of Google Maps and Tripadvisor.com, the proposed framework is transferable to other cities around the world. The data products generated from this study are of great use for quantitative city-scale urban studies, such as building energy use modeling at the single-building level over large areas.
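The residential-building step can be prototyped with scikit-learn's OneClassSVM, as sketched below on synthetic footprint features (area, perimeter, compactness). The feature set, model parameters, and data are assumptions for illustration, not those of the study.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

# Hypothetical landscape features per building footprint: area (m^2),
# perimeter (m), and compactness; not the study's actual feature set.
rng = np.random.default_rng(42)
residential = np.column_stack([rng.normal(150, 40, 500),
                               rng.normal(55, 10, 500),
                               rng.normal(0.75, 0.05, 500)])
candidates = np.column_stack([rng.normal(800, 300, 100),
                              rng.normal(140, 40, 100),
                              rng.normal(0.55, 0.10, 100)])

scaler = StandardScaler().fit(residential)
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(scaler.transform(residential))             # learn the "residential" support

pred = model.predict(scaler.transform(candidates))   # +1 inlier, -1 outlier
print(f"{(pred == 1).sum()} of {len(pred)} candidates look residential")
```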
The Intelligent Internet of Things (IIoT) involves real-world things that communicate or interact with each other through networking technologies by collecting data from these "things" and using intelligent approaches, such as Artificial Intelligence (AI) and machine learning, to make accurate decisions. Data science is the science of dealing with data and its relationships through intelligent approaches. Most state-of-the-art research focuses independently on either data science or IIoT, rather than exploring their integration. Therefore, to address this gap, this article provides a comprehensive survey of the advances in, and integration of, data science with the Intelligent IoT (IIoT) system by classifying the existing IoT-based data science techniques and presenting a summary of their various characteristics. The paper analyzes data science and big data security and privacy features, including network architecture, data protection, and continuous monitoring of data, which face challenges in various IoT-based systems. Extensive insights into IoT data security, privacy, and challenges are visualized in the context of data science for IoT. In addition, this study reveals current opportunities to enhance data science and IoT market development. The current gaps and challenges faced in the integration of data science and IoT are comprehensively presented, followed by the future outlook and possible solutions.
Near real-time maize phenology monitoring is crucial for field management, cropping system adjustments, and yield estimation. Most phenological monitoring methods are post-seasonal and rely heavily on high-frequency time-series data. These methods are not applicable on the unmanned aerial vehicle (UAV) platform due to the high cost of acquiring time-series UAV images and the shortage of UAV-based phenological monitoring methods. To address these challenges, we employed the Synthetic Minority Oversampling Technique (SMOTE) for sample augmentation, aiming to resolve the small-sample modelling problem. Moreover, we utilized enhanced "separation" and "compactness" feature selection methods to identify input features from multiple data sources. In this process, we incorporated dynamic multi-source data fusion strategies involving vegetation indices (VI), color indices (CI), and texture features (TF). A two-stage neural network that combines a Convolutional Neural Network (CNN) and a Long Short-Term Memory Network (LSTM) is proposed to identify maize phenological stages (including sowing, seedling, jointing, trumpet, tasseling, maturity, and harvesting) on UAV platforms. The results indicate that the dataset generated by SMOTE closely resembles the measured dataset. Among the dynamic data fusion strategies, the VI-TF combination proves to be the most effective, with the CI-TF and VI-CI combinations following behind. Notably, as more data sources are integrated, the model's demand for input features declines significantly. In particular, the CNN-LSTM model based on the fusion of three data sources exhibited remarkable reliability when validated on the three datasets. For Dataset 1 (Beijing Xiaotangshan, 2023: data from 12 UAV flight missions), the model achieved an overall accuracy (OA) of 86.53%. Additionally, its precision (Pre), recall (Rec), F1 score (F1), false acceptance rate (FAR), and false rejection rate (FRR) were 0.89, 0.89, 0.87, 0.11, and 0.11, respectively. The model also showed strong generalizability on Dataset 2 (Beijing Xiaotangshan, 2023: data from 6 UAV flight missions) and Dataset 3 (Beijing Xiaotangshan, 2022: data from 4 UAV flight missions), with OAs of 89.4% and 85%, respectively. Meanwhile, the model has a low demand for input features, requiring only 54.55% of all features (99 features). The findings of this study not only offer novel insights into near real-time crop phenology monitoring, but also provide technical support for agricultural field management and cropping system adaptation.
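The SMOTE augmentation step can be illustrated with the imbalanced-learn package, as below. The feature matrix, class sizes, and neighbor count are toy assumptions; the study's UAV-derived features and stage labels are not reproduced here.

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

# Toy stand-in for per-plot UAV features (e.g., VI/CI/TF values) with an
# under-represented phenological stage; shapes and class sizes are illustrative.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.2, 0.05, (120, 10)),   # majority stage
               rng.normal(0.6, 0.05, (15, 10))])   # minority stage
y = np.array([0] * 120 + [1] * 15)

X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print("before:", Counter(y), "after:", Counter(y_res))
```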
Precipitation types primarily include rainfall, snowfall, and sleet, and the transformation of precipitation types has significant impacts on regional climate, ecosystems, and the land-atmosphere system. This study employs the Ding method to separate precipitation types in three datasets (CMFD, ERA5_Land, and CN05.1). Using data from 26 meteorological observation stations in the Chinese Tianshan Mountains Region (CTMR) of China as the validation dataset, the precipitation type separation accuracy of the three datasets was evaluated. Additionally, the impacts of relative humidity, precipitation amount, and air temperature on the accuracy of precipitation type separation were analyzed. The results indicate that the CMFD dataset provides the highest separation accuracy, followed by CN05.1, with ERA5_Land showing the poorest performance. Spatial correlation analysis reveals that CMFD outperforms the other two datasets at both annual and monthly scales. Root Mean Square Error (RMSE) and Mean Deviation (MD) values suggest that CMFD is more consistent with the station observational data. The analysis further demonstrates that relative humidity and precipitation amount significantly affect separation accuracy. After bias correction, the correlation coefficients between CMFD, ERA5_Land, and the station observational data improved to 0.85-0.94, while the RMSE was controlled within 2 mm. The study also revealed that the overestimation of precipitation was positively correlated with the overestimation of rainfall days and negatively correlated with the overestimation of snowfall days, and that underestimated air temperatures led to an increase in the misclassification of snowfall days. This research provides a basis for selecting climate change datasets and managing water resources in alpine regions.
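For orientation, precipitation-type separation schemes of this kind assign rain, snow, or sleet from near-surface temperature and humidity. The snippet below is a deliberately simplified threshold sketch, not the calibrated Ding parameterization; the wet-bulb proxy and the thresholds are placeholders.

```python
import numpy as np

def classify_precip(t_air, rh, snow_max=0.0, rain_min=3.0):
    """Simplified precipitation-type separation: a crude wet-bulb-like proxy
    from air temperature and relative humidity, then fixed thresholds.
    The proxy and thresholds are placeholders, not the Ding scheme's
    calibrated, elevation-dependent parameters."""
    t_air, rh = np.asarray(t_air, float), np.asarray(rh, float)
    t_wet = t_air - (100.0 - rh) / 5.0 * 0.6  # drier air -> larger depression
    types = np.full(t_air.shape, "sleet", dtype=object)
    types[t_wet <= snow_max] = "snow"
    types[t_wet >= rain_min] = "rain"
    return types

print(classify_precip([-5.0, 1.5, 8.0], [90.0, 95.0, 60.0]))  # snow, sleet, rain
```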
As a new type of production factor in healthcare, healthcare data elements have been rapidly integrated into various health production processes, such as clinical assistance, health management, biological testing, and operation and supervision [1,2]. Healthcare data elements include biological and clinical data that are related to disease, environmental health data that are associated with life, and operational and healthcare management data that are related to healthcare activities (Figure 1). Activities such as the construction of a data value assessment system, the development of a data circulation and sharing platform, and the authorization of data compliance and operation products support the strong growth momentum of the market for healthcare data elements in China [3].
As smart grid technology rapidly advances, the vast amount of user data collected by smart meters presents significant challenges in data security and privacy protection. Current research emphasizes data security and user privacy concerns within smart grids. However, existing methods struggle with efficiency and security when processing large-scale data. Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge. This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities. The approach optimizes data preprocessing, integrates Long Short-Term Memory (LSTM) networks for handling time-series data, and employs homomorphic encryption to safeguard user privacy. It also explores the application of Boneh-Lynn-Shacham (BLS) signatures for user authentication. The proposed scheme's efficiency, security, and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
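The homomorphic-aggregation idea can be demonstrated with the python-paillier (phe) library, which provides additively homomorphic encryption. This is a minimal stand-in sketch; the paper's actual cryptosystem, key sizes, and aggregation protocol (including the BLS authentication step) are not reproduced here.

```python
from phe import paillier  # python-paillier: additively homomorphic encryption

# Minimal sketch of privacy-preserving aggregation: each meter encrypts its
# reading, the aggregator sums ciphertexts without seeing plaintexts, and only
# the key holder decrypts the total. Key size is reduced for speed in this demo.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

readings = [3.2, 1.7, 4.4, 2.9]                          # kWh from four meters
ciphertexts = [public_key.encrypt(r) for r in readings]

encrypted_total = sum(ciphertexts[1:], ciphertexts[0])   # homomorphic addition
print(private_key.decrypt(encrypted_total))              # ~12.2
```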
Earth's internal core and crustal magnetic fields, as measured by geomagnetic satellites like MSS-1 (Macao Science Satellite-1) and Swarm, are vital for understanding core dynamics and tectonic evolution. To model these internal magnetic fields accurately, data selection based on specific criteria is often employed to minimize the influence of rapidly changing current systems in the ionosphere and magnetosphere. However, the quantitative impact of various data selection criteria on internal geomagnetic field modeling is not well understood. This study aims to address this issue and provide a reference for constructing and applying geomagnetic field models. First, we collect the latest MSS-1 and Swarm satellite magnetic data and summarize widely used data selection criteria in geomagnetic field modeling. Second, we briefly describe the method to co-estimate the core, crustal, and large-scale magnetospheric fields using satellite magnetic data. Finally, we conduct a series of field modeling experiments with different data selection criteria to quantitatively estimate their influence. Our numerical experiments confirm that without selecting data from dark regions and geomagnetically quiet times, the resulting internal field differences at the Earth's surface can range from tens to hundreds of nanotesla (nT). Additionally, we find that the uncertainties introduced into field models by different data selection criteria are significantly larger than the measurement accuracy of modern geomagnetic satellites. These uncertainties should be considered when utilizing constructed magnetic field models for scientific research and applications.
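A minimal version of the data selection step, filtering records to dark regions and geomagnetically quiet times, is sketched below with pandas. The column names and thresholds (Kp and solar zenith angle cut-offs) are illustrative assumptions rather than the specific criteria evaluated in the study.

```python
import pandas as pd

# Toy satellite records; column names and thresholds are illustrative
# stand-ins for typical "dark region, geomagnetically quiet time" criteria.
records = pd.DataFrame({
    "kp": [1.0, 2.3, 4.0, 0.7, 3.3],             # geomagnetic activity index
    "sun_zenith_deg": [120, 95, 130, 118, 80],   # > 100 deg taken as "dark" here
    "local_time_h": [1.2, 23.5, 2.0, 3.4, 14.0],
    "b_scalar_nt": [48211.0, 47980.5, 50102.3, 49233.1, 46877.9],
})

quiet_and_dark = records[(records["kp"] <= 2.0) &
                         (records["sun_zenith_deg"] >= 100.0)]
print(f"kept {len(quiet_and_dark)} of {len(records)} records")
```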
Many fields, such as neuroscience, are experiencing a vast proliferation of cellular data, underscoring the need for organizing and interpreting large datasets. A popular approach partitions data into manageable subsets via hierarchical clustering, but objective methods to determine the appropriate classification granularity are missing. We recently introduced a technique to systematically identify when to stop subdividing clusters based on the fundamental principle that cells must differ more between than within clusters. Here we present the corresponding protocol to classify cellular datasets by combining data-driven unsupervised hierarchical clustering with statistical testing. These general-purpose functions are applicable to any cellular dataset that can be organized as two-dimensional matrices of numerical values, including molecular, physiological, and anatomical datasets. We demonstrate the protocol using cellular data from the Janelia MouseLight project to characterize morphological aspects of neurons.
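A minimal sketch of the clustering-plus-testing idea is given below: Ward hierarchical clustering on a cells-by-features matrix, followed by a nonparametric test that a candidate split separates cells more between than within clusters. The specific test, threshold, and toy data are stand-ins, not the protocol's published functions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform
from scipy.stats import mannwhitneyu

# Two-dimensional matrix of cells x features (toy data, three loose groups).
rng = np.random.default_rng(3)
cells = np.vstack([rng.normal(m, 0.5, (40, 5)) for m in (0.0, 2.0, 4.0)])

Z = linkage(cells, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")   # candidate two-way split

# Illustrative stop rule: accept the split only if between-cluster distances
# clearly exceed within-cluster distances (a stand-in for the paper's test).
D = squareform(pdist(cells))
same = labels[:, None] == labels[None, :]
off_diag = ~np.eye(len(cells), dtype=bool)
within = D[same & off_diag]
between = D[~same]
stat, p = mannwhitneyu(between, within, alternative="greater")
print("accept split" if p < 0.01 else "stop subdividing", f"(p={p:.3g})")
```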
The accelerated advancement of the Internet of Things (IoT) has generated substantial data, including sensitive and private information. Consequently, it is imperative to guarantee the security of data sharing. While facilitating fine-grained access control, Ciphertext-Policy Attribute-Based Encryption (CP-ABE) can effectively ensure the confidentiality of shared data. Nevertheless, the conventional centralized CP-ABE scheme is plagued by the issues of key misuse, key escrow, and heavy computation, which result in security risks. This paper proposes a lightweight IoT data security sharing scheme that integrates blockchain technology and CP-ABE to address the above issues. The integrity and traceability of shared data are guaranteed by using blockchain technology to store and verify access transactions. The encryption and decryption operations of the CP-ABE algorithm are implemented using elliptic curve scalar multiplication to accommodate lightweight IoT devices, as opposed to the more computationally expensive bilinear pairing found in the traditional CP-ABE algorithm. Additionally, a portion of the computation is delegated to edge nodes to alleviate the computational burden on users. A distributed key management method is proposed to address the issues of key escrow and misuse. This method employs the edge blockchain to facilitate the storage and distribution of attribute private keys. Meanwhile, data security sharing is enhanced by combining off-chain and on-chain ciphertext storage. The security and performance analysis indicates that the proposed scheme is more efficient and secure.
Surveying and analyzing how metadata standards are currently applied to health science data can inform the selection of metadata standards for describing health science data in China and the construction of health science data platforms. Using a web-based survey, this study examined the health science data management platforms registered in the Registry of Research Data Repositories (re3data), catalogued the metadata standards they apply, analyzed how representative standards are used on these platforms, and summarized their suitability for describing health science data. The health science data platforms in re3data use a total of 14 metadata standards, of which DC, DataCite, DDI, and repository-built standards are the most widely used, and most platforms combine several standards. These standards fall into three categories (general-purpose, social science, and repository-built), which are suited, respectively, to describing the general attributes of health science data, health science data produced by social science research, and distinctive, highly specialized, or government open health science data.
Poverty threatens human development, especially in developing countries, so ending poverty has become one of the most important United Nations Sustainable Development Goals (SDGs). This study aims to explore China's progress in poverty reduction from 2016 to 2019 through time-series multi-source geospatial data and a deep learning model. The poverty reduction efficiency (PRE) is measured by the difference in the out-of-poverty rates (which measure the probability of being not poor) of 2016 and 2019. The study shows that the probability of poverty in all regions of China has shown an overall decreasing trend (PRE = 0.264), which indicates that the progress in poverty reduction during this period is significant. The Hu Huanyong Line (Hu Line) shows an uneven geographical pattern of out-of-poverty rates between Southeast and Northwest China. From 2016 to 2019, the centroid of China's out-of-poverty rate moved 105.786 km to the northeast, while the standard deviation ellipse of the out-of-poverty rate moved 3 degrees away from the Hu Line, indicating that the regions with high out-of-poverty rates became more concentrated on the east side of the Hu Line from 2016 to 2019. The results imply that the government's future poverty reduction policies should pay attention to infrastructure construction in poor areas and appropriately increase the population density in poor areas. This study fills the gap in research on poverty reduction at multiple scales and provides useful implications for the government's poverty reduction policy.
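The centroid and standard deviation ellipse statistics used to describe the shift relative to the Hu Line can be computed as below with NumPy. The construction follows the usual weighted principal-axis formulas; the coordinates and rates are synthetic, and conventions may differ slightly from the GIS implementation used in the study.

```python
import numpy as np

def weighted_centroid_and_sde(x, y, w):
    """Weighted mean centre and standard-deviation-ellipse parameters
    (orientation and axis standard deviations) for a spatial attribute
    such as an out-of-poverty rate, via the weighted principal axes."""
    x, y, w = map(np.asarray, (x, y, w))
    cx, cy = np.average(x, weights=w), np.average(y, weights=w)
    dx, dy = x - cx, y - cy
    # Orientation of the major axis from the weighted covariance terms.
    theta = 0.5 * np.arctan2(2 * np.sum(w * dx * dy), np.sum(w * (dx**2 - dy**2)))
    # Standard deviations along the rotated axes.
    sx = np.sqrt(np.sum(w * (dx * np.cos(theta) + dy * np.sin(theta))**2) / np.sum(w))
    sy = np.sqrt(np.sum(w * (-dx * np.sin(theta) + dy * np.cos(theta))**2) / np.sum(w))
    return (cx, cy), np.degrees(theta), (sx, sy)

rng = np.random.default_rng(7)
lon, lat = rng.normal(110, 5, 300), rng.normal(32, 3, 300)
rate = rng.random(300)                    # toy out-of-poverty rates as weights
print(weighted_centroid_and_sde(lon, lat, rate))
```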
Open networks and heterogeneous services in the Internet of Vehicles (IoV) can lead to security and privacy challenges. One key requirement for such systems is the preservation of user privacy, ensuring a seamless experience in driving, navigation, and communication. These privacy needs are influenced by various factors, such as data collected at different intervals, trip durations, and user interactions. To address this, the paper proposes a Support Vector Machine (SVM) model designed to process large amounts of aggregated data and recommend privacy-preserving measures. The model analyzes data based on user demands and interactions with service providers or neighboring infrastructure. It aims to minimize privacy risks while ensuring service continuity and sustainability. The SVM model helps validate the system's reliability by creating a hyperplane that distinguishes between maximum and minimum privacy recommendations. The results demonstrate the effectiveness of the proposed SVM model in enhancing both privacy and service performance.
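As a schematic of the SVM recommendation step, the snippet below trains a scikit-learn SVC to separate "maximum privacy" from "minimum privacy" cases on toy IoV features (sampling interval, trip duration, interaction count). The features and labelling rule are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy IoV interaction records: sampling interval (s), trip duration (min),
# interactions per trip; label 1 = recommend maximum privacy, 0 = minimum.
rng = np.random.default_rng(5)
X = np.column_stack([rng.uniform(1, 60, 600),
                     rng.uniform(5, 120, 600),
                     rng.integers(1, 50, 600)])
y = ((X[:, 0] < 15) & (X[:, 2] > 20)).astype(int)   # frequent, chatty trips

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```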
Funding: Sponsored by the Beijing Youth Innovation Talent Support Program for Urban Greening and Landscaping, the 2024 Special Project for Promoting High-Quality Development of Beijing's Landscaping through Scientific and Technological Innovation (KJCXQT202410).
Funding: Supported by the Natural Science Foundation of China (Nos. 62303126, 62362008; author Z.Z.; https://www.nsfc.gov.cn/, accessed on 20 December 2024), the Major Scientific and Technological Special Project of Guizhou Province ([2024]014), the Guizhou Provincial Science and Technology Projects (No. ZK[2022]General149; author Z.Z.; https://kjt.guizhou.gov.cn/, accessed on 20 December 2024), the Open Project of the Key Laboratory of Computing Power Network and Information Security, Ministry of Education, under Grant 2023ZD037 (author Z.Z.; https://www.gzu.edu.cn/, accessed on 20 December 2024), and the Open Research Project of the State Key Laboratory of Industrial Control Technology, Zhejiang University, China (No. ICT2024B25; author Z.Z.; https://www.gzu.edu.cn/, accessed on 20 December 2024).
Funding: Supported by the National Key Research and Development Program of China (grant number 2019YFE0123600).
Funding: This research was funded by the Science and Technology Project of State Grid Corporation of China under grant number 5200-202319382A-2-3-XG.
Funding: Supported by the National Natural Science Foundation of China (41977215).
Funding: Supported by the National Natural Science Foundation of China (Nos. 52279107 and 52379106), the Qingdao Guoxin Jiaozhou Bay Second Submarine Tunnel Co., Ltd., the Academician and Expert Workstation of Yunnan Province (No. 202205AF150015), and the Science and Technology Innovation Project of YCIC Group Co., Ltd. (No. YCIC-YF-2022-15).
Funding: Supported by the National Natural Science Foundation of China (Grant No. 92044303).
Funding: Supported by the National Natural Science Foundation of China (32370703), the CAMS Innovation Fund for Medical Sciences (CIFMS) (2022-I2M-1-021, 2021-I2M-1-061), and the Major Project of Guangzhou National Laboratory (GZNL2024A01015).
Funding: Supported by the National Science Foundation [grant numbers 1854502 and 1855902]. Publication was made possible in part by support from the HKU Libraries Open Access Author Fund sponsored by the HKU Libraries. USDA is an equal opportunity provider and employer. Mention of trade names or commercial products in this publication is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the U.S. Department of Agriculture.
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62371181 and in part by the Changzhou Science and Technology International Cooperation Program under Grant CZ20230029; also supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2021R1A2B5B02087169) and under the framework of the international cooperation program managed by the National Research Foundation of Korea (2022K2A9A1A01098051).
Funding: Supported by grants from the National Key Research and Development Program of China (2022YFD2001103) and the National Natural Science Foundation of China (42371373).
Funding: Financial support from the National Natural Science Foundation of China (42261026 and 42161025) and the Open Foundation of the Xinjiang Key Laboratory of Water Cycle and Utilization in Arid Zone (XJYS0907-2023-01).
基金supported by the National Natural Science Foundation of China (Grants 72474022, 71974011, 72174022, 71972012, 71874009) and the "BIT think tank" Promotion Plan of the Science and Technology Innovation Program of Beijing Institute of Technology (Grants 2024CX14017, 2023CX13029).
文摘As a new type of production factor in healthcare, healthcare data elements have been rapidly integrated into various health production processes, such as clinical assistance, health management, biological testing, and operation and supervision[1,2]. Healthcare data elements include biological and clinical data related to disease, environmental health data associated with life, and operational and healthcare management data related to healthcare activities (Figure 1). Activities such as the construction of a data value assessment system, the development of a data circulation and sharing platform, and the authorization of data compliance and operation products support the strong growth momentum of the market for healthcare data elements in China[3].
基金supported by the National Key R&D Program of China (No. 2023YFB2703700), the National Natural Science Foundation of China (Nos. U21A20465, 62302457, 62402444, 62172292), the Fundamental Research Funds of Zhejiang Sci-Tech University (Nos. 23222092-Y, 22222266-Y), the Program for Leading Innovative Research Team of Zhejiang Province (No. 2023R01001), the Zhejiang Provincial Natural Science Foundation of China (Nos. LQ24F020008, LQ24F020012), the Foundation of the State Key Laboratory of Public Big Data (No. [2022]417), and the "Pioneer" and "Leading Goose" R&D Program of Zhejiang (No. 2023C01119).
文摘As smart grid technology rapidly advances, the vast amount of user data collected by smart meters presents significant challenges in data security and privacy protection. Current research emphasizes data security and user privacy concerns within smart grids. However, existing methods struggle with efficiency and security when processing large-scale data. Balancing efficient data processing with stringent privacy protection during data aggregation in smart grids remains an urgent challenge. This paper proposes an AI-based multi-type data aggregation method designed to enhance aggregation efficiency and security by standardizing and normalizing various data modalities. The approach optimizes data preprocessing, integrates Long Short-Term Memory (LSTM) networks for handling time-series data, and employs homomorphic encryption to safeguard user privacy. It also explores the application of Boneh-Lynn-Shacham (BLS) signatures for user authentication. The proposed scheme's efficiency, security, and privacy protection capabilities are validated through rigorous security proofs and experimental analysis.
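The abstract does not name the homomorphic scheme; the sketch below uses the additively homomorphic Paillier cryptosystem (via the python-paillier package, assumed available) as a stand-in to show how ciphertext aggregation keeps individual meter readings hidden from the aggregator.

```python
# Illustrative sketch of privacy-preserving meter aggregation with an
# additively homomorphic cryptosystem. The paper does not specify the
# scheme; Paillier (python-paillier, "phe") is used here as a stand-in.
from phe import paillier

pub, priv = paillier.generate_paillier_keypair(n_length=1024)

# Each smart meter encrypts its reading before sending it to the aggregator.
readings_kwh = [3.2, 0.7, 5.1, 2.4]
ciphertexts = [pub.encrypt(r) for r in readings_kwh]

# The aggregator sums ciphertexts without learning any individual reading.
encrypted_total = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_total = encrypted_total + c

# Only the control centre, holding the private key, decrypts the total.
print(priv.decrypt(encrypted_total))   # ~11.4
```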
基金supported by the National Natural Science Foundation of China (42250101) and the Macao Foundation.
文摘Earth’s internal core and crustal magnetic fields, as measured by geomagnetic satellites like MSS-1 (Macao Science Satellite-1) and Swarm, are vital for understanding core dynamics and tectonic evolution. To model these internal magnetic fields accurately, data selection based on specific criteria is often employed to minimize the influence of rapidly changing current systems in the ionosphere and magnetosphere. However, the quantitative impact of various data selection criteria on internal geomagnetic field modeling is not well understood. This study aims to address this issue and provide a reference for constructing and applying geomagnetic field models. First, we collect the latest MSS-1 and Swarm satellite magnetic data and summarize widely used data selection criteria in geomagnetic field modeling. Second, we briefly describe the method to co-estimate the core, crustal, and large-scale magnetospheric fields using satellite magnetic data. Finally, we conduct a series of field modeling experiments with different data selection criteria to quantitatively estimate their influence. Our numerical experiments confirm that without selecting data from dark regions and geomagnetically quiet times, the resulting internal field differences at the Earth’s surface can range from tens to hundreds of nanotesla (nT). Additionally, we find that the uncertainties introduced into field models by different data selection criteria are significantly larger than the measurement accuracy of modern geomagnetic satellites. These uncertainties should be considered when utilizing constructed magnetic field models for scientific research and applications.
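A hedged sketch of what such selection criteria look like in practice is given below: records are kept only when geomagnetic activity is low and the satellite is over the dark hemisphere. The column names and threshold values are typical illustrative choices, not necessarily those used in this study.

```python
# Sketch of typical quiet-time, dark-region data selection for internal
# field modelling. Column names and thresholds are illustrative only.
import pandas as pd

records = pd.DataFrame({
    "kp": [1.0, 3.3, 0.7, 2.0],              # geomagnetic activity index
    "sun_elevation_deg": [-15, -20, 5, -12],  # sun below horizon => dark region
    "dDst_dt_nT_per_h": [1.2, 0.5, 0.8, 4.0], # magnetospheric ring-current change
    "B_nT": [48210.3, 47995.1, 50102.7, 49512.4],
})

quiet_and_dark = records[
    (records["kp"] <= 2.0)
    & (records["sun_elevation_deg"] <= -10.0)
    & (records["dDst_dt_nT_per_h"].abs() <= 2.0)
]
print(f"kept {len(quiet_and_dark)} of {len(records)} records")
```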
基金supported in part by NIH grants R01NS39600, U01MH114829, and RF1MH128693 (to GAA).
文摘Many fields, such as neuroscience, are experiencing a vast proliferation of cellular data, underscoring the need for organizing and interpreting large datasets. A popular approach partitions data into manageable subsets via hierarchical clustering, but objective methods to determine the appropriate classification granularity are missing. We recently introduced a technique to systematically identify when to stop subdividing clusters based on the fundamental principle that cells must differ more between than within clusters. Here we present the corresponding protocol to classify cellular datasets by combining data-driven unsupervised hierarchical clustering with statistical testing. These general-purpose functions are applicable to any cellular dataset that can be organized as two-dimensional matrices of numerical values, including molecular, physiological, and anatomical datasets. We demonstrate the protocol using cellular data from the Janelia MouseLight project to characterize morphological aspects of neurons.
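The stopping principle, that a split is kept only when cells differ more between the candidate clusters than within them, can be sketched as follows; the specific statistical test and significance threshold are assumptions for illustration, not the published protocol.

```python
# Minimal sketch of the stopping principle: keep a split only if cells
# differ more between the two candidate clusters than within them.
# The test choice and threshold are assumptions, not the published protocol.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
cells = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(3, 1, (30, 5))])

dists = squareform(pdist(cells))
labels = fcluster(linkage(cells, method="ward"), t=2, criterion="maxclust")

same = np.equal.outer(labels, labels)
within = dists[same & ~np.eye(len(cells), dtype=bool)]
between = dists[~same]

stat, p = mannwhitneyu(between, within, alternative="greater")
print("accept split" if p < 0.05 else "stop subdividing", f"(p={p:.3g})")
```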
文摘The accelerated advancement of the Internet of Things (IoT) has generated substantial data, including sensitive and private information. Consequently, it is imperative to guarantee the security of data sharing. While facilitating fine-grained access control, Ciphertext Policy Attribute-Based Encryption (CP-ABE) can effectively ensure the confidentiality of shared data. Nevertheless, the conventional centralized CP-ABE scheme is plagued by the issues of key misuse, key escrow, and heavy computation, which result in security risks. This paper proposes a lightweight IoT data security sharing scheme that integrates blockchain technology and CP-ABE to address the above issues. The integrity and traceability of shared data are guaranteed by using blockchain technology to store and verify access transactions. The encryption and decryption operations of the CP-ABE algorithm are implemented with elliptic curve scalar multiplication to accommodate lightweight IoT devices, in place of the more computationally expensive bilinear pairing operations found in the traditional CP-ABE algorithm. Additionally, a portion of the computation is delegated to the edge nodes to alleviate the computational burden on users. A distributed key management method is proposed to address the issues of key escrow and misuse. This method employs the edge blockchain to facilitate the storage and distribution of attribute private keys. Meanwhile, data security sharing is enhanced by combining off-chain and on-chain ciphertext storage. The security and performance analysis indicates that the proposed scheme is more efficient and secure.
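To illustrate why elliptic curve scalar multiplication is considered lightweight relative to pairings, the self-contained sketch below implements double-and-add scalar multiplication on a toy curve and checks the commutativity property that key-agreement and encryption steps build on; the curve parameters and secrets are illustrative only and bear no relation to the paper's concrete construction.

```python
# Self-contained toy illustration of elliptic-curve scalar multiplication,
# the primitive the scheme substitutes for bilinear pairings. The tiny
# curve y^2 = x^3 + 2x + 3 over F_97 is for demonstration only.
P_MOD, A = 97, 2                 # curve y^2 = x^3 + A*x + 3 (mod P_MOD)
INF = None                       # point at infinity

def add(p, q):
    """Add two points on the toy curve."""
    if p is INF: return q
    if q is INF: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF
    if p == q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow((x2 - x1) % P_MOD, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, p):
    """Double-and-add scalar multiplication k*p."""
    result, addend = INF, p
    while k:
        if k & 1:
            result = add(result, addend)
        addend = add(addend, addend)
        k >>= 1
    return result

G = (3, 6)        # a point on the curve (3^3 + 2*3 + 3 = 36 = 6^2 mod 97)
a, b = 13, 29     # illustrative user-side and edge-side secrets
# Both orders of multiplication yield the same shared point, the property
# lightweight key agreement and encryption rely on.
assert scalar_mult(a, scalar_mult(b, G)) == scalar_mult(b, scalar_mult(a, G))
print(scalar_mult(a, scalar_mult(b, G)))
```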
文摘Surveying and analyzing the current application of metadata standards to health science data helps inform the selection of metadata standards for describing health science data in China and the construction of health science data platforms. Using a web-based survey method, the health science data management platforms registered in the Registry of Research Data Repositories (re3data) were investigated, the metadata standards they apply were catalogued, the use of typical metadata standards on these platforms was analyzed, and their suitability for describing health science data was summarized. The health science data platforms in re3data use a total of 14 metadata standards, among which DC, DataCite, DDI, and repository-built metadata standards are the most widely used, and most platforms combine several metadata standards. These standards fall into three categories, general-purpose, social-science, and self-built, which are suited respectively to describing the generic attributes of health science data, health science data produced by social science research, and highly specialized, distinctive, or government open health science data.
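To make the difference between the general-purpose standards concrete, the hypothetical records below describe one invented health science dataset with a minimal Dublin Core (DC) element set and with DataCite's mandatory properties; only the element names follow the standards, and all values are fabricated for illustration.

```python
# Hypothetical example: one health science dataset described with a minimal
# Dublin Core element set and with DataCite's mandatory properties.
# Record values are invented; only the element names follow the standards.
dublin_core_record = {
    "title": "Regional diabetes screening cohort, 2020-2023",
    "creator": "Example Public Health Institute",
    "subject": "diabetes; screening; cohort study",
    "description": "De-identified screening results from a regional cohort.",
    "date": "2024-05-01",
    "type": "Dataset",
    "identifier": "https://doi.org/10.0000/example.2024.001",
    "rights": "CC BY 4.0",
}

datacite_record = {
    "Identifier": "10.0000/example.2024.001",      # DOI
    "Creator": "Example Public Health Institute",
    "Title": "Regional diabetes screening cohort, 2020-2023",
    "Publisher": "Example Health Data Repository",
    "PublicationYear": "2024",
    "ResourceType": "Dataset",
}

# A repository combining standards might expose both views of the same record.
print(sorted(dublin_core_record), sorted(datacite_record), sep="\n")
```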
基金supported by the National Key Research and Development Program of China [grant number 2019YFB2102903], the National Natural Science Foundation of China [grant number 41801306], the "CUG Scholar" Scientific Research Funds at China University of Geosciences (Wuhan) [grant number 2022034], and a grant from the State Key Laboratory of Resources and Environmental Information System.
文摘Poverty threatens human development, especially in developing countries, so ending poverty has become one of the most important United Nations Sustainable Development Goals (SDGs). This study aims to explore China's progress in poverty reduction from 2016 to 2019 through time-series multi-source geospatial data and a deep learning model. The poverty reduction efficiency (PRE) is measured by the difference between the out-of-poverty rates (which measure the probability of not being poor) of 2016 and 2019. The study shows that the probability of poverty in all regions of China exhibited an overall decreasing trend (PRE=0.264), which indicates that progress in poverty reduction during this period was significant. The Hu Huanyong Line (Hu Line) marks an uneven geographical pattern of the out-of-poverty rate between Southeast and Northwest China. From 2016 to 2019, the centroid of China's out-of-poverty rate moved 105.786 km to the northeast, while the standard deviation ellipse of the out-of-poverty rate moved 3 degrees away from the Hu Line, indicating that the regions with high out-of-poverty rates became more concentrated on the east side of the Hu Line. The results imply that the government's future poverty reduction policies should pay attention to infrastructure construction in poor areas and appropriately increase the population density in poor areas. This study fills the gap in research on poverty reduction at multiple scales and provides useful implications for the government's poverty reduction policy.
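The centroid-shift measurement can be sketched as follows with synthetic grid values: the out-of-poverty rate acts as the weight of each cell, and the displacement between the 2016 and 2019 weighted centroids is converted to kilometres with a rough equirectangular approximation (the study's exact projection and data are not reproduced here).

```python
# Sketch of the centroid-shift calculation behind the reported 105.786 km
# move of the out-of-poverty-rate centroid. Grid values and coordinates are
# synthetic; the distance formula is a rough approximation.
import numpy as np

def weighted_centroid(lon, lat, w):
    """Rate-weighted mean centre of a set of grid cells."""
    w = np.asarray(w, dtype=float)
    return np.average(lon, weights=w), np.average(lat, weights=w)

def approx_km(lon1, lat1, lon2, lat2):
    """Equirectangular distance approximation, adequate for short shifts."""
    mean_lat = np.radians((lat1 + lat2) / 2)
    dx = np.radians(lon2 - lon1) * np.cos(mean_lat) * 6371.0
    dy = np.radians(lat2 - lat1) * 6371.0
    return float(np.hypot(dx, dy))

lon = np.array([102.0, 108.5, 113.2, 118.9])
lat = np.array([25.1, 30.4, 34.8, 39.6])
rate_2016 = np.array([0.62, 0.75, 0.81, 0.88])   # out-of-poverty rates
rate_2019 = np.array([0.78, 0.88, 0.93, 0.97])

c16 = weighted_centroid(lon, lat, rate_2016)
c19 = weighted_centroid(lon, lat, rate_2019)
print(f"centroid shift: {approx_km(*c16, *c19):.1f} km")
```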
基金supported by the Deanship of Graduate Studies and Scientific Research at the University of Bisha through the Promising Program under grant number UB-Promising-33-1445.
文摘Open networks and heterogeneous services in the Internet of Vehicles (IoV) can lead to security and privacy challenges. One key requirement for such systems is the preservation of user privacy, ensuring a seamless experience in driving, navigation, and communication. These privacy needs are influenced by various factors, such as data collected at different intervals, trip durations, and user interactions. To address this, the paper proposes a Support Vector Machine (SVM) model designed to process large amounts of aggregated data and recommend privacy-preserving measures. The model analyzes data based on user demands and interactions with service providers or neighboring infrastructure. It aims to minimize privacy risks while ensuring service continuity and sustainability. The SVM model helps validate the system's reliability by creating a hyperplane that distinguishes between maximum and minimum privacy recommendations. The results demonstrate the effectiveness of the proposed SVM model in enhancing both privacy and service performance.
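A minimal sketch of such an SVM recommender is shown below, assuming three aggregated features (trip duration, interaction count, and share of location data collected) and a binary recommendation target; the features, the label-generating rule, and the RBF kernel are illustrative assumptions.

```python
# Illustrative sketch of an SVM that maps aggregated trip/interaction
# features to a binary privacy recommendation (minimum vs. maximum
# protection). Features, labels, and kernel choice are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 400
X = np.column_stack([
    rng.uniform(5, 120, n),      # trip duration (min)
    rng.integers(1, 50, n),      # interactions with roadside units
    rng.uniform(0, 1, n),        # share of location data collected
])
# Hypothetical rule used only to generate labels for the demo.
y = ((X[:, 1] > 25) | (X[:, 2] > 0.7)).astype(int)   # 1 = maximum privacy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```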