The development of 3D geological models involves the integration of large amounts of geological data, as well as additional accessible proprietary lithological, structural, geochemical, geophysical, and borehole data. Luanchuan, the case study area in southwestern Henan Province, is an important molybdenum-tungsten-lead-zinc polymetallic belt in China.
A reliable geological model plays a fundamental role in the efficiency and safety of mountain tunnel construction. However, regional models based on limited survey data represent macroscopic geological environments but not detailed internal geological characteristics, especially at tunnel portals with complex geological conditions. This paper presents a comprehensive methodological framework for refined modeling of the tunnel surrounding rock and subsequent mechanical analysis, with a particular focus on the natural spatial distortion of hard-soft rock interfaces at tunnel portals. A progressive prediction of geological structures is developed using multi-source data derived from the tunnel survey and excavation stages. To improve the accuracy of the models, a novel modeling method is proposed to integrate multi-source and multi-scale data based on data extraction and potential field interpolation. Finally, a regional-scale model and an engineering-scale model are built, providing clear insight into geological phenomena and supporting numerical calculation. The proposed framework is applied to a case study, the Long-tou mountain tunnel project in Guangzhou, China, where the dominant rock type is granite. The results show that the data integration and modeling methods effectively improve model structure refinement, and the improved model's calculation deviation is reduced by about 10% to 20% in the mechanical analysis. This study contributes to revealing complex geological environments with singular interfaces and to promoting the safety and performance of mountain tunneling.
Accurately and efficiently predicting the permeability of porous media is essential for addressing a wide range of hydrogeological issues. However, the complexity of porous media often limits the effectiveness of individual prediction methods. This study introduces a novel Particle Swarm Optimization-based Permeability Integrated Prediction model (PSO-PIP), which incorporates a particle swarm optimization algorithm enhanced with dynamic clustering and adaptive parameter tuning (KGPSO). The model integrates multi-source data from the Lattice Boltzmann Method (LBM), Pore Network Modeling (PNM), and the Finite Difference Method (FDM). By assigning optimal weight coefficients to the outputs of these methods, the model minimizes deviations from actual values and enhances permeability prediction performance. Initially, the computational performances of the LBM, PNM, and FDM are comparatively analyzed on datasets consisting of sphere packings and real rock samples. These methods are observed to exhibit computational biases in certain permeability ranges. The PSO-PIP model is proposed to combine the strengths of each computational approach and mitigate their limitations. It consistently produces predictions that are highly congruent with actual permeability values across all prediction intervals, significantly enhancing prediction accuracy. The outcomes of this study provide a new tool and perspective for the comprehensive, rapid, and accurate prediction of permeability in porous media.
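A minimal sketch of the weighted-integration idea behind such a model, assuming calibration samples with reference permeabilities are available: a plain global-best PSO (without the paper's dynamic clustering or adaptive tuning, which are not reproduced here) searches for convex weights that combine the three methods' predictions. Variable names and settings are illustrative, not the authors' implementation.

```python
import numpy as np

def pso_integration_weights(preds, k_ref, n_particles=30, n_iters=200, seed=0):
    """Find convex weights w so that preds @ w (columns: e.g. LBM, PNM, FDM
    predictions) matches the reference permeabilities k_ref.
    Plain global-best PSO; weights are kept non-negative and normalized."""
    rng = np.random.default_rng(seed)
    dim = preds.shape[1]

    def project(w):                       # keep weights on the simplex
        w = np.abs(w) + 1e-12
        return w / w.sum()

    def loss(w):                          # mean squared relative deviation
        k_hat = preds @ project(w)
        return np.mean(((k_hat - k_ref) / k_ref) ** 2)

    x = rng.random((n_particles, dim))    # particle positions
    v = np.zeros_like(x)                  # particle velocities
    p_best = x.copy()
    p_val = np.array([loss(w) for w in x])
    g_best = p_best[p_val.argmin()].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = x + v
        vals = np.array([loss(w) for w in x])
        improved = vals < p_val
        p_best[improved], p_val[improved] = x[improved], vals[improved]
        g_best = p_best[p_val.argmin()].copy()
    return project(g_best)

# toy usage: three methods with different biases, ten calibration samples
preds = np.array([[1.1, 0.9, 1.3]]) * np.linspace(1, 10, 10)[:, None]
k_ref = np.linspace(1, 10, 10)
print(pso_integration_weights(preds, k_ref))
```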
With the acceleration of the intelligent transformation of energy systems, the monitoring of equipment operation status and the optimization of production processes in thermal power plants face the challenge of multi-source heterogeneous data integration. In view of the heterogeneous characteristics of physical sensor data (including the temperature, vibration, and pressure signals generated by boilers, steam turbines, and other key equipment) and the real-time working-condition data of the SCADA system, this paper proposes a multi-source heterogeneous data fusion and analysis platform for thermal power plants based on edge computing and deep learning. By constructing a multi-level fusion architecture, the platform adopts a dynamic weight allocation strategy and a 5D digital twin model to realize the collaborative analysis of physical sensor data, simulation results, and expert knowledge. The data fusion module combines Kalman filtering, the wavelet transform, and Bayesian estimation to solve the problems of time-series alignment and dimensional differences across data sources. Simulation results show that the data fusion accuracy can be improved to more than 98%, and the calculation delay can be controlled within 500 ms. The data analysis module integrates a Dymola simulation model and the AERMOD pollutant diffusion model and supports the cascaded analysis of boiler combustion efficiency prediction and flue gas emission monitoring; the system response time is less than 2 s, and the data consistency verification accuracy reaches 99.5%.
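As a minimal illustration of the Kalman-filtering ingredient of such a fusion module (not the platform's actual pipeline), the sketch below smooths a noisy temperature stream with a scalar random-walk Kalman filter; the noise parameters and the synthetic boiler signal are assumptions made for the example.

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk state model with process noise q
    and measurement noise r. Returns the filtered estimates."""
    x, p = x0, p0
    out = []
    for z in measurements:
        p = p + q                # predict: state carries over, uncertainty grows
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with the new measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# toy usage: a slowly drifting boiler temperature observed with sensor noise
rng = np.random.default_rng(1)
true_temp = 500 + 0.05 * np.arange(200)
noisy = true_temp + rng.normal(0, 2.0, size=200)
smoothed = kalman_1d(noisy, q=1e-2, r=4.0, x0=noisy[0])
print(float(np.mean(np.abs(smoothed - true_temp))))
```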
Accurate monitoring of track irregularities is very helpful for improving vehicle operation quality and for formulating appropriate track maintenance strategies. Existing methods rely on complex signal processing algorithms and lack multi-source data analysis. Driven by multi-source measurement data, including the axle box, bogie frame, and carbody accelerations, this paper proposes a track irregularities monitoring network (TIMNet) based on deep learning methods. TIMNet uses the feature extraction capability of convolutional neural networks and the sequence mapping capability of the long short-term memory model to explore the mapping relationship between vehicle accelerations and track irregularities. The particle swarm optimization algorithm is used to optimize the network parameters so that both vertical and lateral track irregularities can be accurately identified in the time and spatial domains. The effectiveness and superiority of the proposed TIMNet are analyzed under different simulation conditions using a vehicle dynamics model. Field tests are conducted to prove the availability of TIMNet in quantitatively monitoring vertical and lateral track irregularities. Furthermore, comparative tests show that TIMNet has a better fitting degree and timeliness in monitoring track irregularities (vertical R² of 0.91, lateral R² of 0.84, and a time cost of 10 ms) than other classical regression models, and also a better anti-interference ability.
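A minimal sketch of the CNN-plus-LSTM mapping idea, assuming PyTorch is available; the layer sizes, channel counts, and three-channel acceleration input are illustrative and do not reproduce the authors' TIMNet architecture or its PSO-based tuning.

```python
import torch
import torch.nn as nn

class AccelToIrregularity(nn.Module):
    """Maps a window of multi-channel accelerations (axle box, bogie frame,
    carbody) to per-step vertical and lateral irregularity estimates."""
    def __init__(self, in_channels=3, conv_channels=32, hidden=64, out_dim=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(conv_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)   # vertical + lateral per step

    def forward(self, x):                 # x: (batch, channels, time)
        h = self.conv(x)                  # (batch, conv_channels, time)
        h = h.transpose(1, 2)             # (batch, time, conv_channels)
        h, _ = self.lstm(h)               # (batch, time, hidden)
        return self.head(h)               # (batch, time, 2)

# toy usage
model = AccelToIrregularity()
accel = torch.randn(8, 3, 400)            # 8 windows, 3 channels, 400 samples
irregularity = model(accel)
print(irregularity.shape)                  # torch.Size([8, 400, 2])
```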
Multi-source data fusion provides the high-precision spatial situational awareness essential for analyzing granular urban social activities. This study used Shanghai's catering industry as a case study, leveraging electronic reviews and consumer data sourced from third-party restaurant platforms collected in 2021. By performing weighted processing on two-dimensional point-of-interest (POI) data, clustering hotspots of high-dimensional restaurant data were identified. A hierarchical network of restaurant hotspots was constructed following the Central Place Theory (CPT) framework, while the Geo-Informatic Tupu method was employed to resolve the challenges posed by network deformation in multi-scale processes. These findings suggest the necessity of enhancing the spatial balance of Shanghai's urban centers by moderately increasing the number and service capacity of suburban centers at the urban periphery. Such measures would contribute to a more optimized urban structure and facilitate the outward dispersion of comfort-oriented facilities such as the restaurant industry. At a finer spatial scale, the distribution of restaurant hotspots demonstrates a polycentric and symmetric spatial pattern, with a developmental trend radiating outward along the city's ring roads. This trend can be attributed to the efforts of restaurants to establish connections with other urban functional spaces, leading to the reconfiguration of urban spaces, the expansion of restaurant-dedicated land use, and the reorganization of associated commercial activities. The results validate the existence of a polycentric urban structure in Shanghai but also highlight the instability of the restaurant hotspot network during cross-scale transitions.
Taking the Ming Tombs Forest Farm in Beijing as the research object, this research applied multi-source data fusion and GIS heat-map overlay analysis techniques, systematically collecting bird observation point data from the Global Biodiversity Information Facility (GBIF), population distribution data from the Oak Ridge National Laboratory (ORNL) in the United States, and information on the composition of tree species in forest areas suitable for birds and the forest geographic information of the Ming Tombs Forest Farm, based on literature research and field investigations. Using GIS technology, spatial processing was carried out on bird observation points and population distribution data to identify suitable bird-watching areas in different seasons. According to the suitability value range, these areas were then classified into different grades (from unsuitable to highly suitable). The research findings indicated significant spatial heterogeneity in the bird-watching suitability of the Ming Tombs Forest Farm. The north side of the reservoir was generally a core area with high suitability in all seasons. The deep, aged broad-leaved mixed forests supported the overlapping coexistence of the ecological niches of various bird species, such as Zosterops simplex and Urocissa erythrorhyncha. In contrast, the shallow forest-edge pure coniferous forests and mixed forests were more suitable for specialized species such as Carduelis sinica. The southern urban area and the core area of the mausoleums had relatively low suitability due to ecological fragmentation or human interference. Based on these results, this paper proposed a three-level protection framework of "core area conservation - buffer zone management - isolation zone construction" and a spatio-temporally coordinated human-bird coexistence strategy. It was also suggested that the human-bird coexistence space could be optimized through measures such as constructing sound and light buffer interfaces, restoring ecological corridors, and integrating cultural heritage elements. This research provides an operational technical approach and decision-making support for the scientific planning of bird-watching sites and the coordination of ecological protection and tourism development.
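A minimal numpy sketch of the weighted heat-map overlay and grading step: normalized raster layers are combined with weights and classified into suitability grades. The layer choices, weights, and equal-interval thresholds are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def suitability_overlay(layers, weights, n_grades=4):
    """Weighted overlay of normalized raster layers (e.g. bird-observation
    density, habitat score, inverted disturbance), then classification into
    suitability grades 1..n_grades (unsuitable .. highly suitable)."""
    stack = []
    for layer in layers:
        lo, hi = np.nanmin(layer), np.nanmax(layer)
        stack.append((layer - lo) / (hi - lo + 1e-12))   # min-max normalize
    score = np.average(np.stack(stack), axis=0, weights=weights)
    edges = np.linspace(0, 1, n_grades + 1)[1:-1]        # equal-interval breaks
    return score, np.digitize(score, edges) + 1

# toy usage with three 100x100 raster layers
rng = np.random.default_rng(0)
bird_density, habitat, disturbance = rng.random((3, 100, 100))
score, grade = suitability_overlay(
    [bird_density, habitat, 1 - disturbance], weights=[0.5, 0.3, 0.2])
print(grade.min(), grade.max())
```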
How to integrate heterogeneous semi-structured Web records into a relational database is an important and challenging research topic. An improved conditional random field model was presented that combines the learning of labeled samples and unlabeled database records in order to reduce the dependence on tediously hand-labeled training data. The proposed model was used to solve the problem of schema matching between the data source schema and the database schema. Experimental results using a large number of Web pages from diverse domains show the novel approach's effectiveness.
To solve the query processing correctness problem for semantic-based relational data integration, the semantics of SPARQL (Simple Protocol and RDF Query Language) queries is defined. In the course of query rewriting, all relevant tables are found and decomposed into minimal connectable units. The minimal connectable units are joined according to the semantic queries to produce semantically correct query plans. Algorithms for query rewriting and transforming are presented, and their computational complexity is discussed. In the worst case, the query decomposing algorithm finishes in O(n^2) time and the query rewriting algorithm requires O(nm) time. The performance of the algorithms is verified by experiments, and the results show that when the length of a query is less than 8, the query processing algorithms provide satisfactory performance.
Multi-source seismic technology is an efficient seismic acquisition method that requires a group of blended seismic data to be separated into single-source seismic data for subsequent processing. The separation of blended seismic data is a linear inverse problem. According to the relationship between the number of shots and the number of simultaneous sources in the acquisition system, this separation is divided into a determined or overdetermined linear inverse problem that is easy to solve and an underdetermined linear inverse problem that is difficult to solve. For the latter, this paper presents an optimization method that imposes a sparsity constraint on the wavefields to construct the objective function of the inversion, and the problem is solved using an iterative thresholding method. For the most extremely underdetermined separation problem, with single shooting and multiple sources, this paper presents a method of pseudo-deblending with random noise filtering. In this method, approximate common shot gathers are obtained through the pseudo-deblending process, and the random noise that appears when the approximate common shot gathers are sorted into common receiver gathers is eliminated through filtering. The proposed separation methods are applied to three types of numerical simulation data, including pure data without noise, data with random noise, and data with linear regular noise, and obtain satisfactory results. The noise suppression effects of these methods are sufficient, particularly with single-shot blended seismic data, which verifies the effectiveness of the proposed methods.
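A minimal iterative soft-thresholding (ISTA) sketch for a generic sparsity-constrained linear inverse problem of the form min ||A x - d||^2 + lam ||x||_1, which is the general technique the abstract invokes; the random operator A, step size, and threshold below are illustrative, not the paper's blended-acquisition operator or wavefield transform.

```python
import numpy as np

def ista(A, d, lam=0.1, n_iters=500):
    """Iterative soft-thresholding for min_x 0.5*||A x - d||^2 + lam*||x||_1.
    A: (m, n) forward operator, d: (m,) observed (blended) data."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - d)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x

# toy usage: recover a sparse vector from an underdetermined system
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.normal(0, 3, 8)
d = A @ x_true + 0.01 * rng.normal(size=60)
x_rec = ista(A, d, lam=0.05, n_iters=2000)
print(np.round(np.corrcoef(x_rec, x_true)[0, 1], 3))
```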
Iced transmission line galloping poses a significant threat to the safety and reliability of power systems, leading directly to line tripping, disconnections, and power outages. Existing early warning methods for iced transmission line galloping suffer from issues such as reliance on a single data source, neglect of irregular time series, and lack of attention-based closed-loop feedback, resulting in high rates of missed and false alarms. To address these challenges, we propose an Internet of Things (IoT) empowered early warning method for transmission line galloping that integrates time-series data from optical fiber sensing and weather forecasts. Initially, the method applies a primary adaptive weighted fusion to the IoT-empowered optical fiber real-time sensing data and weather forecast data, followed by a secondary fusion based on a Back Propagation (BP) neural network, and uses the K-medoids algorithm to cluster the fused data. Furthermore, an adaptive irregular time series perception adjustment module is introduced into the traditional Gated Recurrent Unit (GRU) network, and closed-loop feedback based on an attention mechanism is employed to update network parameters through gradient feedback of the loss function, enabling closed-loop training and time-series prediction with the GRU network model. Subsequently, considering the various types of prediction data and the duration of icing, an iced transmission line galloping risk coefficient is established, and warnings are categorized based on this coefficient. Finally, using an IoT-driven realistic dataset of iced transmission line galloping, the effectiveness of the proposed method is validated through multi-dimensional simulation scenarios.
For reservoirs with complex non-Gaussian geological characteristics, such as carbonate reservoirs or reservoirs with sedimentary facies distribution, it is difficult to implement history matching directly, especially for ensemble-based data assimilation methods. In this paper, we propose a multi-source information fused generative adversarial network (MSIGAN) model, which is used for parameterization of complex geologies. In MSIGAN, various information, such as facies distribution, microseismic data, and inter-well connectivity, can be integrated to learn the geological features. Two major generative models in deep learning, the variational autoencoder (VAE) and the generative adversarial network (GAN), are combined in our model. The proposed MSIGAN model is then integrated into the ensemble smoother with multiple data assimilation (ESMDA) method to conduct history matching. We tested the proposed method on two reservoir models with fluvial facies. The experimental results show that the proposed MSIGAN model can effectively learn complex geological features, which promotes the accuracy of history matching.
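A minimal numpy sketch of a single ES-MDA update on a generic parameter ensemble (here the parameters stand in for whatever low-dimensional codes a generator provides); the linear forward model, inflation factors, and ensemble size are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def esmda_update(m_ens, d_ens, d_obs, obs_std, alpha, rng):
    """One ES-MDA assimilation step.
    m_ens: (n_ens, n_par) parameter ensemble, d_ens: (n_ens, n_obs) simulated data,
    d_obs: (n_obs,) observations, obs_std: (n_obs,) observation std,
    alpha: inflation coefficient for this step (sum of 1/alpha over steps = 1)."""
    n_ens = m_ens.shape[0]
    dm = m_ens - m_ens.mean(axis=0)
    dd = d_ens - d_ens.mean(axis=0)
    c_md = dm.T @ dd / (n_ens - 1)                      # cross-covariance
    c_dd = dd.T @ dd / (n_ens - 1)                      # data covariance
    c_d = np.diag(obs_std ** 2)
    # perturb the observations with inflated noise, one realization per member
    d_pert = d_obs + np.sqrt(alpha) * obs_std * rng.standard_normal(d_ens.shape)
    gain = c_md @ np.linalg.inv(c_dd + alpha * c_d)     # Kalman-like gain
    return m_ens + (d_pert - d_ens) @ gain.T

# toy usage: linear forward model g(m) = G m, four assimilation steps
rng = np.random.default_rng(0)
n_ens, n_par, n_obs = 100, 20, 10
G = rng.normal(size=(n_obs, n_par))
m_true = rng.normal(size=n_par)
obs_std = 0.1 * np.ones(n_obs)
d_obs = G @ m_true + obs_std * rng.standard_normal(n_obs)
m_ens = rng.normal(size=(n_ens, n_par))
for alpha in [4.0, 4.0, 4.0, 4.0]:                      # sum(1/alpha) = 1
    d_ens = m_ens @ G.T
    m_ens = esmda_update(m_ens, d_ens, d_obs, obs_std, alpha, rng)
print(float(np.linalg.norm(m_ens.mean(axis=0) - m_true)))
```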
In traditional medicine and ethnomedicine, medicinal plants have long been recognized as the basis for materials in therapeutic applications worldwide. In particular, the remarkable curative effect of traditional Chinese medicine during the coronavirus disease 2019 (COVID-19) pandemic has attracted extensive attention globally. Medicinal plants have therefore become increasingly popular among the public. However, with increasing demand for and profit from medicinal plants, commercial fraud such as adulteration or counterfeiting sometimes occurs, which poses a serious threat to clinical outcomes and the interests of consumers. With rapid advances in artificial intelligence, machine learning can be used to mine information on various medicinal plants to establish an ideal resource database. We herein present a review that mainly introduces common machine learning algorithms and discusses their application in multi-source data analysis of medicinal plants. The combination of machine learning algorithms and multi-source data analysis facilitates a comprehensive analysis and aids in the effective evaluation of the quality of medicinal plants. The findings of this review provide new possibilities for promoting the development and utilization of medicinal plants.
This paper analyzes the status of existing resources through extensive research and international cooperation on the basis of four typical global monthly surface temperature datasets: the climate research dataset of the University of East Anglia (CRUTEM3), the dataset of the U.S. National Climatic Data Center (GHCN-V3), the dataset of the U.S. National Aeronautics and Space Administration (GISTEMP), and the Berkeley Earth surface temperature dataset (Berkeley). China's first global monthly temperature dataset over land was developed by integrating the four aforementioned global temperature datasets and several regional datasets from major countries or regions. The dataset contains records of at least 20 years from 9,519 stations worldwide for monthly mean temperature, from 7,073 stations for maximum temperature, and from 6,587 stations for minimum temperature. Compared with CRUTEM3 and GHCN-V3, the station density is much higher, particularly for South America, Africa, and Asia. Moreover, data from significantly more stations were available after 1990, which dramatically reduced the uncertainty of the estimated global temperature trend during 1990–2011. The integrated dataset can serve as a reliable data source for global climate change research.
Industrial big data integration and sharing (IBDIS) is of great significance for managing and providing data for big data analysis in manufacturing systems. A novel fog-computing-based IBDIS approach called Fog-IBDIS is proposed in order to integrate and share industrial big data with high raw data security and low network traffic loads by moving the integration task from the cloud to the edge of the network. First, a task flow graph (TFG) is designed to model the data analysis process. The TFG is composed of several tasks, which are executed by the data owners through the Fog-IBDIS platform in order to protect raw data privacy. Second, the function of Fog-IBDIS to enable data integration and sharing is presented in five modules: TFG management, compilation and running control, the data integration model, the basic algorithm library, and the management component. Finally, a case study is presented to illustrate the implementation of Fog-IBDIS, which ensures raw data security by deploying the analysis tasks executed by the data generators, and eases the network traffic load by greatly reducing the volume of transmitted data.
In view of the lack of comprehensive evaluation and analysis combining natural and human multi-dimensional factors, the urban surface temperature patterns of Changsha in 2000, 2009, and 2016 are retrieved from multi-source spatial data (Landsat 5 and Landsat 8 satellite imagery, POI spatial big data, a digital elevation model, etc.), and 12 natural and human factors closely related to the urban thermal environment are quickly obtained. The standard deviation ellipse and spatial principal component analysis (PCA) methods are used to analyze the urban human residential thermal environment and its influencing factors. The results show that the heat island area increased by 547 km² and the maximum surface temperature difference reached 10.1°C during the period 2000–2016. The urban heat island was mainly concentrated in urban built-up areas, such as industrial and commercial agglomerations and densely populated urban centers, and its intensity gradually decreases from the urban center to the suburbs. There were multiple high-temperature centers, such as the Wuyi Square business circle, the Xingsha economic and technological development zone in Changsha County, the Wangcheng industrial zone, the Yuelu industrial agglomeration, and the Tianxin industrial zone. From 2000 to 2016, the main axis of spatial development of the heat island remained in the northeast-southwest direction. The center of gravity of the heat island shifted 2.7 km to the southwest with a deflection angle of 54.9° in 2000–2009, and shifted 4.8 km to the northeast with a deflection angle of 60.9° in 2009–2016. On the whole, the change in the spatial pattern of the thermal environment in Changsha was related to changes in urban construction intensity. The PCA results indicate that landscape pattern, urban construction intensity, and topography were the main factors affecting the spatial pattern of the urban thermal environment of Changsha. The promotion effect of human factors on the formation of the heat island effect was obviously greater than that of natural factors; the temperature would rise by 0.293°C under the combined effect of human and natural factors. Given the complexity of the factors influencing the urban thermal environment of human settlements, the use of multi-source data helps to reveal the spatial pattern and evolution of the urban thermal environment, deepen the understanding of the causes of the urban heat island effect, and clarify the correlation between human and natural factors, thereby providing scientific support for improving the quality of urban human settlements.
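A minimal numpy sketch of the standard deviational ellipse used for tracking a heat island's center of gravity and main axis. Here the ellipse is derived from the eigen-decomposition of the weighted coordinate covariance, and the weighting by a temperature anomaly is an illustrative assumption rather than the study's exact procedure.

```python
import numpy as np

def standard_deviational_ellipse(x, y, w=None):
    """Weighted standard deviational ellipse of point data.
    Returns (center_x, center_y, major_axis_sd, minor_axis_sd,
    orientation in degrees counterclockwise from the x-axis)."""
    w = np.ones_like(x, dtype=float) if w is None else np.asarray(w, float)
    cx, cy = np.average(x, weights=w), np.average(y, weights=w)   # mean center
    dx, dy = x - cx, y - cy
    cov = np.cov(np.vstack([dx, dy]), aweights=w)                 # 2x2 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)                        # ascending order
    major, minor = np.sqrt(eigvals[1]), np.sqrt(eigvals[0])
    angle = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))  # major-axis direction
    return cx, cy, major, minor, angle

# toy usage: pixel coordinates weighted by land-surface temperature anomaly
rng = np.random.default_rng(0)
x = rng.normal(0, 3, 1000)
y = 0.6 * x + rng.normal(0, 1, 1000)
temp_anomaly = np.clip(rng.normal(2, 1, 1000), 0, None)
print(standard_deviational_ellipse(x, y, temp_anomaly))
```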
Multi-source data plays an important role in the evolution of media convergence. Its fusion processing enables the further mining of data and utilization of data value and broadens the path for the sharing and dissemination of media data. However, it also faces serious problems in terms of protecting user and data privacy. Many privacy protection methods have been proposed to solve the problem of privacy leakage during the process of data sharing, but they suffer from two flaws: 1) the lack of algorithmic frameworks for specific scenarios such as dynamic datasets in the media domain; 2) the inability to solve the problem of the high computational complexity of ciphertext in multi-source data privacy protection, resulting in long encryption and decryption times. In this paper, we propose a multi-source data privacy protection method based on homomorphic encryption and blockchain technology, which solves the privacy protection problem of multi-source heterogeneous data in media dissemination and reduces ciphertext processing time. We deployed the proposed method on the Hyperledger platform for testing and compared it with privacy protection schemes based on k-anonymity and differential privacy. The experimental results show that the key generation, encryption, and decryption times of the proposed method are lower than those of data privacy protection methods based on k-anonymity and differential privacy. This significantly reduces the processing time of multi-source data, which gives it potential for use in many applications.
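To make the homomorphic-encryption ingredient concrete, here is a toy additively homomorphic Paillier sketch in pure Python (tiny fixed primes, no padding or security hardening, Python 3.8+ for the modular inverse). It illustrates the general technique only, not the paper's scheme or its blockchain integration.

```python
from math import gcd
import random

def lcm(a, b):
    return a * b // gcd(a, b)

def paillier_keygen(p=1009, q=1013):
    """Toy key generation with fixed small primes (insecure, for illustration)."""
    n = p * q
    lam = lcm(p - 1, q - 1)
    g = n + 1                                   # standard simple choice of g
    # mu = (L(g^lam mod n^2))^{-1} mod n, with L(x) = (x - 1) // n
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)               # modular inverse (Python 3.8+)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    while True:
        r = random.randrange(2, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

# additive homomorphism: E(m1) * E(m2) mod n^2 decrypts to m1 + m2
pub, priv = paillier_keygen()
c1, c2 = encrypt(pub, 123), encrypt(pub, 456)
print(decrypt(pub, priv, (c1 * c2) % (pub[0] ** 2)))    # -> 579
```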
Urban functional areas (UFAs) are a core scientific issue affecting urban sustainability. The current knowledge gap is mainly reflected in the lack of multi-scale quantitative interpretation methods from the perspective of human-land interaction. In this paper, based on multi-source big data including 250 m × 250 m resolution cell phone data, 1.81×10^5 Points of Interest (POI) records, and administrative boundary data, we built a UFA identification method and demonstrated it empirically in Shenyang City, China. We argue that the method can effectively identify multi-scale, multi-type UFAs based on human activity and further reveal the spatial correlation between urban facilities and human activity. The empirical study suggests that the employment functional zones in Shenyang City are more concentrated in the central city than other single functional zones. There are more mixed functional areas in the central city, while the planned industrial new cities need to develop comprehensive functions. UFAs exhibit scale effects and human-land interaction patterns. We suggest that city decision makers apply multi-source big data to measure urban functional services in a more refined manner from a supply-demand perspective.
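A minimal sketch of one common POI-based grid-labeling heuristic (per-category shares with a dominance threshold). The categories, the 0.5 threshold, and the omission of the cell-phone activity dimension are illustrative assumptions and are not claimed to match the authors' identification method.

```python
import numpy as np

CATEGORIES = ["residential", "employment", "commercial", "public_service"]

def label_cells(poi_counts, dominance=0.5, min_pois=5):
    """Label grid cells from per-category POI counts.
    poi_counts: (n_cells, n_categories) array.
    A cell gets a single-function label if one category's share exceeds
    `dominance`, otherwise 'mixed'; sparse cells are labeled 'no_data'."""
    totals = poi_counts.sum(axis=1)
    shares = poi_counts / np.maximum(totals[:, None], 1)
    labels = []
    for total, share in zip(totals, shares):
        if total < min_pois:
            labels.append("no_data")
        elif share.max() >= dominance:
            labels.append(CATEGORIES[int(share.argmax())])
        else:
            labels.append("mixed")
    return labels

# toy usage: four grid cells
counts = np.array([[40, 5, 3, 2],     # clearly residential
                   [10, 12, 11, 9],   # mixed
                   [1, 0, 1, 0],      # too sparse
                   [2, 30, 4, 4]])    # employment
print(label_cells(counts))
```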
Due to the complex nature of multi-source geological data, it is difficult to rebuild every geological structure through a single 3D modeling method. The multi-source data interpretation method put forward in this analysis is based on a database-driven pattern and focuses on the discrete and irregular features of geological data. The geological data from a variety of sources, covering a range of accuracy, resolution, quantity, and quality, are classified and integrated according to their reliability and consistency for 3D modeling. A new interpolation-approximation fitting construction algorithm for geological surfaces with the non-uniform rational B-spline (NURBS) technique is then presented. The NURBS technique can retain the balance among the requirements for accuracy, surface continuity, and data storage of geological structures. Finally, four alternative 3D modeling approaches are demonstrated with reference to examples selected according to the data quantity and accuracy specification. The proposed approaches offer flexible modeling patterns for different practical engineering demands.
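A minimal sketch of NURBS evaluation via the Cox-de Boor recursion, shown for a curve for brevity (the surface case uses a tensor product of two such bases); the control points, weights, and knot vector are illustrative, not taken from the paper's fitting algorithm.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis of degree p at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, weights, knots, p=2):
    """Rational combination: sum(N_i w_i P_i) / sum(N_i w_i)."""
    basis = np.array([bspline_basis(i, p, u, knots) for i in range(len(ctrl))])
    wb = basis * weights
    return (wb[:, None] * ctrl).sum(axis=0) / wb.sum()

# toy usage: a quadratic NURBS curve through 5 control points (e.g. a horizon trace)
ctrl = np.array([[0, 0], [1, 2], [2, -1], [3, 3], [4, 0]], float)
weights = np.array([1, 2, 1, 1, 1], float)
knots = np.array([0, 0, 0, 1, 2, 3, 3, 3], float)          # clamped, degree 2
for u in np.linspace(0, 2.999, 5):
    print(np.round(nurbs_point(u, ctrl, weights, knots), 3))
```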
Data fusion can effectively process multi-sensor information to obtain more accurate and reliable results than a single sensor. The water quality data in the environment come from different sensors, so the data must be fused. In our research, a self-adaptive weighted data fusion method is used to integrate the pH value, temperature, dissolved oxygen, and NH3 concentration measurements of the water quality environment. Based on the fusion, the Grubbs method is used to detect abnormal data, so as to provide data support for the estimation, prediction, and early warning of water quality.
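A minimal sketch of the two ingredients: inverse-variance ("self-adaptive") weighting of redundant sensor readings, and a two-sided Grubbs outlier test using scipy's t distribution for the critical value. The significance level and toy pH readings are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def adaptive_weighted_fusion(readings):
    """Inverse-variance weighting of parallel sensor readings.
    readings: (n_sensors, n_samples); each sensor's weight is proportional
    to 1/variance, so noisier sensors contribute less to the fused value."""
    variances = readings.var(axis=1, ddof=1)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused = weights @ readings.mean(axis=1)
    return fused, weights

def grubbs_outlier(x, alpha=0.05):
    """Two-sided Grubbs test: returns the index of the most extreme value
    if it is an outlier at level alpha, otherwise None."""
    x = np.asarray(x, float)
    n = len(x)
    g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return int(np.argmax(np.abs(x - x.mean()))) if g > g_crit else None

# toy usage: three pH sensors of different quality, plus one injected spike
rng = np.random.default_rng(0)
readings = 7.2 + rng.normal(0, [0.02, 0.05, 0.10], size=(30, 3)).T
fused_ph, w = adaptive_weighted_fusion(readings)
print(round(float(fused_ph), 3), np.round(w, 3))
print(grubbs_outlier(np.append(readings[2], 8.5)))     # flags the injected spike
```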