Slope units are divided according to the real topography and have clear geological characteristics, making them ideal units for evaluating the susceptibility to geological disasters. Based on the results of automatically and manually corrected hydrological slope unit division, the Longhua District, Shenzhen City, Guangdong Province, was selected as the study area. A total of 15 influencing factors, namely fluctuation, slope, slope aspect, curvature, topographic wetness index (TWI), stream power index (SPI), topographic roughness index (TRI), annual average rainfall, distance to water system, engineering rock group, distance to fault, land use, normalized difference vegetation index (NDVI), nighttime light, and distance to road, were selected as evaluation indicators. The information volume model (IV) and random points were used to select non-geological-disaster units, and the random forest model (RF) was then used to evaluate the susceptibility to geological disasters. The automatic slope unit and the hydrological slope unit were compared and analyzed under the random forest and information volume-random forest models. The results show that the area under the curve (AUC) values of the automatic slope unit evaluation results are 0.931 for the IV-RF model and 0.716 for the RF model, which are 0.6% (IV-RF model) and 1.9% (RF model) higher than those for the hydrological slope unit. A comparison of the evaluation methods based on the two types of slope units shows that the manually corrected hydrological slope unit method is highly subjective, complicated to operate, and less accurate, whereas the method based on automatic slope unit division is efficient and accurate, is suitable for large-scale geological disaster evaluation, and better addresses the problem of geological disaster susceptibility evaluation.
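The information volume (IV) scoring used above to pick non-disaster units can be sketched as follows. This is the common textbook form of the model, not necessarily the authors' exact implementation, and the per-class counts are hypothetical:

```python
import math

def information_value(classes, hazard_total, unit_total):
    """IV for each class of one factor:
    IV = ln( (hazard units in class / all hazard units)
           / (units in class / all units) ).
    Positive IV -> class is over-represented among hazard units."""
    ivs = {}
    for cls, (hazard_in_cls, units_in_cls) in classes.items():
        num = hazard_in_cls / hazard_total
        den = units_in_cls / unit_total
        ivs[cls] = math.log(num / den)
    return ivs

# Hypothetical counts: (hazard units, all units) per slope-angle class
classes = {"0-15 deg": (10, 400), "15-30 deg": (60, 400), ">30 deg": (30, 200)}
iv = information_value(classes, hazard_total=100, unit_total=1000)
```

Units falling only in low-IV classes are then candidates for the non-disaster sample set fed to the random forest.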
With the development of smart cities and smart technologies, parks, as functional units of the city, are facing smart transformation. The development of smart parks can help address challenges of technology integration within urban spaces and serve as testbeds for exploring smart city planning and governance models. Information models facilitate the effective integration of technology into space. Building Information Modeling (BIM) and City Information Modeling (CIM) have been widely used in urban construction. However, existing information models have limitations when applied at the park scale, so it is necessary to develop an information model suited to parks. This paper first traces the evolution of park smart transformation, reviews the global landscape of smart park development, and identifies key trends and persistent challenges. Addressing the particularities of parks, the concept of Park Information Modeling (PIM) is proposed. PIM leverages smart technologies such as artificial intelligence, digital twins, and collaborative sensing to help form a 'space-technology-system' smart structure, enabling systematic management of diverse park spaces, addressing the deficiency in park-level information models, and aiming to achieve scale articulation between BIM and CIM. Finally, through a detailed top-level design application case study of the Nanjing Smart Education Park in China, this paper illustrates the translation of the PIM concept into practice, showcasing its potential to provide smart management tools for park managers and enhance services for park stakeholders, although further empirical validation is required.
High-dimensional data causes difficulties in machine learning due to high time consumption and large memory requirements. In particular, in a multi-label environment, the complexity grows with the number of labels. Moreover, an optimization problem that fully considers all dependencies between features and labels is difficult to solve. In this study, we propose a novel regression-based multi-label feature selection method that integrates mutual information to better exploit the underlying data structure. By incorporating mutual information into the regression formulation, the model captures not only linear relationships but also complex non-linear dependencies. The proposed objective function simultaneously considers three types of relationships: (1) feature redundancy, (2) feature-label relevance, and (3) inter-label dependency. These three quantities are computed using mutual information, allowing the proposed formulation to capture nonlinear dependencies among variables. These three types of relationships are key factors in multi-label feature selection, and our method expresses them within a unified formulation, enabling efficient optimization while simultaneously accounting for all of them. To efficiently solve the proposed optimization problem under non-negativity constraints, we develop a gradient-based optimization algorithm with fast convergence. Experimental results on seven multi-label datasets show that the proposed method outperforms existing multi-label feature selection techniques.
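The mutual information quantity underlying the redundancy, relevance, and label-dependency terms above can be illustrated for discrete variables; the toy feature and label vectors are invented for demonstration:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) = sum_xy p(x,y) * ln( p(x,y) / (p(x) p(y)) ), in nats.
    Zero for independent variables; captures non-linear dependence too."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log(p_joint * n * n / (px[x] * py[y]))
    return mi

# A feature identical to the label carries maximal information about it;
# a poorly aligned feature carries much less.
label  = [0, 0, 1, 1, 0, 1, 0, 1]
f_good = list(label)
f_bad  = [0, 1, 0, 1, 0, 1, 0, 1]
```

Ranking features by such scores (high relevance to labels, low redundancy with already-selected features) is the standard building block that the paper's unified objective combines.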
This research pioneers the integration of geographic information systems (GIS) and 3D modeling within a virtual reality (VR) framework to assess the viability and planning of a 20 MW hybrid wind-solar photovoltaic (PV) system connected to the local grid. The study focuses on Dakhla, Morocco, a region with vast untapped renewable energy potential. By leveraging GIS, we innovatively analyze geographical and environmental factors that influence optimal site selection and system design. The incorporation of VR technologies offers an unprecedented level of realism and immersion, allowing stakeholders to virtually experience the project's impact and design in a dynamic, interactive environment. This methodology includes extensive data collection, advanced modeling, and simulations, ensuring that the hybrid system is precisely tailored to the unique climatic and environmental conditions of Dakhla. Our analysis reveals that the region possesses a photovoltaic solar potential of approximately 2400 kWh/m^(2) per year, with an average annual wind power density of about 434 W/m^(2) at an 80-meter hub height. Productivity simulations indicate that the 20 MW hybrid system could generate approximately 60 GWh of energy per year and 1369 GWh over its 25-year lifespan. To validate these findings, we employed the System Advisor Model (SAM) software and the Global Solar Photovoltaic Atlas platform. This comprehensive and interdisciplinary approach not only provides a robust assessment of the system's feasibility but also offers valuable insights into its potential socio-economic and environmental impact.
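The headline figures above can be sanity-checked with back-of-envelope arithmetic. The ~0.75%/yr degradation rate used below is an assumption of this sketch (a typical PV value), not a number stated in the abstract:

```python
def capacity_factor(annual_gwh, capacity_mw, hours=8760):
    """Fraction of nameplate energy actually delivered over a year."""
    return annual_gwh * 1000 / (capacity_mw * hours)

def lifetime_energy(first_year_gwh, years, degradation):
    """Total output with a constant fractional output decline per year."""
    return sum(first_year_gwh * (1 - degradation) ** t for t in range(years))

cf = capacity_factor(60, 20)              # ~0.34 for the 20 MW hybrid plant
total = lifetime_energy(60, 25, 0.0075)   # assumed 0.75%/yr degradation
```

A 60 GWh/yr output from 20 MW implies a capacity factor near 34%, and with the assumed degradation the 25-year total lands close to the reported 1369 GWh, so the abstract's numbers are internally consistent.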
Dear Editor, This letter deals with automatically constructing an OPC UA information model (IM) aimed at enhancing data interoperability among heterogeneous system components within manufacturing automation systems. Empowered by the large language model (LLM), we propose a novel multi-agent collaborative framework to streamline the end-to-end OPC UA IM modeling process. Each agent is equipped with meticulously engineered prompt templates, augmenting its capacity to execute specific tasks. We conduct modeling experiments using real textual data to demonstrate the effectiveness of the proposed method, improving modeling efficiency and reducing the labor workload.
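The agent-chaining idea in the letter can be sketched minimally. Everything here is hypothetical: the agent names, prompt templates, and the `fake_llm` stand-in are illustrative, not the authors' framework:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """One step of a hypothetical OPC UA modeling pipeline: fill a prompt
    template with the previous step's output and hand it to an LLM."""
    name: str
    template: str
    llm: Callable[[str], str]

    def run(self, payload: str) -> str:
        return self.llm(self.template.format(input=payload))

def fake_llm(prompt: str) -> str:
    # Placeholder for a real model call; just echoes the prompt upper-cased.
    return prompt.upper()

pipeline = [
    Agent("extractor", "Extract device attributes from: {input}", fake_llm),
    Agent("modeler", "Map attributes to OPC UA ObjectTypes: {input}", fake_llm),
]
text = "temperature sensor, range 0-100C"
for agent in pipeline:
    text = agent.run(text)
```

The point is only the structure: each agent owns its engineered template, and outputs flow end-to-end without manual handoffs.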
Porosity is an important attribute for evaluating the petrophysical properties of reservoirs, and has guiding significance for the exploration and development of oil and gas. Seismic inversion is a key method for comprehensively obtaining porosity. Deep learning methods provide an intelligent approach to suppress the ambiguity of conventional inversion methods. However, under the trace-by-trace inversion strategy, there is a lack of constraints from geological structural information, resulting in poor lateral continuity of prediction results. In addition, the heterogeneity and sedimentary variability of subsurface media also lead to uncertainty in intelligent prediction. To achieve fine prediction of porosity, we consider lateral continuity and variability and propose an improved structural-modeling deep learning porosity prediction method. First, we combine well data, waveform attributes, and structural information as constraints to model geophysical parameters, constructing a high-quality training dataset with sedimentary facies-controlled significance. Subsequently, we introduce a gated axial attention mechanism to enhance the features of the dataset and design a bidirectional closed-loop network system constrained by inversion and forward processes. The constraint coefficient is adaptively adjusted according to the petrophysical relationship between porosity and impedance in the study area. We demonstrate the effectiveness of the adaptive coefficient through numerical experiments. Finally, we compare the performance of the proposed method and conventional deep learning methods using data from two study areas. The proposed method achieves better consistency with the logging porosity, demonstrating its superiority.
Processing police incident data in public security involves complex natural language processing (NLP) tasks, including information extraction. This data contains extensive entity information, such as people, locations, and events, while also involving reasoning tasks like personnel classification, relationship judgment, and implicit inference. Moreover, utilizing models for extracting information from police incident data poses a significant challenge, data scarcity, which limits the effectiveness of traditional rule-based and machine-learning methods. To address these challenges, we propose TIPS. In collaboration with public security experts, we used de-identified police incident data to create templates that enable large language models (LLMs) to populate data slots and generate simulated data, enhancing data density and diversity. We then designed schemas to efficiently manage complex extraction and reasoning tasks, constructing a high-quality dataset and fine-tuning multiple open-source LLMs. Experiments showed that the fine-tuned ChatGLM-4-9B model achieved an F1 score of 87.14%, nearly 30% higher than the base model, significantly reducing error rates. Manual corrections further improved performance by 9.39%. This study demonstrates that combining large-scale pre-trained models with limited high-quality domain-specific data can greatly enhance information extraction in low-resource environments, offering a new approach for intelligent public security applications.
The management of large-scale architectural engineering projects (e.g., airports, hospitals) is plagued by information silos, cost overruns, and scheduling delays. While building information modeling (BIM) has improved 3D design coordination, its static nature limits its utility in real-time construction management and operational phases. This paper proposes a novel synergistic framework that integrates the static, deep data of BIM with the dynamic, real-time capabilities of digital twin (DT) technology. The framework establishes a closed-loop data flow from design (BIM) to construction (IoT, drones, BIM 360) to operation (DT platform). We detail the required technology stack, including IoT sensors, cloud computing, and AI-driven analytics. The application of this framework is illustrated through a simulated case study of a mega-terminal airport construction project, demonstrating potential reductions in rework of 15%, improvement in labor productivity of 10%, and enhanced predictive maintenance capabilities. This research contributes to the field of construction engineering by providing a practical model for achieving full-lifecycle digitalization and intelligent project management.
Background: Acquiring relevant information about procurement targets is fundamental to procuring medical devices. Although traditional Natural Language Processing (NLP) and Machine Learning (ML) methods have improved information retrieval efficiency to a certain extent, they exhibit significant limitations in adaptability and accuracy when dealing with procurement documents characterized by diverse formats and a high degree of unstructured content. The emergence of Large Language Models (LLMs) offers new possibilities for efficient procurement information processing and extraction. Methods: This study collected procurement transaction documents from public procurement websites and proposed a procurement Information Extraction (IE) method based on LLMs. Unlike traditional approaches, this study systematically explores the applicability of LLMs to both structured and unstructured entities in procurement documents, addressing the challenges posed by format variability and content complexity. Furthermore, an optimized prompt framework tailored for procurement document extraction tasks is developed to enhance the accuracy and robustness of IE. The aim is to process and extract key information from medical device procurement quickly and accurately, meeting stakeholders' demands for precision and timeliness in information retrieval. Results: Experimental results demonstrate that, compared to traditional methods, the proposed approach achieves an F1 score of 0.9698, representing a 4.85% improvement over the best baseline model. Moreover, both recall and precision are close to 97%, significantly outperforming other models and exhibiting exceptional overall recognition capability. Notably, further analysis reveals that the proposed method consistently maintains high performance across both structured and unstructured entities in procurement documents while balancing recall and precision effectively, demonstrating its adaptability to varying document formats. Ablation experiments validate the effectiveness of the proposed prompting strategy. Conclusion: This study also explores the challenges and potential improvements of the proposed method in IE tasks and provides insights into its feasibility for real-world deployment and its application directions, further clarifying its adaptability and value. The method not only exhibits significant advantages in medical device procurement but also holds promise for providing new approaches to information processing and decision support in other domains.
With the rapid development of the economy, air pollution caused by industrial expansion has caused serious harm to human health and social development. Therefore, establishing an effective air pollution concentration prediction system is of great scientific and practical significance for accurate and reliable predictions. This paper proposes a combined point-interval prediction system for pollutant concentration by leveraging neural networks, a meta-heuristic optimization algorithm, and fuzzy theory. Fuzzy information granulation is used in data preprocessing to transform numerical sequences into fuzzy particles for comprehensive feature extraction. The golden jackal optimization algorithm is employed in the optimization stage to fine-tune model hyperparameters. In the prediction stage, an ensemble learning method combines training results from multiple models to obtain final point predictions, while quantile regression and kernel density estimation are used for interval predictions on the test set. Experimental results demonstrate that the combined model achieves a high goodness of fit, with a coefficient of determination (R^(2)) of 99.3%, and a maximum improvement in mean absolute percentage error (MAPE) over benchmark models of 12.6%. This suggests that the integrated learning system proposed in this paper can provide more accurate deterministic predictions as well as reliable uncertainty analysis compared to traditional models, offering a practical reference for air quality early warning.
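The fuzzy information granulation step above can be sketched with one common choice of granule: each fixed-length window collapses to a triangular fuzzy particle. The window length and the sample concentration series are hypothetical:

```python
def granulate(series, window):
    """Collapse each non-overlapping window into a triangular fuzzy particle
    (low, core, up): low/up bound the window, core is its mean."""
    particles = []
    for i in range(0, len(series) - window + 1, window):
        w = series[i:i + window]
        particles.append((min(w), sum(w) / len(w), max(w)))
    return particles

# Hypothetical hourly PM2.5 readings, granulated per 3-step window
pm25 = [35, 40, 38, 55, 60, 52, 20, 22, 24]
particles = granulate(pm25, 3)
```

Downstream models then predict the (low, core, up) components instead of raw points, which is what makes interval output natural.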
In the era of big data, data-driven technologies are increasingly leveraged by industry to facilitate autonomous learning and intelligent decision-making. However, the challenge of "small samples in big data" emerges when datasets lack the comprehensive information necessary for addressing complex scenarios, which hampers adaptability. Thus, enhancing data completeness is essential. Knowledge-guided virtual sample generation transforms domain knowledge into extensive virtual datasets, thereby reducing dependence on limited real samples and enabling zero-sample fault diagnosis. This study used building air conditioning systems as a case study. We innovatively used the large language model (LLM) to acquire domain knowledge for sample generation, significantly lowering knowledge acquisition costs and establishing a generalized framework for knowledge acquisition in engineering applications. The acquired knowledge guided the design of diffusion boundaries in mega-trend diffusion (MTD), while the Monte Carlo method was used to sample within the diffusion function to create information-rich virtual samples. Additionally, a noise-adding technique was introduced to enhance the information entropy of these samples, thereby improving the robustness of neural networks trained with them. Experimental results showed that training the diagnostic model exclusively with virtual samples achieved an accuracy of 72.80%, significantly surpassing traditional small-sample supervised learning in terms of generalization. This underscores the quality and completeness of the generated virtual samples.
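The MTD diffusion-boundary idea can be sketched with one common formulation from the virtual sample generation literature (the exact boundary design in the paper is knowledge-guided and may differ); the four-point sample is invented:

```python
import math
import random

def mtd_bounds(x):
    """Mega-trend-diffusion bounds: widen the sample range asymmetrically
    toward the side holding more observations (one common formulation)."""
    center = (min(x) + max(x)) / 2
    n_l = sum(1 for v in x if v < center)
    n_u = sum(1 for v in x if v > center)
    skew_l, skew_u = n_l / (n_l + n_u), n_u / (n_l + n_u)
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / (len(x) - 1)
    spread = lambda n: math.sqrt(-2 * (var / n) * math.log(1e-20)) if n else 0.0
    return center - skew_l * spread(n_l), center + skew_u * spread(n_u)

random.seed(0)
sample = [21.0, 22.5, 23.0, 26.0]        # hypothetical sensor readings
lo, hi = mtd_bounds(sample)
# Monte Carlo sampling inside the diffusion range yields virtual samples
virtual = [random.uniform(lo, hi) for _ in range(100)]
```

The widened [lo, hi] range is what lets a handful of real readings seed an information-rich virtual dataset.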
The software technology field is facing new talent demands brought by the Information Technology Application Innovation (ITAI) industry. This paper takes Shanwei Institute of Technology as an example to explore in depth the construction of a school-enterprise community education model driven by the ITAI industry. The institute establishes the Kirin Workshop training base to facilitate talent cultivation, integrates the ITAI Application Adaptation Center to enhance technical capabilities, cooperates with Liqi Technology to establish an industrial college for government talent training, adjusts the professional curriculum system, and arranges for students to participate in ITAI vocational skills competitions. The school-enterprise collaborative cultivation mechanism meets the talent needs of the ITAI field, with effective practical results. This paper also points out the shortcomings of the school-enterprise collaborative education model in the ITAI industry and provides optimization methods to explore new paths for industry-education integration and to serve the development of regional and national ITAI industries.
Existing research on auto commuters' pre-trip route choice behavior ignores the combined influence of real-time information and all respondents' historical information. To overcome this shortcoming, an approach to describing pre-trip route choice behavior that incorporates both real-time and historical information is proposed. Two types of real-time information are investigated: quantitative information and prescriptive information. Using bounded rationality theory, the influence of historical information on the real-time information reference process is examined first. Estimation results show that historical information has a significant influence on the quantitative information reference process, but not on the prescriptive information reference process. The route choice behavior is then modeled. A comparison is also made among three route choice models, one of which does not incorporate the real-time information reference process, while the others do. Estimation results show that route choice behavior is better described when the reference process of both quantitative and prescriptive information is considered.
Using the additivity of information quantity and the principle of equivalence of information quantity, this paper derives general conversion formulae of the information-quantity method for converting (synthesizing) systems consisting of units with different success-failure models. Following the fundamental method of unit reliability assessment, general models for the approximate lower limits of system reliability are given. Finally, the paper analyses the application of the assessment method through examples; the assessment results are neither conservative nor overly optimistic, and are very satisfactory. The assessment method can be extended to systems that have fixed reliability structural models.
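The idea of converting unit-level success-failure test data into equivalent system-level data can be illustrated with the classical Lindstrom-Madden scheme for series systems. This is a well-known approximation used here for illustration, not necessarily the paper's exact formulae, and the unit test counts are hypothetical:

```python
def equivalent_system_tests(units):
    """Fold unit test data (n_i trials, f_i failures) for a series system
    into equivalent system-level trials and successes (Lindstrom-Madden)."""
    n_eq = min(n for n, _ in units)
    r_hat = 1.0
    for n, f in units:
        r_hat *= (n - f) / n          # point estimate of series reliability
    return n_eq, n_eq * r_hat, r_hat

def zero_failure_lower_limit(n, gamma=0.9):
    """Lower confidence limit when all n equivalent trials succeed:
    R_L = (1 - gamma) ** (1 / n)."""
    return (1 - gamma) ** (1 / n)

units = [(50, 0), (40, 1), (60, 0)]   # hypothetical (trials, failures) per unit
n_eq, s_eq, r_hat = equivalent_system_tests(units)
```

The equivalent (n_eq, s_eq) pair can then feed any standard binomial lower-limit procedure, which is the spirit of assessing a system from heterogeneous unit data.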
This research analyzed the causes of asymmetric information in the agricultural product supply chain and drew conclusions about the operation mechanism and characteristics of supply chains under asymmetric information. Finally, the research detailed profit sharing in the agricultural product supply chain in the context of asymmetric information and proposed suggestions, providing references for the pricing and profit sharing of agricultural product supply chains.
This work generated landslide susceptibility maps for the Three Gorges Reservoir (TGR) area, China, using different machine learning models. Three advanced methods, namely gradient boosting decision tree (GBDT), random forest (RF), and information value (InV) models, were used, and their performances were assessed and compared. In total, 202 landslides were mapped using a series of field surveys, aerial photographs, and reviews of historical and bibliographical data. Nine causative factors were then considered in landslide susceptibility map generation using the GBDT, RF, and InV models. All of the causative factor maps were resampled to a resolution of 28.5 m. Of the 486289 pixels in the area, 28526 were landslide pixels and 457763 were non-landslide pixels. Finally, landslide susceptibility maps were generated using the three machine learning models, and their performances were assessed through receiver operating characteristic (ROC) curves, sensitivity, specificity, overall accuracy (OA), and the kappa coefficient (KAPPA). The results showed that the GBDT, RF, and InV models overall produced reasonably accurate landslide susceptibility maps. Among the three methods, GBDT outperformed the other two, and can provide strong technical support for producing landslide susceptibility maps in the TGR area.
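The ROC-based comparison above reduces to computing AUC, which has a simple rank interpretation; the scores below are invented for illustration:

```python
def auc(scores_pos, scores_neg):
    """AUC equals the probability that a random landslide pixel scores higher
    than a random non-landslide pixel, counting ties as half (the
    Mann-Whitney view of the ROC area)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

pos = [0.9, 0.8, 0.75, 0.6]   # hypothetical model scores on landslide pixels
neg = [0.7, 0.4, 0.3, 0.2]    # hypothetical scores on non-landslide pixels
```

An AUC near 1 means the model almost always ranks landslide pixels above non-landslide pixels, which is the criterion behind the GBDT/RF/InV comparison.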
A large amount of information is frequently encountered when characterizing the sample model in a chemical process. A fault diagnosis method based on dynamic modeling with feature engineering is proposed in this paper to effectively remove the nonlinear correlation redundancy of the chemical process. From a whole-process point of view, the method uses the properties of mutual information to select the optimal variable subset. It extracts the correlation among variables in the whitening process without being limited to linear correlations. Further, PCA (Principal Component Analysis) dimension reduction is used to extract the feature subset before fault diagnosis. Application results on the TE (Tennessee Eastman) simulation process show that the dynamic modeling process of MIFE (Mutual Information Feature Engineering) can accurately extract the nonlinear correlation relationships among process variables and can effectively reduce the dimension of feature detection in process monitoring.
Slope aspect is one of the indispensable internal factors besides lithology, relative elevation, and slope degree. In this paper, the authors use an information value model with Geographical Information System (GIS) technology to study how slope aspect contributes to landslide growth in the Yunyang to Wushan segment of the Three Gorges Reservoir area, and the relationship between aspect and landslide growth is quantified. Through research on 205 landslide examples, it is found that south-facing slopes contribute most, southeast- and southwest-facing slopes contribute moderately, and the other five aspects contribute little. The research result agrees well with the observed facts. The result of this paper can provide a potent basis for future construction in the Three Gorges Reservoir area.
Funding: Under the auspices of the National Natural Science Foundation of China (No. 42330510).
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (RS-2020-NR049579).
文摘High-dimensional data causes difficulties in machine learning due to high time consumption and large memory requirements.In particular,in amulti-label environment,higher complexity is required asmuch as the number of labels.Moreover,an optimization problem that fully considers all dependencies between features and labels is difficult to solve.In this study,we propose a novel regression-basedmulti-label feature selectionmethod that integrates mutual information to better exploit the underlying data structure.By incorporating mutual information into the regression formulation,the model captures not only linear relationships but also complex non-linear dependencies.The proposed objective function simultaneously considers three types of relationships:(1)feature redundancy,(2)featurelabel relevance,and(3)inter-label dependency.These three quantities are computed usingmutual information,allowing the proposed formulation to capture nonlinear dependencies among variables.These three types of relationships are key factors in multi-label feature selection,and our method expresses them within a unified formulation,enabling efficient optimization while simultaneously accounting for all of them.To efficiently solve the proposed optimization problem under non-negativity constraints,we develop a gradient-based optimization algorithm with fast convergence.Theexperimental results on sevenmulti-label datasets show that the proposed method outperforms existingmulti-label feature selection techniques.
Abstract: This research pioneers the integration of geographic information systems (GIS) and 3D modeling within a virtual reality (VR) framework to assess the viability and planning of a 20 MW hybrid wind-solar-photovoltaic (PV) system connected to the local grid. The study focuses on Dakhla, Morocco, a region with vast untapped renewable energy potential. By leveraging GIS, we innovatively analyze the geographical and environmental factors that influence optimal site selection and system design. The incorporation of VR technologies offers an unprecedented level of realism and immersion, allowing stakeholders to virtually experience the project's impact and design in a dynamic, interactive environment. This novel methodology includes extensive data collection, advanced modeling, and simulations, ensuring that the hybrid system is precisely tailored to the unique climatic and environmental conditions of Dakhla. Our analysis reveals that the region possesses a photovoltaic solar potential of approximately 2400 kWh/m^(2) per year, with an average annual wind power density of about 434 W/m^(2) at an 80-meter hub height. Productivity simulations indicate that the 20 MW hybrid system could generate approximately 60 GWh of energy per year and 1369 GWh over its 25-year lifespan. To validate these findings, we employed the System Advisor Model (SAM) software and the Global Solar Photovoltaic Atlas platform. This comprehensive and interdisciplinary approach not only provides a robust assessment of the system's feasibility but also offers valuable insights into its potential socio-economic and environmental impact.
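A quick arithmetic check ties the abstract's headline numbers together; the figures come from the abstract itself, while the degradation reading is our own inference, not a claim from the paper.

```python
# Back-of-the-envelope check of the quoted figures for the 20 MW hybrid plant.
rated_mw = 20.0
annual_gwh = 60.0      # simulated year-one output
lifetime_gwh = 1369.0  # quoted 25-year total
years = 25

# Capacity factor implied by 60 GWh/yr from a 20 MW plant
capacity_factor = annual_gwh * 1000 / (rated_mw * 8760)
print(f"implied capacity factor: {capacity_factor:.1%}")

# A constant 60 GWh/yr over 25 years would give 1500 GWh; the quoted
# 1369 GWh implies average output of about 91% of year-one production,
# consistent with gradual degradation over the plant's life (our inference).
ratio = lifetime_gwh / (annual_gwh * years)
print(f"lifetime output vs. constant-output baseline: {ratio:.1%}")
```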
Funding: Supported by the Fundamental Research Funds for the Central Universities (226-2024-00004), the National Natural Science Foundation of China (U23A20326), and the Key Research and Development Program of Zhejiang Province (2025C01061).
Abstract: Dear Editor, this letter deals with automatically constructing an OPC UA information model (IM) aimed at enhancing data interoperability among heterogeneous system components within manufacturing automation systems. Empowered by large language models (LLMs), we propose a novel multi-agent collaborative framework to streamline the end-to-end OPC UA IM modeling process. Each agent is equipped with meticulously engineered prompt templates, augmenting its capacity to execute specific tasks. We conduct modeling experiments using real textual data to demonstrate the effectiveness of the proposed method, improving modeling efficiency and reducing the labor workload.
Funding: Supported by the Research Program of Fine Exploration and Surrounding Rock Classification Technology for Deep Buried Long Tunnels Driven by Horizontal Directional Drilling and Magnetotelluric Methods Based on Deep Learning under Grant E202408010, and the Sichuan Science and Technology Program under Grant 2024NSFSC1984 and Grant 2024NSFSC1990.
Abstract: Porosity is an important attribute for evaluating the petrophysical properties of reservoirs and has guiding significance for the exploration and development of oil and gas. Seismic inversion is a key method for comprehensively obtaining porosity. Deep learning methods provide an intelligent approach to suppressing the ambiguity of conventional inversion methods. However, under a trace-by-trace inversion strategy, there is a lack of constraints from geological structural information, resulting in poor lateral continuity of prediction results. In addition, the heterogeneity and sedimentary variability of subsurface media also lead to uncertainty in intelligent prediction. To achieve fine prediction of porosity, we consider lateral continuity and variability and propose an improved structural-modeling deep learning porosity prediction method. First, we combine well data, waveform attributes, and structural information as constraints to model geophysical parameters, constructing a high-quality training dataset with sedimentary facies-controlled significance. Subsequently, we introduce a gated axial attention mechanism to enhance the features of the dataset and design a bidirectional closed-loop network system constrained by inversion and forward processes. The constraint coefficient is adaptively adjusted according to the petrophysical relationship between porosity and impedance in the study area. We demonstrate the effectiveness of the adaptive coefficient through numerical experiments. Finally, we compare the performance of the proposed method and conventional deep learning methods using data from two study areas. The proposed method achieves better consistency with logging porosity, demonstrating its superiority.
Abstract: Processing police incident data in public security involves complex natural language processing (NLP) tasks, including information extraction. This data contains extensive entity information, such as people, locations, and events, while also involving reasoning tasks like personnel classification, relationship judgment, and implicit inference. Moreover, using models to extract information from police incident data poses a significant challenge: data scarcity, which limits the effectiveness of traditional rule-based and machine-learning methods. To address these issues, we propose TIPS. In collaboration with public security experts, we used de-identified police incident data to create templates that enable large language models (LLMs) to populate data slots and generate simulated data, enhancing data density and diversity. We then designed schemas to efficiently manage complex extraction and reasoning tasks, constructing a high-quality dataset and fine-tuning multiple open-source LLMs. Experiments showed that the fine-tuned ChatGLM-4-9B model achieved an F1 score of 87.14%, nearly 30% higher than the base model, significantly reducing error rates. Manual corrections further improved performance by 9.39%. This study demonstrates that combining large-scale pre-trained models with limited high-quality domain-specific data can greatly enhance information extraction in low-resource environments, offering a new approach for intelligent public security applications.
Abstract: The management of large-scale architectural engineering projects (e.g., airports, hospitals) is plagued by information silos, cost overruns, and scheduling delays. While building information modeling (BIM) has improved 3D design coordination, its static nature limits its utility in real-time construction management and operational phases. This paper proposes a novel synergistic framework that integrates the static, deep data of BIM with the dynamic, real-time capabilities of digital twin (DT) technology. The framework establishes a closed-loop data flow from design (BIM) to construction (IoT, drones, BIM 360) to operation (DT platform). We detail the required technology stack, including IoT sensors, cloud computing, and AI-driven analytics. The application of this framework is illustrated through a simulated case study of a mega-terminal airport construction project, demonstrating potential reductions in rework of 15%, improvement in labor productivity of 10%, and enhanced predictive maintenance capabilities. This research contributes to the field of construction engineering by providing a practical model for achieving full-lifecycle digitalization and intelligent project management.
Abstract: Background: Acquiring relevant information about procurement targets is fundamental to procuring medical devices. Although traditional Natural Language Processing (NLP) and Machine Learning (ML) methods have improved information retrieval efficiency to a certain extent, they exhibit significant limitations in adaptability and accuracy when dealing with procurement documents characterized by diverse formats and a high degree of unstructured content. The emergence of Large Language Models (LLMs) offers new possibilities for efficient procurement information processing and extraction. Methods: This study collected procurement transaction documents from public procurement websites and proposes a procurement Information Extraction (IE) method based on LLMs. Unlike traditional approaches, this study systematically explores the applicability of LLMs to both structured and unstructured entities in procurement documents, addressing the challenges posed by format variability and content complexity. Furthermore, an optimized prompt framework tailored for procurement document extraction tasks is developed to enhance the accuracy and robustness of IE. The aim is to process and extract key information from medical device procurement quickly and accurately, meeting stakeholders' demands for precision and timeliness in information retrieval. Results: Experimental results demonstrate that, compared to traditional methods, the proposed approach achieves an F1 score of 0.9698, representing a 4.85% improvement over the best baseline model. Moreover, both recall and precision are close to 97%, significantly outperforming other models and exhibiting exceptional overall recognition capability. Notably, further analysis reveals that the proposed method consistently maintains high performance across both structured and unstructured entities while balancing recall and precision effectively, demonstrating its adaptability to varying document formats. The results of ablation experiments validate the effectiveness of the proposed prompting strategy. Conclusion: This study also explores the challenges and potential improvements of the proposed method in IE tasks and provides insights into its feasibility for real-world deployment and its application directions, further clarifying its adaptability and value. The method not only exhibits significant advantages in medical device procurement but also holds promise for providing new approaches to information processing and decision support in various domains.
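The reported F1 score is consistent with the stated precision and recall: F1 is the harmonic mean 2PR/(P+R). The specific precision and recall values below are hypothetical, chosen only to sit near the reported ~97%; the paper does not state them individually.

```python
# Arithmetic check: with precision and recall both near 97%, the harmonic
# mean reproduces the reported F1 of about 0.9698 (p and r are hypothetical).
precision, recall = 0.9705, 0.9691
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # → 0.9698
```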
Funding: Supported by the General Scientific Research Funding of the Science and Technology Development Fund (FDCT) in Macao (No. 0150/2022/A) and the Faculty Research Grants of Macao University of Science and Technology (No. FRG-22-074-FIE).
Abstract: With the rapid development of the economy, air pollution caused by industrial expansion has caused serious harm to human health and social development. Establishing an effective air pollution concentration prediction system is therefore of great scientific and practical significance for accurate and reliable forecasts. This paper proposes a combined point-interval prediction system for pollutant concentration by leveraging neural networks, a meta-heuristic optimization algorithm, and fuzzy theory. Fuzzy information granulation is used in data preprocessing to transform numerical sequences into fuzzy particles for comprehensive feature extraction. The golden jackal optimization algorithm is employed in the optimization stage to fine-tune model hyperparameters. In the prediction stage, an ensemble learning method combines training results from multiple models to obtain final point predictions, while quantile regression and kernel density estimation are used for interval predictions on the test set. Experimental results demonstrate that the combined model achieves a high goodness of fit, with a coefficient of determination (R^(2)) of 99.3% and a maximum difference in mean absolute percentage error (MAPE) relative to the benchmark models of 12.6%. This suggests that the integrated learning system proposed in this paper can provide more accurate deterministic predictions as well as reliable uncertainty analysis compared with traditional models, offering a practical reference for air quality early warning.
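The interval-prediction step can be sketched with plain quantile regression, one of the two interval techniques the abstract names. The synthetic series and the gradient-boosting model below are illustrative stand-ins, not the authors' pollutant data or ensemble pipeline.

```python
# Sketch: quantile regression producing a 90% prediction interval,
# analogous to the interval-prediction stage described in the abstract.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, size=500)  # noisy synthetic signal

models = {}
for name, alpha in [("lower", 0.05), ("point", 0.5), ("upper", 0.95)]:
    m = GradientBoostingRegressor(loss="quantile", alpha=alpha, random_state=0)
    m.fit(X, y)
    models[name] = m

lo = models["lower"].predict(X)
hi = models["upper"].predict(X)
coverage = np.mean((y >= lo) & (y <= hi))  # empirical interval coverage
print(f"empirical coverage of the 90% interval: {coverage:.1%}")
```

In the paper's system, kernel density estimation provides a second, distribution-based route to the same kind of interval.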
Funding: Supported by the National Natural Science Foundation of China (No. 62306281), the Natural Science Foundation of Zhejiang Province (Nos. LQ23E060006 and LTGG24E050005), and the Key Research Plan of Jiaxing City (No. 2024BZ20016).
Abstract: In the era of big data, data-driven technologies are increasingly leveraged by industry to facilitate autonomous learning and intelligent decision-making. However, the challenge of "small samples in big data" emerges when datasets lack the comprehensive information necessary for addressing complex scenarios, which hampers adaptability. Thus, enhancing data completeness is essential. Knowledge-guided virtual sample generation transforms domain knowledge into extensive virtual datasets, thereby reducing dependence on limited real samples and enabling zero-sample fault diagnosis. This study used building air conditioning systems as a case study. We innovatively used a large language model (LLM) to acquire domain knowledge for sample generation, significantly lowering knowledge acquisition costs and establishing a generalized framework for knowledge acquisition in engineering applications. The acquired knowledge guided the design of diffusion boundaries in mega-trend diffusion (MTD), while the Monte Carlo method was used to sample within the diffusion function to create information-rich virtual samples. Additionally, a noise-adding technique was introduced to enhance the information entropy of these samples, thereby improving the robustness of neural networks trained with them. Experimental results showed that training the diagnostic model exclusively with virtual samples achieved an accuracy of 72.80%, significantly surpassing traditional small-sample supervised learning in terms of generalization. This underscores the quality and completeness of the generated virtual samples.
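The MTD-plus-Monte-Carlo step can be sketched as follows. The bound formulas follow a common mega-trend diffusion formulation from the virtual-sample literature; the authors' LLM-guided boundary design and noise-adding step are not reproduced, and the sample values are made up.

```python
# Sketch: mega-trend diffusion bounds estimated from a small real sample,
# then Monte Carlo sampling of virtual points within those bounds.
import numpy as np

def mtd_bounds(x, eps=1e-20):
    # One common MTD formulation: widen the data range asymmetrically,
    # with more diffusion on the side holding more observations.
    x = np.asarray(x, dtype=float)
    center = (x.max() + x.min()) / 2.0
    n_l = max(np.sum(x < center), 1)   # observations left of the center
    n_u = max(np.sum(x > center), 1)   # observations right of the center
    skew_l = n_l / (n_l + n_u)
    skew_u = n_u / (n_l + n_u)
    var = x.var(ddof=1)
    lower = center - skew_l * np.sqrt(-2.0 * (var / n_l) * np.log(eps))
    upper = center + skew_u * np.sqrt(-2.0 * (var / n_u) * np.log(eps))
    return lower, upper

rng = np.random.default_rng(1)
real = np.array([20.1, 21.3, 19.8, 22.0, 20.7])  # small real sample (made up)
lo, hi = mtd_bounds(real)
virtual = rng.uniform(lo, hi, size=200)          # Monte Carlo virtual samples
print(lo < real.min() and hi > real.max())       # bounds enclose the data
```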
Funding: Supported by the Foundation of Shanwei Institute of Technology (swjy23-008).
Abstract: The software technology field is facing new talent demands brought by the Information Technology Application Innovation (ITAI) industry. This paper takes Shanwei Institute of Technology as an example to explore in depth the construction of a school-enterprise community education model driven by the ITAI industry. The institute establishes the Kirin Workshop training base to facilitate talent cultivation, integrates the ITAI Application Adaptation Center to enhance technical capabilities, cooperates with Liqi Technology to establish an industrial college for government talent training, adjusts the professional curriculum system, and arranges for students to participate in ITAI vocational skills competitions. The school-enterprise collaborative cultivation mechanism meets the talent needs of the ITAI field, with effective practical results. This paper also points out the shortcomings of the school-enterprise collaborative education model in the ITAI industry and provides optimization methods to explore new paths for industry-education integration and to serve the development of regional and national ITAI industries [1].
Funding: Supported by the Scientific Research Innovation Project for College Graduates in Jiangsu Province (No. CX10B_071Z) and the National High Technology Research and Development Program of China (863 Program) (No. 2011AA110304).
Abstract: Existing research on auto commuters' pre-trip route choice behavior ignores the combined influence of real-time information and all respondents' historical information. To overcome this shortcoming, an approach to describing pre-trip route choice behavior that incorporates both real-time and historical information is proposed. Two types of real-time information are investigated: quantitative information and prescriptive information. Using bounded rationality theory, the influence of historical information on the real-time information reference process is examined first. Estimation results show that historical information has a significant influence on the quantitative information reference process, but not on the prescriptive information reference process. The route choice behavior is then modeled. A comparison is also made among three route choice models, one of which does not incorporate the real-time information reference process, while the others do. Estimation results show that route choice behavior is better described when the reference process of both quantitative and prescriptive information is considered.
Abstract: Using the additivity of information quantity and the principle of equivalence of information quantity, this paper derives general conversion formulae of the information quantity method for synthesizing systems consisting of different success-failure model units. Based on the fundamental method of unit reliability assessment, general models for the approximate lower limits of system reliability are given. Finally, the paper analyses the application of the assessment method through examples; the assessment results are neither conservative nor overly optimistic and are very satisfactory. The assessment method can be extended to systems with fixed reliability structural models.
Funding: Supported by the S&T Development Strategy Program of Tianjin (15ZLZLZF00210 and 15ZLZLZF00390).
Abstract: This research analyzed the causes of asymmetric information in agricultural product supply chains and summarized the operation mechanism and characteristics of supply chains under asymmetric information. Finally, it detailed profit sharing in agricultural product supply chains in the context of asymmetric information and proposed suggestions, providing a reference for the pricing and profit sharing of agricultural product supply chains.
Funding: This work was supported in part by the National Natural Science Foundation of China (61601418, 41602362, 61871259), in part by the Opening Foundation of the Hunan Engineering and Research Center of Natural Resource Investigation and Monitoring (2020-5), in part by the Qilian Mountain National Park Research Center (Qinghai) (Grant No. GKQ2019-01), and in part by the Geomatics Technology and Application Key Laboratory of Qinghai Province (Grant No. QHDX-2019-01).
Abstract: This work generated landslide susceptibility maps for the Three Gorges Reservoir (TGR) area, China, using different machine learning models. Three advanced machine learning methods, namely gradient boosting decision tree (GBDT), random forest (RF), and information value (InV) models, were used, and their performances were assessed and compared. In total, 202 landslides were mapped using a series of field surveys, aerial photographs, and reviews of historical and bibliographical data. Nine causative factors were then considered in landslide susceptibility map generation using the GBDT, RF, and InV models. All maps of the causative factors were resampled to a resolution of 28.5 m. Of the 486,289 pixels in the area, 28,526 were landslide pixels and 457,763 were non-landslide pixels. Finally, landslide susceptibility maps were generated using the three machine learning models, and their performances were assessed through receiver operating characteristic (ROC) curves, sensitivity, specificity, overall accuracy (OA), and the kappa coefficient (KAPPA). The results showed that the GBDT, RF, and InV models overall produced reasonably accurate landslide susceptibility maps. Among the three methods, GBDT outperformed the other two and can provide strong technical support for producing landslide susceptibility maps in the TGR area.
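The evaluation step described above (ROC curves, sensitivity, specificity) can be sketched on synthetic data. Everything below is illustrative: the real study uses 486,289 mapped pixels and nine specific causative factors, neither of which is reproduced here.

```python
# Sketch: scoring a susceptibility classifier with ROC-AUC, sensitivity,
# and specificity, mirroring the metrics used to compare GBDT/RF/InV.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 9))                            # 9 stand-in causative factors
signal = X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2]
y = (signal + rng.normal(0, 1, n) > 1.0).astype(int)   # 1 = landslide pixel

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
proba = clf.predict_proba(Xte)[:, 1]

auc = roc_auc_score(yte, proba)
tn, fp, fn, tp = confusion_matrix(yte, clf.predict(Xte)).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate on landslide pixels
specificity = tn / (tn + fp)   # true-negative rate on non-landslide pixels
print(f"AUC={auc:.3f} sensitivity={sensitivity:.3f} specificity={specificity:.3f}")
```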
Funding: Supported by the National Natural Science Foundation of China (21576143).
Abstract: A large amount of information is frequently encountered when characterizing the sample model in a chemical process. A fault diagnosis method based on dynamic modeling of feature engineering is proposed in this paper to effectively remove the nonlinear correlation redundancy of a chemical process. From a whole-process point of view, the method uses mutual information to select the optimal variable subset, extracting the correlations among variables in the whitening process without being limited to linear correlations. Further, PCA (Principal Component Analysis) dimension reduction is used to extract the feature subset before fault diagnosis. Application results on the TE (Tennessee Eastman) simulation process show that the dynamic modeling process of MIFE (Mutual Information Feature Engineering) can accurately extract the nonlinear correlation relationships among process variables and can effectively reduce the dimension of feature detection in process monitoring.
Abstract: Slope aspect is one of the indispensable internal factors besides lithology, relative elevation, and slope degree. In this paper, the authors use the information value model with Geographical Information System (GIS) technology to study how slope aspect contributes to landslide growth in the Yunyang-to-Wushan segment of the Three Gorges Reservoir area, and the relationship between aspect and landslide growth is quantified. From research on 205 landslide examples, it is found that south-facing slopes contribute most, southeast- and southwest-facing slopes contribute moderately, and the other five aspects contribute little. The result agrees well with field observations and can provide a sound basis for future construction in the Three Gorges Reservoir area.
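The information value model referred to above can be sketched in a few lines: for each factor class, IV = ln((landslides in class / total landslides) / (class area / total area)), with positive values indicating classes that favour landslide growth. The counts below are invented for illustration and are not the paper's actual tally of its 205 cases.

```python
# Sketch: information value (IV) of each slope-aspect class.
# IV > 0 means the class hosts more landslides than its share of area.
import math

classes  = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
slides   = [10, 15, 20, 35, 60, 30, 20, 15]   # landslides per class (made up)
area_km2 = [50, 50, 50, 50, 50, 50, 50, 50]   # class areas (made up, equal)

total_s, total_a = sum(slides), sum(area_km2)
iv = {c: math.log((s / total_s) / (a / total_a))
      for c, s, a in zip(classes, slides, area_km2)}

# With these illustrative counts, the south-facing class gets the highest IV,
# matching the qualitative ranking reported in the abstract.
best = max(iv, key=iv.get)
print(best, round(iv[best], 3))
```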