Slope units are divided according to the real topography and have clear geological characteristics, making them ideal units for evaluating the susceptibility to geological disasters. Based on the results of automatically and manually corrected hydrological slope unit division, the Longhua District, Shenzhen City, Guangdong Province, was selected as the study area. A total of 15 influencing factors, namely fluctuation, slope, slope aspect, curvature, topographic wetness index (TWI), stream power index (SPI), topographic roughness index (TRI), annual average rainfall, distance to water system, engineering rock group, distance to fault, land use, normalized difference vegetation index (NDVI), nighttime light, and distance to road, were selected as evaluation indicators. The information value model (IV) and random points were used to select non-geological-disaster units, and then the random forest model (RF) was used to evaluate the susceptibility to geological disasters. The automatic slope unit and the hydrological slope unit were compared and analyzed under the random forest and information value random forest models. The results show that the area under the curve (AUC) values of the automatic slope unit evaluation results are 0.931 for the IV-RF model and 0.716 for the RF model, which are 0.6% (IV-RF model) and 1.9% (RF model) higher than those for the hydrological slope unit. Comparing the evaluation methods based on the two types of slope units, the hydrological slope unit method based on manual correction is highly subjective, complicated to operate, and lower in evaluation accuracy, whereas the method based on automatic slope unit division is efficient and accurate, is suitable for large-scale geological disaster evaluation, and can better deal with the problem of geological disaster susceptibility evaluation.
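As a rough illustration of the IV step described above: for each class j of an influencing factor, the information value is I_j = ln((N_j/N)/(S_j/S)), where N_j/N is the share of disaster units falling in class j and S_j/S is the share of all units in that class. The slope data, binning, and smoothing constant below are illustrative assumptions, not the study's actual inputs.

```python
import numpy as np

# Synthetic slope units: disaster likelihood rises with slope angle.
rng = np.random.default_rng(0)
slope = rng.uniform(0, 60, size=2000)            # slope angle per unit (degrees)
disaster = (slope + rng.normal(0, 15, 2000) > 45).astype(int)

bins = np.array([0, 15, 30, 45, 60])             # hypothetical factor classes
cls = np.digitize(slope, bins[1:-1])             # class index 0..3 per unit

N, S = disaster.sum(), len(slope)                # disaster units, all units
iv = np.empty(len(bins) - 1)
for j in range(len(iv)):
    in_j = cls == j
    N_j, S_j = disaster[in_j].sum(), in_j.sum()
    # 0.5 smoothing avoids log(0) for classes without disaster units
    iv[j] = np.log(((N_j + 0.5) / N) / ((S_j + 0.5) / S))

print(np.round(iv, 2))                           # higher IV => more disaster-prone class
```

Summing a unit's IV scores across all 15 factors would give the composite information value the susceptibility ranking rests on.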
In this paper, we propose a new privacy-aware transmission scheduling algorithm for 6G ad hoc networks. This system enables end nodes to select the optimum time and scheme to transmit private data safely. In 6G dynamic heterogeneous infrastructures, unstable links and non-uniform hardware capabilities create critical issues regarding security and privacy. Traditional protocols are often too computationally heavy to allow 6G services to achieve their expected Quality of Service (QoS). As the transport network is built of ad hoc nodes, there is no guarantee about their trustworthiness or behavior, and transversal functionalities are delegated to the extreme nodes. However, while security can be guaranteed in extreme-to-extreme solutions, privacy cannot, as all intermediate nodes still have to handle the data packets they are transporting. Besides, traditional schemes for private anonymous ad hoc communications are vulnerable to modern intelligent attacks based on learning models. The proposed scheme fills this gap. Findings show that, when the proposed technology is used, the probability of a successful intelligent attack is reduced by up to 65% compared with ad hoc networks that have no privacy protection strategy, while congestion probability remains below 0.001%, as required by 6G services.
With the development of smart cities and smart technologies, parks, as functional units of the city, are facing smart transformation. The development of smart parks can help address challenges of technology integration within urban spaces and serve as testbeds for exploring smart city planning and governance models. Information models facilitate the effective integration of technology into space. Building Information Modeling (BIM) and City Information Modeling (CIM) have been widely used in urban construction. However, existing information models have limitations when applied to parks, so it is necessary to develop an information model suited to the park. This paper first traces the evolution of park smart transformation, reviews the global landscape of smart park development, and identifies key trends and persistent challenges. Addressing the particularities of parks, the concept of Park Information Modeling (PIM) is proposed. PIM leverages smart technologies such as artificial intelligence, digital twins, and collaborative sensing to help form a 'space-technology-system' smart structure, enabling systematic management of diverse park spaces, addressing the deficiency in park-level information models, and aiming to achieve scale articulation between BIM and CIM. Finally, through a detailed top-level design case study of the Nanjing Smart Education Park in China, this paper illustrates the translation of the PIM concept into practice, showcasing its potential to provide smart management tools for park managers and enhance services for park stakeholders, although further empirical validation is required.
Dear Editor, This letter deals with automatically constructing an OPC UA information model (IM) aimed at enhancing data interoperability among heterogeneous system components within manufacturing automation systems. Empowered by a large language model (LLM), we propose a novel multi-agent collaborative framework to streamline the end-to-end OPC UA IM modeling process. Each agent is equipped with meticulously engineered prompt templates, augmenting its capacity to execute specific tasks. We conduct modeling experiments using real textual data to demonstrate the effectiveness of the proposed method, which improves modeling efficiency and reduces the labor workload.
The management of large-scale architectural engineering projects (e.g., airports, hospitals) is plagued by information silos, cost overruns, and scheduling delays. While building information modeling (BIM) has improved 3D design coordination, its static nature limits its utility in real-time construction management and operational phases. This paper proposes a novel synergistic framework that integrates the static, deep data of BIM with the dynamic, real-time capabilities of digital twin (DT) technology. The framework establishes a closed-loop data flow from design (BIM) to construction (IoT, drones, BIM 360) to operation (DT platform). We detail the required technology stack, including IoT sensors, cloud computing, and AI-driven analytics. The application of the framework is illustrated through a simulated case study of a mega-terminal airport construction project, demonstrating potential reductions in rework of 15%, improvements in labor productivity of 10%, and enhanced predictive maintenance capabilities. This research contributes to the field of construction engineering by providing a practical model for achieving full-lifecycle digitalization and intelligent project management.
This research pioneers the integration of geographic information systems (GIS) and 3D modeling within a virtual reality (VR) framework to assess the viability and planning of a 20 MW hybrid wind-solar photovoltaic (PV) system connected to the local grid. The study focuses on Dakhla, Morocco, a region with vast untapped renewable energy potential. By leveraging GIS, we innovatively analyze geographical and environmental factors that influence optimal site selection and system design. The incorporation of VR technologies offers an unprecedented level of realism and immersion, allowing stakeholders to virtually experience the project's impact and design in a dynamic, interactive environment. This novel methodology includes extensive data collection, advanced modeling, and simulations, ensuring that the hybrid system is precisely tailored to the unique climatic and environmental conditions of Dakhla. Our analysis reveals that the region possesses a photovoltaic solar potential of approximately 2400 kWh/m^(2) per year, with an average annual wind power density of about 434 W/m^(2) at an 80-meter hub height. Productivity simulations indicate that the 20 MW hybrid system could generate approximately 60 GWh of energy per year and 1369 GWh over its 25-year lifespan. To validate these findings, we employed the System Advisor Model (SAM) software and the Global Solar Photovoltaic Atlas platform. This comprehensive and interdisciplinary approach not only provides a robust assessment of the system's feasibility but also offers valuable insights into its potential socio-economic and environmental impact.
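A quick sanity check of the reported energy figures, using only the numbers quoted in the abstract: 60 GWh/yr from a 20 MW plant implies a combined capacity factor near 34%, and 1369 GWh over 25 years implies a lower lifetime-average output than the initial year, consistent with degradation.

```python
# Reported figures from the abstract; everything else is arithmetic.
P_MW, E_year_GWh, E_life_GWh, years = 20, 60, 1369, 25

capacity_factor = E_year_GWh * 1000 / (P_MW * 8760)   # MWh / (MW * h in a year)
avg_annual_GWh = E_life_GWh / years                   # lifetime-average yearly output

print(f"capacity factor ~ {capacity_factor:.1%}")     # ~34%
print(f"lifetime average ~ {avg_annual_GWh:.1f} GWh/yr vs {E_year_GWh} GWh initially")
```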
Information extraction (IE) aims to automatically identify and extract information about specific interests from raw texts. Despite the abundance of solutions based on fine-tuning pretrained language models, IE in few-shot and zero-shot scenarios remains highly challenging due to the scarcity of training data. Large language models (LLMs), on the other hand, can generalize well to unseen tasks with few-shot demonstrations or even zero-shot instructions and have demonstrated impressive ability across a wide range of natural language understanding and generation tasks. Nevertheless, it is unclear whether such effectiveness can be replicated in IE, where the target tasks involve specialized schemas and quite abstractive entity or relation concepts. In this paper, we first examine the validity of LLMs in executing IE tasks with an established prompting strategy and further propose multiple types of augmented prompting methods, including the structured fundamental prompt (SFP), the structured interactive reasoning prompt (SIRP), and the voting-enabled structured interactive reasoning prompt (VESIRP). The experimental results demonstrate that, while direct prompting yields inferior performance, the proposed augmented prompting methods significantly improve extraction accuracy, achieving comparable or even better performance (e.g., on zero-shot FewNERD and FewNERD-INTRA) than state-of-the-art methods that require large-scale training samples. This study represents a systematic exploration of employing instruction-following LLMs for IE. It not only establishes a performance benchmark for this novel paradigm but, more importantly, validates a practical technical pathway through the proposed prompt enhancement methods, offering a viable solution for efficient IE in low-resource settings.
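The exact SFP template is not given in the abstract; the sketch below is a hypothetical structured zero-shot IE prompt in the same spirit, where the instruction wording, JSON schema, and entity types are all assumptions for illustration.

```python
import json

def build_structured_prompt(text: str, entity_types: list[str]) -> str:
    """Build a schema-constrained zero-shot extraction prompt (illustrative)."""
    # The schema placeholder tells the model exactly what shape to answer in.
    schema = {"entities": [{"type": t, "mentions": ["<span>"]} for t in entity_types]}
    return (
        "Extract entities from the text below.\n"
        f"Allowed types: {', '.join(entity_types)}.\n"
        "Answer ONLY with JSON matching this structure:\n"
        f"{json.dumps(schema, indent=2)}\n\n"
        f"Text: {text}"
    )

prompt = build_structured_prompt("Marie Curie worked in Paris.", ["person", "location"])
print(prompt)
```

Constraining the output format this way is what makes the model's answer machine-parseable, which is the practical difference between a free-form chat reply and usable IE output.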
Porosity is an important attribute for evaluating the petrophysical properties of reservoirs and has guiding significance for the exploration and development of oil and gas. Seismic inversion is a key method for comprehensively obtaining porosity. Deep learning methods provide an intelligent approach to suppressing the ambiguity of conventional inversion methods. However, under the trace-by-trace inversion strategy, there is a lack of constraints from geological structural information, resulting in poor lateral continuity of prediction results. In addition, the heterogeneity and sedimentary variability of subsurface media also lead to uncertainty in intelligent prediction. To achieve fine prediction of porosity, we consider lateral continuity and variability and propose an improved structural-modeling deep learning porosity prediction method. First, we combine well data, waveform attributes, and structural information as constraints to model geophysical parameters, constructing a high-quality training dataset with sedimentary-facies-controlled significance. Subsequently, we introduce a gated axial attention mechanism to enhance the features of the dataset and design a bidirectional closed-loop network constrained by inversion and forward processes. The constraint coefficient is adaptively adjusted according to the petrophysical relationship between porosity and impedance in the study area. We demonstrate the effectiveness of the adaptive coefficient through numerical experiments. Finally, we compare the performance of the proposed method and conventional deep learning methods using data from two study areas. The proposed method achieves better consistency with the logging porosity, demonstrating its superiority.
Processing police incident data in public security involves complex natural language processing (NLP) tasks, including information extraction. This data contains extensive entity information, such as people, locations, and events, while also involving reasoning tasks like personnel classification, relationship judgment, and implicit inference. Moreover, using models to extract information from police incident data faces a significant challenge: data scarcity, which limits the effectiveness of traditional rule-based and machine-learning methods. To address these issues, we propose TIPS. In collaboration with public security experts, we used de-identified police incident data to create templates that enable large language models (LLMs) to populate data slots and generate simulated data, enhancing data density and diversity. We then designed schemas to efficiently manage complex extraction and reasoning tasks, constructing a high-quality dataset and fine-tuning multiple open-source LLMs. Experiments showed that the fine-tuned ChatGLM-4-9B model achieved an F1 score of 87.14%, nearly 30% higher than the base model, significantly reducing error rates. Manual corrections further improved performance by 9.39%. This study demonstrates that combining large-scale pre-trained models with limited high-quality domain-specific data can greatly enhance information extraction in low-resource environments, offering a new approach for intelligent public security applications.
Background: Acquiring relevant information about procurement targets is fundamental to procuring medical devices. Although traditional Natural Language Processing (NLP) and Machine Learning (ML) methods have improved information retrieval efficiency to a certain extent, they exhibit significant limitations in adaptability and accuracy when dealing with procurement documents characterized by diverse formats and a high degree of unstructured content. The emergence of Large Language Models (LLMs) offers new possibilities for efficient procurement information processing and extraction. Methods: This study collected procurement transaction documents from public procurement websites and proposed a procurement Information Extraction (IE) method based on LLMs. Unlike traditional approaches, this study systematically explores the applicability of LLMs to both structured and unstructured entities in procurement documents, addressing the challenges posed by format variability and content complexity. Furthermore, an optimized prompt framework tailored to procurement document extraction tasks is developed to enhance the accuracy and robustness of IE. The aim is to process and extract key information from medical device procurement quickly and accurately, meeting stakeholders' demands for precision and timeliness in information retrieval. Results: Experimental results demonstrate that, compared with traditional methods, the proposed approach achieves an F1 score of 0.9698, a 4.85% improvement over the best baseline model. Moreover, both recall and precision are close to 97%, significantly outperforming other models and exhibiting exceptional overall recognition capability. Notably, further analysis reveals that the proposed method consistently maintains high performance across both structured and unstructured entities in procurement documents while balancing recall and precision effectively, demonstrating its adaptability to varying document formats. The results of ablation experiments validate the effectiveness of the proposed prompting strategy. Conclusion: This study also explores the challenges and potential improvements of the proposed method in IE tasks and provides insights into its feasibility for real-world deployment and application directions, further clarifying its adaptability and value. The method not only exhibits significant advantages in medical device procurement but also holds promise for providing new approaches to information processing and decision support in various domains.
With the rapid development of the economy, air pollution caused by industrial expansion has caused serious harm to human health and social development. Therefore, establishing an effective air pollution concentration prediction system is of great scientific and practical significance for accurate and reliable predictions. This paper proposes a combined point-interval prediction system for pollutant concentration prediction by leveraging neural networks, a meta-heuristic optimization algorithm, and fuzzy theory. Fuzzy information granulation is used in data preprocessing to transform numerical sequences into fuzzy granules for comprehensive feature extraction. The golden jackal optimization algorithm is employed in the optimization stage to fine-tune model hyperparameters. In the prediction stage, an ensemble learning method combines the training results from multiple models to obtain final point predictions, while quantile regression and kernel density estimation are used for interval predictions on the test set. Experimental results demonstrate that the combined model achieves a goodness-of-fit coefficient of determination (R^(2)) of 99.3% and a maximum improvement in mean absolute percentage error (MAPE) over the benchmark models of 12.6%. This suggests that the integrated learning system proposed in this paper can provide more accurate deterministic predictions as well as reliable uncertainty analysis compared with traditional models, offering a practical reference for air quality early warning.
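As a minimal stand-in for the interval-prediction step described above (the paper uses quantile regression and kernel density estimation; here plain empirical quantiles of validation residuals play that role, on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 500)
y = 2.0 * x + rng.normal(0, 1.5, 500)        # synthetic pollutant-like series

# "Point" model: a simple linear fit on the first 400 samples.
coef = np.polyfit(x[:400], y[:400], 1)
resid = y[:400] - np.polyval(coef, x[:400])

# Interval: 5th/95th residual quantiles give a nominal 90% prediction band.
lo, hi = np.quantile(resid, [0.05, 0.95])

pred = np.polyval(coef, x[400:])
covered = np.mean((y[400:] >= pred + lo) & (y[400:] <= pred + hi))
print(f"empirical coverage ~ {covered:.0%}")  # should sit near the nominal 90%
```

The appeal of interval output, as in the abstract, is exactly this coverage guarantee: a point forecast alone says nothing about how wrong it may be.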
In the era of big data, data-driven technologies are increasingly leveraged by industry to facilitate autonomous learning and intelligent decision-making. However, the challenge of "small samples in big data" emerges when datasets lack the comprehensive information necessary for addressing complex scenarios, which hampers adaptability. Thus, enhancing data completeness is essential. Knowledge-guided virtual sample generation transforms domain knowledge into extensive virtual datasets, thereby reducing dependence on limited real samples and enabling zero-sample fault diagnosis. This study used building air-conditioning systems as a case study. We innovatively used a large language model (LLM) to acquire domain knowledge for sample generation, significantly lowering knowledge acquisition costs and establishing a generalized framework for knowledge acquisition in engineering applications. The acquired knowledge guided the design of diffusion boundaries in mega-trend diffusion (MTD), while the Monte Carlo method was used to sample within the diffusion function to create information-rich virtual samples. Additionally, a noise-adding technique was introduced to enhance the information entropy of these samples, thereby improving the robustness of neural networks trained on them. Experimental results showed that training the diagnostic model exclusively on virtual samples achieved an accuracy of 72.80%, significantly surpassing traditional small-sample supervised learning in terms of generalization. This underscores the quality and completeness of the generated virtual samples.
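A hypothetical sketch of the MTD-plus-Monte-Carlo step mentioned above: diffusion bounds are stretched around a small real sample, then candidates are accepted with probability given by a triangular membership function. The formulas follow a common MTD variant; the data and the diffusion constant are illustrative assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(2)
real = np.array([20.1, 20.8, 21.5, 22.0, 22.6])   # small real sample (e.g. one sensor)

u_set, n = real.mean(), len(real)
n_l = np.sum(real < u_set)                        # counts left/right of the centre
n_u = np.sum(real > u_set)
var = real.var(ddof=1)
skew_l, skew_u = n_l / (n_l + n_u), n_u / (n_l + n_u)
# Diffusion bounds; 1e-20 is the conventional small membership cutoff.
half_width = np.sqrt(-2 * var / n * np.log(1e-20))
L = u_set - skew_l * half_width                   # lower diffusion bound
U = u_set + skew_u * half_width                   # upper diffusion bound

# Monte Carlo: keep candidates with probability = triangular membership value.
cand = rng.uniform(L, U, 5000)
member = np.where(cand <= u_set, (cand - L) / (u_set - L), (U - cand) / (U - u_set))
virtual = cand[rng.uniform(size=cand.size) < member]
print(f"bounds [{L:.2f}, {U:.2f}], {virtual.size} virtual samples")
```

In the paper's framework, the LLM-derived domain knowledge would shape these bounds per variable instead of the purely statistical widths used here.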
The software technology field is facing new talent demands brought by the Information Technology Application Innovation (ITAI) industry. This paper takes Shanwei Institute of Technology as an example to explore in depth the construction of a school-enterprise community education model driven by the ITAI industry. It establishes the Kirin Workshop training base to facilitate talent cultivation, integrates the ITAI Application Adaptation Center to enhance technical capabilities, cooperates with Liqi Technology to establish an industrial college for government talent training, adjusts the professional curriculum system, and arranges for students to participate in ITAI vocational skills competitions. The school-enterprise collaborative cultivation mechanism meets the talent needs of the ITAI field, with effective practical results. This paper also points out the shortcomings of the school-enterprise collaborative education model in the ITAI industry and provides optimization methods to explore new paths for industry-education integration and serve the development of regional and national ITAI industries [1].
With the continuous advancement of information technology, traditional teaching management models can no longer meet the demands of modern laboratory management. Information management, characterized by efficiency, convenience, and intelligence, provides new ideas and directions for reforming laboratory teaching management models in higher education. On this basis, this paper explores reform strategies and practical approaches for laboratory teaching management models from the perspective of information management, aiming to offer references for the modernization and intelligent development of laboratory teaching management.
The security of information transmission and processing in cyberspace is increasingly threatened by unknown vulnerabilities and backdoors. However, there is a lack of effective theory to mathematically demonstrate the security of information transmission and processing under non-random noise (or vulnerability and backdoor attack) conditions in cyberspace. This paper proposes a security model for cyberspace information transmission and processing channels based on error-correction coding theory. First, we analyze the fault tolerance and non-randomness problems of a Dynamic Heterogeneous Redundancy (DHR) structured information transmission and processing channel under non-random noise or attacks. Second, we use mathematical-statistical methods to demonstrate that, for non-random noise (or attacks) on discrete memoryless channels, there exists a DHR-structured channel and coding scheme that makes the average system error probability arbitrarily small. Finally, to construct suitable codes and heterogeneous channels, we take the Turbo code as an example and simulate the effects of different degrees of heterogeneity, redundancy, output vector length, verdict algorithm, and dynamism on the system, providing important guidance for theory and engineering practice.
As the economy grows, environmental issues are becoming increasingly severe, making the promotion of green behavior more urgent. Information dissemination and policy regulation play crucial roles in influencing and amplifying the spread of green behavior across society. To this end, a novel three-layer model in multilayer networks is proposed. In this model, the information layer describes green-information spreading, the physical contact layer depicts green-behavior propagation, and policy regulation is represented by an isolated node beneath the two layers. We then derive the green-behavior threshold for the three-layer model using the microscopic Markov chain approach. Moreover, given that some individuals are more likely to influence others or to become green nodes, and that the capacity of policy regulation is limited, an optimal scheme is presented that optimizes policy interventions to most effectively promote green behavior. Subsequently, simulations are performed to validate the preciseness of the theoretical results of the new model. They reveal that policy regulation can promote the prevalence and outbreak of green behavior, and that green behavior is more likely to spread and become prevalent in the scale-free (SF) network than in the Erdős–Rényi (ER) network. Additionally, optimal allocation is highly effective in facilitating the dissemination of green behavior. In practice, the optimal allocation strategy could prioritize interventions at critical nodes or regions, such as highly connected urban areas, where the impact of green-behavior promotion would be most significant.
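A toy sketch of the microscopic-Markov-chain style of update used in such models: each node carries a probability of being green, updated from its neighbours' probabilities, with a uniform policy term adding a baseline adoption push. The network, rates, and policy strength below are illustrative assumptions, not the paper's three-layer model.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200
A = (rng.uniform(size=(N, N)) < 0.03).astype(float)   # random contact network
A = np.triu(A, 1)
A = A + A.T                                           # symmetric, no self-loops

beta, mu, policy = 0.08, 0.05, 0.02                   # adoption, abandonment, policy push
p = np.full(N, 0.01)                                  # P(node is green) at t = 0
for _ in range(200):
    # q[i] = probability node i is NOT persuaded by any neighbour this step
    q = np.prod(1 - beta * A * p, axis=1)
    p = (1 - p) * (1 - q * (1 - policy)) + p * (1 - mu)

print(f"steady-state green fraction ~ {p.mean():.2f}")
```

Even this toy version shows the abstract's qualitative point: raising the policy term lifts the steady-state green fraction, and removing it can drop sparse networks below the spreading threshold entirely.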
With the rapid development of the internet, the dissemination of public opinion in online social networks has become increasingly complex. Existing dissemination models rarely consider group phenomena or the simultaneous spread of competing public opinion information in online social networks. This paper introduces the UHNPR information dissemination model to study the dynamic spread and interaction of positive and negative public opinion information in hypernetworks. To improve modeling accuracy, we revise the traditional assumption of constant propagation and decay rates by redefining these rates based on the factors that influence the spread of public opinion. Subsequently, we validate the effectiveness of the UHNPR model using numerical simulations and analyze the impact of factors such as the authority effect, user intimacy, information content, and information timeliness on the spread of public opinion, providing corresponding suggestions for public opinion control. Our results demonstrate that this model outperforms the SIR, SEIR, and SEIDR models in describing public opinion propagation in real social networks. Compared with complex networks, information spreads faster and more extensively in hypernetworks.
The Advanced French course is a core subject for French majors, and ideological and political education is an important component of its teaching. By restructuring the teaching content according to the educational modules of ideological and political education, we can provide a more comprehensive and systematic educational experience. Empowered by information technology, this approach broadens the dimensions of ideological and political education in the Advanced French course. Meanwhile, the learning outcomes of the "first classroom" can be transformed into results of the "second classroom" through social platforms such as WeChat public accounts, micro-video competitions, and innovation projects, achieving the effect of spreading Chinese culture and telling Chinese stories. By using diverse evaluation criteria, we continuously improve teaching and learning activities and innovate the teaching model of ideological and political education.
The field of artificial intelligence has advanced significantly in recent years, but achieving a human-like or Artificial General Intelligence (AGI) remains a theoretical challenge. One hypothesis suggests that a key issue is the formalisation of extracting meaning from information. Meaning emerges through a three-stage interpretative process, in which the spectrum of possible interpretations is collapsed into a singular outcome by a particular context. However, this approach currently lacks practical grounding. In this research, we developed a context-based model that applies interpretation principles to visual information to address this gap. The field of computer vision and object recognition has progressed substantially with artificial neural networks, but these models struggle with geometrically transformed images, such as those that are rotated or shifted, limiting their robustness in real-world applications. Various approaches have been proposed to address this problem, some of which (Hu moments, spatial transformers, capsule networks, attention and memory mechanisms) share a conceptual connection with the contextual model (CM) discussed in this study. This paper investigates whether CM principles are applicable to interpreting rotated images from the MNIST and Fashion MNIST datasets. The model was implemented in the Rust programming language. It consists of a contextual module and a convolutional neural network (CNN). The CM was trained on the rotated Mono Icons dataset, which differs significantly from the testing datasets, while the CNN module was trained on the original MNIST and Fashion MNIST datasets for interpretation recognition. As a result, the CM was able to recognise the original datasets but encountered rotated images only during testing. The findings show that the model effectively interpreted transformed images by considering them in all available contexts and restoring their original form. This provides a practical foundation for further development of the contextual hypothesis and its relation to the AGI domain.
Abstract: Slope units are delineated according to the real topography and have clear geological characteristics, making them ideal units for evaluating geological disaster susceptibility. Based on the results of automatic and manually corrected hydrological slope unit division, the Longhua District, Shenzhen City, Guangdong Province, was selected as the study area. A total of 15 influencing factors, namely fluctuation, slope, slope aspect, curvature, topographic wetness index (TWI), stream power index (SPI), topographic roughness index (TRI), annual average rainfall, distance to water system, engineering rock group, distance to fault, land use, normalized difference vegetation index (NDVI), nighttime light, and distance to road, were selected as evaluation indicators. The information value model (IV) and random points were used to select non-geological-disaster units, and the random forest model (RF) was then used to evaluate geological disaster susceptibility. The automatic slope unit and the hydrological slope unit were compared and analyzed under both the random forest and information value random forest models. The results show that the area under the curve (AUC) values of the automatic slope unit evaluation are 0.931 for the IV-RF model and 0.716 for the RF model, which are 0.6% (IV-RF model) and 1.9% (RF model) higher than those for the hydrological slope unit. Comparing the evaluation methods based on the two types of slope units, the manually corrected hydrological slope unit method is highly subjective, complicated to operate, and less accurate, whereas evaluation based on automatic slope unit division is efficient and accurate, is suitable for large-scale geological disaster evaluation, and better handles the problem of geological disaster susceptibility evaluation.
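The IV-RF pipeline described above can be sketched on synthetic data: each conditioning factor is binned and replaced by its per-bin information value (log ratio of disaster share to overall share), and the transformed features are fed to a random forest scored by AUC. All data, bin counts, and hyperparameters here are illustrative assumptions, not the paper's actual slope-unit dataset or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for slope-unit data: rows are slope units,
# columns mimic a few of the 15 conditioning factors (e.g. slope, TWI, NDVI).
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

def information_value(factor, label, bins=5):
    """Per-sample information value: ln(disaster share in bin / bin share overall),
    with add-one smoothing to avoid empty-bin log(0)."""
    edges = np.quantile(factor, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(factor, edges[1:-1]), 0, bins - 1)
    iv = np.zeros(bins)
    for b in range(bins):
        in_bin = idx == b
        p_dis = (label[in_bin].sum() + 1) / (label.sum() + bins)  # disasters in bin
        p_all = (in_bin.sum() + 1) / (len(label) + bins)          # bin size share
        iv[b] = np.log(p_dis / p_all)
    return iv[idx]

# Replace raw factors by their information values, then classify with RF.
X_iv = np.column_stack([information_value(X[:, j], y) for j in range(X.shape[1])])
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_iv, y)
auc = roc_auc_score(y, rf.predict_proba(X_iv)[:, 1])
print(round(auc, 3))
```

In the paper's setting the AUC would of course be computed on held-out slope units rather than the training set.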
Funding: Funded by the European Commission through the Ruralities project (grant agreement no. 101060876).
Abstract: In this paper, we propose a new privacy-aware transmission scheduling algorithm for 6G ad hoc networks. This system enables end nodes to select the optimal time and scheme to transmit private data safely. In dynamic heterogeneous 6G infrastructures, unstable links and non-uniform hardware capabilities create critical security and privacy issues. Traditional protocols are often too computationally heavy to allow 6G services to achieve their expected Quality of Service (QoS). As the transport network is built of ad hoc nodes, there is no guarantee about their trustworthiness or behavior, and transversal functionalities are delegated to the end nodes. However, while security can be guaranteed in end-to-end solutions, privacy cannot, as all intermediate nodes still have to handle the data packets they transport. Besides, traditional schemes for private anonymous ad hoc communications are vulnerable to modern intelligent attacks based on learning models. The proposed scheme fills this gap. Findings show that, when the proposed technology is used, the probability of a successful intelligent attack is reduced by up to 65% compared with ad hoc networks that have no privacy protection strategy, while the congestion probability remains below 0.001%, as required by 6G services.
Funding: Under the auspices of the National Natural Science Foundation of China (No. 42330510).
Abstract: With the development of smart cities and smart technologies, parks, as functional units of the city, are facing a smart transformation. The development of smart parks can help address the challenges of integrating technology within urban spaces and can serve as a testbed for exploring smart city planning and governance models. Information models facilitate the effective integration of technology into space. Building Information Modeling (BIM) and City Information Modeling (CIM) have been widely used in urban construction. However, existing information models have limitations when applied to parks, so it is necessary to develop an information model suited to them. This paper first traces the evolution of park smart transformation, reviews the global landscape of smart park development, and identifies key trends and persistent challenges. Addressing the particularities of parks, the concept of Park Information Modeling (PIM) is proposed. PIM leverages smart technologies such as artificial intelligence, digital twins, and collaborative sensing to help form a ‘space-technology-system’ smart structure, enabling systematic management of diverse park spaces, addressing the deficiency in park-level information models, and aiming to achieve scale articulation between BIM and CIM. Finally, through a detailed top-level design case study of the Nanjing Smart Education Park in China, this paper illustrates the translation of the PIM concept into practice, showcasing its potential to provide smart management tools for park managers and enhanced services for park stakeholders, although further empirical validation is required.
Funding: Supported by the Fundamental Research Funds for the Central Universities (226-2024-00004), the National Natural Science Foundation of China (U23A20326), and the Key Research and Development Program of Zhejiang Province (2025C01061).
Abstract: Dear Editor, this letter deals with automatically constructing an OPC UA information model (IM) aimed at enhancing data interoperability among heterogeneous system components within manufacturing automation systems. Empowered by large language models (LLMs), we propose a novel multi-agent collaborative framework to streamline the end-to-end OPC UA IM modeling process. Each agent is equipped with meticulously engineered prompt templates, augmenting its capacity to execute specific tasks. We conduct modeling experiments using real textual data to demonstrate the effectiveness of the proposed method, improving modeling efficiency and reducing the labor workload.
Abstract: The management of large-scale architectural engineering projects (e.g., airports, hospitals) is plagued by information silos, cost overruns, and scheduling delays. While building information modeling (BIM) has improved 3D design coordination, its static nature limits its utility in real-time construction management and operational phases. This paper proposes a novel synergistic framework that integrates the static, deep data of BIM with the dynamic, real-time capabilities of digital twin (DT) technology. The framework establishes a closed-loop data flow from design (BIM) to construction (IoT, drones, BIM 360) to operation (DT platform). We detail the required technology stack, including IoT sensors, cloud computing, and AI-driven analytics. The application of this framework is illustrated through a simulated case study of a mega-terminal airport construction project, demonstrating potential reductions in rework of 15%, improvement in labor productivity of 10%, and enhanced predictive maintenance capabilities. This research contributes to the field of construction engineering by providing a practical model for achieving full-lifecycle digitalization and intelligent project management.
Abstract: This research pioneers the integration of geographic information systems (GIS) and 3D modeling within a virtual reality (VR) framework to assess the viability and planning of a 20 MW hybrid wind-solar-photovoltaic (PV) system connected to the local grid. The study focuses on Dakhla, Morocco, a region with vast untapped renewable energy potential. By leveraging GIS, we innovatively analyze the geographical and environmental factors that influence optimal site selection and system design. The incorporation of VR technologies offers an unprecedented level of realism and immersion, allowing stakeholders to virtually experience the project's impact and design in a dynamic, interactive environment. This methodology includes extensive data collection, advanced modeling, and simulations, ensuring that the hybrid system is precisely tailored to the unique climatic and environmental conditions of Dakhla. Our analysis reveals that the region possesses a photovoltaic solar potential of approximately 2400 kWh/m^(2) per year, with an average annual wind power density of about 434 W/m^(2) at an 80-meter hub height. Productivity simulations indicate that the 20 MW hybrid system could generate approximately 60 GWh of energy per year and 1369 GWh over its 25-year lifespan. To validate these findings, we employed the System Advisor Model (SAM) software and the Global Solar Photovoltaic Atlas platform. This comprehensive, interdisciplinary approach not only provides a robust assessment of the system's feasibility but also offers valuable insights into its potential socio-economic and environmental impact.
基金supported by the National Natural Science Foundation of China(62222212).
Abstract: Information extraction (IE) aims to automatically identify and extract information of specific interest from raw texts. Despite the abundance of solutions based on fine-tuning pretrained language models, IE in few-shot and zero-shot scenarios remains highly challenging due to the scarcity of training data. Large language models (LLMs), on the other hand, generalize well to unseen tasks with few-shot demonstrations or even zero-shot instructions and have demonstrated impressive ability across a wide range of natural language understanding and generation tasks. Nevertheless, it is unclear whether such effectiveness can be replicated in IE, where the target tasks involve specialized schemas and quite abstract entity and relation concepts. In this paper, we first examine the validity of LLMs in executing IE tasks with an established prompting strategy and further propose multiple augmented prompting methods, including the structured fundamental prompt (SFP), the structured interactive reasoning prompt (SIRP), and the voting-enabled structured interactive reasoning prompt (VESIRP). The experimental results demonstrate that while direct prompting yields inferior performance, the proposed augmented prompting methods significantly improve extraction accuracy, achieving comparable or even better performance (e.g., zero-shot FewNERD, FewNERD-INTRA) than state-of-the-art methods that require large-scale training samples. This study represents a systematic exploration of employing instruction-following LLMs for IE. It not only establishes a performance benchmark for this novel paradigm but, more importantly, validates a practical technical pathway through the proposed prompt enhancement methods, offering a viable solution for efficient IE in low-resource settings.
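A schema-grounded extraction prompt of the kind the SFP-style methods build on can be assembled mechanically from an entity schema. The schema, field names, and wording below are purely illustrative assumptions, not the paper's actual templates.

```python
# Hypothetical schema: entity type -> short description for the model.
SCHEMA = {"person": "names of people", "location": "place names"}

def build_sfp_prompt(text, schema=SCHEMA):
    """Assemble a structured zero-shot extraction prompt: task instruction,
    explicit output format, the entity schema, then the input text."""
    lines = ["Extract entities from the text below.",
             "Return one line per entity as <type>: <span>.",
             "Entity types:"]
    lines += [f"- {name}: {desc}" for name, desc in schema.items()]
    lines += ["Text:", text]
    return "\n".join(lines)

prompt = build_sfp_prompt("Alice flew to Paris.")
print(prompt)
```

The interactive-reasoning variants (SIRP, VESIRP) would extend this with multi-turn follow-ups and voting over several sampled completions.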
基金the support of Research Program of Fine Exploration and Surrounding Rock Classification Technology for Deep Buried Long Tunnels Driven by Horizontal Directional Drilling and Magnetotelluric Methods Based on Deep Learning under Grant E202408010the Sichuan Science and Technology Program under Grant 2024NSFSC1984 and Grant 2024NSFSC1990。
Abstract: Porosity is an important attribute for evaluating the petrophysical properties of reservoirs and has guiding significance for the exploration and development of oil and gas. Seismic inversion is a key method for comprehensively obtaining porosity. Deep learning methods provide an intelligent approach to suppressing the ambiguity of conventional inversion methods. However, under a trace-by-trace inversion strategy, the lack of constraints from geological structural information results in poor lateral continuity of the predictions. In addition, the heterogeneity and sedimentary variability of subsurface media lead to uncertainty in intelligent prediction. To achieve fine-grained porosity prediction, we consider lateral continuity and variability and propose an improved structural-modeling deep learning porosity prediction method. First, we combine well data, waveform attributes, and structural information as constraints to model geophysical parameters, constructing a high-quality training dataset with sedimentary facies-controlled significance. Subsequently, we introduce a gated axial attention mechanism to enhance the features of the dataset and design a bidirectional closed-loop network constrained by both the inversion and forward processes. The constraint coefficient is adaptively adjusted according to the petrophysical relationship between porosity and impedance in the study area. We demonstrate the effectiveness of the adaptive coefficient through numerical experiments. Finally, we compare the performance of the proposed method and conventional deep learning methods using data from two study areas. The proposed method achieves better consistency with the logging porosity, demonstrating its superiority.
Abstract: Processing police incident data in public security involves complex natural language processing (NLP) tasks, including information extraction. This data contains extensive entity information, such as people, locations, and events, while also involving reasoning tasks like personnel classification, relationship judgment, and implicit inference. Moreover, using models to extract information from police incident data faces a significant challenge: data scarcity, which limits the effectiveness of traditional rule-based and machine-learning methods. To address these challenges, we propose TIPS. In collaboration with public security experts, we used de-identified police incident data to create templates that enable large language models (LLMs) to populate data slots and generate simulated data, enhancing data density and diversity. We then designed schemas to efficiently manage complex extraction and reasoning tasks, constructing a high-quality dataset and fine-tuning multiple open-source LLMs. Experiments showed that the fine-tuned ChatGLM-4-9B model achieved an F1 score of 87.14%, nearly 30% higher than the base model, significantly reducing error rates. Manual corrections further improved performance by 9.39%. This study demonstrates that combining large-scale pre-trained models with limited high-quality domain-specific data can greatly enhance information extraction in low-resource environments, offering a new approach for intelligent public security applications.
Abstract: Background: Acquiring relevant information about procurement targets is fundamental to procuring medical devices. Although traditional Natural Language Processing (NLP) and Machine Learning (ML) methods have improved information retrieval efficiency to a certain extent, they exhibit significant limitations in adaptability and accuracy when dealing with procurement documents characterized by diverse formats and a high degree of unstructured content. The emergence of Large Language Models (LLMs) offers new possibilities for efficient procurement information processing and extraction. Methods: This study collected procurement transaction documents from public procurement websites and proposes a procurement Information Extraction (IE) method based on LLMs. Unlike traditional approaches, this study systematically explores the applicability of LLMs to both structured and unstructured entities in procurement documents, addressing the challenges posed by format variability and content complexity. Furthermore, an optimized prompt framework tailored for procurement document extraction tasks is developed to enhance the accuracy and robustness of IE. The aim is to process and extract key information from medical device procurement quickly and accurately, meeting stakeholders' demands for precision and timeliness in information retrieval. Results: Experimental results demonstrate that, compared to traditional methods, the proposed approach achieves an F1 score of 0.9698, representing a 4.85% improvement over the best baseline model. Moreover, both recall and precision are close to 97%, significantly outperforming other models and exhibiting exceptional overall recognition capabilities. Notably, further analysis reveals that the proposed method consistently maintains high performance across both structured and unstructured entities while balancing recall and precision effectively, demonstrating its adaptability to varying document formats. The results of ablation experiments validate the effectiveness of the proposed prompting strategy. Conclusion: This study also explores the challenges and potential improvements of the proposed method in IE tasks and provides insights into its feasibility for real-world deployment and application directions, further clarifying its adaptability and value. This method not only exhibits significant advantages in medical device procurement but also holds promise for providing new approaches to information processing and decision support in other domains.
Funding: Supported by the General Scientific Research Funding of the Science and Technology Development Fund (FDCT) in Macao (No. 0150/2022/A) and the Faculty Research Grants of Macao University of Science and Technology (No. FRG-22-074-FIE).
Abstract: With the rapid development of the economy, air pollution caused by industrial expansion has caused serious harm to human health and social development. Establishing an effective air pollution concentration prediction system is therefore of great scientific and practical significance for accurate and reliable forecasts. This paper proposes a combined point-interval prediction system for pollutant concentration by leveraging neural networks, a meta-heuristic optimization algorithm, and fuzzy theory. Fuzzy information granulation is used in data preprocessing to transform numerical sequences into fuzzy granules for comprehensive feature extraction. The golden jackal optimization algorithm is employed in the optimization stage to fine-tune model hyperparameters. In the prediction stage, an ensemble learning method combines the training results from multiple models to obtain final point predictions, while quantile regression and kernel density estimation are used for interval predictions on the test set. Experimental results demonstrate that the combined model achieves a high coefficient of determination (R^(2)) of 99.3% and a maximum mean absolute percentage error (MAPE) gap of 12.6% relative to the benchmark models. This suggests that the proposed integrated learning system can provide more accurate deterministic predictions as well as reliable uncertainty analysis compared with traditional models, offering a practical reference for air quality early warning.
Funding: Supported by the National Natural Science Foundation of China (No. 62306281), the Natural Science Foundation of Zhejiang Province (Nos. LQ23E060006 and LTGG24E050005), and the Key Research Plan of Jiaxing City (No. 2024BZ20016).
Abstract: In the era of big data, data-driven technologies are increasingly leveraged by industry to facilitate autonomous learning and intelligent decision-making. However, the challenge of “small samples in big data” emerges when datasets lack the comprehensive information necessary for addressing complex scenarios, which hampers adaptability. Thus, enhancing data completeness is essential. Knowledge-guided virtual sample generation transforms domain knowledge into extensive virtual datasets, thereby reducing dependence on limited real samples and enabling zero-sample fault diagnosis. This study used building air conditioning systems as a case study. We innovatively used a large language model (LLM) to acquire domain knowledge for sample generation, significantly lowering knowledge acquisition costs and establishing a generalized framework for knowledge acquisition in engineering applications. The acquired knowledge guided the design of diffusion boundaries in mega-trend diffusion (MTD), while the Monte Carlo method was used to sample within the diffusion function to create information-rich virtual samples. Additionally, a noise-adding technique was introduced to increase the information entropy of these samples, thereby improving the robustness of neural networks trained with them. Experimental results showed that training the diagnostic model exclusively with virtual samples achieved an accuracy of 72.80%, significantly surpassing traditional small-sample supervised learning in terms of generalization. This underscores the quality and completeness of the generated virtual samples.
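The MTD-plus-Monte-Carlo step can be sketched for a single feature: a common MTD formulation widens the observed range by skewness-weighted diffusion bounds around the sample mean, then draws virtual samples from a triangular membership function over that widened domain. The exact formulation, constants, and data below are assumptions for illustration; the paper's knowledge-guided boundaries would replace the purely statistical ones.

```python
import numpy as np

rng = np.random.default_rng(2)

def mega_trend_diffusion(x, n_virtual=200, eps=1e-20):
    """Widen the sample domain with skewness-weighted diffusion bounds,
    then Monte Carlo sample a triangular membership function over it."""
    x = np.asarray(x, dtype=float)
    center = x.mean()
    n_l = max(int(np.sum(x < center)), 1)   # points left of the mean
    n_u = max(int(np.sum(x >= center)), 1)  # points right of the mean
    var = x.var(ddof=1)
    skew_l = n_l / (n_l + n_u)
    skew_u = n_u / (n_l + n_u)
    lower = center - skew_l * np.sqrt(-2 * (var / n_l) * np.log(eps))
    upper = center + skew_u * np.sqrt(-2 * (var / n_u) * np.log(eps))
    return rng.triangular(lower, center, upper, size=n_virtual)

real = rng.normal(loc=5.0, scale=0.5, size=8)  # tiny "real" sample
virtual = mega_trend_diffusion(real)
print(f"virtual range: {virtual.min():.2f} .. {virtual.max():.2f}")
```

Repeating this per feature (and adding the paper's noise step) yields the information-rich virtual training set used for zero-sample diagnosis.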
Funding: Supported by the Foundation of Shanwei Institute of Technology (swjy23-008).
Abstract: The software technology field is facing new talent demands brought by the Information Technology Application Innovation (ITAI) industry. This paper takes Shanwei Institute of Technology as an example to explore in depth the construction of a school-enterprise community education model driven by the ITAI industry. The institute establishes the Kirin Workshop training base to facilitate talent cultivation, integrates the ITAI Application Adaptation Center to enhance technical capabilities, cooperates with Liqi Technology to establish an industrial college for government talent training, adjusts the professional curriculum system, and arranges for students to participate in ITAI vocational skills competitions. The school-enterprise collaborative cultivation mechanism meets the talent needs of the ITAI field, with effective practical results. This paper also points out the shortcomings of the school-enterprise collaborative education model in the ITAI industry and provides optimization methods to explore new paths for industry-education integration and to serve the development of regional and national ITAI industries.
Abstract: With the continuous advancement of information technology, traditional teaching management models can no longer meet the demands of modern laboratory management. Information-based management, characterized by efficiency, convenience, and intelligence, provides new ideas and directions for reforming laboratory teaching management models in higher education. On this basis, this paper explores reform strategies and practical approaches for laboratory teaching management models from the perspective of information-based management, aiming to offer references for enhancing the modernization and intelligence of laboratory teaching management.
Funding: Supported by the National Key R&D Program of China for Young Scientists: Cyberspace Endogenous Security Mechanisms and Evaluation Methods (No. 2022YFB3102800).
Abstract: The security of information transmission and processing in cyberspace is increasingly threatened by unknown vulnerabilities and backdoors. However, there is a lack of effective theory to mathematically demonstrate the security of information transmission and processing under non-random noise (or vulnerability/backdoor attack) conditions. This paper proposes a security model for cyberspace information transmission and processing channels based on error correction coding theory. First, we analyze the fault tolerance and non-randomness problems of a Dynamic Heterogeneous Redundancy (DHR)-structured information transmission and processing channel under non-random noise or attacks. Second, we use a mathematical statistical method to demonstrate that, for non-random noise (or attacks) on discrete memoryless channels, there exists a DHR-structured channel and coding scheme that makes the average system error probability arbitrarily small. Finally, to construct suitable codes and heterogeneous channels, we take the Turbo code as an example and simulate the effects of different degrees of heterogeneity, redundancy, output vector length, verdict algorithm, and dynamism on the system, providing important guidance for theory and engineering practice.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 62371253) and the Postgraduate Research and Practice Innovation Program of Jiangsu Province, China (Grant No. KYCX24_1179).
Abstract: As the economy grows, environmental issues are becoming increasingly severe, making the promotion of green behavior more urgent. Information dissemination and policy regulation play crucial roles in influencing and amplifying the spread of green behavior across society. To this end, a novel three-layer model in multilayer networks is proposed. In this model, the information layer describes green information spreading, the physical contact layer depicts green behavior propagation, and policy regulation is represented by an isolated node beneath the two layers. We then derive the green behavior threshold for the three-layer model using the microscopic Markov chain approach. Moreover, given that some individuals are more likely to influence others or become green nodes, and given the limited capacity of policy regulation, an optimal scheme is presented that optimizes policy interventions to most effectively promote green behavior. Simulations are then performed to validate the precision of the new model and its theoretical results. They reveal that policy regulation can promote the prevalence and outbreak of green behavior, and that green behavior is more likely to spread and become prevalent in a scale-free (SF) network than in an Erdos-Renyi (ER) network. Additionally, the optimal allocation is highly successful in facilitating the dissemination of green behavior. In practice, the optimal allocation strategy could prioritize interventions at critical nodes or regions, such as highly connected urban areas, where the impact of green behavior promotion would be most significant.
Funding: Supported by the Yunnan High-tech Industry Development Project (Grant No. 201606), the Yunnan Provincial Major Science and Technology Special Plan Projects (Grant Nos. 202103AA080015 and 202002AD080001-5), the Yunnan Basic Research Project (Grant No. 202001AS070014), and the Talents and Platform Program of Science and Technology of Yunnan (Grant No. 202105AC160018).
Abstract: With the rapid development of the internet, the dissemination of public opinion in online social networks has become increasingly complex. Existing dissemination models rarely consider group phenomena and the simultaneous spread of competing public opinion information in online social networks. This paper introduces the UHNPR information dissemination model to study the dynamic spread and interaction of positive and negative public opinion information in hypernetworks. To improve the accuracy of information dissemination modeling, we revise the traditional assumption of constant propagation and decay rates by redefining these rates based on factors that influence the spread of public opinion. Subsequently, we validate the effectiveness of the UHNPR model through numerical simulations and analyze the impact of factors such as the authority effect, user intimacy, information content, and information timeliness on the spread of public opinion, providing corresponding suggestions for public opinion control. Our results demonstrate that this model outperforms the SIR, SEIR, and SEIDR models in describing public opinion propagation in real social networks. Compared with complex networks, information spreads faster and more extensively in hypernetworks.
Funding: 2023 Xi'an Fanyi University-Level Education and Teaching Reform Research Project "Research on an Information Technology-Based Innovative Model of Ideological and Political Education in the Advanced French Course" (J23B17).
Abstract: The Advanced French course is a core subject of the French major, and ideological and political education is an important component of its teaching. By restructuring the teaching content according to the educational modules of ideological and political education, we can provide a more comprehensive and systematic educational experience. Empowered by information technology, this approach broadens the dimensions of ideological and political education in the Advanced French course. Meanwhile, learning outcomes from the “first classroom” can be transformed into results in the “second classroom” through social platforms such as WeChat public accounts, micro-video competitions, and innovation projects, achieving the effect of spreading Chinese culture and telling Chinese stories. By using diverse evaluation criteria, we continuously improve teaching and learning activities and innovate the teaching model of ideological and political education.
Abstract: The field of artificial intelligence has advanced significantly in recent years, but achieving human-like or Artificial General Intelligence (AGI) remains a theoretical challenge. One hypothesis suggests that a key issue is the formalisation of extracting meaning from information. Meaning emerges through a three-stage interpretative process, in which the spectrum of possible interpretations is collapsed into a singular outcome by a particular context. However, this approach currently lacks practical grounding. In this research, we developed a context-based model, which applies interpretation principles to visual information, to address this gap. The field of computer vision and object recognition has progressed substantially with artificial neural networks, but these models struggle with geometrically transformed images, such as those that are rotated or shifted, limiting their robustness in real-world applications. Various approaches have been proposed to address this problem. Some of them (Hu moments, spatial transformers, capsule networks, attention and memory mechanisms) share a conceptual connection with the contextual model (CM) discussed in this study. This paper investigates whether CM principles are applicable to interpreting rotated images from the MNIST and Fashion MNIST datasets. The model was implemented in the Rust programming language. It consists of a contextual module and a convolutional neural network (CNN). The CM was trained on the rotated Mono Icons dataset, which differs significantly from the testing datasets. The CNN module was trained on the original MNIST and Fashion MNIST datasets for interpretation recognition. As a result, the CM was able to recognise the original datasets even though it encountered rotated images only during testing. The findings show that the model effectively interpreted transformed images by considering them in all available contexts and restoring their original form. This provides a practical foundation for further development of the contextual hypothesis and its relation to the AGI domain.
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (RS-2020-NR049579).
Abstract: High-dimensional data causes difficulties in machine learning due to high time consumption and large memory requirements. In particular, in a multi-label environment, the complexity grows with the number of labels. Moreover, an optimization problem that fully considers all dependencies between features and labels is difficult to solve. In this study, we propose a novel regression-based multi-label feature selection method that integrates mutual information to better exploit the underlying data structure. By incorporating mutual information into the regression formulation, the model captures not only linear relationships but also complex non-linear dependencies. The proposed objective function simultaneously considers three types of relationships: (1) feature redundancy, (2) feature-label relevance, and (3) inter-label dependency. These three quantities are computed using mutual information, allowing the proposed formulation to capture nonlinear dependencies among variables. These three types of relationships are key factors in multi-label feature selection, and our method expresses them within a unified formulation, enabling efficient optimization while simultaneously accounting for all of them. To efficiently solve the proposed optimization problem under non-negativity constraints, we develop a gradient-based optimization algorithm with fast convergence. Experimental results on seven multi-label datasets show that the proposed method outperforms existing multi-label feature selection techniques.
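The feature-label relevance term of the objective above can be illustrated on synthetic multi-label data: sum the mutual information between each feature and every label, then rank features by that total. This is only the relevance component; the paper's unified objective additionally penalizes feature redundancy and models inter-label dependency, and the dataset sizes here are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.feature_selection import mutual_info_classif

# Synthetic multi-label problem: 300 samples, 20 features, binary label matrix Y.
X, Y = make_multilabel_classification(n_samples=300, n_features=20,
                                      n_labels=3, random_state=0)

# Feature-label relevance: MI of each feature with each label, summed over labels.
relevance = np.sum(
    [mutual_info_classif(X, Y[:, k], random_state=0) for k in range(Y.shape[1])],
    axis=0)

# Greedy relevance-only filter: keep the five highest-scoring features.
top5 = np.argsort(relevance)[::-1][:5]
print(sorted(top5.tolist()))
```

A redundancy-aware variant would subtract pairwise feature-feature MI from each candidate's score before ranking, which is the trade-off the paper's unified formulation optimizes jointly.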