In the context of the “Two New” initiatives, high school mathematics instruction still grapples with three interlocking problems: knowledge fragmentation, limited cultivation of higher-order thinking, and weak alignment among teaching, learning, and assessment. To counter these challenges, we propose an Inquiry-Construction Double-Helix model that uses a domain-specific knowledge graph as its cognitive spine. The model interweaves two mutually reinforcing strands, student-driven inquiry and systematic knowledge construction, into a double-helix trajectory analogous to DNA replication. The Inquiry Strand is launched by authentic, situation-based tasks that shepherd students through the complete cycle: question → hypothesis → verification → reflection. The Construction Strand simultaneously externalizes, restructures, and internalizes core disciplinary concepts via visual, hierarchical knowledge graphs. Within the flow of a lesson, the two strands alternately dominate and scaffold each other, securing the co-development of conceptual understanding, procedural fluency, and mathematical literacy. Empirical evidence demonstrates that this model significantly enhances students’ systematic knowledge integration, problem-solving transfer ability, and core mathematical competencies, offering a replicable, operational teaching paradigm and a practical pathway for deepening high school mathematics classroom reform.
Based on a large language model, this article applies prompt engineering and agent-based reasoning to perform automatic knowledge extraction, supported by Chinese accounting standards, and thereby constructs the corresponding knowledge graph. By extracting accounting entities and their relationships at the schema layer, it provides a data layer for fine-tuning and optimizing the large model. The study finds that, through reasonable application of the language model, knowledge tuples can be effectively extracted from massive volumes of financial data and a complete accounting knowledge graph can be constructed.
Recognizing essential proteins within bacteriophages is fundamental to uncovering their replication and survival mechanisms and contributes to advances in phage-based antibacterial therapies. Despite notable progress, existing computational techniques struggle to represent the interplay between sequence-derived and structure-dependent protein features. To overcome this limitation, we introduce GLM-EP, a unified framework that fuses protein language models with equivariant graph neural networks. By merging semantic embeddings extracted from amino acid sequences with geometry-aware graph representations, GLM-EP enables an in-depth depiction of phage proteins and enhances essential protein identification. Evaluation on diverse benchmark datasets confirms that GLM-EP surpasses conventional sequence-based and independent deep-learning methods, yielding higher F1 and AUROC outcomes. Component-wise analysis demonstrates that GCNII, EGNN, and the gated multi-head attention mechanism function in a complementary manner to encode complex molecular attributes. In summary, GLM-EP serves as a robust and efficient tool for bacteriophage genomic analysis and provides valuable methodological perspectives for the discovery of antibiotic-resistance therapeutic targets. The corresponding code repository is available at: https://github.com/MiJia-ID/GLM-EP (accessed on 01 November 2025).
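The sequence-structure fusion at the heart of such a framework can be sketched as follows (a minimal NumPy toy: one mean-aggregation propagation step stands in for the GCNII/EGNN stack, a per-node sigmoid gate for the gated attention, and all dimensions and weights are illustrative, not taken from GLM-EP):

```python
import numpy as np

def graph_conv(features, adj):
    # One GCN-style propagation step: average each node with its neighbours.
    deg = adj.sum(axis=1, keepdims=True) + 1.0          # +1 for the self-loop
    return (features + adj @ features) / deg

def fuse_protein_views(seq_emb, adj, w_gate):
    """Gate between sequence-derived and structure-aware node features."""
    struct_emb = graph_conv(seq_emb, adj)               # geometry-aware view
    gate = 1.0 / (1.0 + np.exp(-(seq_emb @ w_gate)))    # sigmoid gate per node/dim
    return gate * seq_emb + (1.0 - gate) * struct_emb

rng = np.random.default_rng(0)
seq_emb = rng.normal(size=(5, 8))       # 5 residues, 8-dim language-model embeddings
adj = np.zeros((5, 5))
adj[0, 1] = adj[1, 0] = 1.0             # one residue-contact edge
fused = fuse_protein_views(seq_emb, adj, rng.normal(size=(8, 8)))
print(fused.shape)   # (5, 8)
```

The gate lets the model interpolate per node between the two views, which is the basic idea the component-wise analysis above attributes to the gated attention mechanism.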
Objectives: Chemotherapy-induced nausea and vomiting (CINV) is a common adverse effect among breast cancer patients, significantly affecting quality of life. Existing evidence on the prevention, assessment, and management of this condition is fragmented and inconsistent. This study constructed a CINV knowledge graph using a large language model (LLM) to integrate nursing and medical evidence, thereby supporting systematic clinical decision-making. Methods: A top-down approach was adopted. 1) Knowledge base preparation: Nine databases and eight guideline repositories were searched up to October 2024 to include guidelines, evidence summaries, expert consensuses, and systematic reviews screened by two researchers. 2) Schema design: Referring to the Unified Medical Language System, Systematized Nomenclature of Medicine-Clinical Terms, and the Nursing Intervention Classification, entity and relation types were defined to build the ontology schema. 3) LLM-based extraction and integration: Using the Qwen model under the CRISPE framework, named entity recognition, relation extraction, disambiguation, and fusion were conducted to generate triples and visualize them in Neo4j. Four expert rounds ensured semantic and logical consistency. Model performance was evaluated using precision, recall, F1-score, and 95% confidence interval (95% CI) in Python 3.11. Results: A total of 47 studies were included (18 guidelines, two expert consensuses, two evidence summaries, and 25 systematic reviews). The Qwen model extracted 273 entities and 289 relations; after expert validation, 238 entities and 242 relations were retained, forming 244 triples. The ontology comprised nine entity types and eight relation types. The F1-scores for named entity recognition and relation extraction were 82.97% (95% CI: 0.820, 0.839) and 85.54% (95% CI: 0.844, 0.867), respectively. The average node degree was 2.03, with no isolated nodes. Conclusion: The LLM-based CINV knowledge graph achieved structured integration of nursing and medical evidence, offering a novel, data-driven tool to support clinical nursing decision-making and advance intelligent healthcare.
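The final loading step of such a pipeline, turning expert-validated triples into a Neo4j graph, can be sketched as serializing each triple into a Cypher MERGE statement (a minimal sketch; the example entities and relation names below are hypothetical illustrations, not drawn from the study's ontology):

```python
# Hypothetical triples of the kind an LLM extraction step might emit.
triples = [
    ("Ondansetron", "PREVENTS", "Acute CINV"),
    ("Acute CINV", "ASSESSED_BY", "MASCC Antiemesis Tool"),
]

def to_cypher(subj, rel, obj):
    """Render one (subject, relation, object) triple as an idempotent Cypher MERGE."""
    return (f"MERGE (a:Entity {{name: '{subj}'}}) "
            f"MERGE (b:Entity {{name: '{obj}'}}) "
            f"MERGE (a)-[:{rel}]->(b)")

statements = [to_cypher(*t) for t in triples]
print(statements[0])
```

MERGE (rather than CREATE) keeps the graph free of duplicate nodes when the same entity appears in many triples, which matters for the node-degree statistics reported above.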
Drug repurposing offers a promising alternative to traditional drug development and significantly reduces costs and timelines by identifying new therapeutic uses for existing drugs. However, current approaches often rely on limited data sources and simplistic hypotheses, which restrict their ability to capture the multi-faceted nature of biological systems. This study introduces adaptive multi-view learning (AMVL), a novel methodology that integrates chemical-induced transcriptional profiles (CTPs), knowledge graph (KG) embeddings, and large language model (LLM) representations to enhance drug repurposing predictions. AMVL incorporates an innovative similarity matrix expansion strategy and leverages multi-view learning (MVL), matrix factorization, and ensemble optimization techniques to integrate heterogeneous multi-source data. Comprehensive evaluations on benchmark datasets (Fdataset, Cdataset, and Ydataset) and the large-scale iDrug dataset demonstrate that AMVL outperforms state-of-the-art (SOTA) methods, achieving superior accuracy in predicting drug-disease associations across multiple metrics. Literature-based validation further confirmed the model's predictive capabilities, with seven of the top ten predictions corroborated by post-2011 evidence. To promote transparency and reproducibility, all data and code used in this study were open-sourced, providing resources for processing CTPs, KG, and LLM-based similarity calculations, along with the complete AMVL algorithm and benchmarking procedures. By unifying diverse data modalities, AMVL offers a robust and scalable solution for accelerating drug discovery, fostering advancements in translational medicine and integrating multi-omics data. We aim to inspire further innovations in multi-source data integration and to support the development of more precise and efficient strategies for advancing drug discovery and translational medicine.
With the development of the Semantic Web, the number of ontologies grows exponentially and the semantic relationships between ontologies become increasingly complex, so understanding the true semantics of specific terms or concepts in an ontology is crucial for the matching task. At present, the main challenges facing ontology matching based on representation learning are how to improve the embedding quality of ontology knowledge and how to integrate multiple ontology features efficiently. We therefore propose an Ontology Matching Method Based on the Gated Graph Attention Model (OM-GGAT). First, the semantic knowledge related to concepts in the ontology is encoded into vectors using the OWL2Vec* method, and the path information from the root node to each concept is embedded to better capture the true meaning of the concept itself and the relationships between concepts. Second, the ontology is transformed into the corresponding graph structure according to its semantic relations. Then, when extracting features of the ontology graph nodes, different attention weights are assigned to each node adjacent to the central concept with the help of the attention mechanism. Finally, gated networks are designed to further fuse the semantic and structural embedding representations efficiently. To verify the effectiveness of the proposed method, comparative matching experiments were carried out on public datasets. The results show that the OM-GGAT model can effectively improve the efficiency of ontology matching.
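The attention-weighted neighbour aggregation and the final gated fusion of semantic and structural views can be sketched as follows (a NumPy toy; the dot-product attention, scalar gate, and all dimensions are illustrative simplifications of OM-GGAT, not its actual parameterization):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend_neighbours(center, neighbours):
    """Weight each neighbour of a central concept by attention before aggregating."""
    scores = neighbours @ center          # dot-product attention scores
    weights = softmax(scores)
    return weights @ neighbours           # attention-weighted structural summary

def gated_fuse(semantic, structural, w_g):
    """Gate between the semantic (OWL2Vec*-style) and structural views."""
    g = 1.0 / (1.0 + np.exp(-w_g @ np.concatenate([semantic, structural])))
    return g * semantic + (1.0 - g) * structural

rng = np.random.default_rng(1)
center = rng.normal(size=4)               # semantic embedding of the concept
neighbours = rng.normal(size=(3, 4))      # embeddings of 3 adjacent concepts
structural = attend_neighbours(center, neighbours)
fused = gated_fuse(center, structural, rng.normal(size=(8,)))
print(fused.shape)   # (4,)
```

The learned gate decides, per concept, how much the final representation should trust the textual semantics versus the graph neighbourhood.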
As large language models (LLMs) continue to demonstrate their potential in handling complex tasks, their value in knowledge-intensive industrial scenarios is becoming increasingly evident. Fault diagnosis, a critical domain in the industrial sector, has long faced the dual challenges of managing vast amounts of experiential knowledge and improving human-machine collaboration efficiency. Traditional fault diagnosis systems, which are primarily based on expert systems, suffer from three major limitations: (1) ineffective organization of fault diagnosis knowledge, (2) lack of adaptability between static knowledge frameworks and dynamic engineering environments, and (3) difficulties in integrating expert knowledge with real-time data streams. These systemic shortcomings restrict the ability of conventional approaches to handle uncertainty. In this study, we proposed an intelligent computer numerical control (CNC) fault diagnosis system integrating LLMs with a knowledge graph (KG). First, we constructed a comprehensive KG that consolidated multi-source data for structured representation. Second, we designed a retrieval-augmented generation (RAG) framework leveraging the KG to support multi-turn interactive fault diagnosis while incorporating real-time engineering data into the decision-making process. Finally, we introduced a learning mechanism to facilitate dynamic knowledge updates. The experimental results demonstrated that our system significantly improved fault diagnosis accuracy, outperforming engineers with two years of professional experience on our constructed benchmark datasets. By integrating LLMs and KG, our framework surpassed the limitations of traditional expert systems rooted in symbolic reasoning, offering a novel approach to addressing the cognitive paradox of unstructured knowledge modeling and dynamic environment adaptation in industrial settings.
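The KG-grounded retrieval step of such a RAG loop can be sketched as looking up triples relevant to the fault query and splicing them into the prompt (hypothetical triples and prompt template; the real system performs multi-turn retrieval over a much richer CNC fault knowledge graph):

```python
# Toy fault-diagnosis knowledge graph as (subject, relation, object) triples.
kg = [
    ("spindle", "symptom_of", "abnormal vibration"),
    ("spindle", "checked_by", "bearing inspection"),
    ("coolant pump", "symptom_of", "overheating"),
]

def retrieve(query):
    """Return KG triples whose subject appears in the query text."""
    return [t for t in kg if t[0] in query.lower()]

def build_prompt(query):
    """Assemble a retrieval-augmented prompt from the matched triples."""
    facts = "; ".join(f"{s} {r} {o}" for s, r, o in retrieve(query))
    return f"Known facts: {facts}\nQuestion: {query}\nDiagnose the fault."

prompt = build_prompt("Spindle shows abnormal vibration, what next?")
print("bearing inspection" in prompt)   # True
```

Grounding the LLM's answer in retrieved triples is what lets the system cite structured expert knowledge instead of relying on the model's parametric memory alone.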
Under the paradigm of Industry 5.0, intelligent manufacturing transcends mere efficiency enhancement by emphasizing human-machine collaboration, where human expertise plays a central role in assembly processes. Despite advancements in intelligent and digital technologies, assembly process design still relies heavily on manual knowledge reuse, which causes inefficiencies and inconsistent quality in process documentation. To address these issues, this paper proposes a knowledge push method for complex product assembly process design based on a distillation-model-based dynamically enhanced graph and a Bayesian network. First, an initial knowledge graph is constructed using a BERT-BiLSTM-CRF model trained with integrated human expertise and a fine-tuned large language model. Then, a confidence-based dynamic weighted fusion strategy is employed to achieve dynamic incremental construction of the knowledge graph with low resource consumption. Subsequently, a Bayesian network model is constructed based on the relationships among assembly components, assembly features, and operations. Bayesian network reasoning is then used to push assembly process knowledge under different design requirements. Finally, the feasibility of the Bayesian network construction method and the effectiveness of Bayesian network reasoning are verified through a specific example, significantly improving the utilization of assembly process knowledge and the efficiency of assembly process design.
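The kind of Bayesian-network reasoning used for knowledge push can be illustrated by enumeration over a toy component-feature-operation chain (all names and probability tables below are invented for illustration and are not the paper's network):

```python
# Toy Bayesian network: component -> feature -> operation.
p_component = {"shaft": 0.6, "housing": 0.4}
p_feature_given_component = {
    ("hole", "shaft"): 0.2, ("hole", "housing"): 0.7,
    ("thread", "shaft"): 0.8, ("thread", "housing"): 0.3,
}
p_op_given_feature = {
    ("press_fit", "hole"): 0.9, ("press_fit", "thread"): 0.1,
    ("screw", "hole"): 0.1, ("screw", "thread"): 0.9,
}

def posterior_operation(op):
    """P(op), marginalised over components and features by enumeration."""
    total = 0.0
    for comp, pc in p_component.items():
        for feat in ("hole", "thread"):
            pf = p_feature_given_component[(feat, comp)]
            total += pc * pf * p_op_given_feature[(op, feat)]
    return total

print(round(posterior_operation("screw"), 3))   # 0.58
```

In a real push system the component and feature nodes would be clamped to the current design requirement, so the same enumeration ranks candidate operations for that specific context.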
The increasing complexity of China's electricity market creates substantial challenges for settlement automation, data consistency, and operational scalability. Existing provincial settlement systems are fragmented, lack a unified data structure, and depend heavily on manual intervention to process high-frequency and retroactive transactions. To address these limitations, a graph-based unified settlement framework is proposed to enhance automation, flexibility, and adaptability in electricity market settlements. A flexible attribute-graph model is employed to represent heterogeneous multi-market data, enabling standardized integration, rapid querying, and seamless adaptation to evolving business requirements. An extensible operator library is designed to support configurable settlement rules, and a suite of modular tools, including dataset generation, formula configuration, billing templates, and task scheduling, facilitates end-to-end automated settlement processing. A robust refund-clearing mechanism is further incorporated, utilizing sandbox execution, data-version snapshots, dynamic lineage tracing, and real-time change-capture technologies to enable rapid and accurate recalculations under dynamic policy and data revisions. Case studies based on real-world data from regional Chinese markets validate the effectiveness of the proposed approach, demonstrating marked improvements in computational efficiency, system robustness, and automation. Moreover, enhanced settlement accuracy and high temporal granularity improve price-signal fidelity, promote cost-reflective tariffs, and incentivize energy-efficient and demand-responsive behavior among market participants. The method not only supports equitable and transparent market operations but also provides a generalizable, scalable foundation for modern electricity settlement platforms in increasingly complex and dynamic market environments.
A modified reduced-order method for RC networks that takes a divide-and-conquer strategy is presented. The whole network is first partitioned into a set of sub-networks, then each of them is reduced by Krylov subspace techniques, and finally all the reduced sub-networks are incorporated together. While maintaining acceptable accuracy, this method can reduce the number of both nodes and components of the circuit, compared with traditional methods, which usually only offer a reduced net with fewer nodes. This can markedly accelerate sparse-matrix-based simulators whose performance is dominated by the size of the matrix or the number of components in the circuit.
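The per-sub-network reduction step can be sketched with a basic Arnoldi iteration (a generic Krylov-subspace projection; the toy tridiagonal matrix below merely stands in for an RC-ladder state matrix and is not taken from the paper):

```python
import numpy as np

def arnoldi(A, b, k):
    """Orthonormal basis V of the Krylov subspace span{b, Ab, ..., A^(k-1) b}."""
    n = len(b)
    V = np.zeros((n, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        w = A @ V[:, j - 1]
        for i in range(j):                       # modified Gram-Schmidt
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V

# Toy RC-ladder-like state matrix (tridiagonal), reduced from order 6 to order 2.
n, k = 6, 2
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
b = np.ones(n)                                   # input vector (source incidence)
V = arnoldi(A, b, k)
A_red = V.T @ A @ V                              # k x k reduced-order model
print(A_red.shape)   # (2, 2)
```

Projecting A onto the Krylov basis preserves the leading moments of the transfer function, which is why the reduced sub-networks can be recombined with limited loss of accuracy.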
Due to the limitations of existing fault detection methods in the embryonic cellular array (ECA), the fault detection coverage cannot reach 100%. In order to evaluate the reliability of the ECA more accurately, the embryonic cell and its input and output (I/O) resources are considered as a whole, named the functional unit (FU). The FU fault detection coverage parameter is introduced into ECA reliability analysis, and a new ECA reliability evaluation method based on the Markov state graph model is proposed. Simulation results indicate that the proposed method can evaluate ECA reliability more effectively and accurately. Based on the proposed evaluation method, the influence of parameter changes on ECA reliability is studied, and simulation results show that ECA reliability can be improved by increasing the FU fault detection coverage and reducing the FU failure rate. In addition, as the scale of the ECA increases, its reliability first rises to a maximum and then decreases continuously. These ECA reliability variation rules not only provide theoretical guidance for ECA optimization design but also point out directions for further research.
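A minimal version of such a Markov-model reliability evaluation, with the FU detection coverage as an explicit parameter, might look like this (a three-state sketch with assumed failure and repair rates; the paper's state graph is more detailed):

```python
import numpy as np

def eca_reliability(lam, coverage, mu, t_end, dt=1e-3):
    """Probability the FU is operational at t_end under a 3-state Markov model:
    0 = working, 1 = detected fault (repaired at rate mu), 2 = undetected fault."""
    # Generator matrix Q of the continuous-time Markov chain (rows sum to 0).
    Q = np.array([
        [-lam, coverage * lam, (1.0 - coverage) * lam],
        [mu,   -mu,            0.0],
        [0.0,  0.0,            0.0],     # undetected fault is absorbing
    ])
    p = np.array([1.0, 0.0, 0.0])        # start in the working state
    for _ in range(int(t_end / dt)):
        p = p + dt * (p @ Q)             # forward-Euler step of dp/dt = p Q
    return p[0]

# Higher detection coverage should yield a higher probability of being operational.
print(eca_reliability(0.01, 0.99, 1.0, 100.0) >
      eca_reliability(0.01, 0.50, 1.0, 100.0))   # True
```

Only detected faults can be repaired, so raising the coverage parameter shrinks the flow into the absorbing undetected-fault state, matching the qualitative conclusion above.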
Recently, random graphs in which vertices are characterized by hidden variables controlling the establishment of edges between pairs of vertices have attracted much attention. This paper presents a specific realization of a class of random network models in which the connection probability between two vertices (i, j) is a specific function of their degrees ki and kj. In the framework of the configuration model of random graphs, we find analytical expressions for the degree correlation and clustering as functions of the variance of the desired degree distribution. The obtained expressions are checked by means of numerical simulations. Possible applications of our model are discussed.
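A hidden-variable construction of this kind can be sketched as follows (the connection probability p_ij = k_i k_j / 2m used here is one common choice; the paper studies its own specific function of the degrees):

```python
import numpy as np

def hidden_variable_graph(desired_degrees, rng):
    """Connect vertices i, j with probability p_ij = k_i * k_j / (2m), capped at 1,
    so that the desired degrees act as hidden variables of the model."""
    k = np.asarray(desired_degrees, dtype=float)
    two_m = k.sum()                      # twice the expected number of edges
    n = len(k)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            p = min(1.0, k[i] * k[j] / two_m)
            if rng.random() < p:
                adj[i, j] = adj[j, i] = 1
    return adj

rng = np.random.default_rng(42)
adj = hidden_variable_graph([2, 2, 3, 3, 4], rng)
print(adj.sum(axis=1))   # realized degrees, close to the desired ones on average
```

Averaged over many samples, vertex i's degree converges to k_i, which is what makes degree correlations and clustering expressible as functions of the desired degree distribution's variance.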
Building façades can feature different patterns depending on the architectural style, functionality, and size of the buildings; therefore, reconstructing these façades can be complicated. In particular, when semantic façades are reconstructed from point cloud data, uneven point density and noise make it difficult to accurately determine the façade structure. When investigating façade layouts, Gestalt principles can be applied to cluster visually similar floors and façade elements, allowing for a more intuitive interpretation of façade structures. We propose a novel model for describing façade structures, namely the layout graph model, which involves a compound graph with two structure levels. In the proposed model, similar façade elements such as windows are first grouped into clusters. A down-layout graph is then formed using each cluster as a node and by combining intra- and inter-cluster spacings as the edges. Second, a top-layout graph is formed by clustering similar floors. By extracting relevant parameters from this model, we transform semantic façade reconstruction into an optimization strategy using simulated annealing coupled with Gibbs sampling. Multiple façade point clouds with different features were selected from three datasets to verify the effectiveness of this method. The experimental results show that the proposed method achieves an average accuracy of 86.35%. Owing to its flexibility, the proposed layout graph model can deal with different types of façades and qualities of point cloud data, enabling a more robust and accurate reconstruction of façade models.
There are heterogeneity problems between the CAD model and the assembly process document. In the planning stage of the assembly process, these heterogeneity problems can decrease the efficiency of information interaction. Based on a knowledge graph, this paper proposes an assembly information model (KGAM) to integrate geometric information from the CAD model with non-geometric and semantic information from the assembly process document. KGAM describes the integrated assembly process information as a knowledge graph in the form of “entity-relationship-entity” and “entity-attribute-value” triples, which can improve the efficiency of information interaction. Taking the trial assembly stage of a certain type of aeroengine compressor rotor component as an example, KGAM is used to build its assembly process knowledge graph. The trial data show that the query and update rate for assembly attribute information improved by more than 100%, and the query and update rate for assembly semantic information improved by more than 200%. In conclusion, KGAM can solve the heterogeneity problems between the CAD model and the assembly process document and improve information interaction efficiency.
Automation has recently become vital in most fields, since computing methods play a significant role in facilitating work such as automatic text summarization. Most of the computing methods used in real systems are based on graph models, which are characterized by their simplicity and stability. This paper therefore proposes an improved extractive text summarization algorithm based on both topic and graph models. The methodology of this work consists of two stages. First, the well-known TextRank algorithm is analyzed and its shortcomings are investigated. Then, an improved method is proposed with a new computational model of sentence weights. Experiments were carried out on the standard DUC2004 and DUC2006 datasets, where the proposed improved graph model algorithm TG-SMR (Topic Graph-Summarizer) was compared to four other text summarization methods. The experimental results show that the proposed TG-SMR algorithm achieves higher ROUGE scores. It is foreseen that the TG-SMR algorithm will open a new horizon concerning the performance of ROUGE evaluation indicators.
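The TextRank baseline the paper analyzes can be sketched as power iteration on a sentence-similarity graph (a minimal version; TG-SMR additionally folds topic information into the sentence weights, which is not shown here):

```python
import numpy as np

def textrank(similarity, d=0.85, iters=50):
    """Power iteration of the TextRank recurrence on a sentence-similarity matrix."""
    n = similarity.shape[0]
    # Row-normalise so each sentence distributes its score over its neighbours.
    out = similarity.sum(axis=1, keepdims=True)
    out[out == 0] = 1.0
    M = similarity / out
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - d) / n + d * (M.T @ scores)   # damped score propagation
    return scores

# Toy symmetric sentence-similarity matrix for four sentences.
sim = np.array([
    [0.0, 0.5, 0.2, 0.0],
    [0.5, 0.0, 0.4, 0.1],
    [0.2, 0.4, 0.0, 0.3],
    [0.0, 0.1, 0.3, 0.0],
])
scores = textrank(sim)
summary = np.argsort(scores)[::-1][:2]    # extract the two top-ranked sentences
print(summary.shape)   # (2,)
```

The extractive summary is simply the top-scoring sentences; improved variants like the one proposed above change how the edge weights (and hence the scores) are computed.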
With the construction of new power systems, the power grid has become extremely large, with an increasing proportion of new energy and AC/DC hybrid connections. The dynamic characteristics and fault patterns of the power grid are complex; additionally, power grid control is difficult, operation risks are high, and the task of fault handling is arduous. Traditional power-grid fault handling relies primarily on human experience, and differences and gaps in the knowledge reserves of control personnel restrict the accuracy and timeliness of fault handling. Therefore, this mode of operation is no longer suitable for the requirements of new systems. Based on the multi-source heterogeneous data of power grid dispatch, this paper proposes a joint entity-relationship extraction method for power-grid dispatch fault processing based on a pre-trained model, constructs a knowledge graph of power-grid dispatch fault processing, and designs and develops a fault-processing auxiliary decision-making system based on the knowledge graph. The system was applied in a provincial dispatch control center, where it effectively improved the fault-handling capability and the intelligence level of accident management and control of the power grid.
To increase the efficiency and reliability of thermodynamic analysis of hydraulic systems, a method based on pseudo-bond graphs is introduced. According to their working mechanisms, hydraulic components can be separated into two categories: capacitive components and resistive components. The thermal-hydraulic pseudo-bond graphs of the capacitive C element and the resistive R element were then developed, based on the conservation of mass and energy. Subsequently, the connection rule for the pseudo-bond graph elements and the method to construct the complete thermal-hydraulic system model were proposed. On the basis of heat transfer analysis of a typical hydraulic circuit containing a piston pump, the lumped-parameter mathematical model of the system was given. The good agreement between the simulation results and experimental data demonstrates the validity of the modeling method.
Accurately recommending candidate news to users is a basic challenge for personalized news recommendation systems. Traditional methods usually struggle to learn and acquire the complex semantic information in news texts, resulting in unsatisfactory recommendation results. Besides, these traditional methods are better suited to active users with rich historical behaviors and cannot effectively solve the long-tail problem of inactive users. To address these issues, this research presents a novel general framework that incorporates Large Language Models (LLM) and Knowledge Graphs (KG) into traditional methods. To learn the contextual information of news text, we use LLMs' powerful text understanding ability to generate news representations with rich semantic information, and the generated representations are then used to enhance the news encoding in traditional methods. In addition, multi-hop relationships among news entities are mined and the structural information of news is encoded using the KG, thus alleviating the challenge of long-tail distribution. Experimental results demonstrate that, compared with various traditional models, the framework significantly improves recommendation performance on evaluation indicators such as AUC, MRR, nDCG@5, and nDCG@10. The successful integration of LLM and KG in our framework has established a feasible way to achieve more accurate personalized news recommendation. Our code is available at https://github.com/Xuan-ZW/LKPNR.
Abstract: Objectives: Chemotherapy-induced nausea and vomiting (CINV) is a common adverse effect among breast cancer patients, significantly affecting quality of life. Existing evidence on the prevention, assessment, and management of this condition is fragmented and inconsistent. This study constructed a CINV knowledge graph using a large language model (LLM) to integrate nursing and medical evidence, thereby supporting systematic clinical decision-making. Methods: A top-down approach was adopted. 1) Knowledge base preparation: nine databases and eight guideline repositories were searched up to October 2024 for guidelines, evidence summaries, expert consensuses, and systematic reviews, which were screened by two researchers. 2) Schema design: referring to the Unified Medical Language System, the Systematized Nomenclature of Medicine-Clinical Terms, and the Nursing Interventions Classification, entity and relation types were defined to build the ontology schema. 3) LLM-based extraction and integration: using the Qwen model under the CRISPE framework, named entity recognition, relation extraction, disambiguation, and fusion were conducted to generate triples, which were visualized in Neo4j. Four expert rounds ensured semantic and logical consistency. Model performance was evaluated using precision, recall, F1-score, and the 95% confidence interval (95% CI) in Python 3.11. Results: A total of 47 studies were included (18 guidelines, two expert consensuses, two evidence summaries, and 25 systematic reviews). The Qwen model extracted 273 entities and 289 relations; after expert validation, 238 entities and 242 relations were retained, forming 244 triples. The ontology comprised nine entity types and eight relation types. The F1-scores for named entity recognition and relation extraction were 82.97% (95% CI: 0.820-0.839) and 85.54% (95% CI: 0.844-0.867), respectively. The average node degree was 2.03, with no isolated nodes. Conclusion: The LLM-based CINV knowledge graph achieved structured integration of nursing and medical evidence, offering a novel, data-driven tool to support clinical nursing decision-making and advance intelligent healthcare.
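The abstract above reports F1-scores with 95% confidence intervals computed in Python. As a rough illustration of how such an interval can be obtained, the sketch below uses a percentile bootstrap over hypothetical per-document extraction counts; all data and the resampling scheme are illustrative, not the study's actual evaluation code.

```python
import random

def prf1(tp, fp, fn):
    """Precision, recall, and F1 from aggregate counts."""
    p = tp / (tp + fp)
    r = tp / (tp + fn)
    return p, r, 2 * p * r / (p + r)

def bootstrap_f1_ci(gold_counts, pred_correct, pred_total,
                    n_boot=2000, seed=0, alpha=0.05):
    """Percentile-bootstrap CI for micro-F1.

    Each item i has gold_counts[i] gold entities and pred_total[i]
    predicted entities, of which pred_correct[i] are correct.
    """
    rng = random.Random(seed)
    n = len(gold_counts)
    scores = []
    for _ in range(n_boot):
        sample = [rng.randrange(n) for _ in range(n)]  # resample documents
        tp = sum(pred_correct[i] for i in sample)
        fp = sum(pred_total[i] - pred_correct[i] for i in sample)
        fn = sum(gold_counts[i] - pred_correct[i] for i in sample)
        if tp:
            scores.append(prf1(tp, fp, fn)[2])
    scores.sort()
    return (scores[int(alpha / 2 * len(scores))],
            scores[int((1 - alpha / 2) * len(scores)) - 1])

# Toy per-document counts (hypothetical, not the study's data).
gold = [5, 3, 4, 6, 2, 5, 3, 4]
correct = [4, 3, 3, 5, 2, 4, 2, 4]
total = [5, 4, 3, 6, 2, 5, 3, 5]
lo, hi = bootstrap_f1_ci(gold, correct, total)
```

Resampling at the document level (rather than the entity level) keeps correlated extraction errors within a document together, which is the usual choice for NER evaluation.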
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62101087), the China Postdoctoral Science Foundation (Grant No. 2021MD703942), the Chongqing Postdoctoral Research Project Special Funding, China (Grant No. 2021XM2016), the Science Foundation of Chongqing Municipal Commission of Education, China (Grant No. KJQN202100642), and the Chongqing Natural Science Foundation, China (Grant No. cstc2021jcyj-msxmX0834).
Abstract: Drug repurposing offers a promising alternative to traditional drug development, significantly reducing costs and timelines by identifying new therapeutic uses for existing drugs. However, current approaches often rely on limited data sources and simplistic hypotheses, which restrict their ability to capture the multi-faceted nature of biological systems. This study introduces adaptive multi-view learning (AMVL), a novel methodology that integrates chemical-induced transcriptional profiles (CTPs), knowledge graph (KG) embeddings, and large language model (LLM) representations to enhance drug repurposing predictions. AMVL incorporates an innovative similarity-matrix expansion strategy and leverages multi-view learning (MVL), matrix factorization, and ensemble optimization techniques to integrate heterogeneous multi-source data. Comprehensive evaluations on benchmark datasets (Fdataset, Cdataset, and Ydataset) and the large-scale iDrug dataset demonstrate that AMVL outperforms state-of-the-art (SOTA) methods, achieving superior accuracy in predicting drug-disease associations across multiple metrics. Literature-based validation further confirmed the model's predictive capabilities, with seven of the top ten predictions corroborated by post-2011 evidence. To promote transparency and reproducibility, all data and code used in this study were open-sourced, providing resources for processing CTPs, KG, and LLM-based similarity calculations, along with the complete AMVL algorithm and benchmarking procedures. By unifying diverse data modalities, AMVL offers a robust and scalable solution for accelerating drug discovery, fostering advances in translational medicine, and integrating multi-omics data. We aim to inspire further innovation in multi-source data integration and to support the development of more precise and efficient strategies for drug discovery and translational medicine.
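The multi-view fusion and matrix-factorization ingredients described above can be sketched in a few lines. The toy below fuses three synthetic drug-drug similarity views with fixed weights (standing in for AMVL's adaptive weighting) and fits a similarity-regularized factorization of a small association matrix; sizes, data, and hyperparameters are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_drugs, n_dis, rank = 6, 4, 2

# Three hypothetical drug-drug similarity views, e.g. CTP-, KG-, LLM-derived.
views = [(v + v.T) / 2
         for v in (rng.random((n_drugs, n_drugs)) for _ in range(3))]
weights = np.array([0.5, 0.3, 0.2])          # fixed stand-in for adaptive weights
fused = sum(w * v for w, v in zip(weights, views))

A = rng.integers(0, 2, size=(n_drugs, n_dis)).astype(float)  # known associations
U = 0.1 * rng.random((n_drugs, rank))
V = 0.1 * rng.random((n_dis, rank))
L = np.diag(fused.sum(axis=1)) - fused       # Laplacian of the fused similarity
err0 = np.linalg.norm(A - U @ V.T)           # residual before training

lam, eta = 0.1, 0.02
for _ in range(1000):
    E = A - U @ V.T                          # reconstruction residual
    U += eta * (E @ V - lam * L @ U)         # fit data, smooth over similar drugs
    V += eta * (E.T @ U)
scores = U @ V.T                             # predicted association strengths
```

The Laplacian term pulls the latent vectors of similar drugs together, which is the standard way a fused similarity matrix enters a factorization objective.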
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62267005 and 42365008) and the Guangxi Collaborative Innovation Center of Multi-Source Information Integration and Intelligent Processing.
Abstract: With the development of the Semantic Web, the number of ontologies is growing exponentially and the semantic relationships between them are becoming increasingly complex, so understanding the true semantics of specific terms or concepts in an ontology is crucial for the matching task. The main challenges currently facing ontology matching based on representation learning are improving the embedding quality of ontology knowledge and efficiently integrating multiple ontology features. We therefore propose an Ontology Matching method based on the Gated Graph Attention model (OM-GGAT). First, the semantic knowledge related to concepts in the ontology is encoded into vectors using the OWL2Vec* method, and the path information from the root node to each concept is embedded to better capture the true meaning of the concept and the relationships between concepts. Second, the ontology is transformed into a corresponding graph structure according to its semantic relations. Then, when extracting features of the ontology graph nodes, different attention weights are assigned to each neighbor of the central concept using the attention mechanism. Finally, gated networks are designed to efficiently fuse the semantic and structural embedding representations. To verify the effectiveness of the proposed method, comparative matching experiments were carried out on public datasets. The results show that the OM-GGAT model can effectively improve the efficiency of ontology matching.
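The gated fusion step at the end of the pipeline above has a simple general form: a sigmoid gate decides, per dimension, how much of the semantic versus the structural embedding to keep. The sketch below shows that generic mechanism with random toy vectors and weights; it is not the paper's trained network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(sem, struct, W, b):
    """Fuse a semantic and a structural embedding with a learned gate.

    gate = sigmoid(W @ [sem; struct] + b); the output is a per-dimension
    convex combination of the two views.
    """
    gate = sigmoid(W @ np.concatenate([sem, struct]) + b)
    return gate * sem + (1.0 - gate) * struct

d = 4
rng = np.random.default_rng(42)
sem = rng.standard_normal(d)      # e.g. an OWL2Vec*-style semantic embedding
struct = rng.standard_normal(d)   # e.g. a graph-attention structural embedding
W = 0.1 * rng.standard_normal((d, 2 * d))   # toy (untrained) gate parameters
b = np.zeros(d)
fused = gated_fusion(sem, struct, W, b)
```

Because the gate lies strictly in (0, 1), each fused coordinate stays between the corresponding semantic and structural coordinates.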
Funding: Funded by the National Natural Science Foundation of China (72104224, L2424237, 71974107, L2224059, L2124002, and 91646102), the Beijing Natural Science Foundation (9232015), the Beijing Social Science Foundation (24GLC058), the Construction Project of China Knowledge Center for Engineering Sciences and Technology (CKCEST-2023-1-7), the MOE (Ministry of Education in China) Project of Humanities and Social Sciences (16JDGC011), the Tsinghua University Initiative Scientific Research Program (2019Z02CAU), and the Tsinghua University Project of Volvo-Supported Green Economy and Sustainable Development (20183910020).
Abstract: As large language models (LLMs) continue to demonstrate their potential in handling complex tasks, their value in knowledge-intensive industrial scenarios is becoming increasingly evident. Fault diagnosis, a critical domain in the industrial sector, has long faced the dual challenges of managing vast amounts of experiential knowledge and improving human-machine collaboration efficiency. Traditional fault diagnosis systems, which are primarily based on expert systems, suffer from three major limitations: (1) ineffective organization of fault diagnosis knowledge, (2) lack of adaptability between static knowledge frameworks and dynamic engineering environments, and (3) difficulty integrating expert knowledge with real-time data streams. These systemic shortcomings restrict the ability of conventional approaches to handle uncertainty. In this study, we propose an intelligent computer numerical control (CNC) fault diagnosis system that integrates LLMs with a knowledge graph (KG). First, we constructed a comprehensive KG that consolidates multi-source data into a structured representation. Second, we designed a retrieval-augmented generation (RAG) framework that leverages the KG to support multi-turn interactive fault diagnosis while incorporating real-time engineering data into the decision-making process. Finally, we introduced a learning mechanism to facilitate dynamic knowledge updates. The experimental results demonstrate that our system significantly improves fault diagnosis accuracy, outperforming engineers with two years of professional experience on our constructed benchmark datasets. By integrating LLMs and KG, our framework surpasses the limitations of traditional expert systems rooted in symbolic reasoning, offering a novel approach to the cognitive paradox of unstructured knowledge modeling and dynamic environment adaptation in industrial settings.
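The core of a KG-grounded RAG loop like the one described above is: retrieve facts reachable from the reported symptom, then fold them into the LLM prompt. The sketch below uses an in-memory triple list and string prompt assembly; the triples and relation names are invented, and a real system would query Neo4j (or similar) and call an actual LLM.

```python
# Hypothetical CNC fault-diagnosis triples (head, relation, tail).
KG = [
    ("spindle vibration", "indicates", "bearing wear"),
    ("bearing wear", "remedied_by", "replace spindle bearing"),
    ("axis drift", "indicates", "encoder fault"),
    ("encoder fault", "remedied_by", "recalibrate encoder"),
]

def retrieve(symptom, kg, hops=2):
    """Follow relations outward from the symptom for a fixed number of hops."""
    frontier, facts = {symptom}, []
    for _ in range(hops):
        new = set()
        for h, r, t in kg:
            if h in frontier:
                facts.append((h, r, t))
                new.add(t)
        frontier = new
    return facts

def build_prompt(symptom, facts):
    """Assemble retrieved facts and the symptom into an LLM prompt."""
    lines = [f"{h} -[{r}]-> {t}" for h, r, t in facts]
    return ("Known facts:\n" + "\n".join(lines) +
            f"\n\nDiagnose the CNC fault given the symptom: {symptom}")

prompt = build_prompt("spindle vibration", retrieve("spindle vibration", KG))
```

Restricting retrieval to a small neighborhood of the symptom keeps the prompt short while still grounding the model's answer in curated knowledge.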
Funding: Supported by the National Key Research and Development Program (Grant No. 2024YFB3312700), the National Natural Science Foundation of China (Grant No. 52405541), and the Changzhou Municipal Sci & Tech Program (Grant No. CJ20241131).
Abstract: Under the Industry 5.0 paradigm, intelligent manufacturing transcends mere efficiency enhancement by emphasizing human-machine collaboration, in which human expertise plays a central role in assembly processes. Despite advances in intelligent and digital technologies, assembly process design still relies heavily on manual knowledge reuse, leading to inefficiencies and inconsistent quality in process documentation. To address these issues, this paper proposes a knowledge push method for complex product assembly process design based on a distillation-model-based dynamically enhanced graph and a Bayesian network. First, an initial knowledge graph is constructed using a BERT-BiLSTM-CRF model trained with integrated human expertise and a fine-tuned large language model. Then, a confidence-based dynamic weighted fusion strategy is employed to achieve dynamic incremental construction of the knowledge graph with low resource consumption. Subsequently, a Bayesian network is constructed from the relationships between assembly components, assembly features, and operations, and Bayesian reasoning is used to push assembly process knowledge under different design requirements. Finally, the feasibility of the Bayesian network construction method and the effectiveness of Bayesian reasoning are verified through a specific example, significantly improving the utilization of assembly process knowledge and the efficiency of assembly process design.
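Bayesian-network reasoning of the kind used for knowledge push reduces, in the smallest case, to Bayes' rule over conditional probability tables. The toy below inverts a single feature-to-operation table; the features, operations, and probabilities are illustrative inventions, not the paper's model.

```python
# Hypothetical two-node network: an assembly feature influences the
# recommended operation. CPT values are made up for illustration.
P_feature = {"thread": 0.6, "press_fit": 0.4}            # P(Feature)
P_op_given = {                                            # P(Op | Feature)
    ("tighten", "thread"): 0.9, ("press", "thread"): 0.1,
    ("tighten", "press_fit"): 0.2, ("press", "press_fit"): 0.8,
}

def posterior_feature(op_observed):
    """P(Feature | Op) by enumeration (Bayes' rule)."""
    joint = {f: P_feature[f] * P_op_given[(op_observed, f)]
             for f in P_feature}
    z = sum(joint.values())                               # normalizing constant
    return {f: p / z for f, p in joint.items()}

post = posterior_feature("tighten")
```

In a real assembly network, the same enumeration runs over components, features, and operations jointly, and the highest-posterior operations are the ones pushed to the process designer.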
Funding: Funded by the Science and Technology Project of State Grid Corporation of China (5108-202355437A-3-2-ZN).
Abstract: The increasing complexity of China's electricity market creates substantial challenges for settlement automation, data consistency, and operational scalability. Existing provincial settlement systems are fragmented, lack a unified data structure, and depend heavily on manual intervention to process high-frequency and retroactive transactions. To address these limitations, a graph-based unified settlement framework is proposed to enhance automation, flexibility, and adaptability in electricity market settlements. A flexible attribute-graph model is employed to represent heterogeneous multi-market data, enabling standardized integration, rapid querying, and seamless adaptation to evolving business requirements. An extensible operator library is designed to support configurable settlement rules, and a suite of modular tools (dataset generation, formula configuration, billing templates, and task scheduling) facilitates end-to-end automated settlement processing. A robust refund-clearing mechanism is further incorporated, using sandbox execution, data-version snapshots, dynamic lineage tracing, and real-time change-capture technologies to enable rapid and accurate recalculation under dynamic policy and data revisions. Case studies based on real-world data from regional Chinese markets validate the effectiveness of the proposed approach, demonstrating marked improvements in computational efficiency, system robustness, and automation. Moreover, enhanced settlement accuracy and high temporal granularity improve price-signal fidelity, promote cost-reflective tariffs, and incentivize energy-efficient and demand-responsive behavior among market participants. The method not only supports equitable and transparent market operations but also provides a generalizable, scalable foundation for modern electricity settlement platforms in increasingly complex and dynamic market environments.
Abstract: A modified reduced-order method for RC networks that takes a divide-and-conquer strategy is presented. The whole network is first partitioned into a set of sub-networks, each sub-network is then reduced by Krylov subspace techniques, and finally all the reduced sub-networks are incorporated together. Within a given accuracy, this method can reduce the number of both nodes and components of the circuit, compared with traditional methods, which usually only offer a reduced network with fewer nodes. This can markedly accelerate sparse-matrix-based simulators, whose performance is dominated by the size of the matrix or the number of components in the circuit.
Funding: Supported by the National Natural Science Foundation of China (61601495, 61372039).
Abstract: Due to the limitations of existing fault detection methods in the embryonic cellular array (ECA), fault detection coverage cannot reach 100%. To evaluate ECA reliability more accurately, an embryonic cell and its input and output (I/O) resources are considered as a whole, named a functional unit (FU). The FU fault detection coverage parameter is introduced into ECA reliability analysis, and a new ECA reliability evaluation method based on the Markov state graph model is proposed. Simulation results indicate that the proposed method evaluates ECA reliability more effectively and accurately. Based on the proposed method, the influence of parameter changes on ECA reliability is studied; simulation results show that ECA reliability can be improved by increasing the FU fault detection coverage and reducing the FU failure rate. In addition, as the scale of the ECA increases, reliability first rises to a maximum and then decreases continuously. These ECA reliability variation rules not only provide theoretical guidance for ECA optimization design but also point out directions for further research.
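The qualitative effect of coverage on reliability can be reproduced with a minimal Markov model. The sketch below (not the paper's actual state graph) uses three states for one FU with a spare: a detected fault (probability c) hands over to the spare, an undetected fault (probability 1 - c) is system failure; reliability is integrated with a forward-Euler step.

```python
import numpy as np

def reliability(lam, c, t, steps=20000):
    """R(t) for a toy 3-state Markov model of an FU with one spare.

    lam = FU failure rate, c = FU fault detection coverage.
    States: 0 = healthy, 1 = running on spare, 2 = failed (absorbing).
    """
    Q = np.array([
        [-lam,  lam * c,  lam * (1 - c)],
        [0.0,  -lam,      lam          ],   # the spare can itself fail
        [0.0,   0.0,      0.0          ],   # absorbing failure state
    ])
    p = np.array([1.0, 0.0, 0.0])
    dt = t / steps
    for _ in range(steps):                   # forward-Euler: dp/dt = p Q
        p = p + dt * (p @ Q)
    return p[0] + p[1]                       # P(not failed by time t)

r_high = reliability(lam=0.01, c=0.99, t=100.0)
r_low = reliability(lam=0.01, c=0.80, t=100.0)
```

Raising the coverage c (or lowering lam) raises R(t), matching the trend reported in the abstract; the analytic solution here is R(t) = e^{-lam t}(1 + c lam t), which the numerical integration approximates.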
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 10375025 and 10275027) and the Cultivation Fund of the Key Scientific and Technical Innovation Project, Ministry of Education of China (Grant No. 704035).
Abstract: Recently, random graphs in which vertices are characterized by hidden variables controlling the establishment of edges between pairs of vertices have attracted much attention. This paper presents a specific realization of a class of random network models in which the connection probability between two vertices (i, j) is a specific function of their degrees k_i and k_j. In the framework of the configuration model of random graphs, we find analytical expressions for the degree correlation and clustering as functions of the variance of the desired degree distribution. The obtained expressions are checked by means of numerical simulations. Possible applications of our model are discussed.
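A numerical check of such a model starts by sampling a graph from the hidden variables. The sketch below uses a Chung-Lu-style connection probability p_ij = min(1, k_i k_j / 2m) as a stand-in for the paper's specific function f(k_i, k_j); the degree sequence is invented for illustration.

```python
import random

def hidden_variable_graph(desired, seed=0):
    """Sample edges with p_ij = min(1, k_i * k_j / (2m)), 2m = sum(desired)."""
    rng = random.Random(seed)
    two_m = sum(desired)
    n = len(desired)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, desired[i] * desired[j] / two_m):
                edges.append((i, j))
    return edges

desired = [4] * 50 + [8] * 10            # heterogeneous desired degrees
edges = hidden_variable_graph(desired)

deg = [0] * len(desired)                 # realized degree sequence
for i, j in edges:
    deg[i] += 1
    deg[j] += 1
mean_deg = sum(deg) / len(deg)
```

The realized mean degree concentrates around the mean of the desired degrees (about 4.67 here), and empirical degree correlation and clustering can then be measured on `edges` and compared against the analytical expressions.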
Funding: This work is supported by the National Natural Science Foundation of China (Grant No. 41771484).
Abstract: Building façades can feature different patterns depending on the architectural style, functionality, and size of the buildings; therefore, reconstructing these façades can be complicated. In particular, when semantic façades are reconstructed from point cloud data, uneven point density and noise make it difficult to accurately determine the façade structure. When investigating façade layouts, Gestalt principles can be applied to cluster visually similar floors and façade elements, allowing for a more intuitive interpretation of façade structures. We propose a novel model for describing façade structures, namely the layout graph model, a compound graph with two structure levels. In the proposed model, similar façade elements such as windows are first grouped into clusters. A down-layout graph is then formed using each cluster as a node and combining intra- and inter-cluster spacings as the edges. Second, a top-layout graph is formed by clustering similar floors. By extracting relevant parameters from this model, we transform semantic façade reconstruction into an optimization strategy using simulated annealing coupled with Gibbs sampling. Multiple façade point cloud data with different features were selected from three datasets to verify the effectiveness of this method. The experimental results show that the proposed method achieves an average accuracy of 86.35%. Owing to its flexibility, the proposed layout graph model can deal with different types of façades and qualities of point cloud data, enabling more robust and accurate reconstruction of façade models.
Funding: Supported by the National Natural Science Foundation of China (No. 51805079).
Abstract: Heterogeneity between the CAD model and the assembly process document reduces the efficiency of information interaction during assembly process planning. Based on a knowledge graph, this paper proposes an assembly information model (KGAM) to integrate geometric information from the CAD model with non-geometric and semantic information from the assembly process document. KGAM describes the integrated assembly process information as a knowledge graph in the form of "entity-relationship-entity" and "entity-attribute-value" triples, which improves the efficiency of information interaction. Taking the trial assembly stage of an aeroengine compressor rotor component as an example, KGAM is used to build its assembly process knowledge graph. The trial data show that the query and update rate of assembly attribute information improves more than one-fold, and that of assembly semantic information more than two-fold. In conclusion, KGAM resolves the heterogeneity between the CAD model and the assembly process document and improves information interaction efficiency.
Abstract: Automation is now considered vital in most fields, since computing methods play a significant role in facilitating work such as automatic text summarization. Most of the computing methods used in real systems are based on graph models, which are characterized by their simplicity and stability. This paper therefore proposes an improved extractive text summarization algorithm based on both topic and graph models. The methodology consists of two stages. First, the well-known TextRank algorithm is analyzed and its shortcomings investigated. Then, an improved method with a new computational model of sentence weights is proposed. The proposed algorithm, TG-SMR (Topic Graph-Summarizer), was evaluated on the standard DUC2004 and DUC2006 datasets and compared with four other text summarization systems. The experimental results show that TG-SMR achieves higher ROUGE scores. It is foreseen that the TG-SMR algorithm will open a new horizon concerning the performance of ROUGE evaluation indicators.
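The baseline TextRank stage analyzed above can be condensed to: build a sentence-similarity graph, then run weighted PageRank over it. The sketch below is a generic TextRank-style scorer with cosine similarity over bags of words; it is not TG-SMR's modified sentence-weight model, and the example sentences are invented.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in a if w in b)
    den = (math.sqrt(sum(v * v for v in a.values())) *
           math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def textrank(sentences, d=0.85, iters=50):
    """Score sentences by weighted PageRank on their similarity graph."""
    bows = [Counter(s.lower().split()) for s in sentences]
    n = len(sentences)
    W = [[cosine(bows[i], bows[j]) if i != j else 0.0 for j in range(n)]
         for i in range(n)]
    scores = [1.0] * n
    for _ in range(iters):
        scores = [
            (1 - d) + d * sum(
                W[j][i] / sum(W[j]) * scores[j]
                for j in range(n) if sum(W[j]) > 0 and W[j][i] > 0)
            for i in range(n)]
    return scores

sents = [
    "graph models rank sentences for summarization",
    "graph models are simple and stable",
    "the weather was pleasant yesterday",
]
scores = textrank(sents)
```

The off-topic third sentence shares no vocabulary with the others, so it receives only the damping floor (1 - d) and would be excluded from an extractive summary.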
Funding: Supported by the Science and Technology Project of the State Grid Corporation, "Research on Key Technologies of Power Artificial Intelligence Open Platform" (5700-202155260A-0-0-00).
Abstract: With the construction of new power systems, the power grid has become extremely large, with an increasing proportion of new energy and AC/DC hybrid connections. The dynamic characteristics and fault patterns of the grid are complex; grid control is difficult, operational risks are high, and fault handling is arduous. Traditional power-grid fault handling relies primarily on human experience, and differences and gaps in the knowledge of control personnel restrict the accuracy and timeliness of fault handling, so this mode of operation is no longer suited to the requirements of new systems. Based on the multi-source heterogeneous data of power grid dispatch, this paper proposes a joint entity-relationship extraction method for power-grid dispatch fault processing based on a pre-trained model, constructs a knowledge graph of power-grid dispatch fault processing, and designs and develops a fault-processing auxiliary decision-making system based on the knowledge graph. Applied in a provincial dispatch control center, the system effectively improved accident-handling ability and the intelligence level of accident management and control of the power grid.
Funding: Project (51175518) supported by the National Natural Science Foundation of China.
Abstract: To increase the efficiency and reliability of thermodynamic analysis of hydraulic systems, a method based on pseudo-bond graphs is introduced. According to their working mechanisms, hydraulic components can be separated into two categories: capacitive components and resistive components. The thermal-hydraulic pseudo-bond graphs of the capacitive C element and the resistive R element were then developed based on the conservation of mass and energy. Subsequently, the connection rule for pseudo-bond graph elements and the method for constructing the complete thermal-hydraulic system model were proposed. On the basis of heat transfer analysis of a typical hydraulic circuit containing a piston pump, the lumped-parameter mathematical model of the system was given. Good agreement between simulation results and experimental data demonstrates the validity of the modeling method.
Funding: Supported by the National Key R&D Program of China (2022QY2000-02).
Abstract: Accurately recommending candidate news to users is a basic challenge for personalized news recommendation systems. Traditional methods usually struggle to learn the complex semantic information in news texts, resulting in unsatisfactory recommendations. Moreover, these methods favor active users with rich historical behaviors and cannot effectively address the long-tail problem of inactive users. To address these issues, this research presents a novel general framework that incorporates Large Language Models (LLM) and Knowledge Graphs (KG) into traditional methods. To learn the contextual information of news text, we use the powerful text understanding ability of LLMs to generate news representations with rich semantic information, which are then used to enhance news encoding in traditional methods. In addition, multi-hop relationships among news entities are mined and the structural information of news is encoded using the KG, alleviating the challenge of long-tail distributions. Experimental results demonstrate that, compared with various traditional models, the framework significantly improves recommendation performance on metrics such as AUC, MRR, nDCG@5, and nDCG@10. The successful integration of LLM and KG in our framework establishes a feasible way to achieve more accurate personalized news recommendation. Our code is available at https://github.com/Xuan-ZW/LKPNR.