This article uses a large language model, together with prompt engineering and an agent-based reasoning method, to perform automatic knowledge extraction and, supported by Chinese accounting standards, to construct the corresponding knowledge graph. By extracting accounting entities and their relations at the schema layer, a data layer is provided for fine-tuning and optimizing the large model. The study finds that, through reasonable application of the language model, effective knowledge tuples can be extracted from massive financial data and the construction of an accounting knowledge graph can be completed.
Large language models (LLMs) have significantly advanced artificial intelligence (AI) by excelling in tasks such as understanding, generation, and reasoning across multiple modalities. Despite these achievements, LLMs have inherent limitations, including outdated information, hallucinations, inefficiency, lack of interpretability, and challenges in domain-specific accuracy. To address these issues, this survey explores three promising directions in the post-LLM era: knowledge empowerment, model collaboration, and model co-evolution. First, we examine methods of integrating external knowledge into LLMs to enhance factual accuracy, reasoning capabilities, and interpretability, including incorporating knowledge into training objectives, instruction tuning, retrieval-augmented inference, and knowledge prompting. Second, we discuss model collaboration strategies that leverage the complementary strengths of LLMs and smaller models to improve efficiency and domain-specific performance through techniques such as model merging, functional model collaboration, and knowledge injection. Third, we delve into model co-evolution, in which multiple models collaboratively evolve by sharing knowledge, parameters, and learning strategies to adapt to dynamic environments and tasks, thereby enhancing their adaptability and continual learning. We illustrate how the integration of these techniques advances AI capabilities in science, engineering, and society, particularly in hypothesis development, problem formulation, problem-solving, and interpretability across various domains. We conclude by outlining future pathways for further advancement and applications.
In the context of the “Two New” initiatives, high school mathematics instruction still grapples with three interlocking problems: knowledge fragmentation, limited cultivation of higher-order thinking, and weak alignment among teaching, learning, and assessment. To counter these challenges, we propose an Inquiry-Construction Double-Helix model that uses a domain-specific knowledge graph as its cognitive spine. The model interweaves two mutually reinforcing strands, student-driven inquiry and systematic knowledge construction, into a double-helix trajectory analogous to DNA replication. The Inquiry Strand is launched by authentic, situation-based tasks that shepherd students through the complete cycle: question → hypothesis → verification → reflection. The Construction Strand simultaneously externalizes, restructures, and internalizes core disciplinary concepts via visual, hierarchical knowledge graphs. Within the flow of a lesson, the two strands alternately dominate and scaffold each other, securing the co-development of conceptual understanding, procedural fluency, and mathematical literacy. Empirical evidence demonstrates that this model significantly enhances students’ systematic knowledge integration, problem-solving transfer ability, and core mathematical competencies, offering a replicable and operable teaching paradigm and a practical pathway for deepening high school mathematics classroom reform.
Under the paradigm of Industry 5.0, intelligent manufacturing transcends mere efficiency enhancement by emphasizing human-machine collaboration, where human expertise plays a central role in assembly processes. Despite advancements in intelligent and digital technologies, assembly process design still relies heavily on manual knowledge reuse, leading to inefficiencies and inconsistent quality in process documentation. To address these issues, this paper proposes a knowledge push method for complex product assembly process design based on a distillation-model-based dynamically enhanced graph and a Bayesian network. First, an initial knowledge graph is constructed using a BERT-BiLSTM-CRF model trained with integrated human expertise and a fine-tuned large language model. Then, a confidence-based dynamic weighted fusion strategy is employed to achieve dynamic incremental construction of the knowledge graph with low resource consumption. Subsequently, a Bayesian network model is constructed from the relationships between assembly components, assembly features, and operations, and Bayesian network reasoning is used to push assembly process knowledge under different design requirements. Finally, the feasibility of the Bayesian network construction method and the effectiveness of Bayesian network reasoning are verified through a concrete example, significantly improving the utilization of assembly process knowledge and the efficiency of assembly process design.
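The abstract does not specify the network's structure or parameters, so as a generic illustration of the kind of reasoning involved, the following is inference by enumeration over a tiny, invented Component → Feature → Operation chain (all names and probabilities are hypothetical, not from the paper):

```python
# Minimal sketch of Bayesian-network inference by enumeration for a
# hypothetical assembly chain: Component -> Feature -> Operation.
# Structure and probabilities are illustrative only.

# P(feature | component)
p_feat = {
    "shaft":   {"thread": 0.7, "bore": 0.3},
    "housing": {"thread": 0.2, "bore": 0.8},
}
# P(operation | feature)
p_op = {
    "thread": {"screwing": 0.9, "pressing": 0.1},
    "bore":   {"screwing": 0.2, "pressing": 0.8},
}

def posterior_operation(component):
    """P(operation | component) obtained by summing out the feature variable."""
    dist = {}
    for feat, pf in p_feat[component].items():
        for op, po in p_op[feat].items():
            dist[op] = dist.get(op, 0.0) + pf * po
    return dist

post = posterior_operation("shaft")
# The operation with the highest posterior is the knowledge item
# that would be pushed to the process designer.
best = max(post, key=post.get)
```

Given these invented numbers, threaded features dominate for a shaft, so "screwing" comes out as the most likely operation to recommend.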
Recent advancements in large language models (LLMs) have driven remarkable progress in text processing, opening new avenues for medical knowledge discovery. In this study, we present ERQA, a mEdical knowledge Retrieval and Question-Answering framework powered by an enhanced LLM that integrates a semantic vector database and a curated literature repository. The ERQA framework leverages domain-specific incremental pretraining and conducts supervised fine-tuning on medical literature, enabling retrieval and question-answering (QA) tasks to be completed with high precision. Performance evaluations on the coronavirus disease 2019 (COVID-19) and TripClick datasets demonstrate the robust capabilities of ERQA across multiple tasks. On the COVID-19 dataset, ERQA-13B achieves state-of-the-art retrieval metrics, with a normalized discounted cumulative gain at top 10 (NDCG@10) of 0.297, recall at top 10 (Recall@10) of 0.347, and a mean reciprocal rank (MRR) of 0.370; it also attains strong abstract summarization performance, with a recall-oriented understudy for gisting evaluation (ROUGE)-1 score of 0.434, and QA performance, with a bilingual evaluation understudy (BLEU)-1 score of 7.851. The comparable performance achieved on the TripClick dataset further underscores the adaptability of ERQA across diverse medical topics. These findings suggest that ERQA represents a significant step toward efficient biomedical knowledge retrieval and QA.
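The retrieval metrics quoted above, NDCG@10 and MRR, have standard definitions; a minimal sketch of how they are computed, independent of ERQA's actual evaluation code:

```python
import math

def ndcg_at_k(relevances, k=10):
    """NDCG@k for one query's ranked list of graded relevance labels."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

def mrr(first_hit_ranks):
    """Mean reciprocal rank; each item is the 1-based rank of the first
    relevant document for a query, or None if no relevant hit."""
    rr = [1.0 / r if r else 0.0 for r in first_hit_ranks]
    return sum(rr) / len(rr)

score = ndcg_at_k([3, 2, 0, 1], k=10)   # one query, graded labels
mean_rr = mrr([1, 3, None])             # three queries
```

A perfectly sorted list scores NDCG of exactly 1.0, which is a handy sanity check when wiring up an evaluation harness.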
Objectives: Chemotherapy-induced nausea and vomiting (CINV) is a common adverse effect among breast cancer patients, significantly affecting quality of life. Existing evidence on the prevention, assessment, and management of this condition is fragmented and inconsistent. This study constructed a CINV knowledge graph using a large language model (LLM) to integrate nursing and medical evidence, thereby supporting systematic clinical decision-making. Methods: A top-down approach was adopted. 1) Knowledge base preparation: nine databases and eight guideline repositories were searched up to October 2024 to include guidelines, evidence summaries, expert consensuses, and systematic reviews screened by two researchers. 2) Schema design: referring to the Unified Medical Language System, Systematized Nomenclature of Medicine-Clinical Terms, and the Nursing Intervention Classification, entity and relation types were defined to build the ontology schema. 3) LLM-based extraction and integration: using the Qwen model under the CRISPE framework, named entity recognition, relation extraction, disambiguation, and fusion were conducted to generate triples and visualize them in Neo4j. Four expert rounds ensured semantic and logical consistency. Model performance was evaluated using precision, recall, F1-score, and 95% confidence interval (95% CI) in Python 3.11. Results: A total of 47 studies were included (18 guidelines, two expert consensuses, two evidence summaries, and 25 systematic reviews). The Qwen model extracted 273 entities and 289 relations; after expert validation, 238 entities and 242 relations were retained, forming 244 triples. The ontology comprised nine entity types and eight relation types. The F1-scores for named entity recognition and relation extraction were 0.830 (95% CI: 0.820, 0.839) and 0.855 (95% CI: 0.844, 0.867), respectively. The average node degree was 2.03, with no isolated nodes. Conclusion: The LLM-based CINV knowledge graph achieved structured integration of nursing and medical evidence, offering a novel, data-driven tool to support clinical nursing decision-making and advance intelligent healthcare.
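For readers unfamiliar with how such scores are produced, entity-level precision, recall, and F1 with a percentile-bootstrap confidence interval can be sketched as follows. The gold-entity count below is an invented placeholder for illustration, not the study's raw data:

```python
import random

def prf1(tp, fp, fn):
    """Precision, recall, F1 from entity-level counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Illustrative counts: the model proposed 273 entities, of which experts
# retained 238 as correct; the gold total of 287 entities is ASSUMED
# here purely to make the sketch runnable.
tp, fp = 238, 273 - 238
fn = 287 - tp
p, r, f1 = prf1(tp, fp, fn)

def bootstrap_f1_ci(tp, fp, fn, n_boot=2000, seed=0):
    """Percentile bootstrap 95% CI for F1, resampling item outcomes."""
    rng = random.Random(seed)
    items = ["tp"] * tp + ["fp"] * fp + ["fn"] * fn
    scores = []
    for _ in range(n_boot):
        sample = [rng.choice(items) for _ in items]
        scores.append(prf1(sample.count("tp"),
                           sample.count("fp"),
                           sample.count("fn"))[2])
    scores.sort()
    return scores[int(0.025 * n_boot)], scores[int(0.975 * n_boot)]

lo, hi = bootstrap_f1_ci(tp, fp, fn)
```

The bootstrap is one common way to attach a 95% CI to F1; the study does not state which interval method it used.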
Drug repurposing offers a promising alternative to traditional drug development and significantly reduces costs and timelines by identifying new therapeutic uses for existing drugs. However, current approaches often rely on limited data sources and simplistic hypotheses, which restrict their ability to capture the multi-faceted nature of biological systems. This study introduces adaptive multi-view learning (AMVL), a novel methodology that integrates chemical-induced transcriptional profiles (CTPs), knowledge graph (KG) embeddings, and large language model (LLM) representations to enhance drug repurposing predictions. AMVL incorporates an innovative similarity matrix expansion strategy and leverages multi-view learning (MVL), matrix factorization, and ensemble optimization techniques to integrate heterogeneous multi-source data. Comprehensive evaluations on benchmark datasets (Fdataset, Cdataset, and Ydataset) and the large-scale iDrug dataset demonstrate that AMVL outperforms state-of-the-art (SOTA) methods, achieving superior accuracy in predicting drug-disease associations across multiple metrics. Literature-based validation further confirmed the model's predictive capabilities, with seven of the top ten predictions corroborated by post-2011 evidence. To promote transparency and reproducibility, all data and code used in this study were open-sourced, providing resources for processing CTPs, KG, and LLM-based similarity calculations, along with the complete AMVL algorithm and benchmarking procedures. By unifying diverse data modalities, AMVL offers a robust and scalable solution for accelerating drug discovery, fostering advancements in translational medicine and the integration of multi-omics data. We aim to inspire further innovations in multi-source data integration and support the development of more precise and efficient strategies for advancing drug discovery and translational medicine.
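AMVL combines several components, but the matrix-factorization core common to drug-disease association scoring can be sketched generically. This is a toy low-rank factorization trained by plain gradient descent on invented data, not the AMVL algorithm itself:

```python
import random

# Toy low-rank factorization of a drug-disease association matrix.
# Each drug and disease gets a k-dimensional latent vector; the dot
# product scores the association. Data and hyperparameters are invented.
random.seed(0)
R = [  # 1 = known association, 0 = unknown
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 0],
]
n_drugs, n_dis, k, lr, reg = 3, 3, 2, 0.1, 0.01
U = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_drugs)]
V = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_dis)]

def loss():
    """Sum of squared reconstruction errors over all entries."""
    return sum((R[i][j] - sum(U[i][f] * V[j][f] for f in range(k))) ** 2
               for i in range(n_drugs) for j in range(n_dis))

start = loss()
for _ in range(500):  # gradient descent with light L2 regularization
    for i in range(n_drugs):
        for j in range(n_dis):
            err = R[i][j] - sum(U[i][f] * V[j][f] for f in range(k))
            for f in range(k):
                u, v = U[i][f], V[j][f]
                U[i][f] += lr * (err * v - reg * u)
                V[j][f] += lr * (err * u - reg * v)
end = loss()
```

After training, unobserved entries with high predicted scores are the repurposing candidates; AMVL layers multi-view similarity information on top of this basic idea.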
Recently, high-precision trajectory prediction of ballistic missiles in the boost phase has become a research hotspot. This paper proposes a trajectory prediction algorithm driven by data and knowledge (DKTP) to solve this problem. Firstly, the complex dynamic characteristics of a ballistic missile in the boost phase are analyzed in detail. Secondly, combining the missile dynamics model with the target gravity-turning model, a knowledge-driven target three-dimensional turning (T3) model is derived. Then, a BP neural network is trained on a boost-phase trajectory database covering typical scenarios to obtain a data-driven state parameter mapping (SPM) model. On this basis, an online trajectory prediction framework driven by data and knowledge is established: the SPM model predicts the target's three-dimensional turning coefficients from its current state, and the state at the next moment is obtained by combining these coefficients with the T3 model. Finally, simulation verification is carried out under various conditions. The simulation results show that the DKTP algorithm combines the advantages of data-driven and knowledge-driven approaches, improves interpretability, and reduces uncertainty, achieving high-precision trajectory prediction of ballistic missiles in the boost phase.
A decision model of knowledge transfer is presented on the basis of the characteristics of knowledge transfer in a big data environment. This model can determine the weight of knowledge transferred from another enterprise or from a big data provider. Numerous simulation experiments were implemented to test the efficiency of the optimization model. The results show that as the weight of knowledge from the big data provider increases, the total discounted expected profit increases and the transfer cost decreases. The calculated results are in accordance with the actual economic situation. The optimization model can provide useful decision support for enterprises in a big data environment.
In this paper, a methodology for estimating the leaf area index (LAI) is proposed by assimilating remotely sensed data into a crop model based on temporal and spatial knowledge. First, sensitive parameters of the crop model were calibrated by the Shuffled Complex Evolution method developed at the University of Arizona (SCE-UA) based on phenological information, referred to as temporal knowledge; the calibrated crop model serves as the forecast operator. Then, Taylor's mean value theorem was applied to extract spatial information from Moderate Resolution Imaging Spectroradiometer (MODIS) multi-scale data, which was used to calibrate the LAI inversion results of a two-layer canopy reflectance model (ACRM); the calibrated LAI result serves as the observation operator. Finally, an ensemble Kalman filter (EnKF) was used to assimilate the MODIS data into the crop model. The results showed that the method significantly improves the estimation accuracy of LAI, and the simulated LAI curves conform more closely to the actual crop growth than the MODIS LAI products. The root mean square error (RMSE) of the assimilated LAI is 0.3795, a reduction of 58.7% compared with simulation alone (0.9185), and the mean error is reduced by 92.6%, from 0.3563 to 0.0265. These experiments indicate that the proposed methodology is reasonable and accurate for estimating crop LAI.
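The EnKF analysis step at the heart of such an assimilation scheme can be sketched for a scalar state. This is the textbook perturbed-observation update with invented numbers, not the paper's actual configuration; in the paper, the forecast ensemble comes from the crop model and the observation from the ACRM-based LAI retrieval:

```python
import random

# One EnKF analysis step for a scalar state (e.g., LAI).
random.seed(1)
N = 100
forecast = [3.0 + random.gauss(0, 0.5) for _ in range(N)]  # prior ensemble
obs, obs_err = 2.0, 0.3                                    # MODIS-like LAI obs

mean_f = sum(forecast) / N
var_f = sum((x - mean_f) ** 2 for x in forecast) / (N - 1)
K = var_f / (var_f + obs_err ** 2)   # Kalman gain (observation = state here)

# Perturbed-observation update: each member assimilates a noisy copy
# of the observation, which keeps the analysis spread statistically correct.
analysis = [x + K * (obs + random.gauss(0, obs_err) - x) for x in forecast]
mean_a = sum(analysis) / N
var_a = sum((x - mean_a) ** 2 for x in analysis) / (N - 1)
```

The update pulls the ensemble mean toward the observation and shrinks its spread, which is exactly the error reduction the paper reports for the assimilated LAI.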
With market competition becoming fiercer, enterprises must update their products by constantly assimilating new big data knowledge and private knowledge to maintain their market shares at different time points in the big data environment. Typically, successive knowledge transfers influence one another if the time interval between them is not too long, so it is necessary to study the problem of continuous knowledge transfer in the big data environment. Building on research on one-time knowledge transfer, a model of continuous knowledge transfer is presented that accounts for the interaction between transfers and determines the optimal knowledge transfer time at different time points in the big data environment. Simulation experiments were performed by adjusting several parameters. The experimental results verified the model's validity and supported conclusions about its practical application value, providing more effective decisions for enterprises that must carry out continuous knowledge transfer in the big data environment.
A model called 'Entity-Roles' is proposed in this paper, in which the world of interest is viewed as a mathematical structure. With respect to this structure, a first-order (three-valued) logic language is constructed. Any world to be modelled can be logically specified in this language. The integrity constraints on the database and the deduction rules within the database world are derived from the proper axioms of the world being modelled.
Scholarly communication of knowledge is predominantly document-based in digital repositories, and researchers find it tedious to automatically capture and process the semantics among related articles. Despite the present digital era of big data, there is a lack of visual representations of the knowledge present in scholarly articles, and a time-saving approach for literature search and visual navigation is warranted. The majority of knowledge display tools cannot cope with current big data trends and fall short of the requirements of automatic knowledge representation, storage, and dynamic visualization. To address this limitation, the main aim of this paper is to model the visualization of unstructured data and explore the feasibility of achieving visual navigation for researchers to gain insight into the knowledge hidden in the scientific articles of digital repositories. Contemporary topics of research and practice, including modifiable risk factors leading to a dramatic increase in Alzheimer's disease and other forms of dementia, warrant deeper insight into the evidence-based knowledge available in the literature. The goal is to provide researchers with an easy, visual traversal through a digital repository of research articles. This paper takes the first step in proposing a novel integrated model using knowledge maps and next-generation graph datastores to achieve semantic visualization with domain-specific knowledge, such as dementia risk factors. The model facilitates a deep conceptual understanding of the literature by automatically establishing visual relationships among the knowledge extracted from the big data resources of research articles. It also serves as an automated tool for visual navigation through the knowledge repository, enabling faster identification of dementia risk factors reported in scholarly articles, and it supports semantic visualization and domain-specific knowledge discovery from a large digital repository and the associations within it. In this study, the implementation of the proposed model in the Neo4j graph data repository, along with the results achieved, is presented as a proof of concept. Using scholarly research articles on dementia risk factors as a case study, automatic knowledge extraction, storage, intelligent search, and visual navigation are illustrated. The implementation of contextual knowledge and its relationships for visual exploration by researchers shows promising results in the knowledge discovery of dementia risk factors. Overall, this study demonstrates the significance of semantic visualization with the effective use of knowledge maps and paves the way for extending visual modeling capabilities in the future.
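The traversal idea behind such a graph-backed knowledge map can be shown without a live Neo4j instance. The triples below are hypothetical examples of dementia-related relations, not extracted results from the study:

```python
# A tiny in-memory (subject, relation, object) store illustrating the
# kind of navigation a graph datastore supports. Triples are invented.
triples = [
    ("hypertension", "INCREASES_RISK_OF", "dementia"),
    ("physical_inactivity", "INCREASES_RISK_OF", "dementia"),
    ("exercise", "REDUCES", "hypertension"),
    ("dementia", "HAS_SUBTYPE", "alzheimers_disease"),
]

def neighbors(node, rel=None):
    """Outgoing edges from a node, optionally filtered by relation type."""
    return [(r, o) for s, r, o in triples
            if s == node and (rel is None or r == rel)]

def risk_factors(disease):
    """All subjects linked to the disease by INCREASES_RISK_OF."""
    return sorted(s for s, r, o in triples
                  if r == "INCREASES_RISK_OF" and o == disease)

factors = risk_factors("dementia")
```

In Neo4j the same queries would be expressed in Cypher over labeled nodes and typed relationships; the in-memory version conveys the traversal semantics.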
Since the early 1990s, significant progress in database technology has provided a new platform for emerging dimensions of data engineering. New models were introduced to utilize the data sets stored in the new generations of databases. These models have had a deep impact on evolving decision-support systems, but they suffer from a variety of practical problems when accessing real-world data sources. In particular, a type of data storage model based on data distribution theory has been increasingly used in recent years by large-scale enterprises, yet it is not compatible with existing decision-support models. This storage model keeps data at the different geographical sites where they are most regularly accessed, which leads to considerably less inter-site data transfer, can reduce data security issues in some circumstances, and significantly improves the speed of data manipulation transactions. The aim of this paper is to propose a new approach for supporting proactive decision-making that utilizes a workable data source management methodology. The new model can effectively organize and use complex data sources, even when they are distributed across different sites in fragmented form. At the same time, it provides a high level of management decision support through intelligent use of the data collections, utilizing new methods for synthesizing useful knowledge. The results of an empirical study to evaluate the model are provided.
Image-guided computer aided surgery systems (ICAS) contribute to the safety and success of surgical operations by displaying anatomical structures and presenting correlative information to surgeons during the operation. Based on an analysis of the requirements for ICAS, a new concept of clinical knowledge-based ICAS was proposed. Designing a reasonable data structure model is essential for realizing this concept, because traditional data structures are limited in expressing and reusing clinical knowledge such as the location of an anatomical object, the topological relations among anatomical objects, and correlative clinical attributes. A data structure model called mixed adjacency lists by octree-path-chain (MALOC) is outlined, which can combine a patient's images with clinical knowledge and can efficiently locate the instrument and search for object information. The efficiency of the data structure was analyzed and experimental results were compared against other traditional data structures. The results of a nasal surgery experiment show that MALOC is a proper model for clinical knowledge-based ICAS: it locates the operative instrument precisely and provides surgeons with real-time, operation-correlative knowledge and information, with clear advantages for the safety and success of surgical operations.
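The MALOC structure itself is not specified in detail in the abstract, but the octree-path idea it builds on, encoding a 3D position as a chain of octant indices so that spatial lookups become prefix matches, can be sketched generically:

```python
def octree_path(point, depth):
    """Encode a point in the unit cube as an octant path (one index 0-7
    per level). A generic sketch of octree addressing, not the MALOC
    structure itself: bit 0 = upper half in x, bit 1 = y, bit 2 = z."""
    x, y, z = point
    path = []
    lo = [0.0, 0.0, 0.0]   # lower corner of the current cell
    size = 1.0
    for _ in range(depth):
        size /= 2.0
        octant = 0
        for axis, v in enumerate((x, y, z)):
            if v >= lo[axis] + size:   # point lies in the upper half
                octant |= 1 << axis
                lo[axis] += size
        path.append(octant)
    return path

p = octree_path((0.7, 0.2, 0.9), 3)
```

Nearby points share a path prefix, which is what makes such a chain useful for quickly locating an instrument relative to stored anatomical objects.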
In order to find the completeness threshold, which offers a practical way of making bounded model checking complete, an over-approximation of the completeness threshold is presented. First, a past-tense operator is introduced into a linear temporal logic of knowledge, yielding a new temporal epistemic logic, LTLKP, which can naturally and precisely describe a system's reliability. Secondly, a set of algorithms is designed, based on graph theory, to calculate the maximal reachable depth and the length of the longest loop-free path in the structure. Finally, theorems are proposed to show how to approximate the completeness threshold using the diameter and the recurrence diameter. The proposed work resolves the completeness threshold problem so that the completeness of bounded model checking can be guaranteed.
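The two graph quantities used for the over-approximation can be computed directly on a small example. The graph below is a toy transition structure, not a real model-checking instance, and the longest-simple-path search is exponential in general, which is exactly why over-approximations matter in practice:

```python
from collections import deque

# Toy transition graph chosen so the two bounds differ.
graph = {0: [1, 3], 1: [2], 2: [3], 3: []}

def diameter(g):
    """Longest shortest path between any reachable pair (via BFS)."""
    best = 0
    for src in g:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in g[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

def recurrence_diameter(g):
    """Length of the longest simple (loop-free) path; brute-force DFS,
    fine only for tiny graphs."""
    def dfs(u, seen):
        return max((1 + dfs(v, seen | {v}) for v in g[u] if v not in seen),
                   default=0)
    return max(dfs(s, {s}) for s in g)

d = diameter(graph)
rd = recurrence_diameter(graph)
```

Here the shortcut edge 0 → 3 makes the diameter smaller than the recurrence diameter, illustrating why the two give different completeness bounds.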
In order to solve the problem of modeling product configuration knowledge at the semantic level to successfully implement the mass customization strategy, an approach to ontology-based configuration knowledge modeling, combining semantic web technologies, was proposed. A general configuration ontology was developed to provide a common concept structure for modeling the configuration knowledge and rules of specific product domains. The OWL web ontology language and the semantic web rule language (SWRL) were used to formally represent the configuration ontology, domain configuration knowledge, and rules, to enhance the consistency, maintainability, and reusability of all the configuration knowledge. The configuration knowledge modeling of a customizable personal computer family shows that the approach can provide explicit, computer-understandable knowledge semantics for specific product configuration domains and can efficiently support automatic configuration tasks for complex products.
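An SWRL-style configuration rule ultimately reduces to a machine-checkable predicate over component facts. Here is a sketch in plain Python with invented component names and constraints, standing in for the paper's OWL/SWRL ontology rather than reproducing it:

```python
# SWRL-style configuration rules rendered as plain predicates over a
# PC configuration. Components, attributes, and limits are invented.
config = {
    "cpu":   {"socket": "AM5", "tdp_w": 105},
    "board": {"socket": "AM5", "max_tdp_w": 120},
    "psu":   {"watts": 550},
}

rules = [
    # Each rule maps a configuration to an error message or None.
    lambda c: None if c["cpu"]["socket"] == c["board"]["socket"]
              else "CPU and board sockets differ",
    lambda c: None if c["cpu"]["tdp_w"] <= c["board"]["max_tdp_w"]
              else "CPU TDP exceeds board limit",
    lambda c: None if c["psu"]["watts"] >= 2 * c["cpu"]["tdp_w"]
              else "PSU undersized",
]

def check(config):
    """Run every rule; a valid configuration yields an empty error list."""
    return [msg for rule in rules if (msg := rule(config)) is not None]

errors = check(config)
```

In the actual approach these constraints live in the ontology as SWRL rules, so a reasoner can check them uniformly across product domains rather than hard-coding them per product.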
For an exact description of thread information in question and answer (QnA) web forums, it is proposed to construct a QnA knowledge presentation model for the English language, and an entire solution for the QnA knowledge system is then presented, covering data gathering, platform building, and application design. With a pre-defined dictionary and grammatical analysis, the model draws semantic information, grammatical information, and knowledge confidence into IR methods, in the form of statement sets and term sets with semantic links. Theoretical analysis shows that the statement model can provide an exact presentation of QnA knowledge, breaking through the limits of the original QnA patterns and adapting to various query demands; the semantic links between terms can assist the statement model in deducing new knowledge from existing knowledge. The model makes use of both information retrieval (IR) and natural language processing (NLP) features, strengthening its knowledge presentation ability. Many knowledge-based applications built upon this model can be improved, providing better performance.
Virtual organizations use information technology to achieve closer integration and better management of business relationships between internal and external parties. Many issues are emerging in virtual organizations, and one of them is knowledge sharing. Knowledge sharing, the core of knowledge management, plays a key role in a virtual organization. However, game-theoretic exploration of knowledge sharing mechanisms in virtual organizations is seldom published. In this study, the knowledge sharing mechanism in virtual organization enterprises is explored using game theory from two aspects, based on the features of knowledge sharing in such enterprises. Finally, a critical model of the knowledge sharing mechanism in virtual organizations is presented from the perspective of game analysis.
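The game-theoretic angle can be illustrated with a 2×2 sharing game and a pure-strategy Nash check. The payoffs below are invented prisoner's-dilemma-style numbers, not taken from the study:

```python
# A 2x2 knowledge-sharing game between two member enterprises of a
# virtual organization. Strategies: 0 = share, 1 = hoard. Payoffs invented.
payoffs = {  # (row_strategy, col_strategy) -> (row_payoff, col_payoff)
    (0, 0): (4, 4),   # mutual sharing: synergy for both
    (0, 1): (1, 5),   # the hoarder free-rides on the sharer
    (1, 0): (5, 1),
    (1, 1): (2, 2),   # mutual hoarding: no synergy
}

def pure_nash(payoffs):
    """All strategy pairs where neither player gains by deviating."""
    eq = []
    for (r, c), (pr, pc) in payoffs.items():
        row_ok = all(payoffs[(r2, c)][0] <= pr for r2 in (0, 1))
        col_ok = all(payoffs[(r, c2)][1] <= pc for c2 in (0, 1))
        if row_ok and col_ok:
            eq.append((r, c))
    return eq

equilibria = pure_nash(payoffs)
```

With these payoffs, mutual hoarding is the unique equilibrium even though mutual sharing pays both players more, which is precisely why a sharing mechanism (incentives or penalties) is needed in a virtual organization.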
With the explosive growth of available data, there is an urgent need to develop continuous data mining, which markedly reduces manual interaction. A novel model for data mining in an evolving environment is proposed. First, some valid mining task schedules are generated; then autonomous and local mining are executed periodically; finally, previous results are merged and refined. The framework based on this model creates a communication mechanism to incorporate domain knowledge into the continuous process through an ontology service. The local and merge mining are transparent to the end user and to heterogeneous data sources by way of the ontology. Experiments suggest that the framework is useful in guiding the continuous mining process.
Funding: supported in part by the National Natural Science Foundation of China (62441605).
Abstract: In the context of the“Two New”initiatives, high school mathematics instruction still grapples with three interlocking problems: knowledge fragmentation, limited cultivation of higher-order thinking, and weak alignment among teaching, learning, and assessment. To counter these challenges, we propose an Inquiry-Construction Double-Helix model that uses a domain-specific knowledge graph as its cognitive spine. The model interweaves two mutually reinforcing strands, student-driven inquiry and systematic knowledge construction, into a double-helix trajectory analogous to DNA replication. The Inquiry Strand is launched by authentic, situation-based tasks that shepherd students through the complete cycle: question → hypothesis → verification → reflection. The Construction Strand simultaneously externalizes, restructures, and internalizes core disciplinary concepts via visual, hierarchical knowledge graphs. Within the flow of a lesson, the two strands alternately dominate and scaffold each other, securing the co-development of conceptual understanding, procedural fluency, and mathematical literacy. Empirical evidence demonstrates that this model significantly enhances students' systematic knowledge integration, problem-solving transfer ability, and core mathematical competencies, offering a replicable and operable teaching paradigm and practical pathway for deepening high school mathematics classroom reform.
Funding: Supported by the National Key Research and Development Program (Grant No. 2024YFB3312700), the National Natural Science Foundation of China (Grant No. 52405541), and the Changzhou Municipal Sci & Tech Program (Grant No. CJ20241131).
Abstract: Under the paradigm of Industry 5.0, intelligent manufacturing transcends mere efficiency enhancement by emphasizing human-machine collaboration, where human expertise plays a central role in assembly processes. Despite advancements in intelligent and digital technologies, assembly process design still relies heavily on manual knowledge reuse, which causes inefficiencies and inconsistent quality in process documentation. To address these issues, this paper proposes a knowledge push method for complex product assembly process design based on a distillation-model-based dynamically enhanced graph and a Bayesian network. First, an initial knowledge graph is constructed using a BERT-BiLSTM-CRF model trained with integrated human expertise and a fine-tuned large language model. Then, a confidence-based dynamic weighted fusion strategy is employed to achieve dynamic incremental construction of the knowledge graph with low resource consumption. Subsequently, a Bayesian network model is constructed based on the relationships between assembly components, assembly features, and operations. Bayesian network reasoning is then used to push assembly process knowledge under different design requirements. Finally, the feasibility of the Bayesian network construction method and the effectiveness of Bayesian network reasoning are verified through a specific example, significantly improving the utilization of assembly process knowledge and the efficiency of assembly process design.
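The Bayesian-network reasoning step described above can be illustrated, under strong simplifying assumptions, with exact inference by enumeration on a toy two-node network. The feature names, prior, and conditional probabilities are invented for the example; the paper's network relates components, features, and operations.

```python
# Toy network: an assembly feature (e.g. a threaded hole) influences the
# recommended operation (tapping vs. reaming). All numbers are assumptions.
p_feature = {"threaded_hole": 0.3, "plain_hole": 0.7}          # prior
p_op_given_feature = {                                          # CPT
    "threaded_hole": {"tapping": 0.9, "reaming": 0.1},
    "plain_hole": {"tapping": 0.05, "reaming": 0.95},
}

def posterior_feature(op: str) -> dict:
    """P(feature | observed operation) via Bayes' rule by enumeration."""
    joint = {f: p_feature[f] * p_op_given_feature[f][op] for f in p_feature}
    z = sum(joint.values())                  # normalizing constant
    return {f: v / z for f, v in joint.items()}

post = posterior_feature("tapping")
print(post)
```

In the pushed-knowledge setting, the same computation runs in the other direction: given design requirements (features), the network ranks candidate operations by posterior probability.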
Funding: Supported by the Innovation Fund for Medical Sciences of the Chinese Academy of Medical Sciences (2021-I2M-1-033) and the National Key Research and Development Program of China (2022YFF0711900).
Abstract: Recent advancements in large language models (LLMs) have driven remarkable progress in text processing, opening new avenues for medical knowledge discovery. In this study, we present ERQA, a mEdical knowledge Retrieval and Question-Answering framework powered by an enhanced LLM that integrates a semantic vector database and a curated literature repository. The ERQA framework leverages domain-specific incremental pretraining and conducts supervised fine-tuning on medical literature, enabling retrieval and question-answering (QA) tasks to be completed with high precision. Performance evaluations implemented on the coronavirus disease 2019 (COVID-19) and TripClick datasets demonstrate the robust capabilities of ERQA across multiple tasks. On the COVID-19 dataset, ERQA-13B achieves state-of-the-art retrieval metrics, with normalized discounted cumulative gain at top 10 (NDCG@10) = 0.297, recall at top 10 (Recall@10) = 0.347, and mean reciprocal rank (MRR) = 0.370; it also attains strong abstract summarization performance, with a recall-oriented understudy for gisting evaluation (ROUGE)-1 score of 0.434, and QA performance, with a bilingual evaluation understudy (BLEU)-1 score of 7.851. The comparable performance achieved on the TripClick dataset further underscores the adaptability of ERQA across diverse medical topics. These findings suggest that ERQA represents a significant step toward efficient biomedical knowledge retrieval and QA.
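The retrieval metrics quoted above (NDCG@10, MRR) have standard definitions that can be computed in a few lines. This is a generic sketch with toy binary relevance judgments, not ERQA's evaluation code.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: relevance discounted by log2 of rank."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg_at_k(relevances, k=10):
    """DCG of the ranking divided by the DCG of the ideal reordering."""
    denom = dcg(sorted(relevances, reverse=True)[:k])
    return dcg(relevances[:k]) / denom if denom else 0.0

def mrr(ranked_relevant_flags):
    """Reciprocal rank of the first relevant result (0 if none)."""
    for i, hit in enumerate(ranked_relevant_flags, start=1):
        if hit:
            return 1.0 / i
    return 0.0

# Toy ranking: binary relevance of the top-5 retrieved documents.
rels = [0, 1, 0, 1, 0]
print(ndcg_at_k(rels, k=10), mrr(rels))
```

In practice these per-query values are averaged over the full query set, which is how figures such as NDCG@10 = 0.297 arise.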
Funding: Supported by the Education and Research Project of Fujian Province for Young and Middle-aged Teachers (JAT241035) and the High-level Talent Project of Fujian Medical University (XRCZX2024036).
Abstract: Objectives: Chemotherapy-induced nausea and vomiting (CINV) is a common adverse effect among breast cancer patients, significantly affecting quality of life. Existing evidence on the prevention, assessment, and management of this condition is fragmented and inconsistent. This study constructed a CINV knowledge graph using a large language model (LLM) to integrate nursing and medical evidence, thereby supporting systematic clinical decision-making. Methods: A top-down approach was adopted. 1) Knowledge base preparation: Nine databases and eight guideline repositories were searched up to October 2024 to include guidelines, evidence summaries, expert consensuses, and systematic reviews screened by two researchers. 2) Schema design: Referring to the Unified Medical Language System, the Systematized Nomenclature of Medicine-Clinical Terms, and the Nursing Intervention Classification, entity and relation types were defined to build the ontology schema. 3) LLM-based extraction and integration: Using the Qwen model under the CRISPE framework, named entity recognition, relation extraction, disambiguation, and fusion were conducted to generate triples and visualize them in Neo4j. Four expert rounds ensured semantic and logical consistency. Model performance was evaluated using precision, recall, F1-score, and the 95% confidence interval (95% CI) in Python 3.11. Results: A total of 47 studies were included (18 guidelines, two expert consensuses, two evidence summaries, and 25 systematic reviews). The Qwen model extracted 273 entities and 289 relations; after expert validation, 238 entities and 242 relations were retained, forming 244 triples. The ontology comprised nine entity types and eight relation types. The F1-scores for named entity recognition and relation extraction were 82.97% (95% CI: 0.820, 0.839) and 85.54% (95% CI: 0.844, 0.867), respectively. The average node degree was 2.03, with no isolated nodes. Conclusion: The LLM-based CINV knowledge graph achieved structured integration of nursing and medical evidence, offering a novel, data-driven tool to support clinical nursing decision-making and advance intelligent healthcare.
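The precision/recall/F1 evaluation of entity extraction reported above follows the usual set-overlap definitions. A generic sketch (the entity names are invented for illustration, not taken from the study's data):

```python
def prf1(predicted: set, gold: set):
    """Precision, recall, and F1 for extracted entities vs. a gold set."""
    tp = len(predicted & gold)                      # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical example: model output vs. expert-validated entities.
gold = {"ondansetron", "dexamethasone", "nausea", "vomiting"}
pred = {"ondansetron", "dexamethasone", "nausea", "acupressure"}
print(prf1(pred, gold))
```

Confidence intervals such as the reported 95% CIs are then typically obtained by bootstrap resampling over documents.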
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62101087), the China Postdoctoral Science Foundation (Grant No. 2021MD703942), the Chongqing Postdoctoral Research Project Special Funding, China (Grant No. 2021XM2016), the Science Foundation of the Chongqing Municipal Commission of Education, China (Grant No. KJQN202100642), and the Chongqing Natural Science Foundation, China (Grant No. cstc2021jcyj-msxmX0834).
Abstract: Drug repurposing offers a promising alternative to traditional drug development and significantly reduces costs and timelines by identifying new therapeutic uses for existing drugs. However, current approaches often rely on limited data sources and simplistic hypotheses, which restrict their ability to capture the multi-faceted nature of biological systems. This study introduces adaptive multi-view learning (AMVL), a novel methodology that integrates chemical-induced transcriptional profiles (CTPs), knowledge graph (KG) embeddings, and large language model (LLM) representations to enhance drug repurposing predictions. AMVL incorporates an innovative similarity matrix expansion strategy and leverages multi-view learning (MVL), matrix factorization, and ensemble optimization techniques to integrate heterogeneous multi-source data. Comprehensive evaluations on benchmark datasets (Fdataset, Cdataset, and Ydataset) and the large-scale iDrug dataset demonstrate that AMVL outperforms state-of-the-art (SOTA) methods, achieving superior accuracy in predicting drug-disease associations across multiple metrics. Literature-based validation further confirmed the model's predictive capabilities, with seven of the top ten predictions corroborated by post-2011 evidence. To promote transparency and reproducibility, all data and code used in this study were open-sourced, providing resources for processing CTPs, KG, and LLM-based similarity calculations, along with the complete AMVL algorithm and benchmarking procedures. By unifying diverse data modalities, AMVL offers a robust and scalable solution for accelerating drug discovery, fostering advancements in translational medicine and the integration of multi-omics data. We aim to inspire further innovations in multi-source data integration and support the development of more precise and efficient strategies for advancing drug discovery and translational medicine.
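One ingredient named above, matrix factorization over a drug-disease association matrix, can be sketched with plain gradient descent on observed entries. This toy stand-in (invented data, no multi-view fusion or ensemble step) only illustrates how unobserved associations get scored.

```python
import random

def factorize(R, k=2, steps=2000, lr=0.05, reg=0.01, seed=0):
    """Factor R ~ U @ V^T by gradient descent on observed entries only;
    None marks an unobserved drug-disease pair to be predicted later."""
    rng = random.Random(seed)
    n, m = len(R), len(R[0])
    U = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(m)]
    for _ in range(steps):
        for i in range(n):
            for j in range(m):
                if R[i][j] is None:
                    continue
                err = R[i][j] - sum(U[i][f] * V[j][f] for f in range(k))
                for f in range(k):
                    u, v = U[i][f], V[j][f]
                    U[i][f] += lr * (err * v - reg * u)
                    V[j][f] += lr * (err * u - reg * v)
    return U, V

# Toy drug x disease matrix: 1 = known association, 0 = known negative.
R = [[1, 0, None],
     [1, 0, 1],
     [0, 1, None]]
U, V = factorize(R)
# Score an unobserved pair (drug 0, disease 2) from the learned factors.
print(sum(U[0][f] * V[2][f] for f in range(2)))
```

AMVL's contribution lies in combining several such views (CTP, KG, LLM similarities) rather than in this single-view baseline.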
Funding: The National Natural Science Foundation of China (Grant Nos. 12072090 and 12302056), which provided funds for conducting the experiments.
Abstract: Recently, high-precision trajectory prediction of ballistic missiles in the boost phase has become a research hotspot. This paper proposes a trajectory prediction algorithm driven by data and knowledge (DKTP) to solve this problem. First, the complex dynamic characteristics of a ballistic missile in the boost phase are analyzed in detail. Second, combining the missile dynamics model with the target gravity turning model, a knowledge-driven target three-dimensional turning (T3) model is derived. Then, a BP neural network is trained on a boost-phase trajectory database covering typical scenarios to obtain a data-driven state parameter mapping (SPM) model. On this basis, an online trajectory prediction framework driven by data and knowledge is established. Based on the SPM model, the three-dimensional turning coefficients of the target are predicted from its current state, and the state of the target at the next moment is obtained by combining the T3 model. Finally, simulation verification is carried out under various conditions. The simulation results show that the DKTP algorithm combines the advantages of data-driven and knowledge-driven approaches, improves interpretability, and reduces uncertainty, achieving high-precision trajectory prediction of ballistic missiles in the boost phase.
Funding: Supported by NSFC (Grant No. 71373032), the Natural Science Foundation of Hunan Province (Grant No. 12JJ4073), the Scientific Research Fund of the Hunan Provincial Education Department (Grant No. 11C0029), the Educational Economy and Financial Research Base of Hunan Province (Grant No. 13JCJA2), and the Project of the China Scholarship Council for Overseas Studies (201208430233, 201508430121).
Abstract: A decision model of knowledge transfer is presented on the basis of the characteristics of knowledge transfer in a big data environment. This model can determine the weight of knowledge transferred from another enterprise or from a big data provider. Numerous simulation experiments are implemented to test the efficiency of the optimization model. The simulation results show that as the weight of knowledge from the big data provider increases, the total discounted expectation of profits increases and the transfer cost is reduced. The calculated results are in accordance with the actual economic situation. The optimization model can provide useful decision support for enterprises in a big data environment.
Funding: Under the auspices of the Major State Basic Research Development Program of China (No. 2007CB714407), the National Natural Science Foundation of China (No. 40801070), and the Action Plan for West Development Program of the Chinese Academy of Sciences (No. KZCX2-XB2-09).
Abstract: In this paper, a methodology for Leaf Area Index (LAI) estimation is proposed that assimilates remotely sensed data into a crop model based on temporal and spatial knowledge. First, sensitive parameters of the crop model were calibrated by the Shuffled Complex Evolution method developed at the University of Arizona (SCE-UA) based on phenological information, which is called temporal knowledge; the calibrated crop model is used as the forecast operator. Then, Taylor's mean value theorem was applied to extract spatial information from Moderate Resolution Imaging Spectroradiometer (MODIS) multi-scale data, which was used to calibrate the LAI inversion results of A two-layer Canopy Reflectance Model (ACRM); the calibrated LAI result was used as the observation operator. Finally, an Ensemble Kalman Filter (EnKF) was used to assimilate MODIS data into the crop model. The results showed that the method can significantly improve the estimation accuracy of LAI, and the simulated LAI curves conform more closely to the actual crop growth situation than MODIS LAI products. The root mean square error (RMSE) of LAI obtained by assimilation is 0.3795, a 58.7% reduction compared with that of simulation (0.9185), and the mean error is reduced by 92.6% after assimilation, from 0.3563 to 0.0265. These experiments indicate that the methodology proposed in this paper is reasonable and accurate for estimating crop LAI.
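The percentage reductions in the abstract can be reproduced arithmetically from its RMSE and mean-error figures: reducing 0.9185 to 0.3795 is a 58.7% reduction, and 0.3563 to 0.0265 is 92.6%. A short sketch (the `rmse` helper is generic, not the study's code):

```python
import math

def rmse(pred, obs):
    """Root mean square error between two equal-length sequences."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))

def percent_reduction(before: float, after: float) -> float:
    return 100.0 * (before - after) / before

# Figures taken from the abstract: simulation vs. assimilation.
print(round(percent_reduction(0.9185, 0.3795), 1))  # RMSE reduction
print(round(percent_reduction(0.3563, 0.0265), 1))  # mean-error reduction
```

The check confirms that the lower RMSE (0.3795) belongs to the assimilation run and the higher one (0.9185) to the open-loop simulation.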
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 71704016, 71331008), the Natural Science Foundation of Hunan Province (Grant No. 2017JJ2267), the Key Projects of the Chinese Ministry of Education (17JZD022), and the Project of the China Scholarship Council for Overseas Studies (201208430233, 201508430121), which are gratefully acknowledged.
Abstract: With market competition becoming fiercer, enterprises must update their products by constantly assimilating new big data knowledge and private knowledge to maintain their market shares at different time points in the big data environment. Typically, there is mutual influence between knowledge transfers if the time interval is not too long, so it is necessary to study the problem of continuous knowledge transfer in the big data environment. Based on research on one-time knowledge transfer, a model of continuous knowledge transfer is presented, which can consider the interaction between knowledge transfers and determine the optimal knowledge transfer time at different time points in the big data environment. Simulation experiments were performed by adjusting several parameters. The experimental results verified the model's validity and supported conclusions regarding its practical application value. The results can provide more effective decisions for enterprises that must carry out continuous knowledge transfer in the big data environment.
Abstract: A model called 'Entity-Roles' is proposed in this paper, in which the world of interest is viewed as a mathematical structure. With respect to this structure, a first-order (three-valued) logic language is constructed. Any world to be modelled can be logically specified in this language. The integrity constraints on the database and the deduction rules within the database world are derived from the proper axioms of the world being modelled.
Abstract: Scholarly communication of knowledge is predominantly document-based in digital repositories, and researchers find it tedious to automatically capture and process the semantics among related articles. Despite the present digital era of big data, there is a lack of visual representations of the knowledge present in scholarly articles, and a time-saving approach for literature search and visual navigation is warranted. The majority of knowledge display tools cannot cope with current big data trends and pose limitations in meeting the requirements of automatic knowledge representation, storage, and dynamic visualization. To address this limitation, the main aim of this paper is to model the visualization of unstructured data and explore the feasibility of achieving visual navigation for researchers to gain insight into the knowledge hidden in scientific articles of digital repositories. Contemporary topics of research and practice, including modifiable risk factors leading to a dramatic increase in Alzheimer's disease and other forms of dementia, warrant deeper insight into the evidence-based knowledge available in the literature. The goal is to provide researchers with an easy, visual traversal through a digital repository of research articles. This paper takes the first step in proposing a novel integrated model using knowledge maps and next-generation graph datastores to achieve semantic visualization with domain-specific knowledge, such as dementia risk factors. The model facilitates a deep conceptual understanding of the literature by automatically establishing visual relationships among the knowledge extracted from the big data resources of research articles. It also serves as an automated tool for visual navigation through the knowledge repository for faster identification of dementia risk factors reported in scholarly articles. Further, it facilitates semantic visualization and domain-specific knowledge discovery from a large digital repository and its associations. In this study, the implementation of the proposed model in the Neo4j graph data repository, along with the results achieved, is presented as a proof of concept. Using scholarly research articles on dementia risk factors as a case study, automatic knowledge extraction, storage, intelligent search, and visual navigation are illustrated. The implementation of contextual knowledge and its relationships for visual exploration by researchers shows promising results in the knowledge discovery of dementia risk factors. Overall, this study demonstrates the significance of semantic visualization with the effective use of knowledge maps and paves the way for extending visual modeling capabilities in the future.
Abstract: Since the early 1990s, significant progress in database technology has provided a new platform for emerging dimensions of data engineering. New models were introduced to utilize the data sets stored in the new generations of databases. These models have had a deep impact on evolving decision-support systems, but they suffer from a variety of practical problems when accessing real-world data sources. Specifically, a type of data storage model based on data distribution theory has been increasingly used in recent years by large-scale enterprises, although it is not compatible with existing decision-support models. This data storage model stores data in the different geographical sites where they are most regularly accessed. This leads to considerably less inter-site data transfer, which can reduce data security issues in some circumstances and also significantly improve the speed of data manipulation transactions. The aim of this paper is to propose a new approach for supporting proactive decision-making that utilizes a workable data source management methodology. The new model can effectively organize and use complex data sources, even when they are distributed across different sites in fragmented form. At the same time, the new model provides a very high level of intellectual management decision support through intelligent use of the data collections, utilizing new smart methods for synthesizing useful knowledge. The results of an empirical study to evaluate the model are provided.
Funding: The Shanghai Municipal Education Commission Fund for Young Scholars (No. 02BQ23) and the SEC E-Institute: Shanghai High Institutions Grid Project (No. 200304).
Abstract: An image-guided computer-aided surgery system (ICAS) contributes to the safety and success of surgical operations by displaying anatomical structures and showing correlative information to surgeons during an operation. Based on an analysis of the requirements for ICAS, a new concept of clinical knowledge-based ICAS is proposed. Designing a reasonable data structure model is essential for realizing this new concept. Traditional data structures are limited in expressing and reusing clinical knowledge such as the location of an anatomical object, the topological relations of anatomical objects, and correlative clinical attributes. A data structure model called mixed adjacency lists by octree-path-chain (MALOC) is outlined, which can combine the patient's images with clinical knowledge, as well as efficiently locate the instrument and search for the objects' information. The efficiency of the data structure is analyzed, and experimental results are given in comparison with other traditional data structures. The result of a nasal surgery experiment proves that MALOC is a proper model for clinical knowledge-based ICAS, with advantages in both locating the operative instrument precisely and providing surgeons with real-time, operation-correlative information. It is shown that a clinical knowledge-based ICAS with the MALOC model has advantages in terms of the safety and success of surgical operations, helping to accurately locate the operative instrument and to provide operation-correlative knowledge and information to surgeons during operations.
Funding: The National Natural Science Foundation of China (No. 10974093), the Scientific Research Foundation for Senior Personnel of Jiangsu University (No. 07JDG014), and the Natural Science Foundation of Higher Education Institutions of Jiangsu Province (No. 08KJD520015).
Abstract: In order to find the completeness threshold, which offers a practical method of making bounded model checking complete, an over-approximation of the completeness threshold is presented. First, the past tense operator is introduced into a linear logic of knowledge, yielding a new temporal epistemic logic, LTLKP, so that LTLKP can naturally and precisely describe a system's reliability. Second, a set of prior algorithms is designed to calculate the maximal reachable depth and the length of the longest loop-free path in the structure, based on graph theory. Finally, some theorems are proposed to show how to approximate the completeness threshold with the diameter and recurrence diameter. The proposed work resolves the completeness threshold problem so that the completeness of bounded model checking can be guaranteed.
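The two graph quantities used above to bound the completeness threshold, the diameter (computable by breadth-first search) and the recurrence diameter (the longest loop-free path, which requires exponential search in general), can be sketched on a toy transition structure. The example graph is an assumption, not from the paper.

```python
from collections import deque

def bfs_ecc(graph, src):
    """Eccentricity of src: longest shortest-path distance it can reach."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def diameter(graph):
    return max(bfs_ecc(graph, s) for s in graph)

def longest_simple_path(graph):
    """Recurrence diameter: length of the longest loop-free path
    (exhaustive DFS; only viable for tiny structures)."""
    best = 0
    def dfs(u, visited, length):
        nonlocal best
        best = max(best, length)
        for v in graph[u]:
            if v not in visited:
                dfs(v, visited | {v}, length + 1)
    for s in graph:
        dfs(s, {s}, 0)
    return best

# Small directed transition structure (assumed example).
g = {0: [1], 1: [2], 2: [0, 3], 3: []}
print(diameter(g), longest_simple_path(g))
```

The diameter always lower-bounds the recurrence diameter, which is why the cheaper BFS quantity is the preferred over-approximation ingredient when it suffices.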
Funding: The National Natural Science Foundation of China (No. 70471023).
Abstract: In order to solve the problem of modeling product configuration knowledge at the semantic level to successfully implement the mass customization strategy, an approach to ontology-based configuration knowledge modeling, combining semantic web technologies, is proposed. A general configuration ontology was developed to provide a common concept structure for modeling the configuration knowledge and rules of specific product domains. The OWL web ontology language and the semantic web rule language (SWRL) were used to formally represent the configuration ontology, domain configuration knowledge, and rules, thereby enhancing the consistency, maintainability, and reusability of all the configuration knowledge. The configuration knowledge modeling of a customizable personal computer family shows that the approach can provide explicit, computer-understandable knowledge semantics for specific product configuration domains and can efficiently support automatic configuration tasks for complex products.
Funding: The Microsoft Research Asia Internet Services in Academic Research Fund (No. FY07-RES-OPP-116) and the Tianjin Technological Development Program Project (No. 06YFGZGX05900).
Abstract: For an exact description of thread information in question-and-answer (QnA) web forums, it is proposed to construct a QnA knowledge presentation model in the English language; an entire solution for the QnA knowledge system is then presented, including data gathering, platform building, and application design. With a pre-defined dictionary and grammatical analysis, the model draws semantic information, grammatical information, and knowledge confidence into IR methods, in the form of statement sets and term sets with semantic links. Theoretical analysis shows that the statement model can provide an exact presentation of QnA knowledge, breaking through the limits of original QnA patterns and adapting to various query demands; the semantic links between terms can assist the statement model in deducing new knowledge from existing knowledge. The model makes use of both information retrieval (IR) and natural language processing (NLP) features, strengthening its knowledge presentation ability. Many knowledge-based applications built upon this model can be improved, providing better performance.
Funding: Supported by the Natural Science Foundation of Fujian Province (2012D135).
Abstract: A virtual organization uses information technology to achieve closer integration and better management of business relationships between internal and external parties. There are many emerging issues in virtual organizations, and one of them is knowledge sharing. Knowledge sharing, which is the core of knowledge management, plays a key role in virtual organizations. However, game-theoretic exploration of knowledge sharing mechanisms in virtual organizations is seldom published. In this study, the knowledge sharing mechanism in virtual organization enterprises is explored using game theory from two aspects, based on the features of knowledge sharing in virtual organization enterprises. Finally, a critical model of the knowledge sharing mechanism in virtual organizations is also presented from the perspective of game analysis.
Funding: Supported by the National Natural Science Foundation of China (60173058, 70372024).
Abstract: With the explosive growth of available data, there is an urgent need to develop continuous data mining, which markedly reduces manual interaction. A novel model for data mining in an evolving environment is proposed. First, some valid mining-task schedules are generated; then autonomous and local mining are executed periodically; finally, previous results are merged and refined. The framework based on this model creates a communication mechanism to incorporate domain knowledge into the continuous process through an ontology service. The local and merge mining are made transparent to the end user and to heterogeneous data sources by the ontology. Experiments suggest that the framework is useful in guiding the continuous mining process.