A model, called 'Entity-Roles', is proposed in this paper in which the world of interest is viewed as some mathematical structure. With respect to this structure, a first-order (three-valued) logic language is constructed. Any world to be modelled can be logically specified in this language. The integrity constraints on the database and the deduction rules within the database world are derived from the proper axioms of the world being modelled.
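As one hedged concretization of such a three-valued logic (the abstract does not fix the semantics; Kleene's strong connectives are assumed here), a minimal sketch:

```python
# Kleene strong three-valued connectives: an assumed concretization of the
# paper's three-valued logic. Truth values are True, False, and None (unknown).
def k_not(a):
    return None if a is None else (not a)

def k_and(a, b):
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def k_or(a, b):
    return k_not(k_and(k_not(a), k_not(b)))   # De Morgan dual of k_and

# An integrity constraint evaluates to True/False/None against a database state:
print(k_and(True, None))   # None: constraint satisfaction is unknown
print(k_or(True, None))    # True: satisfied regardless of the unknown part
```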
Wetting deformation in earth-rockfill dams is a critical factor influencing dam safety. Although numerous mathematical models have been developed to describe this phenomenon, most rely on empirical formulations and lack prior knowledge of model parameters, which is essential for Bayesian parameter inversion to enhance accuracy and reduce uncertainty. This study introduces a data-driven approach to establishing prior knowledge of model parameters for earth-rockfill dams. Driving factors are used to determine the potential range of model parameters, and settlement changes within this range are calculated. The results are iteratively compared with actual monitoring data until the calculated range encompasses the observed data, thereby providing prior knowledge of the model parameters. The proposed method is applied to the right-bank earth-rockfill dam of Danjiangkou. With a Gibbs sample size of 30,000, the method effectively calibrates the prior knowledge of the wetting model parameters, achieving a root mean square error (RMSE) of 5.18 mm for the settlement predictions. By comparison, non-informative priors with sample sizes of 30,000 and 50,000 yield significantly larger RMSE values of 11.97 mm and 16.07 mm, respectively. The computational efficiency of the proposed method is also demonstrated: the inversion takes 902 s for 30,000 samples, notably shorter than the 1026 s and 1558 s required for non-informative priors with 30,000 and 50,000 samples, respectively. These results demonstrate that the proposed method improves both predictive accuracy and computational efficiency, enabling optimal parameter identification with reduced computational effort and providing a robust framework for advancing dam safety assessments.
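A minimal sketch of the iterative prior-calibration loop described above; the wetting-settlement curve `settle()` and all numbers are placeholders, not the paper's constitutive model or Danjiangkou data:

```python
# Iteratively widen a candidate parameter range until the simulated settlement
# envelope covers the observed data, yielding a calibrated prior range.
import numpy as np

rng = np.random.default_rng(0)
obs = np.array([3.1, 4.8, 5.9, 6.7])          # observed settlements (mm), illustrative

def settle(alpha, beta, t):
    """Placeholder wetting-settlement curve, monotone in time t."""
    return alpha * (1.0 - np.exp(-beta * t))

t = np.arange(1, len(obs) + 1, dtype=float)
lo, hi = np.array([1.0, 0.1]), np.array([2.0, 0.3])   # initial guess for (alpha, beta)

for _ in range(50):                            # iterative range expansion
    draws = rng.uniform(lo, hi, size=(2000, 2))
    sims = np.array([settle(a, b, t) for a, b in draws])
    env_lo, env_hi = sims.min(axis=0), sims.max(axis=0)
    if np.all((obs >= env_lo) & (obs <= env_hi)):
        break                                  # envelope covers the data: stop
    lo, hi = lo * 0.9, hi * 1.1                # widen the candidate range

print("calibrated prior range:", lo, hi)       # prior for the Gibbs sampler
```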
Since Google introduced the concept of Knowledge Graphs (KGs) in 2012, their construction technologies have evolved into a comprehensive methodological framework encompassing knowledge acquisition, extraction, representation, modeling, fusion, computation, and storage. Within this framework, knowledge extraction, as the core component, directly determines KG quality. In military domains, traditional manual curation models face efficiency constraints due to data fragmentation, complex knowledge architectures, and confidentiality protocols. Meanwhile, crowdsourced ontology construction approaches from general domains prove non-transferable, while human-crafted ontologies struggle with generalization deficiencies. To address these challenges, this study proposes an Ontology-Aware LLM Methodology for Military Domain Knowledge Extraction (LLM-KE). This approach leverages the deep semantic comprehension capabilities of Large Language Models (LLMs) to simulate human experts' cognitive processes in crowdsourced ontology construction, enabling automated extraction of military textual knowledge. It concurrently enhances knowledge processing efficiency and improves KG completeness. Empirical analysis demonstrates that this method effectively resolves scalability and dynamic adaptation challenges in military KG construction, establishing a novel technological pathway for advancing military intelligence development.
The viscosity of refining slags plays a critical role in metallurgical processes. However, obtaining accurate viscosity data remains challenging due to the complexities of high-temperature experiments, so practice often relies on empirical models with limited predictive capabilities. This study focuses on the influence of optical basicity on viscosity in CaO-Al2O3-based refining slags, leveraging machine learning to address data scarcity and improve prediction accuracy. An automated framework for algorithm integration, parameter tuning, and evaluation ranking (Auto-APE) is employed to develop customized data-driven models for various slag systems, including CaO-Al2O3-SiO2, CaO-Al2O3-CaF2, CaO-Al2O3-SiO2-MgO, and CaO-Al2O3-SiO2-MgO-CaF2. By incorporating optical basicity as a key feature, the models achieve an average validation error of 8.0% to 15.1%, significantly outperforming traditional empirical models. Additionally, symbolic regression is introduced to rapidly construct domain-specific features, such as optical-basicity-like descriptors, offering a potential breakthrough in performance prediction for small datasets. This work highlights the critical role of domain-specific knowledge in understanding and predicting viscosity, providing a robust machine-learning-based approach for optimizing refining slag properties.
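A sketch of how the optical-basicity feature itself is typically computed before being fed to a model; the Lambda values are commonly quoted Duffy-Ingram basicities and the example composition is invented, so treat everything as illustrative rather than the paper's data or pipeline:

```python
# Optical basicity of a slag as the oxygen-weighted average of component
# basicities: Lambda = sum(x_i * n_i * Lambda_i) / sum(x_i * n_i).
LAMBDA = {"CaO": 1.00, "Al2O3": 0.60, "SiO2": 0.48, "MgO": 0.78}
N_OXYGEN = {"CaO": 1, "Al2O3": 3, "SiO2": 2, "MgO": 1}   # O atoms per formula unit

def optical_basicity(mole_frac):
    num = sum(x * N_OXYGEN[c] * LAMBDA[c] for c, x in mole_frac.items())
    den = sum(x * N_OXYGEN[c] for c, x in mole_frac.items())
    return num / den

slag = {"CaO": 0.50, "Al2O3": 0.35, "SiO2": 0.15}        # hypothetical composition
print(round(optical_basicity(slag), 3))                   # feature fed to the ML model
```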
With the widespread use of social media, the propagation of health-related rumors has become a significant public health threat. Existing methods for detecting health rumors predominantly rely on external knowledge or propagation structures; only a few recent approaches attempt causal inference, and these have not yet effectively integrated causal discovery with domain-specific knowledge graphs for health rumor detection. In this study, we found that the combined use of causal discovery and domain-specific knowledge graphs can effectively identify implicit pseudo-causal logic embedded within texts, holding significant potential for health rumor detection. To this end, we propose CKDG, a dual-graph fusion framework based on causal logic and medical knowledge graphs. CKDG constructs a weighted causal graph to capture the implicit causal relationships in the text and introduces a medical knowledge graph to verify semantic consistency, thereby enhancing the ability to identify the misuse of professional terminology and pseudoscientific claims. In experiments conducted on a dataset comprising 8430 health rumors, CKDG achieved an accuracy of 91.28% and an F1 score of 90.38%, representing improvements of 5.11% and 3.29% over the best baseline, respectively. Our results indicate that the integrated use of causal discovery and domain-specific knowledge graphs offers significant advantages for health rumor detection systems. This method not only improves detection performance but also enhances the transparency and credibility of model decisions by tracing causal chains and sources of knowledge conflicts. We anticipate that this work will provide key technological support for the development of trustworthy health-information filtering systems, thereby improving the reliability of public health information on social media.
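A toy rendering of the dual-graph idea, assuming cue-phrase extraction and a hand-built KG fragment; the cue patterns, KG edges, and flagging rule are illustrative placeholders, far simpler than CKDG's weighted causal graph:

```python
# Extract naive "X causes/cures Y" pairs from text, then check each claimed
# causal edge against a medical knowledge graph for semantic consistency.
import re
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("vitamin C", "immune support", relation="supports")  # toy KG fact

def extract_causal_pairs(text):
    # a real system uses far richer causal discovery than this cue regex
    return re.findall(r"(\w[\w ]*?) (?:causes|cures) (\w[\w ]*)", text)

def kg_consistent(cause, effect):
    return kg.has_edge(cause, effect)

claim = "vitamin C cures cancer"
for cause, effect in extract_causal_pairs(claim):
    print(cause, "->", effect, "| in KG:", kg_consistent(cause, effect))
# A claimed causal edge absent from the KG flags a possible pseudo-causal claim.
```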
Amidst evolving user behavior driven by the development of the internet, enhancing the operational quality of trade publishing knowledge service platforms has become a significant challenge for publishing institutions. To address this issue, this paper employs a combined approach of theoretical analysis and case study, introducing the SICAS (Sense-Interest-Connection-Action-Share) user consumption behavior analysis model and selecting “CITIC Academy” as the case study subject. It systematically examines and summarizes the platform's operational practices and specific strategies, aiming to offer strategic insights and practical references for the operational improvement and sustainable, high-quality development of trade publishing knowledge service platforms.
Recently, high-precision trajectory prediction of ballistic missiles in the boost phase has become a research hotspot. This paper proposes a trajectory prediction algorithm driven by data and knowledge (DKTP) to solve this problem. First, the complex dynamic characteristics of a ballistic missile in the boost phase are analyzed in detail. Second, combining the missile dynamics model with the target gravity-turning model, a knowledge-driven target three-dimensional turning (T3) model is derived. Then, a BP neural network is trained on a boost-phase trajectory database covering typical scenarios to obtain a data-driven state parameter mapping (SPM) model. On this basis, an online trajectory prediction framework driven by data and knowledge is established: the SPM model predicts the target's three-dimensional turning coefficients from its current state, and the T3 model then yields the target state at the next moment. Finally, simulation verification is carried out under various conditions. The simulation results show that the DKTP algorithm combines the advantages of data-driven and knowledge-driven approaches, improves interpretability, and reduces uncertainty, achieving high-precision trajectory prediction of ballistic missiles in the boost phase.
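A skeleton of that online prediction loop, with the trained BP network stubbed out and deliberately simplified dynamics; the mappings and constants are invented stand-ins, not the paper's T3 or SPM models:

```python
# Alternate a (stubbed) data-driven coefficient prediction with a simplified
# knowledge-driven turning step to roll the target state forward in time.
import numpy as np

def spm_model(state):
    """Stand-in for the trained BP network: state -> 3D turning coefficients."""
    return 1e-4 * state[:3]            # illustrative mapping only

def t3_step(state, coeffs, dt=1.0):
    """Simplified turning-model step: position advances, velocity turns."""
    pos, vel = state[:3], state[3:]
    vel = vel + coeffs * np.linalg.norm(vel) * dt   # knowledge-driven turn term
    return np.concatenate([pos + vel * dt, vel])

state = np.array([0.0, 0.0, 20e3, 900.0, 0.0, 300.0])   # [x, y, z, vx, vy, vz]
for _ in range(10):                    # 10-step online prediction horizon
    coeffs = spm_model(state)          # data-driven
    state = t3_step(state, coeffs)     # knowledge-driven
print(state[:3])                       # predicted position
```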
The fractionating tower bottom in a fluid catalytic cracking unit (FCCU) is highly susceptible to coking due to the interplay of complex external operating conditions and internal physical properties. Consequently, quantitative risk assessment (QRA) and predictive maintenance (PdM) are essential to effectively manage coking risks influenced by multiple factors. However, the inherent uncertainties of the coking process, combined with the mixed-frequency nature of distributed control system (DCS) and laboratory information management system (LIMS) data, present significant challenges for the application of data-driven methods and their practical implementation in industrial environments. This study proposes a hierarchical framework that integrates deep learning and fuzzy logic inference, leveraging data and domain knowledge to monitor the coking condition and inform prescriptive maintenance planning. The framework introduces a multi-layer fuzzy inference system to construct the coking risk index, uses multi-label methods to select the optimal feature dataset across the reactor-regenerator and fractionation systems with coking risk factors as the label space, and designs a parallel encoder-integrated decoder architecture that addresses mixed-frequency data disparities and enhances adaptation by extracting operating-state and physical-property information. Additionally, triple attention mechanisms, in both the parallel and temporal modules, adaptively aggregate input information and enhance intrinsic interpretability to support disposal decision-making. Applied to a 2.8-million-ton FCCU under long-period, complex operating conditions, the framework enables precise coking risk management at the fractionating tower bottom.
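A two-input miniature of the fuzzy risk-index idea, with triangular memberships, max-min rule firing, and centroid defuzzification; the variable names, membership breakpoints, and rule table are invented for illustration, not the paper's multi-layer system:

```python
# Fuzzy coking-risk index from two illustrative inputs.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def risk_index(temp_dev, residue_density):
    hi_T = tri(temp_dev, 5, 15, 25)                  # "temperature deviation high"
    hi_rho = tri(residue_density, 0.95, 1.0, 1.05)   # "residue density high"
    lo_T, lo_rho = 1 - hi_T, 1 - hi_rho
    # rules: both high -> risk 0.9; exactly one high -> 0.5; both low -> 0.1
    w = [min(hi_T, hi_rho),
         max(min(hi_T, lo_rho), min(lo_T, hi_rho)),
         min(lo_T, lo_rho)]
    z = [0.9, 0.5, 0.1]
    return float(np.dot(w, z) / (sum(w) + 1e-12))    # weighted-centroid output

print(risk_index(temp_dev=18.0, residue_density=1.02))
```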
Semantic communication (SemCom) aims to achieve high-fidelity information delivery with low communication consumption by guaranteeing only semantic accuracy. Nevertheless, semantic communication still suffers from unexpected channel volatility, so a re-transmission mechanism (e.g., hybrid automatic repeat request [HARQ]) becomes indispensable. In that regard, instead of discarding previously transmitted information, incremental knowledge-based HARQ (IK-HARQ) is deemed a more effective mechanism that can sufficiently utilize the information semantics. However, considering the possible existence of semantic ambiguity in image transmission, a simple bit-level cyclic redundancy check (CRC) might compromise the performance of IK-HARQ. There thus emerges a strong incentive to revolutionize the CRC mechanism and more effectively reap the benefits of both SemCom and HARQ. In this paper, built on top of Swin-Transformer-based joint source-channel coding (JSCC) and IK-HARQ, we propose a semantic image transmission framework, SC-TDA-HARQ. In particular, unlike the conventional CRC, we introduce a topological data analysis (TDA)-based error detection method, which digs out the inner topological and geometric information of images, to capture semantic information and determine the necessity of re-transmission. Extensive numerical results validate the effectiveness and efficiency of the proposed SC-TDA-HARQ framework, especially under limited bandwidth, and manifest the superiority of the TDA-based error detection method in image transmission.
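A crude stand-in for the flavor of topology-based error detection: comparing connected-component counts of the sent and received images across gray-level thresholds. The actual framework uses persistent homology, which is far richer; the noise model and threshold constant below are purely illustrative:

```python
# Compare 0-dimensional topological signatures of sent vs. received images;
# large disagreement triggers a re-transmission request.
import numpy as np
from scipy.ndimage import label

def topo_signature(img, thresholds=np.linspace(0.1, 0.9, 9)):
    return np.array([label(img > t)[1] for t in thresholds])  # components per level

rng = np.random.default_rng(1)
sent = rng.random((64, 64))
received = np.clip(sent + 0.3 * rng.standard_normal((64, 64)), 0, 1)  # noisy channel

dist = np.abs(topo_signature(sent) - topo_signature(received)).sum()
RETX_THRESHOLD = 50        # tuning constant, purely illustrative
print("re-transmit" if dist > RETX_THRESHOLD else "accept", dist)
```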
With the explosive growth of available data, there is an urgent need to develop continuous data mining that markedly reduces manual interaction. A novel model for data mining in an evolving environment is proposed. First, valid mining task schedules are generated; then autonomous and local mining are executed periodically; finally, previous results are merged and refined. The framework based on the model creates a communication mechanism to incorporate domain knowledge into the continuous process through an ontology service. The local and merge mining steps are made transparent to the end user, and heterogeneous data sources are unified, by means of the ontology. Experiments suggest that the framework is useful in guiding the continuous mining process.
A decision model of knowledge transfer is presented on the basis of the characteristics of knowledge transfer in a big data environment. This model can determine the weight of knowledge transferred from another enterprise or from a big data provider. Numerous simulation experiments are implemented to test the efficiency of the optimization model. Simulation results show that, as the weight of knowledge from the big data knowledge provider increases, the total discounted expectation of profits increases and the transfer cost is reduced. The calculated results are in accordance with the actual economic situation. The optimization model can provide useful decision support for enterprises in a big data environment.
In order to realize the intelligent management of data mining (DM) domain knowledge, this paper presents an architecture for DM knowledge management based on ontology. Using an ontology database, this architecture can realize intelligent knowledge retrieval and automatic accomplishment of DM tasks by means of ontology services. Its key features include: ① describing DM ontology and metadata using an ontology based on the Web Ontology Language (OWL); ② an ontology reasoning function, whereby, based on the existing concepts and relations, the hidden knowledge in the ontology can be obtained using the reasoning engine. This paper mainly focuses on the construction of DM ontology and reasoning over DM ontology based on OWL DL(s).
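To give the reasoning feature a concrete shape, a small sketch that derives an unstated fact from explicit concept relations; it uses rdflib with an RDFS subclass chain rather than the paper's OWL DL reasoning engine, and the namespace and class names are hypothetical:

```python
# Infer "hidden" knowledge: DecisionTree is a DataMiningTask is never stated
# explicitly, but the transitive property path subClassOf+ derives it.
from rdflib import Graph, Namespace, RDFS

DM = Namespace("http://example.org/dm#")     # hypothetical DM ontology namespace
g = Graph()
g.add((DM.DecisionTree, RDFS.subClassOf, DM.Classification))
g.add((DM.Classification, RDFS.subClassOf, DM.DataMiningTask))

q = "SELECT ?super WHERE { dm:DecisionTree rdfs:subClassOf+ ?super }"
for row in g.query(q, initNs={"dm": DM, "rdfs": RDFS}):
    print(row.super)   # Classification, then the inferred DataMiningTask
```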
In the international shipping industry, digital intelligence transformation has become essential, with both governments and enterprises actively working to integrate diverse datasets. The domain of maritime and shipping is characterized by a vast array of document types, filled with complex, large-scale, and often chaotic knowledge and relationships. Effectively managing these documents is crucial for developing a Large Language Model (LLM) in the maritime domain, enabling practitioners to access and leverage valuable information. A Knowledge Graph (KG) offers a state-of-the-art solution for enhancing knowledge retrieval, providing more accurate responses and enabling context-aware reasoning. This paper presents a framework for utilizing maritime and shipping documents to construct a knowledge graph using GraphRAG, a hybrid tool combining graph-based retrieval and generation capabilities. The extraction of entities and relationships from these documents and the KG construction process are detailed. Furthermore, the KG is integrated with an LLM to develop a Q&A system, demonstrating that the system significantly improves answer accuracy compared to traditional LLMs. Additionally, the KG construction process is up to 50% faster than conventional LLM-based approaches, underscoring the efficiency of our method. This study provides a promising approach to digital intelligence in shipping, advancing knowledge accessibility and decision-making.
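A highly simplified sketch of the document-to-triples-to-Q&A flow; the `call_llm` stub, prompts, and example triple are invented stand-ins, not GraphRAG's actual internals or API:

```python
# Extract triples from documents with an LLM, build a graph, then answer
# questions grounded on retrieved graph facts.
import json
import networkx as nx

def call_llm(prompt: str) -> str:
    """Stub: a real system would call its LLM backend here."""
    return json.dumps([["MV Ever Given", "operated_by", "Evergreen Marine"]])

def extract_triples(doc: str):
    prompt = f"Extract (subject, relation, object) triples as JSON:\n{doc}"
    return json.loads(call_llm(prompt))

kg = nx.MultiDiGraph()
for s, r, o in extract_triples("... maritime incident report text ..."):
    kg.add_edge(s, o, relation=r)

def answer(question: str) -> str:
    # retrieve KG edges mentioning an entity from the question, then let the
    # LLM answer grounded on those facts
    facts = [f"{s} {d['relation']} {o}" for s, o, d in kg.edges(data=True)
             if s in question or o in question]
    return call_llm(f"Facts: {facts}\nQuestion: {question}")

print(answer("Who operates MV Ever Given?"))
```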
Using the advantages of web crawlers in data collection and of distributed storage technologies, we gained access to a wealth of forestry-related data. Combined with mature big data technology, Hadoop's distributed system was selected to solve the storage problem of massive forestry big data, and the memory-based Spark computing framework was used to realize real-time, fast processing of the data. Forestry data contain a wealth of information, and mining this information is of great significance for guiding the development of forestry. We conduct co-word and cluster analyses on the keywords of forestry data, extract the rules hidden in the data, analyze research hotspots more accurately, and grasp the evolution trend of subject topics, which plays an important role in promoting research and development in the subject areas. The co-word analysis and clustering algorithm have important practical significance for revealing the topic structure, research hotspots, and development trends in forestry research. The distributed storage framework and parallel computing greatly improve the performance of the data mining algorithms. Therefore, a forestry big data mining system built on big data technology has important practical significance for promoting the development of intelligent forestry.
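Co-word analysis in miniature: build a keyword co-occurrence matrix and cluster keywords that tend to appear together. The keyword lists are made up, and the paper runs this at Hadoop/Spark scale rather than in memory:

```python
# Co-occurrence counting plus agglomerative clustering of keywords.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

papers = [
    ["forest fire", "remote sensing", "risk"],
    ["forest fire", "remote sensing", "MODIS"],
    ["carbon sink", "biomass", "forest inventory"],
    ["biomass", "forest inventory", "allometry"],
]
vocab = sorted({k for p in papers for k in p})
idx = {k: i for i, k in enumerate(vocab)}

C = np.zeros((len(vocab), len(vocab)))
for p in papers:                              # count pairwise co-occurrences
    for a in p:
        for b in p:
            if a != b:
                C[idx[a], idx[b]] += 1

labels = AgglomerativeClustering(n_clusters=2).fit_predict(C)
for k, lab in sorted(zip(vocab, labels), key=lambda x: x[1]):
    print(lab, k)                             # keywords grouped into topic clusters
```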
Traditional Chinese medicine (TCM) serves as a treasure trove of ancient knowledge, holding a crucial position in the medical field. However, the exploration of TCM's extensive information has been hindered by challenges related to data standardization, completeness, and accuracy, primarily due to the decentralized distribution of TCM resources. To address these issues, we developed a platform for TCM knowledge discovery (TCMKD, https://cbcb.cdutcm.edu.cn/TCMKD/). Seven types of data, including syndromes, formulas, Chinese patent drugs (CPDs), Chinese medicinal materials (CMMs), ingredients, targets, and diseases, were manually proofread and consolidated within TCMKD. To strengthen the integration of TCM with modern medicine, TCMKD employs analytical methods such as TCM data mining, enrichment analysis, and network localization and separation. These tools help elucidate the molecular-level commonalities between TCM and contemporary scientific insights. In addition to its analytical capabilities, a quick question-and-answer (Q&A) system is embedded within TCMKD to query the database efficiently, improving the interactivity of the platform. The platform also provides a TCM text annotation tool, offering a simple and efficient method for TCM text mining. Overall, TCMKD not only has the potential to become a pivotal repository for TCM, delving into the pharmacological foundations of TCM treatments, but its flexible embedded tools and algorithms can also be applied to the study of other traditional medical systems, extending beyond TCM alone.
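The enrichment-analysis idea in miniature, using a standard hypergeometric test; the counts below are invented, and this is only an assumed shape of the test, with TCMKD's curated sets supplying the real inputs:

```python
# Is the overlap between a formula's targets and a disease gene set larger
# than chance? P(overlap >= k) under hypergeometric sampling.
from scipy.stats import hypergeom

N = 20000        # background: protein-coding genes (approximate)
K = 150          # disease-associated genes
n = 80           # targets of the formula's ingredients
k = 12           # observed overlap between the two sets

p_value = hypergeom.sf(k - 1, N, K, n)   # survival function gives P(X >= k)
print(f"enrichment p-value: {p_value:.2e}")
```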
In the big data environment, enterprises must constantly assimilate big data knowledge and private knowledge through multiple knowledge transfers to maintain their competitive advantage. The optimal time of knowledge transfer is one of the most important factors in improving knowledge transfer efficiency. Based on an analysis of the complex characteristics of knowledge transfer in the big data environment, multiple knowledge transfers can be divided into two categories: the simultaneous transfer of various types of knowledge, and multiple knowledge transfers at different time points. Taking into consideration influential factors such as knowledge type, knowledge structure, knowledge absorptive capacity, knowledge update rate, discount rate, market share, the profit contribution of each type of knowledge, transfer costs, and product life cycle, time optimization models of multiple knowledge transfers in the big data environment are presented by maximizing the total discounted expected profits (DEPs) of an enterprise. Simulation experiments have been performed to verify the validity of the models, which can help enterprises determine the optimal time of multiple knowledge transfers in the big data environment.
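A one-dimensional toy of the timing problem: pick the transfer time t that maximizes discounted expected profit. The DEP curve below, a ramp-up benefit against decaying knowledge value minus a discounted transfer cost, is an invented stand-in for the paper's models:

```python
# Numerically maximize a discounted-expected-profit curve over transfer time.
import numpy as np
from scipy.optimize import minimize_scalar

r, cost = 0.08, 5.0                      # discount rate, transfer cost

def dep(t):
    benefit = 40 * (1 - np.exp(-0.5 * t)) * np.exp(-0.1 * t)  # readiness vs. decay
    return (benefit - cost) * np.exp(-r * t)                  # discount to present

res = minimize_scalar(lambda t: -dep(t), bounds=(0.0, 20.0), method="bounded")
print(f"optimal transfer time: {res.x:.2f}, DEP: {dep(res.x):.2f}")
```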
Open data initiatives have prompted governmental agencies and scientific organizations to publish data online for reuse. Geoscience research focuses on processing georeferenced quantitative data (e.g., rock parameters, geochemical tests, geophysical surveys, and satellite imagery) to discover new knowledge. Geological knowledge is the cognitive result of human understanding of the spatial distribution, evolution, and interaction patterns of geological objects and processes. Knowledge graphs (KGs) can formalize unstructured knowledge into structured form and have recently been used to support decision-making. In this paper, we propose a novel framework that can extract a geological knowledge graph (GKG) from public reports relating to a modelling study. Based on an analysis of the basic questions answered by geology, we summarize and abstract geological knowledge elements and then explore a geological knowledge representation model with three levels, “geological concepts - geological entities - geological relations”, to describe the semantic units of geological knowledge and their logical relations. Finally, based on the characteristics of mineral resource reports, a geological knowledge representation model oriented to “object relationships” and a hierarchical model oriented to “process relationships” are proposed with reference to commonly used geological knowledge graph representations. The research in this paper can provide implications for the formalization and structured representation of geological knowledge graphs.
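To make the three-level representation concrete, a sketch rendering it as plain data; the schema, names, and the example deposit/fault facts are invented to show the shape only:

```python
# "concepts - entities - relations" levels as plain Python data structures.
concepts = {"Deposit": {"subclass_of": "GeologicalObject"},
            "Fault":   {"subclass_of": "GeologicalStructure"}}

entities = [{"id": "e1", "concept": "Deposit", "name": "Tongling Cu deposit"},
            {"id": "e2", "concept": "Fault",   "name": "F3 fault"}]

# one object relationship (spatial) and one process relationship (genetic),
# mirroring the two representation models proposed above
relations = [("e1", "adjacent_to", "e2", {"kind": "object"}),
             ("e2", "controls_mineralization_of", "e1", {"kind": "process"})]

for s, p, o, meta in relations:
    print(s, p, o, meta["kind"])
```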
Multiple efforts have been made worldwide around diverse aspects of land administration. However, the notorious heterogeneity of land administration data and systems remains a longstanding obstacle to developing a harmonized vision. Traditional Spatial Data Infrastructure adoption is not enough to overcome this challenge, since the heterogeneity of data sources creates needs related to harmonization, interoperability, sharing, and integration in land administration development. This paper proposes a graph-based representation of knowledge for integrating multiple heterogeneous data sources (tables, shapefiles, geodatabases, and WFS services) belonging to two Colombian agencies within a decentralized land administration scenario. These knowledge graphs are developed on an ontology-based knowledge representation using national and international standards for land administration. Our approach aims to prevent data isolation, enable cross-dataset integration, produce machine-processable data, and facilitate the reuse and exploitation of multi-jurisdictional datasets in a single approach. A real case study demonstrates the applicability of the deployed land administration data cycle.
In the context of power generation companies, vast amounts of specialized data and expert knowledge have been accumulated. However, challenges such as data silos and fragmented knowledge hinder the effective utilization of this information. This study proposes a novel framework for intelligent Question-and-Answer (Q&A) systems based on Retrieval-Augmented Generation (RAG) to address these issues. The system efficiently acquires domain-specific knowledge by leveraging external databases, including Relational Databases (RDBs) and graph databases, without additional fine-tuning of Large Language Models (LLMs). Crucially, the framework integrates a Dynamic Knowledge Base Updating Mechanism (DKBUM) and a Weighted Context-Aware Similarity (WCAS) method to enhance retrieval accuracy and mitigate inherent limitations of LLMs, such as hallucinations and lack of specialization. The proposed DKBUM dynamically adjusts knowledge weights within the database, ensuring that the most recent and relevant information is utilized, while WCAS refines the alignment between queries and knowledge items through enhanced context understanding. Experimental validation demonstrates that the system can generate timely, accurate, and context-sensitive responses, making it a robust solution for managing complex business logic in specialized industries.
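A guess at the general shape of weighted, context-aware retrieval: cosine similarity between query and knowledge-item embeddings, scaled by per-item weights that a DKBUM-like update decays over time. All numerics and the decay rule are illustrative, not the paper's WCAS or DKBUM definitions:

```python
# Score knowledge items by weight-scaled cosine similarity to the query.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(2)
items = rng.random((5, 16))              # knowledge-item embeddings
weights = np.ones(5)                     # dynamic weights maintained per item
ages_days = np.array([1, 30, 90, 5, 365])
weights *= np.exp(-ages_days / 180)      # decay stale knowledge (DKBUM-like)

query = rng.random(16)
scores = np.array([w * cosine(query, it) for w, it in zip(weights, items)])
print("retrieved item:", int(scores.argmax()))
```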
This study presents preliminary results on tidal-induced magnetic field signals extracted from 9 months of data collected by the Macao Science Satellite-1 (MSS-1) from November 2023 to July 2024. Tidal signals were isolated using sequential modeling techniques, subtracting non-tidal field model predictions from the observed magnetic data. The extracted MSS-1 results show strong agreement with those from the Swarm and CryoSat satellites. MSS-1 effectively captures key large-scale tidal-induced magnetic anomalies, mainly due to its unique 41-degree low-inclination orbit, which provides wide coverage of local times. This finding underscores the strong potential of MSS-1 to recover high-resolution global tidal magnetic field models as more MSS-1 data become available.
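The extraction recipe in miniature: subtract a non-tidal field prediction from observations, then least-squares fit a sinusoid at the M2 tidal period (12.42 h) to the residuals. Synthetic numbers stand in for MSS-1 data, and the linear "non-tidal model" is a toy placeholder for a full geomagnetic field model:

```python
# Recover a tidal (M2-period) signal from residuals after removing the
# non-tidal field prediction.
import numpy as np

rng = np.random.default_rng(3)
t_hr = np.linspace(0, 24 * 30, 5000)                  # one month of samples
observed = (2.0 * np.sin(2 * np.pi * t_hr / 12.4206 + 0.7)
            + 30.0 + 0.01 * t_hr + rng.normal(0, 1.5, t_hr.size))
non_tidal_model = 30.0 + 0.01 * t_hr                  # stand-in field model

residual = observed - non_tidal_model
w = 2 * np.pi / 12.4206                               # M2 angular frequency
A = np.column_stack([np.cos(w * t_hr), np.sin(w * t_hr)])
coef, *_ = np.linalg.lstsq(A, residual, rcond=None)
amp = np.hypot(*coef)                                 # sqrt(a^2 + b^2)
print(f"recovered M2 amplitude: {amp:.2f} (true 2.0)")
```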