The key activity in building the Semantic Web is constructing ontologies, but the theory and methodology of ontology construction are still far from mature. This paper proposes a theoretical framework for massive knowledge management, the knowledge domain framework (KDF), and introduces an integrated development environment (IDE) named the large-scale ontology development environment (LODE), which implements the proposed framework. We also compare LODE with other popular ontology development environments. Practical use of LODE for the management and development of agriculture ontologies shows that the knowledge domain framework can support the development activities of large-scale ontologies. Application studies based on the principles of the knowledge domain framework and LODE are described briefly.
AIM: To track the knowledge structure, topics in focus, and trends in emerging research on pterygium over the past 20 years. METHODS: Based on the Web of Science Core Collection (WoSCC), studies related to pterygium published in the 20 years from 2000 to 2019 were included. With the help of VOSviewer software, a knowledge map was constructed and the distribution of countries, institutions, journals, and authors in the field of pterygium was noted. Meanwhile, using co-citation analysis of references and co-occurrence analysis of keywords, we identified the knowledge base and hotspots, thereby obtaining an overview of the field. RESULTS: The search retrieved 1516 publications on pterygium from WoSCC published between 2000 and 2019. Over the past two decades, the annual number of publications has been rising with small fluctuations. The most productive institutions are from Singapore, but the most prolific and active country is the United States. The journal Cornea published the most articles, and Coroneo MT contributed the most publications on pterygium. From the co-occurrence analysis, the keywords formed three clusters: 1) surgical techniques and adjuvants for pterygium, 2) the occurrence and pathogenesis of pterygium, and 3) the epidemiology and etiology of pterygium formation. These three clusters were consistent with the clustering in the co-citation analysis, in which Cluster 1 contained the most references (74 publications, 47.74%), Cluster 2 contained 53 publications (34.19%), and Cluster 3 focused on epidemiology with 18.06% of the 155 co-cited publications. CONCLUSION: This study demonstrates that research on pterygium is gradually attracting the attention of scholars and researchers. Interaction between authors, institutions, and countries is still lacking. Nevertheless, the research hotspots, distribution, and research status of pterygium identified in this study can provide valuable information for scholars and researchers.
BACKGROUND: In the rapidly evolving landscape of psychiatric research, 2023 marked another year of significant progress globally, with the World Journal of Psychiatry (WJP) experiencing notable expansion and influence. AIM: To conduct a comprehensive visualization and analysis of the articles published in the WJP throughout 2023, so as to extract valuable insights that can illuminate pathways for future research in psychiatry. METHODS: A selection process led to the inclusion of 107 papers published in the WJP in 2023, forming the dataset for the analysis. Employing advanced visualization techniques, this study mapped the knowledge domains represented in these papers. RESULTS: The findings revealed a prevalent focus on key topics such as depression, mental health, anxiety, schizophrenia, and the impact of coronavirus disease 2019. Keyword clustering confirmed that the papers predominantly explored mental health disorders, depression, anxiety, schizophrenia, and related factors. Noteworthy contributions came from authors in China, the United Kingdom, the United States, and Turkey. Notably, one paper garnered the highest number of citations, while the American Psychiatric Association was the most cited reference. CONCLUSION: It is recommended that the WJP continue its efforts to enhance the quality of papers published in the field of psychiatry. There is also a pressing need to explore the potential applications of digital interventions and artificial intelligence within the discipline.
The development of the information age and globalization has challenged the training of technical talent in the 21st century, making information media and technical skills increasingly important. As a creative form of multimedia sharing, digital storytelling is attracting growing attention from educators because of its disciplinary applicability and its capacity to enhance media-technology skills. In this study, the information visualization software CiteSpace was applied to visualize and analyze research on digital storytelling in terms of key articles and citation hotspots, and to review the state of digital storytelling research in educational fields such as promoting language learning and helping students develop 21st-century skills.
With the rapid global progression of population aging, the traffic safety of older drivers has emerged as a worldwide concern, resulting in a significant surge in the number of manuscripts on the subject. This study employed scientometric analysis to examine 1652 original manuscripts on older drivers. To visually depict the current state of knowledge in the field, mapping knowledge domain (MKD) techniques were used, shedding light on the evolution of this research area. First, a statistical analysis was conducted to trace the development of research on older drivers. Second, VOSviewer was used for manuscript co-citation analysis, revealing five primary research topics: cognitive function and crash risk, visual processing impairment and crash risk, potential consequences of changes in driving patterns, involvement of older drivers in crashes, and identifying and mitigating factors contributing to unsafe driving. Third, CitNetExplorer was applied to identify core publications and their reference relationships; research has predominantly focused on visual function, cognitive function, and physical health. Fourth, VOSviewer's keyword co-occurrence analysis pinpointed the research hotspots of the last five years concerning older drivers: driving risk factors, driving fitness evaluation, the impact of distraction on driving, and the impact of visual impairment on driving. Finally, based on these discussions and the situation in China, some feasible research directions are proposed. This paper summarizes the overall trends in the study of older drivers and the risk factors for traffic accidents. These findings can serve as a reference for improving the driving and road traffic safety of older drivers.
Staple crops are the cornerstone of the food supply but are frequently threatened by plant diseases. Effective disease management, including disease identification and severity assessment, helps to address these challenges. Current methods for disease severity assessment typically rely on calculating the area proportion of segmented disease regions or on classification networks. However, these methods require large amounts of labeled data, and classification networks cannot quantify lesion proportions, leading to inaccurate evaluations. To address these issues, we propose an automated framework for disease severity assessment that combines multi-task learning with knowledge-driven large-model segmentation. The framework includes an image information processor, a lesion and leaf segmentation module, and a disease severity assessment module. First, the image information processor uses a multi-task learning strategy to analyze input images comprehensively, ensuring a deep understanding of disease characteristics. Second, the lesion and leaf segmentation module employs prompt-driven large-model technology to accurately segment diseased areas and entire leaves, providing detailed visual analysis. Finally, the disease severity assessment module objectively evaluates disease severity against professional grading standards by calculating lesion area proportions. Additionally, we have developed a comprehensive database of diseased leaf images from major crops, including several task-specific datasets. Experimental results demonstrate that our framework can accurately identify and assess the types and severity of crop diseases even without extensive labeled data. Code and data are available at http://dkp-ads.samlab.cn/.
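The severity module's core computation, the ratio of lesion pixels to leaf pixels mapped to a grade, can be sketched as follows. The thresholds and grade names here are hypothetical placeholders, not the professional grading standard the framework actually applies:

```python
import numpy as np

# Hypothetical grade boundaries on the lesion-area proportion.
GRADE_THRESHOLDS = [(0.05, "mild"), (0.25, "moderate"), (1.01, "severe")]

def severity_from_masks(lesion_mask, leaf_mask):
    """Compute the lesion-area proportion and map it to a severity grade."""
    leaf_pixels = int(leaf_mask.sum())
    if leaf_pixels == 0:
        raise ValueError("leaf mask is empty")
    # Count only lesion pixels that actually lie on the leaf.
    ratio = int((lesion_mask & leaf_mask).sum()) / leaf_pixels
    for upper, grade in GRADE_THRESHOLDS:
        if ratio < upper:
            return ratio, grade

leaf = np.ones((10, 10), dtype=bool)        # 100-pixel leaf
lesion = np.zeros((10, 10), dtype=bool)
lesion[:2, :5] = True                        # 10 lesion pixels
ratio, grade = severity_from_masks(lesion, leaf)  # 0.1, "moderate"
```

In the full pipeline the two boolean masks would come from the prompt-driven segmentation module rather than being constructed by hand.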
As large language models (LLMs) continue to demonstrate their potential in handling complex tasks, their value in knowledge-intensive industrial scenarios is becoming increasingly evident. Fault diagnosis, a critical domain in the industrial sector, has long faced the dual challenges of managing vast amounts of experiential knowledge and improving human-machine collaboration efficiency. Traditional fault diagnosis systems, primarily based on expert systems, suffer from three major limitations: (1) ineffective organization of fault diagnosis knowledge, (2) a lack of adaptability between static knowledge frameworks and dynamic engineering environments, and (3) difficulty integrating expert knowledge with real-time data streams. These systemic shortcomings restrict the ability of conventional approaches to handle uncertainty. In this study, we propose an intelligent computer numerical control (CNC) fault diagnosis system that integrates LLMs with a knowledge graph (KG). First, we constructed a comprehensive KG that consolidates multi-source data into a structured representation. Second, we designed a retrieval-augmented generation (RAG) framework that leverages the KG to support multi-turn interactive fault diagnosis while incorporating real-time engineering data into the decision-making process. Finally, we introduced a learning mechanism to facilitate dynamic knowledge updates. Experimental results demonstrate that our system significantly improves fault diagnosis accuracy, outperforming engineers with two years of professional experience on our constructed benchmark datasets. By integrating LLMs and a KG, our framework surpasses the limitations of traditional expert systems rooted in symbolic reasoning, offering a novel approach to the twin problems of unstructured knowledge modeling and dynamic environment adaptation in industrial settings.
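The retrieval step of such a KG-backed RAG pipeline can be illustrated with a deliberately simple word-overlap scorer; a real system would use embedding similarity, and the triples and question below are invented examples, not from the paper's KG:

```python
def retrieve(triples, question, k=2):
    """Rank (head, relation, tail) triples by word overlap with the question."""
    q = set(question.lower().replace("?", "").split())
    scored = sorted(
        triples,
        key=lambda t: -len(q & set(" ".join(t).lower().split())),
    )
    return scored[:k]

# Toy KG fragment for a CNC machine; contents are illustrative only.
kg = [
    ("spindle", "symptom_of", "overheating"),
    ("coolant pump failure", "causes", "overheating"),
    ("tool wear", "detected_by", "vibration sensor"),
]
context = retrieve(kg, "Why is the spindle overheating?")
# The retrieved facts would then be packed into the LLM prompt:
prompt = "Facts: " + "; ".join(" ".join(t) for t in context) + "\nQuestion: ..."
```

The multi-turn interaction and dynamic-update mechanisms described in the abstract would sit on top of this retrieval core.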
Research papers in the field of SLA published between 2009 and 2019 are analyzed in terms of the research status of domestic SLA researchers, research institutions, and research frontiers and hotspots, and the knowledge domains of SLA research are mapped. The data were retrieved from 10 core linguistics journals via the CNKI journal database. By means of CiteSpace 5.3, an analysis of the overall trend of SLA studies in China is made.
With the explosive growth of available data, there is an urgent need to develop continuous data mining that markedly reduces manual interaction. A novel model for data mining in an evolving environment is proposed. First, valid mining task schedules are generated; then autonomous, local mining is executed periodically; finally, previous results are merged and refined. The framework based on this model creates a communication mechanism that incorporates domain knowledge into the continuous process through an ontology service. The local and merge mining are transparent to the end user and to heterogeneous data sources thanks to the ontology. Experiments suggest that the framework is useful in guiding the continuous mining process.
Extracting mining subsidence land from remote sensing (RS) images is an important task in environmental monitoring of mining areas. The accuracy of traditional extraction models based on spectral features is low. To extract subsidence land from RS images with high accuracy, domain knowledge should be imported and new models proposed. Addressing the disadvantages of traditional extraction models, this paper imports domain knowledge from practice and experience, converts semantic knowledge into digital information, and proposes a new model for the task. Taking the Luan mining area as the study area, the new model is tested using GIS and related knowledge. The results show that the proposed method is more precise than traditional methods and can satisfy the demands of land subsidence monitoring in mining areas.
A mathematical formula with high physical interpretability, accurate prediction, and strong generalization power is highly desirable in science, technology, and engineering. In this study, we performed domain knowledge-guided machine learning to discover a highly interpretable formula describing the high-temperature oxidation behavior of FeCrAlCoNi-based high-entropy alloys (HEAs). Domain knowledge suggests that the exposure-time-dependent and thermally activated oxidation behavior can be described by a power law multiplied by the Arrhenius equation. The pre-factor, time exponent (m), and activation energy (Q) depend on the chemical compositions of the eight elements in the FeCrAlCoNi-based HEAs. The Tree-Classifier for Linear Regression (TCLR) algorithm uses the two experimental features of exposure time (t) and temperature (T) to extract the spectra of activation energy (Q) and time exponent (m) from the complex, high-dimensional feature space, which automatically gives the spectrum of the pre-factor. The three spectra are assembled using the element features, leading to a general, interpretable formula with high prediction accuracy (coefficient of determination R^(2) = 0.971). The role of each chemical element in high-temperature oxidation is analytically illustrated in the three spectra, so the discovered formula provides guidance for the inverse design of HEAs against high-temperature oxidation. The present work demonstrates the significance of domain knowledge in the development of materials informatics.
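The synergy formula named above, a power law in time multiplied by an Arrhenius factor, can be written as ΔW = A · t^m · exp(−Q/RT). A minimal sketch follows; the numeric values of A, m, and Q are illustrative stand-ins, not the composition-dependent spectra fitted in the study:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def oxidation_gain(t, T, A=1.0e3, m=0.5, Q=2.0e5):
    """Mass gain per power law * Arrhenius: A * t**m * exp(-Q / (R*T))."""
    return A * t**m * math.exp(-Q / (R * T))

# Longer exposure and higher temperature both increase oxidation:
g1 = oxidation_gain(t=10, T=1273)
g2 = oxidation_gain(t=100, T=1273)
g3 = oxidation_gain(t=100, T=1373)
```

In the paper's approach, A, m, and Q would each be functions of the eight-element composition rather than constants.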
Immune evolutionary algorithms with domain knowledge were presented to solve the problem of simultaneous localization and mapping (SLAM) for a mobile robot in unknown environments. Two operators with domain knowledge were designed: the feature of parallel line segments, which avoids the data association problem, was used to construct a vaccination operator, and the characteristics of convex vertices in polygonal obstacles were extended to develop a pulling operator for key point grids. Experimental results on a real mobile robot show that the computational cost of the designed algorithms is lower than that of other evolutionary algorithms for SLAM and that the maps obtained are very accurate. Owing to these advantages, the convergence rate of the designed algorithms is about 44% higher than those of other algorithms.
Short video applications like TikTok have seen significant growth in recent years. One common behavior on these platforms is watching and swiping through videos, which can waste significant bandwidth. An important challenge in short video streaming is therefore to design a preloading algorithm that decides which videos to download, at what bitrate, and when to pause downloading, so as to reduce bandwidth waste while improving the Quality of Experience (QoE). Designing such an algorithm is non-trivial, especially given the conflicting objectives of minimizing bandwidth waste and maximizing QoE. In this paper, we propose DAM, an end-to-end deep reinforcement learning framework with action masking that leverages domain knowledge to learn an optimal policy for short video preloading. We introduce a reward shaping technique to minimize bandwidth waste and use action masking to make actions more reasonable, reduce playback rebuffering, and accelerate training. We have conducted extensive experiments using real-world video datasets and network traces, including 4G, Wi-Fi, and 5G. Our results show that DAM improves the QoE score by 3.73%-11.28% compared to state-of-the-art algorithms and achieves an average bandwidth waste of only 10.27%-12.07%, outperforming all baseline methods.
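Action masking of the kind described is commonly implemented by forcing the logits of invalid actions to −∞ before the softmax, so the policy can never sample them. A minimal NumPy sketch; the logits, mask, and the "over-high bitrate" interpretation are invented, since the abstract does not show DAM's network:

```python
import numpy as np

def masked_policy(logits, valid):
    """Softmax over logits with invalid actions forced to probability 0."""
    masked = np.where(valid, logits, -np.inf)
    z = np.exp(masked - masked.max())  # subtract max for numerical stability
    return z / z.sum()

logits = np.array([2.0, 1.0, 0.5, -1.0])
valid = np.array([True, False, True, True])  # e.g. action 1 would rebuffer, so mask it
probs = masked_policy(logits, valid)          # probs[1] is exactly 0
```

Because invalid actions receive zero probability, the agent never wastes exploration on them, which is one way masking accelerates training as the paper reports.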
With the rise of open-source software, the social development paradigm occupies an indispensable position in the modern software development process. This paper puts forward a variant of the PageRank algorithm to build an importance assessment model, providing quantifiable importance metrics for new Java projects that are based on, or are components of, Java open-source projects. The key step is to use crawlers to obtain relevant information about Java open-source projects in the GitHub open-source community and build a domain knowledge graph. Each project is measured along three dimensions: project influence, project activity, and project popularity. A modified PageRank algorithm is then applied to construct the importance evaluation model. We evaluated the importance of 4512 Java open-source projects obtained from GitHub with good results.
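The recurrence underlying such a model is the standard PageRank update PR(v) = (1−d)/N + d·Σ PR(u)/out(u). A plain power-iteration sketch on a toy dependency graph; the project names and edge semantics are invented, and the paper's variant additionally re-weights edges by influence, activity, and popularity:

```python
def pagerank(links, d=0.85, iters=100):
    """links[u] = projects u depends on; rank flows from dependents to dependencies."""
    nodes = sorted(set(links) | {v for vs in links.values() for v in vs})
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - d) / n for v in nodes}
        for u, outs in links.items():
            targets = outs if outs else nodes  # dangling nodes spread rank evenly
            for v in targets:
                new[v] += d * rank[u] / len(targets)
        rank = new
    return rank

deps = {"app": ["guava", "junit"], "lib": ["guava"], "guava": [], "junit": []}
ranks = pagerank(deps)  # "guava" is depended on twice, so it ranks highest
```

Orienting edges from dependents to dependencies means widely reused components accumulate rank, which matches the intuition behind an importance metric for open-source projects.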
The characteristics of the design process, design objects, and domain knowledge of complex products are analyzed. A knowledge representation schema based on integrated generalized rules is stated, and an AND-OR-tree-based concept model for domain knowledge is set up. A strategy for multilevel domain knowledge acquisition based on this model is presented. The intelligent multilevel knowledge acquisition system (IMKAS) for product design is developed and applied in an intelligent decision support system for the conceptual design of complex products.
Side-scan sonar (SSS) is now a prevalent instrument for large-scale seafloor topography measurement, deployable on an autonomous underwater vehicle (AUV) to execute fully automated underwater acoustic scanning imaging along a predetermined trajectory. However, SSS images often suffer from speckle noise caused by mutual interference between echoes, and limited AUV computational resources further hinder noise suppression. Existing approaches to SSS image processing and speckle noise reduction rely heavily on complex network structures and fail to combine the benefits of deep learning and domain knowledge. To address this, we propose RepDNet, a novel and effective despeckling convolutional neural network. RepDNet introduces two re-parameterized blocks, the Pixel Smoothing Block (PSB) and the Edge Enhancement Block (EEB), which preserve edge information while attenuating speckle noise. During training, PSB and EEB take the form of double-layered multi-branch structures integrating first-order and second-order derivatives with smoothing functions. During inference, the branches are re-parameterized into a single 3×3 convolution, enabling efficient inference without sacrificing accuracy. RepDNet thus comprises three computational operations: 3×3 convolution, element-wise summation, and Rectified Linear Unit activation. Evaluations on benchmark datasets, a real SSS dataset, and data collected at Lake Mulan establish RepDNet as a well-balanced network that meets AUV computational constraints in terms of performance and latency.
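The re-parameterization trick rests on the linearity of convolution: parallel branches with kernels K1 and K2 sum to a single convolution with kernel K1 + K2, so the multi-branch training structure collapses to one 3×3 kernel at inference. A NumPy check of that algebra; the random kernels are stand-ins for RepDNet's smoothing and derivative branches:

```python
import numpy as np

def conv2d(x, k):
    """Naive 'valid' 2-D correlation; enough to demonstrate the identity."""
    kh, kw = k.shape
    H, W = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k1 = rng.standard_normal((3, 3))  # stand-in for a smoothing branch
k2 = rng.standard_normal((3, 3))  # stand-in for a derivative branch
two_branch = conv2d(x, k1) + conv2d(x, k2)  # training-time multi-branch output
fused = conv2d(x, k1 + k2)                  # single re-parameterized 3x3 kernel
same = np.allclose(two_branch, fused)       # identical outputs
```

This is why the fused network loses no accuracy: the inference-time model is mathematically equal to the trained multi-branch one, only cheaper to execute.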
Generally, knowledge extraction technology is used to obtain the nodes and relationships of unstructured and structured data, which are then fused with the original knowledge graph to extend it. Because the concepts and knowledge structures expressed on the Internet suffer from multi-source heterogeneity and low accuracy, it is usually difficult to achieve good results with knowledge extraction technology alone. Considering that domain knowledge depends heavily on relevant expert knowledge, this paper tries to expand domain knowledge through crowdsourcing. The method splits the domain knowledge system into knowledge subgraphs according to their corresponding concepts, forms subtasks of moderate granularity, and uses crowdsourcing for the acquisition and integration of knowledge subgraphs to improve the knowledge system.
Research on domain-specific question-answering technology has become important with the increasing demand for intelligent question-answering systems. This paper proposes a domain question-answering algorithm based on the CLIP mechanism to improve the accuracy and efficiency of interaction. First, relevant technologies in the question-answering field are reviewed. Then, the question-answering model based on the CLIP mechanism is presented, including its design, implementation, and optimization. The construction of the domain-specific knowledge graph is also described, covering graph design, data collection and processing, and graph construction methods. The paper compares the performance of the proposed algorithm with the classic question-answering models BiDAF, R-Net, and XLNet on a military domain dataset. The experimental results show that the proposed algorithm achieves advanced performance, with an F1 score of 84.6% on the constructed military knowledge graph test set, at least 1.5% higher than the other models. A detailed analysis of the experimental results illustrates the algorithm's advantages in accuracy and efficiency as well as its potential for further improvement. These findings demonstrate the practical application potential of the proposed algorithm in the military domain.
The viscosity of refining slags plays a critical role in metallurgical processes. However, obtaining accurate viscosity data remains challenging due to the complexity of high-temperature experiments, so practice often relies on empirical models with limited predictive capability. This study focuses on the influence of optical basicity on the viscosity of CaO-Al_(2)O_(3)-based refining slags, leveraging machine learning to address data scarcity and improve prediction accuracy. An automated framework for algorithm integration, parameter tuning, and evaluation ranking (Auto-APE) is employed to develop customized data-driven models for various slag systems, including CaO-Al_(2)O_(3)-SiO_(2), CaO-Al_(2)O_(3)-CaF_(2), CaO-Al_(2)O_(3)-SiO_(2)-MgO, and CaO-Al_(2)O_(3)-SiO_(2)-MgO-CaF_(2). By incorporating optical basicity as a key feature, the models achieve an average validation error of 8.0% to 15.1%, significantly outperforming traditional empirical models. Additionally, symbolic regression is introduced to rapidly construct domain-specific features, such as optical-basicity-like descriptors, offering a potential breakthrough in performance prediction for small datasets. This work highlights the critical role of domain-specific knowledge in understanding and predicting viscosity, providing a robust machine learning-based approach to optimizing refining slag properties.
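Optical basicity as a slag feature is conventionally computed as the oxygen-weighted average Λ = Σ xᵢnᵢΛᵢ / Σ xᵢnᵢ, where xᵢ is the mole fraction, nᵢ the number of oxygen atoms per formula unit, and Λᵢ the component's theoretical optical basicity. A sketch with commonly tabulated Λᵢ values; check against the paper's own feature definition before reuse:

```python
# (theoretical optical basicity, O atoms per formula unit); commonly tabulated values
OXIDES = {"CaO": (1.00, 1), "SiO2": (0.48, 2), "Al2O3": (0.605, 3), "MgO": (0.78, 1)}

def optical_basicity(mole_fractions):
    """Lambda = sum(x*n*L) / sum(x*n) over the slag's oxide components."""
    num = sum(x * OXIDES[o][1] * OXIDES[o][0] for o, x in mole_fractions.items())
    den = sum(x * OXIDES[o][1] for o, x in mole_fractions.items())
    return num / den

lam = optical_basicity({"CaO": 0.5, "Al2O3": 0.5})  # a 50:50 calcium aluminate slag
```

A single scalar like this compresses the whole composition into one chemically meaningful feature, which is why it helps data-driven models trained on small viscosity datasets.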
Transformers have achieved promising results in aeroengine remaining useful life (RUL) prediction, but they still have several limitations: 1) aeroengine domain knowledge, which contains rich information reflecting the engine's health status, is largely ignored in the modeling process; and 2) the traditional transformer ignores valuable degradation information from other time scales. To address these issues, a novel domain knowledge-augmented multiscale transformer (DKAMFormer) is developed that integrates domain knowledge and multiscale learning to improve prognostic performance and reliability. First, to obtain rich and professional aeroengine domain knowledge, multiple detailed and complete knowledge graphs (KGs) are established from the working principles of the aeroengine, covering engine structure, component working characteristics, and sensor parameters. Second, the domain knowledge contained in the KGs is converted to embedded vectors by KG representation learning, which are then used to strengthen and enrich the original multidimensional time-series (MTS) monitoring data, integrating domain knowledge and monitoring data to train DKAMFormer. Third, to learn rich and complementary degradation features, a novel multiscale time scale-guided self-attention (MTSGSA) mechanism is designed, which maps the original MTS into different time-scale feature spaces and then employs multiple independent self-attention heads to extract degradation features from each space. Finally, in a series of comparative experiments on the public CMAPSS and NCMAPSS datasets against 17 state-of-the-art methods, DKAMFormer significantly improves RUL prediction performance under multiple operating conditions and degradation modes.
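The multiscale idea in MTSGSA, attending over the monitoring sequence at several time scales, can be sketched by average-pooling the series at each scale and running plain dot-product self-attention per scale. This is a simplification under stated assumptions (identity Q/K/V projections, invented scales and pooling), not the paper's exact mechanism:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(x):
    """Single-head dot-product self-attention with identity projections."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    return softmax(scores) @ x

def multiscale_features(x, scales=(1, 2, 4)):
    """Attend over the sequence average-pooled at several time scales."""
    feats = []
    for s in scales:
        T = (x.shape[0] // s) * s                          # trim to a multiple of s
        pooled = x[:T].reshape(-1, s, x.shape[-1]).mean(axis=1)
        feats.append(self_attention(pooled).mean(axis=0))  # time-pool each scale
    return np.concatenate(feats)

x = np.random.default_rng(1).standard_normal((16, 4))  # 16 steps, 4 sensor channels
f = multiscale_features(x)  # concatenated features: 3 scales x 4 channels
```

Coarser scales emphasize slow degradation trends while the finest scale keeps step-level detail, which is the complementarity the multiscale heads are meant to capture.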
Funding: Supported by the Key Technology R&D Program of China during the 12th Five-Year Plan period: Super-Class Scientific and Technical Thesaurus and Ontology Construction Faced the Foreign Scientific and Technical Literature (2011BAH10B01).
Funding: Supported by the National Natural Science Foundation of China (No. 81870644).
Funding: Supported by the Philosophy and Social Science Foundation of Hunan Province, China, No. 23YBJ08; the China Youth & Children Research Association, No. 2023B01; and the Research Project on the Theories and Practice of Hunan Women, No. 22YB06.
Abstract: BACKGROUND: In the rapidly evolving landscape of psychiatric research, 2023 marked another year of significant progress globally, with the World Journal of Psychiatry (WJP) experiencing notable expansion and influence. AIM: To conduct a comprehensive visualization and analysis of the articles published in the WJP throughout 2023. By delving into these publications, the aim is to determine the valuable insights that can illuminate pathways for future research endeavors in the field of psychiatry. METHODS: A selection process led to the inclusion of 107 papers from the WJP published in 2023, forming the dataset for the analysis. Employing advanced visualization techniques, this study mapped the knowledge domains represented in these papers. RESULTS: The findings revealed a prevalent focus on key topics such as depression, mental health, anxiety, schizophrenia, and the impact of coronavirus disease 2019. Additionally, through keyword clustering, it became evident that these papers predominantly explored mental health disorders, depression, anxiety, schizophrenia, and related factors. Noteworthy contributions hailed from authors in regions such as China, the United Kingdom, the United States, and Turkey. One paper in particular garnered the highest number of citations, while the American Psychiatric Association was the most cited reference. CONCLUSION: It is recommended that the WJP continue its efforts to enhance the quality of papers published in the field of psychiatry. Additionally, there is a pressing need to delve into the potential applications of digital interventions and artificial intelligence within the discipline.
Abstract: The development of the information age and globalization has challenged the training of technical talents in the 21st century, and information media and technical skills are becoming increasingly important. As a creative sharing form of multimedia, digital storytelling is drawing the attention of more and more educators because of its discipline applicability and its ability to enhance media technology skills. In this study, the information visualization software CiteSpace was applied to visualize and analyze research on digital storytelling in terms of key articles and citation hotspots, and to review the research status of digital storytelling in education fields such as promoting language learning and helping students develop 21st-century skills.
Funding: Sponsored by the National Natural Science Foundation of China (No. 52072012).
Abstract: With the rapid global progression of population aging, the traffic safety of older drivers has emerged as a worldwide concern, resulting in a significant surge in the number of manuscripts on this subject. This study employed scientometric analysis to scrutinize 1652 original manuscripts concerning research on older drivers. To visually depict the current state of knowledge in the field, mapping knowledge domains (MKD) was employed for scientometric analysis, shedding light on the evolution of this research area. First, a statistical analysis was conducted to examine the development of research on older drivers. Second, VOSviewer was utilized for manuscript co-citation analysis, revealing five primary research topics: cognitive function and crash risk, visual processing impairment and crash risk, potential consequences of changes in driving patterns, involvement of older drivers in crashes, and identifying and enhancing factors contributing to unsafe driving. Third, CitNetExplorer was applied to identify core publications and their reference relationships. Research predominantly focused on visual function, cognitive function, and physical health. Fourth, VOSviewer's keyword co-citation analysis pinpointed research hotspots of the last five years concerning older drivers: driving risk factors, driving fitness evaluation, the impact of distraction on driving, and the impact of visual impairment on driving. Finally, based on the aforementioned discussions and the situation in China, some feasible research directions are proposed. This paper summarizes the overall trends in the study of older drivers and the risk factors for traffic accidents. These findings can serve as a reference for improving the driving and road traffic safety of older drivers.
Funding: Supported by the National Key Research and Development Program of China (2024YFD2001100, 2024YFE0214300); the National Natural Science Foundation of China (62162008); Guizhou Provincial Science and Technology Projects ([2024]002, CXTD[2023]027); the Guizhou Province Youth Science and Technology Talent Project ([2024]317); the Guiyang Guian Science and Technology Talent Training Project ([2024]2-15); and the Guizhou Province Graduate Education Innovation Program Project (2024YJSKYJJ096).
Abstract: Staple crops are the cornerstone of the food supply but are frequently threatened by plant diseases. Effective disease management, including disease identification and severity assessment, helps to better address these challenges. Currently, methods for disease severity assessment typically rely on calculating the area proportion of disease segmentation regions or using classification networks for severity assessment. However, these methods require large amounts of labeled data and fail to quantify lesion proportions when using classification networks, leading to inaccurate evaluations. To address these issues, we propose an automated framework for disease severity assessment that combines multi-task learning and knowledge-driven large-model segmentation techniques. This framework includes an image information processor, a lesion and leaf segmentation module, and a disease severity assessment module. First, the image information processor utilizes a multi-task learning strategy to analyze input images comprehensively, ensuring a deep understanding of disease characteristics. Second, the lesion and leaf segmentation module employs prompt-driven large-model technology to accurately segment diseased areas and entire leaves, providing detailed visual analysis. Finally, the disease severity assessment module objectively evaluates the severity of the disease based on professional grading standards by calculating lesion area proportions. Additionally, we have developed a comprehensive database of diseased leaf images from major crops, including several task-specific datasets. Experimental results demonstrate that our framework can accurately identify and assess the types and severity of crop diseases, even without extensive labeled data. Codes and data are available at http://dkp-ads.samlab.cn/.
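The assessment step described above (a lesion area proportion mapped onto a professional grading standard) can be sketched in a few lines. The grade boundaries below are illustrative assumptions, not the actual standard the framework follows:

```python
import numpy as np

def severity_grade(lesion_mask: np.ndarray, leaf_mask: np.ndarray) -> int:
    """Grade disease severity from binary lesion/leaf segmentation masks."""
    leaf_area = leaf_mask.sum()
    if leaf_area == 0:
        return 0
    ratio = lesion_mask.sum() / leaf_area   # lesion area proportion
    bounds = [0.0, 0.05, 0.25, 0.50]        # hypothetical grade boundaries
    return sum(ratio > b for b in bounds)   # grade 0-4

leaf = np.ones((10, 10), dtype=int)         # whole leaf visible
lesion = np.zeros((10, 10), dtype=int)
lesion[:3, :4] = 1                          # 12% of the leaf is diseased
print(severity_grade(lesion, leaf))         # grade 2 under these boundaries
```

In the actual framework the two masks would come from the prompt-driven segmentation module; here they are toy arrays.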
Funding: Funded by the National Natural Science Foundation of China (72104224, L2424237, 71974107, L2224059, L2124002, and 91646102); the Beijing Natural Science Foundation (9232015); the Beijing Social Science Foundation (24GLC058); the Construction Project of the China Knowledge Center for Engineering Sciences and Technology (CKCEST-2023-1-7); the MOE (Ministry of Education in China) Project of Humanities and Social Sciences (16JDGC011); the Tsinghua University Initiative Scientific Research Program (2019Z02CAU); and the Tsinghua University Project of Volvo-Supported Green Economy and Sustainable Development (20183910020).
Abstract: As large language models (LLMs) continue to demonstrate their potential in handling complex tasks, their value in knowledge-intensive industrial scenarios is becoming increasingly evident. Fault diagnosis, a critical domain in the industrial sector, has long faced the dual challenges of managing vast amounts of experiential knowledge and improving human-machine collaboration efficiency. Traditional fault diagnosis systems, which are primarily based on expert systems, suffer from three major limitations: (1) ineffective organization of fault diagnosis knowledge, (2) lack of adaptability between static knowledge frameworks and dynamic engineering environments, and (3) difficulties in integrating expert knowledge with real-time data streams. These systemic shortcomings restrict the ability of conventional approaches to handle uncertainty. In this study, we propose an intelligent computer numerical control (CNC) fault diagnosis system integrating LLMs with a knowledge graph (KG). First, we constructed a comprehensive KG that consolidates multi-source data for structured representation. Second, we designed a retrieval-augmented generation (RAG) framework leveraging the KG to support multi-turn interactive fault diagnosis while incorporating real-time engineering data into the decision-making process. Finally, we introduced a learning mechanism to facilitate dynamic knowledge updates. The experimental results demonstrate that our system significantly improves fault diagnosis accuracy, outperforming engineers with two years of professional experience on our constructed benchmark datasets. By integrating LLMs and a KG, our framework surpasses the limitations of traditional expert systems rooted in symbolic reasoning, offering a novel approach to addressing the cognitive paradox of unstructured knowledge modeling and dynamic environment adaptation in industrial settings.
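At its core, the RAG loop described above retrieves KG facts relevant to a query and prepends them to the LLM prompt. The triples, relation names, and prompt format below are invented for illustration; the paper's KG and retriever are far richer:

```python
# Hypothetical CNC fault triples standing in for the paper's KG.
TRIPLES = [
    ("spindle", "symptom", "abnormal vibration"),
    ("spindle", "possible_cause", "bearing wear"),
    ("axis servo", "symptom", "position drift"),
]

def retrieve(query: str):
    """Return triples whose subject or object is mentioned in the query."""
    q = query.lower()
    return [t for t in TRIPLES if t[0] in q or t[2] in q]

def build_prompt(query: str) -> str:
    """Prepend retrieved KG facts to the user question for the LLM."""
    facts = "; ".join(f"{s} -{p}-> {o}" for s, p, o in retrieve(query))
    return f"Known facts: {facts}\nQuestion: {query}"

print(build_prompt("The spindle shows abnormal vibration"))
```

A production system would use graph queries and embedding similarity rather than substring matching, but the data flow (query → KG retrieval → augmented prompt) is the same.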
Abstract: Research papers in the field of SLA published between 2009 and 2019 are analyzed in terms of the research status of domestic SLA researchers, research institutions, and research frontiers and hotspots, and the knowledge domains of SLA research are mapped. The data are retrieved from 10 core linguistics journals via the CNKI journal database. By means of CiteSpace 5.3, an analysis of the overall trend of studies on SLA in China is made.
Funding: Supported by the National Natural Science Foundation of China (60173058, 70372024).
Abstract: With the explosive growth of available data, there is an urgent need to develop continuous data mining, which significantly reduces manual interaction. A novel model for data mining in an evolving environment is proposed. First, some valid mining task schedules are generated; then autonomous and local mining are executed periodically; finally, previous results are merged and refined. The framework based on the model creates a communication mechanism to incorporate domain knowledge into the continuous process through an ontology service. The local and merge mining are made transparent to the end user and to heterogeneous data sources through the ontology. Experiments suggest that the framework is useful in guiding the continuous mining process.
Funding: Project 50774080 supported by the National Natural Science Foundation of China.
Abstract: Extracting mining subsidence land from remote sensing (RS) images is one of the important research topics for environmental monitoring in mining areas. The accuracy of traditional extraction models based on spectral features is low. In order to extract subsidence land from RS images with high accuracy, domain knowledge should be imported and new models should be proposed. In view of the disadvantages of traditional extraction models, this paper imports domain knowledge from practice and experience, converts semantic knowledge into digital information, and proposes a new model for this specific task. Selecting the Luan mining area as the study area, the new model is tested based on GIS and related knowledge. The result shows that the proposed method is more precise than traditional methods and can satisfy the demands of land subsidence monitoring in mining areas.
Funding: Financially supported by the National Key Research and Development Program of China (No. 2018YFB0704400); the Key Program of Science and Technology of Yunnan Province (No. 202002AB080001-2); the Key Research Project of Zhejiang Laboratory (No. 2021PE0AC02); and the Shanghai Pujiang Program (No. 20PJ1403700).
Abstract: A mathematical formula with high physical interpretability, accurate prediction, and large generalization power is highly desirable for science, technology, and engineering. In this study, we performed domain knowledge-guided machine learning to discover a highly interpretable formula describing the high-temperature oxidation behavior of FeCrAlCoNi-based high-entropy alloys (HEAs). The domain knowledge suggests that the exposure-time-dependent and thermally activated oxidation behavior can be described by a synergy formula of a power law multiplied by the Arrhenius equation. The pre-factor, time exponent (m), and activation energy (Q) depend on the chemical compositions of the eight elements in the FeCrAlCoNi-based HEAs. The Tree-Classifier for Linear Regression (TCLR) algorithm utilizes the two experimental features of exposure time (t) and temperature (T) to extract the spectrums of activation energy (Q) and time exponent (m) from the complex and high-dimensional feature space, which automatically gives the spectrum of the pre-factor. The three spectrums are assembled by using the element features, which leads to a general and interpretable formula with a high prediction accuracy (determination coefficient R^(2)=0.971). The role of each chemical element in the high-temperature oxidation behavior is analytically illustrated in the three spectrums, so the discovered interpretable formula provides guidance for the inverse design of HEAs against high-temperature oxidation. The present work demonstrates the significance of domain knowledge in the development of materials informatics.
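The synergy formula named in the abstract, a power law multiplied by the Arrhenius equation, can be written out explicitly. Denoting the oxidation response (e.g., weight gain) as $\Delta W$ is our notational assumption; $\mathbf{c}$ is the composition vector of the eight elements:

```latex
\Delta W = A(\mathbf{c})\, t^{\,m(\mathbf{c})} \exp\!\left(-\frac{Q(\mathbf{c})}{RT}\right)
```

TCLR's role, as described, is to learn the three composition-dependent quantities $A$, $m$, and $Q$ from data partitioned over the experimental features $(t, T)$.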
Funding: Projects 60234030 and 60404021 supported by the National Natural Science Foundation of China.
Abstract: Immune evolutionary algorithms with domain knowledge were presented to solve the problem of simultaneous localization and mapping for a mobile robot in unknown environments. Two operators with domain knowledge were designed in the algorithms: the feature of parallel line segments, which avoids the data association problem, was used to construct a vaccination operator, and the characteristics of convex vertices of polygonal obstacles were extended to develop a key-point grid pulling operator. Experimental results on a real mobile robot show that the computational expense of the designed algorithms is lower than that of other evolutionary algorithms for simultaneous localization and mapping, and the maps obtained are very accurate. Owing to the advantages of immune evolutionary algorithms with domain knowledge, the convergence rate of the designed algorithms is about 44% higher than those of other algorithms.
基金supported by the National Key Research and Development Program of China(No.2021YFF0900503)partly by the National Natural Science Foundation of China(No.62262018,61971382)。
Abstract: Short video applications like TikTok have seen significant growth in recent years. One common behavior of users on these platforms is watching and swiping through videos, which can lead to a significant waste of bandwidth. As such, an important challenge in short video streaming is to design a preloading algorithm that can effectively decide which videos to download, at what bitrate, and when to pause the download in order to reduce bandwidth waste while improving the Quality of Experience (QoE). However, designing such an algorithm is non-trivial, especially when considering the conflicting objectives of minimizing bandwidth waste and maximizing QoE. In this paper, we propose an end-to-end Deep reinforcement learning framework with Action Masking called DAM that leverages domain knowledge to learn an optimal policy for short video preloading. To achieve this, we introduce a reward shaping technique to minimize bandwidth waste and use action masking to make actions more reasonable, reduce playback rebuffering, and accelerate the training process. We have conducted extensive experiments using real-world video datasets and network traces including 4G/Wi-Fi/5G. Our results show that DAM improves the QoE score by 3.73%-11.28% compared to state-of-the-art algorithms and achieves an average bandwidth waste of only 10.27%-12.07%, outperforming all baseline methods.
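The action-masking idea can be illustrated independently of the full RL stack: invalid actions get their logits pushed to negative infinity before the softmax, so the policy can never select them. The mask condition below (high bitrates blocked, e.g. while the buffer is full) is a made-up example of the kind of domain rule such a system encodes:

```python
import numpy as np

def masked_policy(logits: np.ndarray, valid: np.ndarray) -> np.ndarray:
    """Softmax over logits with invalid actions masked out."""
    masked = np.where(valid, logits, -np.inf)
    exp = np.exp(masked - masked.max())   # numerically stable softmax
    return exp / exp.sum()

logits = np.array([2.0, 1.0, 0.5, 3.0])        # one logit per action
valid = np.array([True, True, False, False])   # domain rule: last two blocked
probs = masked_policy(logits, valid)
print(probs.round(3))                          # masked actions get probability 0
```

Applying the same mask during training is also why masking accelerates learning: the agent never wastes exploration on actions that domain knowledge has already ruled out.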
Funding: This work has been supported by the National Science Foundation of China, Grant No. 61762092, "Dynamic multi-objective requirement optimization based on transfer learning"; the Open Foundation of the Key Laboratory in Software Engineering of Yunnan Province, Grant No. 2017SE204, "Research on extracting software feature models using transfer learning"; and the National Science Foundation of China, Grant No. 61762089, "The key research of high order tensor decomposition in a distributed environment".
Abstract: With the rise of open-source software, the social development paradigm occupies an indispensable position in the current software development process. This paper puts forward a variant of the PageRank algorithm to build an importance assessment model, which provides quantifiable importance assessment metrics for new Java projects based on, or using components of, Java open-source projects. The critical point of the model is to use crawlers to obtain relevant information about Java open-source projects in the GitHub open-source community to build a domain knowledge graph. Each project is measured along three dimensions: project influence, project activity, and project popularity. A modified PageRank algorithm is then proposed to construct the importance evaluation model. This article evaluates the importance of 4512 Java open-source projects obtained on GitHub, with good results.
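A minimal sketch of how a PageRank variant can fold per-project scores into the ranking (the three dimensions of influence, activity, and popularity are collapsed into a single prior weight here). The dependency graph and weights are toy data, not the paper's model:

```python
def weighted_pagerank(edges, weights, d=0.85, iters=50):
    """PageRank where both teleportation and initialization follow
    per-node prior weights instead of a uniform distribution.
    edges: {project: [projects it depends on]}."""
    nodes = list(weights)
    total = sum(weights.values())
    rank = {n: weights[n] / total for n in nodes}
    for _ in range(iters):
        new = {n: (1 - d) * weights[n] / total for n in nodes}
        for src, deps in edges.items():
            for dst in deps:
                new[dst] += d * rank[src] / len(deps)
        rank = new
    return rank

# Toy dependency graph: two projects depend on guava.
edges = {"app": ["guava", "junit"], "lib": ["guava"]}
weights = {"app": 1.0, "lib": 1.0, "guava": 2.0, "junit": 1.0}
ranks = weighted_pagerank(edges, weights)
print(max(ranks, key=ranks.get))  # guava: high prior plus incoming links
```

In the paper, the prior for each GitHub project would be computed from the crawled influence, activity, and popularity metrics rather than set by hand.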
Abstract: The characteristics of the design process, design object, and domain knowledge of complex products are analyzed. A knowledge representation schema based on integrated generalized rules is stated. An AND-OR-tree-based concept model for domain knowledge is set up. A strategy for multilevel domain knowledge acquisition based on the model is presented. The intelligent multilevel knowledge acquisition system (IMKAS) for product design is developed, and it is applied in an intelligent decision support system for the concept design of complex products.
基金supported by the National Key R&D Program of China(Grant No.2023YFC3010803)the National Nature Science Foundation of China(Grant No.52272424)+1 种基金the Key R&D Program of Hubei Province of China(Grant No.2023BCB123)the Fundamental Research Funds for the Central Universities(Grant No.WUT:2023IVB079)。
Abstract: Side-scan sonar (SSS) is now a prevalent instrument for large-scale seafloor topography measurements, deployable on an autonomous underwater vehicle (AUV) to execute fully automated underwater acoustic scanning imaging along a predetermined trajectory. However, SSS images often suffer from speckle noise caused by mutual interference between echoes, and limited AUV computational resources further hinder noise suppression. Existing approaches for SSS image processing and speckle noise reduction rely heavily on complex network structures and fail to combine the benefits of deep learning and domain knowledge. To address the problem, RepDNet, a novel and effective despeckling convolutional neural network, is proposed. RepDNet introduces two re-parameterized blocks, the Pixel Smoothing Block (PSB) and the Edge Enhancement Block (EEB), preserving edge information while attenuating speckle noise. During training, PSB and EEB manifest as double-layered multi-branch structures, integrating first-order and second-order derivatives and smoothing functions. During inference, the branches are re-parameterized into a 3×3 convolution, enabling efficient inference without sacrificing accuracy. RepDNet comprises three computational operations: 3×3 convolution, element-wise summation, and Rectified Linear Unit activation. Evaluations on benchmark datasets, a real SSS dataset, and data collected at Lake Mulan establish RepDNet as a well-balanced network, meeting AUV computational constraints in terms of performance and latency.
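The re-parameterization trick that PSB and EEB rely on follows from the linearity of convolution: parallel branches whose outputs are summed can be fused into one kernel by summing the kernels. A single-channel sketch (the smoothing and second-derivative kernels are generic examples, not RepDNet's trained weights):

```python
import numpy as np

def conv2d(img: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Naive 'valid' 2-D cross-correlation, enough for the demo."""
    h, w = k.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + h, j:j + w] * k).sum()
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
k_smooth = np.full((3, 3), 1 / 9)                               # smoothing branch
k_lap = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)  # 2nd-derivative branch
two_branch = conv2d(img, k_smooth) + conv2d(img, k_lap)         # training-time form
fused = conv2d(img, k_smooth + k_lap)                           # inference-time 3×3
print(np.allclose(two_branch, fused))                           # outputs are identical
```

This is why the fused network is cheaper at inference without losing accuracy: the fusion is exact, not an approximation, as long as no nonlinearity sits between the branches and the summation.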
Abstract: Generally, knowledge extraction technology is used to obtain nodes and relationships from unstructured and structured data, and the extracted data are then fused with the original knowledge graph to achieve its extension. Because the concepts and knowledge structures expressed on the Internet suffer from multi-source heterogeneity and low accuracy, it is usually difficult to achieve good results simply by using knowledge extraction technology. Considering that domain knowledge is highly dependent on relevant expert knowledge, the method in this paper tries to expand the domain knowledge through crowdsourcing. The method splits the domain knowledge system into knowledge subgraphs according to the corresponding concepts, forms subtasks of moderate granularity, and uses crowdsourcing technology for the acquisition and integration of the knowledge subgraphs to improve the knowledge system.
Abstract: Research on domain-specific question-answering technology has become important with the increasing demand for intelligent question-answering systems. This paper proposes a domain question-answering algorithm based on the CLIP mechanism to improve the accuracy and efficiency of interaction. First, the paper reviews relevant technologies in the question-answering field. Then, the question-answering model based on the CLIP mechanism is presented, including its design, implementation, and optimization. The paper also describes the construction process of the domain-specific knowledge graph, including graph design, data collection and processing, and graph construction methods. The performance of the proposed algorithm is compared with the classic question-answering models BiDAF, R-Net, and XLNet on a military domain dataset. The experimental results show that the proposed algorithm achieves advanced performance, with an F1 score of 84.6% on the constructed military knowledge graph test set, at least 1.5% higher than the other models. We conduct a detailed analysis of the experimental results, which illustrates the algorithm's advantages in accuracy and efficiency, as well as its potential for further improvement. These findings demonstrate the practical application potential of the proposed algorithm in the military domain.
基金supported by the National Key Research and Development Program of China(No.2023YFB3712401),the National Natural Science Foundation of China(No.52274301)the Aeronautical Science Foundation of China(No.2023Z0530S6005)the Ningbo Yongjiang Talent-Introduction Programme(No.2022A-023-C).
Abstract: The viscosity of refining slags plays a critical role in metallurgical processes. However, obtaining accurate viscosity data remains challenging due to the complexities of high-temperature experiments, which often rely on empirical models with limited predictive capabilities. This study focuses on the influence of optical basicity on the viscosity of CaO-Al_(2)O_(3)-based refining slags, leveraging machine learning to address data scarcity and improve prediction accuracy. An automated framework for algorithm integration, parameter tuning, and evaluation ranking (Auto-APE) is employed to develop customized data-driven models for various slag systems, including CaO-Al_(2)O_(3)-SiO_(2), CaO-Al_(2)O_(3)-CaF_(2), CaO-Al_(2)O_(3)-SiO_(2)-MgO, and CaO-Al_(2)O_(3)-SiO_(2)-MgO-CaF_(2). By incorporating optical basicity as a key feature, the models achieve an average validation error of 8.0% to 15.1%, significantly outperforming traditional empirical models. Additionally, symbolic regression is introduced to rapidly construct domain-specific features, such as optical-basicity-like descriptors, offering a potential breakthrough in performance prediction for small datasets. This work highlights the critical role of domain-specific knowledge in understanding and predicting viscosity, providing a robust machine-learning-based approach for optimizing refining slag properties.
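Optical basicity is a cheap, composition-only feature, which is part of what makes it attractive for small datasets. A sketch using the standard Duffy-Ingram oxygen-weighted averaging; the component values below are commonly tabulated ones, quoted here for illustration only:

```python
# Optical basicity of each oxide and its oxygen count per formula unit
# (commonly tabulated Duffy-Ingram values, used only for illustration).
LAMBDA = {"CaO": 1.00, "Al2O3": 0.60, "SiO2": 0.48, "MgO": 0.78}
N_OX = {"CaO": 1, "Al2O3": 3, "SiO2": 2, "MgO": 1}

def optical_basicity(mole_frac: dict) -> float:
    """Oxygen-weighted average of component optical basicities."""
    num = sum(x * N_OX[c] * LAMBDA[c] for c, x in mole_frac.items())
    den = sum(x * N_OX[c] for c, x in mole_frac.items())
    return num / den

slag = {"CaO": 0.50, "Al2O3": 0.40, "SiO2": 0.10}  # toy CaO-Al2O3-SiO2 slag
print(round(optical_basicity(slag), 3))
```

Such a scalar can then be fed alongside raw compositions into the Auto-APE models, or rediscovered automatically by the symbolic-regression step the abstract mentions.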
基金supported in part by the National Natural Science Foundation of China(52305570)the National Natural Science Foundation of China Key Support Project(U2133202)+2 种基金China Postdoctoral Science Foundation(2022M720955)Postdoctoral Science Foundation of Heilongjiang Province(LBH-Z22187)Outstanding Doctoral Dissertation Funding Project of Heilongjiang Province(LJYXL2022-011).
Abstract: Transformers have achieved promising results in aeroengine remaining useful life (RUL) prediction, but they still have several limitations: 1) aeroengine domain knowledge, which contains rich information that can reflect the aeroengine's health status, is largely ignored in the modeling process; 2) traditional transformers ignore valuable degradation information from other time scales. To address these issues, a novel domain knowledge-augmented multiscale transformer (DKAMFormer) is developed by integrating domain knowledge and multiscale learning to improve prognostic performance and reliability. First, to obtain rich and professional aeroengine domain knowledge, multiple detailed and complete knowledge graphs (KGs) are established based on the working principles of the aeroengine, including the aeroengine structure, component working characteristics, and sensor parameters. Second, the domain knowledge contained in the KGs is converted to embedded vectors by KG representation learning, which are then utilized to strengthen and enrich the original multidimensional time-series (MTS) monitoring data, aiming to integrate domain knowledge and monitoring data to train DKAMFormer. Third, to learn rich and complementary degradation features, a novel multiscale time-scale-guided self-attention (MTSGSA) mechanism is designed, which maps the original MTS into different time-scale feature spaces and then employs multiple independent self-attention heads to extract degradation features from the different time-scale spaces. Finally, through a series of comparative experiments on the public CMAPSS and NCMAPSS datasets, in which it is compared with 17 SOTA methods, the developed DKAMFormer significantly improves RUL prediction performance under multiple operating conditions and degradation modes.