Funding: Supported in part by the Science Center for Gas Turbine Project (Project No. P2022-DC-I-003-001) and the National Natural Science Foundation of China (Grant No. 52275130).
Abstract: Despite significant progress in the Prognostics and Health Management (PHM) domain using systems that learn patterns from data, machine learning (ML) still faces challenges related to limited generalization and weak interpretability. A promising approach to overcoming these challenges is to embed domain knowledge into the ML pipeline, enhancing the model with additional pattern information. In this paper, we review the latest developments in PHM, encapsulated under the concept of Knowledge Driven Machine Learning (KDML). We propose a hierarchical framework to define KDML in PHM, which includes scientific paradigms, knowledge sources, knowledge representations, and knowledge embedding methods. Using this framework, we examine current research to demonstrate how various forms of knowledge can be integrated into the ML pipeline and provide a roadmap to specific usage. Furthermore, we present several case studies that illustrate specific implementations of KDML in the PHM domain, including inductive experience, physical models, and signal processing. We analyze the improvements in generalization capability and interpretability that KDML can achieve. Finally, we discuss the challenges, potential applications, and usage recommendations of KDML in PHM, with a particular focus on the critical need for interpretability to ensure trustworthy deployment of artificial intelligence in PHM.
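A minimal sketch of one such knowledge-embedding pattern follows: a physics-based residual added to an ordinary regression loss. It assumes a PyTorch model predicting a health indicator; the exponential-degradation ODE dh/dt = -k*h and the weight lambda_phys are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

# Sketch: embed physical knowledge into the ML pipeline as a soft loss
# constraint. The degradation ODE dh/dt = -k*h is an assumed example.

def kdml_loss(model, x, t, y, k=0.1, lambda_phys=1.0):
    t = t.detach().requires_grad_(True)
    h = model(torch.cat([x, t], dim=1))           # predicted health indicator
    data_loss = torch.mean((h - y) ** 2)          # ordinary data-driven term
    dh_dt = torch.autograd.grad(h.sum(), t, create_graph=True)[0]
    phys_loss = torch.mean((dh_dt + k * h) ** 2)  # penalize ODE violation
    return data_loss + lambda_phys * phys_loss

# e.g., model = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))
# with x: (N, 2) sensor features, t: (N, 1) time stamps, y: (N, 1) targets
```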
Funding: Supported by the Postdoctoral Fellowship Program of CPSF (Grant No. GZB20230685) and the National Science Foundation of China (Grant No. 42277161).
Abstract: Forecasting landslide deformation is challenging due to the influence of various internal and external factors on the occurrence of systemic and localized heterogeneities. Despite its potential to improve landslide predictability, deep learning has yet to be sufficiently explored for the complex deformation patterns associated with landslides and is inherently opaque. Herein, we developed a holistic landslide deformation forecasting method that considers spatiotemporal correlations of landslide deformation by integrating domain knowledge into interpretable deep learning. By spatially capturing the interconnections between multiple deformations from different observation points, our method contributes to the understanding and forecasting of systematic landslide behavior. By integrating specific domain knowledge relevant to each observation point and merging internal properties with external variables, our method accounts for local heterogeneity, identifying temporal deformation patterns in different landslide zones. Case studies involving reservoir-induced landslides and creeping landslides demonstrated that our approach (1) enhances the accuracy of landslide deformation forecasting, (2) identifies significant contributing factors and their influence on spatiotemporal deformation characteristics, and (3) demonstrates how identifying these factors and patterns facilitates landslide forecasting. Our research offers a promising and pragmatic pathway toward a deeper understanding and forecasting of complex landslide behaviors.
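The sketch below illustrates the interpretable spatiotemporal pattern in that spirit: a shared LSTM encodes deformation series from all observation points, and a softmax attention over input factors yields weights that can be read out as factor contributions. Layer sizes and names are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Sketch: interpretable spatiotemporal forecaster. Attention weights over
# input factors are exposed so contributing factors can be inspected.

class InterpretableForecaster(nn.Module):
    def __init__(self, n_points, n_factors, hidden=64):
        super().__init__()
        self.factor_attn = nn.Linear(n_points * n_factors, n_points * n_factors)
        self.lstm = nn.LSTM(n_points * n_factors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_points)  # next-step deformation per point

    def forward(self, x):                         # x: (batch, time, points*factors)
        attn = torch.softmax(self.factor_attn(x), dim=-1)
        out, _ = self.lstm(attn * x)              # reweight factors, then encode
        return self.head(out[:, -1]), attn        # forecast + inspectable weights
```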
Funding: Supported in part by the Key-Area Research and Development Program of Guangdong Province (2020B010166006), the National Natural Science Foundation of China (61972102), the Guangzhou Science and Technology Plan Project (023A04J1729), and the Science and Technology Development Fund (FDCT), Macao SAR (015/2020/AMJ).
Abstract: Most existing domain adaptation (DA) methods aim to achieve favorable performance in complicated environments by sampling. However, three unsolved problems limit their efficiency: i) they adopt global sampling but neglect to exploit global and local sampling simultaneously; ii) they transfer knowledge from either a global perspective or a local perspective, overlooking the transmission of confident knowledge from both; and iii) they apply repeated sampling during iteration, which takes a lot of time. To address these problems, knowledge transfer learning via dual density sampling (KTL-DDS) is proposed in this study, which consists of three parts: i) dual density sampling (DDS), which jointly leverages two sampling methods associated with different views, i.e., global density sampling that extracts representative samples with the most common features and local density sampling that selects representative samples with critical boundary information; ii) consistent maximum mean discrepancy (CMMD), which reduces intra- and cross-domain risks and guarantees high consistency of knowledge by shortening the distances between every two of the four subsets collected by DDS; and iii) knowledge dissemination (KD), which transmits confident and consistent knowledge from the representative target samples with global and local properties to the whole target domain by preserving the neighboring relationships of the target domain. Mathematical analyses show that DDS avoids repeated sampling during iteration. With the above three actions, confident knowledge with both global and local properties is transferred, and memory use and running time are greatly reduced. In addition, the approach is extended into a general framework named dual density sampling approximation (DDSA), which can be easily applied to other DA algorithms. Extensive experiments on five datasets in clean, label-corruption (LC), feature-missing (FM), and LC&FM environments demonstrate the encouraging performance of KTL-DDS.
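For concreteness, the sketch below computes the squared maximum mean discrepancy (MMD) between two sample sets with an RBF kernel, the kind of subset distance CMMD shortens; the median-heuristic bandwidth is an assumption, not taken from the paper.

```python
import numpy as np

# Sketch: squared MMD with an RBF kernel between two sample sets.

def rbf_kernel(a, b, gamma):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y):
    z = np.vstack([x, y])
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    gamma = 1.0 / np.median(d2[d2 > 0])          # median heuristic bandwidth
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

# e.g., mmd2(source_subset, target_subset) -> scalar discrepancy to minimize
```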
Abstract: A method of knowledge representation and learning based on fuzzy Petri nets was designed, in which the weights, threshold values, and certainty factors of the knowledge model can be adjusted dynamically. The method integrates the advantages of knowledge representation based on production rules and on neural networks. Like production-rule representation, it has a clear structure with parameters of specific meaning; in addition, it has the learning and parallel reasoning abilities of neural-network representations. Simulation results show that the learning algorithm converges and that the weights, threshold values, and certainty factors reach the ideal level after training.
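As a concrete reading of the representation, the sketch below shows one fuzzy Petri net transition: weighted input truth values are compared against a threshold, and a certainty factor scales the propagated truth. These are exactly the three parameter kinds the learning algorithm tunes; the specific firing form is a common textbook variant, assumed here.

```python
# Sketch: one fuzzy Petri net transition (rule) firing.

def fire(truth, weights, threshold, certainty):
    """truth, weights: lists over the transition's input places."""
    activation = sum(t * w for t, w in zip(truth, weights))
    if activation > threshold:
        return activation * certainty   # truth degree of the output place
    return 0.0                          # transition not enabled

# e.g., fire([0.8, 0.6], [0.5, 0.5], threshold=0.3, certainty=0.9) -> 0.63
```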
Funding: Supported by the National Science Foundation of China, Grant No. 61762092, "Dynamic multi-objective requirement optimization based on transfer learning", and Grant No. 61762089, "The key research of high order tensor decomposition in distributed environment", and by the Open Foundation of the Key Laboratory in Software Engineering of Yunnan Province, Grant No. 2017SE204, "Research on extracting software feature models using transfer learning".
Abstract: In view of the low interpretability of existing collaborative filtering recommendation algorithms and the difficulty of extracting information in content-based recommendation algorithms, we propose an efficient KGRS model. KGRS first obtains reasoning paths from a knowledge graph and embeds the entities of the paths into vectors using the TransD knowledge representation learning algorithm. It then uses an LSTM with a soft attention mechanism to capture the semantics of each reasoning path, and convolution and pooling operations to distinguish the importance of different reasoning paths. Finally, a fully connected layer and a sigmoid function produce the predicted ratings, and items are sorted by predicted rating to form the user's recommendation list. KGRS is tested on the MovieLens-100K dataset. Compared with representative related algorithms, including the state-of-the-art interpretable recommendation models RKGE and RippleNet, the experimental results show that KGRS offers good recommendation interpretation and higher recommendation accuracy.
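The sketch below illustrates the core scoring pipeline described above: an LSTM encodes each reasoning path, soft attention weighs the paths, and a sigmoid maps the pooled representation to a rating. Layer sizes are assumptions, and the convolution/pooling stage is folded into a single attention pooling for brevity.

```python
import torch
import torch.nn as nn

# Sketch: score a user-item pair from its knowledge-graph reasoning paths.

class PathScorer(nn.Module):
    def __init__(self, emb_dim=64, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.out = nn.Linear(hidden, 1)

    def forward(self, paths):                # paths: (n_paths, path_len, emb_dim)
        _, (h, _) = self.lstm(paths)         # h: (1, n_paths, hidden)
        h = h.squeeze(0)
        weights = torch.softmax(self.attn(h), dim=0)   # importance per path
        pooled = (weights * h).sum(dim=0)
        return torch.sigmoid(self.out(pooled))         # predicted rating in (0, 1)
```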
Abstract: The use of multiple-choice (MC) question types has been one of the most contentious issues in language testing. Much has been said and written about the use of MC over the years; however, no attempt has ever been made to introduce innovation in test item types. The researchers proposed a jumbled-words (JW) test item based on cognitive science and deep learning principles, and addressed the feasibility of replacing the MC question type with JW to meet the ongoing rapid development of language testing practice. Two research questions were proposed ad hoc, focusing on the correlation between JW and MC scores. RASCH-GZ was used to perform item analyses (Rasch, 1960), and the resulting item difficulty parameters were used to compare the two item types. The sample comprised 40 Chinese participants. Correlation analysis revealed that the performance of the same group of subjects taking both JW and MC was not related (Pearson Corr = 0), primarily due to the elimination of the guessing factor inherent in test-takers during JW test performance. Three factors, each with three dimensions, were specified for the design of the JW test: computer program, test difficulty, and score acceptability. Data collected through questionnaires were analyzed using EFA in SPSS V.24.0. The KMO value (0.867) was close to one, with significance at 0.000 (<0.05), indicating that the questionnaire construct has good validity for factor analysis. Three important conclusions were obtained, the implications of which could provide impetus for testing practitioners to work more precisely and correctly, potentially reshaping overall language testing practice. Limitations and recommendations for future research were also discussed.
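For reference, the sketch below gives the dichotomous Rasch model underlying such item analyses: the probability of a correct response as a function of person ability theta and item difficulty b. The example values are illustrative.

```python
import math

# Sketch: dichotomous Rasch model used in item analysis.

def rasch_p(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# e.g., rasch_p(theta=1.0, b=0.0) -> ~0.73: an able test-taker on an average item
```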
Funding: Supported by the National Natural Science Foundation of China (Nos. 62366003 and 62066019), the Natural Science Foundation of Jiangxi Province (No. 20232BAB202046), and the Graduate Innovation Foundation of Jiangxi University of Science and Technology (No. XY2022-S040).
Abstract: With the advancement of the manufacturing industry, the investigation of shop-floor scheduling problems has gained increasing importance. The Job shop Scheduling Problem (JSP), as a fundamental scheduling problem, holds considerable theoretical research value. However, finding a satisfactory solution within a given time is difficult due to the NP-hard nature of the JSP. A cooperative-guided ant colony optimization algorithm with knowledge learning (KLCACO) is proposed to address this difficulty. The algorithm integrates a data-based swarm intelligence optimization algorithm with model-based JSP scheduling knowledge. A solution construction scheme based on scheduling knowledge learning is proposed for KLCACO: the problem model and algorithm data are fused by merging scheduling and planning knowledge with individual scheme construction to enhance the quality of the generated individual solutions. A pheromone guidance mechanism based on a collaborative machine strategy is used to simplify information learning and the problem space by collaborating with different machine processing orders. Additionally, KLCACO utilizes the classical neighborhood structure to optimize the solution, expanding the search space of the algorithm and accelerating its convergence. KLCACO is compared with other high-performance intelligent optimization algorithms on four public benchmark datasets comprising 48 benchmark test cases in total. The effectiveness of the proposed algorithm in addressing JSPs is validated, demonstrating the feasibility of KLCACO for knowledge and data fusion in complex combinatorial optimization problems.
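To ground the swarm-intelligence side, the sketch below shows the classical ant-colony transition rule on which such algorithms build: the next operation is drawn with probability proportional to pheromone^alpha * heuristic^beta. Parameter values are illustrative; KLCACO's knowledge-learned guidance layer is noted in the comments but not reproduced.

```python
import random

# Sketch: classical ACO roulette-wheel choice of the next operation.

def choose_next(candidates, pheromone, heuristic, alpha=1.0, beta=2.0):
    scores = [(pheromone[c] ** alpha) * (heuristic[c] ** beta) for c in candidates]
    total = sum(scores)
    r, acc = random.random() * total, 0.0
    for c, s in zip(candidates, scores):
        acc += s
        if r <= acc:
            return c
    return candidates[-1]

# KLCACO additionally biases the pheromone with learned scheduling knowledge
# and a collaborative machine strategy; that guidance is not sketched here.
```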
Abstract: The Great Learning advances the proposition of "investigating things to acquire knowledge," presenting the rudimentary form of Chinese philosophical epistemology. Over the following 2,500 years, Chinese philosophy underwent numerous debates on the relationship between knowledge and action; however, no philosopher developed a comprehensive epistemological system exploring the nature, source, formation, application, truth, testing, and structure of human knowledge, or the relationship between language and thinking. The concept of "know" in the philosophy of "investigating things to attain knowledge" is equivalent to concepts such as "idea" and "meaning" in Western philosophy. However, the cognitive state of "know" was never fully explored and expanded upon, nor was the distinction drawn between empirical and rational senses of "know." Confucianism advocated the "learning" of language communication as one way to acquire knowledge, but it failed to evolve a method of using language to conduct formal logical reasoning to acquire knowledge and test truth. The "eight trigrams" deduction and the yin-yang and five elements theory of the I Ching stifled, at the level of thinking mode, the emergence of modern scientific epistemological methods in China. Confucian epistemology focuses on "reason" and interpersonal relationships, with an emphasis on establishing moral ethics and social order. Taoist epistemology pursues a realm beyond experience and social conventions, understanding the world through introspection and insight; it focuses on grasping the "Tao," provides only vague guidance for determining universal truths and acquiring precise knowledge, and is prone to falling into nihilism and relativism. The concepts of unity between heaven and man and unity between knowledge and action left Chinese philosophical epistemology without a differentiated study of the relationship between subject and object; it did not address how the subject understands the object, the limits of understanding, or the interaction between subject and object in the process of understanding.
Funding: Supported by the "Pioneer" and "Leading Goose" Key R&D Program of Zhejiang Province under Grant No. 2022C03106, the Zhejiang Provincial Natural Science Foundation of China under Grant No. LY23F020010, and the National Natural Science Foundation of China under Grant No. 62077015.
Abstract: Entity alignment, which aims to identify entities with the same meaning in different Knowledge Graphs (KGs), is a key step in knowledge integration. Despite the promising results achieved by existing methods, they often fail to fully leverage the structural information of KGs for entity alignment. Our goal is therefore to thoroughly explore the features of entity neighbors and relations to obtain better entity embeddings. In this work, we propose DCEA, an effective dual-context representation learning framework for entity alignment. Specifically, the neighbor-level embedding module introduces relation information to aggregate neighbor context more accurately, while the relation-level embedding module utilizes neighbor context to enhance relation-level embeddings. To eliminate semantic gaps between neighbor-level and relation-level embeddings and fully exploit their complementarity, we design a hybrid embedding fusion model that adaptively performs embedding fusion to obtain powerful joint entity embeddings. We also jointly optimize the contrastive loss of multi-level embeddings, enhancing their mutual reinforcement while preserving the characteristics of neighbor and relation embeddings. Additionally, a decision fusion module combines the similarity scores calculated between entities from embeddings at the different levels to make the final alignment decision. Extensive experimental results on public datasets indicate that DCEA performs better than state-of-the-art baselines.
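The sketch below illustrates one plausible form of the adaptive fusion step: a learned sigmoid gate mixes the neighbor-level and relation-level embeddings per dimension. The gating form and dimensions are assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

# Sketch: adaptive gated fusion of two embedding views into a joint one.

class GatedFusion(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, e_neighbor, e_relation):    # both: (n_entities, dim)
        g = torch.sigmoid(self.gate(torch.cat([e_neighbor, e_relation], dim=-1)))
        return g * e_neighbor + (1.0 - g) * e_relation

# Alignment then scores entity pairs by similarity (e.g., cosine) of the
# fused embeddings, combined with per-level scores in the decision fusion.
```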
Funding: Supported by the Key Program of the Natural Science Foundation of China (Grant No. 61631018), the Anhui Provincial Natural Science Foundation (Grant No. 1908085MF177), and Huawei Technology Innovative Research (YBN2018095087).
Abstract: The 5th-generation (5G) mobile network has been put into service across a number of markets, aiming to provide subscribers with high bit rates, low latency, high capacity, and many new services and vertical applications. Research and development on 6G has therefore been put on the agenda. Regarding the demands and characteristics of future 6G, artificial intelligence (A), big data (B), and cloud computing (C) will play indispensable roles in achieving the highest efficiency and the largest benefits. Interestingly, the initials of these three aspects remind us of the significance of vitamin ABC to the human body. In this article we expound on the three elements of ABC and the relationships between them. We analyze the basic characteristics of wireless big data (WBD) and the corresponding technical actions in A and C, namely high-dimensional features and spatial separation, predictive ability, and the characteristics of knowledge. Based on the abilities of WBD, a new learning approach for wireless AI called the knowledge-plus-data-driven deep learning (KD-DL) method and a layered computing architecture of mobile networks integrating cloud/edge/terminal computing are proposed, and their achievable efficiency is discussed. This progress will be conducive to the development of future 6G.
Funding: Jointly supported by the Opening Fund of the Key Laboratory of Low-grade Energy Utilization Technologies and Systems of the Ministry of Education of China (Chongqing University) (LLEUTS-202305), the Opening Fund of the State Key Laboratory of Green Building in Western China (LSKF202316), the Open Foundation of the Anhui Province Key Laboratory of Intelligent Building and Building Energy-saving (IBES2022KF11), "The 14th Five-Year Plan" Hubei Provincial Advantaged Characteristic Disciplines (Groups) Project of Wuhan University of Science and Technology (2023D0504, 2023D0501), the National Natural Science Foundation of China (51906181), the 2021 Construction Technology Plan Project of Hubei Province (2021-83), and the Science and Technology Project of Guizhou Province: Integrated Support of Guizhou [2023] General 393.
Abstract: The shortage of available modelling data makes it difficult to guarantee the performance of data-driven building energy prediction (BEP) models for both newly built buildings and existing information-poor buildings. Both knowledge transfer learning (KTL) and data incremental learning (DIL) can address the data shortage issue of such buildings. For new-building scenarios with continuous data accumulation, the performance of BEP models has not been fully investigated with the dynamics of data accumulation taken into account. DIL, which can learn dynamic features from accumulated data, adapt to the developing trend of new-building time-series data, and extend a BEP model's knowledge, has rarely been studied. Previous studies have shown that the performance of KTL models trained with fixed data can be further improved in scenarios with dynamically changing data. Hence, this study proposes an improved cross-building transfer learning strategy for BEP that is continuously updated in a coarse data incremental (CDI) manner. The hybrid KTL-DIL strategy (LSTM-DANN-CDI) uses a domain adversarial neural network (DANN) for KTL and long short-term memory (LSTM) as the baseline BEP model. A performance evaluation is conducted to systematically assess the effectiveness and applicability of KTL and the improved KTL-DIL. Real-world data from 36 buildings of six types are adopted to evaluate the performance of KTL and KTL-DIL in data-driven BEP tasks, considering factors such as the model increment time interval and the available target and source building data volumes. Results indicate that, compared with LSTM, KTL (LSTM-DANN) and the proposed KTL-DIL (LSTM-DANN-CDI) can significantly improve BEP performance for new buildings with limited data. Compared with the pure KTL strategy LSTM-DANN, the improved KTL-DIL strategy LSTM-DANN-CDI achieves better prediction performance, with an average performance improvement ratio of 60%.
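To make the KTL ingredient concrete, the sketch below shows the gradient-reversal trick at the heart of a DANN: the feature extractor is trained to fool a domain classifier so that source- and target-building features align. Layer sizes and the LSTM input shape are assumptions.

```python
import torch
import torch.nn as nn

# Sketch: gradient reversal layer plus a domain classifier, the adversarial
# piece of a DANN-based transfer strategy.

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None   # reverse gradients into the extractor

feature_extractor = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)
domain_classifier = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))

x = torch.randn(4, 24, 8)              # (batch, time steps, input features)
feats, _ = feature_extractor(x)
domain_logits = domain_classifier(GradReverse.apply(feats[:, -1], 1.0))
```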
Abstract: Rule selection has long been a problem of great challenge that must be solved when developing a rule-based knowledge learning system. Many methods have been proposed to evaluate the eligibility of a single rule against some criterion. However, a knowledge learning system usually contains a set of rules, and these rules are not independent but interactive: they tend to affect each other and form a rule system. In such a case, it is no longer reasonable to isolate each rule from the others for evaluation; a rule that is best according to a certain criterion is not always the best one for the whole system. Furthermore, the real-world data from which people want to create their learning systems are often ill-defined and inconsistent, so the completeness and consistency criteria for rule selection are no longer essential. In this paper, some ideas on how to solve the rule-selection problem in a systematic way are proposed. These ideas have been applied in the design of a Chinese business card layout analysis system and achieved a good result on a training data set of 425 images. The implementation of the system and the results are presented in this paper.
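One simple way to operationalize system-level rule selection is a greedy search that scores candidate rule sets with a whole-system metric rather than ranking rules in isolation. The sketch below is such a search; `evaluate` is a hypothetical placeholder for the system's end-to-end score (e.g., layout-analysis accuracy), and the greedy scheme itself is an assumption, not the paper's procedure.

```python
# Sketch: greedy rule-set selection driven by a whole-system score.

def select_rules(candidates, data, evaluate):
    chosen = []
    best = evaluate(chosen, data)
    while True:
        gains = [(evaluate(chosen + [r], data), r)
                 for r in candidates if r not in chosen]
        if not gains:
            break
        top_score, top_rule = max(gains, key=lambda g: g[0])
        if top_score <= best:
            break                      # no single rule improves the whole system
        chosen.append(top_rule)
        best = top_score
    return chosen
```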
Funding: Supported by the ARC Discovery Early Career Researcher Award (No. DE200101283) and the ARC Discovery Project (No. DP210102801).
Abstract: Association, which aims to link bounding boxes of the same identity across a video sequence, is a central component in multi-object tracking (MOT). To train association modules, e.g., parametric networks, real video data are usually used. However, annotating person tracks in consecutive video frames is expensive, and such real data, due to their inflexibility, offer limited opportunities to evaluate system performance with respect to changing tracking scenarios. In this paper, we study whether 3D synthetic data can replace real-world videos for association training. Specifically, we introduce a large-scale synthetic data engine named MOTX, in which the motion characteristics of cameras and objects are manually configured to be similar to those of real-world datasets. We show that, compared with real data, association knowledge obtained from synthetic data can achieve very similar performance on real-world test sets without domain adaptation techniques. Our intriguing observation is credited to two factors. First and foremost, 3D engines can simulate motion factors such as camera movement, camera view, and object movement well, so the simulated videos provide association modules with effective motion features. Second, the experimental results show that the appearance domain gap hardly harms the learning of association knowledge. In addition, the strong customization ability of MOTX allows us to quantitatively assess the impact of motion factors on MOT, which brings new insights to the community.
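For orientation, the sketch below shows the simplest motion-cue association an MOT pipeline can run: a cost matrix of (1 - IoU) between track and detection boxes, solved by the Hungarian algorithm. Learned association modules, such as those trained on MOTX-style synthetic video, replace this hand-crafted cost with learned motion features; the sketch is illustrative, not the paper's module.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Sketch: IoU-based track-detection association via the Hungarian algorithm.

def iou(a, b):                               # boxes as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections):
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)  # minimize total (1 - IoU)
    return list(zip(rows, cols))              # (track index, detection index)
```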
Funding: Supported by the National Key Research and Development Program of China (No. 2021YFF1201200) and the Science and Technology Major Project of Changsha (No. kh2402004).
Abstract: Automated diagnosis of chest X-rays is pivotal in radiology, aiming to alleviate the workload of radiologists. Traditional methods rely primarily on visual features or label dependence, which limits the detection of nuanced or rare lesions. To address this, we present KEXNet, a pioneering knowledge-enhanced X-ray lesion detection model. KEXNet employs a strategy akin to that of expert radiologists, integrating a knowledge graph built from expert annotations with an interpretable graph learning approach. This method combines object detection with a graph neural network, facilitating precise local lesion detection. For global lesion detection, KEXNet fuses knowledge-enhanced local features with global image features, enhancing diagnostic accuracy. Our evaluations on three benchmark datasets demonstrate that KEXNet outperforms existing models, particularly in identifying small or infrequent lesions. Notably, on the Chest ImaGenome dataset, KEXNet's AUC for local lesion detection exceeds that of the state-of-the-art method AnaXNet by 8.9%, showcasing its potential in revolutionizing automated chest X-ray diagnostics.
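The sketch below illustrates the graph-propagation ingredient: features of detected anatomical regions are averaged over knowledge-graph neighbors and re-projected, so that expert knowledge about co-occurring regions informs each region's lesion score. The layer form, sizes, and adjacency are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch: one GCN-style layer over detected anatomical regions, with an
# adjacency matrix encoding expert knowledge about related regions.

class KnowledgeGraphLayer(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, region_feats, adj):     # (n_regions, dim), (n_regions, n_regions)
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neighbor_msg = (adj @ region_feats) / deg   # mean over graph neighbors
        return torch.relu(self.proj(region_feats + neighbor_msg))

# Per-region lesion logits then come from a classifier on these enhanced
# features; global diagnosis fuses them with whole-image features.
```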
Abstract: In September 2024, Beicheng Youth Night School was launched at Beijing City University to allow young and middle-aged students to use their spare time to learn new knowledge and skills.