Purpose - Takagi-Sugeno-Kang (TSK) fuzzy systems are widely used in classification and regression problems. However, when the data size is large and the feature dimensionality is high, the optimization performance of a TSK fuzzy system is poor and its generalization capability is insufficient. Therefore, this paper proposes an algorithm based on mini-batch gradient descent (MBGD) and batch normalization (BN) to train TSK fuzzy classifiers.

Design/methodology/approach - This study uses the AdaDerivative (A) optimizer to adapt the learning rate; the weighted firing level (W) applies different weight factors to different features when computing the firing level of each rule; and an adaptive rule firing level loss (A) and a knowledge distillation loss (KD) are proposed to encourage each rule to have a different average firing level and to keep the rule consequent outputs close to their mean. Combining these components yields the final algorithm, TSK-BN-2AWKD. To evaluate its performance, experiments were conducted on 15 real datasets.

Findings - The experimental results show that each of the components (AdaDerivative, the adaptive rule firing level loss, W and KD) is effective individually, and that the integrated algorithm TSK-BN-2AWKD effectively improves classification performance.

Originality/value - This study uses the AdaDerivative optimizer to train TSK fuzzy classifiers. Additionally, an adaptive rule firing level loss is proposed to assign a different expected firing level to each rule. Finally, KD is proposed to force the rule consequent outputs to be close to their mean, mitigating the narrow-vision problem that may exist in TSK fuzzy systems.
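The weighted firing level (W) and the mean-pulling KD loss described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Gaussian membership functions, the log-product t-norm with per-feature exponents, and all variable names (`weighted_firing_levels`, `consequent_mean_loss`, `centers`, `sigmas`, `w`) are assumptions made for the example.

```python
import numpy as np

def weighted_firing_levels(x, centers, sigmas, w):
    """Sketch of per-feature weighted rule firing levels for a TSK
    fuzzy system with Gaussian membership functions (hypothetical form).

    x:       (d,) input sample
    centers: (R, d) Gaussian centers, one row per rule
    sigmas:  (R, d) Gaussian widths
    w:       (d,) per-feature weight factors (the 'W' component)
    """
    # Per-feature Gaussian memberships for every rule: shape (R, d)
    mu = np.exp(-((x - centers) ** 2) / (2.0 * sigmas ** 2))
    # Weighted product t-norm in log space: each feature contributes
    # to a rule's firing level in proportion to its weight factor
    log_f = (w * np.log(mu + 1e-12)).sum(axis=1)
    f = np.exp(log_f)
    # Normalize so the firing levels sum to one across rules
    return f / f.sum()

def consequent_mean_loss(y_rules):
    """One plausible reading of the KD idea: penalize rule consequent
    outputs that drift far from their mean, so no single rule's view
    dominates (the 'narrow vision' problem)."""
    return float(np.mean((y_rules - y_rules.mean()) ** 2))
```

The normalization step makes the firing levels behave as rule weights in the usual TSK output, a firing-level-weighted sum of consequent outputs; the mean-pulling penalty would be added to the classification loss during MBGD training.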
Funding: supported in part by the National Natural Science Foundation of China (No. 12271211), the Fujian Alliance of Mathematics (No. 2023SXLMMS06), the Open Fund of Digital Fujian Big Data Modeling and Intelligent Computing Institute, and the Pre-Research Fund of Jimei University.