For neighborhood rough set attribute reduction algorithms based on dependency degree, a neighborhood computation method incorporating attribute weight values and a neighborhood rough set attribute reduction algorithm using discernibility as the heuristic information are proposed. The reduction algorithm jointly considers the dependency degree and the neighborhood granulation degree of attributes, allowing the importance of attributes to be measured more accurately. Example analyses and experimental results demonstrate the feasibility and effectiveness of the algorithm.
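A minimal sketch of the kind of attribute-weighted neighborhood computation described above is given below; the weighting scheme, the radius delta, and the toy data are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def weighted_neighborhood_dependency(X, y, attrs, weights, delta=0.2):
    """Dependency degree of the decision on an attribute subset, using a
    weighted Euclidean distance to form delta-neighborhoods.
    X: (n_samples, n_attrs) numeric data scaled to [0, 1]; y: decision labels;
    attrs: indices of the attributes considered; weights: per-attribute
    weights (an assumed weighting, e.g. from information gain)."""
    Xs = X[:, attrs] * np.asarray(weights)[attrs]        # weight each attribute
    n = len(y)
    positive = 0
    for i in range(n):
        dist = np.sqrt(((Xs - Xs[i]) ** 2).sum(axis=1))  # weighted distances to sample i
        neigh = np.where(dist <= delta)[0]               # delta-neighborhood of sample i
        if np.all(y[neigh] == y[i]):                     # neighborhood is decision-consistent
            positive += 1                                # sample lies in the positive region
    return positive / n                                  # dependency degree

# Toy usage on random data (purely illustrative)
rng = np.random.default_rng(0)
X = rng.random((60, 4))
y = (X[:, 0] + 0.3 * X[:, 1] > 0.7).astype(int)
w = np.array([0.4, 0.3, 0.2, 0.1])
print(weighted_neighborhood_dependency(X, y, attrs=[0, 1], weights=w))
```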
Particle swarm optimization (PSO) is a heuristic algorithm that has been applied successfully to many optimization problems. Attribute reduction is a key research topic in rough set theory, and it has been proven that computing a minimal reduct of a decision table is a non-deterministic polynomial (NP)-hard problem. A new cooperative extended attribute reduction algorithm named Co-PSAR, based on improved PSO, is proposed, in which a cooperative evolutionary strategy with suitable fitness functions is used to learn a good hypothesis and thereby accelerate the search for a minimal attribute reduction. Experiments on benchmark functions and University of California, Irvine (UCI) data sets, compared with other algorithms, verify the superiority of the Co-PSAR algorithm in terms of convergence speed, efficiency, and accuracy of attribute reduction.
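The Co-PSAR internals are not reproduced here; the sketch below only illustrates the kind of fitness function such PSO-based reducers typically optimize, balancing rough set dependency against subset size. The weight alpha and the toy decision table are assumptions.

```python
import numpy as np
from collections import defaultdict

def dependency(data, labels, attr_mask):
    """Classical rough set dependency gamma_B(D) for the attribute subset
    encoded by the boolean mask attr_mask (discrete data)."""
    attrs = np.flatnonzero(attr_mask)
    if attrs.size == 0:
        return 0.0
    blocks = defaultdict(list)
    for i, row in enumerate(data[:, attrs]):
        blocks[tuple(row)].append(i)                 # equivalence classes of IND(B)
    pos = sum(len(ix) for ix in blocks.values()
              if len(set(labels[j] for j in ix)) == 1)   # decision-consistent blocks
    return pos / len(labels)

def fitness(data, labels, attr_mask, alpha=0.9):
    """Typical PSO fitness: reward dependency, penalize subset size."""
    n_attrs = data.shape[1]
    size_term = (n_attrs - attr_mask.sum()) / n_attrs
    return alpha * dependency(data, labels, attr_mask) + (1 - alpha) * size_term

# Illustrative call on a tiny discrete decision table
data = np.array([[0, 1, 1], [0, 1, 0], [1, 0, 1], [1, 0, 0]])
labels = np.array([0, 0, 1, 1])
print(fitness(data, labels, np.array([True, False, False])))  # attribute 0 already gives full dependency
```

In a binary PSO, each particle position would be such a boolean mask, and this fitness would drive the swarm toward small subsets that preserve the dependency.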
Knowledge reduction is an important issue when dealing with huge amounts of data, and it has been proved that computing the minimal reduct of a decision system is NP-complete. By introducing heuristic information into a genetic algorithm, we propose a heuristic genetic algorithm. Within the genetic algorithm, we construct a new operator to maintain the classification ability. Experiments show that our algorithm is efficient and effective for finding minimal reducts, even on special examples where a simple heuristic algorithm cannot obtain the correct result.
Feature selection (FS) aims to determine a minimal feature (attribute) subset from a problem domain while retaining a suitably high accuracy in representing the original features. Rough set theory (RST) has been used as such a tool with much success. RST enables the discovery of data dependencies and the reduction of the number of attributes contained in a dataset using the data alone, requiring no additional information. This paper describes the fundamental ideas behind RST-based approaches, reviews related FS methods built on these ideas, and analyses the more frequently used traditional RST-based FS algorithms, such as the QuickReduct algorithm, the entropy-based reduct algorithm, and the relative reduct algorithm. Several drawbacks of the existing algorithms are identified, and our proposed improved algorithms overcome these drawbacks. Experimental analyses have been carried out to demonstrate the efficiency of the proposed algorithms.
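For reference, here is a compact sketch of the standard QuickReduct forward-selection loop on a discrete decision table, the classic algorithm reviewed above; the toy table is assumed.

```python
import numpy as np
from collections import defaultdict

def gamma(data, labels, attrs):
    """Rough set dependency of the decision on the attribute set attrs."""
    if not attrs:
        return 0.0
    blocks = defaultdict(list)
    for i, row in enumerate(data[:, sorted(attrs)]):
        blocks[tuple(row)].append(i)
    pos = sum(len(ix) for ix in blocks.values()
              if len({labels[j] for j in ix}) == 1)
    return pos / len(labels)

def quickreduct(data, labels):
    """Greedy forward selection: add the attribute giving the largest increase
    in dependency until the dependency of the full attribute set is reached."""
    all_attrs = set(range(data.shape[1]))
    target = gamma(data, labels, all_attrs)
    reduct = set()
    while gamma(data, labels, reduct) < target:
        best = max(all_attrs - reduct,
                   key=lambda a: gamma(data, labels, reduct | {a}))
        reduct.add(best)
    return reduct

# Toy decision table: attributes 0 and 2 are redundant for the decision
data = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 0], [1, 1, 0]])
labels = np.array([0, 1, 0, 1])
print(quickreduct(data, labels))   # prints {1}: attribute 1 alone determines the decision here
```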
Logging attribute optimization is an important task in well-logging interpretation. A method of attribute reduction based on rough sets is presented. First, the core of the sample set is determined by a general reduction method. Then, the significance of each dispensable attribute in the reduction table is calculated. Finally, the minimum relative reduct is obtained. Typical calculations and the quantitative computation of reservoir parameters in oil logging show that the attribute reduction method is highly effective and feasible in logging interpretation.
Rough set theory is used in data mining with emphasis on the treatment of uncertain or vague information. In the case of classification, this theory implicitly calculates reducts of the full set of attributes, eliminating those that are redundant or meaningless. Such reducts may even serve as input to classifiers other than rough sets. The typically high dimensionality of current databases precludes the use of greedy methods to find optimal or suboptimal reducts in the search space and requires the use of stochastic methods. In this context, the calculation of reducts is typically performed by a genetic algorithm, but other metaheuristics have been proposed with better performance. This work proposes the innovative use of two known metaheuristics for this calculation, Variable Neighborhood Search and Variable Neighborhood Descent, along with a third heuristic called Decrescent Cardinality Search, a new heuristic proposed specifically for reduct calculation. On databases commonly found in the literature of the area, the reducts obtained have lower cardinality, i.e., fewer attributes.
Attribute reduction, as one of the essential applications of the rough set, has attracted extensive attention from scholars. Information granulation is a key step of attribute reduction, and its efficiency has a significant impact on the overall efficiency of attribute reduction. The information granulation of existing neighborhood rough set models is usually single-layer, and the construction of each information granule needs to search all the samples in the universe, which is inefficient. To fill this gap, a new neighborhood rough set model is proposed, which aims to improve the efficiency of attribute reduction by means of two-layer information granulation. The first layer of information granulation constructs a mapping-equivalence relation that divides the universe into multiple mutually independent mapping-equivalence classes. The second layer of information granulation views each mapping-equivalence class as a sub-universe and then performs neighborhood information granulation. A model named the mapping-equivalence neighborhood rough set model is derived from this strategy of two-layer information granulation. Experimental results show that, compared with other neighborhood rough set models, this model can effectively improve the efficiency of attribute reduction and reduce the uncertainty of the system. The strategy provides a new line of thinking for the exploration of neighborhood rough set models and the study of attribute reduction acceleration problems.
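A rough sketch of the two-layer idea: a first-layer partition splits the universe into blocks, and neighborhoods are then searched only inside each block. The first-layer mapping used here (rounding attributes to a coarse grid) is an assumption for illustration and not the paper's mapping-equivalence relation; the data and thresholds are also assumed.

```python
import numpy as np
from collections import defaultdict

def two_layer_positive_region(X, y, delta=0.15, grid=0.5):
    """Fraction of samples whose neighborhood is decision-consistent, with
    neighborhoods searched only inside a first-layer block (sub-universe)."""
    # Layer 1: coarse equivalence classes (assumed mapping: grid rounding)
    blocks = defaultdict(list)
    for i, row in enumerate(X):
        blocks[tuple(np.floor(row / grid).astype(int))].append(i)
    positive = 0
    # Layer 2: neighborhood granulation restricted to each block
    for idx in blocks.values():
        sub, suby = X[idx], y[idx]
        for k in range(len(idx)):
            dist = np.sqrt(((sub - sub[k]) ** 2).sum(axis=1))
            neigh = dist <= delta
            if np.all(suby[neigh] == suby[k]):
                positive += 1
    return positive / len(y)

rng = np.random.default_rng(1)
X = rng.random((200, 3))
y = (X[:, 0] > 0.5).astype(int)
print(two_layer_positive_region(X, y))
```

Restricting the neighborhood search to a block avoids scanning the whole universe for every granule, which is the acceleration the abstract describes.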
Attribute reduction is a hot topic in rough set research. As an extension of rough sets, neighborhood rough sets can effectively solve the problem of information loss after data discretization. However, traditional greedy neighborhood rough set attribute reduction algorithms have high computational complexity and long processing times. In this paper, a novel attribute reduction algorithm based on attribute importance is proposed. Using conditional information, the attribute reduction problem in neighborhood rough sets is discussed, and the importance of attributes is measured by conditional information gain. The algorithm iteratively removes the attribute with the lowest importance, thus achieving the goal of attribute reduction. Six groups of UCI datasets are selected, and the proposed algorithm SAR is compared with the L2-ELM, LapTELM, CTSVM, and TBSVM classifiers. The results demonstrate that SAR effectively improves both the time consumption and the accuracy of attribute reduction.
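Below is a hedged sketch of the backward-elimination loop described above, using Shannon conditional entropy on a discrete table as the importance measure; SAR's exact conditional-information definition and its neighborhood handling may differ, and the toy table is assumed.

```python
import numpy as np
from collections import defaultdict

def cond_entropy(data, labels, attrs):
    """Shannon conditional entropy H(D | B) on a discrete table."""
    n = len(labels)
    keys = data[:, sorted(attrs)] if attrs else np.zeros((n, 1), dtype=int)
    blocks = defaultdict(list)
    for i, row in enumerate(keys):
        blocks[tuple(row)].append(i)
    h = 0.0
    for ix in blocks.values():
        p_block = len(ix) / n
        _, counts = np.unique(labels[ix], return_counts=True)
        p = counts / counts.sum()
        h -= p_block * np.sum(p * np.log2(p))
    return h

def backward_reduction(data, labels, tol=1e-9):
    """Iteratively drop the attribute whose removal increases H(D | B) the
    least, as long as the entropy stays (nearly) unchanged."""
    attrs = list(range(data.shape[1]))
    base = cond_entropy(data, labels, attrs)
    while len(attrs) > 1:
        # importance of a = entropy increase caused by removing a
        losses = {a: cond_entropy(data, labels, [b for b in attrs if b != a]) - base
                  for a in attrs}
        a_min = min(losses, key=losses.get)
        if losses[a_min] > tol:          # every remaining attribute is indispensable
            break
        attrs.remove(a_min)
    return attrs

data = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 0], [1, 1, 0]])
labels = np.array([0, 1, 0, 1])
print(backward_reduction(data, labels))   # keeps only attribute 1 on this table
```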
The presence of numerous uncertainties in hybrid decision information systems (HDISs) renders attribute reduction a formidable task. Currently available attribute reduction algorithms, including those based on Pawlak attribute importance, the Skowron discernibility matrix, and information entropy, struggle to effectively manage multiple uncertainties simultaneously in HDISs, such as precisely measuring the disparities between nominal attribute values and handling attributes with fuzzy boundaries or abnormal values. To address these issues, this paper studies attribute reduction within HDISs. First, a novel metric based on the decision attribute is introduced to solve the problem of accurately measuring the differences between nominal attribute values. This newly introduced distance metric, termed the supervised distance, can effectively quantify the differences between nominal attribute values. Then, based on the new metric, a novel fuzzy relationship is defined from the perspective of "feedback on parity of attribute values to attribute sets". This fuzzy relationship serves as a valuable tool in addressing the challenges posed by abnormal attribute values. Furthermore, leveraging the new fuzzy relationship, the fuzzy conditional information entropy is defined as a solution to the challenges posed by fuzzy attributes: it effectively quantifies the uncertainty associated with fuzzy attribute values, thereby providing a robust framework for handling fuzzy information in hybrid information systems. Finally, an attribute reduction algorithm utilizing the fuzzy conditional information entropy is presented. Experimental results on 12 datasets show that the average reduction rate of our algorithm reaches 84.04%, and the classification accuracy is improved by 3.91% compared to the original dataset and by an average of 11.25% compared to 9 other state-of-the-art reduction algorithms. A comprehensive analysis of these results indicates that our algorithm is highly effective in managing the intricate uncertainties inherent in hybrid data.
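The abstract does not spell out the supervised distance; the sketch below shows one common way of making a nominal-value distance decision-aware (a value-difference-style metric), purely to illustrate the idea of letting the decision attribute supervise the metric. The function, values, and toy data are all assumptions.

```python
import numpy as np

def vdm_distance(col, labels, v1, v2):
    """Class-conditional distance between two nominal values of one attribute:
    compare their label distributions (a VDM-style metric, used here only to
    illustrate decision-supervised distances, not the paper's definition)."""
    classes = np.unique(labels)
    p1 = np.array([np.mean(labels[col == v1] == c) for c in classes])
    p2 = np.array([np.mean(labels[col == v2] == c) for c in classes])
    return np.abs(p1 - p2).sum()

# Toy nominal attribute: values 'a' and 'b' mostly separate the classes,
# so their supervised distance is large; a value's distance to itself is zero.
col = np.array(['a', 'a', 'a', 'b', 'b', 'b'])
labels = np.array([0, 0, 1, 1, 1, 1])
print(vdm_distance(col, labels, 'a', 'b'))   # about 1.33
print(vdm_distance(col, labels, 'a', 'a'))   # 0.0
```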
Attribute reduction through the combined approach of rough sets (RS) and algebraic topology is an open research topic with significant potential for applications. Several research works have established a strong relationship between RS and topological spaces for the attribute reduction problem. However, these recent methods follow the strategy of constructing a new measure for attribute selection, while the strategy for searching for the reduct is still to select each attribute and gradually add it to the reduct; consequently, those methods tend to be inefficient for high-dimensional datasets. To overcome these challenges, we use the separability property of Hausdorff topology to quickly identify distinguishable attributes, which significantly reduces the time of the attribute filtering stage of the algorithm. In addition, we propose the concept of Hausdorff topological homomorphism to construct candidate reducts, which significantly reduces the number of candidate reducts for the wrapper stage of the algorithm. These two stages have the greatest effect on reducing the computing time of the proposed attribute reduction algorithm, which we call the Cluster Filter Wrapper algorithm based on Hausdorff Topology. Experimental validation on data from the UCI Machine Learning Repository shows that the proposed method is efficient in both execution time and the size of the reduct.
Feature selection (FS) is a process for selecting features that are more informative, and it is one of the important steps in knowledge discovery. The problem is that not all features are important: some may be redundant, and others may be irrelevant and noisy. Conventional supervised FS methods evaluate various feature subsets using an evaluation function or metric to select only those features that are related to the decision classes of the data under consideration. However, for many data mining applications, decision class labels are often unknown or incomplete, which indicates the significance of unsupervised feature selection, where decision class labels are not provided. In this paper, we propose a new unsupervised quick reduct (QR) algorithm using rough set theory. The quality of the reduced data is measured by classification performance and is evaluated using the WEKA classifier tool. The method is compared with existing supervised methods, and the results demonstrate the efficiency of the proposed algorithm.
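The unsupervised dependency measure is not given in full above; the sketch below follows a common unsupervised quick reduct formulation in which each attribute in turn plays the role of the decision and the mean dependency over all attributes is maximized. The toy table is assumed.

```python
import numpy as np
from collections import defaultdict

def gamma(data, cond_attrs, target_attr):
    """Dependency of attribute target_attr on the attribute subset cond_attrs."""
    blocks = defaultdict(list)
    for i, row in enumerate(data[:, sorted(cond_attrs)]):
        blocks[tuple(row)].append(i)
    pos = sum(len(ix) for ix in blocks.values()
              if len(set(data[ix, target_attr])) == 1)
    return pos / data.shape[0]

def unsupervised_quickreduct(data):
    """Forward selection maximizing the mean dependency of every attribute
    on the current subset (no decision labels needed)."""
    all_attrs = set(range(data.shape[1]))

    def mean_gamma(subset):
        return np.mean([gamma(data, subset, a) for a in all_attrs])

    reduct, best = set(), 0.0
    while best < 1.0:
        cand = max(all_attrs - reduct, key=lambda x: mean_gamma(reduct | {x}))
        new = mean_gamma(reduct | {cand})
        if new <= best:          # no further improvement possible
            break
        reduct, best = reduct | {cand}, new
    return reduct

# Toy table: attribute 2 duplicates attribute 0, so it never enters the reduct
data = np.array([[0, 1, 0], [0, 0, 0], [1, 1, 1], [1, 0, 1]])
print(unsupervised_quickreduct(data))   # {0, 1}
```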
The classical rough set has limited processing capacity for fuzzy decision tables. Combining fuzzy sets with classical rough sets, an attribute reduction algorithm for fuzzy decision tables is studied. First, a new similarity degree and a new similarity category are defined, and similarity category clusters partitioned by condition attributes are provided. Two theorems are then presented, and a new attribute reduction algorithm is proposed. Finally, the new attribute reduction algorithm is verified on a performance evaluation decision table of a self-repairing flight-control system. The results show that the proposed attribute reduction algorithm is able to handle fuzzy decision tables to a certain extent.
The original fault data of oil-immersed transformers often contain a large number of unnecessary attributes, which greatly increases the running time of the algorithm and reduces classification accuracy, leading to a higher diagnosis error rate. Therefore, in order to obtain high-quality fault attribute data sets for oil-immersed transformers, an improved imperialist competitive algorithm is proposed to optimize the rough set based discretization of the original fault data set and the subsequent attribute reduction. The feasibility of the proposed algorithm is verified by experiments and compared with other intelligent algorithms. Results show that the algorithm stabilizes at the 27th iteration with a reduction rate of 56.25% and a reduction accuracy of 98%. When a BP neural network is used to classify the reduced data, the accuracy is 86.25%, and the overall effect is better than that of the original data and the other algorithms. Hence, the proposed method is effective for fault attribute reduction of oil-immersed transformers.
A variable precision rough set (VPRS) model is used to solve the multi-attribute decision analysis (MADA) problem with multiple conflicting decision attributes and multiple condition attributes. By introducing confidence measures and a β-reduct, the VPRS model can rationally resolve this conflicting decision analysis problem. For illustration, a medical diagnosis example is used to show the feasibility of the VPRS model in solving such MADA problems. Empirical results show that, when conflicts arise among decision rules derived from multiple decision attributes, the decision rule with the highest confidence measure is used as the final decision rule. The confidence-measure-based VPRS model can thus effectively resolve conflicts among decision rules from multiple decision attributes, thereby solving a class of MADA problems with multiple conflicting decision attributes and multiple condition attributes.
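To make the confidence-measure idea concrete, here is a small sketch of how a β-positive region and rule confidences can be computed from a discrete decision table; the threshold β and the toy table are assumptions for illustration.

```python
import numpy as np
from collections import defaultdict

def beta_positive_region(data, labels, attrs, beta=0.8):
    """VPRS: a condition class enters the beta-positive region of a decision
    class when its majority proportion (rule confidence) is at least beta."""
    blocks = defaultdict(list)
    for i, row in enumerate(data[:, sorted(attrs)]):
        blocks[tuple(row)].append(i)
    rules = []
    for key, ix in blocks.items():
        vals, counts = np.unique(labels[ix], return_counts=True)
        conf = counts.max() / counts.sum()           # confidence of the majority rule
        if conf >= beta:
            rules.append((key, vals[counts.argmax()], conf))
    covered = sum(len(blocks[key]) for key, _, _ in rules)
    return rules, covered / len(labels)              # rules and beta-dependency

# Toy table with one slightly noisy condition class
data = np.array([[0, 1], [0, 1], [0, 1], [1, 0], [1, 0]])
labels = np.array([0, 0, 1, 1, 1])
rules, dep = beta_positive_region(data, labels, attrs=[0, 1], beta=0.6)
print(rules)   # one rule per condition class: (0, 1) -> class 0 with conf 2/3, (1, 0) -> class 1 with conf 1.0
print(dep)     # 1.0: all samples covered at beta = 0.6
```

When two decision attributes yield conflicting rules for the same condition class, the rule with the higher confidence value would be retained as the final rule, which is the resolution strategy the abstract describes.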