Abstract: In this paper, a video compressed sensing reconstruction algorithm based on multidimensional reference frames is proposed, exploiting the sparse characteristics of video signals in different sparse representation domains. First, the overall structure of the proposed video compressed sensing algorithm is introduced; it adopts a multi-reference-frame bidirectional prediction hypothesis optimization algorithm. Then, a reconstruction method for CS frames at the decoding end is proposed. In addition to the key frame of each GOP reconstructed in the time domain, half-pixel reference frames and scaled reference frames in the pixel domain are also used as reference frames for the CS frames, so that higher-quality hypotheses can be obtained. The method of obtaining the pixel-domain reference frames is also discussed in detail. Finally, the proposed reconstruction algorithm is compared with video compressed sensing algorithms in the literature that report strong reconstruction results. Experiments show that the algorithm outperforms the best multi-reference-frame video compressed sensing algorithm and can effectively improve the reconstruction quality of slow-motion video.
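The multi-hypothesis prediction step described above can be illustrated with a small sketch. The example below assumes a block-based measurement matrix `Phi`, a hypothesis matrix `H` whose columns are candidate blocks gathered from the reference frames (key frame, half-pixel interpolated frame, scaled frame), and a Tikhonov-regularized least-squares weighting; the variable names and the regularization form are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def predict_block(y, Phi, H, lam=0.1):
    """Multi-hypothesis prediction of one block in the measurement domain.

    y   : (m,)   compressed measurements of the current block
    Phi : (m, n) block measurement matrix
    H   : (n, k) hypothesis matrix; each column is a candidate block drawn
          from a reference frame (key frame, half-pixel, or scaled frame)
    lam : Tikhonov regularization strength
    Returns the predicted block of length n.
    """
    A = Phi @ H                                   # project hypotheses into the measurement domain
    # Distance-weighted Tikhonov term: hypotheses far from y are penalized more
    d = np.linalg.norm(A - y[:, None], axis=0)
    Gamma = np.diag(d)
    w = np.linalg.solve(A.T @ A + lam * Gamma.T @ Gamma, A.T @ y)
    return H @ w                                  # weighted combination of hypotheses

# Toy usage with random data
rng = np.random.default_rng(0)
n, m, k = 256, 64, 32                             # block size, measurements, hypotheses
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
H = rng.standard_normal((n, k))
x = H[:, 3] + 0.01 * rng.standard_normal(n)       # true block close to one hypothesis
y = Phi @ x
x_hat = predict_block(y, Phi, H)
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```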
Funding: Project (61362021) supported by the National Natural Science Foundation of China; Project (2016GXNSFAA380149) supported by the Natural Science Foundation of Guangxi Province, China; Projects (2016YJCXB02, 2017YJCX34) supported by the Innovation Project of GUET Graduate Education, China; Project (2011KF11) supported by the Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education, China.
Abstract: Outlier detection is an important task in data mining. In practice, it is difficult to find the clustering centers in sophisticated multidimensional datasets and to measure the deviation degree of each potential outlier. In this work, an effective outlier detection method based on multi-dimensional clustering and local density (ODBMCLD) is proposed. ODBMCLD first identifies center objects by the local density peaks of the data objects and clusters the whole dataset based on these centers. Then, outlier objects belonging to the different clusters are marked as candidates of abnormal data. Finally, the top N points among these candidates, those with the highest outlier factors, are chosen as the final anomalous objects. The feasibility and effectiveness of the method are verified by experiments.
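As a rough illustration of this pipeline, the sketch below implements a density-peak style clustering followed by a simple top-N outlier ranking; the density estimate, the outlier factor, and all parameter names are assumptions made for illustration, not the ODBMCLD definitions.

```python
import numpy as np

def density_peak_outliers(X, dc=1.0, n_centers=3, top_n=5):
    """Cluster by local density peaks, then rank outliers."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    rho = (D < dc).sum(axis=1) - 1                               # local density (neighbour count)
    # delta: distance to the nearest point of higher density
    delta = np.zeros(n)
    nearest_denser = np.full(n, -1)
    order = np.argsort(-rho)
    for rank, i in enumerate(order):
        if rank == 0:
            delta[i] = D[i].max()
        else:
            denser = order[:rank]
            j = denser[np.argmin(D[i, denser])]
            delta[i], nearest_denser[i] = D[i, j], j
    centers = np.argsort(-(rho * delta))[:n_centers]             # high rho*delta -> cluster centers
    # assign each point to the cluster of its nearest denser neighbour
    labels = np.full(n, -1)
    labels[centers] = np.arange(n_centers)
    for i in order:
        if labels[i] == -1:
            j = nearest_denser[i]
            labels[i] = labels[j] if j >= 0 else 0               # densest point falls back to cluster 0
    # simple outlier factor: low density and far from any denser point
    score = delta / (rho + 1.0)
    outliers = np.argsort(-score)[:top_n]
    return labels, outliers

# Toy usage: two dense blobs plus a few scattered points
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(4, 0.3, (30, 2)),
               rng.uniform(-3, 7, (4, 2))])
labels, outliers = density_peak_outliers(X, dc=0.8, n_centers=2, top_n=4)
print("outlier indices:", outliers)
```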
Funding: Supported by the Science and Technology Project of the Headquarters of the State Grid Corporation (project code: 5400-202323233A-1-1-ZN).
Abstract: With the increasing integration of emerging source-load types such as distributed photovoltaics, electric vehicles, and energy storage into distribution networks, the operational characteristics of these networks have evolved from traditional single-load centers to complex multi-source, multi-load systems. This transition not only increases the difficulty of effectively classifying distribution networks, owing to their heightened complexity, but also renders traditional energy management approaches, which focus primarily on economic objectives, insufficient to meet the growing demands for flexible scheduling and dynamic response. To address these challenges, this paper proposes an adaptive multi-objective energy management strategy that accounts for the distinct operational requirements of distribution networks with a high penetration of new-type source-loads. The goal is to establish a comprehensive energy management framework that optimally balances energy efficiency, carbon reduction, and economic performance in modern distribution networks. To enhance classification accuracy, the strategy constructs a multi-dimensional scenario classification model that integrates environmental and climatic factors by analyzing the operational characteristics of new-type distribution networks and incorporating expert knowledge. An improved split-coupling K-means pre-clustering algorithm is employed to classify distribution networks effectively. Based on the classification results, fuzzy logic control is then used to dynamically optimize the weighting of each objective, allowing priorities to be adjusted adaptively and yielding a flexible and responsive multi-objective energy management strategy. The effectiveness of the proposed approach is validated through practical case studies. Simulation results indicate that the proposed method improves classification accuracy by 18.18% compared to traditional classification methods and enhances energy savings and carbon reduction by 4.34% and 20.94%, respectively, compared to a fixed-weight strategy.
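The scenario-dependent weighting idea can be sketched briefly: classify scenarios with a plain K-means pre-clustering step, then use triangular fuzzy memberships to adapt the weights of the economic, carbon, and efficiency objectives. Everything below (the feature choice, membership shapes, and weight rules) is an illustrative assumption, not the paper's split-coupling algorithm or its fuzzy rule base.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means used as a stand-in for the paper's pre-clustering step."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return centers, labels

def tri(x, a, b, c):
    """Triangular fuzzy membership on [a, c] peaking at b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0, 1.0)

def objective_weights(pv_penetration, load_level):
    """Fuzzy adaptation of [economy, carbon, efficiency] weights (illustrative rules)."""
    high_pv = tri(pv_penetration, 0.3, 0.8, 1.2)      # strong PV -> emphasize carbon reduction
    heavy_load = tri(load_level, 0.5, 0.9, 1.3)       # heavy load -> emphasize efficiency
    w = np.array([1.0, 1.0 + 2.0 * high_pv, 1.0 + 2.0 * heavy_load])
    return w / w.sum()                                # normalized weight vector

# Toy usage: classify daily scenarios by (PV penetration, load level), then weight objectives
rng = np.random.default_rng(2)
scenarios = rng.uniform(0, 1, size=(100, 2))
centers, labels = kmeans(scenarios, k=3)
for c in centers:
    print("scenario centre", np.round(c, 2), "-> weights", np.round(objective_weights(*c), 2))
```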
Abstract: A characteristic of geographic information system (GIS) spatial data operations is that queries are much more frequent than insertions and deletions. A new hybrid spatial clustering method, HCR, for building R-trees over GIS spatial data is therefore proposed in this paper. The method takes into account the aggregation property of clustering, the construction rules of the R-tree, and the particular characteristics of spatial data; the HCR-tree is the R-tree built with the HCR algorithm. To test the efficiency of the HCR algorithm, it was applied not only to the data organization of a static R-tree but also to node splitting in a dynamic R-tree. The results show that an R-tree built with HCR offers advantages such as higher search efficiency and fewer disk accesses.
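To make the clustering-to-R-tree idea concrete, here is a minimal sketch that groups rectangle centroids with K-means and packs each group into a leaf node bounded by its MBR; the grouping criterion and node capacity are illustrative assumptions, not the HCR algorithm itself.

```python
import numpy as np

def pack_leaf_nodes(rects, capacity=4, seed=0):
    """Cluster rectangles by centroid and pack each cluster into a leaf-node MBR.

    rects: (n, 4) array of (xmin, ymin, xmax, ymax).
    Returns a list of (mbr, member_indices) pairs, one per leaf node.
    """
    rng = np.random.default_rng(seed)
    centroids = np.column_stack([(rects[:, 0] + rects[:, 2]) / 2,
                                 (rects[:, 1] + rects[:, 3]) / 2])
    # roughly one cluster per `capacity` rectangles (K-means does not strictly enforce capacity)
    k = int(np.ceil(len(rects) / capacity))
    centers = centroids[rng.choice(len(rects), k, replace=False)]
    for _ in range(20):                              # plain K-means on the centroids
        labels = np.argmin(np.linalg.norm(centroids[:, None] - centers[None], axis=-1), axis=1)
        centers = np.array([centroids[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    nodes = []
    for j in range(k):
        idx = np.where(labels == j)[0]
        if len(idx) == 0:
            continue
        group = rects[idx]
        mbr = np.array([group[:, 0].min(), group[:, 1].min(),
                        group[:, 2].max(), group[:, 3].max()])
        nodes.append((mbr, idx))
    return nodes

# Toy usage: 20 small random rectangles packed into leaf nodes
rng = np.random.default_rng(3)
lo = rng.uniform(0, 10, (20, 2))
rects = np.hstack([lo, lo + rng.uniform(0.1, 0.5, (20, 2))])
for mbr, idx in pack_leaf_nodes(rects):
    print("node MBR", np.round(mbr, 2), "holds", len(idx), "rectangles")
```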
Funding: The National Natural Science Foundation of China (No. 60572072, 60496311), the National High Technology Research and Development Program of China (863 Program) (No. 2003AA123310), and the International Cooperation Project on Beyond 3G Mobile of China (No. 2005DFA10360).
Abstract: The problem of joint eigenvalue estimation for a non-defective commuting set of matrices A is addressed. A procedure that reveals the joint eigenstructure by alternating simultaneous diagonalization of A via simultaneous Schur decomposition (SSD) with a balancing procedure is proposed, both for performance reasons and to overcome the convergence difficulties of previous methods based only on the simultaneous Schur form and unitary transformations. It is shown that the SSD procedure can be well incorporated with the balancing algorithm in a ping-pong manner, i.e., each optimizes a cost function and at the same time serves as an acceleration procedure for the other. Under mild assumptions, the convergence of the two alternately optimized cost functions, i.e., the norm of A and the norm of the lower-left part of A, is proved. Numerical experiments are conducted in a multi-dimensional harmonic retrieval application and suggest that the presented method converges considerably faster than methods based only on unitary transformations for matrices that are far from normal.
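The alternation can be sketched crudely: one step applies a common orthogonal similarity obtained from the Schur decomposition of an average of the matrices (a stand-in for a true SSD sweep), and the other applies an Osborne-style diagonal balancing; the combination rule, the balancing details, and the toy data are assumptions made for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.linalg import schur

def lower_norm(mats):
    # cost driven toward zero: energy below the diagonal, summed over the set
    return sum(np.sum(np.tril(M, -1) ** 2) for M in mats)

def ssd_balance(mats, iters=30):
    """Alternate a crude simultaneous-Schur step with diagonal balancing."""
    mats = [np.array(M, dtype=float) for M in mats]
    n = mats[0].shape[0]
    for _ in range(iters):
        # "Schur" step: orthogonal similarity from the Schur form of the averaged matrix
        _, Q = schur(sum(mats) / len(mats))
        mats = [Q.T @ M @ Q for M in mats]
        # balancing step: non-unitary diagonal similarity equalizing row/column norms
        for i in range(n):
            row = np.sqrt(sum(np.sum(np.delete(M[i, :], i) ** 2) for M in mats))
            col = np.sqrt(sum(np.sum(np.delete(M[:, i], i) ** 2) for M in mats))
            if row > 1e-12 and col > 1e-12:
                d = np.sqrt(row / col)
                for M in mats:
                    M[:, i] *= d
                    M[i, :] /= d
    # joint eigenvalues read off the (near-)triangular diagonals
    return [np.diag(M) for M in mats], lower_norm(mats)

# Toy usage: two commuting matrices sharing the same eigenvectors
rng = np.random.default_rng(4)
V = rng.standard_normal((4, 4))
A1 = V @ np.diag([1.0, 2.0, 3.0, 4.0]) @ np.linalg.inv(V)
A2 = V @ np.diag([5.0, 6.0, 7.0, 8.0]) @ np.linalg.inv(V)
eigs, resid = ssd_balance([A1, A2])
print(np.round(eigs[0], 3), np.round(eigs[1], 3), "residual:", round(resid, 6))
```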
Funding: Supported by the National Natural Science Foundation of China (61872126).
Abstract: The Harris hawks optimization (HHO) algorithm is an efficient method for solving function optimization problems. However, it still suffers from limitations such as low precision, slow convergence, and stagnation in local optima. To this end, an improved HHO (IHHO) algorithm based on a good point set and a nonlinear convergence formula is proposed. First, a good point set is used to initialize the positions of the population uniformly over the whole search area. Second, a nonlinear exponential convergence formula is designed to balance the exploration and exploitation stages of the IHHO algorithm, aiming to cover the regions containing solutions more comprehensively and accurately. The proposed IHHO algorithm is tested on 17 benchmark functions, and the Wilcoxon test is used to verify its effectiveness. The results indicate that the IHHO algorithm not only converges faster than the comparison algorithms but also effectively improves solution accuracy and enhances robustness under both low-dimensional and high-dimensional conditions.
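A common construction of a good point set, together with an illustrative nonlinear exponential decay of HHO's escaping-energy factor, is sketched below; the prime-based generating vector is one standard choice, and the decay formula is an assumption rather than the exact expression used in the paper.

```python
import numpy as np

def good_point_set(n_points, dim, lower, upper):
    """Good-point-set initialization: x_k = frac(k * r) mapped to the search bounds.

    r_j = frac(2 * cos(2*pi*j / p)), with p the smallest prime >= 2*dim + 3,
    is one standard choice of generating vector.
    """
    def next_prime(m):
        while any(m % q == 0 for q in range(2, int(m ** 0.5) + 1)):
            m += 1
        return m
    p = next_prime(2 * dim + 3)
    r = np.mod(2 * np.cos(2 * np.pi * np.arange(1, dim + 1) / p), 1.0)
    k = np.arange(1, n_points + 1)[:, None]
    pts = np.mod(k * r, 1.0)                          # low-discrepancy points in [0, 1)^dim
    return lower + pts * (upper - lower)

def escaping_energy(t, T, E0):
    """Illustrative nonlinear exponential decay of the escaping-energy factor."""
    return 2.0 * E0 * np.exp(-4.0 * (t / T) ** 2)     # decays from 2*E0 toward 0

# Toy usage: initialize 20 hawks in [-10, 10]^5 and print the energy schedule endpoints
pop = good_point_set(20, 5, lower=-10.0, upper=10.0)
print(pop.shape, pop.min().round(2), pop.max().round(2))
print(escaping_energy(0, 100, E0=0.7), escaping_energy(100, 100, E0=0.7))
```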
Abstract: Web-based social networking is gaining popularity rapidly thanks to the development of computer networking technologies. However, social networking applications still cannot gain wide acceptance among many users because of unresolved issues such as trust, security, and privacy. In social networks, trust mainly concerns whether a remote user (the trustee) behaves as expected by an interested user (the trustor), as assessed through other users (the recommenders). A trust graph consists of a trustor, a trustee, some recommenders, and the trust relationships between them. In this paper, we propose a novel FlowTrust approach to model a trust graph with network flows, and we evaluate the maximum amount of trust that can flow through a trust graph using network flow theory. FlowTrust supports multi-dimensional trust. We use trust value and confidence level as two trust factors and deduce four trust metrics from them: maximum flow of trust value, maximum flow of confidence level, minimum cost of uncertainty with maximum flow of trust, and minimum cost of mistrust with maximum flow of confidence. We also propose three FlowTrust algorithms to normalize these four trust metrics. We compare the proposed FlowTrust approach with the existing RelTrust and CircuitTrust approaches and show that all three are comparable in terms of the inferred trust values. FlowTrust is therefore the best of the three, since it additionally supports multi-dimensional trust.
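The flow-based evaluation can be illustrated with networkx: build a directed trust graph whose edge capacities are trust values and compute the maximum flow from trustor to trustee. The tiny graph and its trust values below are invented for illustration; the sketch covers only the maximum-flow-of-trust-value metric, and the simple normalization shown is not one of the paper's normalization algorithms.

```python
import networkx as nx

# Trust graph: edge capacity = trust value that the source places in the target
G = nx.DiGraph()
edges = [
    ("trustor", "rec1", 0.9), ("trustor", "rec2", 0.6),
    ("rec1", "trustee", 0.7), ("rec2", "trustee", 0.8),
    ("rec1", "rec2", 0.3),
]
for u, v, t in edges:
    G.add_edge(u, v, capacity=t)

# Maximum amount of trust that can "flow" from the trustor to the trustee
flow_value, flow_dict = nx.maximum_flow(G, "trustor", "trustee")
print("max flow of trust value:", flow_value)

# Simple normalization by the total trust leaving the trustor, so the metric lies in [0, 1]
out_capacity = sum(t for u, _, t in edges if u == "trustor")
print("normalized trust metric:", round(flow_value / out_capacity, 3))
```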