In contemporary power systems, delving into the flexible regulation potential of demand-side resources is of paramount significance for the efficient operation of power grids. This research puts forward a multivariate flexible load aggregation control approach that takes dynamic demand response into full consideration. In the initial stage, using generalized time-domain aggregation modelling for a wide array of heterogeneous flexible loads, including temperature-controlled loads, electric vehicles, and energy storage devices, a novel calculation method for their maximum adjustable capacities is devised. Distinct from conventional methods, this approach enables more precise and adaptable quantification of load-adjusting capabilities, thereby enhancing the accuracy and flexibility of demand-side resource management. Subsequently, an SSA-BiLSTM flexible load classification prediction model is established, combining the advantages of the Sparrow Search Algorithm (SSA) and the Bidirectional Long Short-Term Memory (BiLSTM) neural network. Furthermore, a parallel Markov chain is introduced to evaluate the switching state transfer probability of flexible loads accurately. This integration allows for a more refined determination of the maximum response capacity range of the flexible load aggregator, significantly improving the precision of capacity assessment compared with existing methods. Finally, in accordance with the intra-day scheduling plan, a diffuse filling algorithm is implemented to control the activation times of flexible loads precisely, thus achieving real-time dynamic demand response. Through in-depth case analysis and comprehensive comparative studies, the effectiveness of the proposed method is validated. The method has the potential to substantially enhance the utilization efficiency of demand-side resources in power systems, providing a novel and effective solution for optimizing power grid operation and demand-side management.
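The switching-state idea above can be illustrated with a minimal discrete-time Markov chain sketch. The two-state (ON/OFF) model and its transition probabilities below are hypothetical values for illustration, not parameters from the study:

```python
import numpy as np

# Hypothetical two-state (ON/OFF) switching model for one flexible load class.
P = np.array([[0.9, 0.1],   # P(ON -> ON), P(ON -> OFF)
              [0.2, 0.8]])  # P(OFF -> ON), P(OFF -> OFF)

def propagate(pi0, P, steps):
    """Propagate a state distribution `steps` transitions forward."""
    pi = np.asarray(pi0, dtype=float)
    for _ in range(steps):
        pi = pi @ P
    return pi

# Running several such chains in parallel, one per load class, and summing the
# expected ON-power gives a rough aggregate response estimate.
pi = propagate([1.0, 0.0], P, 50)  # long-run distribution from an all-ON start
```

For this transition matrix the chain settles to the stationary distribution (2/3, 1/3), i.e. the load is ON about two thirds of the time regardless of the initial state.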
Estimation of the velocity profile within mud depth is a long-standing and essential problem in debris flow dynamics. Until now, various velocity profiles have been proposed based on fitting analysis of experimental measurements, but these are often limited by the observation conditions, such as the number of configured sensors. The resulting linear velocity profiles therefore usually exhibit limitations in reproducing the temporally varying, nonlinear behavior of the debris flow process. In this study, we present a novel approach to explore the debris flow velocity profile in detail based on our previous 3D-HBPSPH numerical model, i.e., the three-dimensional Smoothed Particle Hydrodynamics model incorporating the Herschel-Bulkley-Papanastasiou rheology. Specifically, we propose a stratification aggregation algorithm for interpreting the details of SPH particles, which enables the recording of temporal velocities of debris flow at different mud depths. To analyze the velocity profile, we introduce a logarithmic nonlinear model with two key parameters: a, controlling the shape of the velocity profile, and b, governing its temporal evolution. We verify the proposed velocity profile and explore its sensitivity using 34 sets of velocity data from three individual flume experiments in previous literature. Our results demonstrate that the proposed temporally varying nonlinear velocity profile outperforms the previous linear profiles.
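As a concrete illustration of a logarithmic, shape-parameterized profile, here is a minimal sketch. The functional form, and the idea of letting the surface velocity carry the temporal parameter b, are assumptions made for illustration rather than the paper's exact model:

```python
import math

def velocity_profile(z, h, u_surface, a):
    """Hypothetical logarithmic profile over mud depth: z is height above
    the bed, h the flow depth, and a > 0 controls the curvature of the
    profile. By construction u(0) = 0 and u(h) = u_surface."""
    return u_surface * math.log(1.0 + a * z / h) / math.log(1.0 + a)

# The temporal parameter b could, for example, modulate the surface velocity
# over time, e.g. u_surface(t) = u0 * math.exp(-b * t)  (an assumption).
```

Larger a makes the profile more strongly curved near the bed, while a -> 0 recovers an almost linear profile, which is how a single shape parameter can span the linear fits used in earlier work.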
In this paper we discuss policy iteration methods for approximate solution of a finite-state discounted Markov decision problem, with a focus on feature-based aggregation methods and their connection with deep reinforcement learning schemes. We introduce features of the states of the original problem, and we formulate a smaller "aggregate" Markov decision problem, whose states relate to the features. We discuss properties and possible implementations of this type of aggregation, including a new approach to approximate policy iteration. In this approach the policy improvement operation combines feature-based aggregation with feature construction using deep neural networks or other calculations. We argue that the cost function of a policy may be approximated much more accurately by the nonlinear function of the features provided by aggregation than by the linear function of the features provided by neural-network-based reinforcement learning, thereby potentially leading to more effective policy improvement.
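A minimal sketch of the hard-aggregation idea: group the original states by a feature map, average the dynamics within each group, and run value iteration on the smaller aggregate problem. The toy MDP and the feature map below are invented for illustration, not taken from the paper:

```python
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.random((n_states, n_actions))                             # rewards
phi = np.array([0, 0, 1, 1])   # feature map: 4 states -> 2 aggregate states

# Build the aggregate MDP by averaging dynamics and rewards within each group.
n_agg = phi.max() + 1
P_agg = np.zeros((n_agg, n_actions, n_agg))
R_agg = np.zeros((n_agg, n_actions))
for g in range(n_agg):
    members = np.flatnonzero(phi == g)
    for a in range(n_actions):
        R_agg[g, a] = R[members, a].mean()
        for g2 in range(n_agg):
            P_agg[g, a, g2] = P[members][:, a][:, phi == g2].sum(axis=1).mean()

V = np.zeros(n_agg)
for _ in range(500):            # value iteration on the aggregate problem
    V = (R_agg + gamma * P_agg @ V).max(axis=1)
```

Mapping the converged aggregate values back through phi yields a (piecewise-constant) approximation of the original cost function, which is the baseline the paper's nonlinear feature-based approximation improves on.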
Energy is one of the most important factors determining network lifetime, because the network consists of low-power nodes. Generally, the data aggregation tree concept is used to find an energy-efficient solution. However, even the best aggregation tree does not share the data-packet load fairly among the transmitting nodes while it consumes the lowest possible energy of the network. After some rounds, this problem exhausts the energy of heavily loaded nodes and hence results in the death of the network. In this paper, using the Genetic Algorithm (GA), we investigate energy-efficient data-collecting spanning trees to find a route that balances the data load throughout the network, and thus the residual energy, in addition to keeping the total power consumption low. An algorithm that balances the residual energy among the nodes can help the network endure longer and consequently extend its lifetime. In this work, we explore the space of possible routes, represented by aggregation trees, through the genetic algorithm; the GA finds the optimal tree that balances the data load and the energy in the network. Simulation results show that this balancing operation practically increases the network lifetime.
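To make the load-balance objective concrete, here is a small sketch of a fitness function a GA could minimize over candidate trees. The one-packet-per-node-per-round traffic model and the penalty weight are illustrative assumptions, not the paper's exact formulation:

```python
import statistics

def node_loads(parent):
    """Packets each node transmits per round in a data-gathering tree,
    assuming one packet per node and no in-network compression.
    parent[i] is node i's parent; -1 means the parent is the sink."""
    loads = [1] * len(parent)          # each node sends its own packet
    for i, p in enumerate(parent):
        j = p
        while j != -1:                 # the packet is relayed by all ancestors
            loads[j] += 1
            j = parent[j]
    return loads

def fitness(parent, lam=1.0):
    """Lower is better: total traffic plus a load-imbalance penalty."""
    loads = node_loads(parent)
    return sum(loads) + lam * statistics.pstdev(loads)
```

Under this objective a star topology (every node one hop from the sink) scores better than a chain of the same size, since the chain concentrates relaying load on the node nearest the sink.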
Three-dimensional frictional contact problems are formulated as linear complementarity problems based on the parametric variational principle. Two aggregate-function-based algorithms for solving complementarity problems are proposed: one is called the self-adjusting interior point algorithm, and the other is called the aggregate function smoothing algorithm. Numerical experiments show the efficiency of the two proposed algorithms.
A kind of singly linked list named the aggregative chain is introduced into the algorithm, thus improving the architecture of the FP-tree. The new FP-tree is a one-way tree in which only the pointer to the parent is kept at each node. Route information of different nodes of the same item is compressed into aggregative chains, so that frequent patterns can be produced from the aggregative chains without generating node links and conditional pattern bases. An example of Web keyword retrieval is given to analyze and verify the frequent pattern algorithm in this paper.
Direct soil temperature (ST) measurement is time-consuming and costly; thus, the use of simple and cost-effective machine learning (ML) tools is helpful. In this study, ML approaches, including KStar, instance-based K-nearest learning (IBK), and locally weighted learning (LWL), coupled with the resampling algorithms bagging (BA) and dagging (DA) (BA-IBK, BA-KStar, BA-LWL, DA-IBK, DA-KStar, and DA-LWL), were developed and tested for multi-step-ahead (3, 6, and 9 d ahead) ST forecasting. In addition, a linear regression (LR) model was used as a benchmark. A dataset was established with daily ST time series at 5 and 50 cm soil depths in a farmland as the models' output and meteorological data as the models' input, including mean (Tmean), minimum (Tmin), and maximum (Tmax) air temperatures, evaporation (Eva), sunshine hours (SSH), and solar radiation (SR), collected at Isfahan Synoptic Station (Iran) for 13 years (1992-2005). Six input combination scenarios were selected based on Pearson's correlation coefficients between inputs and outputs and fed into the models. We used 70% of the data to train the models, with the remaining 30% used for model evaluation via multiple visual and quantitative metrics. Our findings showed that Tmean was the most effective input variable for ST forecasting in most of the developed models, while in some cases combinations of variables, namely Tmean and Tmax, and Tmean, Tmax, Tmin, Eva, and SSH, proved to be the best input combinations. Among the evaluated models, BA-KStar showed greater compatibility, while in most cases BA-IBK and BA-LWL provided more accurate results, depending on soil depth. For the 5 cm soil depth, BA-KStar had superior performance (Nash-Sutcliffe efficiency (NSE) = 0.90, 0.87, and 0.85 for 3, 6, and 9 d ahead forecasting, respectively); for the 50 cm soil depth, DA-KStar outperformed the other models (NSE = 0.88, 0.89, and 0.89, respectively). The results confirmed that all hybrid models had higher prediction capabilities than the LR model.
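The Nash-Sutcliffe efficiency used to score these models is straightforward to compute; a minimal sketch:

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is
    no better than always predicting the observed mean, and negative
    values mean it is worse than that baseline."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))      # model error
    sst = sum((o - mean_obs) ** 2 for o in obs)            # baseline error
    return 1.0 - sse / sst
```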
A new algorithm for solving the three-dimensional elastic contact problem with friction is presented. The algorithm is a non-interior smoothing algorithm based on an NCP function. The parametric variational principle and the parametric quadratic programming method were applied to the analysis of the three-dimensional frictional contact problem. The solution of the contact problem was finally reduced to a linear complementarity problem, which was reformulated as a system of nonsmooth equations via an NCP function. A smoothing approximation to the nonsmooth equations was given by the aggregate function, and a Newton method was used to solve the resulting smooth nonlinear equations. The algorithm presented is easy to understand and implement. Its reliability and efficiency are demonstrated both by numerical experiments on LCPs and by examples of contact problems in mechanics.
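The aggregate function referred to here is the log-sum-exp smoothing of the max operator, which replaces a nonsmooth max with a differentiable upper bound so that Newton-type methods apply. A minimal sketch (the smoothing parameter p is illustrative):

```python
import math

def aggregate_max(xs, p=100.0):
    """Smooth upper approximation of max(xs) via the aggregate function
    (1/p) * log(sum(exp(p * x))). It always exceeds max(xs) and tends to
    it as p grows; the shift by m keeps the exponentials from overflowing."""
    m = max(xs)
    return m + math.log(sum(math.exp(p * (x - m)) for x in xs)) / p
```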
The Circle algorithm was proposed for large datasets. The idea of the algorithm is to find a set of vertices that are close to each other and far from other vertices. The algorithm makes use of the connection between clustering aggregation and the problem of correlation clustering. The best deterministic approximation algorithm was provided for a variation of the correlation clustering problem, and it was shown how sampling can be used to scale the algorithm to large datasets. An extensive empirical evaluation demonstrates the usefulness of the problem and the solutions. The results show that this method achieves more than a 50% reduction in running time without sacrificing the quality of the clustering.
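The link between clustering aggregation and correlation clustering rests on counting pairwise disagreements between clusterings. A minimal sketch of that objective (a generic illustration, not the Circle algorithm itself):

```python
def disagreements(clustering_a, clustering_b):
    """Number of point pairs on which two clusterings disagree: the pair is
    together in one clustering and apart in the other. Clusterings are
    given as lists of cluster labels, one per point."""
    n = len(clustering_a)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_a = clustering_a[i] == clustering_a[j]
            same_b = clustering_b[i] == clustering_b[j]
            if same_a != same_b:
                count += 1
    return count
```

Clustering aggregation seeks the clustering minimizing the total disagreement against a set of input clusterings, which is exactly a correlation clustering instance over the pairwise agree/disagree votes.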
Block-matching and 3D filtering (BM3D) is a state-of-the-art denoising algorithm for images and video, which takes full advantage of the spatial and temporal correlation of the video. The algorithm's performance comes at the price of finding and filtering many similar blocks, which brings heavy computation and memory access. Area, memory bandwidth, and computation are the major bottlenecks in designing a feasible architecture because of the large frame size and search range. In this paper, we introduce a novel structure to increase the data reuse rate and reduce the internal static random-access memory (SRAM). Our target is a real-time BM3D processing chip for phase-alternating-line (PAL) video, and we propose an application-specific integrated circuit (ASIC) architecture of BM3D for the 720 × 576 BT656 PAL format. The chip runs at a 100 MHz system frequency with a 166 MHz 32-bit double-data-rate (DDR) interface. For noise level σ = 25, we realize real-time denoising and achieve about a 10 dB peak signal-to-noise ratio (PSNR) improvement with just one iteration of the BM3D algorithm.
Designing an excellent original topology not only improves the accuracy of routing but also improves the restoration rate after failure. In this paper, we propose a new heuristic topology generation algorithm, GA-PODCC (Genetic Algorithm based on the Pareto Optimality of Delay, Configuration and Consumption), which utilizes a genetic algorithm to optimize link delay and resource configuration/consumption. The novelty lies in designing the two stages of the genetic operation: the first stage picks the best population by means of the crossover, mutation, and selection operations; the second stage selects an excellent individual from the best population. The simulation results show that, using the same number of nodes, the GA-PODCC algorithm improves the balance of all three optimization objectives while maintaining a low level of distortion in topology aggregation.
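Selection over the three objectives (delay, configuration, consumption) can be sketched as a non-dominated filter; this is a generic Pareto-front illustration, not GA-PODCC's exact operator:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b, with every objective
    to be minimized: a is no worse everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a population's objective vectors: the kind of
    selection step a multi-objective GA applies when picking its best
    individuals."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```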
Integrating heterogeneous data sources is a precondition for enterprises to share data. Highly efficient data updating can both save system expenses and offer real-time data; rapid data modification in the pre-processing area of the data warehouse is one of the hot issues. An extract-transform-load design is proposed based on a new data algorithm called Diff-Match, which is developed by utilizing schema matching and data-filtering technology. It can accelerate data renewal, filter heterogeneous data, and seek out differing sets of data. Its efficiency has been proved by successful application in an electric apparatus group enterprise.
Particles, including soot, aerosol, and ash, usually exist as fractal aggregates. The radiative properties of particle fractal aggregates have a great influence on studies of light or heat radiative transfer in particle media. In the present work, the performance of the single-layer and double-layer inversion models in reconstructing the geometric structure of particle fractal aggregates is studied based on the light reflectance-transmittance measurement method. An improved artificial fish-swarm algorithm (IAFSA) is proposed to solve the inverse problem. The results reveal that the accuracy of the double-layer inversion model is more satisfactory, as it provides more uncorrelated information than the single-layer inversion model. Moreover, the developed IAFSA shows higher accuracy and better robustness than the original artificial fish-swarm algorithm (AFSA) by effectively avoiding local optima. Overall, the present work supplies a useful measurement technique for predicting the geometrical morphology of particle fractal aggregates.
Wireless Sensor Networks (WSNs) are mainly deployed for data acquisition; thus, network performance can be passively measured by observing whether application data from various sensor nodes reach the sink. In this paper, we therefore take into account the unique data aggregation communication paradigm of WSNs and model the problem of link loss rate inference as a maximum-likelihood estimation problem. We propose an inference algorithm based on standard Expectation-Maximization (EM) techniques. Our algorithm is applicable not only to periodic data collection scenarios but also to event detection scenarios. Finally, we validate the algorithm through simulations, and it exhibits good performance and scalability.
Game-theoretic models and design tools have gained substantial prominence for controlling and optimizing behavior within distributed engineering systems, due to the inherent distribution of decisions among individuals. In non-cooperative settings, aggregative games serve as a mathematical framework for the interdependent optimal decision-making problem among a group of non-cooperative players. In such scenarios, each player's decision is influenced by an aggregation of all players' decisions. Nash equilibrium (NE) seeking in aggregative games has emerged as a vibrant topic, driven by applications that harness the aggregation property. This paper presents a comprehensive overview of current research on aggregative games with a focus on communication topology. A systematic classification of distributed algorithm research is conducted based on communication topologies such as undirected networks, directed networks, and time-varying networks. Furthermore, the paper sorts out the challenges and compares the algorithms' convergence performance. It also delves into real-world applications of distributed optimization techniques grounded in aggregative games. Finally, it proposes several challenges that can guide future research directions.
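A minimal sketch of NE seeking by gradient play in a toy quadratic aggregative game. The cost functions, step size, and player parameters are invented for illustration; the distributed algorithms surveyed would additionally estimate the aggregate over the communication topology rather than read it directly:

```python
import numpy as np

# Player i minimizes J_i = 0.5*(x_i - c_i)^2 + x_i * sigma,
# where sigma = mean(x) is the aggregate of all decisions.
c = np.array([1.0, 2.0, 3.0])   # hypothetical per-player cost parameters
N = len(c)
x = np.zeros(N)
for _ in range(2000):
    sigma = x.mean()
    grad = (x - c) + sigma + x / N   # d J_i / d x_i (own aggregate term included)
    x = x - 0.1 * grad               # simultaneous gradient step by all players

# At the Nash equilibrium every player's individual gradient vanishes.
```

Because this game's pseudo-gradient is strongly monotone, the simultaneous gradient steps contract toward the unique NE, which is why the residual gradient shrinks to numerical noise.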
Dear Editor, Through distributed machine learning, multi-UAV systems can achieve global optimization goals, such as optimal target tracking, without a centralized server by leveraging local computation and communication with neighbors. In this work, we implement the stochastic gradient descent (SGD) algorithm distributedly to optimize tracking errors based on local state and aggregation of the neighbors' estimates. However, Byzantine agents can mislead neighbors, causing deviations from optimal tracking. We prove that the swarm achieves resilient convergence if the aggregated results lie within the normal neighbors' convex hull, which can be guaranteed by the introduced centerpoint-based aggregation rule. In the given simulated scenarios, distributed learning using average, geometric median (GM), and coordinate-wise median (CM) based aggregation rules fails to track the target. Compared to using the centerpoint aggregation method alone, our approach, which combines a pre-filter with the centerpoint aggregation rule, significantly enhances resilience against Byzantine attacks, achieving faster convergence and smaller tracking errors.
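For context, the coordinate-wise median rule mentioned above (one of the baselines compared in the letter) can be sketched in a few lines; the vectors below are made-up neighbor estimates with one Byzantine outlier:

```python
import statistics

def coordinate_wise_median(vectors):
    """Robust aggregation of neighbors' gradient estimates: take the median
    independently in each coordinate, so a minority of Byzantine vectors
    with extreme values cannot drag the aggregate arbitrarily far."""
    return [statistics.median(col) for col in zip(*vectors)]
```

Note that while CM bounds each coordinate separately, its output need not lie inside the convex hull of the honest vectors, which is precisely the guarantee the centerpoint-based rule provides.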
Funding (flexible load aggregation study): the Science and Technology Project of State Grid Shanxi Electric Power Co., Ltd. (Project No. 52051L240001).
Funding (debris flow velocity profile study): supported by the National Natural Science Foundation of China (Grant No. 52078493), the Natural Science Foundation of Hunan Province (Grant No. 2022JJ30700), the Natural Science Foundation for Excellent Young Scholars of Hunan (Grant No. 2021JJ20057), the Science and Technology Plan Project of Changsha (Grant No. kq2305006), and the Innovation Driven Program of Central South University (Grant No. 2023CXQD033).
基金The project supported by the National Natural Science foundation of china(10225212,50178016.10302007)the National Kev Basic Research Special Foundation and the Ministry of Education of China
Funding (FP-tree aggregative chain study): supported by the Natural Science Foundation of Liaoning Province (20042020).
Funding (Circle clustering algorithm study): Projects 60873265 and 60903222 supported by the National Natural Science Foundation of China; Project IRT0661 supported by the Program for Changjiang Scholars and Innovative Research Team in University of China.
Funding: the National Natural Science Foundation of China (No. 61234001).
Abstract: Block-matching and 3D-filtering (BM3D) is a state-of-the-art denoising algorithm for image/video, which takes full advantage of the spatial and temporal correlation of the video. The algorithm's performance comes at the price of extensive similar-block searching and filtering, which bring high computation and memory-access costs. Area, memory bandwidth, and computation are the major bottlenecks in designing a feasible architecture because of the large frame size and search range. In this paper, we introduce a novel structure to increase the data reuse rate and reduce the internal static random-access memory (SRAM). Our target is to design a phase-alternating-line (PAL) real-time processing chip for BM3D. We propose an application-specific integrated circuit (ASIC) architecture of BM3D for the 720 × 576 BT656 PAL format. The chip runs at a 100 MHz system frequency with a 166 MHz 32-bit double-data-rate (DDR) interface. When the noise level is σ = 25, we realize real-time denoising and achieve about a 10 dB peak signal-to-noise ratio (PSNR) gain with just one iteration of the BM3D algorithm.
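The block-matching (grouping) step that dominates BM3D's cost can be sketched as an exhaustive window search. The Python snippet below is a toy illustration only, not the ASIC data path; the block size, search range, and similarity threshold are arbitrary assumptions:

```python
def ssd(block_a, block_b):
    """Sum of squared differences between two equal-sized blocks."""
    return sum((a - b) ** 2
               for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def match_blocks(frame, ref_block, top, left, size, search, threshold):
    """Exhaustive block matching: scan a (2*search+1)^2 window around
    (top, left) and return the offsets of candidate blocks whose SSD
    to ref_block is within threshold, as in BM3D's grouping step."""
    matches = []
    h, w = len(frame), len(frame[0])
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= h - size and 0 <= x <= w - size:
                cand = [row[x:x + size] for row in frame[y:y + size]]
                if ssd(ref_block, cand) <= threshold:
                    matches.append((dy, dx))
    return matches

# Tiny periodic frame: the 2x2 reference block at (0, 0) recurs at
# offsets (0, 2), (2, 0), and (2, 2).
frame = [[1, 2, 1, 2],
         [3, 4, 3, 4],
         [1, 2, 1, 2],
         [3, 4, 3, 4]]
ref = [frame[0][0:2], frame[1][0:2]]
print(match_blocks(frame, ref, 0, 0, size=2, search=2, threshold=0))
# -> [(0, 0), (0, 2), (2, 0), (2, 2)]
```

Every candidate block is re-read from the frame here, which is exactly the redundant memory traffic the proposed architecture's data-reuse structure is designed to avoid.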
Abstract: Designing an excellent original topology not only improves the accuracy of routing, but also improves the recovery rate after failures. In this paper, we propose a new heuristic topology generation algorithm, GA-PODCC (Genetic Algorithm based on the Pareto Optimality of Delay, Configuration and Consumption), which utilizes a genetic algorithm to optimize the link delay and resource configuration/consumption. The novelty lies in the two stages of genetic operation: the first stage picks the best population by means of the crossover, mutation, and selection operations; the second stage selects an excellent individual from that best population. The simulation results show that, using the same number of nodes, the GA-PODCC algorithm improves the balance of all three optimization objectives, maintaining a low level of distortion in topology aggregation.
Funding: Supported by the National Natural Science Foundation of China (No. 50475117) and the Tianjin Natural Science Foundation (No. 06YFJMJC03700).
Abstract: Integrating heterogeneous data sources is a precondition for sharing data among enterprises. Highly efficient data updating can both save system expenses and offer real-time data. Rapid data modification in the pre-processing area of the data warehouse is one of the hot issues. An extract-transform-load (ETL) design is proposed based on a new data algorithm called Diff-Match, which is developed by utilizing schema matching and data-filtering technology. It can accelerate data renewal, filter heterogeneous data, and seek out differing data sets. Its efficiency has been proved by its successful application in an electric apparatus group enterprise.
Funding: supported by the National Natural Science Foundation of China (No. 51806103), the Natural Science Foundation of Jiangsu Province (No. BK20170800), and the Aeronautical Science Foundation of China (No. 201928052002).
Abstract: Particles, including soot, aerosol, and ash, usually exist as fractal aggregates. The radiative properties of particle fractal aggregates have a great influence on the study of light or heat radiative transfer in particle media. In the present work, the performance of the single-layer and double-layer inversion models in reconstructing the geometric structure of particle fractal aggregates is studied based on the light reflectance-transmittance measurement method. An improved artificial fish-swarm algorithm (IAFSA) is proposed to solve the inverse problem. The results reveal that the accuracy of the double-layer inversion model is more satisfactory, as it provides more uncorrelated information than the single-layer inversion model. Moreover, the developed IAFSA shows higher accuracy and better robustness than the original artificial fish-swarm algorithm (AFSA) by effectively avoiding local optima. As a whole, the present work supplies a useful measurement technique for predicting the geometrical morphology of particle fractal aggregates.
Abstract: Wireless sensor networks (WSNs) are mainly deployed for data acquisition; thus, network performance can be passively measured by observing whether application data from the various sensor nodes reach the sink. In this paper, we therefore take into account the unique data-aggregation communication paradigm of WSNs and model the inference of link loss rates as a maximum-likelihood estimation problem. We then propose an inference algorithm based on standard expectation-maximization (EM) techniques. Our algorithm is applicable not only to periodic data collection scenarios but also to event detection scenarios. Finally, we validate the algorithm through simulations, in which it exhibits good performance and scalability.
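The latent-variable structure that makes EM natural here can be shown on a two-hop chain. In the Python sketch below (an illustrative toy, not the paper's algorithm; the topology, success rates, and round counts are assumptions), node B's reading is aggregated at node A, so whenever A's transmission to the sink fails, the outcome of the B-to-A link that round is hidden and the E-step imputes it:

```python
import random

def em_link_loss(obs, iters=50):
    """EM sketch for per-link success rates on the chain sink <- A <- B.
    obs is a list of (x_a, x_b): whether A's and B's data reached the
    sink each round. When x_a == 0, the B->A outcome is unobserved,
    so the E-step fills it in using the current estimate s2."""
    n = len(obs)
    n_a = sum(xa for xa, _ in obs)   # rounds where link A->sink worked
    n_b = sum(xb for _, xb in obs)   # rounds where both links worked
    s1 = n_a / n                     # A->sink link: fully observed MLE
    s2 = 0.5                         # B->A link: initial guess
    for _ in range(iters):
        # E-step: observed B->A successes plus the expected number of
        # successes among the hidden (x_a == 0) rounds.
        expected = n_b + s2 * (n - n_a)
        # M-step: re-estimate the success probability.
        s2 = expected / n
    return s1, s2

# Simulate 5000 rounds with true per-link success rates 0.9 and 0.7
# (link loss rate = 1 - success rate).
random.seed(1)
rounds = []
for _ in range(5000):
    l1 = random.random() < 0.9   # A -> sink
    l2 = random.random() < 0.7   # B -> A
    rounds.append((int(l1), int(l1 and l2)))

s1, s2 = em_link_loss(rounds)
print(round(s1, 2), round(s2, 2))
```

The iteration converges to the intuitive conditional estimate (B's arrivals divided by A's arrivals); the paper's EM handles general aggregation trees where no such closed form is available.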
Funding: supported in part by the Fundamental Research Funds for the Central Universities (SWU-XDJH202312), the National Natural Science Foundation of China (62173278), and the Chongqing Science Fund for Distinguished Young Scholars (2024NSCQJQX0103).
Abstract: Game-theoretic models and design tools have gained substantial prominence for controlling and optimizing behavior within distributed engineering systems, owing to the inherent distribution of decisions among individuals. In non-cooperative settings, aggregative games serve as a mathematical framework for the interdependent optimal decision-making problem among a group of non-cooperative players. In such scenarios, each player's decision is influenced by an aggregate of all players' decisions. Nash equilibrium (NE) seeking in aggregative games has emerged as a vibrant topic, driven by applications that harness the aggregation property. This paper presents a comprehensive overview of current research on aggregative games, with a focus on communication topology. Distributed algorithms are systematically classified by communication topology: undirected networks, directed networks, and time-varying networks. Furthermore, the paper sorts out the challenges and compares the algorithms' convergence performance. It also delves into real-world applications of distributed optimization techniques grounded in aggregative games. Finally, it proposes several challenges that can guide future research directions.
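The aggregative structure makes NE seeking amenable to simple gradient dynamics. The Python sketch below is a textbook full-information illustration, not any of the surveyed distributed algorithms (those estimate the aggregate over a communication network); the quadratic costs, coefficients, and step size are assumptions:

```python
def gradient_play(a, b=1.0, eta=0.1, iters=2000):
    """Gradient play for a quadratic aggregative game: player i
    minimizes J_i(x_i, sigma) = (x_i - a_i)^2 + b * x_i * sigma,
    where sigma = (1/N) * sum_j x_j is the aggregate. Each player
    descends its own partial gradient
        dJ_i/dx_i = 2*(x_i - a_i) + b*(sigma + x_i / N)."""
    n = len(a)
    x = [0.0] * n
    for _ in range(iters):
        sigma = sum(x) / n
        x = [xi - eta * (2.0 * (xi - ai) + b * (sigma + xi / n))
             for xi, ai in zip(x, a)]
    return x

a = [1.0, 2.0, 3.0]
x = gradient_play(a)
sigma = sum(x) / len(x)
# At the NE, every player's partial gradient is (near) zero:
residuals = [2.0 * (xi - ai) + 1.0 * (sigma + xi / 3)
             for xi, ai in zip(x, a)]
print(max(abs(r) for r in residuals))  # near zero at equilibrium
```

For this strongly monotone game the iteration contracts to the unique NE; the survey's subject is how to retain such guarantees when each player only sees neighbors' information over undirected, directed, or time-varying networks.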
Funding: supported by the Guangdong Major Project of Basic and Applied Basic Research (2023B0303000009), the Guangdong Basic and Applied Basic Research Foundation (2024A1515030153, 2025A1515011587), the Project of the Department of Education of Guangdong Province (2023ZDZX4046), the Shenzhen Natural Science Fund (Stable Support Plan Program 20231122121608001), and the Ningbo Municipal Science and Technology Bureau (ZX2024000604).
Abstract: Dear Editor, Through distributed machine learning, multi-UAV systems can achieve global optimization goals, such as optimal target tracking, without a centralized server by leveraging local computation and communication with neighbors. In this work, we implement the stochastic gradient descent (SGD) algorithm distributedly to optimize tracking errors based on each agent's local state and the aggregation of its neighbors' estimates. However, Byzantine agents can mislead their neighbors, causing deviations from optimal tracking. We prove that the swarm achieves resilient convergence if the aggregated results lie within the convex hull of the normal neighbors' estimates, which can be guaranteed by the introduced centerpoint-based aggregation rule. In the given simulated scenarios, distributed learning using average, geometric median (GM), and coordinate-wise median (CM) based aggregation rules fails to track the target. Compared with using the centerpoint aggregation method alone, our approach, which combines a pre-filter with the centerpoint aggregation rule, significantly enhances resilience against Byzantine attacks, achieving faster convergence and smaller tracking errors.
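Of the aggregation rules compared above, the coordinate-wise median is the simplest to sketch. The Python toy below (illustrative only; the numbers are made up, and the letter's centerpoint rule and pre-filter are considerably more involved) shows the basic contrast between a robust rule and the plain average under a single Byzantine outlier:

```python
from statistics import median

def coordwise_median(vectors):
    """Coordinate-wise median (CM) aggregation rule: takes the median
    of each coordinate independently across neighbors' estimates."""
    return [median(col) for col in zip(*vectors)]

def average(vectors):
    """Plain averaging rule: no resilience to Byzantine values."""
    return [sum(col) / len(col) for col in zip(*vectors)]

# Four honest neighbors estimate the target near (1, 1); one
# Byzantine agent injects an arbitrary large value.
estimates = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [1.0, 1.0],
             [100.0, -100.0]]
print(average(estimates))           # dragged far off by the attacker
print(coordwise_median(estimates))  # stays near the honest consensus
```

Note that CM is robust per coordinate but its output need not lie in the convex hull of the honest estimates in higher dimensions, which is precisely the property the centerpoint-based rule is introduced to guarantee.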