Journal articles — 133 articles found
1. Online distributed optimization with stochastic gradients: high probability bound of regrets
Authors: Yuchen Yang, Kaihong Lu, Long Wang. Control Theory and Technology (EI, CSCD), 2024, No. 3, pp. 419-430.
In this paper, the problem of online distributed optimization subject to a convex set is studied via a network of agents. Each agent only has access to a noisy gradient of its own objective function, and can communicate with its neighbors via a network. To handle this problem, an online distributed stochastic mirror descent algorithm is proposed. Existing works on online distributed algorithms involving stochastic gradients only provide the expectation bounds of the regrets. Different from them, we study the high probability bound of the regrets, i.e., the sublinear bound of the regret is characterized by the natural logarithm of the failure probability's inverse. Under mild assumptions on the graph connectivity, we prove that the dynamic regret grows sublinearly with a high probability if the deviation in the minimizer sequence is sublinear with the square root of the time horizon. Finally, a simulation is provided to demonstrate the effectiveness of our theoretical results.
Keywords: distributed optimization, online optimization, stochastic gradient, high probability
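As a minimal illustration of the kind of update this abstract describes, the sketch below runs one round of online distributed mirror descent with noisy gradients over the probability simplex. The entropy mirror map, the mixing matrix `W`, the step size `eta`, and the noise level `sigma` are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def entropic_mirror_step(x, g, eta):
    """Mirror-descent step on the probability simplex (entropy mirror map)."""
    w = x * np.exp(-eta * g)
    return w / w.sum()

def distributed_round(X, W, grads, eta, sigma, rng):
    """One round: each agent averages its neighbors' iterates (row i of W),
    queries a noisy gradient of its own loss, and takes a mirror step."""
    mixed = W @ X
    X_next = np.empty_like(X)
    for i, grad_i in enumerate(grads):
        noisy_g = grad_i(mixed[i]) + sigma * rng.standard_normal(X.shape[1])
        X_next[i] = entropic_mirror_step(mixed[i], noisy_g, eta)
    return X_next

# Toy usage: 3 agents on a complete graph, quadratic losses over the 4-dimensional simplex.
rng = np.random.default_rng(0)
W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
targets = rng.dirichlet(np.ones(4), size=3)
grads = [lambda x, t=t: x - t for t in targets]
X = np.full((3, 4), 0.25)
for _ in range(200):
    X = distributed_round(X, W, grads, eta=0.1, sigma=0.01, rng=rng)
```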
2. Random gradient-free method for online distributed optimization with strongly pseudoconvex cost functions
Authors: Xiaoxi Yan, Cheng Li, Kaihong Lu, Hang Xu. Control Theory and Technology (EI, CSCD), 2024, No. 1, pp. 14-24.
This paper focuses on the online distributed optimization problem based on multi-agent systems. In this problem, each agent can only access its own cost function and a convex set, and can only exchange local state information with its current neighbors through a time-varying digraph. In addition, the agents do not have access to the information about the current cost functions until decisions are made. Different from most existing works on online distributed optimization, here we consider the case where the cost functions are strongly pseudoconvex and real gradients of the cost functions are not available. To handle this problem, a random gradient-free online distributed algorithm involving the multi-point gradient estimator is proposed. Of particular interest is that under the proposed algorithm, each agent only uses the estimation information of gradients instead of the real gradient information to make decisions. The dynamic regret is employed to measure the performance of the proposed algorithm. We prove that if the cumulative deviation of the minimizer sequence grows within a certain rate, then the expectation of the dynamic regret increases sublinearly. Finally, a simulation example is given to corroborate the validity of our results.
Keywords: multi-agent system, online distributed optimization, pseudoconvex optimization, random gradient-free method
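The multi-point gradient-estimation idea named in the abstract can be sketched in a few lines: the gradient is approximated from several function-value queries along random directions, with no access to the true gradient. The averaging form, smoothing radius `delta`, and number of directions below are generic assumptions rather than the paper's exact estimator.

```python
import numpy as np

def multi_point_estimate(f, x, delta, num_dirs, rng):
    """Average forward differences along random unit directions; only function
    values of f are queried, never its gradient."""
    d = x.size
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g += (f(x + delta * u) - f(x)) / delta * u
    return d * g / num_dirs

# Toy check on f(x) = ||x||^2 / 2, whose true gradient at x is x itself.
rng = np.random.default_rng(1)
x = np.array([1.0, -2.0, 0.5])
approx = multi_point_estimate(lambda z: 0.5 * z @ z, x, delta=1e-3, num_dirs=200, rng=rng)
```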
3. Distributed optimization of electricity-gas-heat integrated energy system with multi-agent deep reinforcement learning (cited 5 times)
Authors: Lei Dong, Jing Wei, Hao Lin, Xinying Wang. Global Energy Interconnection (EI, CAS, CSCD), 2022, No. 6, pp. 604-617.
The coordinated optimization problem of the electricity-gas-heat integrated energy system (IES) has the characteristics of strong coupling, non-convexity, and nonlinearity. The centralized optimization method has a high cost of communication and complex modeling. Meanwhile, the traditional numerical iterative solution cannot cope with uncertainty and has limited solution efficiency, which makes it difficult to apply online. For the coordinated optimization problem of the electricity-gas-heat IES in this study, we constructed a model for the distributed IES with a dynamic distribution factor and transformed the centralized optimization problem into a distributed optimization problem in the multi-agent reinforcement learning environment using the multi-agent deep deterministic policy gradient. Introducing the dynamic distribution factor allows the system to consider the impact of changes in real-time supply and demand on system optimization, dynamically coordinating different energy sources for complementary utilization and effectively improving the system economy. Compared with centralized optimization, the distributed model with multiple decision centers can achieve similar results while easing the pressure on system communication. The proposed method considers the dual uncertainty of renewable energy and load in the training. Compared with the traditional iterative solution method, it can better cope with uncertainty and realize real-time decision making of the system, which is conducive to online application. Finally, we verify the effectiveness of the proposed method using an example of an IES coupled with three energy hub agents.
Keywords: integrated energy system, multi-agent system, distributed optimization, multi-agent deep deterministic policy gradient, real-time optimization decision
4. Fully privacy-preserving distributed optimization in power systems based on secret sharing (cited 1 time)
Authors: Nianfeng Tian, Qinglai Guo, Hongbin Sun, Xin Zhou. iEnergy, 2022, No. 3, pp. 351-362.
With the increasing development of the smart grid, multi-party cooperative computation between several entities has become a typical characteristic of modern energy systems. Traditionally, data exchange among parties is inevitable, rendering how to complete multi-party collaborative optimization without exposing any private information a critical issue. This paper proposes a fully privacy-preserving distributed optimization framework based on secure multi-party computation (SMPC) with secret sharing protocols. The framework decomposes the collaborative optimization problem into a master problem and several subproblems. The process of solving the master problem is executed in the SMPC framework via the secret sharing protocols among agents. The relationships of agents are completely equal, and there is no privileged agent or any third party. The process of solving subproblems is conducted by agents individually. Compared to the traditional distributed optimization framework, the proposed SMPC-based framework can fully preserve individual private information. Exchanged data among agents are encrypted, and it is assured that no private information is disclosed. Furthermore, the framework maintains a limited and acceptable increase in computational costs while guaranteeing optimality. Case studies are conducted on test systems of different scales to demonstrate the principle of secret sharing and verify the feasibility and scalability of the proposed methodology.
Keywords: secure multi-party computation, privacy preservation, secret sharing, distributed optimization
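The secret-sharing principle that underpins this kind of framework can be illustrated with additive shares: each agent splits its private number so that individual shares reveal nothing, yet the global sum can still be reconstructed. This is a toy sketch of that single primitive over real numbers with hypothetical blinding ranges, not the paper's full SMPC protocol for the master problem.

```python
import numpy as np

def additive_shares(value, n_parties, rng):
    """Split a private scalar into n shares that sum back to the value."""
    blinds = rng.uniform(-1e6, 1e6, n_parties - 1)
    return np.append(blinds, value - blinds.sum())

def private_sum(private_values, rng):
    """Each party sends one share to every party; each party sums the shares it
    holds, and only the total of those partial sums (the global sum) is revealed."""
    n = len(private_values)
    share_matrix = np.array([additive_shares(v, n, rng) for v in private_values])
    partial_sums = share_matrix.sum(axis=0)   # what each party can compute locally
    return partial_sums.sum()

rng = np.random.default_rng(2)
demands = [3.2, 7.5, 1.1, 4.4]                # private values of four agents
assert abs(private_sum(demands, rng) - sum(demands)) < 1e-6
```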
5. Fully asynchronous distributed optimization with linear convergence over directed networks
Authors: SHA Xingyu, ZHANG Jiaqi, YOU Keyou. Acta Scientiarum Naturalium Universitatis Sunyatseni (CAS, CSCD, PKU Core), 2023, No. 5, pp. 1-23.
We study distributed optimization problems over a directed network, where nodes aim to minimize the sum of local objective functions via directed communications with neighbors. Many algorithms are designed to solve it for synchronized or randomly activated implementation, which may create deadlocks in practice. In sharp contrast, we propose a fully asynchronous push-pull gradient (APPG) algorithm, where each node updates without waiting for any other node by using possibly delayed information from neighbors. Then, we construct two novel augmented networks to analyze asynchrony and delays, and quantify its convergence rate from the worst-case point of view. Particularly, all nodes of APPG converge to the same optimal solution at a linear rate of O(λ^k) if local functions have Lipschitz-continuous gradients and their sum satisfies the Polyak-Łojasiewicz condition (convexity is not required), where λ ∈ (0,1) is explicitly given and the virtual counter k increases by one when any node updates. Finally, the advantage of APPG over the synchronous counterpart and its linear speedup efficiency are numerically validated via a logistic regression problem.
Keywords: fully asynchronous, distributed optimization, linear convergence, Polyak-Łojasiewicz condition
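The push-pull structure behind APPG can be pictured in its synchronous form: a row-stochastic matrix mixes decision variables while a column-stochastic matrix mixes gradient trackers. The asynchronous bookkeeping (delayed neighbor information, the virtual counter) is the paper's contribution and is omitted here; the matrices `R`, `C`, and the step size are illustrative choices.

```python
import numpy as np

def push_pull_step(x, y, g_prev, R, C, grads, alpha):
    """Synchronous push-pull / gradient-tracking update:
       x_{k+1} = R x_k - alpha * y_k
       y_{k+1} = C y_k + grad(x_{k+1}) - grad(x_k)."""
    x_next = R @ x - alpha * y
    g_next = np.array([grads[i](x_next[i]) for i in range(len(grads))])
    y_next = C @ y + g_next - g_prev
    return x_next, y_next, g_next

# Toy usage: 3 agents minimizing sum_i 0.5*(x - b_i)^2 over a directed ring.
b = np.array([1.0, 2.0, 6.0])
grads = [lambda xi, bi=bi: xi - bi for bi in b]
R = np.array([[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]])   # row-stochastic
C = np.array([[0.5, 0.0, 0.5], [0.5, 0.5, 0.0], [0.0, 0.5, 0.5]])   # column-stochastic
x = np.zeros(3)
g = np.array([grads[i](x[i]) for i in range(3)])
y = g.copy()
for _ in range(300):
    x, y, g = push_pull_step(x, y, g, R, C, grads, alpha=0.1)
# Each entry of x should approach the minimizer of the sum, i.e. the average of b (3.0).
```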
6. Distributed optimization for discrete-time multiagent systems with nonconvex control input constraints and switching topologies
Authors: Xiao-Yu Shen, Shuai Su, Hai-Liang Hou. Chinese Physics B (SCIE, EI, CAS, CSCD), 2021, No. 12, pp. 283-290.
This paper addresses the distributed optimization problem of discrete-time multiagent systems with nonconvex control input constraints and switching topologies. We introduce a novel distributed optimization algorithm with a switching mechanism to guarantee that all agents eventually converge to an optimal solution point, while their control inputs are constrained in their own nonconvex region. It is worth noting that the mechanism is performed to tackle the coexistence of the nonconvex constraint operator and the optimization gradient term. Based on the dynamic transformation technique, the original nonlinear dynamic system is transformed into an equivalent one with a nonlinear error term. By utilizing the nonnegative matrix theory, it is shown that the optimization problem can be solved when the union of switching communication graphs is jointly strongly connected. Finally, a numerical simulation example is used to demonstrate the acquired theoretical results.
Keywords: multiagent systems, nonconvex input constraints, switching topologies, distributed optimization
7. Distributed Optimization for Heterogeneous Second-Order Multi-Agent Systems
Authors: Qing Zhang, Zhikun Gong, Zhengquan Yang, Zengqiang Chen. Journal of Harbin Institute of Technology (New Series) (EI, CAS), 2020, No. 4, pp. 53-59.
A continuous-time distributed optimization problem was studied for second-order heterogeneous multi-agent systems. The aim of this study is to keep the velocities of all agents the same and make the velocities converge to the optimal value that minimizes the sum of local cost functions. First, an effective distributed controller which only uses local information was designed. Then, the stability and optimality of the systems were verified. Finally, a simulation case was used to illustrate the analytical results.
Keywords: distributed optimization, heterogeneous multi-agent system, local cost function, consensus
8. Distributed asynchronous double accelerated optimization for ethylene plant considering delays
Authors: Ting Wang, Zhongmei Li, Wenli Du. Chinese Journal of Chemical Engineering, 2025, No. 2, pp. 245-250.
Considering the complexity of plant-wide optimization for large-scale industries, a distributed optimization framework is proposed to solve the profit optimization problem for the whole ethylene process. To tackle the delays arising from the residence time of materials passing through production units, while guaranteeing constraint satisfaction, an asynchronous distributed parameter projection algorithm with a gradient tracking method is introduced. Besides, heavy-ball momentum and Nesterov momentum are incorporated into the proposed algorithm in order to achieve double acceleration properties. The experimental results show that the proposed asynchronous algorithm achieves faster convergence than the synchronous algorithm.
Keywords: asynchronous distributed optimization, plant-wide optimization, heavy ball, Nesterov, inequality constraints
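A rough sketch of what "double acceleration" can look like in a distributed gradient update is given below: a Nesterov look-ahead point combined with a heavy-ball momentum term. The coefficients and the synchronous, unconstrained form are illustrative assumptions; the paper's asynchronous, delay- and constraint-handling algorithm is considerably more involved.

```python
import numpy as np

def doubly_accelerated_step(x, x_prev, W, grads, alpha, beta, gamma):
    """One synchronous distributed step combining Nesterov momentum (gamma,
    gradient evaluated at a look-ahead point) and heavy-ball momentum (beta)."""
    z = x + gamma * (x - x_prev)                      # Nesterov extrapolation
    g = np.array([grads[i](z[i]) for i in range(len(grads))])
    x_next = W @ z - alpha * g + beta * (x - x_prev)  # mixing + gradient + heavy ball
    return x_next, x

# Toy usage: 3 agents, scalar quadratic losses, doubly stochastic mixing matrix W.
b = np.array([0.0, 1.0, 5.0])
grads = [lambda xi, bi=bi: xi - bi for bi in b]
W = np.full((3, 3), 1.0 / 3.0)
x, x_prev = np.zeros(3), np.zeros(3)
for _ in range(200):
    x, x_prev = doubly_accelerated_step(x, x_prev, W, grads, alpha=0.2, beta=0.3, gamma=0.3)
```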
9. A Stochastic Extremum Seeking Approach for Distributed Optimization with Binary-Valued Intermittent Measurements over Directed Graphs
Authors: ZHANG Yuan, LIU Shujun. Journal of Systems Science & Complexity, 2025, No. 5, pp. 1887-1908.
This paper focuses on solving the distributed optimization problem with binary-valued intermittent measurements of local objective functions. In this paper, a binary-valued measurement represents whether the measured value is smaller than a fixed threshold. Meanwhile, the "intermittent" scenario arises when there is a non-zero probability of not detecting each local function value during the measuring process. Using this kind of coarse measurement, the authors propose a discrete-time stochastic extremum seeking-based algorithm for distributed optimization over a directed graph. As is well known, many existing distributed optimization algorithms require a doubly stochastic weight matrix to ensure the average consensus of agents. However, in practical engineering, achieving double stochasticity, especially for directed graphs, is not always feasible or desirable. To overcome this limitation, the authors design a row-stochastic matrix and a column-stochastic matrix as weight matrices in the proposed algorithm instead of relying on double stochasticity. Under some mild conditions, the authors rigorously prove that agents can reach the average consensus and ultimately find the optimal solution. Finally, the authors provide a numerical example to illustrate the effectiveness of the algorithm.
Keywords: binary-valued measurement, directed graph, distributed optimization, intermittent measurement, stochastic extremum seeking
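The weight-matrix construction highlighted above — one row-stochastic and one column-stochastic matrix instead of a doubly stochastic one — is easy to illustrate. The sketch below builds both from a directed adjacency matrix with self-loops; the uniform normalization is a common convention and only an assumption about the paper's construction.

```python
import numpy as np

def directed_weight_matrices(adj):
    """Given adj[i, j] = 1 when agent i receives from agent j (self-loops added),
    return a row-stochastic matrix (each agent averages what it receives) and a
    column-stochastic matrix (each agent splits what it sends)."""
    A = ((adj + np.eye(adj.shape[0])) > 0).astype(float)
    row_stochastic = A / A.sum(axis=1, keepdims=True)
    col_stochastic = A / A.sum(axis=0, keepdims=True)
    return row_stochastic, col_stochastic

# Directed ring on 4 agents: agent i hears only from agent i-1 (plus itself).
n = 4
adj = np.zeros((n, n))
for i in range(n):
    adj[i, (i - 1) % n] = 1.0
R, C = directed_weight_matrices(adj)
assert np.allclose(R.sum(axis=1), 1) and np.allclose(C.sum(axis=0), 1)
```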
10. Gradient-free distributed online optimization in networks
Authors: Yuhang Liu, Wenxiao Zhao, Nan Zhang, Dongdong Lv, Shuai Zhang. Control Theory and Technology, 2025, No. 2, pp. 207-220.
In this paper, we consider the distributed online optimization problem on a time-varying network, where each agent on the network has its own time-varying objective function and the goal is to minimize the overall accumulated loss. Moreover, we focus on distributed algorithms which use neither gradient information nor projection operators, so as to improve applicability and computational efficiency. By introducing deterministic differences and randomized differences to substitute for the gradient information of the objective functions, and removing the projection operator of traditional algorithms, we design two kinds of gradient-free distributed online optimization algorithms without a projection step, which save considerable computational resources and place fewer limitations on applicability. We prove that both algorithms achieve consensus of the estimates and regrets of O(log(T)) for locally strongly convex objectives. Finally, a simulation example is provided to verify the theoretical results.
Keywords: distributed optimization, online convex optimization, gradient-free algorithm, projection-free algorithm
11. Privacy Distributed Constrained Optimization Over Time-Varying Unbalanced Networks and Its Application in Federated Learning
Authors: Mengli Wei, Wenwu Yu, Duxin Chen, Mingyu Kang, Guang Cheng. IEEE/CAA Journal of Automatica Sinica, 2025, No. 2, pp. 335-346.
This paper investigates a class of constrained distributed zeroth-order optimization (ZOO) problems over time-varying unbalanced graphs while ensuring privacy preservation among individual agents. Although recent progress has addressed these concerns separately, there remains a lack of solutions offering theoretical guarantees for both privacy protection and constrained ZOO over time-varying unbalanced graphs. We hereby propose a novel algorithm, termed the differential privacy (DP) distributed push-sum based zeroth-order constrained optimization algorithm (DP-ZOCOA). Operating over time-varying unbalanced graphs, DP-ZOCOA obviates the need for supplemental suboptimization problem computations, thereby reducing overhead in comparison to distributed primal-dual methods. DP-ZOCOA is specifically tailored to tackle constrained ZOO problems over time-varying unbalanced graphs, offering a guarantee of convergence to the optimal solution while robustly preserving privacy. Moreover, we provide rigorous proofs of convergence and privacy for DP-ZOCOA, underscoring its efficacy in attaining optimal convergence without constraints. To enhance its applicability, we incorporate DP-ZOCOA into the federated learning framework and formulate a decentralized zeroth-order constrained federated learning algorithm (ZOCOA-FL) to address challenges stemming from the time-varying imbalance of the communication topology. Finally, the performance and effectiveness of the proposed algorithms are thoroughly evaluated through simulations on distributed least squares (DLS) and decentralized federated learning (DFL) tasks.
Keywords: constrained distributed optimization, decentralized federated learning (DFL), differential privacy (DP), time-varying unbalanced graphs, zeroth-order gradient
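One ingredient named in the abstract, differential privacy, rests on perturbing whatever an agent shares with its neighbors. A minimal sketch of the Laplace mechanism is given below; the sensitivity bound, the privacy-budget schedule across iterations, and the way DP-ZOCOA actually injects noise are beyond this illustration and are not taken from the paper.

```python
import numpy as np

def laplace_perturb(state, sensitivity, epsilon, rng):
    """Laplace mechanism: adding noise with scale sensitivity/epsilon to a shared
    quantity makes its release epsilon-differentially private."""
    return state + rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=state.shape)

rng = np.random.default_rng(3)
local_state = np.array([0.7, -1.3])
shared = laplace_perturb(local_state, sensitivity=0.1, epsilon=0.5, rng=rng)
```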
12. Event-triggered distributed optimization for model-free multi-agent systems (cited 1 time)
Authors: Shanshan ZHENG, Shuai LIU, Licheng WANG. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024, No. 2, pp. 214-224.
In this paper, the distributed optimization problem is investigated for a class of general nonlinear model-free multi-agent systems. The dynamical model of each agent is unknown and only the input/output data are available. A model-free adaptive control method is employed, by which the original unknown nonlinear system is equivalently converted into a dynamic linearized model. An event-triggered consensus scheme is developed to guarantee that the consensus error of the outputs of all agents is convergent. Then, by means of the distributed gradient descent method, a novel event-triggered model-free adaptive distributed optimization algorithm is put forward. Sufficient conditions are established to ensure the consensus and optimality of the addressed system. Finally, simulation results are provided to validate the effectiveness of the proposed approach.
Keywords: distributed optimization, multi-agent systems, model-free adaptive control, event-triggered mechanism
13. Distributed Optimization and Scaling Design for Solving Sylvester Equations
Authors: CHENG Songsong, YU Xin, ZENG Xianlin, LIANG Shu, HONG Yiguang. Journal of Systems Science & Complexity (SCIE, EI, CSCD), 2024, No. 6, pp. 2487-2510.
This paper develops distributed algorithms for solving Sylvester equations. The authors transform solving Sylvester equations into a distributed optimization problem, unifying all eight standard distributed matrix structures. Then the authors propose a distributed algorithm to find the least squares solution and achieve an explicit linear convergence rate. These results are obtained by carefully choosing the step-size of the algorithm, which requires particular information of data and Laplacian matrices. To avoid these centralized quantities, the authors further develop a distributed scaling technique by using local information only. As a result, the proposed distributed algorithm along with the distributed scaling design yields a universal method for solving Sylvester equations over a multi-agent network with the constant step-size freely chosen from configurable intervals. Finally, the authors provide three examples to illustrate the effectiveness of the proposed algorithms.
Keywords: distributed optimization, least squares solution, linear convergence rate, step-size interval, Sylvester equation
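The reformulation at the heart of this line of work, treating AX + XB = C as a least-squares problem, follows from the vectorized identity vec(AX + XB) = (I ⊗ A + Bᵀ ⊗ I) vec(X). The sketch below solves that system centrally; the paper's contribution is doing this distributedly with a scaled step size, which is not reproduced here.

```python
import numpy as np

def sylvester_least_squares(A, B, C):
    """Least-squares solution of A X + X B = C via column-stacking vectorization."""
    n, m = C.shape
    M = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    vec_x, *_ = np.linalg.lstsq(M, C.flatten(order="F"), rcond=None)
    return vec_x.reshape((n, m), order="F")

# Toy check: build C from a known X_true and recover it.
A = np.array([[2.0, 0.0], [1.0, 3.0]])
B = np.array([[1.0, 1.0], [0.0, 2.0]])
X_true = np.array([[1.0, -1.0], [0.5, 2.0]])
C = A @ X_true + X_true @ B
X = sylvester_least_squares(A, B, C)   # matches X_true up to numerical error
```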
14. Zeroth-Order Methods for Online Distributed Optimization with Strongly Pseudoconvex Cost Functions
Authors: Xiaoxi YAN, Muyuan MA, Kaihong LU. Journal of Systems Science and Information (CSCD), 2024, No. 1, pp. 145-160.
This paper studies an online distributed optimization problem over multi-agent systems. In this problem, the goal of agents is to cooperatively minimize the sum of locally dynamic cost functions. Different from most existing works on distributed optimization, here we consider the case where the cost function is strongly pseudoconvex and real gradients of the objective functions are not available. To handle this problem, an online zeroth-order stochastic optimization algorithm involving the single-point gradient estimator is proposed. Under the algorithm, each agent only has access to the information associated with its own cost function and the estimate of the gradient, and exchanges local state information with its immediate neighbors via a time-varying digraph. The performance of the algorithm is measured by the expectation of the dynamic regret. Under mild assumptions on graphs, we prove that if the cumulative deviation of the minimizer sequence grows within a certain rate, then the expectation of the dynamic regret grows sublinearly. Finally, a simulation example is given to illustrate the validity of our results.
Keywords: multi-agent systems, strongly pseudoconvex function, single-point gradient estimator, online distributed optimization
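The single-point estimator mentioned above needs only one function-value query per decision, at the cost of higher variance than multi-point schemes. Below is a sketch of the standard form of such an estimator; the smoothing radius and scaling are generic assumptions, not necessarily the paper's.

```python
import numpy as np

def single_point_estimate(f, x, delta, rng):
    """One query per estimate: (d / delta) * f(x + delta * u) * u with u a random
    unit direction; unbiased for the gradient of a smoothed version of f."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return (d / delta) * f(x + delta * u) * u

# Averaging many estimates for f(x) = 0.5*||x||^2 recovers the gradient x approximately.
rng = np.random.default_rng(4)
x = np.array([0.5, -1.0])
samples = [single_point_estimate(lambda z: 0.5 * z @ z, x, delta=0.05, rng=rng)
           for _ in range(5000)]
avg = np.mean(samples, axis=0)
```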
15. Distributed optimization via dynamic event-triggered scheme with metric subregularity condition
Authors: Xin Yu, Xi Chen, Yuan Fan, Songsong Cheng. Autonomous Intelligent Systems, 2024, No. 1, pp. 358-367.
In this paper, we present a continuous-time algorithm with a dynamic event-triggered communication (DETC) mechanism for solving a class of distributed convex optimization problems that satisfy a metric subregularity condition. The proposed algorithm addresses the challenge of limited bandwidth in multi-agent systems by utilizing a continuous-time optimization approach with DETC. Furthermore, we prove that the distributed event-triggered algorithm converges exponentially to the optimal set, even without strong convexity conditions. Finally, we provide a comparison example to demonstrate the efficiency of our algorithm in saving communication resources.
Keywords: distributed optimization, event-triggered, metric subregularity, exponential convergence
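A dynamic event-triggered rule, in its generic form, augments a static error threshold with an internal variable that lets small errors accumulate before a broadcast is forced. The sketch below shows only this triggering logic; the thresholds, decay rate, and the coupling to the optimization dynamics are illustrative assumptions, not the paper's DETC design.

```python
import numpy as np

def static_trigger(x, x_last_sent, sigma):
    """Static rule: broadcast when the error since the last broadcast exceeds sigma."""
    return np.linalg.norm(x - x_last_sent) ** 2 > sigma

def dynamic_trigger(x, x_last_sent, eta, sigma, theta):
    """Dynamic rule: the internal variable eta relaxes the static threshold,
    typically reducing the number of triggering events."""
    err = np.linalg.norm(x - x_last_sent) ** 2
    return theta * err > sigma + eta

def update_eta(eta, x, x_last_sent, lam, sigma, theta, dt):
    """Internal-variable dynamics: eta decays and absorbs the unused error budget."""
    err = np.linalg.norm(x - x_last_sent) ** 2
    return eta + dt * (-lam * eta + sigma - theta * err)
```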
16. Distributed Economic Dispatch Algorithms of Microgrids Integrating Grid-Connected and Isolated Modes
Authors: Zhongxin Liu, Yanmeng Zhang, Yalin Zhang, Fuyong Wang. IEEE/CAA Journal of Automatica Sinica, 2025, No. 1, pp. 86-98.
The economic dispatch problem (EDP) of microgrids operating in both grid-connected and isolated modes within an energy internet framework is addressed in this paper. The multi-agent leader-following consensus algorithm is employed to address the EDP of microgrids in grid-connected mode, while the push-pull algorithm with a fixed step size is introduced for the isolated mode. The proposed algorithm for the isolated mode is proven to converge to the optimum when the interaction digraph of the microgrids is strongly connected. A unified algorithmic framework is proposed to handle the two modes of operation simultaneously, enabling our algorithm to achieve optimal power allocation and maintain the balance between power supply and demand in any mode and under any mode switching. Due to the push-pull structure of the algorithm and the use of a fixed step size, the proposed algorithm can better handle the case of unbalanced graphs, and the convergence speed is improved. It is shown that when the transmission topology is strongly connected and there is bi-directional communication between the energy router and its neighbors, the proposed algorithm in composite mode achieves economic dispatch even with arbitrary mode switching. Finally, we demonstrate the effectiveness and superiority of our algorithm through numerical simulations.
Keywords: consensus algorithm, distributed optimization, economic dispatch (ED), energy router (ER), multi-agent systems
17. Privacy Preserving Distributed Bandit Residual Feedback Online Optimization Over Time-Varying Unbalanced Graphs
Authors: Zhongyuan Zhao, Zhiqiang Yang, Luyao Jiang, Ju Yang, Quanbo Ge. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 11, pp. 2284-2297.
This paper considers the distributed online optimization (DOO) problem over time-varying unbalanced networks, where gradient information is explicitly unknown. To address this issue, a privacy-preserving distributed online one-point residual feedback (OPRF) optimization algorithm is proposed. This algorithm updates decision variables by leveraging one-point residual feedback to estimate the true gradient information. It can achieve the same performance as the two-point feedback scheme while only requiring a single function-value query per iteration. Additionally, it effectively eliminates the effect of time-varying unbalanced graphs by dynamically constructing row-stochastic matrices. Furthermore, compared to other distributed optimization algorithms that only consider explicitly unknown cost functions, this paper also addresses the issue of privacy information leakage of nodes. Theoretical analysis demonstrates that the method attains sublinear regret while protecting the privacy information of agents. Finally, numerical experiments on a distributed collaborative localization problem and federated learning confirm the effectiveness of the algorithm.
Keywords: differential privacy, distributed online optimization (DOO), federated learning, one-point residual feedback (OPRF), time-varying unbalanced graphs
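The one-point residual feedback (OPRF) idea can be sketched compactly: each round makes a single function-value query and reuses the value queried in the previous round as a baseline, which keeps the variance closer to that of a two-point scheme. The scaling, smoothing radius, and drifting toy loss below are generic assumptions, not the paper's exact algorithm.

```python
import numpy as np

def oprf_estimate(f_t, x_t, prev_query_value, delta, rng):
    """Residual feedback: g_t = (d/delta) * (f_t(x_t + delta*u_t) - previous query) * u_t.
    Returns the estimate and the new query value, to be reused next round."""
    d = x_t.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    value = f_t(x_t + delta * u)
    return (d / delta) * (value - prev_query_value) * u, value

# Toy online loop with a slowly drifting quadratic loss.
rng = np.random.default_rng(5)
x, prev_val, delta, step = np.zeros(2), 0.0, 0.05, 0.05
for t in range(1000):
    f_t = lambda z, c=0.01 * t: 0.5 * np.sum((z - c) ** 2)
    g, prev_val = oprf_estimate(f_t, x, prev_val, delta, rng)
    x = x - step * g
```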
18. Distributed Stochastic Optimization with Compression for Non-Strongly Convex Objectives
Authors: Xuanjie Li, Yuedong Xu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 4, pp. 459-481.
We investigate the distributed optimization problem, where a network of nodes works together to minimize a global objective that is a finite sum of their stored local functions. Since nodes exchange optimization parameters through the wireless network, large-scale training models can create communication bottlenecks, resulting in slower training times. To address this issue, CHOCO-SGD was proposed, which allows compressing information with arbitrary precision without reducing the convergence rate for strongly convex objective functions. Nevertheless, most convex functions are not strongly convex (such as logistic regression or Lasso), which raises the question of whether this algorithm can be applied to non-strongly convex functions. In this paper, we provide the first theoretical analysis of the convergence rate of CHOCO-SGD on non-strongly convex objectives. We derive a sufficient condition, which limits the fidelity of compression, to guarantee convergence. Moreover, our analysis demonstrates that within the fidelity threshold, this algorithm can significantly reduce the transmission burden while maintaining the same convergence rate order as its no-compression equivalent. Numerical experiments further validate the theoretical findings by demonstrating that CHOCO-SGD improves communication efficiency while keeping the same convergence rate order. Experiments also show that the algorithm fails to converge with low compression fidelity and in time-varying topologies. Overall, our study offers valuable insights into the potential applicability of CHOCO-SGD for non-strongly convex objectives. Additionally, we provide practical guidelines for researchers seeking to utilize this algorithm in real-world scenarios.
Keywords: distributed stochastic optimization, arbitrary compression fidelity, non-strongly convex objective function
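CHOCO-SGD relies on a contractive compression operator applied to what each node communicates to its neighbors. The operator itself is easy to sketch; top-k sparsification below is one standard choice satisfying the contraction property and is used here only to illustrate the notion of compression fidelity, not as the paper's specific compressor.

```python
import numpy as np

def top_k(x, k):
    """Keep the k largest-magnitude entries and zero out the rest.
    Satisfies ||top_k(x) - x||^2 <= (1 - k/d) * ||x||^2, i.e. a contraction
    with fidelity k/d."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

x = np.array([0.1, -3.0, 0.5, 2.2, -0.05])
print(top_k(x, 2))   # only the two largest-magnitude coordinates survive
```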
19. Variable stiffness design optimization of fiber-reinforced composite laminates with regular and irregular holes considering fiber continuity for additive manufacturing (cited 1 time)
Authors: Yi LIU, Zunyi DUAN, Chunping ZHOU, Yuan SI, Chenxi GUAN, Yi XIONG, Bin XU, Jun YAN, Jihong ZHU. Chinese Journal of Aeronautics, 2025, No. 3, pp. 334-354.
Fiber-reinforced composites are an ideal material for the lightweight design of aerospace structures. Especially in recent years, with the rapid development of composite additive manufacturing technology, the design optimization of variable-stiffness fiber-reinforced composite laminates has attracted widespread attention from scholars and industry. In these aerospace composite structures, numerous cutout panels and shells serve as access points for maintaining electrical, fuel, and hydraulic systems. Traditional subtractive drilling of fiber-reinforced composite laminates inevitably faces the problems of interlayer delamination, fiber fracture, and burring of the laminate. Continuous fiber additive manufacturing technology offers the potential for integrated design optimization and manufacturing with high structural performance. Considering the integration of design and manufacturability in continuous fiber additive manufacturing, the paper proposes linear and nonlinear filtering strategies based on the Normal Distribution Fiber Optimization (NDFO) material interpolation scheme to overcome the challenge that discrete fiber optimization results are difficult to apply directly to continuous fiber additive manufacturing. With minimizing structural compliance as the objective function, the proposed approach provides a strategy to achieve continuity of discrete fiber paths in the variable stiffness design optimization of composite laminates with regular and irregular holes. In the variable stiffness design optimization model, the number of candidate fiber laying angles in the NDFO material interpolation scheme is considered as the design variable. The sensitivity information of structural compliance with respect to the number of candidate fiber laying angles is obtained using the analytical sensitivity analysis method. Based on the proposed variable stiffness design optimization method for complex perforated composite laminates, the numerical examples consider the variable stiffness design optimization of typical non-perforated and perforated composite laminates with circular, square, and irregular holes, and systematically discuss the influence of the number of candidate discrete fiber laying angles, the discrete fiber continuous filtering strategies, and the filter radius on structural compliance, continuity, and manufacturability. The optimized discrete fiber angles of the variable stiffness laminates are converted into continuous fiber laying paths using a streamlined process for continuous fiber additive manufacturing. Meanwhile, the optimized non-perforated and perforated MBB beams, after discrete fiber continuity treatment, are manufactured using continuous fiber co-extrusion additive manufacturing technology to verify the effectiveness of the variable stiffness fiber optimization framework proposed in this paper.
Keywords: variable stiffness composite laminates, discrete material interpolation scheme, normal distribution fiber optimization, discrete fiber continuous filtering strategy, additive manufacturing of composite laminates
20. Distributed Subgradient Algorithm for Multi-Agent Optimization With Dynamic Stepsize (cited 4 times)
Authors: Xiaoxing Ren, Dewei Li, Yugeng Xi, Haibin Shao. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, No. 8, pp. 1451-1464.
In this paper, we consider distributed convex optimization problems on multi-agent networks. We develop and analyze the distributed gradient method which allows each agent to compute its dynamic stepsize by utilizing the time-varying estimate of the local function value at the global optimal solution. Our approach can be applied to both synchronous and asynchronous communication protocols. Specifically, we propose the distributed subgradient with uncoordinated dynamic stepsizes (DS-UD) algorithm for the synchronous protocol and the AsynDGD algorithm for the asynchronous protocol. Theoretical analysis shows that the proposed algorithms guarantee that all agents reach a consensus on the solution to the multi-agent optimization problem. Moreover, the proposed approach with dynamic stepsizes eliminates the requirement of diminishing stepsizes in existing works. Numerical examples of distributed estimation in sensor networks are provided to illustrate the effectiveness of the proposed approach.
Keywords: distributed optimization, dynamic stepsize, gradient method, multi-agent networks
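The dynamic stepsize described above builds on the Polyak-stepsize idea: scale the subgradient step by the gap between the current function value and an estimate of the optimal value. The sketch below shows that basic rule; in the paper the optimal-value estimate is itself updated in a distributed, time-varying way, which is not reproduced here.

```python
import numpy as np

def polyak_stepsize(f_val, f_star_est, subgrad, eps=1e-12):
    """Dynamic stepsize (f(x_k) - f*_est) / ||g_k||^2, clipped at zero when the
    estimate already exceeds the current value."""
    return max(f_val - f_star_est, 0.0) / (np.linalg.norm(subgrad) ** 2 + eps)

# Toy usage on f(x) = |x - 3| with the exact optimal value 0 used as the estimate.
x = 10.0
for _ in range(50):
    g = np.sign(x - 3.0)
    x -= polyak_stepsize(abs(x - 3.0), 0.0, np.atleast_1d(g)) * g
```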