Title:
Distributed fixed/predefined-time optimization for multi-agent systems: new exponential-function-based algorithms.
Authors:
Li, Luke (Luke0258@aeu.edu.cn); Gan, Qintao (ganqintao@aeu.edu.cn); Li, Ruihong (ruihongli@aeu.edu.cn); Kang, Qiaokun (kangqiaokun@aeu.edu.cn); Liu, Yuanlong (liuyuanlong@aeu.edu.cn)
Source:
ISA Transactions, Feb 2026, Vol. 169, pp. 75–87.
Database:
Supplemental Index

*Further Information*

Distributed convex optimization with time-varying or time-invariant cost functions remains one of the central challenges in multi-agent systems (MASs). Achieving efficient distributed optimization within a fixed/predefined time is still hampered by restrictive assumptions and high computational complexity. This paper proposes new distributed optimization frameworks to address these limitations. For time-invariant optimization problems, an estimator-based two-stage distributed protocol is introduced that achieves both inter-agent consensus and convergence to the global optimum within a fixed/predefined time. Notably, this protocol requires only strong convexity of the global cost function, thereby relaxing the constraints on the local cost functions. For time-varying scenarios, an enhanced zero-gradient-sum (ZGS) framework is developed by integrating a zeroing neural network (ZNN) with sliding mode control. This framework not only eliminates dependence on initial conditions but also computes the Hessian inverse implicitly through the ZNN dynamics, avoiding the O(n³) cost of explicit matrix inversion. Numerical simulations validate the superior convergence speed and broad applicability of the method, attesting to its potential for distributed optimization.

Highlights:
• Exponential-function-based fixed/predefined-time controller with a simplified design.
• Distributed estimator that reconstructs the average global gradient.
• Relaxed strong-convexity requirement in time-invariant distributed optimization.
• ZNN model with a new activation function to approximate the Hessian inverse in real time.
• Exact fixed/predefined-time optimization for arbitrary initial states.

[ABSTRACT FROM AUTHOR]
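To illustrate the implicit Hessian-inverse idea mentioned in the abstract, here is a minimal sketch of a ZNN-style integrator in the Getz-Marsden implicit-inversion form: it tracks the time-varying inverse X(t) ≈ H(t)⁻¹ by driving the residual E = HX − I to zero, without ever inverting H along the trajectory. The test matrix H(t), the gain `gamma`, the step size, and the linear activation are all assumptions for this sketch; the paper's ZNN model uses its own activation function and is not reproduced here.

```python
import math

I2 = [[1.0, 0.0], [0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mscale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def inv2(A):
    # Closed-form 2x2 inverse, used only once to initialise X(0).
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det,  A[0][0] / det]]

def H(t):
    # A symmetric, positive-definite "Hessian" that varies in time (assumed).
    return [[3.0 + math.sin(t), 1.0], [1.0, 3.0 + math.cos(t)]]

def Hdot(t):
    # Its analytic time derivative.
    return [[math.cos(t), 0.0], [0.0, -math.sin(t)]]

gamma, dt, T = 10.0, 1e-3, 5.0
X = inv2(H(0.0))  # start at the true inverse; the dynamics keep tracking it
t = 0.0
for _ in range(int(T / dt)):
    Ht, Hd = H(t), Hdot(t)
    E = madd(matmul(Ht, X), mscale(-1.0, I2))   # residual E = H X - I
    # Xdot = -X Hdot X - gamma * X * E : forces E -> 0 (linear activation),
    # using X itself in place of H^{-1} so no inversion is ever performed.
    Xdot = madd(mscale(-1.0, matmul(X, matmul(Hd, X))),
                mscale(-gamma, matmul(X, E)))
    X = madd(X, mscale(dt, Xdot))
    t += dt

res = madd(matmul(H(t), X), mscale(-1.0, I2))
err = max(abs(res[i][j]) for i in range(2) for j in range(2))
print(err)  # residual stays small: X has tracked H(t)^{-1}
```

The per-step cost is a handful of matrix multiplications, which is what lets schemes of this family sidestep the O(n³) explicit inversion the abstract refers to.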