
Sparse H∞ Controller for Networked Control Systems: Non-Structured and Optimal Structured Design
Zhaohua Yang, Pengyu Wang, Haishan Zhang, Shiyue Jia, Nachuan Yang,
Yuxing Zhong and Ling Shi, Fellow, IEEE
arXiv:2411.00370v2 [math.OC] 6 May 2025

Z. Yang, P. Wang, H. Zhang, S. Jia, Y. Zhong, and L. Shi are with the Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong SAR (email: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]). L. Shi is also with the Department of Chemical and Biological Engineering, Hong Kong University of Science and Technology. N. Yang is with the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta T6G 1H9, Canada (email: [email protected]). The work is supported by the Hong Kong RGC General Research Fund 16203723 (Corresponding author: Yuxing Zhong).

Abstract—This paper provides a comprehensive analysis of the design of optimal structured and sparse H∞ controllers for continuous-time linear time-invariant (LTI) systems. Three problems are considered. First, designing the sparsest H∞ controller, which minimizes the sparsity of the controller while satisfying the given performance requirements. Second, designing a sparsity-promoting H∞ controller, which balances system performance and controller sparsity. Third, designing an H∞ controller subject to a structural constraint, which enhances system performance with a specified sparsity pattern. For each problem, we adopt a linearization technique that transforms the original nonconvex problem into a convex semidefinite programming (SDP) problem. Subsequently, we design an iterative linear matrix inequality (ILMI) algorithm for each problem, which ensures guaranteed convergence. We further characterize the first-order optimality using the Karush-Kuhn-Tucker (KKT) conditions and prove that any limit point of the solution sequence generated by the ILMI algorithm is a stationary point. For the first and second problems, we validate through several numerical simulations that our algorithms can reduce the number of non-zero elements and thus the communication burden. For the third problem, we refine the solutions obtained in the existing literature, demonstrating that our approaches achieve significant improvements.

Index Terms—Linear matrix inequality, sparsity, networked control systems, H∞ performance

I. INTRODUCTION

Networked control systems (NCS) have found ubiquitous industrial applications and have received much attention from the research community. In such systems, information is exchanged via a communication network among system components such as controllers, sensors, and actuators [1]. Such systems have found applications in power networks [2], transportation networks [3], and sensor and actuator networks [4]. Within NCS, we can design a centralized feedback control strategy that requires each controller to access the full state. Although this approach achieves optimal control performance, it typically imposes a prohibitively high communication burden in large-scale systems. Moreover, in practical applications, the communication and computation capabilities are often limited [5], which significantly restricts the number of communication channels and, consequently, the scalability and potential of NCS with such centralized designs. This motivates us to design control inputs using partial state information, which corresponds to a sparse controller gain matrix.

Extensive work has been conducted on optimal sparse controller design. In the seminal work [6], the authors proposed an alternating direction method of multipliers (ADMM) algorithm for designing sparse controllers. Instead of ADMM, in [7], the authors proposed an iterative convex programming algorithm. More recently, various algorithms have been proposed for structured and sparse controller design [8-12]. The sparsity mentioned above refers to elementwise sparsity of the gain matrix. In contrast, in the landmark work [13], the authors proposed the concepts of row sparsity and column sparsity, which motivate the sensor and actuator selection problems. In such problems, a change of variables can be applied to turn the feasible region into a convex set. In [14-16], the authors proposed various efficient algorithms for large-scale problems, including ADMM, proximal gradient, and quasi-Newton methods. Sparse control has also been investigated in topology design [17], consensus networks [18], covariance completion [19], target tracking [20], hands-off control [21, 22], observer design [23], and remote state estimation [24]. However, to the best of our knowledge, most previous works focus on H2 performance, with very few considering H∞ performance, which is another important performance criterion. Designing structured and sparse H∞ controllers remains a challenging problem due to its nonconvexity. Additionally, we cannot design a first-order algorithm for analysis since we lack an analytical expression for a given controller. In [25], the authors designed a structured H∞ controller by considering a convex subset of the nonconvex feasibility domain, which omits some potential solutions and can lead to suboptimality. Therefore, more advanced algorithms capable of thoroughly exploring the nonconvex feasibility region remain to be developed, which motivates our research in this paper.

In this paper, we provide a systematic solution to the optimal structured and sparse H∞ controller design. The main contributions are multifold.

1) We consider three typical scenarios:
   a) sparse controller design with bounded H∞ performance: minimizing the controller's sparsity while satisfying a specified performance threshold.
   b) sparsity-promoting H∞ controller design: exploring the trade-offs between system performance and controller sparsity.
   c) structured H∞ controller design: enhancing the system performance subject to specified structural constraints on the controller.

For a) and b), we design, for the first time, sparse H∞ controllers without structural constraints. For c), we refine the solutions obtained in [25]. Using the same numerical examples, our simulations demonstrate that our approaches achieve significant improvements over the solutions presented in [25].

2) We develop a novel linearization technique that transforms the problems mentioned above into convex SDP problems, allowing for the explicit incorporation of structural constraints and l1 regularization terms. We further show that any feasible solution to the transformed problem is also feasible for the original problem, motivating an ILMI algorithm to solve each problem.

3) By incorporating proximal terms into the objective function, we demonstrate that the algorithm exhibits sufficient decrease and is guaranteed to converge. Moreover, we characterize the first-order optimality using KKT conditions, showing that any limit point of the solution sequence satisfies the KKT conditions of the original problem and is thus a stationary point.

This paper is organized as follows. In Section II, we provide background on H∞ performance, sparsity control, and existing structured H∞ controller design methods, and formulate the problem. In Section III, we separately consider the three different problems, develop the linearization technique, present the optimality conditions, and propose the ILMI algorithm. In Section IV, we provide numerical simulations demonstrating the effectiveness of our methods for scenarios (i) and (ii), as well as the superiority of our approaches in scenario (iii). Finally, in Section V, we conclude the paper.

Notations:
  R^{m×n} : real matrix of dimension m × n
  S^n : symmetric matrix of dimension n × n
  X⊤ (X^H) : transpose (conjugate transpose) of X
  X_{ij} ([X]_{ij}) : element at the i-th row and j-th column
  X_k ([X]_k) : matrix sequence at the k-th iteration
  X > 0 (X ≥ 0) : element-wise positive (non-negative)
  X ≻ 0 (X ⪰ 0) : positive definite (semidefinite)
  Tr(X) : trace of X
  I, 0, 1 : identity, zero, and all-ones matrices
  σ_max : maximum singular value of a matrix
  Sym(A) : A + A⊤
  abs(·) : element-wise absolute value operation
  ⟨X, Y⟩ : Frobenius inner product, Tr(X⊤Y)
  ◦ : Hadamard product
  ‖·‖_F : Frobenius norm
  [X1|X2|...|Xn] : row-stacked matrix
  Re(·) : real part
  ≜ : equality by definition
  ⊗ : Kronecker product
  [A ∗; B⊤ C] : shorthand for the symmetric matrix [A B; B⊤ C]

II. PRELIMINARIES

A. H∞ Performance

In this paper, we consider a continuous-time LTI system given by

  ẋ(t) = Ax(t) + Bu(t) + Gd(t),
  z(t) = Cx(t) + Du(t) + Hd(t),        (1)

where x ∈ R^{n_x} is the system state, u ∈ R^{n_u} the control input, d ∈ R^{n_d} the exogenous disturbance, and z ∈ R^{n_z} the controlled output. It is standard to assume that (A, B) is stabilizable. We adopt a static state feedback control law to regulate the system (1), i.e.,

  u(t) = Kx(t),        (2)

where K ∈ R^{n_u×n_x} is the gain matrix to be designed. We can rewrite system (1) as

  ẋ(t) = (A + BK)x(t) + Gd(t),
  z(t) = (C + DK)x(t) + Hd(t).        (3)

Therefore, the influence of the exogenous disturbance d on the controlled output z is described by

  z(s) = T_zd(s) d(s),        (4)

where T_zd(s) is the transfer function given by

  T_zd(s) = (C + DK)(sI − (A + BK))^{-1} G + H.        (5)

The H∞ norm of the system (3) is denoted by ‖T_zd(s)‖_∞, where ‖T_zd(s)‖_∞ := max_{ω∈R} σ_max{T_zd(jω)}. It is well known that the following result holds [26]:

  ∫_0^∞ z^H(t) z(t) dt ≤ ‖T_zd(s)‖_∞^2 ∫_0^∞ d^H(t) d(t) dt,        (6)

which shows that ‖T_zd(s)‖_∞^2 represents the largest energy amplification from the exogenous disturbance to the system output.
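As a quick numerical illustration of the definition above, the sketch below evaluates σ_max{T_zd(jω)} from (5) on a finite frequency grid. This is not part of the paper's method; a grid search only lower-bounds the true supremum, and the matrices here are toy placeholders of our own choosing.

import numpy as np

def hinf_norm_grid(A, B, K, G, C, D, H, omegas):
    """Approximate ||T_zd||_inf = sup_w sigma_max(T_zd(jw)) on a frequency grid.
    T_zd(s) = (C + DK)(sI - (A + BK))^{-1} G + H, as in (5).
    A finite grid only gives a lower bound of the true H-infinity norm."""
    Acl = A + B @ K          # closed-loop state matrix
    Ccl = C + D @ K          # closed-loop output matrix
    n = Acl.shape[0]
    worst = 0.0
    for w in omegas:
        T = Ccl @ np.linalg.solve(1j * w * np.eye(n) - Acl, G) + H
        worst = max(worst, np.linalg.svd(T, compute_uv=False)[0])
    return worst

# Toy data (ours, not from the paper): a stable 2-state closed loop.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
G = np.array([[0.0], [1.0]])
C = np.vstack([np.eye(2), np.zeros((1, 2))])
D = np.vstack([np.zeros((2, 1)), np.ones((1, 1))])
H = np.zeros((3, 1))
K = np.array([[-1.0, -1.0]])

omegas = np.logspace(-2, 2, 400)
print("grid estimate of ||T_zd||_inf:", hinf_norm_grid(A, B, K, G, C, D, H, omegas))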
The following fundamental result provides necessary and sufficient conditions for H∞ controller synthesis with a given performance threshold.

Lemma 1 ([27]): The system (3) is asymptotically stable (i.e., Re(λ) < 0 for all eigenvalues λ of A + BK) and ‖T_zd(s)‖_∞ ≤ γ if and only if there exists a symmetric positive definite matrix P ≻ 0 such that

  [ Sym(P(A + BK))   PG     (C + DK)⊤ ]
  [ G⊤P              −γI    H⊤        ]  ⪯ 0.        (7)
  [ C + DK           H      −γI       ]

The optimal H∞ norm is the minimal γ such that (7) is feasible. Therefore, we can obtain the optimal H∞ norm via the following SDP problem:

  min_{X,Y,γ}  γ        (8a)
  s.t.  X ≻ 0,        (8b)
  [ Sym(AX + BY)   G      (CX + DY)⊤ ]
  [ G⊤             −γI    H⊤         ]  ⪯ 0.        (8c)
  [ CX + DY        H      −γI        ]

(Problem (8) follows from (7) by the congruence transformation diag(P^{-1}, I, I) together with the change of variables X = P^{-1} and Y = KX.) The optimal H∞ controller can be recovered as K = Y*X*^{-1} with the corresponding global minimum ‖T_zd(s)‖_∞ = γ*, where (Y*, X*, γ*) is the solution to Problem (8).
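The following is a minimal CVXPY sketch of Problem (8), not the authors' code: the data matrices are a toy system of our own, the small margin eps emulates the strict inequality X ≻ 0, and any SDP-capable solver (here SCS) can be used.

import cvxpy as cp
import numpy as np

# Toy data (ours, not from the paper); any stabilizable (A, B) works.
A = np.array([[0.0, 1.0], [1.0, -1.0]])
B = np.array([[0.0], [1.0]])
G = np.eye(2)
C = np.vstack([np.eye(2), np.zeros((1, 2))])
D = np.vstack([np.zeros((2, 1)), np.ones((1, 1))])
H = np.zeros((3, 2))
nx, nu = B.shape
nz, nd = H.shape

X = cp.Variable((nx, nx), symmetric=True)
Y = cp.Variable((nu, nx))
gamma = cp.Variable()
eps = 1e-6  # small margin standing in for strict definiteness

# LMI (8c); all blocks written out explicitly so the matrix is symmetric.
AXBY = A @ X + B @ Y
lmi = cp.bmat([
    [AXBY + AXBY.T,   G,                    (C @ X + D @ Y).T],
    [G.T,             -gamma * np.eye(nd),  H.T],
    [C @ X + D @ Y,   H,                    -gamma * np.eye(nz)],
])

prob = cp.Problem(cp.Minimize(gamma), [X >> eps * np.eye(nx), lmi << 0])
prob.solve(solver=cp.SCS)

K_opt = Y.value @ np.linalg.inv(X.value)   # K = Y* X*^{-1}
print("gamma* ~", gamma.value)
print("K* =", K_opt)

The recovered pair (K_opt, inv(X.value)) is what Algorithms 1 and 2 below use as their centralized initialization (K0, P0).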

B. Sparsity in NCS

The feedback control strategy typically constructs a communication network between the control input and the system state. This network is generally dense, requiring each local controller to access the states of all subsystems. However, this could impose a prohibitively high communication burden on large-scale systems, which often have limited computation and communication capabilities. Therefore, it is preferable to design a control input that utilizes local state information in large-scale systems. This limited information exchange is reflected in the sparsity of the gain matrix.

1) Sparse Controller and l1 Relaxation: Considering the state feedback control law, each local control input can be computed as

  [u(t)]_i = Σ_{1≤j≤n_x} [K]_{ij} [x(t)]_j,   ∀ 1 ≤ i ≤ n_u.        (9)

If [K]_{ij} ≠ 0, [u(t)]_i needs access to [x(t)]_j, and otherwise it does not. Therefore, it is natural to define the communication burden as the number of nonzero elements of the matrix K. This is exactly the l0 norm, denoted ‖K‖_0. However, the l0 norm is non-convex and discontinuous, and thus cannot be tackled directly. A common and effective technique to address this difficulty is to instead consider the l1 norm defined as

  ‖K‖_1 = Σ_{1≤i≤n_u} Σ_{1≤j≤n_x} |K_{ij}|.        (10)

This l1 relaxation turns the non-convex l0 norm into a convex one and paves the way for our later analysis.

2) Structured Controller: A structured controller refers to a controller subject to a structural constraint K ∈ S, where S is a specified linear subspace. The corresponding structured identity matrix I_S is defined by

  [I_S]_{ij} ≜ 1 if K_{ij} is a free variable, and 0 if K_{ij} = 0 is required.        (11)

We then define the complementary structured identity matrix as I_S^c ≜ 1 − I_S. Therefore, the structural constraint K ∈ S can be alternatively expressed as K ◦ I_S^c = 0.
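The following small NumPy sketch illustrates the quantities just introduced; the example gain and pattern are ours and serve only to show how ‖K‖_0, ‖K‖_1, and the structural test K ◦ I_S^c = 0 are evaluated in practice.

import numpy as np

# Example gain and structure (ours, for illustration only).
K = np.array([[0.8, 0.0, -0.3],
              [0.0, 1.2,  0.0]])

l0 = np.count_nonzero(K)          # ||K||_0: number of communication channels used
l1 = np.abs(K).sum()              # ||K||_1: convex surrogate in (10)

# Structured identity I_S from (11): 1 marks free entries, 0 marks forced zeros.
I_S = np.array([[1, 0, 1],
                [0, 1, 0]])
I_Sc = 1 - I_S                    # complementary structured identity

# K ∈ S  <=>  K ∘ I_S^c = 0 (Hadamard product).
is_structured = np.all(K * I_Sc == 0)
print(l0, l1, is_structured)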
C. A Convex Approach for Structured Control

Designing a structured H∞ controller is typically NP-hard [28], meaning we cannot equivalently transform this problem into an SDP feasibility problem. To tackle this difficulty, Ferrante et al. [25] proposed exploring solutions within a convex subset of the nonconvex feasibility region. The relevant results are provided below.

Theorem 1 ([29]): Given P ≻ 0 and K ∈ R^{n_u×n_x}, the following items are equivalent.

(i)
  [ Sym((A + BK)P)   G      P(C + DK)⊤ ]
  [ G⊤               −γI    H⊤         ]  ≺ 0;        (12)
  [ (C + DK)P        H      −γI        ]

(ii) There exist X ∈ R^{n_x×n_x} and α > 0 such that

  Sym( [ (A + BK)X   α(A + BK)X + P   0         G       ]
       [ −X          −αX              0         0       ]
       [ (C + DK)X   α(C + DK)X       −(γ/2)I   H       ]
       [ 0           0                0         −(γ/2)I ] )  ≺ 0.        (13)

Theorem 2 (Theorem 2 of [25]): Let {S1, S2, ..., Sk} be a basis of S and

  L ≜ [S1|S2|...|Sk].

Define the following structured set

  Υ ≜ {Q ∈ R^{n_x×n_x} : ∃ Λ ∈ S^k s.t. L(I_k ⊗ Q) = L(Λ ⊗ I_{n_x})}.

Given γ > 0, if there exist P ≻ 0, R ∈ span{S1, S2, ..., Sk}, α > 0, and X ∈ R^{n_x×n_x} such that

  X ∈ Υ,        (14a)

  Sym( [ AX + BR   α(AX + BR) + P   0         G       ]
       [ −X        −αX              0         0       ]
       [ CX + DR   α(CX + DR)       −(γ/2)I   H       ]
       [ 0         0                0         −(γ/2)I ] )  ≺ 0,        (14b)

then X is nonsingular, K = RX^{-1} ∈ S, and ‖T_zd(s)‖_∞ ≤ γ.

The corresponding problem for a minimal value of γ can be expressed as follows [25]:

  min_{P,X,R,α,γ}  γ        (15a)
  s.t.  (14a), (14b), P ≻ 0, α > 0,        (15b)
        R ∈ span{S1, S2, ..., Sk}.        (15c)

A structured H∞ controller can be computed as K̄ = R̄X̄^{-1} ∈ S with the corresponding H∞ norm ‖T_zd(s)‖_∞ ≤ γ̄, where (P̄, X̄, R̄, ᾱ, γ̄) is the solution to Problem (15).

D. Problem Formulation

In this paper, we conduct a comprehensive analysis of the optimal structured and sparse H∞ controller design, and three typical problems are considered separately. To facilitate the discussion, we define the stability region F ≜ {K ∈ R^{n_u×n_x} | A + BK is Hurwitz, i.e., Re(λ) < 0 for all eigenvalues λ of A + BK}.

Problem 1: (Sparse controller design with bounded H∞ performance)

  min_{K∈F} ‖K‖_1   s.t.  ‖T_zd(s)‖_∞ ≤ γ,        (16)

where γ represents the performance threshold to be satisfied.

Problem 2: (Sparsity-promoting H∞ controller design)

  min_{K∈F}  ‖T_zd(s)‖_∞ + λ‖K‖_1,        (17)

where λ ≥ 0 is a tuning parameter that represents the level of our emphasis on network sparsity.

Remark 1: In practice, the communication costs of different channels may differ. Our approach can be extended to this case by incorporating a weighted l1 norm that assigns a different weight to each channel based on its cost: ‖K‖_1 in (16) and (17) can be generalized to ‖W ◦ K‖_1, where ◦ is the element-wise product and W_{ij} represents the communication cost between controller [u(t)]_i and state [x(t)]_j. For simplicity, we assume W = 1 in this paper, i.e., equal costs across all channels.

Problem 3: (Structured H∞ controller design)

  min_{K∈F}  ‖T_zd(s)‖_∞   s.t.  K ◦ I_S^c = 0.        (18)

These three problems are representative and encompass the majority of sparse controller design issues encountered in practical industrial applications. In the rest of this paper, we analyze and solve them separately.

III. MAIN RESULTS

A. Sparse Controller Design with Bounded H∞ Performance

In this subsection, we design a satisfactory sparse H∞ controller, ensuring that its H∞ performance remains within a specified threshold. In other words, our goal is to design the sparsest controller while simultaneously satisfying the H∞ performance constraint. By leveraging Lemma 1, Problem 1 is equivalent to the following problem:

  min_{K,P}  ‖K‖_1        (19a)
  s.t.  P ≻ 0,        (19b)
  [ Sym(P(A + BK))   PG     (C + DK)⊤ ]
  [ G⊤P              −γI    H⊤        ]  ⪯ 0.        (19c)
  [ C + DK           H      −γI       ]

In the absence of the sparsity regularization term in the objective function, Problem (19) reduces to a feasibility problem and can be solved by a standard change of variables [30]. However, this technique is no longer viable when the sparsity regularization term is the objective we want to minimize, since it would lead to minimizing ‖Y X^{-1}‖_1, which is intractable. Instead of directly solving Problem (19), we propose a linearized version of Problem (19) and clarify their relationship. We also describe how this relationship addresses the difficulty and motivates a viable algorithm. At the end of this subsection, we design an ILMI algorithm to solve Problem (19) numerically, show that this algorithm is guaranteed to converge, and prove that every limit point is a stationary point of Problem (19).

To facilitate further discussion, we first define some critical functions as prerequisites. Define

  f(K, P) := (1/2)(A + BK − P)⊤(A + BK − P)        (20)

and its linearization around (K̃, P̃) in (21):

  Lf(K, P; K̃, P̃) := f(K̃, P̃)
      + (1/2)[B(K − K̃) − (P − P̃)]⊤(A + BK̃ − P̃)
      + (1/2)(A + BK̃ − P̃)⊤[B(K − K̃) − (P − P̃)].        (21)

We further propose the following problem, which will be incorporated into our final algorithm:

  min_{K,P}  ‖K‖_1        (22a)
  s.t.  P ≻ 0,        (22b)
  [ −Lf(K, P; K̃, P̃)       ∗     ∗     ∗   ]
  [ (1/√2)(A + BK + P)     −I    ∗     ∗   ]  ⪯ 0.        (22c)
  [ G⊤P                    0     −γI   ∗   ]
  [ C + DK                 0     H     −γI ]

It is worth noting that Problem (22) is a convex problem and can be solved using commercial solvers. In what follows, we present a result (Theorem 3) that characterizes the relationship between Problem (22) and Problem (19), and then describe how this result inspires the design of the final algorithm based on Problem (22).

Theorem 3: Every feasible solution (K̄, P̄) to Problem (22) is a feasible solution to Problem (19). For every feasible solution (K̂, P̂) to Problem (19), take K̃ = K̂, P̃ = P̂; then at least (K̂, P̂) is a feasible solution to Problem (22).

Proof: Since all the bilinearity of Problem (19) comes from the term (A + BK)⊤P + P(A + BK), it is natural to linearize this term. We apply a well-known equality and reformulate this term as follows:

  (A + BK)⊤P + P(A + BK)
    = (1/2)(A + BK + P)⊤(A + BK + P) − (1/2)(A + BK − P)⊤(A + BK − P).

Consider the last term:

  (1/2)(A + BK − P)⊤(A + BK − P)
    = (1/2)[A + BK̃ − P̃ + B(K − K̃) − (P − P̃)]⊤[A + BK̃ − P̃ + B(K − K̃) − (P − P̃)]
    = (1/2)(A + BK̃ − P̃)⊤(A + BK̃ − P̃)
      + (1/2) Sym( [B(K − K̃) − (P − P̃)]⊤(A + BK̃ − P̃) )
      + (1/2)[B(K − K̃) − (P − P̃)]⊤[B(K − K̃) − (P − P̃)]
    ⪰ (1/2)(A + BK̃ − P̃)⊤(A + BK̃ − P̃)
      + (1/2) Sym( [B(K − K̃) − (P − P̃)]⊤(A + BK̃ − P̃) )
    = Lf(K, P; K̃, P̃).

The inequality follows from the positive semidefiniteness of the second-order incremental matrix. Thus,

  (A + BK)⊤P + P(A + BK) ⪯ (1/2)(A + BK + P)⊤(A + BK + P) − Lf(K, P; K̃, P̃).

Furthermore,

  [ (A + BK)⊤P + P(A + BK)   PG     (C + DK)⊤ ]
  [ G⊤P                      −γI    H⊤        ]
  [ C + DK                   H      −γI       ]
  ⪯
  [ (1/2)(A + BK + P)⊤(A + BK + P) − Lf(K, P; K̃, P̃)   PG     (C + DK)⊤ ]
  [ G⊤P                                                −γI    H⊤        ]        (23)
  [ C + DK                                             H      −γI       ]

The negative semidefiniteness of the right-hand side is equivalent to (22c) by applying the Schur complement. Consequently, the feasible region of Problem (22) is a subset of that of Problem (19), which proves that every feasible solution (K̄, P̄) to Problem (22) is a feasible solution to Problem (19). Note that the inequality in (23) becomes an equality when K = K̃ and P = P̃, because in this case the linearization introduces no error. Then, for every feasible solution (K̂, P̂) to Problem (19), take K̃ = K̂, P̃ = P̂; at least (K̂, P̂) is a feasible solution to Problem (22), whose feasible region is a convex set containing (K̂, P̂). ∎

Theorem 3 indicates that instead of directly solving Problem (19), we can alternatively solve a convex counterpart (Problem (22)), and the solution will always remain within the feasible domain of Problem (19). Furthermore, if we solve Problem (22) iteratively with the linearization taken around the solution from the previous step, the new solution is at least not worse than the previous one. This is because Problem (22) is convex and its global minimum is not worse than the linearization point. The analysis above motivates us to design an iterative algorithm that gradually approaches a stationary point, which is the highest pursuit in non-convex optimization problems.

It is worth noting that although iteratively solving Problem (22) ensures a non-increasing sequence of objective values, this alone does not guarantee convergence of the algorithm, because the iterates may oscillate between two points with the same objective value. To guarantee convergence, the objective function must exhibit a sufficient decrease property. Therefore, we slightly modify Problem (22) by incorporating proximal terms in the objective function, as shown in Problem (24). These proximal terms avoid the oscillation issue and force the algorithm to converge.

  min_{K,P}  ‖K‖_1 + ‖K − K̃‖_F^2 + ‖P − P̃‖_F^2        (24a)
  s.t.  (22b), (22c),        (24b)

where K̃, P̃ represent the optimization variables from the previous iteration, which will be clarified in the final algorithm. The full algorithm is shown in Algorithm 1.

Algorithm 1 Sparse H∞ controller design given γ
  Output: K*_A1;
  Initialize k = 0;
  Initialize K0 = Y*X*^{-1}, P0 = X*^{-1} as the optimal centralized solution to Problem (8);
  repeat
    - Solve Problem (24) with K̃ = Kk, P̃ = Pk;
    - Assign the solution to Kk+1, Pk+1;
    - k = k + 1;
  until ‖Kk − Kk−1‖_F < ε, ‖Pk − Pk−1‖_F < ε;
  return Kk;
We now evaluate the optimality of Algorithm 1. To facilitate the discussion, we first provide the Lagrangian of Problem (19), and then give the Karush-Kuhn-Tucker (KKT) conditions of Problem (19) and the definition of a stationary point.

The Lagrangian of Problem (19) is

  L(K, P, Λ1, Λ2) = ‖K‖_1 + ⟨Λ1, N(K, P)⟩ − ⟨Λ2, P⟩,

where Λ1, Λ2 are the Lagrange multipliers, and we denote, for simplicity,

  N(K, P) := [ Sym(P(A + BK))   PG     (C + DK)⊤ ]
             [ G⊤P              −γI    H⊤        ]        (25)
             [ C + DK           H      −γI       ]

Therefore, we can derive the KKT conditions as follows:

  Λ*_1 ⪰ 0,  Λ*_2 ⪰ 0,        (26a)
  N(K*, P*) ⪯ 0,  P* ⪰ 0,        (26b)
  ⟨Λ*_1, N(K*, P*)⟩ = 0,  ⟨Λ*_2, P*⟩ = 0,        (26c)
  0 ∈ ∂‖K*‖_1 + ∇_K ⟨Λ*_1, N(K*, P*)⟩,        (26d)
  ∇_P ⟨Λ*_1, N(K*, P*)⟩ − Λ*_2 = 0,        (26e)

where Λ*_1, Λ*_2 are the KKT multipliers. With the KKT conditions, we define the stationary point below.

Definition 1: (K, P) is called a stationary point of Problem (19) if it satisfies the corresponding KKT conditions (26).

Remark 2: Generally speaking, a stationary point is not necessarily a locally optimal point because of the existence of saddle points. Although stationarity is not a sufficient condition for local optimality, we can still rule out points that are not locally optimal by showing that they do not satisfy the KKT conditions. The stationary point is in general the highest pursuit in non-convex optimization problems.

Theorem 4: Algorithm 1 generates a solution sequence that has at least one limit point, and each limit point is a stationary point of Problem 1.

Proof: Since Problem (24) is a convex problem, we can obtain its global optimal solution through commercial solvers. From the structure of Algorithm 1, we immediately obtain the following relationship at each step:

  ‖Kk+1‖_1 + ‖Kk+1 − Kk‖_F^2 + ‖Pk+1 − Pk‖_F^2 ≤ ‖Kk‖_1,        (27)

because the global optimal point is at least not worse than the linearization point. From (27), we further conclude that if (Kk+1, Pk+1) is not equal to (Kk, Pk), then the objective function decreases. Since the objective function is lower bounded, the objective value must eventually converge. From the relation

  lim_{k→∞} ( ‖Kk+1‖_1 + ‖Kk+1 − Kk‖_F^2 + ‖Pk+1 − Pk‖_F^2 ) ≤ lim_{k→∞} ‖Kk‖_1,

we have

  lim_{k→∞} ( ‖Kk+1 − Kk‖_F^2 + ‖Pk+1 − Pk‖_F^2 ) = 0.

Therefore, the solution sequence must have at least one limit point. According to Algorithm 1, the solution sequence is obtained by iteratively solving Problem (24). Thus, it is natural to consider the optimality of Problem (24) at each iteration. After recovering (22c) using the Schur complement, we provide the Lagrangian of Problem (24) at the (k+1)-th iteration:

  L(K, P, [Γ1]k+1, [Γ2]k+1)
    = ‖K‖_1 + ‖K − Kk‖_F^2 + ‖P − Pk‖_F^2 + ⟨[Γ1]k+1, M(K, P; Kk, Pk)⟩ − ⟨[Γ2]k+1, P⟩,

where [Γ1]k+1, [Γ2]k+1 denote the Lagrange multipliers, and, for conciseness, we denote

  M(K, P; Kk, Pk) =
  [ (1/2)(A + BK + P)⊤(A + BK + P) − Lf(K, P; Kk, Pk)   PG     (C + DK)⊤ ]
  [ G⊤P                                                 −γI    H⊤        ]        (28)
  [ C + DK                                              H      −γI       ]

Since Problem (24) is a convex problem, the optimal solution (Kk+1, Pk+1) must satisfy the corresponding KKT conditions. The KKT conditions of Problem (24) at the (k+1)-th iteration are listed in (29), where [Γ*_1]k+1, [Γ*_2]k+1 are the KKT multipliers:

  [Γ*_1]k+1 ⪰ 0,  [Γ*_2]k+1 ⪰ 0,        (29a)
  M(Kk+1, Pk+1; Kk, Pk) ⪯ 0,  Pk+1 ⪰ 0,        (29b)
  ⟨[Γ*_1]k+1, M(Kk+1, Pk+1; Kk, Pk)⟩ = 0,  ⟨[Γ*_2]k+1, Pk+1⟩ = 0,        (29c)
  0 ∈ ∂‖Kk+1‖_1 + (Kk+1 − Kk) + ∇_K ⟨[Γ*_1]k+1, M(Kk+1, Pk+1; Kk, Pk)⟩,        (29d)
  (Pk+1 − Pk) + ∇_P ⟨[Γ*_1]k+1, M(Kk+1, Pk+1; Kk, Pk)⟩ − [Γ*_2]k+1 = 0.        (29e)

Besides, the equality N(K, P) = M(K, P; K, P) holds according to the definitions (25) and (28). We then take the limit in all KKT conditions (29) and use the subscript ∞ to denote any limit point. Let Λ*_1 = [Γ*_1]_∞ and Λ*_2 = [Γ*_2]_∞. The limit point (K_∞, P_∞) satisfies the KKT conditions (26), which means that (K_∞, P_∞) is a stationary point of Problem (19) and thus of Problem 1. ∎

This subsection provides a reliable algorithm with guaranteed convergence to solve Problem 1, a widely recognized scenario in sparse controller design. The next subsection focuses on the trade-off between system performance and communication burden.

B. Sparsity-promoting H∞ Controller Design

Different from the previous subsection, in reality the performance threshold is sometimes unknown and we are interested in the relationship between system performance and topology sparsity, which motivates us to consider Problem 2. By utilizing Lemma 1, Problem 2 is equivalent to the following problem:

  min_{K,P,γ}  γ + λ‖K‖_1        (30a)
  s.t.  P ≻ 0,        (30b)
  [ Sym(P(A + BK))   PG     (C + DK)⊤ ]
  [ G⊤P              −γI    H⊤        ]  ⪯ 0.        (30c)
  [ C + DK           H      −γI       ]

Different from Problem (19), here we incorporate γ into the objective function as a variable to optimize. The parameter λ is the penalty weight that represents the level of our emphasis on sparsity. As λ increases, the solution K becomes sparser.

Note that the LMI constraint of Problem (30) also contains the bilinear term. To tackle this difficulty, we use the same technique as in the previous subsection: (i) applying the linearization; (ii) leveraging the Schur complement to transform the bilinear matrix inequality into an LMI. The problem to be adopted is shown in Problem (31), where we also incorporate the proximal terms to guarantee sufficient decrease; K̃, P̃ denote the linearization point.

  min_{K,P,γ}  γ + λ‖K‖_1 + ‖K − K̃‖_F^2 + ‖P − P̃‖_F^2        (31a)
  s.t.  P ≻ 0,        (31b)
  [ −Lf(K, P; K̃, P̃)       ∗     ∗     ∗   ]
  [ (1/√2)(A + BK + P)     −I    ∗     ∗   ]  ⪯ 0.        (31c)
  [ G⊤P                    0     −γI   ∗   ]
  [ C + DK                 0     H     −γI ]

The full algorithm is shown in Algorithm 2.

Algorithm 2 Sparsity-promoting H∞ controller design
  Output: K*_A2, P*_A2, γ*_A2;
  Initialize k = 0;
  Initialize K0 = Y*X*^{-1}, P0 = X*^{-1}, γ0 = γ* as the optimal centralized solution to Problem (8);
  repeat
    - Solve Problem (31) with K̃ = Kk, P̃ = Pk;
    - Assign the solutions to Kk+1, Pk+1, γk+1;
    - k = k + 1;
  until ‖Kk − Kk−1‖_F < ε, ‖Pk − Pk−1‖_F < ε;
  return Kk, Pk, γk;

Theorem 5: Algorithm 2 generates a solution sequence that has at least one limit point, and each limit point is a stationary point of Problem 2.
Proof: It is similar to Theorem 4 and is omitted. ∎

This subsection provides a similar algorithm that finds the controller structure with a convergence guarantee. However, the solution is not optimal and can be improved because of the l1 relaxation. Since the l1 regularization penalizes elements with large magnitudes, the returned solution tends to contain unnecessarily small elements within the nonzero pattern. This deviates from our ultimate goal, which is to penalize the matrix cardinality rather than the element values. Therefore, we propose to refine the solution by solving a structured controller design problem over the structure obtained in this part.
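The refinement just described can be read as a two-stage procedure: threshold the gain returned by Algorithm 2 to obtain a sparsity pattern, then re-optimize on that pattern with the structured design of the next subsection. A minimal sketch of the pattern-extraction step is given below; the tolerance and the example values are ours.

import numpy as np

def extract_pattern(K, tol=1e-4):
    """Binary structured identity I_S from a (nearly) sparse gain: entries with
    |K_ij| > tol stay free variables; the rest are fixed to zero."""
    I_S = (np.abs(K) > tol).astype(int)
    I_Sc = 1 - I_S                      # complementary structured identity
    return I_S, I_Sc

# Example (values are ours): tiny entries produced by the l1 penalty are pruned,
# and I_Sc is then passed to the structured design (Problem (33)/Algorithm 3).
K_alg2 = np.array([[0.91, 3e-5, -0.47],
                   [2e-6, 1.30,  8e-5]])
I_S, I_Sc = extract_pattern(K_alg2)
print(I_S)      # [[1 0 1], [0 1 0]]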

C. Structured H∞ Controller Design

Different from the previous two subsections, we now consider designing an H∞ controller with a specified structural pattern K ∈ S, where S is a linear subspace. This is motivated by the reality that a controller's structure is sometimes naturally or artificially restricted, and a controller lacking such structure may fail in practice. For example, the controller for a micro-electro-mechanical system (MEMS) must adhere to a specific structural pattern due to the internal electrical circuitry [31].

Ferrante et al. [25] proposed a viable approach to address this problem, as described in Theorem 2. However, it has some limitations: (i) the approach explores a structured controller within a convex subset of the nonconvex feasibility region, omitting some potential solutions, which can lead to suboptimal results; (ii) the minimal value γ̄ returned by the approach (Problem (15)) is not the H∞ norm itself, but rather an upper bound of the H∞ norm corresponding to the structured controller K̄. To clarify this point, we consider the following problem:

  min_{P,X,α,γ}  γ        (32a)
  s.t.  P ≻ 0,  α > 0,        (32b)
  Sym( [ (A + BK̄)X   α(A + BK̄)X + P   0         G       ]
       [ −X           −αX              0         0       ]
       [ (C + DK̄)X   α(C + DK̄)X       −(γ/2)I   H       ]
       [ 0            0                0         −(γ/2)I ] )  ≺ 0.        (32c)

Recall that (P̄, X̄, R̄, ᾱ, γ̄) is the solution to Problem (15) and K̄ = R̄X̄^{-1}. Consequently, it is straightforward to see that (P̄, X̄, ᾱ, γ̄) is a feasible solution to Problem (32). Thus γ̄ serves as an upper bound of γ*, the value returned by Problem (32). According to Theorem 1, γ̄ also upper bounds the H∞ norm corresponding to K̄.

In contrast, our approaches can effectively address these limitations: (i) our ILMI algorithm iteratively considers different convex subsets within the nonconvex feasibility domain and refines the solution at each step, which allows our method to explore a larger feasibility domain than that proposed in [25]; (ii) denoting the output of our algorithm as (K*_A3, P*_A3, γ*_A3), γ*_A3 is exactly the H∞ norm corresponding to K*_A3.

Despite the limitations mentioned above, we can leverage the solution returned by Problem (15) as the initial point of the ILMI algorithm discussed below for faster convergence, since this solution is closer to the optimal solution than a randomly chosen feasible one.

Following the concepts from the previous two subsections, we introduce Problem (33) and Problem (34). Our goal is to solve Problem (33), which is a structured optimal controller design problem. However, since Problem (33) is non-convex, we iteratively solve Problem (34) instead. Finally, we demonstrate that the solution returned by our algorithm is a stationary point of Problem (33).

  min_{K,P,γ}  γ        (33a)
  s.t.  P ≻ 0,  K ◦ I_S^c = 0,        (33b)
  [ Sym(P(A + BK))   PG     (C + DK)⊤ ]
  [ G⊤P              −γI    H⊤        ]  ⪯ 0.        (33c)
  [ C + DK           H      −γI       ]

  min_{K,P,γ}  γ + ‖K − K̃‖_F^2 + ‖P − P̃‖_F^2        (34a)
  s.t.  P ≻ 0,  K ◦ I_S^c = 0,        (34b)
  [ −Lf(K, P; K̃, P̃)       ∗     ∗     ∗   ]
  [ (1/√2)(A + BK + P)     −I    ∗     ∗   ]  ⪯ 0.        (34c)
  [ G⊤P                    0     −γI   ∗   ]
  [ C + DK                 0     H     −γI ]

The full algorithm is shown in Algorithm 3.

Algorithm 3 Structured H∞ controller design
  Output: K*_A3, P*_A3, γ*_A3;
  Initialize k = 0;
  Solve Problem (15) and denote the solution as (P̄, X̄, R̄, ᾱ, γ̄);
  Initialize K0 = R̄X̄^{-1}, P0 = P̄^{-1}, γ0 = γ̄;
  repeat
    - Solve Problem (34) with K̃ = Kk, P̃ = Pk;
    - Assign the solution to Kk+1, Pk+1, γk+1;
    - k = k + 1;
  until ‖Kk − Kk−1‖_F < ε, ‖Pk − Pk−1‖_F < ε;
  return Kk, Pk, γk;

Theorem 6: Algorithm 3 generates a solution sequence that has at least one limit point, and each limit point is a stationary point of Problem (33).
Proof: It is similar to Theorem 4 and is omitted. ∎

Remark 3: Apart from increasing the convergence speed, another reason for utilizing the solution returned by Problem (15) for initialization is to avoid infeasibility in the first iteration. Given (P̄, X̄, R̄, ᾱ, γ̄), we compute K̃ = R̄X̄^{-1}, P̃ = P̄^{-1}, γ̃ = γ̄, where (K̃, P̃, γ̃) are the variables specified in Problem (34). At least (K̃, P̃, γ̃), and potentially other superior solutions, are feasible for Problem (34), since K̃ ◦ I_S^c = 0 holds directly and γ̃ upper bounds the H∞ norm corresponding to K̃. However, this does not necessarily hold for a general (K̃, P̃, γ̃) with K̃ ◦ I_S^c ≠ 0. This is because Problem (34) considers a convex subset of (K, P) around (K̃, P̃) within the nonconvex feasibility region, and we cannot guarantee that this subset contains a K satisfying K ◦ I_S^c = 0.
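For completeness, the sketch below shows one iteration of Algorithm 3, i.e., the convex subproblem (34), in CVXPY. It mirrors the earlier sketch for (24); the differences are that γ is now a decision variable and the structural zeros are imposed elementwise. The system, pattern, and initial point are toy data of our own (A is already Hurwitz, so the structured pair K0 = 0, P0 = I makes the first subproblem feasible); in Algorithm 3 the initial point would instead come from Problem (15).

import cvxpy as cp
import numpy as np

def structured_step(A, B, G, C, D, H, I_Sc, K_prev, P_prev, eps=1e-6):
    """One ILMI iteration of Algorithm 3: the convex subproblem (34) at (K_prev, P_prev)."""
    nx, nu = B.shape
    nz, nd = H.shape
    K = cp.Variable((nu, nx))
    P = cp.Variable((nx, nx), symmetric=True)
    gamma = cp.Variable()

    M_tilde = A + B @ K_prev - P_prev
    Delta = B @ (K - K_prev) - (P - P_prev)
    Lf = 0.5 * M_tilde.T @ M_tilde + 0.5 * (Delta.T @ M_tilde + M_tilde.T @ Delta)

    R1 = (A + B @ K + P) / np.sqrt(2)
    lmi = cp.bmat([
        [-Lf,       R1.T,               P @ G,                 (C + D @ K).T],
        [R1,        -np.eye(nx),        np.zeros((nx, nd)),    np.zeros((nx, nz))],
        [G.T @ P,   np.zeros((nd, nx)), -gamma * np.eye(nd),   H.T],
        [C + D @ K, np.zeros((nz, nx)), H,                     -gamma * np.eye(nz)],
    ])
    constraints = [P >> eps * np.eye(nx),
                   lmi << 0,
                   cp.multiply(K, I_Sc) == 0]       # structural zeros: K ∘ I_S^c = 0
    obj = gamma + cp.sum_squares(K - K_prev) + cp.sum_squares(P - P_prev)
    cp.Problem(cp.Minimize(obj), constraints).solve(solver=cp.SCS)
    return K.value, P.value, gamma.value

# Toy data (ours): a stable 3-state system with the overlapping pattern [1 1 0; 0 1 1].
A = np.array([[-1.0, 0.5, 0.0], [0.5, -1.0, 0.5], [0.0, 0.5, -1.0]])
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
G = np.eye(3)
C = np.vstack([np.eye(3), np.zeros((2, 3))])
D = np.vstack([np.zeros((3, 2)), np.eye(2)])
H = np.zeros((5, 3))
I_S = np.array([[1, 1, 0], [0, 1, 1]])
K0 = np.zeros((2, 3))          # structured and stabilizing here, since A is Hurwitz
P0 = np.eye(3)
K1, P1, g1 = structured_step(A, B, G, C, D, H, 1 - I_S, K0, P0)
print(g1, "\n", K1)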
IV. SIMULATIONS

In this section, we provide several numerical simulations to verify our results. The numerical example for Problem 1 and Problem 2 is described as follows: consider the mass-spring system with N masses on a line [32]. The state is partitioned as x1 = [p1, ..., pN]⊤ and x2 = ẋ1, where pi is the displacement of the i-th mass from its reference point. The state-space model takes the form (1) with

  A = [ 0  I ]        B = [ 0 ]        G = [ 0 ]
      [ T  0 ] ,          [ I ] ,          [ I ] ,

  C = [ I 0 ; 0 2I ; 0 0 ],   D = [ 0 ; 0 ; 2I ],   H = 0,

where T ∈ R^{N×N} is a Toeplitz matrix with −2 on its main diagonal, 1 on its first sub- and super-diagonals, and 0 elsewhere. We assume N = 20, nx = 40, nu = 20, nd = 20, nz = 60. The numerical examples for Problem 3 are the same as those in [25] and are described later. Throughout this section, we choose ε = 10^{-3}. In the rest of this section, Problem 1, Problem 2, and Problem 3 are considered separately.
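The following NumPy sketch builds the mass-spring matrices described above. The block forms of A, B, and G follow the text directly; the exact partition of C and D into three block rows is our reconstruction from the stated output dimension nz = 60.

import numpy as np

N = 20                                   # number of masses
# Toeplitz coupling matrix: -2 on the diagonal, 1 on the first sub/super-diagonals.
T = -2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)

Zero, I = np.zeros((N, N)), np.eye(N)
A = np.block([[Zero, I], [T, Zero]])     # x = [x1; x2]: positions and velocities
B = np.vstack([Zero, I])
G = np.vstack([Zero, I])
C = np.vstack([np.block([[I, Zero], [Zero, 2 * I]]), np.zeros((N, 2 * N))])
D = np.vstack([np.zeros((2 * N, N)), 2 * I])
H = np.zeros((3 * N, N))

print(A.shape, B.shape, C.shape, D.shape)   # (40, 40) (40, 20) (60, 40) (60, 20)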
A. Sparse Controller Design with Bounded H∞ Performance

By solving Problem (8), we obtain the global optimal H∞ controller K* = Y*X*^{-1} and the global minimal H∞ norm γ* = 2. We require that the H∞ norm be no larger than γ = 5 and run Algorithm 1 to compute the solution. The solution patterns over the first few iterations are shown in Fig. 1. The number of nonzero elements declines rapidly: the algorithm takes 4 iterations to reach the optimal solution with 38 nonzero elements in 28.5563 seconds. The evolution of ‖K‖_0 and ‖K‖_1 is shown in Fig. 2. The monotonically decreasing ‖K‖_1 matches our theoretical results. However, since ‖K‖_1 is only an approximation of ‖K‖_0, ‖K‖_0 is not guaranteed to decrease monotonically.

[Fig. 1: The sparsity patterns of K0, K1, K2, and K*_A1 (from left to right, top to bottom). The nonzero elements are labeled using blue dots.]

[Fig. 2: The evolution of ‖K‖_0 and ‖K‖_1.]

B. Sparsity-promoting H∞ Controller Design

In this part, we illustrate the trade-off between system performance and controller sparsity in Fig. 3. For each λ, Algorithm 2 returns a satisfactory solution within 10 iterations and 100 seconds. When λ is small, we impose a small penalty on the controller density, resulting in denser controllers and higher performance. As λ increases, the penalty on density also increases, leading to sparser controllers and a corresponding degradation in system performance.

[Fig. 3: The relationship between ‖K‖_0, the H∞ norm, and λ.]

C. Structured H∞ Controller Design

In this part, we validate the effectiveness of our methods on the two numerical examples discussed in [25] and compare our solutions with those obtained in [25] to demonstrate the improvement.

1) Network Decentralized Control: We consider a water distribution system consisting of 5 subsystems [33]. The overall dynamics can be written in the form of (1) with A = diag(A1, A2, ..., A5), C = I, D = 0, H = 0,

  B = [ Bu  −Bd   0    0    0    0  ]
      [ 0    Bu  −Bd   0    0   −Bd ]
      [ 0    0    Bd  −Bu   0    0  ] ,   G = I,        (35)
      [ 0    0    0    Bd   Bu   0  ]
      [ 0    0    0    0    Bu   Bd ]

where, for all i = 1, 2, ..., 5,

  Ai = [ −ξi   βi   0 ]         [ 0 ]         [ 1 ]
       [  ξi  −βi   0 ] ,  Bd = [ 1 ] ,  Bu = [ 0 ] .        (36)
       [  0    1    0 ]         [ 0 ]         [ 0 ]

In particular, we select ξ1 = 15, ξ2 = 20, ξ3 = 16, ξ4 = 16.7, ξ5 = 14, β1 = 0, β2 = 0, β3 = 12, β4 = 0, β5 = 22. According to [33], a controller is decentralized in the sense of networks if K has the same structural pattern as B⊤. Therefore, the structural constraint can be expressed as K ∈ S, with

  I_S = [ 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 ]
        [ 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 ]
        [ 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 ]        (37)
        [ 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 ]
        [ 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 ]
        [ 0 0 0 1 1 1 0 0 0 0 0 0 1 1 1 ]

We run Algorithm 3 to compute a structured controller. In the initialization step, we first solve Problem (15) and denote the solution by (P̄, X̄, R̄, ᾱ, γ̄), where ᾱ = 0.1554 and γ̄ = 1.7887 [25]. Algorithm 3 returns K*_A3, P*_A3, γ*_A3 with γ*_A3 = 1.65 < 1.7887. The results demonstrate the superiority of our methods over those proposed in [25].
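The sketch below builds B from (35)-(36) and reads the decentralized pattern off B⊤ block by block, following the interpretation stated above that the free entries of K mirror the block pattern of B⊤; it reproduces the pattern in (37). The ξ and β values are those quoted in the text, and the pattern-expansion step is our own convenience code.

import numpy as np

xi = [15, 20, 16, 16.7, 14]
beta = [0, 0, 12, 0, 22]

Bd = np.array([[0.0], [1.0], [0.0]])
Bu = np.array([[1.0], [0.0], [0.0]])
Z = np.zeros((3, 1))

A = np.zeros((15, 15))
for i in range(5):
    Ai = np.array([[-xi[i],  beta[i], 0.0],
                   [ xi[i], -beta[i], 0.0],
                   [ 0.0,    1.0,     0.0]])
    A[3*i:3*i+3, 3*i:3*i+3] = Ai

B = np.block([
    [Bu, -Bd,  Z,   Z,  Z,   Z],
    [Z,   Bu, -Bd,  Z,  Z,  -Bd],
    [Z,   Z,   Bd, -Bu, Z,   Z],
    [Z,   Z,   Z,   Bd, Bu,  Z],
    [Z,   Z,   Z,   Z,  Bu,  Bd],
])

# Decentralized-in-the-sense-of-networks pattern: input j may use the states of the
# subsystems it actuates, so I_S follows the block pattern of B^T.
blocks = (np.abs(B) > 0).reshape(5, 3, 6).any(axis=1)              # 5 subsystems x 6 inputs
I_S = np.kron(blocks.T.astype(int), np.ones((1, 3), dtype=int))    # 6 x 15, as in (37)
I_Sc = 1 - I_S
print(I_S)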
2) Overlapping Control: We consider control under an overlapping information structure constraint [34, Example 2.16]. The system dynamics can be expressed in the form of (1) with

  A = [ 1 4 0 ]        B = [ 1 0 ]        G = [ 0 ]
      [ 1 2 2 ] ,          [ 0 0 ] ,          [ 0 ] ,
      [ 0 1 1 ]            [ 0 1 ]            [ 1 ]

  C = [ 0 1 0 ],   D = 0,   H = 0.        (38)

We specify a structural constraint with

  I_S = [ 1 1 0 ]
        [ 0 1 1 ] .        (39)

As in the previous example, we run Algorithm 3. We initialize the algorithm by solving Problem (15) and obtain the solution (P̄, X̄, R̄, ᾱ, γ̄), where ᾱ = 0.09 and γ̄ = 0.13724 [25]. Algorithm 3 returns K*_A3, P*_A3, γ*_A3 with γ*_A3 = 0.06 < 0.13724, which validates our advantage over [25].
 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 
  [3] B. Ran and D. Boyce, Modeling Dynamic Transportation Networks: An
 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 
Intelligent Transportation System Oriented Approach. Springer Science
0 0 0 1 1 1 0 0 0 0 0 0 1 1 1 & Business Media, 2012.
(37) [4] R. Verdone, D. Dardari, G. Mazzini, and A. Conti, Wireless Sensor
We run Algorithm 3 to compute a structured controller. In and Actuator Networks: Technologies, Analysis and Design. Academic
Press, 2010.
the initialization part, we first solve Problem (15) and denote [5] L. Gupta, R. Jain, and G. Vaszkun, “Survey of important issues in UAV
the solution as (P̄ , X̄, R̄, ᾱ, γ̄), where ᾱ = 0.1554 and γ̄ = communication networks,” IEEE Communications Surveys & Tutorials,
∗ ∗ ∗ ∗ vol. 18, no. 2, pp. 1123–1152, 2015.
1.7887 [25]. Algorithm 3 returns KA3 , PA3 , γA3 with γA3 =
[6] F. Lin, M. Fardad, and M. R. Jovanović, “Design of optimal sparse
1.65 < 1.7887. The results demonstrate the superiority of our feedback gains via the alternating direction method of multipliers,” IEEE
methods over those proposed in [25]. Transactions on Automatic Control, vol. 58, no. 9, pp. 2426–2431, 2013.
10

[7] M. Fardad and M. R. Jovanović, "On the design of optimal structured and sparse feedback gains via sequential convex programming," in Proceedings of the American Control Conference, 2014, pp. 2426-2431.
[8] N. Yang, J. Tang, Y. Li, G. Shi, and L. Shi, "Log-barrier search for structural linear quadratic regulators," IEEE Transactions on Automatic Control, 2024.
[9] N. K. Dhingra and M. R. Jovanović, "A method of multipliers algorithm for sparsity-promoting optimal control," in Proceedings of the American Control Conference (ACC), 2016, pp. 1942-1947.
[10] M. Babazadeh and A. Nobakhti, "Sparsity promotion in state feedback controller design," IEEE Transactions on Automatic Control, vol. 62, no. 8, pp. 4066-4072, 2016.
[11] M. Cho, "Iterative thresholding and projection algorithms and model-based deep neural networks for sparse LQR control design," IEEE Transactions on Automatic Control, 2024.
[12] N. Yang, J. Tang, Y. Li, and L. Shi, "Sparse optimization of H2 controllers for LTI systems: A log-barrier method," Automatica, vol. 173, p. 112102, 2025.
[13] B. Polyak, M. Khlebnikov, and P. Shcherbakov, "An LMI approach to structured sparse feedback design in linear control systems," in Proceedings of the European Control Conference (ECC), 2013, pp. 833-838.
[14] N. K. Dhingra, M. R. Jovanović, and Z.-Q. Luo, "An ADMM algorithm for optimal sensor and actuator selection," in Proceedings of the IEEE Conference on Decision and Control, 2014, pp. 4039-4044.
[15] A. Zare and M. R. Jovanović, "Optimal sensor selection via proximal optimization algorithms," in Proceedings of the IEEE Conference on Decision and Control (CDC), 2018, pp. 6514-6518.
[16] A. Zare, H. Mohammadi, N. K. Dhingra, T. T. Georgiou, and M. R. Jovanović, "Proximal algorithms for large-scale statistical modeling and sensor/actuator selection," IEEE Transactions on Automatic Control, vol. 65, no. 8, pp. 3441-3456, 2019.
[17] J. Ding, C. Wen, G. Li, X. Yang, and T. Hu, "Sparsity-inspired optimal topology control of complex networks," IEEE Transactions on Network Science and Engineering, vol. 7, no. 3, pp. 1825-1839, 2019.
[18] F. Lin, M. Fardad, and M. R. Jovanović, "Identification of sparse communication graphs in consensus networks," in Proceedings of the Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2012, pp. 85-89.
[19] A. Zare, M. R. Jovanović, and T. T. Georgiou, "Alternating direction optimization algorithms for covariance completion problems," in Proceedings of the American Control Conference (ACC), 2015, pp. 515-520.
[20] E. Masazade, M. Fardad, and P. K. Varshney, "Sparsity-promoting extended Kalman filtering for target tracking in wireless sensor networks," IEEE Signal Processing Letters, vol. 19, no. 12, pp. 845-848, 2012.
[21] M. Nagahara, D. E. Quevedo, and D. Nešić, "Maximum hands-off control: a paradigm of control effort minimization," IEEE Transactions on Automatic Control, vol. 61, no. 3, pp. 735-747, 2015.
[22] M. Nagahara, Sparsity Methods for Systems and Control. now Publishers, 2020.
[23] N. Yang, Y. Li, T. Chen, and L. Shi, "Sparsity promoting observer design for wireless sensor-estimator networks," IEEE Transactions on Automatic Control, 2024.
[24] Y. Zhong, N. Yang, L. Huang, G. Shi, and L. Shi, "Sparse sensor selection for distributed systems: An l1-relaxation approach," Automatica, vol. 165, p. 111670, 2024.
[25] F. Ferrante, C. Ravazzi, and F. Dabbene, "An LMI approach for structured H∞ state feedback control," IFAC-PapersOnLine, vol. 53, no. 2, pp. 4058-4063, 2020.
[26] G. E. Dullerud and F. Paganini, A Course in Robust Control Theory: A Convex Approach. Springer Science & Business Media, 2013, vol. 36.
[27] T. Iwasaki and R. E. Skelton, "All controllers for the general H∞ control problem: LMI existence conditions and state space formulas," Automatica, vol. 30, no. 8, pp. 1307-1317, 1994.
[28] V. Blondel and J. N. Tsitsiklis, "NP-hardness of some linear control design problems," SIAM Journal on Control and Optimization, vol. 35, no. 6, pp. 2118-2127, 1997.
[29] G. Pipeleers, B. Demeulenaere, J. Swevers, and L. Vandenberghe, "Extended LMI characterizations for stability and performance of linear systems," Systems & Control Letters, vol. 58, no. 7, pp. 510-518, 2009.
[30] G.-R. Duan and H.-H. Yu, LMIs in Control Systems: Analysis, Design and Applications. CRC Press, 2013.
[31] Z. Fan, J. Wang, S. Achiche, E. Goodman, and R. Rosenberg, "Structured synthesis of MEMS using evolutionary approaches," Applied Soft Computing, vol. 8, no. 1, pp. 579-589, 2008.
[32] F. Lin, M. Fardad, and M. R. Jovanović, "Augmented Lagrangian approach to design of structured optimal state feedback gains," IEEE Transactions on Automatic Control, vol. 56, no. 12, pp. 2923-2929, 2011.
[33] F. Blanchini, E. Franco, and G. Giordano, "Network-decentralized control strategies for stabilization," IEEE Transactions on Automatic Control, vol. 60, no. 2, pp. 491-496, 2014.
[34] A. Zecevic and D. D. Siljak, Control of Complex Systems: Structural Constraints and Uncertainty. Springer Science & Business Media, 2010.