Sparse Controller For Networked Control Systems
Abstract—This paper provides a comprehensive analysis of the design of optimal structured and sparse H∞ controllers for continuous-time linear time-invariant (LTI) systems. Three problems are considered. First, designing the sparsest H∞ controller, which maximizes the sparsity of the controller while satisfying the given performance requirements. Second, designing a sparsity-promoting H∞ controller, which balances system performance and controller sparsity. Third, designing an H∞ controller subject to a structural constraint, which enhances system performance with a specified sparsity pattern. For each problem, we adopt a linearization technique that transforms the original nonconvex problem into a convex semidefinite programming (SDP) problem. Subsequently, we design an iterative linear matrix inequality (ILMI) algorithm for each problem with guaranteed convergence. We further characterize the first-order optimality using the Karush-Kuhn-Tucker (KKT) conditions and prove that any limit point of the solution sequence generated by the ILMI algorithm is a stationary point. For the first and second problems, we validate through several numerical simulations that our algorithms reduce the number of nonzero elements and thus the communication burden. For the third problem, we refine the solutions obtained in the existing literature, demonstrating that our approaches achieve significant improvements.

Index Terms—Linear matrix inequality, sparsity, networked control systems, H∞ performance

I. INTRODUCTION

the number of communication channels and, consequently, the scalability and potential of NCS with such centralized designs. This motivates us to design control inputs using partial state information, which corresponds to a sparse controller gain matrix.

Extensive work has been conducted on optimal sparse controller design. In the seminal work [6], the authors proposed an alternating direction method of multipliers (ADMM) algorithm for designing sparse controllers. Instead of ADMM, in [7], the authors proposed an iterative convex programming algorithm. More recently, various algorithms have been proposed for structured and sparse controller design [8–12]. The sparsity mentioned above refers to elementwise sparsity of the gain matrix. In contrast, in the landmark work [13], the authors proposed the concepts of row sparsity and column sparsity, which motivate the sensor and actuator selection problems. In such problems, a change-of-variables technique can be applied to turn the feasible region into a convex set. In [14–16], the authors proposed various efficient algorithms for large-scale problems, including ADMM, proximal gradient, and quasi-Newton methods. Sparse control has also been investigated in topology design [17], consensus networks [18], covariance completion [19], target tracking [20], hands-off control [21, 22], and observer design [23].
The optimal H∞ controller can be recovered by K∗ = Y∗X∗⁻¹ with the corresponding global minimum ||Tzd(s)||∞ = γ∗, where (Y∗, X∗, γ∗) is the solution to Problem (8).

B. Sparsity in NCS

The feedback control strategy typically constructs a communication network between the control input and the system state. This network is generally dense, requiring each local controller to access the states of all subsystems. However, this could impose a prohibitively high communication burden on large-scale systems, which often have limited computation and communication capabilities. Therefore, it is preferable to design a control input that utilizes only local state information in large-scale systems. This limited information exchange is reflected in the sparsity of the gain matrix.

1) Sparse Controller and l1 Relaxation: Considering the state feedback control law, each local controller can be computed as

[u(t)]_i = Σ_{1≤j≤nx} [K]_{ij} [x(t)]_j,  ∀1 ≤ i ≤ nu. (9)

2) Structured Controller: A structured controller refers to a controller subject to a structural constraint K ∈ S, where S is a specified linear subspace. The corresponding structured identity matrix IS is defined by

[IS]_{ij} ≜ 1 if K_{ij} is a free variable, and [IS]_{ij} ≜ 0 if K_{ij} = 0 is required. (11)

Then we define the complementary structured identity matrix as IS^c ≜ 1 − IS. Therefore, the structural constraint K ∈ S can be alternatively expressed as K ◦ IS^c = 0.

C. A Convex Approach for Structured Control

Designing a structured H∞ controller is typically NP-hard [28], meaning we cannot equivalently transform this problem into an SDP feasibility problem. To tackle this difficulty, Ferrante et al. [25] proposed exploring solutions within a convex subset of the nonconvex feasibility region. Relevant results are provided below.

Theorem 1 ([29]): Given P ≻ 0 and K ∈ R^{nu×nx}, the following items are equivalent.

(i)
[ Sym((A+BK)P)   G    P(C+DK)^⊤
  G^⊤            −γI  H^⊤
  (C+DK)P        H    −γI ] ≺ 0;  (12)

(ii) There exist X ∈ R^{nx×nx} and α > 0 such that

Sym( [ (A+BK)X   α(A+BK)X + P   0        G
       −X        −αX            0        0
       (C+DK)X   α(C+DK)X       −(γ/2)I  H
       0         0              0        −(γ/2)I ] ) ≺ 0.  (13)

Theorem 2 (Theorem 2 of [25]): Let {S1, S2, . . . , Sk} be a basis of S and

L ≜ [S1 | S2 | . . . | Sk].

Define the following structured set

Υ ≜ {Q ∈ R^{nx×nx} : ∃Λ ∈ S^k s.t. L(Ik ⊗ Q) = L(Λ ⊗ I_{nx})}.

Given γ > 0, if there exist P ≻ 0, R ∈ span{S1, S2, . . . , Sk}, α > 0, and X ∈ R^{nx×nx} such that (14a) and (14b) hold, then K̄ = R̄X̄⁻¹ ∈ S is a structured H∞ controller. Accordingly, consider Problem (15), with constraints

s.t. (14a), (14b), P ≻ 0, α > 0, (15b)
R ∈ span{S1, S2, . . . , Sk}. (15c)

A structured H∞ controller can be computed as K̄ = R̄X̄⁻¹ ∈ S with the corresponding H∞ norm ||Tzd(s)||∞ ≤ γ̄, where (P̄, X̄, R̄, ᾱ, γ̄) is the solution to Problem (15).

D. Problem Formulation

In this paper, we conduct a comprehensive analysis of the optimal structured and sparse H∞ controller design, and three typical problems are considered separately. To facilitate discussions, we define the stability region F ≜ {K ∈ R^{nu×nx} | A + BK is Hurwitz (i.e., Re(λ) < 0 for all eigenvalues λ of A + BK)}.

Problem 1: (Sparse controller design with bounded H∞ performance)

min_{K∈F} ||K||_1  s.t. ||Tzd(s)||∞ ≤ γ, (16)

where γ represents the performance threshold to be satisfied.
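To make the notation above concrete, the elementwise l1 norm in (16) and the structural constraint K ◦ IS^c = 0 can be evaluated directly. A minimal NumPy sketch with a hypothetical 2×3 pattern (not one of the paper's examples):

```python
import numpy as np

# Hypothetical structured identity matrix I_S: 1 marks a free entry of K,
# 0 marks an entry that is required to be zero.
I_S = np.array([[1, 1, 0],
                [0, 1, 1]])
I_Sc = 1 - I_S  # complementary structured identity I_{S^c} = 1 - I_S

def elementwise_l1(K):
    """||K||_1 as used in (16): the sum of absolute values of all entries."""
    return np.abs(K).sum()

def satisfies_structure(K, I_Sc):
    """Structural constraint K in S, expressed as K o I_{S^c} = 0."""
    return bool(np.all(K * I_Sc == 0))

K = np.array([[1.5, -2.0, 0.0],
              [0.0,  0.5, 4.0]])

print(elementwise_l1(K))             # 8.0
print(satisfies_structure(K, I_Sc))  # True
print(np.count_nonzero(K))           # 4  (the ||K||_0 counterpart)
```

Note that the elementwise l1 norm here differs from the induced matrix 1-norm (maximum column sum); the paper's ||K||_1 is the elementwise sum.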
Problem 2: (Sparsity-promoting H∞ controller design)

min_{K∈F} ||Tzd(s)||∞ + λ||K||_1, (17)

where λ ≥ 0 is a tuning parameter that represents the level of our emphasis on network sparsity.

Remark 1: The communication costs of different channels could differ in practice. Our approach can be extended to this case by incorporating a weighted l1 norm that assigns a different weight to each channel based on its cost. ||K||_1 in (16) and (17) can be generalized to ||W ◦ K||_1, where ◦ is the element-wise product and W_{ij} represents the communication cost between controller [u(t)]_i and state [x(t)]_j. For simplicity, we assume in this paper that W is the all-ones matrix, i.e., equal costs across all channels.

Problem 3: (Structured H∞ controller design)

min_{K∈F} ||Tzd(s)||∞  s.t. K ◦ IS^c = 0. (18)

These three problems are representative and encompass the majority of sparse controller design issues encountered in practical industrial applications. For the rest of this paper, we analyze and solve them separately.

III. MAIN RESULTS

A. Sparse Controller Design with Bounded H∞ Performance

In this subsection, we design a satisfactory sparse H∞ controller, ensuring that its H∞ performance remains within a specified threshold. In other words, our goal is to design the sparsest controller while simultaneously satisfying the H∞ performance constraint. By leveraging Lemma 1, Problem 1 is equivalent to the following problem:

min_{K,P} ||K||_1 (19a)
s.t. P ≻ 0, (19b)
[ Sym(P(A+BK))  PG   (C+DK)^⊤
  G^⊤P          −γI  H^⊤
  C+DK          H    −γI ] ≺ 0. (19c)

In the absence of the sparsity regularization term in the objective function, Problem (19) turns out to be a feasibility problem and can be solved by a standard change of variables [30]. However, this technique is no longer viable when the sparsity regularization term is the objective we want to minimize, since this would lead to minimizing ||Y X⁻¹||_1, which is intractable. Instead of directly solving Problem (19), we propose a linearized version of Problem (19) and clarify their relationship. We also describe how this relationship addresses the difficulties and motivates us to design a viable algorithm. At the end of this subsection, we design an ILMI algorithm to solve Problem (19) numerically. We show that this algorithm is guaranteed to converge and that every limit point is a stationary point of Problem (19).

To facilitate further discussions, we first define some critical functions as prerequisites. Define

f(K, P) := (1/2)(A + BK − P)^⊤(A + BK − P) (20)

and its linearization around (K̃, P̃) in (21):

Lf(K, P; K̃, P̃) := f(K̃, P̃) + (1/2)[B(K − K̃) − (P − P̃)]^⊤(A + BK̃ − P̃) + (1/2)(A + BK̃ − P̃)^⊤[B(K − K̃) − (P − P̃)]. (21)

We further propose the following problem, which will be incorporated into our final algorithm:

min_{K,P} ||K||_1 (22a)
s.t. P ≻ 0, (22b)
[ −Lf(K, P; K̃, P̃)    ∗   ∗    ∗
  (1/√2)(A + BK + P)  −I  ∗    ∗
  G^⊤P                0   −γI  ∗
  C + DK              0   H    −γI ] ≺ 0. (22c)

It is worth noting that Problem (22) is a convex problem and can be solved using commercial solvers. In what follows, we present a result (Theorem 3) that characterizes the relationship between Problem (22) and Problem (19). We then describe how this result inspires the design of the final algorithm based on Problem (22).

Theorem 3: Every feasible solution (K̄, P̄) to Problem (22) is a feasible solution to Problem (19). Conversely, for every feasible solution (K̂, P̂) to Problem (19), take K̃ = K̂, P̃ = P̂; then at least (K̂, P̂) is a feasible solution to Problem (22).

Proof: Since all the bilinearity of Problem (19) comes from the term (A + BK)^⊤P + P(A + BK), it is natural to linearize this term. We apply a well-known equality and reformulate it as follows:

(A + BK)^⊤P + P(A + BK) = (1/2)(A + BK + P)^⊤(A + BK + P) − (1/2)(A + BK − P)^⊤(A + BK − P).

Consider the last term:

(1/2)(A + BK − P)^⊤(A + BK − P)
= (1/2)[A + BK̃ − P̃ + B(K − K̃) − (P − P̃)]^⊤ [A + BK̃ − P̃ + B(K − K̃) − (P − P̃)]
= (1/2)(A + BK̃ − P̃)^⊤(A + BK̃ − P̃) + (1/2)·Sym([B(K − K̃) − (P − P̃)]^⊤(A + BK̃ − P̃)) + (1/2)[B(K − K̃) − (P − P̃)]^⊤[B(K − K̃) − (P − P̃)]
⪰ (1/2)(A + BK̃ − P̃)^⊤(A + BK̃ − P̃) + (1/2)·Sym([B(K − K̃) − (P − P̃)]^⊤(A + BK̃ − P̃))
= Lf(K, P; K̃, P̃).
The inequality follows from the positive semidefiniteness of the second-order incremental matrix. Thus,

(A + BK)^⊤P + P(A + BK) ⪯ (1/2)(A + BK + P)^⊤(A + BK + P) − Lf(K, P; K̃, P̃).

Furthermore,

[ (A+BK)^⊤P + P(A+BK)  PG   (C+DK)^⊤
  G^⊤P                 −γI  H^⊤
  C+DK                 H    −γI ]
⪯
[ (1/2)(A+BK+P)^⊤(A+BK+P) − Lf(K,P;K̃,P̃)  PG   (C+DK)^⊤
  G^⊤P                                     −γI  H^⊤
  C+DK                                     H    −γI ]. (23)

The negative definiteness of the right-hand side is equivalent to (22c) by applying the Schur complement. Consequently, the feasible region of Problem (22) is a subset of that of Problem (19), and every feasible solution (K̄, P̄) to Problem (22) is a feasible solution to Problem (19). Note that the inequality in (23) becomes an equality when K = K̃ and P = P̃, because in this case the linearization introduces no error. Then for every feasible solution (K̂, P̂) to Problem (19), take K̃ = K̂, P̃ = P̂; at least (K̂, P̂), and possibly a convex set containing (K̂, P̂), lies in the feasible region of Problem (22).

Theorem 3 indicates that instead of directly solving Problem (19), we can alternatively solve its convex counterpart, Problem (22), and the solution will always remain within the feasible domain of Problem (19). Furthermore, if we solve Problem (22) iteratively, linearizing around the solution from the previous step, the new solution is at least no worse than the previous one. This is because Problem (22) is convex and its global minimum is no worse than the linearization point. This analysis motivates us to design an iterative algorithm that gradually approaches a stationary point, which is the highest pursuit in nonconvex optimization problems.

It is worth noting that although iteratively solving Problem (22) ensures a non-increasing sequence of objective values, this alone does not guarantee convergence of the algorithm, because the iterates may oscillate between two points with the same objective value. To guarantee convergence, the objective function must exhibit a sufficient decrease property. Therefore, we slightly modify Problem (22) by incorporating proximal terms into the objective function, as shown in Problem (24). These proximal terms avoid the oscillation issue and force the algorithm to converge gradually:

min_{K,P} ||K||_1 + ||K − K̃||²_F + ||P − P̃||²_F (24a)
s.t. (22b), (22c), (24b)

where K̃, P̃ represent the optimization variables from the previous iteration, as clarified in the final algorithm. The full algorithm is shown in Algorithm 1.

Algorithm 1 Sparse H∞ controller design given γ
Output: K∗_A1;
Initialize k = 0;
Initialize K_0 = Y∗X∗⁻¹, P_0 = X∗⁻¹ as the optimal centralized solution to Problem (8);
repeat
  - Solve Problem (24) with K̃ = K_k, P̃ = P_k;
  - Assign the solution to K_{k+1}, P_{k+1};
  - k = k + 1;
until ||K_k − K_{k−1}||_F < ε and ||P_k − P_{k−1}||_F < ε;
return K_k;

Now we evaluate the optimality of Algorithm 1. To facilitate the discussion, we first provide the Lagrangian of Problem (19), and then give the Karush-Kuhn-Tucker (KKT) conditions of Problem (19) and the definition of a stationary point.

The Lagrangian of Problem (19) is

L(K, P, Λ_1, Λ_2) = ||K||_1 + ⟨Λ_1, N(K, P)⟩ − ⟨Λ_2, P⟩,

where Λ_1, Λ_2 are the Lagrange multipliers, and for simplicity we denote

N(K, P) := [ Sym(P(A+BK))  PG   (C+DK)^⊤
             G^⊤P          −γI  H^⊤
             C+DK          H    −γI ]. (25)

Therefore, we can derive the KKT conditions as follows:

Λ∗_1 ⪰ 0, Λ∗_2 ⪰ 0, (26a)
N(K∗, P∗) ⪯ 0, P∗ ⪰ 0, (26b)
⟨Λ∗_1, N(K∗, P∗)⟩ = 0, ⟨Λ∗_2, P∗⟩ = 0, (26c)
0 ∈ ∂||K∗||_1 + ∇_K⟨Λ∗_1, N(K∗, P∗)⟩, (26d)
∇_P⟨Λ∗_1, N(K∗, P∗)⟩ − Λ∗_2 = 0, (26e)

where Λ∗_1, Λ∗_2 are the KKT multipliers. With the KKT conditions, we define the stationary point below.

Definition 1: (K, P) is called a stationary point of Problem (19) if it satisfies the corresponding KKT conditions (26).

Remark 2: Generally speaking, a stationary point is not necessarily a locally optimal point because of the existence of saddle points. Although stationarity is not a sufficient condition for local optimality, we can still rule out points that are not locally optimal by showing that they do not satisfy the KKT conditions. A stationary point is in general the highest pursuit in nonconvex optimization problems.

Theorem 4: Algorithm 1 generates a solution sequence that has at least one limit point, and each limit point is a stationary point of Problem 1.

Proof: Since Problem (24) is a convex problem, we can obtain its global optimal solution through commercial solvers. From the structure of Algorithm 1, we immediately obtain the following relationship at each step:

||K_{k+1}||_1 + ||K_{k+1} − K_k||²_F + ||P_{k+1} − P_k||²_F ≤ ||K_k||_1, (27)

because the global optimal point is at least no worse than the linearization point. From (27), we further obtain that if (K_{k+1}, P_{k+1}) is not equal to (K_k, P_k), then the objective function strictly decreases. Since the objective function is bounded below, the objective value must converge. From the relation

lim_{k→∞} ||K_{k+1}||_1 + ||K_{k+1} − K_k||²_F + ||P_{k+1} − P_k||²_F ≤ lim_{k→∞} ||K_k||_1,

we have

lim_{k→∞} ||K_{k+1} − K_k||²_F + ||P_{k+1} − P_k||²_F = 0.

Algorithm 2 Sparsity-promoting H∞ controller design
Output: K∗_A2, P∗_A2, γ∗_A2;
Initialize k = 0;
Initialize K_0 = Y∗X∗⁻¹, P_0 = X∗⁻¹, γ_0 = γ∗ as the optimal centralized solution to Problem (8);
repeat
  - Solve Problem (31) with K̃ = K_k, P̃ = P_k;
  - Assign the solutions to K_{k+1}, P_{k+1}, γ_{k+1};
  - k = k + 1;
until ||K_k − K_{k−1}||_F < ε and ||P_k − P_{k−1}||_F < ε;
return K_k, P_k, γ_k;
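The linearize-solve-repeat structure shared by Algorithms 1 and 2 can be sketched as follows. This is a structural sketch only: `solve_subproblem` is a hypothetical stand-in for the convex SDP step (Problem (24) or (31)), and the toy solver below merely contracts the iterate toward a fixed point so that the stopping rule fires.

```python
import numpy as np

def ilmi_loop(solve_subproblem, K0, P0, eps=1e-3, max_iter=100):
    """Iteration template shared by Algorithms 1 and 2: re-linearize around the
    previous iterate, solve the convex subproblem, and stop once the iterates
    settle (Frobenius-norm change below eps)."""
    K, P = K0, P0
    for _ in range(max_iter):
        # In the paper, this step solves Problem (24)/(31) with K~ = K, P~ = P.
        K_next, P_next = solve_subproblem(K, P)
        if (np.linalg.norm(K_next - K) < eps
                and np.linalg.norm(P_next - P) < eps):
            return K_next, P_next
        K, P = K_next, P_next
    return K, P

# Toy stand-in for the SDP subproblem: contract the iterate toward a fixed
# point. A real implementation would call an SDP solver here.
target_K, target_P = np.zeros((2, 3)), np.eye(3)
def toy_subproblem(K, P):
    return 0.5 * (K + target_K), 0.5 * (P + target_P)

K_fin, P_fin = ilmi_loop(toy_subproblem, np.ones((2, 3)), 2 * np.eye(3))
print(np.linalg.norm(K_fin - target_K) < 1e-2)  # True
```

The proximal terms in (24a) are what justify this stopping rule: they force the successive-iterate distances to vanish, as shown in the proof of Theorem 4.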
min_{K,P,γ} γ + ||K − K̃||²_F + ||P − P̃||²_F (34a)
s.t. P ≻ 0, K ◦ IS^c = 0, (34b)
[ −Lf(K, P; K̃, P̃)    ∗   ∗    ∗
  (1/√2)(A + BK + P)  −I  ∗    ∗
  G^⊤P                0   −γI  ∗
  C + DK              0   H    −γI ] ≺ 0. (34c)

The full algorithm is shown in Algorithm 3.

Theorem 6: Algorithm 3 generates a solution sequence that has at least one limit point, and each limit point is a stationary point of Problem (33).

Proof: It is similar to that of Theorem 4 and is omitted.

Remark 3: Apart from increasing the convergence speed, another reason for utilizing the solution returned by Problem (15) for initialization is to avoid infeasibility in the first iteration. Given (P̄, X̄, R̄, ᾱ, γ̄), we compute K̃ = R̄X̄⁻¹, P̃ = P̄⁻¹, γ̃ = γ̄, where (K̃, P̃, γ̃) are variables specified in Problem (34). At least (K̃, P̃, γ̃), and potentially other superior solutions, are feasible for Problem (34), since K̃ ◦ IS^c = 0 holds; that is, there exists a K in this region that satisfies K ◦ IS^c = 0.

Fig. 1: The sparsity patterns of K_0, K_1, K_2, K∗_A1 (from left to right, top to bottom). The nonzero elements are labeled using blue dots.

Fig. 2: The evolution of ||K||_0 and ||K||_1.

IV. SIMULATIONS

In this section, we provide several numerical simulations to verify our results. The numerical example for Problem 1 and Problem 2 is described as follows: consider the mass-spring system with N masses on a line [32]. The dynamics can be represented via x_1 = [p_1, . . . , p_N]^⊤ and x_2 = ẋ_1, where p_i is the displacement of the i-th mass from its reference point. The state-space model takes the form (1) with

A = [ 0  I
      T  0 ],  B = [ 0
                     I ],  G = [ 0
                                 I ],
C = [ I
      0 ],  D = [ 0
                  2I ],  H = [ 0
                               2I ],

where T ∈ R^{N×N} is a Toeplitz matrix with −2 on its main diagonal, 1 on its first sub- and super-diagonals, and 0 elsewhere. We assume N = 20, nx = 40, nu = 20, nd = 20, nz = 60. The numerical examples for Problem 3 are the same as those in [25], which will be clarified later. Throughout this section, we choose ε = 10⁻³. For the rest of this section, Problem 1, Problem 2, and Problem 3 are considered separately.

A. Sparse Controller Design with Bounded H∞ Performance

By solving Problem (8), we obtain the global optimal H∞ controller K∗ = Y∗X∗⁻¹ and the global minimal H∞ norm γ∗ = 2. We require that the H∞ norm be no larger than γ = 5, and run Algorithm 1 to compute the solution. The solution patterns over the first several iterations are shown in Fig. 1; the number of nonzero elements declines rapidly. The algorithm takes 4 iterations to reach the optimal solution with 38 nonzero elements in 28.5563 seconds. The evolution of ||K||_0 and ||K||_1 is shown in Fig. 2. The monotonically decreasing ||K||_1 matches our theoretical results. However, since ||K||_1 is an approximation of ||K||_0, ||K||_0 is not guaranteed to decrease monotonically.

B. Sparsity-promoting H∞ Controller Design

In this part, we illustrate the trade-off between system performance and controller sparsity in Fig. 3. For each λ, Algorithm 2 returns a satisfactory solution within 10 iterations and 100 seconds. When λ is small, we impose a small penalty on the controller density, resulting in denser controllers and higher performance. As λ increases, the penalty on density also increases, leading to sparser controllers and a corresponding degradation in system performance.
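The mass-spring data above can be assembled directly for reproduction. A minimal NumPy sketch of the matrix construction and dimension check (H is omitted; it mirrors the shape of D with nd columns):

```python
import numpy as np

N = 20                      # number of masses
nx, nu, nd = 2 * N, N, N    # nx = 40, nu = 20, nd = 20; nz = nx + nu = 60

# Toeplitz matrix T: -2 on the main diagonal, 1 on the first sub-/super-diagonals.
T = -2.0 * np.eye(N) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)

# Block state-space matrices of the mass-spring example.
A = np.block([[np.zeros((N, N)), np.eye(N)],
              [T, np.zeros((N, N))]])
B = np.vstack([np.zeros((N, N)), np.eye(N)])
G = np.vstack([np.zeros((N, N)), np.eye(N)])
C = np.vstack([np.eye(nx), np.zeros((nu, nx))])
D = np.vstack([np.zeros((nx, nu)), 2.0 * np.eye(nu)])

print(A.shape, B.shape, C.shape, D.shape)  # (40, 40) (40, 20) (60, 40) (60, 20)
```

The printed shapes match the stated dimensions nx = 40, nu = 20, nz = 60.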
Fig. 3: The relationship between ||K||_0, the H∞ norm, and λ.

C. Structured H∞ Controller Design

In this section, we validate the effectiveness of our methods on two numerical examples discussed in [25] and compare our solutions with those obtained in [25] to demonstrate our superiority.

1) Network Decentralized Control: We consider a water distribution system consisting of 5 subsystems [33]. The overall dynamics can be written in the form of (1) with A = diag(A_1, A_2, . . . , A_5), C = I, D = 0, H = 0,

B = [ Bu  −Bd  0    0    0   0
      0   Bu   −Bd  0    0   −Bd
      0   0    Bd   −Bu  0   0
      0   0    0    Bd   Bu  0
      0   0    0    0    Bu  Bd ],  G = I, (35)

where for all i = 1, 2, . . . , 5,

A_i = [ −ξ_i  β_i   0
        ξ_i   −β_i  0
        0     1     0 ],  Bd = [ 0
                                 1
                                 0 ],  Bu = [ 1
                                              0
                                              0 ]. (36)

In particular, we select ξ_1 = 15, ξ_2 = 20, ξ_3 = 16, ξ_4 = 16.7, ξ_5 = 14, β_1 = 0, β_2 = 0, β_3 = 12, β_4 = 0, β_5 = 22. According to [33], a controller is decentralized in the sense of networks if K has the same structural pattern as B^⊤. Therefore, the structural constraint can be expressed as K ∈ S, with

IS = [ 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
       1 1 1 1 1 1 0 0 0 0 0 0 0 0 0
       0 0 0 1 1 1 1 1 1 0 0 0 0 0 0
       0 0 0 0 0 0 1 1 1 1 1 1 0 0 0
       0 0 0 0 0 0 0 0 0 1 1 1 1 1 1
       0 0 0 1 1 1 0 0 0 0 0 0 1 1 1 ]. (37)

We run Algorithm 3 to compute a structured controller. In the initialization, we first solve Problem (15) and denote the solution by (P̄, X̄, R̄, ᾱ, γ̄), where ᾱ = 0.1554 and γ̄ = 1.7887 [25]. Algorithm 3 returns K∗_A3, P∗_A3, γ∗_A3 with γ∗_A3 = 1.65 < 1.7887. The results demonstrate the superiority of our methods over those proposed in [25].

For the second example, the system matrices are given in (38), with D = 0 and H = 0. We specify a structural constraint with

IS^c = [ 1 1 0
         0 1 1 ]. (39)

As in the previous example, we run Algorithm 3. We initialize the algorithm by solving Problem (15) and obtain the solution (P̄, X̄, R̄, ᾱ, γ̄), where ᾱ = 0.09 and γ̄ = 0.13724 [25]. Algorithm 3 returns K∗_A3, P∗_A3, γ∗_A3 with γ∗_A3 = 0.06 < 0.13724, which validates our advantage over [25].

V. CONCLUSIONS

In this paper, we conducted a comprehensive analysis of the optimal sparse H∞ controller design, and three typical problems were considered separately. For each problem, we applied a novel linearization to relax the bilinear matrix inequality into an LMI, and we showed that any feasible solution to the relaxed problem is feasible for the original problem, which motivated us to develop an ILMI algorithm to compute the solution. We further characterized the first-order optimality of the original problem using KKT conditions. Moreover, by incorporating proximal terms into the objective function, we showed that our algorithm is guaranteed to converge and that each limit point is a stationary point of the original problem, which is the highest pursuit in nonconvex optimization problems. Finally, the effectiveness of our algorithms was validated via numerical simulations. For the non-structured design, our algorithms effectively reduced the number of nonzero elements, thereby lowering communication costs. In the case of structured design, our approaches showed significant improvement over those proposed in [25].

REFERENCES

[1] H. Sandberg, S. Amin, and K. H. Johansson, "Cyberphysical security in networked control systems: An introduction to the issue," IEEE Control Systems Magazine, vol. 35, no. 1, pp. 20–23, 2015.
[2] A. Teixeira, H. Sandberg, and K. H. Johansson, "Networked control systems under cyber attacks with applications to power networks," in Proceedings of the American Control Conference, 2010, pp. 3690–3696.
[3] B. Ran and D. Boyce, Modeling Dynamic Transportation Networks: An Intelligent Transportation System Oriented Approach. Springer Science & Business Media, 2012.
[4] R. Verdone, D. Dardari, G. Mazzini, and A. Conti, Wireless Sensor and Actuator Networks: Technologies, Analysis and Design. Academic Press, 2010.
[5] L. Gupta, R. Jain, and G. Vaszkun, "Survey of important issues in UAV communication networks," IEEE Communications Surveys & Tutorials, vol. 18, no. 2, pp. 1123–1152, 2015.
[6] F. Lin, M. Fardad, and M. R. Jovanović, "Design of optimal sparse feedback gains via the alternating direction method of multipliers," IEEE Transactions on Automatic Control, vol. 58, no. 9, pp. 2426–2431, 2013.
[7] M. Fardad and M. R. Jovanović, "On the design of optimal structured and sparse feedback gains via sequential convex programming," in Proceedings of the American Control Conference, 2014, pp. 2426–2431.
[8] N. Yang, J. Tang, Y. Li, G. Shi, and L. Shi, "Log-barrier search for structural linear quadratic regulators," IEEE Transactions on Automatic Control, 2024.
[9] N. K. Dhingra and M. R. Jovanović, "A method of multipliers algorithm for sparsity-promoting optimal control," in Proceedings of the American Control Conference (ACC), 2016, pp. 1942–1947.
[10] M. Babazadeh and A. Nobakhti, "Sparsity promotion in state feedback controller design," IEEE Transactions on Automatic Control, vol. 62, no. 8, pp. 4066–4072, 2016.
[11] M. Cho, "Iterative thresholding and projection algorithms and model-based deep neural networks for sparse LQR control design," IEEE Transactions on Automatic Control, 2024.
[12] N. Yang, J. Tang, Y. Li, and L. Shi, "Sparse optimization of H2 controllers for LTI systems: A log-barrier method," Automatica, vol. 173, p. 112102, 2025.
[13] B. Polyak, M. Khlebnikov, and P. Shcherbakov, "An LMI approach to structured sparse feedback design in linear control systems," in Proceedings of the European Control Conference (ECC), 2013, pp. 833–838.
[14] N. K. Dhingra, M. R. Jovanović, and Z.-Q. Luo, "An ADMM algorithm for optimal sensor and actuator selection," in Proceedings of the IEEE Conference on Decision and Control, 2014, pp. 4039–4044.
[15] A. Zare and M. R. Jovanović, "Optimal sensor selection via proximal optimization algorithms," in Proceedings of the IEEE Conference on Decision and Control (CDC), 2018, pp. 6514–6518.
[16] A. Zare, H. Mohammadi, N. K. Dhingra, T. T. Georgiou, and M. R. Jovanović, "Proximal algorithms for large-scale statistical modeling and sensor/actuator selection," IEEE Transactions on Automatic Control, vol. 65, no. 8, pp. 3441–3456, 2019.
[17] J. Ding, C. Wen, G. Li, X. Yang, and T. Hu, "Sparsity-inspired optimal topology control of complex networks," IEEE Transactions on Network Science and Engineering, vol. 7, no. 3, pp. 1825–1839, 2019.
[18] F. Lin, M. Fardad, and M. R. Jovanović, "Identification of sparse communication graphs in consensus networks," in Proceedings of the Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2012, pp. 85–89.
[19] A. Zare, M. R. Jovanović, and T. T. Georgiou, "Alternating direction optimization algorithms for covariance completion problems," in Proceedings of the American Control Conference (ACC), 2015, pp. 515–520.
[20] E. Masazade, M. Fardad, and P. K. Varshney, "Sparsity-promoting extended Kalman filtering for target tracking in wireless sensor networks," IEEE Signal Processing Letters, vol. 19, no. 12, pp. 845–848, 2012.
[21] M. Nagahara, D. E. Quevedo, and D. Nešić, "Maximum hands-off control: a paradigm of control effort minimization," IEEE Transactions on Automatic Control, vol. 61, no. 3, pp. 735–747, 2015.
[22] M. Nagahara, Sparsity Methods for Systems and Control. now Publishers, 2020.
[23] N. Yang, Y. Li, T. Chen, and L. Shi, "Sparsity promoting observer design for wireless sensor-estimator networks," IEEE Transactions on Automatic Control, 2024.
[24] Y. Zhong, N. Yang, L. Huang, G. Shi, and L. Shi, "Sparse sensor selection for distributed systems: An l1-relaxation approach," Automatica, vol. 165, p. 111670, 2024.
[25] F. Ferrante, C. Ravazzi, and F. Dabbene, "An LMI approach for structured H∞ state feedback control," IFAC-PapersOnLine, vol. 53, no. 2, pp. 4058–4063, 2020.
[26] G. E. Dullerud and F. Paganini, A Course in Robust Control Theory: A Convex Approach. Springer Science & Business Media, 2013, vol. 36.
[27] T. Iwasaki and R. E. Skelton, "All controllers for the general H∞ control problem: LMI existence conditions and state space formulas," Automatica, vol. 30, no. 8, pp. 1307–1317, 1994.
[28] V. Blondel and J. N. Tsitsiklis, "NP-hardness of some linear control design problems," SIAM Journal on Control and Optimization, vol. 35, no. 6, pp. 2118–2127, 1997.
[29] G. Pipeleers, B. Demeulenaere, J. Swevers, and L. Vandenberghe, "Extended LMI characterizations for stability and performance of linear systems," Systems & Control Letters, vol. 58, no. 7, pp. 510–518, 2009.
[30] G.-R. Duan and H.-H. Yu, LMIs in Control Systems: Analysis, Design and Applications. CRC Press, 2013.
[31] Z. Fan, J. Wang, S. Achiche, E. Goodman, and R. Rosenberg, "Structured synthesis of MEMS using evolutionary approaches," Applied Soft Computing, vol. 8, no. 1, pp. 579–589, 2008.
[32] F. Lin, M. Fardad, and M. R. Jovanović, "Augmented Lagrangian approach to design of structured optimal state feedback gains," IEEE Transactions on Automatic Control, vol. 56, no. 12, pp. 2923–2929, 2011.
[33] F. Blanchini, E. Franco, and G. Giordano, "Network-decentralized control strategies for stabilization," IEEE Transactions on Automatic Control, vol. 60, no. 2, pp. 491–496, 2014.
[34] A. Zecevic and D. D. Siljak, Control of Complex Systems: Structural Constraints and Uncertainty. Springer Science & Business Media, 2010.