IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 52, NO. 10, OCTOBER 2014

A Generalized Omega-K Algorithm to Process Translationally Variant Bistatic-SAR Data Based on Two-Dimensional Stolt Mapping
Junjie Wu, Member, IEEE, Zhongyu Li, Student Member, IEEE, Yulin Huang, Member, IEEE,
Jianyu Yang, Member, IEEE, and Qing Huo Liu, Fellow, IEEE

Abstract—In translationally variant (TV) bistatic synthetic aperture radar (BSAR), 2-D spatial variation is a major problem to be tackled. In this paper, a generalized Omega-K imaging algorithm to deal with this problem is proposed. The method utilizes a point target reference spectrum of the generalized Loffeld's bistatic formula (LBF) (GLBF). Without the bistatic-deformation term, GLBF is the latest development of LBF. Similar to the monostatic case, it has a much simpler form than other point target reference spectra. Based on the spatial linearization of GLBF, the Stolt mapping relationship is derived. Different from the traditional Omega-K algorithms for monostatic SAR and translationally invariant BSAR, this approach uses a 2-D Stolt frequency transformation. Through this transformation, the method can deal with the 2-D spatial variation. It can also consider the linear spatial variation of Doppler parameters, which is usually not considered in the previous publications on bistatic Omega-K algorithms. This method can handle the cases of TV-BSAR with different trajectories, different velocities, high squint angles, and large bistatic angles. In addition, a compensation method for the phase error caused by the linearization is discussed. Numerical simulations and experimental data processing verify the effectiveness of the proposed method.

Index Terms—Bistatic synthetic aperture radar (BSAR), generalized Loffeld's bistatic formula (LBF) (GLBF), Omega-K algorithm, 2-D spatial variation, 2-D Stolt mapping.

Manuscript received November 26, 2012; revised September 3, 2013; accepted November 1, 2013. Date of publication February 3, 2014; date of current version May 22, 2014. This work was supported by the National High-tech Research and Development Program (863 Program) of China under Grant 2009AA12Z106, by the Pre-research Project of China under Grant 51307030302, and by the Fundamental Research Funds for the Central Universities under Grant A03006023901007. J. Wu, Z. Li, Y. Huang, and J. Yang are with the School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China (e-mail: [email protected]). Q. H. Liu is with the Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708 USA (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TGRS.2014.2299069. 0196-2892 © 2014 IEEE.

I. INTRODUCTION

BISTATIC synthetic aperture radar (BSAR) has received considerable attention and research around the world recently. Several airborne experiments have been carried out [1]–[3], and some spaceborne projects have been scheduled or launched as well [4], [5]. According to the relationship between the transmitter and receiver, BSAR can be categorized into two types: translationally invariant (TI) BSAR and translationally variant (TV) BSAR [6]. TV means that the azimuth velocity vectors of the transmitter and receiver are not identical. In this mode, the geometry relationship between the transmitter and receiver is not stationary. It is more general than TI mode for the real applications.

For TI-BSAR, many imaging algorithms have been proposed, including the range-Doppler algorithm [7], chirp-scaling algorithm [8], [9], and Omega-K algorithm [10]–[14]. Different from TI-BSAR, TV-BSAR is spatially variant in azimuth because the relative position between the transmitter and receiver varies with slow time. Thus, the targets in a given range bin suffer from different range cell migrations (RCMs) and Doppler reference functions. Consequently, the imaging of TV-BSAR is severely azimuth dependent. For this mode, only a few works have been published, including scaled inverse FFT [16], range-Doppler [17], [18], chirp scaling [17], and nonlinear chirp scaling [19]. However, all of these methods neglect the variation of RCM along the azimuth direction. In addition, these methods also ignore the higher order couplings between range and azimuth. In [20], an Omega-K for hybrid spaceborne/airborne BSAR is proposed. It uses a 1-D frequency transformation like the one in the monostatic case and neglects the variation of RCM along the azimuth direction as well. In [21], a back-projection method for TV-BSAR is presented. However, its focusing in the time domain is quite time consuming.

Focusing SAR data in the frequency domain requires the 2-D point target reference spectrum (PTRS). However, for BSAR with two moving platforms, the derivation of an analytical PTRS is not very easy by using the traditional principle of stationary phase (POSP) directly. This is because of the problem of double square root in the bistatic range history [22]. Loffeld's bistatic formula (LBF) has been introduced to compute the PTRS of BSAR. The method of the original LBF (OLBF) works well for many airborne and spaceborne BSAR modes [22]. However, if the geometry configuration difference between the transmitter and receiver becomes considerable, as in the hybrid spaceborne/airborne case, it will suffer from a significant degradation. Some extended forms have been reported to solve this problem. The method in [23] uses Doppler frequency-modulated rates to weigh the different contributions of the transmitter and receiver to the whole Doppler frequency. This method has good performance for the hybrid spaceborne/airborne side-looking case. Nevertheless, as both OLBF and the form in [23] do not consider the Doppler contributions adequately, they are not suitable for large-squint cases of BSAR. To solve this problem, the latest extended form
of LBF is to consider the contributions of Doppler centroids [24] and higher order Doppler terms [25] on the Doppler history.

In LBF, the bistatic deformation (BD) term is not beneficial to the derivation of imaging algorithms. In the current imaging methods based on LBF [8], [16], [20], BD is generally eliminated by using the average value in the 2-D frequency domain after data segmentation. In fact, based on some analysis, the BD term is not a necessary part in BSAR PTRS [24], [25]. Therefore, we can obtain a PTRS for BSAR similar to that in the monostatic case. It has a much simpler form than other PTRSs [26].

The Omega-K imaging algorithm is an important method for monostatic SAR imaging. It does not need the approximations used in range-Doppler and chirp-scaling imaging algorithms [27], [28]. Its most important operation is the Stolt mapping, which is a 1-D frequency transformation [30], [31]. This transformation can handle the range dependence of RCM, azimuth compression, and range–azimuth coupling correctly. This gives it the ability to process data acquired over wide azimuth apertures or high squint angles. As for TI-BSAR, some Omega-K-type imaging algorithms were proposed in [10]–[15] and [29]. However, the 1-D frequency transformation in traditional Omega-K and those methods for TI-BSAR can just deal with spatial variation along the range direction. As for TV-BSAR, because of the 2-D spatial variations, these methods are not available anymore.

In this paper, a generalized Omega-K imaging algorithm for TV-BSAR is proposed. Based on the PTRS in [24] and [25], first, the PTRS is linearized by the receiver's closest range and along-track variables. To deal with the special phenomenon of 2-D spatial variations, the algorithm performs a 2-D frequency mapping. The advantage of this method is that it can consider the azimuth variation while keeping the focusing precision. In addition, a compensation method for the approximation error is presented. The Omega-K algorithms for TI-BSAR, BSAR with one stationary station, and monostatic SAR can be viewed as special cases of this method.

The rest of this paper is organized as follows. Section II gives the signal model of TV-BSAR. Section III derives the proposed algorithm and gives the Stolt mappings for some special cases. Section IV includes the interpretation of the 2-D Stolt frequency transformation and error analysis. Numerical simulations are given in Section V. Section VI verifies the method by experimental data processing. Section VII concludes this paper.

II. SIGNAL MODEL AND REVIEW OF GLBF

Fig. 1 gives the imaging configuration of TV-BSAR. The receiver is defined in the 3-D coordinate system of xyz, while the transmitter is in x′y′z′. The receiver moves along a line parallel to the y-axis, while the transmitter's flight path is parallel to y′. These two systems have the same coordinate origin O, which is set to be the scene center. The relationship between these two systems is

 (x′, y′)^T = [cos α, −sin α; sin α, cos α] (x, y)^T   (1)

Fig. 1. Geometry configuration of TV-BSAR.

where α is the angle between y and y′, i.e., the angle between the flight paths of the two platforms. In this paper, we assume that the transmitter and receiver both move along straight lines with constant velocities.

P(x, y) is an arbitrary point target in the imaging area. Its corresponding coordinates in the system of x′y′z′ are (x′, y′). hT and hR are the heights of the transmitter and receiver, respectively. The coordinates of the receiver when the slow time is zero are (xR, yR, hR), and the coordinates of the transmitter are (xT, yT, hT). rT(x′) and rR(x) are the slant ranges of the closest approach (SRCAs) of the transmitter and receiver, respectively. θsT(x′) and θsR(x) represent the squint angles of the transmitter and receiver, respectively. rR(x) and θsR(x) are functions of x. rT(x′) and θsT(x′) are functions of x′. For convenience, we use rT, rR, θsT, and θsR to represent them, respectively.

The slant range histories with respect to P(x, y) of the transmitter and receiver are

 RT(η; x, y) = sqrt(rT^2 + vT^2 (η − η0T)^2)   (2)
 RR(η; x, y) = sqrt(rR^2 + vR^2 (η − η0R)^2)   (3)

where η is the slow time variable, η = 0 when the composite beam center radiates the scene center O, and η0T and η0R are the zero-Doppler times of the transmitter and receiver, respectively. η0T = (y′ − yT)/vT, and η0R = (y − yR)/vR.

The signal after demodulation to baseband and range Fourier transform is (constants are neglected)

 S(f, η; x, y) = S0(f) exp[−j2π (f + f0)(RT(η; x, y) + RR(η; x, y))/c]   (4)

where f is the range frequency, S0(f) denotes the frequency spectrum of the baseband-transmitted signal, f0 is the carrier frequency, and c is the speed of light.
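For illustration only, the following minimal Python sketch (not part of the paper; every parameter value below is assumed) evaluates the bistatic range histories of (2) and (3) for a single point target and the corresponding phase of the range-transformed baseband signal in (4).

import numpy as np

# Assumed illustrative parameters (not taken from the paper)
c = 3e8                         # speed of light [m/s]
f0 = 9.6e9                      # carrier frequency [Hz]
vT, vR = 50.0, 42.0             # platform velocities [m/s]
rT, rR = 5000.0, 4000.0         # slant ranges of closest approach [m]
eta0T, eta0R = 0.1, -0.05       # zero-Doppler times of transmitter/receiver [s]

eta = np.linspace(-1.0, 1.0, 2001)      # slow-time axis [s]

# Bistatic slant range histories, (2) and (3)
RT = np.sqrt(rT**2 + vT**2 * (eta - eta0T)**2)
RR = np.sqrt(rR**2 + vR**2 * (eta - eta0R)**2)

# Phase of the demodulated signal after the range Fourier transform, (4),
# evaluated at the band center (range frequency f = 0)
f = 0.0
phase = -2.0 * np.pi * (f + f0) * (RT + RR) / c
print(RT[:3], RR[:3], phase[:3])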
The 2-D PTRS can be obtained by using POSP on azimuth

 S2df(f, fη; x, y) = ∫ S0(f) exp{−jφb(η, fη; x, y)} dη   (5)

where

 φb(η, fη; x, y) = 2π{(f + f0)[RT(η; x, y) + RR(η; x, y)]/c + fη η}   (6)

and fη is the azimuth frequency. To solve the above integral, the bistatic point of stationary phase (PSP) is required. Because two square root terms are involved in (6), it is difficult to obtain the analytical PSP directly.

Based on the generalized LBF (GLBF) [15], [24], [25], we can get the PTRS of TV-BSAR as

 S2df(f, fη; x, y) = S0(f) exp{−jΦG(f, fη; x, y)}   (7)

where

 ΦG(f, fη; x, y) = φT(ηPT) + φR(ηPR) = (2π/c)[rT FT(f, fη) + rR FR(f, fη)] + 2π[fηT(fη) η0T + fηR(fη) η0R]   (8)

 FT(f, fη) = sqrt((f + f0)^2 − (c fηT(fη)/vT)^2)   (9)

 FR(f, fη) = sqrt((f + f0)^2 − (c fηR(fη)/vR)^2)   (10)

In (8)–(10), fηT(fη) and fηR(fη) are the Doppler contributions of the transmitter and receiver, respectively

 fηT(fη) = fηcT + (fηrT/fηr)(fη − fηc) − [(fηrT fη3 − fη3T fηr)/fηr^3](fη − fηc)^2 + ...   (11)

 fηR(fη) = fηcR + (fηrR/fηr)(fη − fηc) − [(fηrR fη3 − fη3R fηr)/fηr^3](fη − fηc)^2 + ...   (12)

where
 fηcT, fηrT, fη3T — centroid, frequency-modulated rate, and third-order term of the transmitter's Doppler contribution, respectively;
 fηcR, fηrR, fη3R — centroid, frequency-modulated rate, and third-order term of the receiver's Doppler contribution, respectively;
 fηc, fηr, fη3 — centroid, frequency-modulated rate, and third-order term of the total Doppler history; fηc = fηcT + fηcR, fηr = fηrT + fηrR, and fη3 = fη3T + fη3R.

The number of terms used earlier is controlled by the neglected BD term, which should be less than π/4. In [24], higher order terms than the linear one are disregarded.

III. OMEGA-K ALGORITHM FOR TV-BSAR

In the Omega-K algorithm, the key is to transform the PTRS phase term in (22) into a style which is linear with both spatial domain and frequency domain coordinates

 ΦG(f, fη) ⟹ ΦG(fA, fB) = 2π(A fA + B fB)   (13)

where A and B are the coordinates in the final image along two dimensions, respectively. fA and fB are the frequencies corresponding to A and B, respectively. After that, an image can be generated by a 2-D inverse fast Fourier transform (IFFT).

A. Spatial Domain Linearization

To obtain the style of (13), first, the spatial variables should be separated from the 2-D PTRS. This separation can be achieved by a spatial domain linearization. First, the two linearization variables should be chosen. Considering that the image would be generated in the receiver-referenced coordinates generally, in this paper, we use rR and y to linearize the PTRS. In comparison with the linearization in terms of bistatic slant range sum in [14], this linearization has smaller approximation error and can avoid the complicated geometry registration. This is because rR and y are orthogonal to each other and two parts in (23) are already linear with rR and y.

1) Linearization of rT: rT is linearized by rR and y at (rR0, y0) by 2-D Taylor expansion

 rT(rR, y) = rT0 + ar1 Δr + ay1 Δy   (14)

where rT0 = rT(rR0, y0), Δr = rR − rR0, Δy = y − y0, and ar1 and ay1 are linear coefficients and are listed in Appendix I.

2) Linearization of FT and FR: Generally, because θsT and θsR change with the spatial coordinates, FT(f, fη) and FR(f, fη) are also spatially variant with the spatial coordinates. In the current publications about BSAR imaging, these spatial variations are always neglected. In this paper, we use a 2-D linear expansion to reflect this spatial variation

 FT(f, fη) ≈ FT(f, fη; rR0, y0) + pr1(f, fη) Δr + py1(f, fη) Δy
 FR(f, fη) ≈ FR(f, fη; rR0, y0) + qr1(f, fη) Δr + qy1(f, fη) Δy   (15)

where pr1(f, fη) and qr1(f, fη) are the first-order Taylor coefficients of FT(f, fη) and FR(f, fη) with respect to Δr, respectively, and py1(f, fη) and qy1(f, fη) are the first-order Taylor coefficients of FT(f, fη) and FR(f, fη) with respect to y, respectively. Their expressions can be found in Appendix I.

3) Linearization of fηT and fηR: Expand fηT and fηR in terms of Δr and Δy

 fηT(fη) ≈ fηT(fη; rR0, y0) + ςTr1(fη) Δr + ςTy1(fη) Δy
 fηR(fη) ≈ fηR(fη; rR0, y0) + ςRr1(fη) Δr + ςRy1(fη) Δy   (16)

where ςTr1(fη) and ςRr1(fη) are the first-order Taylor coefficients of fηT(fη) and fηR(fη) with respect to Δr, respectively. ςTy1(fη) and ςRy1(fη) are the first-order Taylor coefficients of fηT(fη) and fηR(fη) with respect to Δy, respectively. Their expressions can be found in Appendix I.
4) Linearization of η0T: Expand η0T in terms of Δr and Δy

 η0T ≈ η0T(f, fη; rR0, y0) + gr1(fη) Δr + gy1(fη) Δy   (17)

where gr1 and gy1 are the first-order coefficients of η0T in terms of Δr and Δy, respectively. They are also listed in Appendix I.

Then, we have

 ΦG(f, fη)
  ≈ 2π[(rT0 FTref + rR0 FRref)/c + fηTref η0Tref − yR fηRref/vR]   ①
  + 2πΔr[(ar1 FTref + pr1(f, fη) rT0 + qr1(f, fη) rR0 + FRref)/c + ςTr1(fη) η0Tref − ςRr1(fη) yR/vR + gr1 fηTref]   ②
  + 2πΔy[(ay1 FTref + py1(f, fη) rT0 + qy1(f, fη) rR0)/c + ςTy1(fη) η0Tref − ςRy1(fη) yR/vR + gy1 fηTref + fηRref/vR]   ③
  + 2πΔr^2[(ar1 pr1(f, fη) + qr1(f, fη))/c + ςTr1(fη) gr1]   ④
  + 2πΔy^2[ay1 py1(f, fη)/c + ςTy1(fη) gy1 + ςRy1(fη)/vR]   ⑤
  + 2πΔrΔy[(ar1 py1(f, fη) + ay1 pr1(f, fη) + qy1)/c + ςTr1(fη) gy1 + ςTy1(fη) gr1 + ςRr1(fη)/vR]   ⑥   (18)

where FTref = FT(f, fη; rR0, y0), FRref = FR(f, fη; rR0, y0), fηTref = fηT(fη; rR0, y0), fηRref = fηR(fη; rR0, y0), and η0Tref = η0T(f, fη; rR0, y0). In the remaining part of this paper, symbols with the superscript "ref" represent the values of the reference target. In (18), ① is constant with spatial variables. Therefore, it represents the spatially invariant part, including bulk RCM, bulk azimuth compression, and bulk second range compression (SRC). It can be compensated for by reference function multiplication (RFM) in the 2-D frequency domain. ② is the linear spatially variant term in terms of Δr, while ③ is the linear spatially variant term in terms of Δy. ④ and ⑤ are the second-order terms of Δr and Δy, respectively. The last part ⑥ is the coupling phase between Δr and Δy. Up to this point, we have separated the spatial variables from the PTRS. The coefficients of Δr and Δy in (18) are only functions of the frequency variables. In the residual part, we use (rR, y) instead of (x, y) to represent the coordinates of the targets. Next, we will discuss the processing of ①, ②, and ③.

B. RFM

First, the raw data are transformed into the 2-D frequency domain. After that, RFM is performed to achieve range compression, bulk azimuth compression, bulk RCM correction, and bulk SRC. Therefore, the RFM function can be written as

 SRFM(f, fη; rR0) = S0*(f) exp{j2π[(rT0 FTref + rR0 FRref)/c + fηTref η0Tref − yR fηRref/vR]}   (19)

After that, the reference target P(rR0, y0) is correctly focused by RFM, but targets away from P(rR0, y0) are only partially focused. Temporarily disregarding the second-order phase term and the coupling term of Δr and Δy, the residual phase of P(x, y) is

 φRES(f, fη; rR, y, rR0, y0)
  = −2π(Δr/c)[ar1 FTref + pr1(f, fη) rT0 + qr1(f, fη) rR0 + FRref + ςTr1(fη) η0Tref c − ςRr1(fη) yR c/vR + gr1 fηTref c]
  − 2π(Δy/vR)[ay1 FTref vR/c + py1(f, fη) rT0 vR/c + qy1(f, fη) rR0 vR/c + ςTy1(fη) η0Tref vR − ςRy1(fη) yR + gy1 fηTref vR + fηRref]   (20)

C. Stolt Mapping

The residual phase in (20) is linear with Δr and Δy. Letting the coefficients of Δr and Δy in (20) be equal to the new range and azimuth frequencies, we can get the 2-D Stolt frequency mapping relationship as follows:

 f′ + f0 = ar1 FTref + pr1(f, fη) rT0 + qr1(f, fη) rR0 + FRref + ςTr1(fη) η0Tref c − ςRr1(fη) yR c/vR + gr1 fηTref c
 fη′ = ay1 FTref vR/c + py1(f, fη) rT0 vR/c + qy1(f, fη) rR0 vR/c + ςTy1(fη) η0Tref vR − ςRy1(fη) yR + gy1 fηTref vR + fηRref   (21)

where f′ and fη′ are the mapped frequencies along the range and azimuth directions, respectively.

After the frequency mapping, we have

 φRES(f′, fη′; rR, y, rR0, y0) ≈ −2π(f′ + f0) Δr/c − 2π fη′ Δy/vR   (22)

It is linear with both the new spatial domain and frequency domain coordinates. Then, the data can be transformed into the 2-D spatial domain by using a 2-D IFFT to get the focused image.

D. Stolt Mapping for Some Special Cases

TV-BSAR can be viewed as a general case of BSAR to some extent. Therefore, the imaging algorithm for TV-BSAR should be suitable for some special cases as well. Here, the forms of the Stolt mapping for some special cases are given.
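Before turning to those special cases, the following minimal Python sketch (not from the paper; all array sizes, parameter values, and the placeholder bulk reference phase are assumed) illustrates the two operations described in Sections III-B and III-C: multiplication by a reference function in the 2-D frequency domain, followed by a Stolt-type remapping of the frequency axes by interpolation. For concreteness the remapping uses the simple monostatic relation given later in (26); the TV-BSAR case replaces that relation with the 2-D mapping of (21) built from the Appendix I coefficients.

import numpy as np

c, f0, v = 3e8, 9.6e9, 100.0          # assumed constants
Nr, Na = 256, 128
f = np.linspace(-40e6, 40e6, Nr)      # range-frequency axis [Hz]
feta = np.linspace(-200.0, 200.0, Na) # azimuth-frequency axis [Hz]
S2d = np.ones((Na, Nr), dtype=complex)  # stand-in for the 2-D spectrum of the raw data

# RFM as in (19): multiply by a conjugate reference phase
# (here only a placeholder bulk phase for a reference range R_ref)
R_ref = 5000.0
phi_ref = 2.0 * np.pi * (f[None, :] + f0) * 2.0 * R_ref / c
S_rfm = S2d * np.exp(1j * phi_ref)

# Stolt remapping: for each azimuth frequency, resample the range-frequency axis.
# The monostatic relation of (26) is used for concreteness.
S_stolt = np.zeros_like(S_rfm)
for ia, fa in enumerate(feta):
    f_warp = 2.0 * np.sqrt((f + f0) ** 2 - (c * fa / (2.0 * v)) ** 2) - f0   # f' for each f
    # samples are known at the warped positions f_warp; interpolate onto the uniform f' grid
    S_stolt[ia, :] = (np.interp(f, f_warp, S_rfm[ia, :].real)
                      + 1j * np.interp(f, f_warp, S_rfm[ia, :].imag))

image = np.fft.ifft2(S_stolt)   # 2-D IFFT to the focused image domain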
1) Velocity Directions Are Orthogonal (α = π/2): In this case, ar1 = 0, and gy1 = 0. Substituting these two values into (21), we can get the Stolt mapping relationship for this case.

2) Identical Velocity Directions (vT ≠ vR, α = 0): The two platforms move along parallel trajectories but have different constant velocities. In this case, ay1 = 0, and gr1 = 0. Substituting these two values into (21), we can get the Stolt mapping relationship for this case. It is still a 2-D transformation because of the 2-D spatial variation. The current hybrid spaceborne/airborne BSAR experiments often work in this mode [21], [32].

3) One-Stationary Case (vT = 0 or vR = 0): In this case, one of the two stations is stationary. The azimuth aperture is synthesized by only one moving station. In this mode, the fixed transmitter or receiver could be mounted on a geostationary satellite [33], a stratosphere low-speed airship, or a ground-based high-altitude platform [34]. Because the Doppler frequency is totally contributed by the moving station, the spatial variations of the Doppler parameters do not exist. If we set the transmitter to be stationary, fηT = 0, and fηR = fη. Then, FTref = f + f0, and FRref = FR. py1 = 0, qy1 = 0, ςTy1 = 0, ςRy1 = 0, gy1 = 0, gr1 = 0, pr1 = 0, qr1 = 0, ςTr1 = 0, and ςRr1 = 0. The Stolt mapping relationship becomes

 f′ + f0 = ar1(f + f0) + FR
 fη′ = ay1(f + f0) vR/c + fη   (23)

4) Identical Velocity Vectors (vT = vR = v, α = 0): This is the TI case. The current airborne BSAR field experiments often work in this mode [1]–[3]. Because of the azimuth invariance in this mode, some monostatic-based imaging algorithms can be developed. In this mode, ay1 = 0, py1 = 0, qy1 = 0, ςTy1 = 0, ςRy1 = 0, gy1 = 1/v, and gr1 = 0. Then, the Stolt mapping relationship becomes

 f′ + f0 = ar1 FTref + pr1(f, fη) rT0 + qr1(f, fη) rR0 + FRref + ςTr1(fη) η0Tref c − ςRr1(fη) yR c/v
 fη′ = fηTref + fηRref = fη   (24)

Only the frequency transformation along the range dimension needs to be performed.

5) Tandem Case: In this case, the receiver and transmitter move along the same path with constant and equal velocities. It is also a special case of 2). The along-track interferometry for ground moving target indication and change detection in BSAR usually works in this mode [4]. ar1 = 1, ay1 = 0, py1 = 0, qy1 = 0, ςTy1 = 0, ςRy1 = 0, gy1 = 1/v, and gr1 = 0. Then, the Stolt mapping becomes

 f′ + f0 = FTref + pr1(f, fη) rT0 + qr1(f, fη) rR0 + FRref + ςTr1(fη) η0Tref c − ςRr1(fη) yR c/v   (25)

6) Monostatic Case: In the monostatic case, ar1 = 1, ay1 = 0, py1 = 0, qy1 = 0, ςTy1 = 0, ςRy1 = 0, gy1 = 1/v, gr1 = 0, fηT = fηR = fη/2, pr1 = 0, qr1 = 0, ςTr1 = 0, and ςRr1 = 0. Then, the Stolt transformation becomes

 f′ + f0 = 2 sqrt((f + f0)^2 − (c fη/(2v))^2)   (26)

The factor of 2 outside the square root comes from the two-way wave propagation. It does not influence the imaging performance in comparison with the traditional monostatic Omega-K algorithm without this factor.

Fig. 2. Imaging geometry from a top view for TV-BSAR.

IV. INTERPRETATION OF THE 2-D STOLT MAPPING AND ERROR ANALYSIS

Next, we discuss the effects of the aforementioned Stolt mapping in (34). To explain the effects of the frequency transformation, we set three point targets in the imaging scene, P0, PM, and PN. Their locations are shown in Fig. 2. P0 is the reference target with the coordinates (rR0, y0). The relative locations and moving directions of the transmitter and receiver are shown from a top view. As for PM, Δy = 0. Only the range frequency transformation affects its performance. For PN, Δr = 0. Only the azimuth frequency transformation influences its focusing performance.

A. Range Frequency Mapping

Expand FTref and FRref in f about f = 0 up to second order, respectively

 FTref ≈ f0 DT(fηTref) + f/DT(fηTref) − f^2 c^2 (fηTref)^2/(2 DT^3(fηTref) vT^2 f0^3)   (27)
 FRref ≈ f0 DR(fηRref) + f/DR(fηRref) − f^2 c^2 (fηRref)^2/(2 DR^3(fηRref) vR^2 f0^3)   (28)

where DT(fηTref) = sqrt(1 − (c fηTref/(vT f0))^2) and DR(fηRref) = sqrt(1 − (c fηRref/(vR f0))^2).

Additionally, expand pr1(f, fη) and qr1(f, fη) in f about f = 0 up to second order, respectively

 pr1(f, fη) ≈ pr1(0, fη) + Tr1(fη) f + Tr2(fη) f^2   (29)
 qr1(f, fη) ≈ qr1(0, fη) + Rr1(fη) f + Rr2(fη) f^2   (30)

where Tr1(fη) and Rr1(fη) are the linear coefficients of pr1(f, fη) and qr1(f, fη) with respect to f, respectively. Tr2(fη) and Rr2(fη) are the quadratic coefficients of pr1(f, fη) and qr1(f, fη) with respect to f, respectively.
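As a small numerical aside (not from the paper; all values below are assumed), the following Python sketch evaluates the migration factors DT and DR defined after (27) and (28), and the linearized range-gate offset they induce for a target displaced by Δr from the reference, using the linearized SRCA rT ≈ rT0 + ar1 Δr from (14) and, for simplicity, Doppler centroids held at their reference values (cf. the range-gate difference ΔRbi in (32) below).

import numpy as np

c, f0 = 3e8, 9.6e9
vT, vR = 50.0, 42.0
f_etacT, f_etacR = 60.0, 45.0     # assumed Doppler centroid contributions [Hz]
rT0, rR0 = 5000.0, 4000.0         # assumed reference SRCAs [m]
ar1 = 0.8                         # assumed linear coefficient from (14)
dr = 200.0                        # range offset of target PM [m]

def D(f_eta, v):
    # Migration factor D(f_eta) = sqrt(1 - (c*f_eta/(v*f0))^2)
    return np.sqrt(1.0 - (c * f_eta / (v * f0)) ** 2)

DT, DR = D(f_etacT, vT), D(f_etacR, vR)
rT1, rR1 = rT0 + ar1 * dr, rR0 + dr       # linearized SRCAs of the offset target

# Range-gate difference in the spirit of (32), with centroids fixed at reference values
dR_bi = (rT1 - rT0) / DT + (rR1 - rR0) / DR
print(DT, DR, dR_bi)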
Substituting (27)–(30) into the first equation of (21), we can get

 f′ + f0 ≈ ar1 DT(fηTref) f0 + DR(fηRref) f0 + pr1(0, fη) rT0 + qr1(0, fη) rR0 + ςTr1(fη) η0Tref c − ςRr1(fη) yR c/vR + gr1 fηTref c   ①
  + [ar1/DT(fηcTref) + 1/DR(fηcRref) + Tr1(fηcTref) rT0 + Rr1(fηcRref) rR0] f   ②
  + [ar1/DT(fηTref) − ar1/DT(fηcTref) + 1/DR(fηRref) − 1/DR(fηcRref)] f   ③
  + [(Tr1(fη) − Tr1(fηcTref)) rT0 + (Rr1(fη) − Rr1(fηcRref)) rR0] f   ④
  + [Tr2(fη) rT0 + Rr2(fη) rR0 − ar1 c^2 (fηTref)^2/(2 DT^3(fηTref) vT^2 f0^3) − c^2 (fηRref)^2/(2 DR^3(fηRref) vR^2 f0^3)] f^2   ⑤   (31)

Some short remarks concerning (31) will be helpful to understand the effects of the range frequency mapping.

1) ① is a constant shift of the range frequency and does not depend on range frequency. It represents the azimuth offset which is caused by RFM and the residual azimuth modulation. As shown in the aforementioned analysis, the PTRS of TV-BSAR can be divided into two parts corresponding to the effects of the transmitter and receiver, respectively. For the sake of simplicity, we use "Function T" and "Function R" to indicate the influence of the transmitter and receiver on the residual compression, respectively. After RFM, PM is partially focused. ar1 DT(fηTref) f0 is the residual azimuth compression resulting from "Function T." ar1 denotes the projection relationship from the transmitter to the direction of rR. For PM, the SRCA difference of the transmitter is ar1 Δr. DR(fηRref) f0 is the residual azimuth compression part of "Function R." gr1 fηTref is the differential location between P0 and PM along the transmitter's flight direction resulting from "Function T." After range frequency mapping, PM and P0 fall into the same azimuth bin.

2) The linear term ② involves a stretching of the range frequency axis by a constant factor. This stretching in range frequency leads to a linear scaling of the fast-time axis. After RFM, the difference between the range gates of P0 and PM is

 ΔRbi(rR, y0; rR0) = rT1/DT[fηcT(rR, y0)] − rT0/DT(fηcTref) + rR1/DR[fηcR(rR, y0)] − rR0/DR(fηcRref)   (32)

where rT1 and rR1 are the SRCAs of the transmitter and receiver with respect to PM(rR, y0), respectively. DT[fηcT(rR, y0)] and DR[fηcR(rR, y0)] are the migration factors at fηcT and fηcR with respect to the target PM(rR, y0), respectively. Moreover, suppose that

 DT[fηcT(rR, y0)] ≈ DT(fηcTref) + κTr(fηcTref) Δr   (33)
 DR[fηcR(rR, y0)] ≈ DR(fηcRref) + κRr(fηcRref) Δr   (34)

where κTr(fηcTref) and κRr(fηcRref) are the linear coefficients of DT[fηcT(rR, y0)] and DR[fηcR(rR, y0)] with respect to Δr, respectively. Then, (32) becomes

 ΔRbi(rR, y0; rR0) ≈ {ar1/DT[fηcT(rR, y0)] − rT0 κTr(fηcTref)/DT^2[fηcT(rR, y0)] + 1/DR[fηcR(rR, y0)] − rR0 κRr(fηcRref)/DR^2[fηcR(rR, y0)]} Δr   (35)

Based on the derivations in Appendix II, we have the following relationships:

 κTr(fηcTref) = −DT^2[fηcT(rR, y0)] Tr1(fηcT)   (36)
 κRr(fηcRref) = −DR^2[fηcR(rR, y0)] Rr1(fηcR)   (37)

Therefore, the factor of Δr in (35) equals the one of f in ②. We can find that, at fηc, the linear term of range frequency in the residual phase changes from ΔRbi(rR, y0; rR0) f to Δr f′. Therefore, the effect of this constant linear term is to scale the range axis from the bistatic range Rbi to the closest approach range of the receiver rR.

3) ③ and ④ are also linear scalings of the range frequency, but differently, these scaling factors vary with azimuth frequency. They represent the residual RCM correction. In detail, ③ corrects the differential RCM resulting from the range coordinate variation. ④ corrects the differential RCM which is caused by the Doppler parameter variation.

4) The last term ⑤ means the residual SRC.

From the above analysis, we can find that the range frequency mapping can eliminate the spatial variation along the rR-direction, including the differential RCM correction, azimuth compression, and SRC.

B. Azimuth Frequency Mapping

Substituting (27)–(30) into the second equation of (21), we can get

 fη′ ≈ ay1 DT(fηTref) f0 vR/c + py1(0, fη) rT0 vR/c + qy1(0, fη) rR0 vR/c + ςTy1(fη) η0Tref vR − ςRy1(fη) yR + gy1 fηTref vR + fηRref   ①
  + (vR/c)[ay1/DT(fηcTref) + Ty1(fηcTref) rT0 + Ry1(fηcRref) rR0] f   ②
  + (vR/c)[ay1/DT(fηTref) − ay1/DT(fηcTref)] f   ③
  + (vR/c)[(Ty1(fη) − Ty1(fηcTref)) rT0 + (Ry1(fη) − Ry1(fηcRref)) rR0] f   ④
  + (vR/c)[Ty2(fη) rT0 + Ry2(fη) rR0 − ay1 c^2 (fηTref)^2/(2 DT^3(fηTref) vT^2 f0^3)] f^2   ⑤   (38)

Here, we use PN to discuss the effect of the azimuth frequency mapping.

1) ① does not depend on range frequency. It represents the azimuth offset and the residual azimuth modulation of PN.
2) As for PN, after RFM, the azimuth compression of "Function R" has been compensated for totally because PN and P0 have the same value of rR. The frequency transformation just needs to compensate for the residual azimuth compression of "Function T" and the Doppler variation.
3) ay1 DT(fηTref) f0 vR/c is the residual azimuth compression factor resulting from the transmitter. ay1 denotes the projection relationship between y and rT. For PN, the SRCA difference of the transmitter is ay1 Δy. gy1 fηTref vR is the differential location between PN and P0 along the transmitter's flight direction.
4) ② is a linear scaling of range frequency. It denotes the location correction along Δr resulting from the transmitter's different flight path and different velocity from the receiver. Because of the azimuth spatial variation, after RFM, P0 and PN fall into different range gates. After RFM, the difference between the range gates of P0 and PN is

 ΔRbi(rR0, y; y0) = rT2/DT[fηcT(rR0, y)] − rT0/DT(fηcTref) + rR0/DR[fηcR(rR0, y)] − rR0/DR(fηcRref)
  = {ay1/DT(fηcTref) − rT0 κTy(fηcTref)/DT^2(fηcTref) − rR0 κRy(fηcRref)/DR^2(fηcRref)} Δy
  = {ay1/DT(fηcTref) + rT0 Ty1(fηcTref) + rR0 Ry1(fηcRref)} Δy   (39)

where rT2 is the SRCA of the transmitter with respect to PN(rR0, y). Then, we can find that ② in (38) can correct this difference. After azimuth frequency mapping, PN and P0 fall into the same range bin. In fact, the echo of TV-BSAR is collected in the domain of (Rbi, t), and Rbi is not orthogonal to t. The range Stolt mapping in (21) changes the echo from (Rbi, t) to (rR, t). The azimuth Stolt mapping in (21) changes the echo from (rR, t) to (rR, y). We can get the SAR image in an orthogonal space of (rR, y). The nonorthogonality is corrected.

5) ③ and ④ represent the residual RCM correction. In detail, ③ corrects the differential RCM which results from the range coordinate variation. ④ corrects the differential RCM which is caused by the Doppler parameter variation.
6) The last term ⑤ means the residual SRC.

From the above analysis, we can find that the azimuth frequency mapping can eliminate the spatial variation along the y-direction. Combined with the range frequency mapping, the 2-D Stolt mapping in this paper can deal with the 2-D spatial variation problem in TV-BSAR.

C. Residual Phase Analysis and Compensation

The main approximation in the aforementioned method is the linear expansion of rT(rR), FT(f, fη), FR(f, fη), fηT(fη), and fηR(fη). The approximation error is the sum of the quadratic and higher order terms in the Taylor series expansion. Because the quadratic term is the most dominant component, these truncated error terms can be approximately determined by the quadratic term

 φE(f, fη; rR, rR0, y, y0) = φEr(f, fη) + φEy(f, fη)
  = 2πΔr^2 [(ar1 pr1(f, fη) + qr1(f, fη) + ar2 FTref + rT0 pr2 + rR0 qr2)/c + ςTr1(fη) gr1 + ςTr2(fη) η0Tref + gr2 fηTref + ςRr2(fη) η0Rref]
  + 2πΔy^2 [(ay1 py1(f, fη) + ay2 FTref + rT0 py2 + rR0 qy2)/c + ςTy1(fη) gy1 + ςRy1(fη)/vR + ςTy2(fη) η0Tref + gy2 fηTref + ςRy2(fη) fηRref]   (40)

where ar2, pr2(f, fη), qr2(f, fη), ςTr2(fη), ςRr2(fη), and gr2 are the second-order Taylor expansion coefficients of rT(rR), FT(f, fη), FR(f, fη), fηT(fη), fηR(fη), and η0T with respect to rR, respectively. ay2, py2(f, fη), qy2(f, fη), ςTy2(fη), ςRy2(fη), and gy2 are the second-order Taylor expansion coefficients of rT, FT(f, fη), FR(f, fη), fηT(fη), fηR(fη), and η0T with respect to y, respectively. The detailed expressions of these coefficients are given in Appendix II. In practice, the second-order Taylor expansion term consists of three parts: ∂²ΦG/∂rR², ∂²ΦG/∂y², and ∂²ΦG/∂rR∂y. It is very difficult to compensate for the ∂²ΦG/∂rR∂y ΔrΔy term in any 2-D domain. In (40), this mixed second-order partial derivative is neglected. This phase error leads to residual RCM, residual azimuth compression, residual SRC, and residual location error. It is not easy to compensate for this phase error by phase multiplication in any 2-D domain directly because it is a function of f and fη and changes with Δr and Δy as well. In addition, because of the 2-D frequency transformation, the frequency variables have changed from (f, fη) to (f′, fη′). We need to express the phase error using (f′, fη′) if we want to compensate for it.

Substituting (27) and (28) into (21) and rearranging the result, we can get the following relation:

 m0 + m1 f + m2 fη = f′   (41)
 n0 + n1 f + n2 fη = fη′   (42)

where m1, m2, n1, and n2 are the first-order coefficients of f and fη, respectively. m0 and n0 are constant.

Solving the binary linear equations composed of (41) and (42), we can express f and fη using f′ and fη′. Substitute the results into (40), and suppose that the middle values of f′ and fη′ are fc′ and fηc′, respectively. Then, φEr(f, fη) and φEy(f, fη) can be approximated as φEr[f(fc′, fη′), fη(fc′, fη′)] and φEy[f(f′, fηc′), fη(f′, fηc′)], respectively. Then, φEr[f(fc′, fη′), fη(fc′, fη′)] and φEy[f(f′, fηc′), fη(f′, fηc′)] can be compensated for in the domains of (Δr, fη′) and (f′, Δy), respectively. Furthermore, φEr[f(fc′, fη′), fη(fc′, fη′)] denotes the residual azimuth compression and azimuth location error. φEy[f(f′, fηc′), fη(f′, fηc′)] is the residual SRC and range location error.

Fig. 3. Block diagram of the proposed generalized Omega-K algorithm for TV-BSAR.

Fig. 3 shows the flowchart of the proposed generalized Omega-K algorithm for TV-BSAR. After the 2-D Stolt transformation, the data are inverse Fourier transformed to the domain of range space and azimuth frequency to compensate for exp{jφEr}. After that, we transform the data into the domain of azimuth space and range frequency to compensate for exp{jφEy}. Finally, an azimuth IFFT is performed to get the final image.

D. Computational Complexity

In comparison with the Omega-K algorithm for traditional monostatic SAR and TI-BSAR, the method in this paper needs an additional azimuth frequency transformation. Therefore, an interpolation operation along the azimuth frequency should be performed.

The aforementioned algorithm uses two FFT/IFFT passes along the azimuth direction and four FFT/IFFT passes along the range direction. The major operations are a 2-D interpolation and three 2-D phase factor multiplications. Suppose that the range sample number is Nr while the azimuth sample number is Na. The total number of real floating-point operations would then be

 10 Na Nr log2(Na) + 20 Na Nr log2(Nr) + 18 Na Nr + 4 (2 Mker − 1) Nr Na   (43)

where Mker is the number of neighbor samples used for the range and azimuth interpolation. Therefore, the computational complexity is of order O(N^2 log2 N), where N is the 1-D size of the data.

E. Limitations

The main approximation of this algorithm comes from the linearization of the transmitter's SRCA using the receiver's. Generally, for TI side-looking and squint-looking modes, this algorithm can work well. When the receiver works in forward-looking mode [35], the quadratic component of the bistatic PTRS with respect to the receiver's closest approach slant range would play a more important role. New linearization methods should be found to solve this problem.

In addition, similar to the Omega-K algorithm for monostatic SAR, in this algorithm, the two platforms are assumed to move with constant velocities. If there is any motion error, the combination of the algorithm with a motion compensation method should be researched.

V. NUMERICAL SIMULATION AND DISCUSSION

To verify the effectiveness of the proposed imaging algorithm, we carry out numerical simulations in this section. In these simulations, the velocities of the transmitter and receiver are always constant and different.

In the first and second cases, the receiver and transmitter are mounted on two airplanes. The simulation parameters are listed in Table I. In case I, the angle between the two flight trajectories is 27°. The transmitter has a squint angle of 30° while the receiver has a squint angle of 13°. In case II, the transmitter and receiver travel along parallel tracks with constant but different velocities. The transmitter has a squint angle of 42°, while the receiver points backward and has a squint angle of −24°. In case III, the transmitter is mounted on a satellite while the receiver is on an airplane. The simulation parameters are listed in Table II. The system parameters of TerraSAR-X are used for the spaceborne transmitter. TerraSAR-X works in
the sliding-spotlight mode. To extend the imaging time, the airborne receiver is in the inverse sliding-spotlight mode, which is a special case of the sliding-spotlight mode. In this case, there is an angle of 14° between the transmitter and receiver flight paths.

In order to highlight the capacity of processing, in the scene, we set 15 point targets, which form a 3 × 5 matrix as shown in Fig. 4. The distance between every two adjacent targets along the x-direction is 500 m while the adjacent distance along the y-axis is 100 m. P1 and P2 are two targets located at the corners of the scene. Their coordinates are (−1000, −100) m and (1000, 100) m, respectively. Target O is set to be the reference target.

TABLE I
SIMULATION PARAMETERS OF AIRBORNE CASES

TABLE II
SIMULATION PARAMETERS OF HYBRID SPACEBORNE/AIRBORNE CASE

Fig. 4. Target area used in the simulation.

Fig. 5. Two-dimensional spectrum variation and spatial domain results during the processing in case I. (a) Two-dimensional spectrum before Stolt mapping. (b) Spatial domain before Stolt mapping. (c) Two-dimensional spectrum after azimuth Stolt mapping. (d) Spatial domain after azimuth mapping. (e) Two-dimensional spectrum after range Stolt mapping. (f) Spatial domain after range Stolt mapping. (g) Two-dimensional spectrum after Stolt mapping. (h) Spatial domain after Stolt mapping.

A. Case I: Airborne With Different Velocity Directions

Based on this case, we first illustrate the processing procedures of the proposed Omega-K algorithm. Second, the imaging performance of this method is analyzed. The comparison with the chirp-scaling method in [17] is done as well. For the sake of simplicity, we use "Ender's method" to indicate the method proposed by Ender in [17].

1) Illustration of Processing Procedures: The 2-D Stolt mapping is the major procedure in this proposed algorithm. Fig. 5(a) is the 2-D spectrum before the 2-D Stolt mapping. The spectrum of the data after RFM and before Stolt transformation shows as a parallelogram. The Doppler bandwidth of the reference target is 455 Hz. However, because of the spatial
variation of Doppler centroids, the total Doppler bandwidth of the imaging scenario is about 480 Hz. The tilt of the spectrum is because the Doppler centroid is a function of range frequency. To show the effects of the azimuth frequency mapping and range frequency mapping separately, Fig. 5(c) and (e) indicates the 2-D frequency spectra after just the azimuth Stolt mapping and after just the range Stolt mapping, respectively. In this case, the azimuth frequency mapping changes the tilt degree of the spectrum slightly. In contrast, the range frequency mapping changes the spectrum shape from a parallelogram to a curved shape. This is very similar to the effect of Stolt mapping in monostatic SAR [28]. Fig. 5(g) gives the spectrum after the range and azimuth frequency mapping. It has a similar shape to that of Fig. 5(e).

Fig. 5(b), (d), (f), and (h) gives the spatial domain results corresponding to the spectra in Fig. 5(a), (c), (e), and (g), respectively. After RFM, the shape of the scenario changes to a nonorthogonal shape from a rectangle in Fig. 5(b). The location of the RFM result is displaced severely along the azimuth and tilts a little along the range direction. This is because of the 2-D spatial variation. Azimuth Stolt mapping mainly finishes the location correction along the range direction. The result is shown in Fig. 5(d). This effect is enlarged in Fig. 6. In Fig. 6, the targets which have the same x coordinate as O are extracted. We can find that the azimuth frequency mapping can correct the range locations to make these targets fall into the same range gate. As for the range frequency mapping, it mainly finishes the residual azimuth compression and the azimuth location correction.

Because of the quadratic error in (40), the spatial result after 2-D Stolt mapping has a quadratic location error. After the phase error compensation, the result is shown in Fig. 7. The location errors of the final result along the x- and y-directions of P1 become −1.5 m and −0.75 m, respectively. For P2, the location errors along the x- and y-directions are −1.5 m and −0.5 m, respectively. These location errors can be corrected when geometry registration is performed.

Fig. 6. Effect of azimuth Stolt mapping. (a) Spatial domain result before azimuth Stolt mapping. (b) Spatial domain result after azimuth Stolt mapping.

Fig. 7. Imaging result after phase error compensation.

2) Focusing Quality: To quantify the precision of the proposed method, points P1, O, and P2 are analyzed in more detail. Fig. 8 shows the contours of points P1, O, and P2. To show the details, the results are interpolated nine times. We can find that the proposed method can focus these targets simultaneously. The imaging results display the good shape of a 2-D sinc function, which is the point spread function of a SAR system. This is because the method in this paper can consider the azimuth spatial variation and the higher order couplings very well. When obtaining the Stolt mapping relationship, we just linearize the 2-D spectrum with the spatial variables. The higher order terms of frequency, i.e., the higher order couplings, are reserved in the Stolt mapping. It can also consider the spatial variations of the Doppler parameters. Fig. 9 gives the contours of points P1, O, and P2 processed by "Ender's method." These targets are partially unfocused. The defocus of O is because "Ender's method" neglects the higher order terms of range–azimuth coupling. This is the inherent drawback of the chirp-scaling algorithm [36]. This neglect results in broadening along the range direction. In addition, the azimuth spatial variation in "Ender's method" can only be regarded as a scaling of azimuth frequency. The azimuth spatial variation of RCM is neglected. Therefore, P1 and P2 cannot be focused at all.

The cross-sectional profiles of the processing results by the method in this paper and "Ender's method" are shown in Figs. 10 and 11. The impulse-response width (IRW), peak sidelobe ratio (PSLR), and integrated sidelobe ratio (ISLR) are listed in these figures. Rectangular windows are used in the range and azimuth processing. As for the range IRW, since the focusing results of the method in this paper are distributed along the receiver closest slant range, using the resolution theory based on gradient, we can get that the range IRW along the receiver closest slant range is 1.09 m, while the theoretical azimuth resolution is 0.31 m. Differently, as for "Ender's method," the focusing results are distributed along the bistatic slant range sum. The theoretical range resolution is 0.886(c/Br) = 2.22 m, where Br is the range signal bandwidth. The theoretical azimuth resolution is 0.49 m.

From Figs. 10 and 11, it can be seen that, for the proposed Omega-K, the measured range resolutions of these three targets agree well with the theoretical value. The azimuth resolution has a maximum broadening of about 2%. PSLR and ISLR both agree well with the theoretical values of −13.27 dB and −10.24 dB, respectively. As for "Ender's method," at the edges of the imaging scene, range PSLR has a degradation of more than 8.3 dB, and ISLR has a degradation of about 5.0 dB. The IRW has a broadening of two times. Azimuth PSLR has a degradation of more than 6.8 dB, and ISLR has a degradation of about 4.9 dB.

In this simulation, the range sample number Nr is 4096, while the azimuth sample number Na is 6144. The number of neighbor samples used in the range and azimuth interpolation is Mker = 8. Using (43), we can get the total number of real floating-point operations: 1.117 × 10^10.
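As a quick check (not part of the paper), evaluating (43) with the stated sample numbers and interpolation kernel length reproduces the quoted operation count; a minimal Python snippet:

import math

def flops(Na, Nr, Mker):
    # Real floating-point operation count of (43)
    return (10 * Na * Nr * math.log2(Na)
            + 20 * Na * Nr * math.log2(Nr)
            + 18 * Na * Nr
            + 4 * (2 * Mker - 1) * Nr * Na)

print(f"{flops(Na=6144, Nr=4096, Mker=8):.3e}")   # about 1.117e+10, as quoted for case I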

Fig. 8. Focusing results by the proposed Omega-K algorithm: Case I. (a) Target P1 . (b) Target O. (c) Target P2 .

Fig. 9. Processing results by “Ender’s method”: Case I. (a) Target P1 . (b) Target O. (c) Target P2 .

Fig. 10. Focusing results by the proposed Omega-K algorithm: Case I. (a) P1 : Range profile. (b) P1 : Azimuth profile. (c) O: Range profile. (d) O: Azimuth
profile. (e) P2 : Range profile. (f) P2 : Azimuth profile.
Fig. 11. Focusing results by "Ender's method": Case I. (a) P1: Range profile. (b) P1: Azimuth profile. (c) O: Range profile. (d) O: Azimuth profile. (e) P2: Range profile. (f) P2: Azimuth profile.

Fig. 12. Focusing results by the proposed Omega-K algorithm: Case II. (a) Target P1. (b) Target O. (c) Target P2.

B. Case II: Airborne With Identical Velocity Direction

In this case, the transmitter and receiver have the same moving direction but different velocities. They also work in squint mode. The bistatic angle is as large as 66°. Fig. 12 shows the imaging results of P1, O, and P2 by using the proposed method. The quality measurements of these results are listed in Fig. 13. It can be found that this method can still focus the raw data very well when the two platforms move in parallel. The theoretical range and azimuth resolutions are 1.38 and 0.47 m, respectively. The quality parameters have a little deviation from the theoretical values.

Fig. 14 gives the imaging result of O by using "Ender's method." In this case, the transmitter works in the high-squint case, and then, the coupling between the azimuth and range directions is very serious. Because of the residual couplings, the result has no focusing effect along the range direction. As "Ender's method" cannot focus the raw data at all, the quality parameters of this method are not listed.

In this simulation, the range sample number Nr and the azimuth sample number Na are both 4096. The number of neighbor samples used in the range and azimuth interpolation is Mker = 8. Using (43), we can get the total number of real floating-point operations: 7.3484 × 10^9.
Fig. 13. Focusing results by the proposed Omega-K algorithm: Case II. (a) P1: Range profile. (b) P1: Azimuth profile. (c) O: Range profile. (d) O: Azimuth profile. (e) P2: Range profile. (f) P2: Azimuth profile.

Fig. 14. Processing result by "Ender's method": Target O in case II.

C. Case III: Hybrid Spaceborne/Airborne Case

In this case, we simulate the performance of the proposed method for the spaceborne/airborne case. Different from the working modes of current hybrid spaceborne/airborne BSAR [21], [32], the receiver has an x-component velocity of 50 m/s. This is more general. Therefore, the angle between the two tracks is 14°. Figs. 15 and 16 are the processing results of P1, O, and P2 by using these two methods, respectively. We can find that, by using the proposed Omega-K algorithm, these three targets can still be well focused. However, "Ender's method" fails to finish the focusing.

The quality parameters of P1, O, and P2 are listed in Fig. 17. The theoretical range and azimuth resolutions are 0.81 and 1.45 m, respectively. The results of the proposed Omega-K have a maximum broadening of 2.6%, and the other quality values all agree with the theoretical values.

VI. EXPERIMENTAL DATA PROCESSING

In 2006–2007, we performed some airborne BSAR experiments, which were the first airborne bistatic experiments in Asia. In one of these experiments, the two platforms operated in TV mode and were mounted on two Yun-5 airplanes. The antennas of the system were two horn antennas, the azimuth beamwidths of which were both 20°. The overlap of the antenna footprints was guaranteed by careful flight planning and pilots' skill. The system worked in X-band. The bandwidth of the transmitted chirp signal was 80 MHz. The pulse repetition frequency (PRF) was 500 Hz. The velocities of the transmitter and receiver were 50 and 42 m/s, respectively. The heights of the transmitter and receiver were both 400 m. In order to capture the echo and achieve time synchronization, a technique named echo glide window CFAR detection was utilized after the A/D device [38]. The frequency and phase synchronization was guaranteed by stable local oscillators. Fig. 18 gives the experimental setup of this experiment.

Fig. 19 is the raw data after PRF alignment. Fig. 20 shows a BSAR image processed by the described Omega-K algorithm.
Fig. 15. Focusing results by the proposed Omega-K algorithm: Case III. (a) Target P1. (b) Target O. (c) Target P2.

Fig. 16. Processing results by "Ender's method": Case III. (a) Target P1. (b) Target O. (c) Target P2.

Fig. 17. Focusing results by the proposed Omega-K algorithm: Case III. (a) P1: Range profile. (b) P1: Azimuth profile. (c) O: Range profile. (d) O: Azimuth profile. (e) P2: Range profile. (f) P2: Azimuth profile.

Fig. 18. System setup in the airborne experiment.

Fig. 19. Raw data after PRF alignment.

Fig. 20. Experimental data processed by the proposed method.

Fig. 21. Optical image of the imaging scene.

Fig. 22. Profiles of two isolated targets. (a) Target PA: Range profile. (b) Target PA: Azimuth profile. (c) Target PB: Range profile. (d) Target PB: Azimuth profile.

Fig. 23. Experimental data processed by "Ender's method."

For comparison, the optical image of the processed scene is shown in Fig. 21. It was taken on the receiving aircraft. The horizontal direction is the slant range while the vertical direction is the azimuth. It can be seen that the proposed algorithm can focus the experimental raw data.

Fig. 22 gives the profiles of two isolated targets. For PA, its measured IRW along the receiver closest slant range is 1.9 m, and the azimuth IRW is 0.5 m. For PB, the measured IRW along the receiver closest slant range is 1.87 m, and the azimuth IRW is 0.5 m.

To compare the imaging performance of the Omega-K algorithm in this paper and "Ender's method," Fig. 23 shows the processed result by "Ender's method." Comparing with Fig. 20, we can find that the method in this paper can obtain a little better performance than "Ender's method." However, due to the motion errors of the two platforms, using this method, we cannot get the same precision as the simulation in the last section.
6612 IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 52, NO. 10, OCTOBER 2014

VII. CONCLUSION

A generalized Omega-K algorithm for TV-BSAR data is presented in this paper. The proposed algorithm is based on the generalized Loffeld's bistatic formula. The 2-D Stolt frequency mapping relationship is derived, which is the foundation of the proposed algorithm. The phase error compensation is discussed as well. The proposed algorithm can handle the 2-D spatial variation and the higher order coupling of TV-BSAR. Simulation results show that the proposed algorithm can effectively focus TV bistatic data with different trajectories, different velocities, high squint angles, and extreme bistatic configurations. Furthermore, this generalized bistatic Omega-K algorithm has been tested successfully with real bistatic data. Through the modeling of the velocities in 3-D space, the algorithm can be extended to cases in which the transmitter and/or the receiver have a z-component velocity.

APPENDIX A
COEFFICIENTS USED IN 2-D STOLT MAPPING

In this appendix, we give the coefficients used in the 2-D Stolt mapping. These coefficients include $a_{r1}$, $a_{y1}$, $g_{r1}$, $g_{y1}$, $p_{r1}$, $p_{y1}$, $q_{r1}$, $q_{y1}$, $\varsigma_{Tr1}(f_\eta)$, $\varsigma_{Rr1}(f_\eta)$, $\varsigma_{Ty1}(f_\eta)$, and $\varsigma_{Ry1}(f_\eta)$:

$$a_{r1} = \left.\frac{\partial r_T(r_R,y)}{\partial r_R}\right|_{r_R=r_{R0},\,y=y_0} = \left.\frac{\partial r_T}{\partial x}\frac{\partial x}{\partial r_R}\right|_{r_R=r_{R0},\,y=y_0} = \frac{-x_T}{\sqrt{x_T^2+h_T^2}}\cos\alpha\,\frac{r_{R0}}{\sqrt{r_{R0}^2-h_R^2}} \quad \text{(I.44)}$$

$$a_{y1} = \left.\frac{\partial r_T(r_R,y)}{\partial y}\right|_{r_R=r_{R0},\,y=y_0} = \left.\frac{\partial r_T}{\partial x}\frac{\partial x}{\partial y}\right|_{r_R=r_{R0},\,y=y_0} = \frac{x_T}{\sqrt{x_T^2+h_T^2}}\sin\alpha \quad \text{(I.45)}$$

$$g_{r1} = \left.\frac{\partial \eta_{0T}(y')}{\partial r_R}\right|_{r_R=r_{R0},\,y=y_0} = \left.\frac{\partial \eta_{0T}}{\partial y'}\frac{\partial y'}{\partial x}\frac{\partial x}{\partial r_R}\right|_{r_R=r_{R0},\,y=y_0} = \frac{1}{v_T}\sin\alpha\,\frac{r_{R0}}{\sqrt{r_{R0}^2-h_R^2}} \quad \text{(I.46)}$$

$$g_{y1} = \left.\frac{\partial \eta_{0T}(y')}{\partial y}\right|_{r_R=r_{R0},\,y=y_0} = \left.\frac{\partial \eta_{0T}}{\partial y'}\frac{\partial y'}{\partial y}\right|_{r_R=r_{R0},\,y=y_0} = \frac{1}{v_T}\cos\alpha. \quad \text{(I.47)}$$

Suppose that

$$\left.\frac{\partial f_{\eta T}}{\partial r_R}\right|_{r_R=r_{R0},\,y=y_0} = \varsigma_{Tr1}(f_\eta), \qquad \left.\frac{\partial f_{\eta R}}{\partial r_R}\right|_{r_R=r_{R0},\,y=y_0} = \varsigma_{Rr1}(f_\eta),$$

$$\left.\frac{\partial f_{\eta T}}{\partial y}\right|_{r_R=r_{R0},\,y=y_0} = \varsigma_{Ty1}(f_\eta), \qquad \left.\frac{\partial f_{\eta R}}{\partial y}\right|_{r_R=r_{R0},\,y=y_0} = \varsigma_{Ry1}(f_\eta). \quad \text{(I.48)}$$

These quantities can be obtained by taking the derivatives of the Doppler coefficients using (11) and (12).

Then, the first-order derivatives of $F_T(f,f_\eta)$ and $F_R(f,f_\eta)$ with respect to $r_R$ and $y$ can be written as

$$p_{r1}(f,f_\eta) = -\frac{c^2 f_{\eta T}^{\mathrm{ref}}\,\varsigma_{Tr1}(f_\eta)}{v_T^2\,F_T^{\mathrm{ref}}(f,f_\eta)} \quad \text{(I.49)}$$

$$q_{r1}(f,f_\eta) = -\frac{c^2 f_{\eta R}^{\mathrm{ref}}\,\varsigma_{Rr1}(f_\eta)}{v_R^2\,F_R^{\mathrm{ref}}(f,f_\eta)} \quad \text{(I.50)}$$

$$p_{y1}(f,f_\eta) = -\frac{c^2 f_{\eta T}^{\mathrm{ref}}\,\varsigma_{Ty1}(f_\eta)}{v_T^2\,F_T^{\mathrm{ref}}(f,f_\eta)} \quad \text{(I.51)}$$

$$q_{y1}(f,f_\eta) = -\frac{c^2 f_{\eta R}^{\mathrm{ref}}\,\varsigma_{Ry1}(f_\eta)}{v_R^2\,F_R^{\mathrm{ref}}(f,f_\eta)}. \quad \text{(I.52)}$$
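The coefficients above, together with their second-order counterparts in Appendix B, can be read as the terms of a Taylor expansion of $r_T$, $\eta_{0T}$, $F_T$, and $F_R$ about the reference point $(r_{R0}, y_0)$, which is how they enter the 2-D Stolt mapping and the residual phase compensation. Purely as an illustration, the sketch below evaluates the geometric first-order coefficients (I.44)-(I.47) for one assumed geometry and shows a central-difference way of approximating the varsigma terms of (I.48) when an analytic Doppler model such as (11) and (12) is not at hand; all variable names and numerical values here are assumptions made for the example, not parameters from the paper.

import numpy as np

# Illustrative geometry (assumed values, not from the experiment).
x_T, h_T = 4.0e3, 3.0e3        # transmitter horizontal offset and height [m]
h_R = 2.0e3                    # receiver height [m]
r_R0, y_0 = 5.0e3, 0.0         # reference receiver slant range and azimuth [m]
v_T = 150.0                    # transmitter velocity [m/s]
alpha = np.deg2rad(20.0)       # assumed angle between the two flight directions

r_T0 = np.hypot(x_T, h_T)                      # sqrt(x_T^2 + h_T^2)
ground_R = np.sqrt(r_R0**2 - h_R**2)           # receiver ground range

a_r1 = (-x_T / r_T0) * np.cos(alpha) * (r_R0 / ground_R)   # (I.44)
a_y1 = (x_T / r_T0) * np.sin(alpha)                        # (I.45)
g_r1 = (np.sin(alpha) / v_T) * (r_R0 / ground_R)           # (I.46)
g_y1 = np.cos(alpha) / v_T                                  # (I.47)
print(a_r1, a_y1, g_r1, g_y1)

def varsigma_first_order(f_eta, r_ref, y_ref, dr=1.0, dy=1.0):
    # Central-difference estimates of the first-order varsigma terms in (I.48),
    # given any callable f_eta(r_R, y) for the Doppler frequency at a fixed
    # azimuth-frequency bin (an assumed interface, not the paper's notation).
    s_r1 = (f_eta(r_ref + dr, y_ref) - f_eta(r_ref - dr, y_ref)) / (2.0 * dr)
    s_y1 = (f_eta(r_ref, y_ref + dy) - f_eta(r_ref, y_ref - dy)) / (2.0 * dy)
    return s_r1, s_y1

# Toy Doppler model (purely illustrative) and its estimated derivatives.
f_model = lambda r_R, y: 1.2e3 - 0.04 * (r_R - r_R0) + 0.3 * (y - y_0)
print(varsigma_first_order(f_model, r_R0, y_0))

For the toy linear model the estimates return the exact slopes; with the analytic Doppler coefficients of (11) and (12), the same quantities are available in closed form, as assumed in the text.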
APPENDIX B
COEFFICIENTS USED IN RESIDUAL PHASE COMPENSATION

In this appendix, we give the coefficients used in the residual phase compensation. These coefficients include $a_{r2}$, $a_{y2}$, $g_{r2}$, $g_{y2}$, $p_{r2}$, $p_{y2}$, $q_{r2}$, $q_{y2}$, $T_{r1}(f_\eta)$, $R_{r1}(f_\eta)$, $T_{y1}(f_\eta)$, and $R_{y1}(f_\eta)$.

In (40), $a_{r2}$ is the second-order coefficient of $r_T$ with respect to $r_R$:

$$a_{r2} = \frac{1}{2}\left.\frac{\partial^2 r_T(r_R,y)}{\partial r_R^2}\right|_{r_R=r_{R0},\,y=y_0} = \frac{1}{2}\left.\frac{\partial}{\partial x}\!\left(\frac{\partial r_T}{\partial x}\frac{\partial x}{\partial r_R}\right)\frac{\partial x}{\partial r_R}\right|_{r_R=r_{R0},\,y=y_0} = \frac{1}{2}\left[\frac{r_{R0}^2\,h_T^2}{\left(r_{R0}^2-h_R^2\right) r_{T0}^3}\cos^2\alpha - \frac{h_R^2\,x_R\,x_T}{\left(r_{R0}^2-h_R^2\right)^2 r_{T0}}\cos\alpha\right]$$

$$a_{y2} = \frac{1}{2}\left.\frac{\partial^2 r_T(r_R,y)}{\partial y^2}\right|_{r_R=r_{R0},\,y=y_0} = \frac{1}{2}\sin^2\alpha\,\frac{h_T^2}{r_{T0}^3} \quad \text{(II.53)}$$

$$g_{r2} = \frac{1}{2}\left.\frac{\partial^2 \eta_{0T}}{\partial r_R^2}\right|_{r_R=r_{R0},\,y=y_0} = -\frac{\sin\alpha}{2 v_T}\,\frac{h_R^2}{\left(r_{R0}^2-h_R^2\right)^{3/2}} \quad \text{(II.54)}$$

$$g_{y2} = \frac{1}{2}\left.\frac{\partial^2 \eta_{0T}(y')}{\partial y^2}\right|_{r_R=r_{R0},\,y=y_0} = 0. \quad \text{(II.55)}$$

Suppose that

$$\left.\frac{\partial^2 f_{\eta T}}{\partial r_R^2}\right|_{r_R=r_{R0},\,y=y_0} = \varsigma_{Tr2}(f_\eta), \qquad \left.\frac{\partial^2 f_{\eta R}}{\partial r_R^2}\right|_{r_R=r_{R0},\,y=y_0} = \varsigma_{Rr2}(f_\eta),$$

$$\left.\frac{\partial^2 f_{\eta T}}{\partial y^2}\right|_{r_R=r_{R0},\,y=y_0} = \varsigma_{Ty2}(f_\eta), \qquad \left.\frac{\partial^2 f_{\eta R}}{\partial y^2}\right|_{r_R=r_{R0},\,y=y_0} = \varsigma_{Ry2}(f_\eta). \quad \text{(II.56)}$$

These quantities can be obtained by taking the derivatives of the Doppler coefficients using (24) and (25).

The second-order derivatives of $F_T(f,f_\eta)$ and $F_R(f,f_\eta)$ with respect to $r_R$ and $y$ can be written as

$$p_{r2}(f,f_\eta) = -\frac{c^2 f_{\eta T}^{\mathrm{ref}}\left[\varsigma_{Tr2}(f_\eta)+\varsigma_{Tr1}^2(f_\eta)\right]}{2 v_T^2\,F_T^{\mathrm{ref}}(f,f_\eta)} + \frac{c^2 f_{\eta T}^{\mathrm{ref}}\,\varsigma_{Tr1}(f_\eta)\,p_{r1}}{2 v_T^2} \quad \text{(II.57)}$$

$$q_{r2}(f,f_\eta) = -\frac{c^2 f_{\eta R}^{\mathrm{ref}}\left[\varsigma_{Rr2}(f_\eta)+\varsigma_{Rr1}^2(f_\eta)\right]}{2 v_R^2\,F_R^{\mathrm{ref}}(f,f_\eta)} + \frac{c^2 f_{\eta R}^{\mathrm{ref}}\,\varsigma_{Rr1}(f_\eta)\,q_{r1}}{2 v_R^2} \quad \text{(II.58)}$$

$$p_{y2}(f,f_\eta) = -\frac{c^2 f_{\eta T}^{\mathrm{ref}}\left[\varsigma_{Ty2}(f_\eta)+\varsigma_{Ty1}^2(f_\eta)\right]}{2 v_T^2\,F_T^{\mathrm{ref}}(f,f_\eta)} + \frac{c^2 f_{\eta T}^{\mathrm{ref}}\,\varsigma_{Ty1}(f_\eta)\,p_{y1}}{2 v_T^2} \quad \text{(II.59)}$$

$$q_{y2}(f,f_\eta) = -\frac{c^2 f_{\eta R}^{\mathrm{ref}}\left[\varsigma_{Ry2}(f_\eta)+\varsigma_{Ry1}^2(f_\eta)\right]}{2 v_R^2\,F_R^{\mathrm{ref}}(f,f_\eta)} + \frac{c^2 f_{\eta R}^{\mathrm{ref}}\,\varsigma_{Ry1}(f_\eta)\,q_{y1}}{2 v_R^2}. \quad \text{(II.60)}$$

The first-order derivatives of $p_{r1}(f,f_\eta)$, $q_{r1}(f,f_\eta)$, $p_{y1}(f,f_\eta)$, and $q_{y1}(f,f_\eta)$ with respect to $f$ are

$$T_{r1}(f_\eta) = \left.\frac{\partial p_{r1}(f,f_\eta)}{\partial f}\right|_{f=0} = \frac{c^2}{v_T^2}\,f_{\eta T}^{\mathrm{ref}}\,\varsigma_{Tr1}(f_\eta)\,\frac{f_0}{\left[F_T^{\mathrm{ref}}(0,f_\eta)\right]^3}$$

$$R_{r1}(f_\eta) = \left.\frac{\partial q_{r1}(f,f_\eta)}{\partial f}\right|_{f=0} = \frac{c^2}{v_R^2}\,f_{\eta R}^{\mathrm{ref}}\,\varsigma_{Rr1}(f_\eta)\,\frac{f_0}{\left[F_R^{\mathrm{ref}}(0,f_\eta)\right]^3} \quad \text{(II.61)}$$

$$T_{y1}(f_\eta) = \left.\frac{\partial p_{y1}(f,f_\eta)}{\partial f}\right|_{f=0} = \frac{c^2}{v_T^2}\,f_{\eta T}^{\mathrm{ref}}\,\varsigma_{Ty1}(f_\eta)\,\frac{f_0}{\left[F_T^{\mathrm{ref}}(0,f_\eta)\right]^3}$$

$$R_{y1}(f_\eta) = \left.\frac{\partial q_{y1}(f,f_\eta)}{\partial f}\right|_{f=0} = \frac{c^2}{v_R^2}\,f_{\eta R}^{\mathrm{ref}}\,\varsigma_{Ry1}(f_\eta)\,\frac{f_0}{\left[F_R^{\mathrm{ref}}(0,f_\eta)\right]^3}. \quad \text{(II.62)}$$

REFERENCES

[1] G. Yates, A. M. Horne, A. P. Blake, and R. Middleton, "Bistatic SAR image formation," Proc. Inst. Elect. Eng.–Radar Sonar Navig., vol. 153, no. 3, pp. 208–213, Jun. 2006.
[2] P. Dubois-Fernandez, H. Cantalloube, B. Vaizan, G. Krieger, R. Horn, M. Wendler, and V. Giroux, "ONERA-DLR bistatic SAR campaign: Planning, data acquisition, and first analysis of bistatic scattering behaviour of natural and urban targets," Proc. Inst. Elect. Eng.–Radar Sonar Navig., vol. 153, no. 3, pp. 214–223, Jun. 2006.
[3] I. Walterscheid, A. R. Brenner, and J. H. G. Ender, "Geometry and system aspects for a bistatic airborne SAR experiment," in Proc. EUSAR, Ulm, Germany, 2004, pp. 567–570.
[4] M. Rodriguez-Cassola, P. Prats, D. Schulze, N. Tous-Ramon, U. Steinbrecher, L. Marotti, M. Nannini, M. Younis, P. Lopez-Dekker, M. Zink, A. Reigber, G. Krieger, and A. Moreira, "First bistatic spaceborne SAR experiments with TanDEM-X," IEEE Geosci. Remote Sens. Lett., vol. 9, no. 1, pp. 33–37, Jan. 2012.
[5] R. Girard, P. F. Lee, and K. James, "The RADARSAT-2/3 topographic mission: An overview," in Proc. IGARSS, Toronto, ON, Canada, Jul. 2002, pp. 1477–1479.
[6] J. H. G. Ender, "A step to bistatic SAR processing," in Proc. EUSAR, Ulm, Germany, May 2004, pp. 359–364.
[7] Y. L. Neo, F. H. Wong, and I. G. Cumming, "Processing of azimuth-invariant bistatic SAR data using the range Doppler algorithm," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 1, pp. 14–21, Jan. 2008.
[8] R. Wang, O. Loffeld, H. Nies, S. Knedlik, and J. Ender, "Chirp-scaling algorithm for bistatic SAR data in the constant-offset configuration," IEEE Trans. Geosci. Remote Sens., vol. 47, no. 3, pp. 952–964, Mar. 2009.
[9] Z. Hua and L. Xingzhao, "An extended nonlinear chirp-scaling algorithm for focusing large-baseline azimuth-invariant bistatic SAR data," IEEE Geosci. Remote Sens. Lett., vol. 6, no. 3, pp. 548–552, Jul. 2009.
[10] Z. H. Zhang, M. D. Xing, J. S. Ding, and Z. Bao, "Focusing parallel bistatic SAR data using the analytic transfer function in the wavenumber domain," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 11, pp. 3633–3645, Nov. 2007.
[11] I. Walterscheid, J. Ender, A. Brenner, and O. Loffeld, "Bistatic SAR processing and experiments," IEEE Trans. Geosci. Remote Sens., vol. 44, no. 10, pp. 2710–2717, Oct. 2006.
[12] V. Giroux, H. Cantalloube, and F. Daout, "An omega-k algorithm for SAR bistatic systems," in Proc. IGARSS, Seoul, Korea, Jul. 2005, pp. 1060–1063.
[13] X. Qiu, D. Hu, and C. Ding, "An Omega-K algorithm with phase error compensation for bistatic SAR of a translational invariant case," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 8, pp. 2224–2232, Aug. 2008.
[14] B. Liu, T. Wang, Q. Wu, and Z. Bao, "Bistatic SAR data focusing using an Omega-K algorithm based on method of series reversion," IEEE Trans. Geosci. Remote Sens., vol. 47, no. 8, pp. 2899–2912, Aug. 2009.
[15] J. Wu, Z. Li, Y. Huang, J. Yang, and Q. H. Liu, "An Omega-K algorithm for translational invariant bistatic SAR based on generalized Loffeld's bistatic formula," IEEE Trans. Geosci. Remote Sens., vol. 52, no. 10, pp. 6699–6714, Oct. 2014.
[16] K. Natroshvili, O. Loffeld, H. Nies, A. M. Ortiz, and S. Knedlik, "Focusing of general bistatic SAR configuration data with 2-D inverse scaled FFT," IEEE Trans. Geosci. Remote Sens., vol. 44, no. 10, pp. 2718–2727, Oct. 2006.
[17] R. Wang, Y. K. Deng, O. Loffeld, H. Nies, I. Walterscheid, T. Espeter, J. Klare, and J. Ender, "Processing the azimuth-variant bistatic SAR data by using monostatic imaging algorithms based on two-dimensional principle of stationary phase," IEEE Trans. Geosci. Remote Sens., vol. 49, no. 10, pp. 3504–3520, Oct. 2011.
[18] M. Antoniou, R. Saini, and M. Cherniakov, "Results of a space-surface bistatic SAR image formation algorithm," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 11, pp. 3359–3371, Nov. 2007.
[19] F. H. Wong, I. G. Cumming, and Y. L. Neo, "Focusing bistatic SAR data using the nonlinear chirp scaling algorithm," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 9, pp. 2493–2505, Sep. 2008.
[20] R. Wang, O. Loffeld, H. Nies, and J. Ender, "Focusing spaceborne/airborne hybrid bistatic SAR data using wavenumber-domain algorithm," IEEE Trans. Geosci. Remote Sens., vol. 47, no. 7, pp. 2275–2283, Jul. 2009.
[21] M. Rodriguez-Cassola, S. V. Baumgartner, G. Krieger, and A. Moreira, "Bistatic TerraSAR-X/F-SAR spaceborne-airborne SAR experiment: Description, data processing, and results," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 2, pp. 781–794, Feb. 2010.
[22] O. Loffeld, H. Nies, V. Peters, and S. Knedlik, "Models and useful relations for bistatic SAR processing," IEEE Trans. Geosci. Remote Sens., vol. 42, no. 10, pp. 2031–2038, Oct. 2004.
[23] R. Wang, O. Loffeld, Q. Ul-Ann, H. Nies, A. Medrano Ortiz, and A. Samarah, "A bistatic point target reference spectrum for general bistatic SAR processing," IEEE Geosci. Remote Sens. Lett., vol. 5, no. 3, pp. 517–521, Jul. 2008.
[24] R. Wang, O. Loffeld, Y. Neo, H. Nies, and Z. Dai, "Extending Loffeld's bistatic formula for the general bistatic SAR configuration," IET Radar Sonar Navig., vol. 4, no. 1, pp. 74–84, Feb. 2010.
[25] J. Wu, J. Yang, Y. Huang, Z. Liu, and H. Yang, "A new look at the point target reference spectrum for bistatic SAR," Progr. Electromagn. Res., vol. 119, pp. 363–379, 2011.
[26] Y. L. Neo, F. Wong, and I. G. Cumming, "A two-dimensional spectrum for bistatic SAR processing using series reversion," IEEE Geosci. Remote Sens. Lett., vol. 4, no. 1, pp. 93–96, Jan. 2007.
[27] R. Bamler, "A comparison of range-Doppler and wavenumber domain SAR focusing algorithms," IEEE Trans. Geosci. Remote Sens., vol. 30, no. 4, pp. 706–713, Jul. 1992.
[28] I. G. Cumming and F. H. Wong, Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation. Norwood, MA, USA: Artech House, 2005.
[29] H. Shin and J. Lim, "Omega-k algorithm for airborne spatial invariant bistatic spotlight SAR imaging," IEEE Trans. Geosci. Remote Sens., vol. 47, no. 1, pp. 238–250, Jan. 2009.
[30] W. G. Carrara, R. S. Goodman, and R. M. Majewski, Spotlight Synthetic Aperture Radar: Signal Processing Algorithms. Boston, MA, USA: Artech House, 1995.
[31] C. Cafforio, C. Prati, and F. Rocca, "SAR data focusing using seismic migration techniques," IEEE Trans. Aerosp. Electron. Syst., vol. 27, no. 2, pp. 194–207, Mar. 1991.
[32] I. Walterscheid, T. Espeter, A. R. Brenner, J. Klare, J. H. G. Ender, H. Nies, R. Wang, and O. Loffeld, "Bistatic SAR experiments with PAMIR and TerraSAR-X—Setup, processing, and image results," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 8, pp. 3268–3279, Aug. 2010.
[33] M. Antoniou, M. Cherniakov, and C. Hu, "Space-surface bistatic SAR image formation algorithms," IEEE Trans. Geosci. Remote Sens., vol. 47, no. 6, pp. 1827–1843, Jun. 2009.
[34] L. Cazzani, C. Colesanti, and D. Leva, "A ground-based parasitic SAR experiment," IEEE Trans. Geosci. Remote Sens., vol. 38, no. 5, pp. 2132–2141, Sep. 2000.
[35] J. Wu, J. Yang, Y. Huang, H. Yang, and H. Wang, "Bistatic forward-looking SAR: Theory and challenges," in Proc. RADAR, Pasadena, CA, USA, May 2009, pp. 1–9.
[36] G. W. Davidson, I. G. Cumming, and M. R. Ito, "A chirp-scaling approach for processing squint mode SAR data," IEEE Trans. Aerosp. Electron. Syst., vol. 32, no. 1, pp. 121–133, Jan. 1996.
[37] G. P. Cardillo, "On the use of the gradient to determine bistatic SAR resolution," in Proc. IEEE Antennas Propag. Soc. Int. Symp., 1990, pp. 1032–1035.
[38] Y. Huang, J. Yang, J. Wu, L. Xian, H. Yang, and Z. Tian, "Vehicleborne bistatic synthetic aperture radar imaging," in Proc. IGARSS, Barcelona, Spain, Jul. 2007, pp. 2164–2166.

Junjie Wu (S'06–M'13) received the B.S., M.S., and Ph.D. degrees in electronic engineering from the University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 2004, 2007, and 2013, respectively.
He is currently an Associate Professor with UESTC. From January 2012 to January 2013, he was a visiting student with the Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA. His research interests include synthetic aperture radar (SAR) imaging, with particular emphasis on bistatic SAR. He is a reviewer for Institution of Engineering and Technology Radar, Sonar and Navigation, IEEE Geoscience and Remote Sensing Letters, the European Association for Signal Processing Journal on Wireless Communications and Networking, and other journals.

Zhongyu Li (S'12) received the B.S. degree in electronic engineering from the University of Electronic Science and Technology of China, Chengdu, China, in 2011, where he is currently working toward the Ph.D. degree.
His research interest is synthetic aperture radar imaging.

Yulin Huang (S'06–M'08) received the B.S. and Ph.D. degrees in electronic engineering from the University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 2002 and 2008, respectively.
He is currently an Associate Professor with UESTC. His research interests include signal processing and radar imaging.

Jianyu Yang (M'06) received the B.S. degree in electronic engineering from the National University of Defense Technology, Changsha, China, in 1984 and the M.S. and Ph.D. degrees in electronic engineering from the University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 1987 and 1991, respectively.
He is currently a Professor with the School of Electronic Engineering, UESTC. His research interests are mainly in synthetic aperture radar and statistical signal processing.

Qing Huo Liu (S'88–M'89–SM'94–F'05) received the B.S. and M.S. degrees in physics from Xiamen University, Xiamen, China, in 1983 and 1986, respectively, and the Ph.D. degree in electrical engineering from the University of Illinois at Urbana-Champaign, Champaign, IL, USA, in 1989.
He was with the University of Illinois at Urbana-Champaign as a Research Assistant from September 1986 to December 1988 and as a Postdoctoral Research Associate from January 1989 to February 1990. He was a Research Scientist and Program Leader with Schlumberger-Doll Research, Ridgefield, CT, USA, from 1990 to 1995. From 1996 to May 1999, he was an Associate Professor with New Mexico State University, Las Cruces, NM, USA. Since June 1999, he has been with Duke University, Durham, NC, USA, where he is currently a Professor of Electrical and Computer Engineering. His research interests include electromagnetics, acoustics, inverse problems, geophysical subsurface sensing, nanophotonics, and biomedical imaging. He has published over 200 papers in refereed journals in these areas.
Dr. Liu is a Fellow of the Acoustical Society of America, a member of Phi Kappa Phi and Tau Beta Pi, and a full member of the U.S. National Committee of Union of Radio Science Commissions B and F. He currently serves as the Deputy Editor in Chief of Progress in Electromagnetics Research, an Associate Editor for the IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, and an Editor for the Journal of Computational Acoustics. He was also a Guest Editor in Chief of the PROCEEDINGS OF THE IEEE for a special issue on large-scale computational electromagnetics published in February 2013. He was the recipient of the 1996 Presidential Early Career Award for Scientists and Engineers from the White House, the 1996 Early Career Research Award from the Environmental Protection Agency, and the 1997 CAREER Award from the National Science Foundation.
