A Generalized Omega-K Algorithm To Process Translationally Variant Bistatic-SAR Data Based on Two-Dimensional Stolt Mapping
Abstract—In translationally variant (TV) bistatic synthetic aperture radar (BSAR), 2-D spatial variation is a major problem to be tackled. In this paper, a generalized Omega-K imaging algorithm to deal with this problem is proposed. The method utilizes a point target reference spectrum of the generalized Loffeld's bistatic formula (LBF) (GLBF). Without the bistatic-deformation term, GLBF is the latest development of LBF. Similar to the monostatic case, it has a much simpler form than other point target reference spectra. Based on the spatial linearization of GLBF, the Stolt mapping relationship is derived. Different from the traditional Omega-K algorithms for monostatic SAR and translationally invariant BSAR, this approach uses a 2-D Stolt frequency transformation. Through this transformation, the method can deal with the 2-D spatial variation. It can also consider the linear spatial variation of Doppler parameters, which is usually not considered in previous publications on bistatic Omega-K algorithms. This method can handle TV-BSAR cases with different trajectories, different velocities, high squint angles, and large bistatic angles. In addition, a compensation method for the phase error caused by the linearization is discussed. Numerical simulations and experimental data processing verify the effectiveness of the proposed method.

Index Terms—Bistatic synthetic aperture radar (BSAR), generalized Loffeld's bistatic formula (LBF) (GLBF), Omega-K algorithm, 2-D spatial variation, 2-D Stolt mapping.

I. INTRODUCTION

… vectors of the transmitter and receiver are not identical. In this mode, the geometry relationship between the transmitter and receiver is not stationary. It is more general than the TI mode for real applications.

For TI-BSAR, many imaging algorithms have been proposed, including the range-Doppler algorithm [7], the chirp-scaling algorithm [8], [9], and the Omega-K algorithm [10]–[14]. Different from TI-BSAR, TV-BSAR is spatially variant in azimuth because the relative position between the transmitter and receiver varies with slow time. Thus, the targets in a given range bin suffer from different range cell migrations (RCMs) and Doppler reference functions. Consequently, the imaging of TV-BSAR is severely azimuth dependent. For this mode, only a few works have been published, including scaled inverse FFT [16], range-Doppler [17], [18], chirp scaling [17], and nonlinear chirp scaling [19]. However, all of these methods neglect the variation of RCM along the azimuth direction. In addition, these methods also ignore the higher order couplings between range and azimuth. In [20], an Omega-K algorithm for hybrid spaceborne/airborne BSAR is proposed. It uses a 1-D frequency transformation like the one in the monostatic case and neglects the variation of RCM along the azimuth direction as well. In [21], a back-projection method for TV-BSAR is presented. However, its focusing in the time domain is quite time consuming.
The 2-D PTRS can be obtained by using the POSP on azimuth

S2df(f, fη; x, y) = S0(f) ∫ exp{−jφb(η, fη; x, y)} dη    (5)

where

φb(η, fη; x, y) = 2π[ ((f + f0)/c)(RT(η; x, y) + RR(η; x, y)) + fη η ]    (6)

and fη is the azimuth frequency. To solve the above integral, the bistatic point of stationary phase (PSP) is required. Because two square root terms are involved in (6), it is difficult to obtain the analytical PSP directly.

Based on the generalized LBF (GLBF) [15], [24], [25], we can get the PTRS of TV-BSAR as

S2df(f, fη; x, y) = S0(f) exp{−jΦG(f, fη; x, y)}    (7)

where

ΦG(f, fη; x, y) = φT(ηPT) + φR(ηPR)
= (2π/c)[ rT FT(f, fη) + rR FR(f, fη) ] + 2π[ fηT(fη) η0T + fηR(fη) η0R ]    (8)

FT(f, fη) = √[ (f + f0)² − (c fηT(fη)/vT)² ]    (9)

FR(f, fη) = √[ (f + f0)² − (c fηR(fη)/vR)² ].    (10)

In (8)–(10), fηT(fη) and fηR(fη) are the Doppler contributions of the transmitter and receiver, respectively

fηT(fη) = fηcT + (fηrT/fηr)(fη − fηc) − [ (fηrT fη3 − fη3T fηr)/fηr³ ](fη − fηc)² + · · ·    (11)

fηR(fη) = fηcR + (fηrR/fηr)(fη − fηc) − [ (fηrR fη3 − fη3R fηr)/fηr³ ](fη − fηc)² + · · ·    (12)

where
fηcT, fηrT, fη3T: centroid, frequency-modulated rate, and third-order term of the transmitter's Doppler contribution, respectively;
fηcR, fηrR, fη3R: centroid, frequency-modulated rate, and third-order term of the receiver's Doppler contribution, respectively;
fηc, fηr, fη3: centroid, frequency-modulated rate, and third-order term of the total Doppler history, with fηc = fηcT + fηcR, fηr = fηrT + fηrR, and fη3 = fη3T + fη3R.

The number of terms used earlier is controlled by the neglected BD term, which should be less than π/4. In [24], higher order terms than the linear one are disregarded.
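To make the structure of (9)–(12) concrete, the following short sketch (Python/NumPy) evaluates the two Doppler contributions and the corresponding range-frequency terms on a small grid. It is only an illustration: all numerical values are hypothetical placeholders rather than parameters from this paper, and the azimuth-time terms of (8) are not evaluated.

import numpy as np

c, f0 = 3e8, 9.65e9          # speed of light, carrier frequency (hypothetical X-band value)
vT, vR = 7600.0, 100.0       # platform speeds (hypothetical)

# Hypothetical Doppler parameters of the transmitter (T), receiver (R), and total history
f_ec_T, f_er_T, f_e3_T = 1200.0, -900.0, 0.5
f_ec_R, f_er_R, f_e3_R = 300.0, -1100.0, 0.2
f_ec, f_er, f_e3 = f_ec_T + f_ec_R, f_er_T + f_er_R, f_e3_T + f_e3_R

def doppler_contribution(f_eta, f_ec_i, f_er_i, f_e3_i):
    # Truncated series (11)/(12): centroid + linear + quadratic term
    d = f_eta - f_ec
    return (f_ec_i
            + (f_er_i / f_er) * d
            - (f_er_i * f_e3 - f_e3_i * f_er) / f_er**3 * d**2)

def range_frequency_term(f, f_eta_i, v_i):
    # Eqs. (9)/(10): F_i(f, f_eta) = sqrt((f + f0)^2 - (c * f_eta_i / v_i)^2)
    return np.sqrt((f + f0)**2 - (c * f_eta_i / v_i)**2)

f = np.linspace(-50e6, 50e6, 5)                    # range frequency samples
f_eta = np.linspace(f_ec - 400, f_ec + 400, 5)     # azimuth frequency samples
ff, fe = np.meshgrid(f, f_eta, indexing="ij")

F_T = range_frequency_term(ff, doppler_contribution(fe, f_ec_T, f_er_T, f_e3_T), vT)
F_R = range_frequency_term(ff, doppler_contribution(fe, f_ec_R, f_er_R, f_e3_R), vR)
print(F_T[0, 0], F_R[0, 0])   # these terms enter the phase (8) as rT*F_T + rR*F_R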
III. OMEGA-K ALGORITHM FOR TV-BSAR

In the Omega-K algorithm, the key is to transform the PTRS phase term in (8) into a form that is linear in both the spatial-domain and frequency-domain coordinates

ΦG(f, fη) ⟹ ΦG(fA, fB) = 2π(A fA + B fB)    (13)

where A and B are the coordinates in the final image along the two dimensions, respectively, and fA and fB are the frequencies corresponding to A and B, respectively. After that, an image can be generated by a 2-D inverse fast Fourier transform (IFFT).

A. Spatial Domain Linearization

To obtain the form of (13), first, the spatial variables should be separated from the 2-D PTRS. This separation can be achieved by a spatial-domain linearization. First, the two linearization variables should be chosen. Considering that the image would generally be generated in the receiver-referenced coordinates, in this paper, we use rR and y to linearize the PTRS. In comparison with the linearization in terms of the bistatic slant range sum in [14], this linearization has a smaller approximation error and can avoid the complicated geometry registration. This is because rR and y are orthogonal to each other and two parts in (8) are already linear in rR and y.

1) Linearization of rT: rT is linearized with respect to rR and y at (rR0, y0) by a 2-D Taylor expansion

rT(rR, y) = rT0 + ar1 Δr + ay1 Δy    (14)

where rT0 = rT(rR0, y0), Δr = rR − rR0, Δy = y − y0, and ar1 and ay1 are the linear coefficients listed in Appendix I.

2) Linearization of FT and FR: Generally, because θsT and θsR change with the spatial coordinates, FT(f, fη) and FR(f, fη) are also spatially variant. In the current publications about BSAR imaging, these spatial variations are always neglected. In this paper, we use a 2-D linear expansion to reflect this spatial variation

FT(f, fη) ≈ FT(f, fη; rR0, y0) + pr1(f, fη) Δr + py1(f, fη) Δy
FR(f, fη) ≈ FR(f, fη; rR0, y0) + qr1(f, fη) Δr + qy1(f, fη) Δy    (15)

where pr1(f, fη) and qr1(f, fη) are the first-order Taylor coefficients of FT(f, fη) and FR(f, fη) with respect to Δr, respectively, and py1(f, fη) and qy1(f, fη) are the first-order Taylor coefficients of FT(f, fη) and FR(f, fη) with respect to y, respectively. Their expressions can be found in Appendix I.

3) Linearization of fηT and fηR: Expand fηT and fηR in terms of Δr and Δy

fηT(fη) ≈ fηT(fη; rR0, y0) + ςTr1(fη) Δr + ςTy1(fη) Δy
fηR(fη) ≈ fηR(fη; rR0, y0) + ςRr1(fη) Δr + ςRy1(fη) Δy    (16)

where ςTr1(fη) and ςRr1(fη) are the first-order Taylor coefficients of fηT(fη) and fηR(fη) with respect to Δr, respectively, and ςTy1(fη) and ςRy1(fη) are the first-order Taylor coefficients of fηT(fη) and fηR(fη) with respect to Δy, respectively. Their expressions can be found in Appendix I.

4) Linearization of η0T: Expand η0T in terms of Δr and Δy

η0T ≈ η0T(f, fη; rR0, y0) + gr1(fη) Δr + gy1(fη) Δy    (17)

where gr1 and gy1 are the first-order coefficients of η0T in terms of Δr and Δy, respectively. They are also listed in Appendix I.

Then, we have

ΦG(f, fη)
≈ 2π[ (rT0 FT^ref + rR0 FR^ref)/c + fηT^ref η0T^ref − yR fηR^ref/vR ]    ①
+ 2πΔr[ (ar1 FT^ref + pr1(f, fη) rT0 + qr1(f, fη) rR0 + FR^ref)/c + ςTr1(fη) η0T^ref − ςRr1(fη) yR/vR + gr1 fηT^ref ]    ②
+ 2πΔy[ (ay1 FT^ref + py1(f, fη) rT0 + qy1(f, fη) rR0)/c + ςTy1(fη) η0T^ref − ςRy1(fη) yR/vR + gy1 fηT^ref + fηR^ref/vR ]    ③
+ 2πΔr²[ (ar1 pr1(f, fη) + qr1(f, fη))/c + ςTr1(fη) gr1 ]    ④
+ 2πΔy²[ ay1 py1(f, fη)/c + ςTy1(fη) gy1 + ςRy1(fη)/vR ]    ⑤
+ 2πΔrΔy[ (ar1 py1(f, fη) + ay1 pr1(f, fη) + qy1(f, fη))/c + ςTr1(fη) gy1 + ςTy1(fη) gr1 + ςRr1(fη)/vR ]    ⑥    (18)

where FT^ref = FT(f, fη; rR0, y0), FR^ref = FR(f, fη; rR0, y0), fηT^ref = fηT(fη; rR0, y0), fηR^ref = fηR(fη; rR0, y0), and η0T^ref = η0T(f, fη; rR0, y0). In the remaining part of this paper, symbols with the superscript "ref" represent the values of the reference target. In (18), term ① is constant with respect to the spatial variables. Therefore, it represents the spatially invariant part, including the bulk RCM, bulk azimuth compression, and bulk second range compression (SRC). It can be compensated for by a reference function multiplication (RFM) in the 2-D frequency domain. Term ② is the linear spatially variant term in Δr, while term ③ is the linear spatially variant term in Δy. Terms ④ and ⑤ are the second-order terms of Δr and Δy, respectively. The last part, term ⑥, is the coupling phase between Δr and Δy. Up to this point, we have separated the spatial variables from the PTRS. The coefficients of Δr and Δy in (18) are functions of the frequency variables only. In the residual part, we use (rR, y) instead of (x, y) to represent the coordinates of the targets. Next, we will discuss the processing of terms ①, ②, and ③.

B. RFM

First, the raw data are transformed into the 2-D frequency domain. After that, the RFM is performed to achieve range compression, bulk azimuth compression, bulk RCM correction, and bulk SRC. Therefore, the RFM function can be written as

SRFM(f, fη; rR0) = S0*(f) exp{ j2π[ (rT0 FT^ref + rR0 FR^ref)/c + fηT^ref η0T^ref − yR fηR^ref/vR ] }.    (19)

After that, the reference target P(rR0, y0) is correctly focused by the RFM, but targets away from P(rR0, y0) are only partially focused. Temporarily disregarding the second-order phase terms and the coupling term of Δr and Δy, the residual phase of a target P(rR, y) is

φRES(f, fη; rR, y, rR0, y0)
= −2π(Δr/c)[ ar1 FT^ref + pr1(f, fη) rT0 + qr1(f, fη) rR0 + FR^ref + ςTr1(fη) η0T^ref c − ςRr1(fη) yR c/vR + gr1 fηT^ref c ]
− 2π(Δy/vR)[ ay1 FT^ref vR/c + py1(f, fη) rT0 vR/c + qy1(f, fη) rR0 vR/c + ςTy1(fη) η0T^ref vR − ςRy1(fη) yR + gy1 fηT^ref vR + fηR^ref ].    (20)

C. Stolt Mapping

The residual phase in (20) is linear in Δr and Δy. Letting the coefficients of Δr and Δy in (20) be equal to the new range and azimuth frequencies, we can get the 2-D Stolt frequency mapping relationship as follows:

f′ + f0 = ar1 FT^ref + pr1(f, fη) rT0 + qr1(f, fη) rR0 + FR^ref + ςTr1(fη) η0T^ref c − ςRr1(fη) yR c/vR + gr1 fηT^ref c
fη′ = ay1 FT^ref vR/c + py1(f, fη) rT0 vR/c + qy1(f, fη) rR0 vR/c + ςTy1(fη) η0T^ref vR − ςRy1(fη) yR + gy1 fηT^ref vR + fηR^ref    (21)

where f′ and fη′ are the mapped frequencies along the range and azimuth directions, respectively.

After the frequency mapping, we have

φRES(f′, fη′; rR, y, rR0, y0) ≈ −2π(f′ + f0) Δr/c − 2π fη′ Δy/vR.    (22)

It is linear in both the new spatial-domain and frequency-domain coordinates. Then, the data can be transformed into the 2-D spatial domain by a 2-D IFFT to get the focused image.
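As an illustration of how the mapping in (21) and (22) might be carried out numerically, the following sketch (Python/NumPy) resamples a 2-D spectrum first along the range-frequency axis and then along the azimuth-frequency axis, followed by a 2-D IFFT. This is a minimal sketch, not the implementation used in this paper: the two mapping functions below are hypothetical placeholders standing in for the right-hand sides of (21), whose coefficients come from Appendix I.

import numpy as np

def resample_axis(spec, old_grid, new_grid, axis):
    # Resample a complex 2-D spectrum along one axis by 1-D linear interpolation,
    # line by line; new_grid holds one column of query frequencies per line.
    spec = np.moveaxis(spec, axis, 0)
    out = np.empty((new_grid.shape[0],) + spec.shape[1:], dtype=complex)
    for k in range(spec.shape[1]):
        out[:, k] = (np.interp(new_grid[:, k], old_grid, spec[:, k].real)
                     + 1j * np.interp(new_grid[:, k], old_grid, spec[:, k].imag))
    return np.moveaxis(out, 0, axis)

# Toy 2-D spectrum S(f, f_eta) on regular grids (values are placeholders)
Nf, Ne = 256, 128
f = np.linspace(-50e6, 50e6, Nf)
f_eta = np.linspace(-600.0, 600.0, Ne)
S = np.random.randn(Nf, Ne) + 1j * np.random.randn(Nf, Ne)

# Hypothetical mapping functions standing in for the two equations of (21):
# for each output sample (f', f_eta') they return the input frequency to read from.
# For simplicity, the output samples are taken on the same grids as the input.
def f_of_fprime(f_prime, f_eta_col):          # inverse of the range mapping
    return 0.98 * f_prime + 1e-3 * f_eta_col  # placeholder coefficients

def feta_of_fetaprime(f_eta_prime, f_row):    # inverse of the azimuth mapping
    return 0.95 * f_eta_prime + 1e-6 * f_row  # placeholder coefficients

# Range-frequency mapping: for every azimuth frequency, read S at f = f(f', f_eta)
src_f = np.stack([f_of_fprime(f, fe) for fe in f_eta], axis=1)        # shape (Nf, Ne)
S1 = resample_axis(S, f, src_f, axis=0)

# Azimuth-frequency mapping: for every range frequency, read S1 at f_eta = f_eta(f_eta', f')
src_fe = np.stack([feta_of_fetaprime(f_eta, fr) for fr in f], axis=1)  # shape (Ne, Nf)
S2 = resample_axis(S1, f_eta, src_fe, axis=1)

image = np.fft.ifft2(S2)   # 2-D IFFT into the (rR, y)-like output domain
print(image.shape)

Linear interpolation is used here only for brevity; a longer interpolation kernel, as quantified by Mker in (43), would normally be preferred.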
D. Stolt Mapping for Some Special Cases

TV-BSAR can be viewed as a general case of BSAR to some extent. Therefore, the imaging algorithm for TV-BSAR should be suitable for some special cases as well. Here, the forms of the Stolt mapping for some special cases are given.

Substituting (27)–(30) into the first equation of (21), we can get

f′ + f0 ≈ [ ar1 DT(fηT^ref) f0 + DR(fηR^ref) f0 + pr1(0, fη) rT0 + qr1(0, fη) rR0 + ςTr1(fη) η0T^ref c − ςRr1(fη) yR c/vR + gr1 fηT^ref c ]    ①
+ [ ar1/DT(fηcT^ref) + 1/DR(fηcR^ref) + Tr1(fηcT^ref) rT0 + Rr1(fηcR^ref) rR0 ] f    ②
+ [ ar1/DT(fηT^ref) − ar1/DT(fηcT^ref) + 1/DR(fηR^ref) − 1/DR(fηcR^ref) ] f    ③
+ [ (Tr1(fη) − Tr1(fηcT^ref)) rT0 + (Rr1(fη) − Rr1(fηcR^ref)) rR0 ] f    ④
+ [ Tr2(fη) rT0 + Rr2(fη) rR0 − ar1 c² (fηT^ref)²/(2 DT³(fηT^ref) vT² f0³) − c² (fηR^ref)²/(2 DR³(fηR^ref) vR² f0³) ] f².    ⑤    (31)

Some short remarks concerning (31) will be helpful to understand the effects of the range frequency mapping.

1) Term ① is a constant shift of the range frequency and does not depend on the range frequency f. It represents the azimuth offset caused by the RFM and the residual azimuth modulation. As shown in the aforementioned analysis, the PTRS of TV-BSAR can be divided into two parts corresponding to the effects of the transmitter and the receiver, respectively. For the sake of simplicity, we use "Function T" and "Function R" to indicate the influence of the transmitter and the receiver on the residual compression, respectively. After the RFM, PM is partially focused. ar1 DT(fηT^ref) f0 is the residual azimuth compression resulting from "Function T." ar1 denotes the projection relationship from the transmitter to the direction of rR. For PM, the SRCA difference of the transmitter is ar1 Δr. DR(fηR^ref) f0 is the residual azimuth compression part of "Function R." gr1 fηT^ref is the differential location between P0 and PM along the transmitter's flight direction resulting from "Function T." After the range frequency mapping, PM and P0 fall into the same azimuth bin.

2) The linear term ② involves a stretching of the range frequency axis by a constant factor. This stretching in range frequency leads to a linear scaling of the fast-time axis. After the RFM, the difference between the range gates of P0 and PM is

ΔRbi(rR, y0; rR0) = rT1/DT[fηcT(rR, y0)] − rT0/DT(fηcT^ref) + rR1/DR[fηcR(rR, y0)] − rR0/DR(fηcR^ref)    (32)

where rT1 and rR1 are the SRCAs of the transmitter and the receiver with respect to PM(rR, y0), respectively, and DT[fηcT(rR, y0)] and DR[fηcR(rR, y0)] are the migration factors at fηcT and fηcR with respect to the target PM(rR, y0), respectively. Moreover, suppose that

DT[fηcT(rR, y0)] ≈ DT(fηcT^ref) + κTr(fηcT^ref) Δr    (33)
DR[fηcR(rR, y0)] ≈ DR(fηcR^ref) + κRr(fηcR^ref) Δr    (34)

where κTr(fηcT^ref) and κRr(fηcR^ref) are the linear coefficients of DT[fηcT(rR, y0)] and DR[fηcR(rR, y0)] with respect to Δr, respectively. Then, (32) becomes

ΔRbi(rR, y0; rR0) ≈ [ ar1/DT[fηcT(rR, y0)] − rT0 κTr(fηcT^ref)/DT²[fηcT(rR, y0)] + 1/DR[fηcR(rR, y0)] − rR0 κRr(fηcR^ref)/DR²[fηcR(rR, y0)] ] Δr.    (35)

Based on the derivations in Appendix II, we have the following relationships:

κTr(fηcT^ref) = −DT²[fηcT(rR, y0)] Tr1(fηcT)    (36)
κRr(fηcR^ref) = −DR²[fηcR(rR, y0)] Rr1(fηcR).    (37)

Therefore, the factor of Δr in (35) equals the factor of f in term ②. We can find that, at fηc, the linear term of the range frequency in the residual phase changes from ΔRbi(rR, y0; rR0) f to Δr f. Therefore, the effect of this constant linear term is to scale the range axis from the bistatic range Rbi to the closest approach range of the receiver rR.

3) Terms ③ and ④ are also linear scalings of the range frequency, but their scaling factors vary with the azimuth frequency. They represent the residual RCM correction. In detail, term ③ corrects the differential RCM resulting from the range coordinate variation, and term ④ corrects the differential RCM caused by the Doppler parameter variation.

4) The last term ⑤ is the residual SRC.

From the above analysis, we can find that the range frequency mapping can eliminate the spatial variation along the rR-direction, including the differential RCM correction, azimuth compression, and SRC.
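The origin of the constant shift, the linear stretching, and the residual SRC in (31) can also be seen from a direct second-order expansion of FT(f, fη) in (9) around f = 0 with fη held fixed: FT ≈ DT f0 + f/DT − c² fηT²/(2 vT² f0³ DT³) f², where DT = √(1 − (c fηT/(vT f0))²) so that FT(0, fη) = DT f0. The short sketch below (with hypothetical numerical values) checks this expansion against the exact expression; it is our own consistency check, not a restatement of (27)–(30).

import numpy as np

c, f0 = 3e8, 9.65e9        # hypothetical carrier frequency
vT = 7600.0                # hypothetical transmitter speed
f_eta_T = 1500.0           # hypothetical transmitter Doppler contribution (held fixed)

B = c * f_eta_T / vT
D_T = np.sqrt(1.0 - (B / f0) ** 2)          # so that F_T(0, f_eta) = D_T * f0

f = np.linspace(-100e6, 100e6, 5)           # range frequency samples

F_exact = np.sqrt((f + f0) ** 2 - B ** 2)   # eq. (9) with f_eta fixed
F_series = (D_T * f0                                               # constant shift
            + f / D_T                                              # linear stretching of the f axis
            - (c ** 2 * f_eta_T ** 2) / (2 * vT ** 2 * f0 ** 3 * D_T ** 3) * f ** 2)  # residual SRC

print(np.max(np.abs(F_exact - F_series)))   # small compared with F_exact itself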
B. Azimuth Frequency Mapping

Substituting (27)–(30) into the second equation of (21), we can get

fη′ ≈ [ ay1 DT(fηT^ref) f0 vR/c + py1(0, fη) rT0 vR/c + qy1(0, fη) rR0 vR/c + ςTy1(fη) η0T^ref vR − ςRy1(fη) yR + gy1 fηT^ref vR + fηR^ref ]    ①
+ (vR/c)[ ay1/DT(fηcT^ref) + Ty1(fηcT^ref) rT0 + Ry1(fηcR^ref) rR0 ] f    ②
+ (vR/c)[ ay1/DT(fηT^ref) − ay1/DT(fηcT^ref) ] f    ③
+ (vR/c)[ (Ty1(fη) − Ty1(fηcT^ref)) rT0 + (Ry1(fη) − Ry1(fηcR^ref)) rR0 ] f    ④
+ (vR/c)[ Ty2(fη) rT0 + Ry2(fη) rR0 − ay1 c² (fηT^ref)²/(2 DT³(fηT^ref) vT² f0³) ] f².    ⑤    (38)

Here, we use PN to discuss the effect of the azimuth frequency mapping.

1) Term ① does not depend on the range frequency. It represents the azimuth offset and the residual azimuth modulation of PN.

2) As for PN, after the RFM, the azimuth compression of "Function R" has been compensated for completely because PN and P0 have the same value of rR. The frequency transformation only needs to compensate for the residual azimuth compression of "Function T" and the Doppler variation.

3) ay1 DT(fηT^ref) f0 vR/c is the residual azimuth compression factor resulting from the transmitter. ay1 denotes the projection relationship between y and rT. For PN, the SRCA difference of the transmitter is ay1 Δy. gy1 fηT^ref vR is the differential location between PN and P0 along the transmitter's flight direction.

4) Term ② is a linear scaling of the range frequency. It denotes the location correction along Δr resulting from the transmitter's flight path and velocity being different from those of the receiver. Because of the azimuth spatial variation, after the RFM, P0 and PN fall into different range gates. After the RFM, the difference between the range gates of P0 and PN is

ΔRbi(rR0, y; y0) = rT2/DT[fηcT(rR0, y)] − rT0/DT(fηcT^ref) + rR0/DR[fηcR(rR0, y)] − rR0/DR(fηcR^ref)
≈ [ ay1/DT(fηcT^ref) − rT0 κTy(fηcT^ref)/DT²(fηcT^ref) − rR0 κRy(fηcR^ref)/DR²(fηcR^ref) ] Δy
= [ ay1/DT(fηcT^ref) + rT0 Ty1(fηcT^ref) + rR0 Ry1(fηcR^ref) ] Δy    (39)

where rT2 is the SRCA of the transmitter with respect to PN(rR0, y). Then, we can find that term ② in (38) can correct this difference. After the azimuth frequency mapping, PN and P0 fall into the same range bin. In fact, the echo of TV-BSAR is collected in the domain of (Rbi, t), and Rbi is not orthogonal to t. The range Stolt mapping in (21) changes the echo from (Rbi, t) to (rR, t), and the azimuth Stolt mapping in (21) changes the echo from (rR, t) to (rR, y). We thus get the SAR image in the orthogonal space of (rR, y); the nonorthogonality is corrected.

5) Terms ③ and ④ represent the residual RCM correction. In detail, term ③ corrects the differential RCM which results from the range coordinate variation, and term ④ corrects the differential RCM which is caused by the Doppler parameter variation.

6) The last term ⑤ is the residual SRC.

From the above analysis, we can find that the azimuth frequency mapping can eliminate the spatial variation along the y-direction. Combined with the range frequency mapping, the 2-D Stolt mapping in this paper can deal with the 2-D spatial variation problem in TV-BSAR.

C. Residual Phase Analysis and Compensation

The main approximation in the aforementioned method is the linear expansion of rT(rR), FT(f, fη), FR(f, fη), fηT(fη), and fηR(fη). The approximation error is the sum of the quadratic and higher order terms in the Taylor series expansion. Because the quadratic term is the most dominant component, these truncated error terms can be approximately determined by the quadratic term

φE(f, fη; rR, rR0, y, y0) = φEr(f, fη) + φEy(f, fη)
= 2πΔr²[ (ar1 pr1(f, fη) + qr1(f, fη) + ar2 FT^ref + rT0 pr2 + rR0 qr2)/c + ςTr1(fη) gr1 + ςTr2(fη) η0T^ref + gr2 fηT^ref + ςRr2(fη) η0R^ref ]
+ 2πΔy²[ (ay1 py1(f, fη) + ay2 FT^ref + rT0 py2 + rR0 qy2)/c + ςTy1(fη) gy1 + ςRy1(fη)/vR + ςTy2(fη) η0T^ref + gy2 fηT^ref + ςRy2(fη) fηR^ref ]    (40)

where ar2, pr2(f, fη), qr2(f, fη), ςTr2(fη), ςRr2(fη), and gr2 are the second-order Taylor expansion coefficients of rT(rR), FT(f, fη), FR(f, fη), fηT(fη), fηR(fη), and η0T with respect to rR, respectively, and ay2, py2(f, fη), qy2(f, fη), ςTy2(fη), ςRy2(fη), and gy2 are the second-order Taylor expansion coefficients of rT, FT(f, fη), FR(f, fη), fηT(fη), fηR(fη), and η0T with respect to y, respectively. The detailed expressions of these coefficients are given in Appendix II.
In practice, the second-order Taylor expansion term consists of three parts: ∂²ΦG/∂rR², ∂²ΦG/∂y², and ∂²ΦG/∂rR∂y. It is very difficult to compensate for the mixed term (∂²ΦG/∂rR∂y)ΔrΔy in any 2-D domain. In (40), this mixed second-order partial derivative is neglected. This phase error leads to residual RCM, residual azimuth compression, residual SRC, and a residual location error. It is not easy to compensate for this phase error by a phase multiplication in any 2-D domain directly because it is a function of f and fη and changes with Δr and Δy as well. In addition, because of the 2-D frequency transformation, the frequency variables have changed from (f, fη) to (f′, fη′). We need to express the phase error using (f′, fη′) if we want to compensate for it.

Substituting (27) and (28) into (21) and rearranging the result, we can get the following relations:

m0 + m1 f + m2 fη = f′    (41)
n0 + n1 f + n2 fη = fη′    (42)

where m1, m2, n1, and n2 are the first-order coefficients of f and fη, respectively, and m0 and n0 are constants.

Solving the pair of linear equations (41) and (42), we can express f and fη using f′ and fη′. Substituting the results into (40), and supposing that the middle values of f′ and fη′ are fc′ and fηc′, respectively, φEr(f, fη) and φEy(f, fη) can be approximated as φEr[f(fc′, fη′), fη(fc′, fη′)] and φEy[f(f′, fηc′), fη(f′, fηc′)], respectively. Then, φEr[f(fc′, fη′), fη(fc′, fη′)] and φEy[f(f′, fηc′), fη(f′, fηc′)] can be compensated for in the domains of (Δr, fη′) and (f′, Δy), respectively. Furthermore, φEr[f(fc′, fη′), fη(fc′, fη′)] denotes the residual azimuth compression and the azimuth location error, while φEy[f(f′, fηc′), fη(f′, fηc′)] denotes the residual SRC and the range location error.

Fig. 3. Block diagram of the proposed generalized Omega-K algorithm for TV-BSAR.

Fig. 3 shows the flowchart of the proposed generalized Omega-K algorithm for TV-BSAR. After the 2-D Stolt transformation, the data are inverse Fourier transformed to the domain of range space and azimuth frequency to compensate for exp{jφEr}. After that, we transform the data into the domain of azimuth space and range frequency to compensate for exp{jφEy}. Finally, an azimuth IFFT is performed to get the final image.

D. Computational Complexity

In comparison with the Omega-K algorithm for traditional monostatic SAR and TI-BSAR, the method in this paper needs an additional azimuth frequency transformation. Therefore, an interpolation operation along the azimuth frequency should be performed.

The aforementioned algorithm uses two FFT/IFFT operations in the azimuth direction and four FFT/IFFT operations in the range direction. The other major operations are a 2-D interpolation and three 2-D phase factor multiplications. Suppose that the range sample number is Nr while the azimuth sample number is Na. The total number of real floating-point operations is then

⋯ + 18 Na Nr + 4(2 Mker − 1) Nr Na    (43)

where Mker is the number of neighbor samples used for the range and azimuth interpolation. Therefore, the computational complexity is of order O(N² log2 N), where N is the 1-D size of the data.

E. Limitations

The main approximation of this algorithm comes from the linearization of the transmitter's SRCA using the receiver's. Generally, for side-looking and squint-looking modes, this algorithm can work well. When the receiver works in forward-looking mode [35], the quadratic component of the bistatic PTRS with respect to the receiver's closest approach slant range plays a more important role, and new linearization methods should be found to solve this problem.

In addition, similar to the Omega-K algorithm for monostatic SAR, the two platforms are assumed to move with constant velocities in this algorithm. If there is any motion error, the combination of the algorithm with a motion compensation method should be researched.

V. NUMERICAL SIMULATION AND DISCUSSION

To verify the effectiveness of the proposed imaging algorithm, we carry out numerical simulations in this section. In these simulations, the velocities of the transmitter and receiver are always constant and different.

In the first and second cases, the receiver and transmitter are mounted on two airplanes. The simulation parameters are listed in Table I. In case I, the angle between the two flight trajectories is 27°. The transmitter has a squint angle of 30° while the receiver has a squint angle of 13°. In case II, the transmitter and receiver travel along parallel tracks with constant but different velocities. The transmitter has a squint angle of 42°, while the receiver points backward and has a squint angle of −24°. In case III, the transmitter is mounted on a satellite while the receiver is on an airplane. The simulation parameters are listed in Table II. The system parameters of TerraSAR-X are used for the spaceborne transmitter. TerraSAR-X works in
TABLE I
SIMULATION PARAMETERS OF AIRBORNE CASES

TABLE II
SIMULATION PARAMETERS OF HYBRID SPACEBORNE/AIRBORNE CASE
Fig. 8. Focusing results by the proposed Omega-K algorithm: Case I. (a) Target P1 . (b) Target O. (c) Target P2 .
Fig. 9. Processing results by “Ender’s method”: Case I. (a) Target P1 . (b) Target O. (c) Target P2 .
Fig. 10. Focusing results by the proposed Omega-K algorithm: Case I. (a) P1 : Range profile. (b) P1 : Azimuth profile. (c) O: Range profile. (d) O: Azimuth
profile. (e) P2 : Range profile. (f) P2 : Azimuth profile.
Fig. 11. Focusing results by “Ender’s method”: Case I. (a) P1 : Range profile. (b) P1 : Azimuth profile. (c) O: Range profile. (d) O: Azimuth profile.
(e) P2 : Range profile. (f) P2 : Azimuth profile.
Fig. 12. Focusing results by the proposed Omega-K algorithm: Case II. (a) Target P1 . (b) Target O. (c) Target P2 .
B. Case II: Airborne With Identical Velocity Direction

In this case, the transmitter and receiver have the same moving direction but different velocities. They also work in squint mode. The bistatic angle is as large as 66°. Fig. 12 shows the imaging results of P1, O, and P2 by using the proposed method. The quality measurements of these results are listed in Fig. 13. It can be found that this method can still focus the raw data very well when the two platforms move in parallel. The theoretical range and azimuth resolutions are 1.38 and 0.47 m, respectively. The quality parameters have a little deviation from the theoretical values.

Fig. 14 gives the imaging result of O by using "Ender's method." In this case, the transmitter works with a high squint angle, and the coupling between the azimuth and range directions is very serious. Because of the residual couplings, the result is not focused along the range direction. As "Ender's method" cannot focus the raw data at all, the quality parameters of this method are not listed.

In this simulation, the range sample number Nr and the azimuth sample number Na are both 4096. The number of neighbor samples used in the range and azimuth interpolation is Mker = 8. Using (43), we get a total of about 7.3484 × 10⁹ real floating-point operations.
Fig. 13. Focusing results by the proposed Omega-K algorithm: Case II. (a) P1 : Range profile. (b) P1 : Azimuth profile. (c) O: Range profile. (d) O: Azimuth
profile. (e) P2 : Range profile. (f) P2 : Azimuth profile.
Fig. 15. Focusing results by the proposed Omega-K algorithm: Case III. (a) Target P1 . (b) Target O. (c) Target P2 .
Fig. 16. Processing results by “Ender’s method”: Case III. (a) Target P1 . (b) Target O. (c) Target P2 .
Fig. 17. Focusing results by the proposed Omega-K algorithm: Case III. (a) P1 : Range profile. (b) P1 : Azimuth profile. (c) O: Range profile. (d) O: Azimuth
profile. (e) P2 : Range profile. (f) P2 : Azimuth profile.
Fig. 22. Profiles of two isolated targets. (a) Target PA : Range profile.
(b) Target PA : Azimuth profile. (c) Target PB : Range profile. (d) Target PB :
Azimuth profile.
[22] O. Loffeld, H. Nies, V. Peters, and S. Knedlik, "Models and useful relations for bistatic SAR processing," IEEE Trans. Geosci. Remote Sens., vol. 42, no. 10, pp. 2031–2038, Oct. 2004.
[23] R. Wang, O. Loffeld, Q. Ul-Ann, H. Nies, A. Medrano Ortiz, and A. Samarah, "A bistatic point target reference spectrum for general bistatic SAR processing," IEEE Geosci. Remote Sens. Lett., vol. 5, no. 3, pp. 517–521, Jul. 2008.
[24] R. Wang, O. Loffeld, Y. Neo, H. Nies, and Z. Dai, "Extending Loffeld's bistatic formula for the general bistatic SAR configuration," IET Radar Sonar Navig., vol. 4, no. 1, pp. 74–84, Feb. 2010.
[25] J. Wu, J. Yang, Y. Huang, Z. Liu, and H. Yang, "A new look at the point target reference spectrum for bistatic SAR," Progr. Electromagn. Res., vol. 119, pp. 363–379, 2011.
[26] Y. L. Neo, F. Wong, and I. G. Cumming, "A two-dimensional spectrum for bistatic SAR processing using series reversion," IEEE Geosci. Remote Sens. Lett., vol. 4, no. 1, pp. 93–96, Jan. 2007.
[27] R. Bamler, "A comparison of range-Doppler and wavenumber domain SAR focusing algorithms," IEEE Trans. Geosci. Remote Sens., vol. 30, no. 4, pp. 706–713, Jul. 1992.
[28] I. G. Cumming and F. H. Wong, Digital Processing of Synthetic Aperture Radar Data: Algorithms and Implementation. Norwood, MA, USA: Artech House, 2005.
[29] H. Shin and J. Lim, "Omega-k algorithm for airborne spatial invariant bistatic spotlight SAR imaging," IEEE Trans. Geosci. Remote Sens., vol. 47, no. 1, pp. 238–250, Jan. 2009.
[30] W. G. Carrara, R. S. Goodman, and R. M. Majewski, Spotlight Synthetic Aperture Radar: Signal Processing Algorithms. Boston, MA, USA: Artech House, 1995.
[31] C. Cafforio, C. Prati, and F. Rocca, "SAR data focusing using seismic migration techniques," IEEE Trans. Aerosp. Electron. Syst., vol. 27, no. 2, pp. 194–207, Mar. 1991.
[32] I. Walterscheid, T. Espeter, A. R. Brenner, J. Klare, J. H. G. Ender, H. Nies, R. Wang, and O. Loffeld, "Bistatic SAR experiments with PAMIR and TerraSAR-X—Setup, processing, and image results," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 8, pp. 3268–3279, Aug. 2010.
[33] M. Antoniou, M. Cherniakov, and C. Hu, "Space-surface bistatic SAR image formation algorithms," IEEE Trans. Geosci. Remote Sens., vol. 47, no. 6, pp. 1827–1843, Jun. 2009.
[34] L. Cazzani, C. Colesanti, and D. Leva, "A ground-based parasitic SAR experiment," IEEE Trans. Geosci. Remote Sens., vol. 38, no. 5, pp. 2132–2141, Sep. 2000.
[35] J. Wu, J. Yang, Y. Huang, H. Yang, and H. Wang, "Bistatic forward-looking SAR: Theory and challenges," in Proc. IEEE Radar Conf., Pasadena, CA, USA, May 2009, pp. 1–9.
[36] G. W. Davidson, I. G. Cumming, and M. R. Ito, "A chirp-scaling approach for processing squint mode SAR data," IEEE Trans. Aerosp. Electron. Syst., vol. 32, no. 1, pp. 121–133, Jan. 1996.
[37] G. P. Cardillo, "On the use of the gradient to determine bistatic SAR resolution," in Proc. IEEE Antennas Propag. Soc. Int. Symp., 1990, pp. 1032–1035.
[38] Y. Huang, J. Yang, J. Wu, L. Xian, H. Yang, and Z. Tian, "Vehicleborne bistatic synthetic aperture radar imaging," in Proc. IGARSS, Barcelona, Spain, Jul. 2007, pp. 2164–2166.

Junjie Wu (S'06–M'13) received the B.S., M.S., and Ph.D. degrees in electronic engineering from the University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 2004, 2007, and 2013, respectively.
He is currently an Associate Professor with UESTC. From January 2012 to January 2013, he was a visiting student with the Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA. His research interests include synthetic aperture radar (SAR) imaging, with particular emphasis on bistatic SAR. He is a reviewer for Institution of Engineering and Technology (IET) Radar, Sonar and Navigation, IEEE Geoscience and Remote Sensing Letters, the European Association for Signal Processing (EURASIP) Journal on Wireless Communications and Networking, and other journals.

Zhongyu Li (S'12) received the B.S. degree in electronic engineering from the University of Electronic Science and Technology of China, Chengdu, China, in 2011, where he is currently working toward the Ph.D. degree.
His research interest is synthetic aperture radar imaging.

Yulin Huang (S'06–M'08) received the B.S. and Ph.D. degrees in electronic engineering from the University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 2002 and 2008, respectively.
He is currently an Associate Professor with UESTC. His research interests include signal processing and radar imaging.

Jianyu Yang (M'06) received the B.S. degree in electronic engineering from the National University of Defense Technology, Changsha, China, in 1984 and the M.S. and Ph.D. degrees in electronic engineering from the University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 1987 and 1991, respectively.
He is currently a Professor with the School of Electronic Engineering, UESTC. His research interests are mainly in synthetic aperture radar and statistical signal processing.

Qing Huo Liu (S'88–M'89–SM'94–F'05) received the B.S. and M.S. degrees in physics from Xiamen University, Xiamen, China, in 1983 and 1986, respectively, and the Ph.D. degree in electrical engineering from the University of Illinois at Urbana-Champaign, Champaign, IL, USA, in 1989.
He was with the University of Illinois at Urbana-Champaign as a Research Assistant from September 1986 to December 1988 and as a Postdoctoral Research Associate from January 1989 to February 1990. He was a Research Scientist and Program Leader with Schlumberger-Doll Research, Ridgefield, CT, USA, from 1990 to 1995. From 1996 to May 1999, he was an Associate Professor with New Mexico State University, Las Cruces, NM, USA. Since June 1999, he has been with Duke University, Durham, NC, USA, where he is currently a Professor of Electrical and Computer Engineering. His research interests include electromagnetics, acoustics, inverse problems, geophysical subsurface sensing, nanophotonics, and biomedical imaging. He has published over 200 papers in refereed journals in these areas.
Dr. Liu is a Fellow of the Acoustical Society of America, a member of Phi Kappa Phi and Tau Beta Pi, and a full member of the U.S. National Committee of Union of Radio Science Commissions B and F. Currently, he serves as the Deputy Editor in Chief of Progress in Electromagnetics Research, an Associate Editor for the IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, and an Editor for the Journal of Computational Acoustics. He is also a Guest Editor in Chief of the PROCEEDINGS OF THE IEEE for a special issue on large-scale computational electromagnetics published in February 2013. He was the recipient of the 1996 Presidential Early Career Award for Scientists and Engineers from the White House, the 1996 Early Career Research Award from the Environmental Protection Agency, and the 1997 CAREER Award from the National Science Foundation.