AVIONICS NAVIGATION
SYSTEMS
SECOND EDITION

Myron Kayton and Walter R. Fried

A WILEY-INTERSCIENCE PUBLICATION

JOHN WILEY & SONS, INC.


New York • Chichester • Weinheim • Brisbane • Singapore • Toronto
This text is printed on acid-free paper.

Copyright © 1997 by John Wiley & Sons, Inc.

All rights reserved. Published simultaneously in Canada.

Reproduction or translation of any part of this work beyond


that permitted by Section 107 or 108 of the 1976 United
States Copyright Act without the permission of the copyright
owner is unlawful. Requests for permission or further
information should be addressed to the Permissions Department,
John Wiley & Sons, Inc., 605 Third Avenue, New York, NY
10158-0012.

This publication is designed to provide accurate and


authoritative information in regard to the subject
matter covered. It is sold with the understanding that
the publisher is not engaged in rendering legal, accounting,
or other professional services. If legal advice or other
expert assistance is required, the services of a competent
professional person should be sought.

Library of Congress Cataloging in Publication Data:


Avionics navigation systems / Myron Kayton, Walter Fried [editors].
p. cm.
Includes bibliographical references.
ISBN 0-471-54795-6 (cloth : alk. paper)
1. Avionics. 2. Aids to air navigation. I. Kayton, Myron.
II. Fried, Walter.
TL695.A82 1996
629.135'1-dc20 96-23729

Printed in the United States of America

10 9
CONTENTS

Preface xvii
Acknowledgments xxi
List of Contributors xxiii

1 Introduction 1
Myron Kayton

1.1 Definitions 1
1.2 Guidance versus Navigation 1
1.3 Categories of Navigation 2
1.4 The Vehicle 3
1.4.1 Civil Aircraft 3
1.4.2 Military Aircraft 5
1.5 Phases of Flight 7
1.5.1 Takeoff 7
1.5.2 Terminal Area 7
1.5.3 En route 7
1.5.4 Approach 8
1.5.5 Landing 8
1.5.6 Missed Approach 9
1.5.7 Surface 9
1.5.8 Weather 9
1.6 Design Trade-offs 9
1.7 Evolution of Air Navigation 11
1.8 Integrated Avionics 15
1.8.1 All Aircraft 15
1.8.2 Military Avionics 16
1.8.3 Architecture 17
1.9 Human Navigator 19

2 The Navigation Equations 21


Myron Kayton

2.1 Introduction 21
2.2 Geometry of the Earth 23

2.3 Coordinate Frames 26


2.4 Dead-Reckoning Computations 29
2.5 Positioning 32
2.5.1 Radio Fixes 32
2.5.2 Line-of-Sight Distance Measurement 33
2.5.3 Ground-Wave One-Way Ranging 35
2.5.4 Ground-Wave Time-Differencing 36
2.6 Terrain-Matching Navigation 37
2.7 Course Computation 38
2.7.1 Range and Bearing Calculation 38
2.7.2 Direct Steering 41
2.7.3 Airway Steering 41
2.7.4 Area Navigation 42
2.8 Navigation Errors 44
2.8.1 Test Data 44
2.8.2 Geometric Dilution of Precision 48
2.9 Digital Charts 49
2.10 Software Development 51
2.11 Future Trends 52
Problems 52

3 Multisensor Navigation Systems 55


James R. Huddle, R. Grover Brown

3.1 Introduction 55
3.2 Inertial System Characteristics 57
3.3 An Integrated Stellar-Inertial System 61
3.4 Integrated Doppler-Inertial Systems 64
3.5 An Airspeed-Damped Inertial System 67
3.6 An Integrated Stellar-Inertial-Doppler System 68
3.7 Position Update of an Inertial System 69
3.8 Noninertial GPS Multisensor Navigation Systems 69
3.9 Filtering of Measurements 70
3.9.1 Single Sensor, Stationary Vehicle 70
3.9.2 Multiple Sensors, Stationary Vehicle 71
3.9.3 Multiple Sensors, Moving Vehicle 72
3.10 Kalman Filter Basics 72
3.10.1 The Process and Measurement Models 73
3.10.2 The Error Covariance Matrix 75
3.10.3 The Recursive Filter 75
3.11 Open-Loop Kalman Filter Mechanization 77
3.12 Closed-Loop Kalman Filter Mechanization 79
3.13 GPS-INS Mechanization 81

3.13.1 Linearizing a Nonlinear Range Measurement 81


3.13.2 GPS Clock Error Model 82
3.13.3 11-State GPS-INS Linear Error Model 83
3.13.4 Elaboration of the 11-State GPS-INS
Error Model 90
3.14 Practical Considerations 91
3.15 Federated System Architecture 93
3.16 Future Trends 96
Problems 96

4 Terrestrial Radio-Navigation Systems 99


Bahar J. Uttam, David H. Amos, Joseph M. Covino, Peter Morris
4.1 Introduction 99
4.2 General Principles 99
4.2.1 Radio Transmission and Reception 99
4.2.2 Propagation and Noise Characteristics 104
4.3 System Design Considerations 111
4.3.1 Radio-Navigation System Types 111
4.3.2 System Performance Parameters 114
4.4 Point Source Systems 116
4.4.1 Direction-Finding 116
4.4.2 Nondirectional Beacons 120
4.4.3 Marker Beacons 121
4.4.4 VHF Omnidirectional Range (VOR) 122
4.4.5 Doppler VOR 126
4.4.6 Distance-Measuring Equipment (DME) 127
4.4.7 Tactical Air Navigation (Tacan) 133
4.4.8 VORTAC 138
4.5 Hyperbolic Systems 138
4.5.1 Loran 138
4.5.2 Omega 155
4.5.3 Decca 171
4.5.4 Chayka 173
4.6 Future Trends 175
Problems 176

5 Satellite Radio Navigation 178


A. 1. Van Dierendonck
5.1 Introduction 178
5.1.1 System Configuration 179
5.2 The Basics of Satellite Radio Navigation 180
5.2.1 Ranging Equations 181

5.2.2 Range-Rate (Change-in-Range) Equations 183


5.2.3 Clock Errors 184
5.3 Orbital Mechanics and Clock Characteristics 184
5.3.1 Orbital Mechanics 184
5.3.2 Clock Characteristics 190
5.4 Atmospheric Effects on Satellite Signals 192
5.4.1 Ionospheric Refraction 192
5.4.2 Tropospheric Refraction 195
5.5 NAVSTAR Global Positioning System 197
5.5.1 Principles of GPS and System Operation 197
5.5.2 GPS Satellite Constellation and Coverage 200
5.5.3 Space Vehicle Configuration 204
5.5.4 The GPS Control Segment 207
5.5.5 GPS Signal Structure 213
5.5.6 The GPS Navigation Message 218
5.5.7 GPS Measurements and the Navigation
Solution 226
5.5.8 Aviation Receiver Characteristics 229
5.5.9 Differential GPS 248
5.5.10 GPS Accuracy 253
5.6 Global Orbiting Navigation Satellite System
(GLONASS) 257
5.6.1 GLONASS Orbits 257
5.6.2 GLONASS Signal Structure 258
5.6.3 The GLONASS Navigation Message 261
5.6.4 Time and Coordinate Systems 262
5.6.5 GLONASS Constellation 262
5.7 GNSS Integrity and Availability 262
5.7.1 Receiver Autonomous Integrity Monitoring
(RAIM) 263
5.7.2 Combined GPS/GLONASS 267
5.7.3 Wide Area Augmentation System (WAAS) 268
5.7.4 Pseudolite Augmentation 275
5.8 Future Trends 278
Problems 279

6 Terrestrial Integrated Radio Communication-Navigation


Systems 283
Walter R. Fried, James A. Kivett, Edgar Westbrook
6.1 Introduction 283
6.2 JTIDS Relative Navigation 284
6.2.1 General Principles 284
6.2.2 JTIDS System Characteristics 285

6.2.3 Clock Synchronization 286


6.2.4 Coordinate Frames and Community
Organization 288
6.2.5 Operational Utility 290
6.2.6 Mechanization 290
6.2.7 Error Characteristics 297
6.2.8 System Accuracy 299
6.3 Position Location Reporting System 299
6.3.1 General Principles 299
6.3.2 System Elements 300
6.3.3 Control Network Structure 301
6.3.4 Waveform Architecture 302
6.3.5 Measurements 304
6.3.6 Position Location and Tracking 306
6.3.7 Tracking Filter 307
6.3.8 Network and Traffic Management 308
6.3.9 System Capacity and Accuracy 309
6.3.10 PLRS User Equipment Characteristics 310
6.3.11 System Enhancements 310
6.4 Future Trends 311
Problems 312

7 Inertial Navigation 313


Daniel A. Tazartes, Myron Kayton, John G. Mark

7.1 Introduction 313
7.2 The System 314
7.3 Instruments 317
7.3.1 Accelerometers 317
7.3.2 Gyroscopes 324
7.3.3 Optical Gyroscopes 326
7.3.4 Mechanical Gyroscopes 342
7.3.5 Future Inertial Instruments 347
7.4 Platforms 348
7.4.1 Analytic Platform (Strapdown) 348
7.4.2 Gimballed Platform 361
7.4.3 Inertial Specifications 364
7.5 Mechanization Equations 365
7.5.1 Coordinate Frames 365
7.5.2 Horizontal Mechanization 368
7.5.3 Vertical Mechanization 373
7.6 Error Analysis 376
7.6.1 Purpose 376

7.6.2 Simulation 376


7.6.3 Error Propagation 377
7.6.4 Total System Error 379
7.7 Alignment 379
7.7.1 Leveling 382
7.7.2 Gyrocompass Alignment 384
7.7.3 Transfer Alignment 386
7.7.4 Attitude and Heading Reference Systems
(AHRS) 389
7.8 Fundamental Limits 389
7.9 Future Trends 389
Problems 390

8 Air-Data Systems 393


Stephen S. Osder
8.1 Introduction 393
8.2 Air-Data Measurements 394
8.2.1 Conventional "Intrusive" Probes 394
8.2.2 Static Pressure 394
8.2.3 Total Pressure 396
8.2.4 Air Temperature 398
8.2.5 Angle of Attack and Angle of Sideslip 399
8.2.6 Air-Data Transducers 400
8.3 Air-Data Equations 402
8.3.1 Altitude 402
8.3.2 Mach Number 405
8.3.3 Calibrated Airspeed 406
8.3.4 True Airspeed 407
8.3.5 Altitude Rate 407
8.4 Air-Data Systems 407
8.4.1 Accuracy Requirements 407
8.4.2 Air-Data Computers 409
8.4.3 Architecture Trends 412
8.5 Specialty Designs 413
8.5.1 Helicopter Air-Data Systems 413
8.5.2 Optical Air-Data Systems 418
8.5.3 Hypersonic Air Data 421
8.6 Calibration and System Test 422
8.6.1 Ground Calibration 422
8.6.2 Flight Calibration 423
8.6.3 Built-in Test (BIT) 423
8.7 Future Trends 424
Problems 424

9 Attitude and Heading References 426


Myron Kayton, Willis G. Wing
9.1 Introduction 426
9.2 Basic Instruments 427
9.2.1 Gyroscopes 427
9.2.2 Gravity Sensors 428
9.3 Vertical References 429
9.3.1 The Averaging Vertical Reference 431
9.3.2 Rate Compensations 433
9.3.3 Acceleration Corrections 434
9.3.4 Maneuver Errors 436
9.4 Heading References 436
9.4.1 Earth's Magnetic Field 437
9.4.2 Aircraft Magnetic Effects 438
9.4.3 The Magnetic Compass Needle 439
9.4.4 Magnetometers 440
9.4.5 Electrical Swinging 443
9.4.6 The Directional Gyroscope 444
9.5 Initial Alignment of Heading References 446
9.6 Future Trends 446
Problems 447

10 Doppler and Altimeter Radars 449


Walter R. Fried, Heinz Buell, James R. Hager
10.1 Doppler Radars 449
10.1.1 Functions and Applications 449
10.1.2 Doppler Radar Principles and
Design Approaches 451
10.1.3 Signal Characteristics 472
10.1.4 Doppler Radar Errors 477
10.1.5 Equipment Configurations 490
10.2 Radar Altimeters 491
10.2.1 Functions and Applications 491
10.2.2 General Principles 492
10.2.3 Pulsed Radar Altimeters 492
10.2.4 FM-CW Radar Altimeter 493
10.2.5 Phase-Coded Pulsed Radar Altimeters 497
10.3 Future Trends 498
Problems 500

11 Mapping and Multimode Radars 503


Jack O. Pearson, Thomson S. Abbott, Jr., Robert H. Jeffers
11.1 Introduction 503

11.2 Radar Pilotage 504


11.3 Semiautomatic Position Fixing 509
11.4 Semiautomatic Position Fixing with Synthetic
Aperture Radars 511
11.4.1 Unfocused Systems 514
11.4.2 Focused Systems 516
11.4.3 Motion Compensation 518
11.5 Precision Velocity Update 522
11.5.1 Mechanization 523
11.5.2 PVU Measurement Errors 525
11.5.3 PVU Kalman Filter 527
11.5.4 PVU Mode Observability Concerns 529
11.6 Terrain Following and Avoidance 529
11.6.1 Radar Mode and Scan Pattern
Implementation 532
11.6.2 Terrain Measurement 534
11.6.3 Aircraft Control 536
11.7 Multimode Radars 538
11.8 Signal Processing 539
11.9 Airborne Weather Radar 540
11.9.1 Radar Reflectivity of Weather Formations 542
11.9.2 Weather Radar Processing 543
11.9.3 Radar Detection of Microburst and
Wind Shear 544
11.10 Future Trends 545
11.10.1 Electronic Scanned Arrays 546
11.10.2 Radar Processing 547
11.10.3 Radar Receiver/Exciter Function 548
11.10.4 Interfaces and Packaging 549
11.10.5 Displays 549
Problems 549

12 Celestial Navigation 551


Edward J. Knobbe, Gerald N. Haas

12.1 Introduction 551


12.1.1 Evolution of Celestial Navigation 551
12.1.2 General System Description 552
12.2 Star Observation Geometry 553
12.3 Theory of Stellar-Inertial Navigation 557
12.3.1 Modeling and Kalman Filtering 558
12.3.2 Information and Observability 562

12.4 Stellar Sensor Design Characteristics 564


12.4.1 Telescope Parameters 564
12.4.2 Star-Signal Power 567
12.4.3 Sky Background Power 568
12.4.4 Starlight Detection 572
12.4.5 Focal Plane Array Processing 573
12.5 Celestial Navigation System Design 575
12.5.1 Time Reference 575
12.5.2 Star Observation and Pointing Errors 576
12.5.3 Stabilized Platform Configuration 578
12.5.4 Strapdown IMU Configurations 581
12.6 Star Catalog Characteristics 583
12.6.1 Star Catalog Contents 584
12.6.2 Star Catalog Size 584
12.6.3 Planet and Moon Avoidance 586
12.6.4 Star Position Corrections 586
12.7 System Calibration and Alignment 590
12.7.1 Factory Calibration 590
12.7.2 Pre-flight and In-flight Calibration and
Alignment 592
12.8 Future Trends 594
Problems 594

13 Landing Systems 597


D. B. Vickers, Richard H. McFarland, William M. Waters, Myron Kayton
13.1 Introduction 597
13.2 Low-Visibility Operations 597
13.3 The Mechanics of the Landing 600
13.3.1 The Approach 600
13.3.2 The Flare Maneuver 603
13.3.3 The Decrab Maneuver and Touchdown 603
13.3.4 Rollout and Taxi 604
13.4 Automatic Landing Systems 605
13.4.1 Guidance and Control Requirements 606
13.4.2 Flare Guidance 606
13.4.3 Lateral Guidance 607
13.5 The Instrument Landing System 608
13.5.1 ILS Guidance Signals 608
13.5.2 The Localizer 613
13.5.3 The Glide Slope 614
13.5.4 ILS Marker Beacons 618
13.5.5 Receivers 618
13.5.6 ILS Limitations 619

13.6 The Microwave-Landing System 620


13.6.1 Signal Format 621
13.6.2 The Angle Functions 621
13.6.3 Data Functions 625
13.6.4 Aircraft Antennas and Receivers 626
13.6.5 Mobile MLS 627
13.6.6 Precision DME (DME/P) 627
13.7 Satellite Landing Systems 628
13.7.1 Augmentation Concepts 628
13.7.2 Position Solutions 629
13.7.3 Research Issues 630
13.8 Carrier-Landing Systems 630
13.8.1 Description of the Problem 630
13.8.2 Optical Landing Aids 633
13.8.3 Electronic Landing Aids 634
13.9 Future Trends 636
13.9.1 Pilot Aids 636
13.9.2 Satellite Landing Aids 638
13.9.3 Airport Surface Navigation 638
13.9.4 Carrier Landing 638
Problems 638

14 Air Traffic Management 642


Clyde A. Miller, John A. Scardina
14.1 Introduction 642
14.1.1 Services Provided to Aircraft Operators 642
14.1.2 Government Responsibilities 643
14.2 Flight Rules and Airspace Organization 643
14.2.1 Visual and Instrument Flight Rules 643
14.2.2 Altimetry 644
14.2.3 Controlled Airspace 645
14.2.4 Uncontrolled Airspace 645
14.2.5 Special Use Airspace 646
14.3 Airways and Procedures 646
14.3.1 Victor Airways and Jet Routes 646
14.3.2 Random Routes 649
14.3.3 Separation Standards 649
14.3.4 Terminal Instrument Procedures 651
14.3.5 Standard Instrument Departures and Arrivals 655
14.4 Phases of Flight 655
14.4.1 Pre-flight Planning 656

14.4.2 Departure 657


14.4.3 En Route 658
14.4.4 Approach and Landing 659
14.4.5 Oceanic 660
14.5 Subsystems 661
14.5.1 Navigation 661
14.5.2 Radar Surveillance 664
14.5.3 Automatic Dependent Surveillance 667
14.5.4 Air-to-Ground Data Link Communications 669
14.5.5 Aviation Weather 672
14.5.6 Automation and Display Subsystem 673
14.5.7 Airborne ATM Subsystems 675
14.6 Facilities and Operations 677
14.6.1 National Traffic Management 677
14.6.2 En-route Facilities 677
14.6.3 Terminal Facilities 679
14.6.4 Airport Facilities 679
14.6.5 Flight Service Facilities 680
14.6.6 Oceanic Facilities 680
14.7 System Capacity 681
14.7.1 Reducing Peak Demand 681
14.7.2 Increasing System Capacity 682
14.8 Airborne Collision Avoidance Systems 684
14.9 Future Trends 686
Problems 689

15 Avionics Interfaces 691


Cary R. Spitzer

15.1 Introduction 691


15.2 Data Buses 691
15.3 Crew Displays 694
15.4 Power 700
15.5 Maintenance 700
15.6 Physical Interface 701
15.7 Future Trends 703
Problems 704

References 705
Index 741
PREFACE

The purpose of this book is to present a unified treatment of the principles and
practices of modern navigation sensors and systems. This second edition is a
total rewrite of the first edition.
During the 28 years since the first edition was published, there have been
tremendous changes in the science and practice of navigation: the introduction
of navigation satellites that provide, for the first time in history, global, con-
tinuous precise navigation; an enormous increase in the speed and memory of
digital computers, accompanied by a sharp decrease in their size and cost; the
invention of clever algorithms, based primarily on Kalman filters, that mix the
outputs of several sensors to produce a best estimate of position, velocity and,
sometimes, of time; and the proliferation of avionics on aircraft, interconnected
by digital data buses, so that navigation is only one of several avionic subsys-
tems.
This book was written for the navigation system engineer, whether user or
designer, who is concerned with the practical application of newly developed
technology, and for the technical specialist who wishes to learn about adja-
cent specialties. It is an engineer-oriented text that will serve a wide spectrum
of readers, from the systems analyst who writes mathematical models to oper-
ations personnel who want to learn about the avionics equipment in their air-
craft. This book applies to civil and military aircraft, helicopters, and unmanned
aerial vehicles. It covers the speed range from hovering helicopters to hyper-
sonic transports. For all those vehicles, it discusses the state-of-the-art and the
development of new systems that are likely to be introduced in the future.
Each chapter first presents basic functions and fundamental principles. It then
discusses design characteristics, equipment configurations, sources of error, and
typical performance levels. It closes with a projection of future trends. Topics
such as comparative performance levels, weights, and costs of equipment are
covered wherever possible. Most chapters assume a knowledge of undergradu-
ate physics and mathematics; some assume a knowledge of electronic circuits.
References are collected at the end of the book, chapter by chapter, for the
interested reader to use as background reading and to pursue the subject in
more depth. The index is comprehensive enough to allow readers to find topics
outside their area of specialty. It includes a glossary of acronyms. Chapters 2
through 15 conclude with illustrative problems that clarify points in the text
and lead the reader into new areas. These problems will be useful to university
instructors who use the text as part of a course in avionics, guidance and control,


or navigation. The chapters are extensively cross-referenced for the readers'


convenience; Section x.y.z. points to Chapter x, Section y.z.
Chapter 1 discusses the role of electronic navigation equipment in the mis-
sion of civil and military aircraft. Chapters 2 and 3 discuss navigation princi-
ples, the equations that are the basis of all navigation systems, the calculation
of navigation errors, and the mechanization of multisensor systems. Chapter 2
describes the principles of terrain-matching navigation systems. The first three
chapters serve as the core for the next nine, which deal with sensors.
Chapter 4 discusses radio propagation on the surface of the Earth and
the method of operation of traditional ground-based radio-navigation systems.
Chapter 5 treats the principles and characteristics of satellite-based radio-
navigation systems, particularly GPS and GLONASS. Chapter 6 covers inte-
grated communication-navigation systems used on battlefields. Chapter 7 dis-
cusses inertial navigation systems that provide navigation and attitude infor-
mation on civil and military aircraft. Chapter 8 describes air-data sensors and
algorithms that compute airspeed, angles of attack and sideslip, and baromet-
ric altitude. Chapter 9 describes the attitude and heading sensors that continue
to be used for attitude-control and dead reckoning on all aircraft. Chapter 10
covers Doppler-radar navigators, which dead-reckon aircraft and military heli-
copters. Chapter 10 also describes radar altimeters that are used on civil aircraft
for landing and on military aircraft for terrain-following and terrain-matching.
Chapter 11 covers airborne mapping and multimode radars, terrain-avoidance
radars, and weather radars. Chapter 12 covers celestial navigation and high-
accuracy stellar-inertial systems.
The last three chapters cover the navigation environment. Chapter 13 dis-
cusses the mechanics of landing, electronic landing aids, and naval carrier-
landing systems. Chapter 14 (also Chapter 1) describes the worldwide air-traffic
management environment in which civil and military aircraft operate. Chapter
15 discusses the interfaces among the navigation devices, other avionic devices,
displays, and electric power.
Readers may wish to consult the first edition for information on systems
that are now obsolete. The first edition contained chapters on analog and digi-
tal computers and displays, which were unfamiliar to many avionics engineers
in the 1970s. It had a chapter on flight control, which is now a subject in its
own right that is of interest to navigation engineers because navigation-derived
steering commands are executed by the flight controls. The authors regret that
this second edition could not include every subject related to aircraft navigation.
Cartography is discussed only as it relates to digital-map data bases and navi-
gation coordinates. Automatic flight control, aerodynamic stability and control,
weapon control, and localization of radar emitters are omitted.
The first edition was written by a small group of authors who spoke with a
single voice. Some of them have retired and some have died; many no longer
work in their former fields. We want to acknowledge those who could not par-
ticipate in the second edition: Richard Andeen, John Andresen, Paul Astholz,
Frank Brady, Sven Dodington, Dr. R. C. Duncan, Alton Moody, Glenn Quasius,

Seymour Schoen, T. J. Thomas, Carl Wiley and Willis Wing. The Acknowledg-
ments explain which first-edition material was re-used.
Due to increasing specialization, the second edition was written by a much
larger, more diverse team, whose members are the foremost current experts in
their fields. We wish to thank them for their generous contributions of time
in preparing drafts, editing, and re-editing in order to give you, the reader, a
coherent, unified book.
While the art and science of navigation is hundreds of years old, the last
50 years have produced exciting new sensors and systems that permit an accu-
racy and level of safety never before seen on moving vehicles. We hope that
this second edition presents the fundamentals and enough details to stimulate
innovation and the development of ever-improving systems of navigation.

MYRON KAYTON
Santa Monica, California

WALTER R. FRIED
Santa Ana, California
January 1997
ACKNOWLEDGMENTS

Dr. Kayton wishes to acknowledge Clarence Asche of Honeywell for photos of


inertial instruments, Anthony Bommarito of Wilcox for information about ILS
and MLS landing aids, Phil Bruner and Wayne Knitter of Litton Industries for
storage of magnetic models in guidance computers, Dennis Cooper, FAA rep-
resentative in Moscow, for information about Russian air traffic control, Walter
Fried for his contributions to Chapters 1, 2, and 9, Professors Frank von Graas
and Per Enge for information about GPS landing aids, Dr. James Huddle of Lit-
ton Guidance and Control for information about mechanization techniques, Dr.
David Y. Hsu of Litton Guidance and Control for using his software to calcu-
late CEPs, the International Civil Aviation Organization for information about
worldwide airspace regulations, Jeppesen-Sanderson for information about dig-
ital aeronautical maps, Dr. Robert Kelly, formerly of Bendix Communications
for the definition of airways based on required navigation performance, Bob
Knutson of Honeywell for information about air data, Dan Martinec of ARINC
for publications, Harold Moses of RTCA for specifications, Bill Murray and Erv
Ulbrich of McDonnell-Douglas for drawings, Norman Peddie and John Quinn
of USGS for information about magnetic models, Walter Schoppe for informa-
tion about naval communication links, and David Scull for various government
documents.
Dr. Kayton included Willis Wing as a co-author of Chapter 9 because so
much of his first edition material was reused, though Mr. Wing did not directly
participate in the second edition.
Mr. Walter Fried wishes to thank Gregory Soloway of GEC Marconi Elec-
tronic Systems Corporation for information on JTIDS terminals.
Dr. Edward Knobbe and Dr. Gerald Haas wish to acknowledge that most of
Sections 12.4.2 and 12.4.3, including Figure 12.7 and Tables 12.2 and 12.3,
were written by Glenn Quasius for the first edition.
Dr. Clyde Miller wishes to thank Gene Wong and J. C. Johns of the FAA and
Capt. Colin Miller of the U.S. Air Force for providing reference materials and
for reviewing various sections of Chapter 14. Chapter 14 does not necessarily
represent the official views of the U.S. government.
Dr. Jack Pearson wishes to acknowledge the use of some of Carl Wiley's
(deceased) material from the first edition.
Dr. Bahar Uttam wishes to acknowledge the use of Sven Dodington's Sec-
tions 4.2, 4.3, and 4.4 from the first edition.
Dr. A. J. Van Dierendonck wishes to acknowledge the contributions of Dr.


R. Grover Brown on Receiver Autonomous Integrity Monitoring, Ed Martin


on the GPS spacecraft, and Jack Klobuchar on ionospheric effects on satellite
navigation.
Mr. Doug Vickers wishes to thank Robert J. Bleeg of Boeing Commercial
Airplane Division and Steve Osder for reviewing the flight-control sections of
Chapter 13.
Dr. William Waters wishes to acknowledge the contributions of Robert Wig-
ginton of the U.S. Naval Electronic Systems Engineering Activity in Section
13.8.
Mr. Edgar Westbrook wishes to thank Mr. Wayne Altrichter of GEC-Marconi
Electronic Systems Corp. for his review of Section 6.2. He wishes to recognize
Mr. Robert C. Snodgrass of the MITRE Corporation (retired) for his many con-
tributions to the development of JTIDS RelNav.
The cover photograph is courtesy of Rockwell.
LIST OF CONTRIBUTORS

THOMSON S. ABBOTT, JR. (Co-author, Chapter 11), Hughes Aircraft Com-


pany, El Segundo, CA

DAVID H. AMOS (Co-author, Chapter 4), Senior Director, Systems Engineer-


ing, Synetics Corporation, Wakefield, MA

R. GROVER BROWN, Ph.D. (Co-author, Chapter 3), Distinguished Professor


Emeritus, Iowa State University, Clear Lake, IA

HEINZ BUELL (Co-author, Chapter 10), Senior Member of Technical Staff,


GEC Marconi Electronic Systems Corporation, Wayne, NJ

JOSEPH M. COVINO (Co-author, Chapter 4), Senior Engineer, Synetics Cor-


poration, Wakefield, MA

WALTER R. FRIED, M.S. (Editor; lead author, Chapters 6 and 10), Consultant,
Hughes Aircraft Company, Santa Ana, CA

GERALD N. HAAS, Ph.D. (Co-author, Chapter 12), Senior Research Engineer,


Northrop-Grumman Electronic Systems, Hawthorne, CA

JAMES R. HAGER (Co-author, Chapter 10), Honeywell Military Avionics,


Minneapolis, MN

JAMES R. HUDDLE, Ph.D. (Lead author, Chapter 3), Chief Scientist and Head
of Advanced System Engineering, Litton Guidance and Control Division,
Woodland Hills, CA

ROBERT H. JEFFERS, Ph.D. (Co-author, Chapter 11), Senior Scientist, Hughes


Aircraft Company, El Segundo, CA

MYRON KAYTON, Ph.D., P.E. (Editor; author of Chapters 1, 2; co-author Chap-


ters 7 and 13; lead author, Chapter 9), Consulting Engineer, Kayton Engi-
neering Company, Santa Monica, CA

JAMES A. KIVETT (Co-author, Chapter 6), Hughes Aircraft Company, El


Segundo, CA

EDWARD J. KNOBBE, Ph.D. (Lead author, Chapter 12), Advanced Systems


Scientist (retired), Northrop-Grumman Electronic Systems, Hawthorne, CA

JOHN G. MARK, Ph.D. (Co-author, Chapter 7), Chief Scientist, Litton Guid-
ance and Control Division, Woodland Hills, CA
RICHARD H. MCFARLAND, Ph.D., P.E. (Co-author, Chapter 13), Director,
Emeritus, Avionics Engineering Center, Ohio University, Athens, OH
CLYDE A. MILLER, Ph.D. (Lead author, Chapter 14), Program Director for
Research, Federal Aviation Administration, Washington, DC
PETER MORRIS (Co-author, Chapter 4), The Analytical Sciences Corporation,
Reading, MA
STEPHEN S. OSDER (Author, Chapter 8), Consultant, formerly McDonnell-
Douglas Fellow, Scottsdale, AZ
JACK O. PEARSON, Ph.D. (Lead author, Chapter 11), Vice President, Radar
and Communication Systems, Hughes Aircraft Company, El Segundo, CA
JOHN A. SCARDINA, Ph.D. (Co-author, Chapter 14), Team Leader for Air Traf-
fic Management, Federal Aviation Administration, Washington, DC
CARY R. SPITZER (Author, Chapter 15), President, AvioniCon, formerly,
National Aeronautics and Space Administration, Williamsburg, VA
DANIEL A. TAZARTES (Lead author, Chapter 7), Senior Member of Technical
Staff, Litton Guidance and Control Division, Woodland Hills, CA
BAHAR UTTAM, Ph.D. (Lead author, Chapter 4), President, Synetics Corpo-
ration, Wakefield, MA
A. J. VAN DIERENDONCK, Ph.D. (Author, Chapter 5), AJ Systems, Los Altos,
CA
D. B. VICKERS, M.S. (Lead author, Chapter 13), Technical Director, Avionics
Engineering Center, Ohio University, Athens, OH
WILLIAM M. WATERS, Ph.D. (Co-author, Chapter 13), Senior Consultant,
Naval Research Laboratory, Washington, DC
EDGAR A. WESTBROOK (Co-author, Chapter 6), Technical Staff, retired, The
MITRE Corporation, Bedford, MA
WILLIS G. WING (Co-author, Chapter 9), Sperry Gyroscope Company,
retired, Glen Head, NY
1 Introduction

1.1 DEFINITIONS

Navigation is the determination of the position and velocity of a moving vehi-


cle. The three components of position and the three components of velocity
make up a six-component state vector that fully describes the translational
motion of the vehicle. Navigation data are usually sent to other on-board sub-
systems, for example, to the flight control, flight management, engine control,
communication control, crew displays, and (if military) weapon-control compu-
ters.
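
As a minimal illustration of the six-component state vector described above, the Python sketch below groups the three position and three velocity components into a single record; the field names and units are notional (Chapter 2 defines the coordinate frames actually used), and the structure is an assumption for illustration, not an interface defined in this book.

    from dataclasses import dataclass

    @dataclass
    class NavStateVector:
        """Illustrative six-component translational state vector (notional fields)."""
        latitude_rad: float    # position: geodetic latitude
        longitude_rad: float   # position: geodetic longitude
        altitude_m: float      # position: altitude above the reference surface
        vel_north_mps: float   # velocity: north component
        vel_east_mps: float    # velocity: east component
        vel_down_mps: float    # velocity: down component

    # A navigation computer would periodically pass such a record to the flight
    # control, flight management, and display subsystems.
    state = NavStateVector(0.593, -2.066, 10668.0, 210.0, 15.0, -0.5)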
Navigation sensors may be located in the vehicle, in another vehicle, on
the ground, or in space. When the state vector is measured and calculated on
board, the process is called navigation. When it is calculated outside the vehicle,
the process is called surveillance or position location. Surveillance information
is employed to prevent collisions among aircraft. The humans and computers
that direct civil air traffic and most military traffic are located in Air Route
Traffic Control Centers on the ground, whereas some military controllers are
based in surveillance aircraft or aircraft carriers. Existing traffic control sys-
tems observe the position of aircraft using sensors outside the aircraft (e.g.,
surveillance radars) or reports of position from the aircraft itself. "Automatic
dependent surveillance" is a term for the reporting of position, measured by
sensors in an aircraft, to a traffic control center.
Traditionally, ship navigation included the art of pilotage: entering and leav-
ing port, making use of wind and tides, and knowing the coasts and sea condi-
tions. However, in modern usage, navigation is confined to the measurement of
the state vector. The handling of the vehicle is called guidance; more specifi-
cally, it is called conning for ships, flight control for aircraft, and attitude control
for spacecraft. This book is concerned only with the navigation of manned and
unmanned aircraft. The calculation of the navigation state vector requires the
definition of a navigation coordinate frame (as discussed in Chapter 2).

1.2 GUIDANCE VERSUS NAVIGATION

The term "guidance" has two meanings, both of which are different from "nav-
igation":


1. Steering toward a destination of known position from the aircraft's present


position. The steering equations can be derived from a plane triangle for
nearby destinations or from a spherical triangle for distant destinations
(Chapter 2; see the computational sketch following this list).
2. Steering toward a destination without explicitly measuring the state vec-
tor. A guided vehicle can home on radio, infrared, or visual emissions.
Guidance toward a moving target is usually of interest for military tactical
missiles in which a steering algorithm ensures impact within the maneu-
ver and fuel constraints of the interceptor. One of several related guid-
ance algorithms, collectively called proportional navigation, processes
sensor data and steers the vehicle to impact. Guidance toward a fixed tar-
get involves beam-riding, as in the Instrument Landing System (Chapter
13).
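
As a rough computational sketch of the steering geometry mentioned in item 1 (Chapter 2 derives the actual range and bearing equations), the Python fragment below computes great-circle range and initial bearing on a spherical Earth; the Earth radius and the example coordinates are assumptions for illustration only.

    import math

    def range_and_bearing(lat1, lon1, lat2, lon2, earth_radius_m=6371000.0):
        """Great-circle range (meters) and initial bearing (degrees true), point 1 to point 2.

        Spherical-Earth approximation; latitude/longitude inputs in degrees.
        """
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        dphi = phi2 - phi1

        # Haversine formula for the central angle between the two points
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
        central_angle = 2 * math.asin(math.sqrt(a))

        # Initial bearing of the great circle, measured clockwise from true north
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        bearing = math.degrees(math.atan2(y, x)) % 360.0

        return earth_radius_m * central_angle, bearing

    # Example: roughly Los Angeles to roughly New York (coordinates illustrative)
    range_m, bearing_deg = range_and_bearing(33.94, -118.41, 40.64, -73.78)

For nearby destinations, the same steering problem reduces to a plane-triangle calculation in local east-north coordinates.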

1.3 CATEGORIES OF NAVIGATION

Navigation systems can be categorized as positioning or dead-reckoning. Posi-


tioning systems measure the state vector without regard to the path traveled by
the vehicle in the past. There are three kinds of positioning systems:

1. Radio systems (Chapters 4 to 6). They consist of a network of transmit-


ters (sometimes also receivers) on the ground, in satellites, or on other
vehicles. The airborne navigation set detects the transmissions and com-
putes its position relative to the known positions of the stations in the
navigation coordinate frame. The aircraft's velocity is measured from the
Doppler shift of the transmissions or from a sequence of position mea-
surements.
2. Celestial systems (Chapter 12). They compute position by measuring the
elevation and azimuth of celestial bodies relative to the navigation coor-
dinate frame at precisely known times. Celestial navigation is used in
special-purpose high-altitude aircraft in conjunction with an inertial nav-
igator. Manual celestial navigation was practiced at sea for millennia and
in aircraft from the 1930s to the 1960s.
3. Mapping navigation systems (Section 2.6). They observe images of the
ground, profiles of altitude, or other external features.

Dead-reckoning navigation systems derive their state vector from a contin-


uous series of measurements relative to an initial position. There are two kinds
of dead-reckoning measurements:

1. Aircraft heading and either speed or acceleration. For example, heading


can be measured with gyroscopes (Chapter 7) or magnetic compasses
(Chapter 9), while speed can be measured with air-data sensors (Chapter
8) or Doppler radars (Chapter 10). Vector acceleration is measured with


inertial sensors (Chapter 7).
2. Emissions from continuous-wave radio stations. They create ambiguous
"lanes" (Chapter 4) that must be counted to keep track of coarse position.
Their phase is measured for fine positioning. They must be reinitialized
after any gap in radio coverage.

Dead-reckoning systems must be re-initialized as errors accumulate and if


electric power is lost.

1.4 THE VEHICLE

1.4.1 Civil Aircraft


The civil aviation industry consists of air carriers and general aviation. Air
carriers operate large aircraft used on trunk routes and small aircraft used in
commuter service. General aviation ranges from single-place crop dusters to
well-equipped four-engine corporate jets.
Most air carriers and general-aviation jet aircraft operate exclusively in
developed areas where ground-based radio aids are plentiful. Others operate
over oceans and undeveloped areas where, before the Global Positioning Sys-
tem (GPS, Chapter 5), navigation aids were nonexistent. Before the 1970s, such
aircraft had astrodomes through which a human navigator took celestial fixes
with a bubble-sextant (Chapter 12). From the 1970s to 2000, aircraft flying
over oceans and undeveloped areas used unaided inertial systems or Omega
(Chapter 4). By the year 2000, most of these aircraft will use GPS alone or in
combination with inertial systems (Chapter 7). Beginning in the mid-1980s, the
US-FAA allowed overwater flight with a single long-range navigation set and
a separate single long-range communication set.
Simple general-aviation aircraft (including helicopters) operate over short
routes, have two or fewer engines, and are flown by one or two pilots. They
are used for water drops on fires, search-and-rescue, ferrying crews to offshore
oil platforms, police patrols, interplant shuttles, crop dusting, and carrying logs
from forests, for example. Each usage has its own navigation requirements. The
simplest aircraft navigate visually or with Loran or GPS sets; the more complex
aircraft use the same navigation equipment as do air carriers. In 1996 civil
helicopters used VOR/DME (see Chapter 4) in developed areas. They landed
visually because their approach paths were too steep for the instrument landing
system (ILS, Chapter 13). Many will adopt GPS for instrument approaches.
Civil aircraft fly in a benign environment; the major electrical stresses on
avionic equipment are caused by lightning and electric-power transients; the
major mechanical stresses are caused by air turbulence, hard landings, and abu-
sive handling by maintenance technicians. Figure 1.1 shows the antenna farm
and avionics bays on an advanced transport that is outfitted for civil and military
usage.

Figure 1.1 Avionics placement on multi-purpose transport (Courtesy of McDonnell Douglas,
modified by author).

The avionics bay is below the cockpit in the space between the
radome and nose wheel well (in many civil aircraft, the avionics bay is aft of
the nosewheel). Avionics and air-data sensors are located in the bay. Access is
beneath the aircraft.
Trans-Pacific hypersonic aircraft may be developed in the twenty-first cen-
tury that will navigate as does the Space Shuttle: inertial boost, GPS or celestial
midcourse, and GPS or other radio approach. They will compete with electronic
mail and teleconferences.

1.4.2 Military Aircraft


Fixed-wing military aircraft can be divided as follows:

1. Interceptors and combat air patrols. These small, high-climb-rate aircraft


protect the homeland, a naval fleet, or an invasion force by seeking and
destroying invading bombers, cruise missiles, and aircraft that carry con-
traband. Interceptors are vectored to their targets by ground-based, ship-
borne, or airborne command posts. Interceptors carry on-board air-to-air
radar (Chapter 11) to close on their targets. They use inertial navigation
(Chapter 7) or Tacan (Chapter 4) to return to their bases. GPS may replace
Tacan for returning to fixed bases, leaving Tacan for returning to aircraft
carriers.
2. Close-air support. These medium-sized aircraft deliver weapons in sup-
port of land armies. They may attack troops, tanks, convoys, or command
centers. They have inertial navigators or GPS to locate the approximate
position of targets and have sensors (e.g., optics and moving-target-indi-
cating (MTI) radar) to locate the precise position of the targets. They
carry communication systems that keep them in contact with local troop
commanders and airborne command posts. These aircraft have relied on
inertial navigation and Tacan to return to their bases. Close-air support
aircraft carry inertial navigators as precise attitude references for optical
sensors and as velocity references for releasing weapons.
3. Interdiction. These medium-sized and large aircraft strike behind enemy
lines to attack strategic targets such as factories, power plants, and mil-
itary installations. Nuclear strategic bombers and fighter bombers are
included in this category. These aircraft carry the most precise naviga-
tion systems, based on inertial, GPS, and celestial sensors. They may
obtain en-route position fixes with optical or radar image comparators
or terrain matching (Section 2.6). Inertial navigators provide precise atti-
tude and velocity references for pointing terminal-area optic sensors and
for releasing weapons. Interdiction aircraft often have sensors that find
tanker aircraft and allow formation flight in instrument weather condi-
tions. Flying at treetop level to avoid enemy radars complicates the task
of navigation.

4. Cargo carriers. These aircraft have the same navigation requirements as


do civil aircraft; in addition, they drop cargos by parachute and refuel
from tankers. Cargo drops require flight along a predetermined path and
release at predetermined positions. Cargo aircraft are also sometimes out-
fitted as refueling-tankers and mobile hospitals. Tankers are equipped
with radar beacons to aid in rendezvous. They may be asked to make
Category III landings at third-world airports.
5. Reconnaissance aircraft. These aircraft collect photographic and elec-
tronic-signals data. They navigate precisely in order to annotate the data
and fly close to hostile borders. They measure velocity precisely in order
to compensate cameras and synthetic-aperture radars for vehicle motion.
6. Helicopter and short-takeoff-and-landing (STOL) vehicles. Military heli-
copters often support troops, for example, to attack tanks, to suppress
artillery and small-arms fire, and to transport soldiers and casualties. They
search for and destroy submarines from their bases on large ships. They
measure position, so they can locate targets in the coordinate frames estab-
lished by command posts. Most of them navigate by visual pilotage. Some
Navy and Army helicopters dead-reckon with Doppler radar and compass
(Chapters 9 and 10), using Tacan to return to their ships or land bases.
They measure velocity precisely, so they can hover, hand over targets,
launch weapons, and transition from vertical to horizontal flight. Airspeed
is difficult to measure due to the downwash from the rotors. Doppler radar
can establish a coordinate frame fixed in the moving ocean surface that
is useful when working with submarine-detecting sonobuoys. Search-and-
rescue helicopters carry receivers that detect and direction-find emergency
locator transmitters [8]. Complex helicopter weapon-platforms carry iner-
tial navigators and optical imagers for locating targets and for landing.
7. Unmanned air vehicles (once called "remotely piloted vehicles"). They
range in size from model airplanes to ten thousand kilograms. They are
used as target drones, reconnaissance vehicles, and strategic bombers.
They attack high-risk targets (radiation-emitting antennas and artillery)
without endangering the lives of a crew. Some have elaborate inertial,
map-matching, and acoustic sensors; some carry Doppler radars. They
often navigate inertially until their on-board optical or infrared sensors
acquire the target, then guide themselves to impact with submeter accu-
racy.

Virtually every military aircraft carries an instrument-landing system (ILS)


receiver (Chapter 13). In the past, some relied on a "ground-controlled
approach" in which a human, watching a radar display on the ground or on
an aircraft carrier, radioed steering commands to the pilot. The special prob-
lems of landing on an aircraft carrier are discussed in Chapter 13. Military air-
craft often engage in high-speed, high-g, low-altitude maneuvers that challenge
the mechanical design of on-board avionics. Guns and rocket launchers impose

shock and vibration loads. By the year 2000, most military aircraft will carry
GPS receivers.

1.5 PHASES OF FLIGHT

1.5.1 Takeoff
The takeoff phase begins upon taxiing onto the runway and ends when climb-
out is established on the projected runway centerline. The aircraft is guided
along the centerline by hand-flying or a coupled autopilot based on steering
signals (from an ILS localizer since 1945). Two important speed measurements
are made on the runway. The highest ground speed at which an aborted takeoff
is possible is precomputed and compared, during the takeoff run, to the actual
ground speed as displayed by the navigation system. The airspeed at which the
nose is lifted ("rotation") is precalculated and compared to the actual airspeed
as displayed by the air-data system. Barometric altitude rate or GPS-derived
altitude rate (inertially smoothed) is measured and monitored.

1.5.2 Terminal Area


The terminal phase consists of departure and approach subphases. Departure
begins when the aircraft maneuvers away from the projected runway centerline
and ends when it leaves the terminal-control area (by which time it is established
on an airway). Approach begins when the aircraft enters the terminal area (by
which time it has left the airway) and ends when it intercepts the landing aid at
an approach fix. In 1996, vertical navigation was based on barometric altitude,
and heading vectors were assigned by traffic controllers. Major airports have
standard approach and departure routes unique to each runway. In the United
States, the desired terminal-area navigation accuracy is 1.7 nmi, 2-sigma per
Advisory Circular 20-130. Further details are in Chapter 14.

1.5.3 En Route
The en-route phase leads from the origin to the destination and alternate des-
tinations (an alternate destination is required of civil aircraft operating under
instrument flight rules). From the 1930s to the 1990s, airways were defined
by navigation aids over land and by latitude-longitude fixes over water. The
width of airways and their lateral separation depended on the quality of the
defining navaids and the distance between them. The introduction of inertial
navigation systems and DME in the 1970s caused aviation authorities to cre-
ate "area-navigation" airways (RNAV) that do not always interconnect VOR
navaids [7: AC-90-45A] (see Section 2.7.4).
Beginning in the 1990s, GPS has allowed precise navigation anywhere, not
just on airways. Given the extensive use of on-board collision-avoidance equip-

ment and the trend toward reducing government budgets, "free-flight" is being
introduced in controlled airspace. Each aircraft would agree on a route before
takeoff and then be free to change the route, after interaircraft communication
verified that the risk of collision is sufficiently low. En-route surveillance by
independent ground-based radars may disappear or be replaced by position fixes
and reports via com-nav satellites. Busy terminal areas and airport surfaces are
likely to remain under central, positive control.
In the United States in 1996, the en-route navigation error must be less
than 2.8 nmi over land and 12 nmi over oceans (2-sigma) [7: AC-20-130]. As
regional maps become available in digital form with aeronautical annotation
(e.g., minimum en-route altitude), aircraft in undeveloped areas will use GPS
for en-route navigation and nonprecision approaches.

1.5.4 Approach
The approach phase begins at acquisition of the landing aid and continues until
the airport is in sight or the aircraft is on the runway, depending on the capa-
bilities of the landing aid (Chapter 13).
During an approach, the decision height (DH) is the altitude above the run-
way at which the approach must be aborted if the runway is not in sight. The
better the landing aids, the lower the decision height. Decision heights are pub-
lished for each runway at each airport (Chapter 13). The decision height for a
Category III landing is 100 ft or less. By law, an approach may not even be
attempted unless the horizontal visibility, measured by a runway visual range
(RVR) instrument, exceeds a threshold that ranges from zero (Category IIIC,
not approved anywhere in 1996) to 800 meters (Category I). A nonprecision
approach has electronic guidance only in the horizontal direction. An aircraft
executing a nonprecision approach must abort if the runway is not visible at
the minimum descent altitude, which is typically 700 ft above the runway.
In 1996, civil aircraft outside the ex-Soviet bloc used the Instrument Land-
ing System for low-ceiling, low-visibility approaches. A Microwave Landing
System had been approved by the International Civil Aviation Organization for
precision approaches and was being installed at major international airports,
especially in Europe. In the United States, Loran and GPS had been approved
for nonprecision approaches at many airports. (Landing aids are discussed in
Chapter 13.)

1.5.5 Landing
The landing phase begins at the decision height (when the runway is in sight)
and ends when the aircraft exits the runway. Navigation during flare and decrab
may be visual or the navigation set's electrical output may be coupled to an
autopilot. A radio altimeter measures the height of the main landing gear above
the runway for guiding the flare. The rollout is guided by the landing aid (e.g.,
the ILS localizer). Landing navigation is described in Chapter 13.

1.5.6 Missed Approach


A missed approach is initiated at the pilot's option or at the traffic controller's
request, typically because of poor visibility, poor alignment with the runway,
equipment failure, or conflicting traffic. The flight path and altitude profile for
a missed approach are published on the approach plates. The missed approach
consists of a climb to a predetermined holding fix at which the aircraft awaits
further instructions. Terminal area navigation aids are used.

1.5.7 Surface
Aircraft movement from the runway to gates, hangars, or revetments is a major
limit on airport capacity in instrument meteorological conditions. Surface navi-
gation is visual on the part of the crew, whereas the ground controllers observe
aircraft visually or with a surface surveillance radar. No matter how good
the surface navigation, collision avoidance among aircraft and ground vehi-
cles requires central guidance, typically provided by a human controller with
computer assistance. Position reports (e.g., via GPS) from aircraft that are con-
cealed in radar shadows reduce the risk of collision and help keep unwanted
aircraft off active runways.

1.5.8 Weather
Instrument meteorological conditions (IMC) are weather conditions in which
visibility is restricted, typically less than 3 miles as defined by law. Aircraft
operating in IMC are supposed to fly under instrument flight rules (IFR), defined
by law in each country (Chapters 13 and 14).

1.6 DESIGN TRADE-OFFS

The navigation-system designer conducts trade-offs for each aircraft and mis-
sion to determine which navigation systems to use. Trade-offs consider the fol-
lowing attributes:

1. Cost. Included are the construction and maintenance of transmitter sta-


tions and the purchase of on-board hardware and software. Users are
concerned only with the cost of on-board hardware and software. In the
past, governments have paid to operate radio-navigation transmitters. In
the future, combined com-nav aids may be operated privately and funded
by user charges.
2. Accuracy of position and velocity. This is specified in terms of the sta-
tistical distribution of errors as observed on a large number of flights [4].
The accuracy of military systems is often characterized by circular error
probable (CEP, in meters or nautical miles; Chapter 2). The maximum

allowable CEP is frequently established by the kill radius of the weapons


that are released from the aircraft. For civil air carriers, the allowable
en-route navigation error is based on the calculated risk of collision. In
the 1990s, each subsystem was allocated a safety-related failure proba-
bility of 10⁻⁹ per hour [4]. The accuracy of the navigation systems is
often defined as "twice the distance root mean square" (2drms), which
encompasses 95% to 98% of the errors (Section 2.8.1; a numerical sketch of these statistics follows this list). The allowable
landing error depends on runway width, aircraft handling characteristics,
and flying weather.
3. Autonomy. This is the extent to which the vehicle determines its own
position and velocity without external aids. Autonomy is important to
certain military vehicles and to civil vehicles operating in areas of inad-
equate radio-navigation coverage. Autonomy can be subdivided into five
classes:
• Passive self-contained systems that neither receive nor transmit elec-
tromagnetic signals. They emit no radiation that would betray their
presence and require no external stations. Failures are detected and cor-
rected on board. They include dead-reckoning systems such as inertial
navigators.
• Active self-contained systems that radiate but do not receive externally
generated signals. Examples are radars and sonars. They do not depend
on the existence of navigation stations.
• Receivers of natural radiation. These systems measure naturally emit-
ted electromagnetic radiation. Examples are magnetic compasses,
star trackers, and passive map correlators. Some unmanned military
weapons guide themselves toward acoustic emissions. These systems
do not announce their presence by emitting, nor do they need naviga-
tion stations.
• Receivers of artificial radiation. These systems measure electromag-
netic radiation from navaids (Earth based or space based) but do not
themselves transmit. Examples are Loran, Omega, VOR (Chapter 4),
and GPS (Chapter 5). They require external cooperating stations but
do not betray their own presence.
• Active radio navaids that exchange signals with navigation stations.
These include DME, JTIDS, PLRS, and collision-avoidance systems
(Chapters 4, 6, and 14). The vehicle betrays its presence by emit-
ting and requires cooperative external stations. These are the least
autonomous of navigation systems.
4. Time delay in calculating position and velocity, caused by computa-
tional and sensor delays. Time delay (also called latency) can be caused
by computer-processing delays, scanning by a radar beam, or gaps in
satellite coverage, for example. Forty years ago, it took five minutes to
plot a fix manually on an on-board aeronautical chart. Today, navigation

calculations are completed in tens of milliseconds by a digital compu-


ter.
5. Geographic coverage. Terrestrial radio systems operating below ap-
proximately 100 kHz can be received beyond line of sight on Earth;
those operating above approximately 100 MHz are confined to line of
sight. Each satellite can cover millions of square miles of Earth, while
a constellation of satellites can cover the entire Earth.
6. Automation. The aircraft's crew receives a direct reading of position,
velocity, and equipment status, usually without human intervention, as
described in Section 1.9. In years past, navigation sets were operated by
skilled people, to the extent of manipulating wave forms on cathode-ray
tubes.
7. Availability. This is the fraction of time that the system is usable for
navigation. Downtime is caused by scheduled maintenance, by unsched-
uled outage (usually due to equipment failure), and by radio-propagation
problems that cause excessive errors.
8. System capacity. This is the number of aircraft that the system can
accommodate simultaneously. It applies to two-way ranging systems.
9. Ambiguity. This is the identification, by the navigation system, of two
or more possible positions of the aircraft, with no indication of which is
correct. Ambiguities are characteristic of ranging and hyperbolic systems
when too few stations are received.
10. Integrity. This is the ability of the system to provide timely warnings to
aircraft when its errors are excessive. For en-route navigation in 1996,
an alarm must be generated within 30 seconds of the time a computed
position exceeds its specified error. For a nonprecision landing aid, an
alarm must be generated within ten seconds. For a precision landing aid,
an alarm must be generated within two seconds. Integrity is an important
issue for GPS, especially when it is used as a landing aid in the differ-
ential mode (Chapters 5 and 13). Any sensor that is the sole means of
navigation must have high integrity.
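
As a numerical sketch of the error statistics named in attribute 2 (CEP and 2drms), the Python fragment below estimates both from a set of simulated horizontal position errors; the error magnitudes and the Gaussian error model are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated horizontal position errors (east, north) in meters over many flights
    errors = rng.normal(scale=(30.0, 40.0), size=(10000, 2))
    radial = np.hypot(errors[:, 0], errors[:, 1])

    # CEP: radius of the circle that contains 50% of the radial errors
    cep_m = np.percentile(radial, 50.0)

    # 2drms: twice the root-mean-square radial error; for roughly circular Gaussian
    # errors it bounds about 95% to 98% of the fixes, as stated in attribute 2
    two_drms_m = 2.0 * np.sqrt(np.mean(radial ** 2))

    fraction_within_2drms = np.mean(radial <= two_drms_m)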

1.7 EVOLUTION OF AIR NAVIGATION

The earliest aircraft were navigated visually. Pilots had an anemometer for air-
speed, a barometer for altitude, and a magnetic compass for heading. Artificial
horizons and turn-and-bank indicators allowed pilots to hold attitude and head-
ing in clouds, hence motivating the installation of navigation aids. Lighted bea-
cons were installed across the United States in the 1920s to mark airmail routes.
Starting in 1929, four-course radio beacons were also added to the lighted air-
ways to guide aircraft. Four-course beacons were installed in France, South
America, and North Africa. In the 1930s, aircraft were equipped with medium-
frequency and high-frequency direction finders (MF/DF and HF/DF) that mea-

sured the bearing of broadcast stations relative to the axis of the aircraft. A fix
was obtained by plotting the direction toward two or more stations. Beacons
near an airport allowed aircraft to fly a "nonprecision approach" to the runway
(Chapter 13). Vertical beacons at 75 MHz, called z-beacons or marker beacons,
were installed along the four-course airways and along approaches to runways
to give a positive indication of position (Chapters 13 and 14).
Air-traffic control was procedural, following the precedent of railroad
"block" clearances. Overland airways connected radio beacons; overwater air-
ways were defined on a map, hundreds of miles apart. The airways were divided
into longitudinal blocks of 20- to 30-minutes flying time. The air-traffic con-
troller relied on the pilots' report of position, allowed only one aircraft at a time
to enter a block, and kept the block free of other traffic until the pilot reported
leaving. The size of the block was commensurate with the uncertainties in nav-
igation at the time.
During World War II, meteorologists learned to route aircraft to take advan-
tage of the cyclonic winds that circle around high- and low-pressure regions at
mid- and high-latitudes. Bellamy [12] states that the transatlantic flying time
was reduced an average of 10% compared to a great-circle track, with occa-
sional savings exceeding 25%, by taking advantage of cyclonic tail winds.
These pressure-pattern routes were plotted graphically in the 1940s-1960s but
are now computed routinely in airline and military dispatch offices.
Crosswinds cause an aircraft to "drift" perpendicularly to its longitudinal
axis. From the 1930s to the 1960s, drift angle was measured in flight with a
downward-looking telescope that observed the direction of movement of the
ground, when it was visible. From the 1940s to the 1960s, drift was estimated
over oceans by observing trends in the difference, D, between the readings
of the radio altimeter and pressure altimeter. Bellamy showed that in cyclonic
winds, drift is proportional to the horizontal gradient of D [12]. The introduc-
tion of Doppler and inertial navigators in the 1960s and 1970s allowed drift
to be observed directly. The Doppler navigator measures the direction of the
ground-speed vector relative to the aircraft's centerline. The inertial navigator
subtracts in-flight-measured airspeed from the measured ground velocity to cal-
culate wind, hence lateral drift.
After World War II, VOR stations (Chapter 4) and Instrument Landing Sys-
tems (ILS, Chapter 13) were installed. VOR/DME and ILS have been the basis
of navigation in western countries ever since. During the 1960s, air-traffic con-
trollers came to rely on surveillance radar in densely populated airspace (Chap-
ter 14). The controller identified the aircraft on his screen, hence eliminating
the need for a position report from the crew. Radar surveillance of air traffic
is called "positive control," which, in 1996, existed in the United States, most
of Canada, western Europe, and Japan. In the late 1990s, the automatic report-
ing of on-board-derived position began to supplement (perhaps eventually to
replace) radar surveillance.
The former Soviet republics have ICAO navigation aids and ILS at about 50
international airports and on corridors connecting them to the borders. Over-
flying western aircraft navigate inertially and with Omega, GPS, and nondirec-
tional beacons. Since the late 1960s, domestic civil and military aircraft have
used an L-band range-angle system known by its Russian acronym, RSBN, and
not standardized by ICAO. It has 176 channels between 873 and 1000 MHz.
Domestic airports guide landing aircraft with ground-based precision approach
radar (PAR) using verbal commands to the crew. At international airports, PARs
monitor aircraft on ILS approaches. In the 1990s, the former Soviet republics
were purchasing western avionics equipment.
The People's Republic of China depended on imported Russian nondirec-
tional beacons and PARs until the late 1970s, when it began to install western
radars, ILS, VOR, and DME. In the 1990s, China installed VHF air-to-ground
radio relays throughout most of the nation [13]. In 1996, western air-traffic con-
trol and navigation equipment was being installed throughout Southeast Asia
and Indonesia.
Outside the developed world, major cities and some airways had
VOR/DME-based procedural traffic control, so aircraft filing flight plans could
be separated from each other by human controllers. Polar areas, the South
Atlantic Ocean, and much of the Pacific and Indian Oceans had no navaids and
no control whatsoever. Most of the rest of the world was divided into Flight
Information Regions that advised crews of weather conditions and the status of
airports and navigation aids but did not separate traffic. Position reports over
oceans and in remote areas are mostly by HF radio but, beginning in the 1990s,
were being made via satellite (e.g., North Pacific and Atlantic Oceans). In 1996,
a few airlines were transmitting GPS-inertial position over digital data links via
geostationary communication satellites over the Pacific Ocean, a system called
Automatic Dependent Surveillance, the first step in the Future Air Navigation
System (FANS, Chapter 14). Outside the United States and Canada, most air-
craft pay directly for traffic control services.
Until the 1970s, precise absolute time could not economically be measured
on a vehicle. Hence, radio navigation aids were built that measured the dif-
ference in time of arrival of radio signals from ground stations. The earliest
(hyperbolic Loran and Decca, some military systems) date from the 1940s. As
airborne clocks became more stable in the 1970s, "passive" or "one-way" rang-
ing systems could solve for position and the absolute clock offset by processing
precisely timed signals from several stations. Direct-ranging Loran and Omega
(as distinguished from hyperbolic Loran and Omega, all discussed in Chapter
4), GPS and GLONASS (Chapter 5), and JTIDS (Chapter 6) are examples of
such one-way ranging systems. As airborne clocks become more accurate in
the twenty-first century, absolute time of arrival will be directly measurable
and clock offsets will become negligible.
GPS and GLONASS are based on one-way passive range measurements
to several stations, most of which are spacecraft (Figure 1.2). A few stations
are ground-based pseudolites whose transmissions mimic those of spacecraft.
Chapter 5 describes the GPS and GLONASS systems. The receiver in the air-
plane computes position, velocity, the offset in the airborne clock, and, in some
Figure 1.2 Global Positioning System Spacecraft, Block IIF (courtesy of Rockwell).
L-band antenna array, S-band control antenna, and solar array are visible.

receivers, the ionospheric delay (Chapter 5). In 1996, the military modes of
GPS achieved 20-meter (2drms) accuracy anywhere in the world, while the civil
mode could achieve 40-meter accuracy but was intentionally degraded to 100
meters, a handicap to civil navigation that may be discontinued before the year
2000. The United States and Russia have announced that GPS and GLONASS
will be available worldwide, free of charge, for at least 15 years and there-
after with 6 years' warning of the end of service. Nevertheless, worldwide civil
authorities are reluctant to rely on military-controlled navigation aids that might
be switched off or degraded during hostilities. The advent of continuous GPS
allows the use of AHRS-quality inertial/attitude-reference systems (Chapters
7 and 9) in all but the most demanding military applications. The undetected
loss of a navigation signal or the failure of a receiver could be catastrophic,
especially during a landing at low decision height.
A widespread method of improving GPS accuracy and monitoring the signals
is to install a ground station that receives GPS signals and transmits position
errors or ranging errors and satellite failure status on a radio link to nearby air-
craft. This differential GPS (DGPS) can achieve centimeter accuracy for fixed
observers and 1- to 5-meter accuracy on aircraft that can solve at tens of itera-
tions per second or whose velocity calculations are smoothed by an inertial nav-
igator. The United States was experimenting with a nationwide DGPS system
(Wide Area Augmentation System, WAAS; Chapter 5) that could eventually
replace the network of VORTACs.
The GPS and GLONASS systems are expensive to operate, each costing nearly
a billion dollars per year for the replacement of satellites and the maintenance of
the ground-control and monitoring network. The cost of collecting user charges
(e.g., by selling encryption keys or taxing receivers) would exceed the revenue
that could be extracted from navigation-only users. Hence, in the next genera-
tion, GPS transmitters will be installed on low-cost communication satellites as
a way to augment the GPS network or as a low-cost replacement for dedicated
GPS satellites. Governments would still maintain the control and monitoring sta-
tions that calculate the orbits and uplink data for rebroadcast. Taxpayer support is
more likely if GPS becomes widely used in automobiles.
If present trends continue toward fee for service, taxpayer-funded navigation
aids may cease to exist circa 2020, when commercial com-nav satellites will
have superimposed ranging codes on their communication signals. Communica-
tion and navigation would then be available on a per-call basis, forcing aircraft
again to rely on precise dead reckoning between intermittent fixes, probably
from self-contained panel-mounted micro-machined inertial instruments (Chap-
ters 7 and 9).
JTIDS and PLRS (Chapter 6) are military com-nav systems that constitute a
battlefield-sized network whose terminals are in command centers and vehicles.
PLRS terminals can be backpacked by soldiers.
Since airborne digital computers became available in the 1960s, algorithms
have been invented and perfected that combine the measurements of diverse
navigation sensors to create a "best estimate of position and velocity." They
are used in "hybrid" navigation systems. From 1970 to the end of the century,
various forms of Kalman filters were favored for combining data from diverse
navigation sensors (Chapter 3).

1.8 INTEGRATED AVIONICS

1.8.1 All Aircraft


"Navigation"' is one of several electronic subsystems, collectively called avion-
ics. The other subsystems are as follows:

I. Communication. An airplane's communication system consists of an


intercom among the crew members and one or more external two-way
voice and data links.
2. Flight control. This consists of stability augmentation and autopilot. The
former points the airframe and controls its oscillations, while the latter
provides such functions as attitude-hold, heading-hold, and altitude-hold.
Flaps, slats, and spoilers are often controlled electronically in addition to
rudder, elevator, and ailerons.
3. Engine control. This is the electronic control of engine thrust, often called
throttle management. Afterburner and thrust reversers may be controlled
manually, perhaps via a thrust-by-wire control system.
4. Flight management. This subsystem stores the coordinates of en-route
waypoints and calculates the steering signals to fly toward them. It cal-
culates climb and descent profiles that may be followed with or without
constraints on the time at which designated fixes and altitudes are crossed.
Crossing fixes at predetermined times and altitudes is sometimes called
four-dimensional navigation; it requires that the flight management sub-
system control engine thrust. In 1996 all flight management subsystems
stored waypoints in digital form. By the year 2000 many will store dig-
ital maps of the en-route airspace, standard approaches (called STARs in
the United States), standard departures (called SIDs in the United States),
approach plates, and checklists (see Chapter 14).
5. Subsystem monitoring and control. Faults in all subsystems are displayed,
as are recommended actions to be taken. This subsystem includes wired
logic and software for the automatic reconfiguration of faults in time-criti-
cal subsystems (e.g., flight control, where a fault can destroy the aircraft
in less than three seconds). Quick responses to safety-critical faults were
automated in flight-control systems by the 1980s. In the 1990s, the trend
was to automate the responses to slower-acting faults, thus reducing the
workload in one-pilot and two-pilot aircraft. The failure-monitoring sub-
system may include an on-board maintenance recorder, the radio trans-
mission of faults to reduce repair time, and an accident recorder whose
data survive a crash (required by law on many aircraft).
6. Collision avoidance. This subsystem predicts impending collisions with
other aircraft or the ground and recommends an avoidance maneuver
(Chapter 14).
7. Weather detection. This subsystem observes weather ahead of the aircraft
so that the route of flight can be altered to avoid thunderstorms and areas
of high wind-shear. The sensors are usually radars (Chapter 11) and lasers
(Chapter 8).
8. Emergency locator transmitter ( ELT). This subsystem is triggered auto-
matically on high-g impact or manually. In 1996, ELTs emit distinc-
tive tones on 121.5, 243, and 406 MHz [8]. These frequencies (and per-
haps soon 1.6 GHz) are monitored by search-and-rescue aircraft and by
SARSAT-COSPAS satellites.

1.8.2 Military Avionics


The avionics often cost 40% of the value of a military aircraft. In addition to the
navigation subsystem and the subsystems described in Section 1.8.1, military
avionics consist of
1. Radar, infrared, and other target sensors. These may have their own dis-
plays and controls or may share multipurpose devices.
2. Weapon management
• Fire control. Calculates lead angle for aiming guns and unguided rockets
at other aircraft and at ground targets.
• Stores management, which initializes and launches guided weapons: mis-
siles and bombs.
3. Electronic countermeasures. This subsystem detects, locates, and iden-
tifies enemy emitters of electromagnetic radiation. It may also generate
jamming signals. In 1996, electronic countermeasures were often so com-
plex that they were installed in an externally carried pod on specially
equipped aircraft.
4. Mission planning. Pre-flight mission planning is usually done at the air-
base by a computer that prepares coordinated flight plans for an entire
squadron. On-board software replans routes through enemy defenses
based on en-route observations. En-route replanning requires on-board
digital maps of the terrain and the real-time detection of enemy radars.
5. Formation flight. This subsystem maintains formation flight in instrument
meteorological conditions. It once consisted of beacons, transponders, and
communication links but is being replaced by relative GPS.

1.8.3 Architecture
Before the 1960s, electrical and electronic systems on aircraft consisted of inde-
pendent subsystems, each with its own sensors, analog computers, displays, and
controls. The appearance of airborne digital computers in the 1960s created the
first integrated avionic systems. The interconnectivity of airborne electronics is
called architecture. It involves seven aspects:

1. Displays. They present information from the avionics to the pilots (Chap-
ters 9 and 15). The information consists of vertical and horizontal navi-
gation data, flight-control data (e.g., speed and angle of attack), and com-
munication data (radio frequencies). The displays show the status of all
subsystems including their faults. Displays consist of dedicated gauges,
dedicated glass displays, multipurpose glass displays, and the support-
ing symbol generators. In 1996, flat-panel vertical- and horizontal-situa-
tion displays were displacing cathode-ray tubes as "glass displays." Mul-
tipurpose displays of text and block diagrams are flat-panel matrices sur-
rounded by buttons whose labels change as the displays change. On-board
digital terrain data, used for mission planning, can be displayed on the
horizontal situation display or on a head-up display.
2. Controls. The means of inputting information from the pilots to the
avionics. The flight controls traditionally consist of rudder pedals and a
control-column or stick. Fly-by-wire aircraft are increasingly using either


two-axis hand controllers and rudder pedals or (especially in manned
spacecraft) three-axis hand controllers. The subsystem controls consist of
panel-mounted buttons and switches. Switches are also mounted on the
control column, stick, throttle, and hand-controllers; sometimes 5 buttons
per hand. The buttons on the periphery of multipurpose displays control
the subsystems.
3. Computation. The method of processing sensor data. Two extreme orga-
nizations of computation exist:
• Centralized. Data from all sensors are collected in a bank of central
computers in which software from several subsystems are intermingled.
The level of fault tolerance is that of the most critical subsystem, usu-
ally flight control. It has the simplest hardware and interconnections.
In 1996, central computers were redundant uniprocessors or multipro-
cessors.
• Decentralized. Each traditional subsystem retains its integrity. Hence,
navigation sensors feed a navigation computer, flight-control sensors
feed a flight-control computer that drives flight-control actuators, and
so on. This architecture requires complex interconnections but has the
advantages that fault-tolerance provisions can differ for each subsystem
according to the consequences of failure and that software is created
by experts in each subsystem, executes independently of other soft-
ware, and is easily modified. When designed with suitable intercom-
puter channels and data-reasonability tests, decentralized systems have
more reliable software than do centralized systems.
Many avionic systems combine features of centralized and decentralized
architectures.
4. Data buses. Copper or fiber-optic paths among sensors, computers, actu-
ators, displays, and controls, as discussed in Chapter 15. Some data paths
are dedicated and some are multiplexed. Complex aircraft contain parallel
buses (one wire, pair of wires, or optical fiber per bit) and serial buses
(bits sent sequentially on one wire-pair or fiber). A large aircraft can have
a thousand pounds of signal wiring.
5. Safety partitioning. Commercial fly-by-wire aircraft sometimes divide
the avionics into a highly redundant safety-critical flight-control system, a
dually redundant mission-critical flight-management system, and a nonre-
dundant maintenance system that collects and records data. Military air-
craft sometimes partition their avionics for reasons other than safety.
6. Environment. Avionic equipment are subject to aircraft-generated elec-
tric-power transients, whose effects are reduced by filtering and bat-
teries. Equipment are also subject to externally generated disturbances
from radio transmitters and lightning. The effects of external disturbances
(high-intensity radiated fields, HIRF) are reduced by shielding metal
wires and by using fiberoptic data buses. Aircraft constructed with a con-
tinuous metal skin have an added layer of Faraday shielding. Neverthe-
less, direct lightning strikes on antennas destroy input circuits and may
damage feed cables. A nearby strike may induce enough current to do the
same. Composite airframe structures can be transparent to radiation, thus
exposing the avionics and power systems to external fields.
7. Standards. The signals in space created by navaids are standardized by
the International Civil Aviation Organization (ICAO), Montreal, a United
Nations agency [3]. These standards are written by committees that con-
sist of representatives of the member governments. Interfaces among
airborne subsystems, within the aircraft, are standardized by ARINC
(Aeronautical Radio, Inc.), Annapolis, Maryland, a nonprofit organiza-
tion owned by member airlines [1]. Other requirements are imposed
on airborne equipment by two nonprofit organizations supported by
member entities (mostly airframe and avionics manufacturers and gov-
ernment agencies). In the United States, RTCA, Inc. (Formerly Radio
Technical Commission for Aeronautics), Washington D.C., defines the
environmental specifications and test procedures for airborne hardware
and software, and writes performance specifications for airborne equip-
ment [5]. In Europe, EUROCAE (European Organisation for Civil Avi-
ation Equipment), Paris, produces specifications for airborne equipment,
some of which are in conjunction with RTCA [11]. Government agen-
cies in all major nations define rules governing the usage of naviga-
tion equipment in flight, weather minimums, traffic separation, ground
equipment required, pilot training requirements, and so on [6-10] (see
Chapters 13, 14). Some of these rules are standardized internationally
by ICAO. U.S. military organizations once issued their own standards for
airborne circuit boards but have accepted civil standards since the early
1990s.

1.9 HUMAN NAVIGATOR

Large aircraft often had (and a few still had in 1996) a third crew member, the
flight engineer, whose duties were to operate engines and aircraft subsystems
such as air conditioning and hydraulics. Aircraft operating over oceans once
carried a human navigator who used celestial fixes, whatever radio aids were
available, and dead reckoning to plot the aircraft's course on a paper chart (some
military aircraft still do). Those navigators were trained in celestial observato-
ries to recognize stars, take fixes, compute position, and plot the fixes.
The navigator's crew station disappeared in civil aircraft in the 1970s,
because inertial, Doppler, and radio equipment came into use that automati-
cally selected stations, calculated position, calculated waypoint steering, and
accommodated failures. Hence, instead of requiring a skilled navigator on each
aircraft, a smaller number of even more skilled engineers were employed to


design the automated systems. Since the 1980s, the trend has been to automate
large aircraft so that subsystem management and navigation can be done by one
or two pilots. Displays and controls are discussed in Chapters 9 and 15.
The key navigation skill in the twenty-first-century airplane is the operation
of flight-management, inertial, satellite-navigation, and VOR equipment, each
of which has different menus, inputting logic, and displays. The crew must
learn to operate them in all modes, respond to failures, and enter waypoints
for new routes manually. A new industry was created in the 1990s to pro-
duce computer-based trainers (called CBTs) that emulate subsystem software
and include replica control panels in order to allow crews to practice scenarios
without consuming expensive time on a full-mission simulator.
2 The Navigation Equations

2.1 INTRODUCTION

The navigation equations describe how the sensor outputs are processed in the
on-board computer in order to calculate the position, velocity, and attitude of the
aircraft. The navigation equations contain instructions and data and are part of
the airborne software that also includes moding, display drivers, failure detec-
tion, and an operating system, for example. The instructions and invariant data
are usually stored in a read-only memory (ROM) at the time of manufactur-
ing. Mission-dependent data (e.g., waypoints) are either loaded from a cock-
pit keyboard or from a cartridge, sometimes called a data-entry device, into
random-access memory (RAM). Waypoints are often precomputed in a ground-
based dispatch or mission-planning computer and transferred to the flight com-
puters.
Figure 2.1 is the block diagram of an aircraft navigation system. The system
utilizes three types of sensor information (as explained in Chapter 1):

1. Absolute position data from radio aids, radar checkpoints, and satellites
(based on range or differential range measurements).
2. Dead-reckoning data, obtained from inertial, Doppler, or air-data sen-
sors, as a means of extrapolating present position. A heading reference is
required in order to resolve the measured velocities into the computational
coordinates.
3. Line-of-sight directions to stars, which measure a combination of position
and attitude errors (as explained in Chapters 1 and 12).

The navigation computer combines the sensor information to obtain an esti-


mate of the aircraft's position, velocity, and attitude. The best estimate of posi-
tion is then combined with waypoint information to determine range and bear-
ing to the destination. Bearing angle is displayed and sent to the autopilot
as a steering command. Range to go is the basis of calculations, executed
in a navigation or flight-management computer, that predict time of arrival
at waypoints and that predict fuel consumption. Map displays, read from on-
board compact discs (CD-ROM, Section 2.9), are driven by calculated posi-
tion.
Figure 2.1 Block diagram of an aircraft navigation system. Dead-reckoning sensors
(heading and attitude, inertial, air data, Doppler) feed the dead-reckoning computations;
positioning sensors (radio such as VOR, Loran, and Omega; satellite such as GPS; and
radar) feed the positioning computations; a star-tracking sensor supplies line-of-sight data
to the celestial computations. Their outputs are combined in the most-probable-position
computations, which, with stored waypoints, drive the course computations that deliver
time to go, range and bearing, and steering signals to the cockpit displays, the flight-
management system, the autopilot, the map display, and the weapon computers.



2.2 GEOMETRY OF THE EARTH

The Newtonian gravitational attraction of the Earth is represented by a gravi-
tational field G. Because of the rotation of the Earth, the apparent gravity field
g is the vector sum of the gravitational and centrifugal fields (Figure 2.2):

g = G − Ω × (Ω × R)     (2.1)

where Ω is the inertial angular velocity of the Earth (15.04107 deg/hr) and
R is the radius vector from the mass center of the Earth to a point where the
field is to be computed. The direction of g is the "plumb bob," or "astronomic,"
vertical [10].
In cooling from a molten mass, the Earth has assumed a shape whose surface
is a gravity equipotential and is nearly perpendicular to g everywhere (i.e., no
horizontal stresses exist at the surface). For navigational purposes, the Earth's
surface can be represented by an ellipsoid of rotation around the Earth's spin
axis. The size and shape of the best-fitting ellipsoid are chosen to match the
sea-level equipotential surface. Mathematically, the center of the ellipsoid is
at the mass center of the Earth, and the ellipsoid is chosen so as to minimize
the mean-square deviation between the direction of gravity and the normal to
the ellipsoid, when integrated over the entire surface. National ellipsoids have
been chosen to represent the Earth in localized areas, but they are not always
good worldwide approximations. The centers of these national ellipsoids are
not exactly coincident and do not exactly coincide with the mass center of the
Earth [9]. In 1996, the World Geodetic System (WGS-84, [20]) was the best
approximation to the geoid, based on gravimetric and satellite observations.
Reference [20] contains transformation equations that convert between WGS-84
and various national ellipsoids. The differences are typically hundreds of feet,
though some isolated island grids are displaced as much as a mile from WGS-
84. The navigator does not ask that the Earth be mapped onto the optimum
ellipsoid. Any ellipsoid is satisfactory for worldwide navigation if all points on
Earth are mapped onto it.
The geometry of the ellipsoid is defined by a meridian section whose semi-
major axis is the equatorial radius a and whose semiminor axis is the polar
radius b, as shown in Figure 2.2. The eccentricity of the elliptic section is
defined as e = √(a² − b²)/a and the ellipticity, or flattening, as f = (a − b)/a.
The radius vector R makes an angle φ_c with the equatorial plane, where φ_c
is the geocentric latitude; R and φ_c are not directly measurable, but they are
sometimes used in mechanizing dead-reckoning equations.
The geodetic latitude φ_T of a point is the angle between the normal to the ref-
erence ellipsoid and the equatorial plane. Geodetic latitude is our usual under-
standing of map latitude. The term "geographic latitude" is sometimes used syn-
onymously with "geodetic" but should refer to geodetic latitude on a worldwide
ellipsoid.
Figure 2.2 Meridian section of the Earth, showing the reference ellipsoid and gravity
field. The figure defines the astronomic latitude φ_A, the geodetic latitude φ_T, and the
geocentric latitude φ_c of a surface point P, the height h of P above the reference
ellipsoid, the semimajor axis a, and the semiminor axis b.

The radii of curvature of the ellipsoid are of fundamental importance to dead-
reckoning navigation. The meridian radius of curvature, R_M, is the radius of
the best-fitting circle to a meridian section of the ellipsoid:

R_M = a(1 − e²)/(1 − e² sin²φ_T)^(3/2) ≈ a[1 + e²((3/2) sin²φ_T − 1)]     (2.2)

The prime radius of curvature, R_P, is the radius of the best-fitting circle to a
vertical east-west section of the ellipsoid:

R_P = a/(1 − e² sin²φ_T)^(1/2) ≈ a[1 + (e²/2) sin²φ_T]     (2.3)

The Gaussian radius of curvature is the radius of the best-fitting sphere to the
ellipsoid at any point:

R_c = √(R_M R_P)     (2.4)

The radii of curvature are important, because they relate the horizontal com-
ponents of velocity to angular coordinates, such as latitude and longitude; for
example,

λ̇ cos φ_T = V_east/(R_P + h)     (2.5)

where h is the aircraft's altitude above the reference ellipsoid, measured along
the normal to the ellipsoid (nearly along the direction of gravity), and λ is its
longitude, measured positively east.
For numerical work a = 6378.137 km = 3443.918 nmi, f = 1/298.2572, and
e² = f(2 − f) [18]. (One nautical mile = 1852 meters exactly, or 6076.11549
ft.) The angle between the gravity vector and the normal to the ellipsoid, the
deflection of the vertical, is commonly less than 10 seconds of arc and is rarely
greater than 30 seconds of arc [18]. The magnitude of gravity at sea level is

g = 978.049(1 + 0.00529 sin²φ_T) cm/sec²     (2.6)

within 0.02 cm/sec². It decreases 10⁻⁶ g for each 10-ft increase in altitude above
sea level [16].
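The radii-of-curvature and gravity formulas above lend themselves to direct computation. The following sketch, in Python, evaluates Equations 2.2 through 2.6 with the numerical constants quoted in the text; Equation 2.5 gives the longitude rate, and the companion latitude rate uses the meridian radius, so that φ̇ = V_north/(R_M + h). The function and variable names are illustrative, not from the text.

import math

A = 6378137.0          # semimajor axis a, meters
F = 1.0 / 298.2572     # flattening f
E2 = F * (2.0 - F)     # e^2 = f(2 - f)

def radii_of_curvature(lat):
    """Meridian, prime, and Gaussian radii of curvature, meters (Eqs. 2.2-2.4)."""
    s2 = math.sin(lat) ** 2
    r_m = A * (1.0 - E2) / (1.0 - E2 * s2) ** 1.5
    r_p = A / math.sqrt(1.0 - E2 * s2)
    return r_m, r_p, math.sqrt(r_m * r_p)

def gravity_sea_level(lat):
    """Sea-level gravity magnitude, cm/sec^2 (Eq. 2.6)."""
    return 978.049 * (1.0 + 0.00529 * math.sin(lat) ** 2)

def angular_rates(v_north, v_east, lat, h):
    """Latitude and longitude rates, rad/sec, from level velocities (Eq. 2.5)."""
    r_m, r_p, _ = radii_of_curvature(lat)
    return v_north / (r_m + h), v_east / ((r_p + h) * math.cos(lat))

# Example: 250 m/s due northeast at 45 deg N latitude and 10 km altitude
lat = math.radians(45.0)
print(radii_of_curvature(lat), gravity_sea_level(lat))
print(angular_rates(176.8, 176.8, lat, 10000.0))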

2.3 COORDINATE FRAMES

The position, velocity, and attitude of the aircraft must be expressed in a coordi-
nate frame. Paragraph 1 below describes the rectangular Earth-centered, Earth-
fixed (ECEF) coordinate frame, y_i; Paragraph 2 describes the Earth-centered
inertial (ECI) coordinate frame, x_i, which simplifies the computations for iner-
tial and stellar sensors. Other Earth-referenced orthogonal coordinates, called
z_i, can simplify navigation computations for some navaids and displays. Para-
graphs 3-5 describe coordinates commonly used in inertial navigation systems.
Paragraph 6 describes coordinates used in land navigators or in military air-
craft that support ground troops. Paragraphs 7-9 describe coordinates that were
important before powerful airborne digital computers existed.

1. Earth-centered, Earth-fixed (ECEF). The basic coordinate frame for nav-
igation near the Earth is ECEF, shown in Figure 2.3 as the y_i rectangular
coordinates whose origin is at the mass center of the Earth, whose y_3-
axis lies along the Earth's spin axis, whose y_1-axis lies in the Greenwich
meridian, and which rotates with the Earth [10]. Satellite-based radio-nav-
igation systems often use these ECEF coordinates to calculate satellite and
aircraft positions.
2. Earth-centered inertial (ECI). ECI coordinates, x_i, can have their origin at
the mass-center of any freely falling body (e.g., the Earth) and are nonro-
tating relative to the fixed stars. For centuries, astronomers have observed

Figure 2.3 Navigation coordinate frames: the ECEF axes (y_3 toward the North Pole)
and the geodetic wander-azimuth coordinates.
the small relative motions of stars ("proper motion") and have defined
an "average" ECI reference frame [11]. To an accuracy of w-s degjhr,
an ECI frame can be chosen with its x_3-axis along the mean polar axis
of the Earth and with its x_1- and x_2-axes pointing to convenient stars (as
explained in Chapter 12). ECI coordinates have three navigational func-
tions. First, Newton's laws are valid in any ECI coordinate frame. Sec-
ond, the angular coordinates of stars are conventionally tabulated in ECI.
Third, they are used in mechanizing inertial navigators, Section 7.5.1.
3. Geodetic spherical coordinates. These are the spherical coordinates of
the normal to the reference ellipsoid (Figure 2.2). The symbol z_1 repre-
sents longitude λ; z_2 is geodetic latitude φ_T; and z_3 is altitude h above
the reference ellipsoid. Geodetic coordinates are used on maps and in the
mechanization of dead-reckoning and radio-navigation systems. Transfor-
mations from ECEF to geodetic spherical coordinates are given in [9] and
[23].
4. Geodetic wander azimuth. These coordinates are locally level to the ref-
erence ellipsoid. z_3 is vertically up and z_2 points at an angle, α, west of
true north (Figure 2.3). The wander-azimuth unit vectors, z_1 and z_2, are
in the level plane but do not point east and north. Wander azimuth is the
most commonly used coordinate frame for worldwide inertial navigation
and is discussed below and in Section 7.5.1.
5. Direction cosines. The orientation of any z-coordinate frame (e.g., navi-
gation coordinates or body axes) can be described by its direction cosines
relative to ECEF y-axes. Any vector V can be resolved into either the y-
or z-coordinate frame. The y and z components of V are related by the
equation

V_zi = Σ_j C_ij V_yj     (2.7)

where

C_11 = −cos α sin λ − sin α sin φ cos λ
C_12 = cos α cos λ − sin α sin φ sin λ
C_13 = cos φ sin α
C_21 = sin α sin λ − sin φ cos α cos λ
C_22 = −sin α cos λ − cos α sin φ sin λ
C_23 = cos φ cos α
C_31 = cos φ cos λ
C_32 = cos φ sin λ
C_33 = sin φ     (2.8)

The navigation computer calculates in terms of the C_ij, which are usable
everywhere on Earth. The familiar geographic coordinates can be found
from the relations

sin φ = C_33, or cos²φ = C_13² + C_23² = C_31² + C_32²

tan λ = C_32/C_31

tan α = C_13/C_23     (2.9)

wherever they converge. In polar regions, where α and λ are not mean-
ingful, the navigation system operates correctly on the basis of the C_ij.
Section 7.5.1 describes an inertial mechanization in direction cosines. If
the z-coordinate frame has a north-pointing axis, α = 0. (A computational
sketch of Equations 2.8 and 2.9 follows this list.)
6. Map-grid coordinates. The navigation computer can calculate position in
map-grid coordinates such as Lambert conformal or transverse Mercator
xy-coordinates [13]. Grid coordinates are used in local areas (e.g., on mil-
itary battlefields or in cities) but are not convenient for long-range naviga-
tion. A particular grid, Universal Transverse Mercator (UTM), is widely
used by army vehicles of the western nations. The U.S. Military Grid
Reference System (MGRS) consists of UTM charts worldwide except,
in polar regions, polar stereographic charts [13]. The latter are projected
onto a plane tangent to the Earth at the pole, from a point at the opposite
pole.
7. Geocentric spherical coordinates. These are the spherical coordinates of
the radius vector R (Figure 2.2). The symbol z_1 represents longitude
λ; z_2 is geocentric latitude φ_c; and z_3 is the radius. Geocentric coordi-
nates are sometimes mechanized in short-range dead-reckoning systems
using a spherical-Earth approximation. Initialization requires knowledge
of the direction toward the mass center of the Earth, a direction that is
not directly observable.
8. Transverse-pole spherical coordinates. These coordinates are analogous
to geocentric spherical coordinates except that their poles are deliberately
placed on the Earth's equator. The symbol z_1 represents the transverse
longitude; z_2, the transverse latitude; and z_3, the radius. They permit non-
singular operation near the north or south poles, by placing the transverse
pole on the true equator. Transverse-polar coordinates involve only three
z_i variables instead of nine direction cosines. However, they cannot be
used for precise navigation, since the transverse equator is elliptical, com-
plicating the precise definition of transverse latitude and longitude. They


are similar to but not identical to the stereographic coordinates often used
in polar regions. Transverse-pole coordinates were used in inertial and
Doppler navigation systems from 1955 to 1970 when primitive airborne
computers required simplified computations.
9. Tangent plane coordinates. These coordinates are always parallel to the
locally level axes at some destination point (Figure 2.3). They are locally
level only at that point and are useful for flight operations within a few
hundred miles of a single destination. Here z_3 lies normal to the tangent
plane, and z_2 lies parallel to the meridian through the origin. Section 7.5.1
describes the mechanization of an inertial navigator in tangent-plane coor-
dinates.

2.4 DEAD-RECKONING COMPUTATIONS

Dead reckoning (often called DR) is the technique of calculating position from
measurements of velocity. It is the means of navigation in the absence of posi-
tion fixes and consists in calculating the position (the z_i-coordinates) of a vehi-
cle by extrapolating (integrating) estimated or measured ground speed. Prior to
GPS, dead-reckoning computations were the heart of every automatic naviga-
tor. They gave continuous navigation information between discrete fixes. In its
simplest form, neglecting wind, dead reckoning can calculate the position of a
vehicle on the surface of a flat Earth from measurements of ground speed V_g
and true heading ψ_T:

V_north = V_g cos ψ_T,   y − y_0 = ∫₀ᵗ V_north dt

V_east = V_g sin ψ_T,   x − x_0 = ∫₀ᵗ V_east dt     (2.10)
where x − x_0 and y − y_0 are the east and north distances traveled during the mea-
surement interval, respectively. Notice that a simple integration of unresolved
ground speed would give curvilinear distance traveled but would be of little
use for determining position.
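A minimal Python sketch of the flat-Earth dead reckoning of Equation 2.10 follows; rectangular integration at a fixed time step and the sample track are assumptions for illustration.

import math

def dead_reckon(samples, dt, x0=0.0, y0=0.0):
    """Integrate Eq. 2.10: samples are (ground_speed, true_heading_rad) pairs taken
    every dt seconds. Returns east (x) and north (y) distances from the start."""
    x, y = x0, y0
    for v_g, psi_t in samples:
        y += v_g * math.cos(psi_t) * dt   # V_north integrated
        x += v_g * math.sin(psi_t) * dt   # V_east integrated
    return x, y

# Ten minutes at 150 m/s ground speed on a 045-deg true heading, 1-sec samples
track = [(150.0, math.radians(45.0))] * 600
print(dead_reckon(track, 1.0))   # about 63.6 km east and 63.6 km north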
Aircraft heading (best-available true heading) is measured using the quanti-
ties defined in Figure 2.4. With a magnetic compass, for example, the best avail-
able true heading is the algebraic sum of magnetic heading and east variation.
With a gimballed inertial system, the best available true heading is platform
heading (relative to the z_i computational coordinates) plus the wander angle
α (Section 7.5.2). When navigating manually in polar regions, dead-reckoned
velocity is resolved through best available grid heading.
Figure 2.4 Geometry of dead reckoning. (a) Horizontal plane: ψ_T = true heading,
T_T = true track, B_T = true bearing, δ = drift angle, β = sideslip angle, V_w = wind
speed; true north, grid north, and magnetic north differ by the grid convergence and
the magnetic variation. (b) Vertical plane: V_E = Earth speed, V_g = ground speed =
horizontal component of V_E, γ_A = air-mass flight-path angle, γ_T = Earth-referenced
flight-path angle.

In the presence of a crosswind the ground-speed vector does not lie along
the aircraft's center line but makes an angle with it (Figure 2.4). The true-
track angle T_T, the angle from true north to the ground-speed vector, is pre-
ferred for dead-reckoning calculations when it is available. The drift angle δ
can be measured with a Doppler radar or a drift sight (a downward-pointing tele-
scope whose reticle can be rotated by the navigator to align with the moving
ground).
In a moving air mass

V_north = V_TAS cos(θ − α) cos(ψ_T + β) + V_wind-north

V_east = V_TAS cos(θ − α) sin(ψ_T + β) + V_wind-east     (2.11)

where θ is the pitch angle, α is the angle of attack, V_TAS is the true airspeed,
and β is the sideslip angle.
On a flat Earth, the north and east (or grid north and grid east) distances traveled
are found by integrating the two components of velocity with respect to time.
On a curved Earth, the position coordinates are not linear distances but angular
coordinates. Equations 2.5 show a method for transforming linear velocities
to angular coordinates. The accuracy of airspeed data is limited by errors in
predicted windspeed and by errors in measuring airspeed and drift angle.
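A Python sketch of the air-data dead-reckoning resolution of Equation 2.11 follows, using the reconstructed form above in which the horizontal projection of true airspeed is V_TAS cos(θ − α); the numerical values are illustrative.

import math

def airmass_velocity(v_tas, pitch, aoa, psi_t, beta, wind_north, wind_east):
    """Level velocity components from air data plus predicted wind (Eq. 2.11).
    Angles in radians: pitch, aoa (angle of attack), psi_t (true heading),
    beta (sideslip)."""
    v_level = v_tas * math.cos(pitch - aoa)   # horizontal projection of airspeed
    return (v_level * math.cos(psi_t + beta) + wind_north,
            v_level * math.sin(psi_t + beta) + wind_east)

# 230 m/s TAS, 3-deg pitch, 2-deg angle of attack, 090-deg heading, no sideslip,
# 20 m/s wind blowing toward the north
print(airmass_velocity(230.0, math.radians(3.0), math.radians(2.0),
                       math.radians(90.0), 0.0, 20.0, 0.0))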
The dead-reckoning computer can process Doppler velocity. If the Doppler
radar measures ground speed V_g and drift angle δ,

V_north = V_g cos(ψ_T + δ)

V_east = V_g sin(ψ_T + δ)     (2.12)

Equivalently, the Doppler radar can measure the components of ground speed
in body axes, along-axis and cross-axis (using the same symbols as in
Chapter 10), both of which can be resolved through the three attitude angles,
ψ_T, pitch, and roll. Antenna misalignment relative to the heading sensor can be
calibrated from a series of fixes, either in closed form or with a Kalman filter
(Chapter 3).
Equations 2.5 relate velocity to λ and φ_T on an ellipsoidal Earth. Similar
relations can be derived for the other coordinate frames discussed in Section 2.3.
The velocity with respect to Earth, dR/dt|_y, can be expressed in z_i components
as follows:

dR/dt|_y = dR/dt|_z + (ω_yz × R)     (2.13)

For example, in a spherical coordinate frame whose z_3-axis lies along the posi-
tion vector R:

dR/dt|_y = (dR/dt) R̂ + (ω_yz × R)

where the first term is the rate of change of radius, along the radius vector,
and ω_yz is the angular velocity of the z_i-coordinate frame relative to y_i; R̂ is
the unit vector in the direction of R. In direction-cosine mechanizations, the
Ċ_ij are related to the C_ij by Equation 7.40, where ω_i − Ω_i of that equation is
identical to ω_yz of this one.

2.5 POSITIONING

2.5.1 Radio Fixes


There are five basic airborne radio measurements:

1. Bearing. The angle of arrival, relative to the airframe, of a radio signal
from an external transmitter. Bearing is measured by the difference in
phase or time of arrival at multiple antennas on the airframe. At each bear-
ing, the distortion caused by the airframe may be calibrated as a function
of frequency. If necessary, calibration could also include roll and pitch.
2. Phase. The airborne receiver measures the phase difference between
continuous-wave signals emitted by two stations using a single airborne
antenna. This is the method of operation of VOR azimuth and hyperbolic
Omega (Chapter 4).
3. Time difference. The airborne receiver measures the difference in time of
arrival between pulses sent from two stations. A 10⁻⁴ clock (one part in
10⁴) is adequate to measure the short time interval if both pulses are sent
simultaneously. Because Loran pulses can be transmitted 0.1 sec apart,
a clock error less than 10⁻⁶ is needed to measure the time difference.
In time-differencing and phase-measuring systems (hyperbolic Loran), at
least two pairs of stations are required to obtain a fix.
4. Two-way range. The airborne receiver measures the time delay between
the transmission of a pulse and its return from an external transponder
at a known location. Round-trip propagation times are typically less than
a millisecond, during which the clock must be stable. A 1% range error
requires a clock-stability of 0.3% (3 × 10⁻³), which is two microseconds
at 100-km range. The calculation of range requires knowledge of the prop-
agation speed and transponder delay. DME is a two-way ranging system
(Section 4.4.6).
5. One-way range. The airborne receiver measures the time of arrival with
respect to its own clock. If the airborne clock were synchronized with the
transmitter's clock upon departure from the airfield and ran freely there-
after, a 1% range error at a distance of 100 km from a fixed station would
require a clock error of one microsecond, which is 5 × 10⁻¹¹ of a five-
hour mission. When 25,000-km distances to GPS satellites are to be mea-
sured with a one-meter error, a short-term clock stability of one part in
3 × 10⁸ would be needed to measure range and 10⁻⁴ seconds absolute time
error would be needed to calculate the satellite's position (GPS satellites
are moving at 3000 ft/sec relative to Earth). Together these would require
an error of one part in 10¹² for a clock synchronized with the transmitters
at the start of a ten-hour mission and allowed to run freely thereafter. Only
an atomic clock had this accuracy in 1996. Therefore practical one-way
ranging systems use a technique called pseudoranging. The transmitters
contain atomic clocks with long-term stabilities of about 10⁻¹³, while the
airborne receiver's clocks have accuracies and stabilities of 10⁻⁶ to 10⁻⁹.
The airborne computer solves for the aircraft's clock offset (and some-
times, drift rate) by making redundant range measurements. For exam-
ple, measuring four pseudoranges obtains three-dimensional position and
clock offset to a few nanoseconds using Equations 2.17 and 2.18. Pseu-
doranging is used in GPS and GLONASS (Chapter 5) and in one-way
ranging (direct ranging) of Loran and Omega (Chapter 4).

2.5.2 Line-of-Sight Distance Measurement


Figure 2.5 shows an aircraft near the surface of the Earth at R_0 and a radio
station that may be near the surface or in space, at R_si. The slant range, |R_si −
R_0|, from the aircraft to the station could be measured by one-way or two-way
ranging. If v̂ is the unit local vertical vector at the aircraft, the elevation angle
of the line of sight to the radio station is

Figure 2.5 Line-of-sight distance (aircraft at R_0, radio station i at R_si, and the surface
of the Earth, shown in ECEF y-coordinates).
sin E = v̂ · (R_si − R_0) / |R_si − R_0|     (2.14)

If n̂ is the unit north-pointing horizontal vector at the aircraft, the azimuth angle
of the line of sight is

sin Az = [v̂ × (R_si − R_0)] · n̂ / (|R_si − R_0| cos E)     (2.15)

These vector equations can be resolved into any coordinate frame. For exam-
ple, in ECEF,

n̂ = −x̂ sin φ cos λ − ŷ sin φ sin λ + ẑ cos φ

v̂ = x̂ cos φ cos λ + ŷ cos φ sin λ + ẑ sin φ

R_0 = x̂(R_P + h) cos φ cos λ + ŷ(R_P + h) cos φ sin λ + ẑ[R_P(1 − e²) + h] sin φ     (2.16)
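The line-of-sight geometry of Equations 2.14 through 2.16 can be evaluated directly in ECEF coordinates, as in the Python sketch below. It returns the sines of the elevation and azimuth angles, leaving quadrant resolution to the caller; the example geometry and the function name are illustrative assumptions.

import math

def los_angle_sines(r0, rsi, lat, lon):
    """Sines of elevation and azimuth of the line of sight from an aircraft at r0
    (ECEF) to a station at rsi, using the unit north and vertical vectors of
    Eq. 2.16 at the aircraft's geodetic latitude and longitude (radians)."""
    sp, cp, sl, cl = math.sin(lat), math.cos(lat), math.sin(lon), math.cos(lon)
    n = (-sp * cl, -sp * sl, cp)                  # unit north vector
    v = (cp * cl, cp * sl, sp)                    # unit vertical vector
    d = tuple(s - a for s, a in zip(rsi, r0))     # R_si - R_0
    rng = math.sqrt(sum(c * c for c in d))
    sin_e = sum(vi * di for vi, di in zip(v, d)) / rng                  # Eq. 2.14
    cos_e = math.sqrt(1.0 - sin_e ** 2)
    cross = (v[1] * d[2] - v[2] * d[1],
             v[2] * d[0] - v[0] * d[2],
             v[0] * d[1] - v[1] * d[0])           # v x (R_si - R_0)
    sin_az = sum(ni * ci for ni, ci in zip(n, cross)) / (rng * cos_e)   # Eq. 2.15
    return sin_e, sin_az

# Aircraft on the equator at the Greenwich meridian; station 200 km east, 100 km up
print(los_angle_sines((6378137.0, 0.0, 0.0),
                      (6478137.0, 200000.0, 0.0), 0.0, 0.0))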

In one-way ranging systems, where the clock offset and range are to be cal-
culated, the measured pseudorange vector from the ith station, R_im, is R_0 −
R_si, corrected for the unknown offset of the airborne clock and for propagation
delays in the atmosphere, expressed in distance units. The magnitude of the
pseudorange in any coordinate frame (e.g., ECEF) is

R_im = ηc(TOA_i)

R_im(k) = [(x_k − x_si)² + (y_k − y_si)² + (z_k − z_si)²]^(1/2) − ηc t_k     (2.17)

where

R_im             is the measured pseudorange from the aircraft to the ith
                 station
R_im(k)          is the calculated pseudorange in the kth iteration
TOA_i            is the time of arrival of the signal from the ith station
                 relative to the expected time of arrival as measured by
                 the aircraft's clock
c                is the speed of light in vacuum = 2.99792458 × 10⁸ m/sec
η                is the average index of refraction in the propagation
                 medium; partly space, partly atmosphere
x_k, y_k, z_k    is the unknown position of the aircraft (in the kth
                 computational iteration)
x_si, y_si, z_si is the known position of the ith station
t_k              is the computed time offset of the aircraft's clock relative to
                 the station's clock in the kth iteration

The stations may be moving (e.g., satellites) or stationary (e.g., GPS pseudolites).
Four pseudorange measurements are needed to solve for the four unknowns in
Equations 2.17: aircraft position and clock offset. When more than four measure-
ments are made, the equations are overdetermined so that a solution requires a
model of the ranging errors, for example, using a Kalman filter (Chapter 3). The
airborne computer usually solves for its position by assuming a position and clock
offset, calculating the pseudoranges to four stations from Equation 2.17, compar-
ing to the measured pseudoranges (with respect to its own clock), and iterating
until the calculated and measured pseudoranges are close enough. The next iter-
ation is chosen as follows: In the kth iteration, the assumed position is x_k, y_k, z_k,
whose range to the ith station differs from the measured range R_im by ΔR_ik =
R_im(k) − R_im. The components of ΔX_k in the navigation coordinates are ΔX_k,
ΔY_k, and ΔZ_k. The sensitivity of R_im to position is

ΔR_ik = (∂R_im/∂X_1)ΔX_k + (∂R_im/∂X_2)ΔY_k + (∂R_im/∂X_3)ΔZ_k − ηc Δt_k     (2.18)

where ∂R_im/∂X_j are the direction cosines between the line of sight to the ith
station and the jth coordinate axis and Δt_k is the error in estimating clock offset.
If the assumed position and clock offset were correct, ΔX_k, ΔY_k, ΔZ_k, and Δt_k
would be zero and ΔR_ik would also be zero. But if the assumed position were
misestimated by ΔX_k, the error along the line of sight would be the dot product
of the unit vector along the line of sight with ΔX_k. Thus, after computing R_im(k)
from Equation 2.17, the next iteration is R_im(k + 1) = R_im(k) + ΔX_k. Iterations cease
when the difference between the calculated and measured pseudorange is within
the desired accuracy. A recursive filter allows a new calculation of position
and clock offset after each measurement (Chapters 3 and 5). In Equation 2.17,
Earth-based line-of-sight navaids use an average index of refraction η (Chapter
4), whereas satellite-based navaids, whose signals propagate mostly in vacuum,
assume that η = 1 and correct for the atmosphere with a model resident in the
receiver's software (Section 5.4).
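The iteration of Equations 2.17 and 2.18 can be sketched as a small Gauss-Newton solver. The Python fragment below assumes η = 1, forms the normal equations explicitly, and solves them by Gaussian elimination; the station layout, the initial guess, and all names are illustrative, and an operational receiver would instead use a recursive filter as noted above.

import math

C = 2.99792458e8   # speed of light, m/sec

def solve_fix(stations, pseudoranges, guess, iters=8):
    """Iterate Eqs. 2.17-2.18 for (x, y, z, clock offset in seconds), with eta = 1."""
    x, y, z, t = guess
    for _ in range(iters):
        rows, resid = [], []
        for (xs, ys, zs), r_meas in zip(stations, pseudoranges):
            dx, dy, dz = x - xs, y - ys, z - zs
            rng = math.sqrt(dx * dx + dy * dy + dz * dz)
            rows.append((dx / rng, dy / rng, dz / rng, -C))   # direction cosines, -c
            resid.append(r_meas - (rng - C * t))              # measured minus Eq. 2.17
        # Normal equations (A^T A) d = A^T r, solved by Gaussian elimination
        ata = [[sum(a[i] * a[j] for a in rows) for j in range(4)] for i in range(4)]
        atr = [sum(a[i] * r for a, r in zip(rows, resid)) for i in range(4)]
        for i in range(4):
            for j in range(i + 1, 4):
                f = ata[j][i] / ata[i][i]
                for k in range(i, 4):
                    ata[j][k] -= f * ata[i][k]
                atr[j] -= f * atr[i]
        d = [0.0] * 4
        for i in range(3, -1, -1):
            d[i] = (atr[i] - sum(ata[i][k] * d[k] for k in range(i + 1, 4))) / ata[i][i]
        x, y, z, t = x + d[0], y + d[1], z + d[2], t + d[3]
    return x, y, z, t

# Four stations, a simulated aircraft position, and a 100-microsecond clock offset
stations = [(20e6, 0.0, 0.0), (0.0, 20e6, 0.0), (0.0, 0.0, 20e6), (12e6, 12e6, 12e6)]
truth = (6.37e6, 1e5, 2e5, 1e-4)
pr = [math.dist(truth[:3], s) - C * truth[3] for s in stations]
print(solve_fix(stations, pr, (6.4e6, 0.0, 0.0, 0.0)))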

2.5.3 Ground-Wave One-Way Ranging


Loran and Omega waves propagate along the curved surface of the Earth (as
explained in Chapter 4). With either sensor, an aircraft can measure the time
of arrival of the navigation signal from two or more stations and compute its
own position as follows:

• Assume an aircraft position.


• Calculate the exact distance and azimuth to each radio transmitter using
ellipsoidal Earth equations [18].
• Calculate the predicted propagation time and time of arrival allowing for
the conductivity of the intervening Earth's surface and the presence or
absence of the dark/light terminator between the aircraft and the station
(as described in Chapter 4).
• Measure the time of arrival using the aircraft's own clock, which is usually
not synchronized to the transmitter's clock.
• Calculate the difference between the measured and predicted times of
arrival to each station.
• The probable position is the assumed position, offset by the vector sum
of the time differences, each in the direction of its station, converted to
distance.
• Assume a new aircraft position and iterate until the residual is within the
allowed error.

Three or more stations are needed if the aircraft's clock is not synchronized
to the Loran or Omega transmitters. If a receiver incorporated a sufficiently sta-
ble clock, only two stations would be needed for a direct-ranging fix. Two-sta-
tion fixes with a synchronized clock or three-station fixes with an asynchronous
clock result in two position solutions, at the intersections of two circular lines of
position, each centered on a transmitter. The correct position can be found by
receiving an additional station or from a priori knowledge.

2.5.4 Ground-Wave Time-Differencing


An aircraft can measure the difference in time of arrival of Loran or Omega
signals from two or more stations (Chapter 4). To measure the time-difference
within 0.1% requires a 0.03% accuracy clock, which is less expensive than
the 10⁻¹¹ clock required for uncorrected one-way ranging or the 10⁻⁶ to 10⁻⁹
clock required for pseudoranging (Section 2.5.1). As in one-way ranging, the
iterative procedure is based on the precise calculation of propagation time from
the aircraft to each station on an ellipsoidal Earth:

• Assume an aircraft position.


• Calculate the exact range and azimuth from the assumed position to each
observed radio station using ellipsoidal Earth equations [18].
• Calculate the predicted propagation time allowing for the conductivity of
the intervening Earth's surface and the presence of the sunlight terminator
between the aircraft and the station.
• Subtract the times to two stations to calculate the predicted difference in
propagation time.
• Measure the difference in time of arrival of the signals from the two sta-
tions.
• Subtract the measured and predicted time differences to the two stations.
• Calculate the time-difference gradients from which is calculated the most
probable position of the aircraft after the measurements (see Section 2.5.2
of the First Edition of this book and Chapter 4 of this book).
• Iterate until the residual is smaller than the allowed error.

2.6 TERRAIN-MATCHING NAVIGATION

These navigation systems obtain occasional updates when the aircraft overflies
a patch of a few square miles, chosen for its unique profile [5]. A digital map
of altitude above sea level, h_s, is stored for several parallel tracks; see Figure
2.6. For example, if 0.1-nmi accuracy is desired, h_s(t) must be stored in 200-ft
squares sampled every 0.2 sec at 600 knots.
The aircraft measures the height of the terrain above sea level as the differ-
ence between barometric altitude (Chapter 8) and radar altitude (Chapter 10);
see Figure 2.7. Each pair of height measurements and the dead-reckoning posi-
tion are recorded and time-tagged.
After passing over the patch, the aircraft uses its measured velocity to calcu-
late the profile as a function of distance along track, hm(x), and calculates the
cross-correlation function between the measured and stored profiles:

Figure 2.6 Parallel tracks through terrain patch. The stored tracks (the nominal track
and tracks offset to either side) cross the terrain patch and its terrain profile.



Figure 2.7 Measurement of terrain altitude. The terrain height above sea level is the
difference between barometric altitude and the radar-altimeter height h_radar.

φ_ms(τ) = ∫₀^(nA) h_m(x) h_s(x − τ) dx     (2.19)

where the map patch has a length A. The integration is long enough (n > 1) to
ensure that the patch is sampled, even with the expected along-track error. The
computer selects the track whose cross-correlation is largest as the most prob-
able track. The computer selects the x-shift of maximum correlation τ as the
along-track correction to the dead-reckoned position. Heading drift is usually
so small that correlations are not required in azimuth. The algorithm accom-
modates offsets in barometric altitude caused by an unknown sea-level setting.
The width of the patch depends on the growth rate of azimuth errors in the
dead-reckoning system. Simpler algorithms have been used ("mean absolute
differences") and more complex Kalman filters have been used [5].
Terrain correlators are built under the names TERCOM, SITAN, and TER-
PROM. They are usually used on unmanned aircraft (cruise missiles) and can
achieve errors less than 100 ft [1, 8]. The feasibility of this navigation aid
depends on the existence of unique terrain patches along the flight path and
on the availability of digital maps of terrain heights above sea level. The U.S.
Defense Mapping Agency produces TERCOM maps for landfalls and mid-
course updates in three-arcsec grids.

2.7 COURSE COMPUTATION

2.7.1 Range and Bearing Calculation


The purpose of the course computation is to calculate range and bearing from
an aircraft to one or more desired waypoints, targets, airports, checkpoints, or
radio beacons. The computation begins with the best-estimate of the present
position of the aircraft and ends by delivering computed range and bearing to
other vehicle subsystems (Figure 2.1). Waypoints may be loaded before depar-
ture or inserted en route. The navigation computer, mission computer, or flight-
management computer performs the steering calculations.
Range and bearing to a destination can be calculated by using either the
spherical or the plane triangle of Figure 2.8. If flat-Earth approximations are
satisfactory, the xy coordinates of the aircraft are computed using the dead-
reckoning Equation 2.10; x_t and y_t of the targets are loaded from a cassette or
from a keyboard. Then, range D and bearing B_T to the target, measured from
true north, are

D = [(x − x_t)² + (y − y_t)²]^(1/2),   B_T = tan⁻¹[(x − x_t)/(y − y_t)]     (2.20)

The crew will want a display of relative bearing (B_R = B_T − ψ_T) or relative
track (T_R = B_T − T_T). B_T is the true bearing of the target. Relative bearing B_R
is the horizontal angle from the longitudinal axis of the aircraft to the target,
and relative track T_R is the horizontal angle from the ground track of the aircraft
to the target (Figure 2.4).
Figure 2.8 Course-line calculations. (L = departure from airway; θ₁ = range-to-go angle; D₂ = range-to-go along airway; D = distance-to-go = Rθ₁.)

If Δλ and Δφ are less than 1 radian, the plane-triangle solution exceeds the spherical-triangle solution by a range ΔD:

(2.21)

This error is 36 nmi at a range of D = 1000 nmi, an azimuth B_T = 45 deg, and a latitude of φ = 45 deg. If this is not sufficiently accurate, a spherical-triangle solution may be used:

D
cos ~ = sin cp sin cp 1 + cos cp cos cp 1 cos (/\ - 1\ 1)
Rc

(2.22)

R_c is the Gaussian radius of curvature, Equation 2.4. At long range, where the absolute and percent errors are largest, they are usually least significant. Within 100 nmi of the aircraft, the Earth can be assumed flat within an error of 0.3 nmi.
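The two range computations can be compared numerically. The Python sketch below is illustrative only; the function and variable names are invented here, and a single mean Earth radius is used in place of the Gaussian radius of curvature of Equation 2.4.

```python
import math

R_MEAN_NMI = 3443.9        # mean Earth radius in nautical miles (stand-in for R_c)

def flat_earth_range_bearing(lat, lon, lat_t, lon_t):
    """Plane-triangle range D and bearing to the target, in the spirit of
    Equation 2.20 (bearing computed from target-minus-aircraft displacements
    so that it points from the aircraft toward the target)."""
    dx = R_MEAN_NMI * math.radians(lon_t - lon) * math.cos(math.radians(lat))  # east
    dy = R_MEAN_NMI * math.radians(lat_t - lat)                                # north
    D = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0   # measured from true north
    return D, bearing

def spherical_range(lat, lon, lat_t, lon_t):
    """Great-circle range from the spherical triangle of Equation 2.22."""
    phi, phi_t = math.radians(lat), math.radians(lat_t)
    dlon = math.radians(lon_t - lon)
    c = (math.sin(phi) * math.sin(phi_t) +
         math.cos(phi) * math.cos(phi_t) * math.cos(dlon))
    return R_MEAN_NMI * math.acos(max(-1.0, min(1.0, c)))

# Example: a waypoint to the northeast of 45 deg N, 0 deg E.
print(flat_earth_range_bearing(45.0, 0.0, 55.2, 14.5))
print(spherical_range(45.0, 0.0, 55.2, 14.5))
```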
Steering and range-bearing computations can be performed directly from the z_i navigation coordinates or from the direction cosines C_ij to prevent singularities near the north and south poles.
Knowing the measured or computed ground speed, an aircraft can be steered
in such a manner that the ground speed vector-not the longitudinal axis of
the aircraft-tracks toward the desired waypoint (relative track is the steering
command that is nulled). The difference between heading toward the target
and tracking toward the target is significant only for helicopters; the vehicle
eventually arrives there in either case, by slightly different paths.
Two general kinds of steering to a destination are commonly used: (I) steer-
ing directly from the present position to the destination and (2) steering along
a preplanned airway or route. The former results in area navigation (Section
2.7.4) using the shortest (though not necessarily the fastest) route to the des-
tination, whereas the latter is representative of flying along assigned airways
(Chapter 14). Either steering method may be solved by the plane-triangle or
spherical-triangle calculation.
The rhumb line is used by ships and simple aircraft. It is defined by flying at a constant true heading relative to the local meridians. The resulting flight path is a straight line on a Mercator chart. Aircraft sometimes divide a complex route into rhumb-line segments so that each segment can be flown at constant heading.
More often, the continually changing heading toward the next waypoint is
recomputed and fed to the autopilot. Since the great circle maps into a near-
straight line on a Lambert conformal chart, the crew can monitor the flight
path by manual plotting, if desired. In the twenty-first century, electronic map
displays will show the moving aircraft on charts.

2.7.2 Direct Steering


The steering computer calculates the ground speed V_1 along the direction to the destination and V_2 normal to the line of sight to the destination. The commanded bank angle is made proportional to V_2 in order that the aircraft's heading rate Ḣ be driven to zero when flying along the desired great circle. If V_a is airspeed, Ḣ = (g/V_a) tan φ, and the commanded bank angle is

φ_c = K_1 V_2 + K_2 (dV_2/dt)        (2.23)

K_2 provides some anticipation when approaching the correct direction of flight.


The commanded bank angle φ_c is limited to a maximum value (e.g., 15 deg) in
order to avoid violent maneuvers when the aircraft's flight direction is greatly in
error.
Near the destination, the computation of lateral speed, V_2, becomes sin-
gular, and the steering signal would fluctuate erratically. To prevent this, the
track angle or heading is frozen and held until the destination (computed from
range-to-go and ground speed) is passed. The range at which the steering must
be frozen is determined by simulation. This navigation method is sometimes
called proportional navigation, a term derived from missile-steering techniques
in which the heading rate of the vehicle is made proportional to the line-of-sight
rate to the target.
The normal to the great-circle plane connecting present position R_3 to the destination R_2 is defined by the unit vector u:

u = (R_3 × R_2)/|R_3 × R_2|        (2.24)

(Figure 2.8). The lateral speed V_2 is the magnitude of the dot product of the aircraft's velocity with this unit vector. The range to go from R_3 to R_2 is given by Equation (2.20) or (2.22). Time to go is calculated from the proposed velocity schedule.
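A minimal sketch of the direct-steering geometry follows (Python). The vector and gain names are assumptions for illustration; the unit normal of Equation 2.24 yields the lateral speed V_2, and the commanded bank angle is limited as described above. The rate term of Equation 2.23 is omitted for brevity.

```python
import numpy as np

def direct_steering_bank(R3, R2, V_ground, K1=0.02, phi_limit_deg=15.0):
    """Compute the cross-track speed V2 and a limited commanded bank angle.

    R3, R2   : Earth-centered position vectors of the aircraft and destination
    V_ground : Earth-referenced ground-velocity vector of the aircraft
    K1       : illustrative steering gain (the anticipation gain K2 of Eq. 2.23
               would multiply dV2/dt and is omitted here)
    """
    u = np.cross(R3, R2)
    u = u / np.linalg.norm(u)            # Equation 2.24: unit normal to the great circle
    V2 = float(np.dot(V_ground, u))      # lateral speed normal to the desired great circle
    phi_c = K1 * V2                      # proportional steering term
    phi_c = max(-phi_limit_deg, min(phi_limit_deg, phi_c))  # limit to avoid violent maneuvers
    return V2, phi_c
```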
The fastest route is neither the great circle nor the airway because of winds,
especially because of the stratospheric jet stream whose speed often exceeds 100
knots. Thus where high-altitude aircraft are not confined to airways, they follow
preplanned "pressure-pattern" routes that take advantage of cyclonic tail winds.

2.7.3 Airway Steering


The steering algorithm calculates a great circle from the takeoff point (or from
a waypoint) to the destination (or to another waypoint). The aircraft is steered
along this great circle by calculating the lateral deviation L (Figure 2.8) from
the desired great circle and commanding a bank angle:

φ_c = K_1 L + K_2 (dL/dt) + K_3 ∫ L dt        (2.25)

The integral-of-displacement term is added to give zero steady-state displace-


ment from the airway in the presence of a constant wind and is also used in
automatic landing systems (Chapter 13) in order to couple the autopilot to the
localizer beam of the instrument-landing system. The bank angle is limited, to
prevent excessive control commands when the aircraft is far off course. Near
the destination, the track, or heading, is frozen to prevent erratic steering.
As the aircraft passes each waypoint, a new waypoint is fetched, thus select-
ing a new desired track. The aircraft can then fly along a series of airways
connecting checkpoints or navigation stations.
The great-circle airway is defined by the waypoint vectors R_1 and R_2. The angle to go to waypoint 2 is:

sin (angle to go) = |R_3 × R_2| / (|R_3| |R_2|)        (2.26)

Range and time to go are calculated as in Section 2.7.2. The lateral-deviation


angle L/D is:

(2.27)

These computations can be performed directly in terms of the navigation coor-


dinates z_i.

2.7.4 Area Navigation


Between 1950 and approximately 1980, aircraft in developed countries flew on
airways, guided by VOR bearing signals (Chapter 4). Position along the airway
could be determined at discrete intersections (Figure 2.9) using cross-bear-
ings to another VOR. In the 1970s DME, colocated with the VOR, allowed
aircraft to determine their position along the airways continuously. Thereafter,
regulating authorities allowed them to fly anywhere with proper clearances, a
technique called RNAV (random navigation) or area navigation. RNAV uses
combinations of VORs and DMEs to create artificial airways either by connect-
ing waypoints defined by latitude/longitude or by triangulation or trilateration
to VORTAC stations (as shown by the dotted lines to A_1 in Figure 2.9). The
on-board flight-management or navigation computer calculates the lateral dis-
placement L from the artificial airway and the distance D to the next waypoint
A_1 along the airway [24b, c], [25].
Figure 2.9 Plan view of area-navigation fix. (Δρ_k = position correction for the next iteration; L is the lateral displacement from the airway; D is the distance to the waypoint.)

In Figure 2.9, ρ_1 and ρ_3 are the measured distances to the DME stations at V_1 and V_3. The position P_1 is found from the triangle P_1 V_1 V_3. The aircraft's position must be known well enough to exclude the false solution at P_2. An artificial airway is defined by the points A_1 and A_2. D and L are usually found iteratively, as sketched in code after the following steps:

1. Assume P_1 based on prior navigation information.
2. Calculate the ranges ρ_1 and ρ_3 to the DMEs at V_1 and V_3 using the range equations of Section 2.5.2. A range-bearing solution relative to a single VOR station calculates the aircraft's position, but not as accurately as a range-range solution to two stations.
3. Correct the measured ranges for the altitudes of the aircraft and DME station.
4. Subtract the measured and calculated ranges:

   Δρ_1 = ρ_1(measured) − ρ_1(calculated)
   Δρ_3 = ρ_3(measured) − ρ_3(calculated)

5. The next estimate of P_1 is along the vector Δρ_k in Figure 2.9, whose components along ρ_1 and ρ_3 are Δρ_1 and Δρ_3.
6. Repeat step 2 and iterate until the Δρ_i are acceptably small.
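The iteration above can be sketched as follows (Python). The function and variable names are illustrative, the problem is reduced to two dimensions with the altitude correction of step 3 assumed already applied, and the position update is one simple way of realizing step 5; the initial estimate must be close enough to the true position to exclude the false solution at P_2.

```python
import math

def dme_dme_fix(p_guess, V1, V3, rho1_meas, rho3_meas, iters=10):
    """Iterative area-navigation fix from two DME ranges (2-D sketch).

    p_guess              : (x, y) initial position estimate from prior navigation
    V1, V3               : (x, y) positions of the two DME stations
    rho1_meas, rho3_meas : measured ranges after the altitude correction of step 3
    """
    x, y = p_guess
    for _ in range(iters):
        # Step 2: calculated ranges from the current estimate.
        rho1_calc = math.hypot(x - V1[0], y - V1[1])
        rho3_calc = math.hypot(x - V3[0], y - V3[1])
        # Step 4: range residuals.
        d1 = rho1_meas - rho1_calc
        d3 = rho3_meas - rho3_calc
        # Step 5 (one simple realization): move along the two station lines of sight.
        u1 = ((x - V1[0]) / rho1_calc, (y - V1[1]) / rho1_calc)
        u3 = ((x - V3[0]) / rho3_calc, (y - V3[1]) / rho3_calc)
        x += d1 * u1[0] + d3 * u3[0]
        y += d1 * u1[1] + d3 * u3[1]
        # Step 6: stop when the residuals are acceptably small.
        if abs(d1) < 1e-4 and abs(d3) < 1e-4:
            break
    return x, y
```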

After determining the aircraft's position P_1, the distance-to-go and lateral dis-
placement are calculated as in Section 2.7. L is sent to the autopilot, as explained
in Section 2.7.3, and D is used to calculate time-to-go.
In the 1990s, civil aircraft were being allowed the freedom to leave RNAV
airways (Section 1.5.3) using GPS, inertial, Omega, and Loran navigation, none
of which constrain aircraft to airways.

2.8 NAVIGATION ERRORS

2.8.1 Test Data


Navigation errors establish the width of commercial airways, the spacing of
runways, and the risk of collision. Navigation errors determine the accuracy of
delivering weapons and pointing sensors.
All navigation systems show a statistical dispersion in their indication of
position and velocity. Test data can be taken on the navigation system as a whole
and on its constituent sensors. Tests are conducted quiescently in a laboratory,
in an artificial environment (e.g., rate table or thermal chamber), or in flight.
As accuracies improve, the statistical dispersions, once considered mere noise,
become important enough to predict (as discussed in Chapter 3).
The departure of a commercial aircraft from its desired flight path is some-
times divided into:

1. Navigation sensor errors
2. Computer errors
3. Data entry errors
4. Display error if the aircraft is flown by a pilot
5. Flight technical error, which is the departure of the pilot-flown or
autopilot-flown aircraft from the computed path

Deterministic errors are added algebraically and statistical errors are root
sum squared. The total is sometimes called total system error.
The two- or three-dimensional vector position error r can be defined as indi-
cated minus actual position. A series of measurements taken on one navigation
system or on any sample of navigation systems will yield a series of position
measurements that are all different but that cluster around the actual position.
If the properties of the navigation systems do not change appreciably with age,
if the factory is neither improving nor degrading its quality control, and if all
systems are used under the same conditions, then the statistics of the series
of measurements taken on any one system are the same as those taken for a
sample ("ensemble") of systems. Mathematically it is said that the statistics are
ergodic and stationary.
If the position errors are plotted in two dimensions, as shown in Figure 2.10, it will generally be found that the average position error r̄ = (Σ Δr_i)/N is not zero (Δr_i are the individual position errors; N is the total number of measurements). Two measures of performance are the mean error and the circular error probability (CEP) (also known as the circular probable error, CPE). The CEP is usually considered to be the radius of a circle, centered at the actual position (but more properly centered at the mean position of a group of measurements), that encloses 50% of the measurements. The mean error and the CEP may be suitable as crude acceptance tests or specifications, but they yield little engineering information.

Figure 2.10 Two-dimensional navigation errors. In principal axes, the x and y statistics are independent.
More rigorously, the horizontal position error should be considered as a bivariate (two-dimensional) distribution. The mean error r̄ and the directions of the principal axes x and y, for which errors in x and y are uncorrelated, must be found. To find the principal axes, a convenient orthogonal coordinate system (x', y') is established with its origin at r̄. Then the standard deviation (or rms) σ in each axis and the correlation coefficient ρ between axes are calculated:

σ_x'² = Σ(x_i')²/N,    σ_y'² = Σ(y_i')²/N,    ρ = Σ x_i' y_i' / (N σ_x' σ_y')        (2.28)

From these quantities are determined a new set of coordinates, xy, which are rotated θ from x'y':

tan 2θ = 2ρ σ_x' σ_y' / (σ_x'² − σ_y'²)        (2.29)

The origin of the new xy-coordinates coincides with the origin of x' y'. The com-
ponents of the position errors along x and y are uncorrelated and can be consid-
ered separately. In inertial systems, the principal axes are usually the instrument
axes. In Doppler systems, the principal axes are along the velocity vector and
normal to it or along the aircraft axis and normal to it if the antenna is body-
fixed (as was usually the case in 1996). In Loran systems, the principal axes
are found by diagonalizing the covariance matrix. The rms errors in the new
coordinates can be calculated anew or can be found from the errors in x'y' from
[7, p. 598].
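A short sketch of the principal-axis computation of Equations 2.28 and 2.29 follows (Python, using NumPy); the function and array names are illustrative.

```python
import numpy as np

def principal_axis_statistics(x_err, y_err):
    """Mean error, the Equation 2.28 statistics, and the Equation 2.29 rotation
    angle for a sample of two-dimensional position errors."""
    x_err = np.asarray(x_err, dtype=float)
    y_err = np.asarray(y_err, dtype=float)
    xbar, ybar = x_err.mean(), y_err.mean()          # mean error
    xp, yp = x_err - xbar, y_err - ybar              # errors about the mean (x', y')
    sx = np.sqrt(np.mean(xp ** 2))                   # sigma_x'
    sy = np.sqrt(np.mean(yp ** 2))                   # sigma_y'
    rho = np.mean(xp * yp) / (sx * sy)               # correlation coefficient
    theta = 0.5 * np.arctan2(2.0 * rho * sx * sy, sx ** 2 - sy ** 2)   # Eq. 2.29
    # Rotate into the principal axes; the rotated components are uncorrelated.
    x_pr = xp * np.cos(theta) + yp * np.sin(theta)
    y_pr = -xp * np.sin(theta) + yp * np.cos(theta)
    return (xbar, ybar), (sx, sy, rho), theta, (np.std(x_pr), np.std(y_pr))
```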
The one-dimensional statistics along either of the principal axes will now be
discussed. First, the mean and standard deviation are found in each independent
axis. The errors along each axis are plotted separately as cumulative distribution
curves, which show the fraction of errors less than x versus x. This curve allows
all properties of the statistics to be determined. In many systems, experimental
cumulative distributions will fit a Gaussian curve.
If the one-dimensional errors are indeed Gaussian, their statistics have the
following properties:

Mean x̄
Standard deviation (rms) σ
50% of the errors lie within x̄ ± 0.675σ (probable error)
68.3% of the errors lie within x̄ ± σ
95.4% of the errors lie within x̄ ± 2σ
99.7% of the errors lie within x̄ ± 3σ

From these properties of the Gaussian distribution, it is easy to tell whether


an experimentally determined cumulative distribution curve is approximately
Gaussian. Reference [7] shows statistical tests for "normality" of experimental
data.
Returning to the two-dimensional case, the probability that a navigation error falls within a rectangle 2m by 2n, centered at the mean and aligned with the principal axes (x and y uncorrelated), is found by multiplying the tabulated error functions for m/σ_x and n/σ_y computed independently. Tables for the probability integral ([14], p. 116) can be used instead for m/(σ_x √2) and n/(σ_y √2). For example, if σ_x = 0.4 nmi and σ_y = 1.0 nmi, the probability of falling inside a rectangle 1.2 nmi (in x) by 2.4 nmi (in y) is the product of the probability integrals for 0.6/(0.4√2) = 1.06 and 1.2/(1.0√2) = 0.85, which is 0.866 × 0.770 = 66%. Thus, the 1.2 by 2.4 nmi rectangle encloses 66% of the navigation errors.
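The rectangle probability can be checked numerically with the error function from the Python standard library. The sketch below, which assumes zero-mean, uncorrelated Gaussian errors aligned with the principal axes, reproduces the 66% result of the example above; the function name is illustrative.

```python
import math

def prob_in_rectangle(m, n, sigma_x, sigma_y):
    """Probability that a zero-mean, uncorrelated Gaussian error falls inside a
    rectangle 2m by 2n centered at the mean and aligned with the principal axes."""
    return (math.erf(m / (sigma_x * math.sqrt(2))) *
            math.erf(n / (sigma_y * math.sqrt(2))))

# Worked example from the text: sigma_x = 0.4 nmi, sigma_y = 1.0 nmi,
# rectangle 1.2 nmi by 2.4 nmi (m = 0.6 nmi, n = 1.2 nmi).
print(prob_in_rectangle(0.6, 1.2, 0.4, 1.0))   # approximately 0.67, i.e., about 66%
```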

Navigation systems, civil and military, are often specified by the fraction of navigation errors that fall within a circle of radius ε_p, centered on the mean. Figure 2.11 shows this probability if σ_x and σ_y are Gaussian distributed and uncorrelated. For example, if σ_x = 0.4 nmi and σ_y = 1.0 nmi, b = σ_x/σ_y = 0.4. If the radius of the circle is ε_p = 1.5 nmi, then a = ε_p/σ_y = 1.5, and from the graph, P = 0.85. Thus, 85% of the errors fall within a circle of radius 1.5 nmi.
Weapons often inflict damage in a circular pattern. Hence, military tactical navigation systems are sometimes specified by the CEP, the radius of the circle that encompasses 50% of the navigation errors (which are inherently elliptically distributed):

CEP = 0.59(σ_x + σ_y) ± 3%,    if σ_y/3 < σ_x < 3σ_y        (2.30)

When σ_x = σ_y = σ, the CEP = 1.18σ and, from Figure 2.11, 95% of the navigation errors lie within a circle of radius 2.45σ.

Figure 2.11 Probability of an error lying within a circle of radius ε_p, where a = ε_p/σ_y and b = σ_x/σ_y. (σ_x and σ_y are the uncorrelated standard deviations in x and y.) From unpublished paper by Bacon and Sondberg.

Navigation errors are frequently defined in terms of a circle of radius 2d_rms, where

2d_rms = 2 √(σ_x² + σ_y²)        (2.31)

If σ_x = σ_y = σ, 2d_rms = 2√2 σ = 2.83σ which, from Figure 2.11, encompasses 98% of the errors. However, if σ_x ≠ σ_y, the 2d_rms circle can enclose as few as 95% of the errors. Sometimes a 3d_rms error is specified that encompasses 99.99% of the errors; collecting enough relevant data to measure compliance with 3d_rms is usually impossible.
In three dimensions, the usual measure of navigation performance is the radius of a sphere, 2d_rms/3D:

2d_rms/3D = 2 √(σ_x² + σ_y² + σ_z²)        (2.32)

If σ_x = σ_y = σ_z = σ, the 2d_rms/3D sphere has a radius 2√3 σ = 3.46σ and encloses 99% of the Gaussian-distributed navigation errors. In other words, if the single-axis standard deviations are one nautical mile, a sphere of 3.5-nmi radius encloses 99% of the errors. Military systems sometimes define a sphere whose radius is the spherical error probable (SEP) that encloses 50% of all three-dimensional errors. If the three variances are equal, the radius of the SEP sphere is 1.54σ.
Navigation test data are often not Gaussian distributed; they have large tails
(outliers or wild points). Measures based on mean square are greatly increased
by these outlying points. Hence, test specifications are often written in terms
of the "95% radius," the radius of a horizontal circle, centered on the desired
navigation fix, that encloses 95% of the test points. As noted earlier, if the data
were Gaussian, if the two axes had equal standard deviations and if the mean
were at the desired fix (no bias), then the circle radius 2.45σ would enclose
95% of the test points.

2.8.2 Geometric Dilution of Precision


Geometric dilution of precision (GDOP) relates ranging errors (e.g., to a radio
beacon) to the dispersion in measured position. If three range measurements are
made in orthogonal directions, the standard deviations in the aircraft's position
error are the same as those of the three range sensors. However, if the range
measurements are nonorthogonal or there are more than three measurements,
the aircraft's position error can be slightly smaller or much larger than the error
in each range.
If the variances in ranging errors to each station are equal, σ_R², and if the

uncorrelated variances of aircraft position are σ_x², σ_y², and σ_z² in locally level coordinates, then, by definition, the position dilution of precision is

(PDOP)² = (σ_x² + σ_y² + σ_z²)/σ_R²        (2.33)

and the horizontal dilution of precision (HDOP) is

(HDOP)² = (σ_x² + σ_y²)/σ_R²        (2.34)

In pseudoranging systems, the GDOP is

(GDOP)² = (PDOP)² + (TDOP)²

where TDOP is the time dilution of precision, the contribution of clock error to the error in pseudorange. Equations for GDOP, PDOP, and HDOP, when the standard deviations in range to each station are different, are provided in [12].
Dilution of precision plays an important role in radio-ranging computations,
especially for Loran (Chapter 4) and GPS (Chapter 5). Detailed GDOP equa-
tions for Loran are in Section 4.5.1, for Omega in Section 4.5.2, and for GPS
in Section 5.5.2. Receivers usually flag a PDOP or HDOP greater than approx-
imately 6 as an indication of poor geometry of the radio stations, hence a poor
fix.
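For pseudoranging systems, the dilution-of-precision quantities are commonly evaluated from a geometry matrix of unit line-of-sight vectors rather than from the position variances directly. The Python sketch below follows that standard formulation, which is consistent with the definitions above when the ranging errors to all stations are equal and uncorrelated; the function name and input layout are illustrative.

```python
import numpy as np

def dilution_of_precision(unit_los):
    """Compute PDOP, HDOP, TDOP, and GDOP from unit line-of-sight vectors.

    unit_los : (N, 3) array (N >= 4) of unit vectors from the aircraft to each
               station or satellite, expressed in locally level east-north-up axes.
    """
    unit_los = np.asarray(unit_los, dtype=float)
    # Pseudorange geometry matrix: one row per measurement, clock column of ones.
    H = np.hstack([unit_los, np.ones((unit_los.shape[0], 1))])
    Q = np.linalg.inv(H.T @ H)            # covariance shape factor for unit ranging variance
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    hdop = np.sqrt(Q[0, 0] + Q[1, 1])
    tdop = np.sqrt(Q[3, 3])
    gdop = np.sqrt(pdop ** 2 + tdop ** 2)
    return pdop, hdop, tdop, gdop

# A receiver would typically flag PDOP or HDOP greater than about 6 as poor geometry.
```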

2.9 DIGITAL CHARTS

Traditional aeronautical charts are printed on paper. They are of three kinds:

1. Visual charts. Showing terrain, airports, some navaids, restricted areas


2. En-route instrument charts. Showing airways, navigation aids, intersec-
tions, restricted areas, and legal boundaries of controlled airspace. Air-
ways are annotated to show altitude restrictions; high terrain is identi-
fied.
3. Approach plates, standard approaches (STARs), and standard departures
(SIDs). Showing horizontal and vertical profiles of preselected paths to
and from the runway, beginning or ending at en-route fixes. High terrain
and man-made obstacles are indicated. Missed approaches to a holding
fix are described visually.

Military targeting charts show the expected location of defenses, the initial

approach fix, the direction of approach, a visual picture of the target in season
(e.g., snow covered) and the preplanned escape route.
Since World War II, experiments have been made with analog charts driven
by automatic navigation equipment. Paper charts were unrolled or scanned onto
CRTs, while an aircraft "bug" was driven by the navigation computations. The
systems were limited by cost, reliability, and the need for wide swaths of chart
to allow for diversion. Their use was confined to some helicopters and experi-
mental military aircraft.
In 1996, digital maps were well-established in surveying data bases, the cen-
sus, automotive navigation, and other specialized uses. Manufacturers of nav-
igation sets created their own data base of navaids and airports or purchased
one. Small digital data bases were included in the navaid's ROM whereas large
data bases, especially those that included terrain, were usually delivered to cus-
tomers on CD-ROM. National cartographic services in the developed world
were all converting from paper maps to digital data bases. The U.S. Defense
Mapping Agency (DMA) issued a standard for topographic maps on CD-ROM
[19], and several other nations' cartographic agencies were doing the same. The
U.S. Defense Mapping Agency produces separate data bases for terrain eleva-
tion and cultural features. They can be stored separately and superimposed on
an airborne display. A U.S. National Imagery Transmission Format was created
to send and store digital data. The GRASS language was widely used in the
United States to manipulate DMA data [6]. Private companies were produc-
ing remarkably diverse Geographic Information System (GIS) data bases. In
1996, at least one company (Jeppesen) was producing digital approach plates
on CD-ROM [4]. RTCA published a guide to aeronautical data bases [24a] as
did ARINC [26a].
The technical challenges have been (1) to standardize the medium (e.g., CD-
ROM), (2) to standardize the format of data stored on the medium so that any
disc could be loaded onto any aircraft, just as any chart can be carried on any
aircraft, and (3) to develop on-board software that displays sections of the chart
across which the aircraft seems to move (moving map or moving bug, or both)
and orient the chart properly (north up or velocity vector up). As the aircraft
nears the edge of the chart, the software must move to a new section while
avoiding hysteresis when flying near the edge of a chart. The expectation is
that CD-ROM en-route and approach charts will be readily available to military
users before the year 2000.
Digital chart displays have provisions for weather or terrain overlays (from
airborne radar or from uplinked data), and provisions for traffic overlays (from
on-board TCAS, ground uplink on Mode-S, or position broadcasts from other
aircraft, Chapter 14). Civil airlines, driven by cost considerations, may gradually
abandon their practice of purchasing charts and distributing them to the crews
in hard copy. Instead, they may at first print charts on demand in the dispatch
room from a central data base and, later, distribute portable digital charts on
CD-ROM or via radio uplink to be loaded into the aircraft avionics when the
crew boards.

2.10 SOFTWARE DEVELOPMENT

The sequence of activities in preparing the navigation software is as follows:

1. The vehicle requirements are decomposed into the navigation system


requirements. The navigation functions are allocated to hardware and soft-
ware, usually after trade-off studies.
2. A mathematical model of the vehicle and sensors is prepared. Sensors,
such as radars and inertial instruments, are simulated with respect to accu-
racy and reliability.
3. Engineering simulations are conducted on the accuracy model to deter-
mine the scaling, calculation speed, memory size, word length, and mini-
mum degree of complexity required to obtain acceptable accuracy. Refer-
ence trajectories are defined that specify flight paths, speed profiles, and
attitude histories. Often man-in-the-loop simulations are required using
a cockpit mock-up to assess the crew's work load. Another simulation
determines the system's reliability and availability, given the known reli-
ability of the constituent sensors and computers.
4. The equations are coded for the flight computer. Prior to 1975, most nav-
igation software was coded in assembly language to increase the execu-
tion speed. Thereafter, higher-order languages such as Fortran, C, Pascal,
Jovial, and Ada have been used, though hardware drivers are often still
written in assembly language. Subroutines and functional modules come
from libraries of well-tested routines that are re-used. The modules are
individually tested; then the complete program is gradually compiled and
tested. The contents must be documented at each stage, a process called
configuration control.
5. The code is verified by an independent agent. Mission-critical code,
whose failure causes diversion of the aircraft, undergoes a simple ver-
ification, sometimes by engineers in the same company who did not
participate in the development or test of the code. However, if navi-
gation code is embedded with code that can cause loss of the aircraft
(safety-critical code), the code must be further verified, usually by an
independent organization using independently derived mathematical mod-
els of the aircraft and sensors. The high cost of independent verifi-
cation encourages architectures in which safety-critical code is segre-
gated. RTCA describes the certification of airborne software [24d] as does
ARINC [26b].
6. The code is loaded into ROM chips (often into ultraviolet-erasable
EPROMs or into electrically-erasable programmable ROMs called EE-
PROMs or flash ROMs) that are installed in the computer by the manufac-
turer. EEPROM and flash-ROM code can be field-altered using test con-
nectors. Sometimes, when navigation is not embedded into safety-critical
software, the code is delivered to the airbase on tape or CD-ROM and

loaded into the flight computer via the on-board data bus. Revisions of
flight software may be issued from time to time.
7. A copy of the flight software is usually delivered to a training facility,
where it is used to check out crews in a ground simulator. The simulator
may be a part-task computer-based trainer (CBT); a terminal that emu-
lates the navigation keyboards, on-board computer, and displays; or it may
be a high-fidelity emulation of the cockpit and avionics. A high-fidelity
simulator may incorporate a flight computer that contains the navigation
software or may rely on a scientific computer, programmed to emulate the
flight computer. In a CBT, sensor inputs are simulated; in a high-fidelity
simulator, they may come from real or simulated hardware. Simulator
training is cheaper and often more effective than flight training.
8. The final task in the preparation of the navigation software is the evalua-
tion of its performance during flight versus the specification. This is done
by the aircraft manufacturer or operator.

2.11 FUTURE TRENDS

The increasing capability of airborne digital computers will permit more com-
plex algorithms to be solved. Companies that specialize in aircraft navigation
will continue to build libraries of proprietary algorithms that they incorporate
into their products. Crew interfaces will become more graphical to reduce work-
load and reduce errors in loading data. Direct loads from the ground via Mode-S
links and other data links (some via satellite) will be commonplace.
By the year 2000, on-board CD-ROM readers will display charts and flight-
manual data on military aircraft. The civil aviation industry may prefer to print
up-to-date paper charts for each flight in the dispatch rooms, downloaded from
a central data base, as an alternative to procuring and distributing them.
The software verification costs assigned to each aircraft will be substan-
tial, because they are amortized over a few hundred units, even with standard
libraries of routines.

PROBLEMS

2.1. The direction cosine matrix [C] transforms the Earth-centered inertial
coordinates y_i into the locally-level navigation coordinates z_j. Let α = 0
when the aircraft is on the equator, and let the initial matrix be

[C] = [1 0 0; 0 1 0; 0 0 1]
Let α = λ.
PROBLEMS 53

(a) If the aircraft flies 90° due east, show the direction cosine matrix.

0I 00 0I]
Ans.
[0 I 0

(b) If the aircraft flies 90° due north from its original position, show the
direction cosine matrix.
Ans:

(c) If the aircraft flies 30° due east on the equator at 600 knots from its
original position, what are the C_ij and the Ċ_ij? Use Equation 7.40.
Ans:

. 5
[C)=-- --J3
I -J3
I
--f3]1 hr
-]
57.3 [ I _-J3 O

2.2. An aircraft flies 3 hrs east then 2 hrs north at 300 knots at an altitude of
3 nmi, starting at 40° north latitude.
(a) Find its position using the flat-Earth dead-reckoning equations.
Ans. x = 900 nmi, y = 600 nmi.
(b) Find the final latitude-longitude using spherical-Earth equations with
the Gaussian radius of curvature.
Ans. φ = 49.98°, Δλ = 19.54°.
(c) Find the final latitude-longitude using the ellipsoidal-Earth equations,
(2.5).
Ans. φ = 49.99°, Δλ = 19.50°.
(d) Find the distance from the start to the destination of case a using the
flat-Earth range equation, 2.20.
Ans. 1081.7 nmi.
(e) Find the distance to the destination of case b using the spherical-Earth
approximation with the Gaussian radius of curvature.
Ans. 1022.7 nmi.
(f) Find the distance to the destination of case b using the flat-Earth equa-
tions plus the correction of Equation 2.21.
Ans. 1027.0 nmi.
54 THE NAVIGATION EQUATIONS

2.3. A GPS satellite is at the ECEF coordinates:

x = 14,367.71 nmi, y = 0, z = 0

The observer is at latitude 45°, longitude 45°, 30,000-ft altitude. Calculate


the distance from the aircraft to the satellite and the elevation of the line
of sight. Use the WGS dimensions on page 25.
Ans. R = 12,986.56 nmi, E = 43.5 deg.

2.4. Derive Equation 2.21. Hint: solve the spherical triangle for the cosine of
the range angle and express the coordinates of the waypoint as the present
position plus a small increment. Expand the sines to third order and the
cosines to second order.
2.5. Verify the calculations on page 46 for the probability of falling inside a
rectangle of edge 1.2 nmi (x = 0.6 nmi) by 2.4 nmi (y = 1.2 nmi). Let ax
= 0.4 nmi and a y = 2.4 nmi.
3 Multisensor Navigation
Systems

3.1 INTRODUCTION

Multisensor navigation is the process of estimating the navigation variables of


position, velocity, and attitude from a sequence of measurements from more
than one navigation sensor. There are essentially two broad categories for sen-
sors used in avionics suites for navigation and related functions: dead-reckoning
sensors and positioning sensors. The processing equations for these are given
in Chapter 2.
Dead-reckoning sensors provide a measure of acceleration or velocity with
respect to an Earth-referenced coordinate system, consequently requiring inte-
gration with respect to time to provide vehicle position with respect to the Earth.
Examples of these types of sensors are inertial systems (Chapter 7), Doppler
radars (Chapter 10), and air-data sensors (Chapter 8). The latter two require an
attitude and heading reference (AHRS) or an inertial system (INS) to provide
the required angular orientation with respect to the Earth.
Positioning sensors provide a position measurement that can be related to
Earth-referenced coordinates. Examples of these sensors are radio systems such
as the terrestrial-based Loran and the satellite-based Global Positioning System
(GPS) (Chapters 4 and 5) which provide the position of the antenna in geodetic
coordinates. A star-tracker can also be used for fixing position when its orien-
tation with respect to the Earth is determinable through some means such as an
AHRS or INS.
This chapter describes filtering algorithms whereby measurements from com-
binations of these sensors can be employed to satisfy the functional output vari-
able requirements of an avionics suite. In general, avionics system users require
a variety of information on the state of the air vehicle depending on the com-
plexity of the application (from aerospace plane to auto-gyro) and accordingly will
include a variety of complementary sensors in their equipment suite. The infor-
mation desired can include the following:

• Position and velocity in geodetic coordinates-east, north, and up which


allow determination of ground speed and track angle.
• Orientation with respect to the Earth-pitch, roll, and yaw or heading
angles.

• Linear and angular acceleration and rate in body coordinates for vehicle
control purposes.
• Vehicle state relative to the air mass including orientation-angle of attack,
sideslip, and airspeed, again for vehicle control purposes.

The discussion of navigation sensors in other chapters explains their capa-


bilities and error characteristics. Clearly, the entire list of state variables for
the vehicle cannot be provided by any one sensor at normally desired levels of
accuracy and dependability. The deficiencies that exist in the individual sensors
employed in a multisensor avionics suite include at least one of the following
characteristics:

• Increase in error of the navigation variables as a function of time or


distance traveled. This is a characteristic of all dead-reckoning sensors
that eventually accumulate unbounded errors. Examples are inertial and
Doppler radar navigation systems.
• High noise level or low bandwidth in any derivative variable. This is a
characteristic of radio sensors that require differentiation with respect to
time of the basic measured variable to obtain rate or acceleration. For
example, with Loran, position must be differentiated to obtain velocity.
For Doppler radar, differentiation of speed can yield acceleration, once
the result is transformed to an Earth-referenced coordinate system using
attitude from an AHRS or inertial system.
• Reliance on sensors requiring off-board components to accomplish the
required functions. This includes radio sensors using terrestrial or satellite-
based assets whose access can be denied, due to intentional (jamming) or
nonintentional interference (transmission blockage or interfering transmis-
sions) of the propagated signal from the off-board assets, or failure of the
transmitters due to other causes.

To overcome individual sensor deficiencies, system designers have sought com-


binations of avionic sensors. These multisensor systems are designed to provide
reliable, dynamically accurate measurements of the air vehicle's state for all
specification-required flight conditions.
The most typical solution has been to use an inertial system in conjunction
with an appropriate set of complementary sensors that arrest the random, time-
increasing error in velocity, position, and orientation resulting from integra-
tion of the fundamental high-bandwidth inertial measurements of acceleration
(force corrected for the a priori known effect of gravity) from the accelerometers
and angular change from the gyroscopes. The following paragraphs discuss the
attributes of integrating various sensors with an inertial system to obtain high-
accuracy, high-bandwidth measurements of the dynamic state of the air vehicle.
Other multisensor configurations are discussed in Section 3.8.

3.2 INERTIAL SYSTEM CHARACTERISTICS

Two fundamental error sources affect the error behavior of an inertial system.
These are the errors in the measurements of force made by the accelerometers
and the errors in the measurement of angular change in orientation with respect
to inertial space made by the gyroscopes. The basic mechanization of an iner-
tial system, which is described in detail in Chapter 7, is depicted schematically
in Figure 3.1. In this figure it is seen that the force measurements made by the
accelerometers are first transformed to a selected navigation coordinate frame
that is typically the local geodetic coordinates of east, north, and local verti-
cal. These measurements are then compensated for the force of gravity with
a mathematical model, such that vehicle acceleration with respect to inertial
space is obtained. The resulting variable, after correction for Coriolis accelera-
tion, is then integrated once into velocity and a second time into position change
with respect to the Earth. Additionally, the gyroscopic measurements of angular
change with respect to inertial space are modified using the system computed
velocity and the Earth's rotation rate vector to reflect the rotation of the local
vertical due to earth rotation and the vehicle change in position as it travels over
the surface of the Earth. In this manner the orientation of the accelerometer
axes relative to the Earth at the present position of the vehicle is continuously
computed. The result is that there are three sources of change in orientation
error of the accelerometers with respect to an Earth-fixed reference coordinate
frame: (1) integrated gyro drift rate; (2) integrated error in system computed
velocity which results from error in the measurement of acceleration-due to
accelerometer measurement errors, imperfect knowledge of the local force of
gravity and the current error in the knowledge of orientation of the accelerome-
ter sensing axes which causes a misresolution of any accelerometer force mea-
surement including that of the gravity vector; and (3) error in the orientation
of the navigation coordinate axes which changes as they rotate with respect to
inertial space.
The result of this interaction of error effects is that the error characteristic of
an inertial system for the computation of velocity, position, and instrument axes
orientation is described by the sinusoidal Schuler oscillation that has a period of
approximately 84.4 minutes. Due to this oscillatory characteristic, the position
error response to a step of constant accelerometer measurement error is not a
quadratic in time but a bounded Schuler oscillation as shown in Figure 3.2.
Note that the error in position and the "tilt," which is the error in orientation of
the accelerometer sensing axes with respect to the local level plane, are equal
for any acceleration measurement error.
The velocity error response to a step of constant gyro drift rate error, as
shown in Figure 3.3, is a bounded Schuler oscillation that has a constant error
dictated by the magnitude of the gyro drift rate. The error in position is also
characterized by a Schuler oscillation but diverges in time in proportion to the
integrated velocity error. The tilt due to the gyro drift rate is a bounded Schuler
oscillation with zero mean.
Figure 3.1 Schematic block diagram of an inertial system mechanization. (R = local radius of the Earth's curvature; V = system-computed velocity; θ = system-computed angular position change over the Earth's surface and its rate; Ω = system-computed Earth rate vector; the force transformation uses the system-computed orientation of the accelerometer sensing axes with respect to the navigation coordinate system.)
Figure 3.2 Inertial system error response to a step of acceleration measurement error. Constant acceleration measurement error induces a zero-mean Schuler oscillation in velocity error and an identical nonzero-mean oscillation in tilt and position error.

Figure 3.3 Inertial system error response to a step of gyro drift rate bias. Constant gyro drift rate induces a zero-mean Schuler oscillation in tilt, a nonzero-mean oscillation in velocity error, and a divergent ramp oscillation in position error.

This occurs because any tilt results in a countervailing accelerometer measurement error, whose result is to produce a constant velocity error component that cancels any constant gyro drift rate at the tilt rate node.
This discussion of deterministic error responses pertains to constant error
sources and initial conditions. As is well-known, the response of any oscillator
to random white noise or correlated noise inputs is divergent at some fractional
power of increasing time. Consequently the error response in inertial system
position, velocity, and orientation due to a noise input is divergent with time.
The discussion of inertial system error characteristics presented here is quite
simplified in that effects due to error in the system computed Earth rate vector
and error in the system computation of Coriolis acceleration lead to additional
24 hour and Foucault error oscillation characteristics as described by the error
equations in Chapter 7.
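The single-channel error behavior described above can be reproduced with a short simulation. The Python sketch below integrates a simplified level-axis Schuler error loop (velocity error, tilt, and angular position error) driven by a constant accelerometer bias and a constant gyro drift rate. The state names, step size, and simple Euler integration are assumptions made for illustration; the full error model is given in Chapter 7.

```python
import math

G = 32.2                      # gravity, ft/s^2
R = 20.9e6                    # Earth radius, ft
OMEGA_S = math.sqrt(G / R)    # Schuler frequency, rad/s (period of about 84.4 min)

def propagate_errors(accel_bias, gyro_drift, hours=8.0, dt=1.0):
    """Euler integration of the simplified level-axis error equations:
       d(dV)/dt     = accel_bias - G * phi
       d(phi)/dt    = dV / R + gyro_drift
       d(dtheta)/dt = dV / R
    Returns a history of (time, velocity error ft/s, tilt rad, position error rad)."""
    dV, phi, dtheta = 0.0, 0.0, 0.0
    hist = []
    for k in range(int(hours * 3600 / dt)):
        dV_dot = accel_bias - G * phi
        phi_dot = dV / R + gyro_drift
        dtheta_dot = dV / R
        dV += dV_dot * dt
        phi += phi_dot * dt
        dtheta += dtheta_dot * dt
        hist.append((k * dt, dV, phi, dtheta))
    return hist

# A constant accelerometer bias gives a zero-mean Schuler oscillation in velocity error
# and nonzero-mean oscillations in tilt and position error; a constant gyro drift gives a
# zero-mean tilt oscillation, a biased velocity-error oscillation, and a ramping position error.
hist = propagate_errors(accel_bias=20e-6 * G,
                        gyro_drift=math.radians(0.01) / 3600.0)
```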
Clearly inertial systems tend to "drift off" in terms of the accuracy in the
system computed navigation variables of position, velocity, and orientation over
an extended time of operation. Consequently other sensors are used with inertial
systems to arrest this divergent error behavior for those applications that require
upper bounds on system error with time. It is the usual case that an inertial
system remains as the core element of an integrated aircraft navigation system
because it offers the unique properties of being self-contained on the vehicle
and stealthy in that it produces no emissions and cannot be jammed or deceived because it has no reliance on signals generated by external sources. In addition, the inertial system is normally required to provide a continuous, high-bandwidth reference coordinate system relative to the Earth, which other avionics sensors require.

3.3 AN INTEGRATED STELLAR-INERTIAL SYSTEM

A star-tracker (Chapter 12) serves as a useful adjunct to an inertial system in


that it provides the capability of calibrating the inertial system gyroscopes that
are the primary source of divergence in the accuracy of a free-inertial navigation
system. The star-tracker is used in a natural way with an inertial system in that
the inertial system provides the reference coordinate system with respect to
which the measurements of star position can be referred.
A star-tracker and inertial system work in combination as depicted schemati-
cally in Figure 3.4. Figure 3.4 shows the three basic sources of error in employ-
ing a star-tracker in combination with an inertial system all of which result in
a pointing error to a selected star:

1. Orientation error of the inertial system φ, relative to the reference ellip-


soid that results in a commensurate error in pointing to the star even if
there exists no error in inertial system computed position. This situation
is indicated at the true position in Figure 3.4.

Figure 3.4 Angular position and orientation errors observed with a star-tracker with inertial stabilization. A stellar sensor observes the error ψ, the difference between the position error (δθ) and the orientation error (φ) of the inertial system; if φ = δθ, no error in ψ is observed even though both an orientation error and a position error exist.

2. Error in system computed angular position on the surface of the Earth δθ.
This causes an error in pointing to the star even if the reference coordinate
frame provided by the inertial system is perfectly aligned with respect to
the Earth or inertial space at its actual position. This situation is indicated
at the computed position in Figure 3.4.
3. Error in system time that results in an orientation error of an Earth-refer-
enced frame with respect to the stellar background due to Earth rotation.
This error is usually minimal and is not considered further here.

An interesting fact related to the stellar-inertial system combination is that an error in orientation and an error in system-computed position can occur together such that the resulting pointing error to the star is zero, ψ = φ − δθ = 0.
Figure 3.5 Illustration of the ψ vector observed with a star-tracker (ψ = φ − δθ = ∫ (gyro drift rate) dt). An inertially stabilized stellar sensor is used to measure the effects of gyro drift rate but cannot observe the effect of acceleration measurement errors.

The implication of this situation is obtained by referring to the error block diagram of Figure 3.5, where the error ψ is shown. Since, with an inertial system, any error in the accelerometer measurement results in an equal error in both position δθ and orientation φ, for such an error source ψ will be zero and will not be detected with a star observation. On the other hand, a constant gyro drift rate produces an oscillatory diverging error in the angular position δθ and a bounded oscillatory error in orientation φ, so the resulting error ψ is a ramp

proportional to the gyro drift rate. Since this diverging error is detectable with
the inertially stabilized star-tracker, then clearly the error in position can be
corrected or "reset" when a star shot is taken.
Most stellar-inertial systems have been mechanized with inertial systems
using stabilized gimbal control loops (as described in Chapter 7). This mech-
anization permits a telescope and inertial instrument assembly to be manufac-
tured that has only two low-bandwidth pointing loops that maintain the tele-
scope line of sight to the star being observed, usually for the elevation and
bearing angles to the selected star. High-bandwidth pitch, roll, and heading sta-
bilization control loops employing gyro measurements of small angular dis-
turbances isolate the inertial instruments and tracker assembly from vehicle
angular motion. When such a gimballed mechanization is employed, the sys-
tem gyros are rotated slowly with respect to inertial space to maintain a desired
orientation of the instrument assembly with respect to the Earth, usually that
of the local-level mechanization. Because the gyro sensing axes change slowly
with respect to inertial space, the detected components of the ψ vector, which is the integral of gyro drifts projected onto inertially fixed axes, can be used to calibrate the gyros. This is possible because the detected components of the ψ vector can be correlated with specific gyro-sensing axes.
Since the component of the ψ error vector along the line of sight to an indi-
vidual star cannot be observed, two star shots are generally required to com-
pletely correct the system. Ideally these lines of sight are orthogonal to enhance
observability. It is possible to obtain a complete calibration of the system gyro
drift rate vector, when the following two conditions occur:

1. Two star shots are taken closely together in time.



2. The gyro sensing axes have not changed orientation with respect to iner-
tial space by a great amount since the last point in time when a pair of
stars was observed.

With the advent of inertial instruments suitable for mechanizing strapdown


systems (discussed in Chapter 7) mechanizations of star-trackers with high-
bandwidth stabilization control loops for maintaining the star line of sight axis
have been considered. The dynamics relating inertial system gyro drift rate
equivalent errors to the observable 1/; vector error in this case clearly becomes
highly complex and disallows effective calibration using conventional ad hoc
fixed gain methods of the early 1960s. Fortunately the discovery of Kalman
filtering theory, as elaborated later in this chapter, provides at least a theoreti-
cal approach to this problem, wherein the correlation of observed errors to the
producing causes of these errors is computed in real time to provide a basis
for correction of error sources regardless of the complexity of the time-variant
error dynamics.
In summary, the addition of a star-tracker to an inertial system is synergistic
from the point of view that the inertial system provides the stabilized reference
coordinate system required by the star-tracker and the star-tracker can be used
to "reset" the inertial system error growth and correct the gyro drift rates that
are the principal source of position error divergence in the inertial system. On
the other hand, the star-tracker does not correct for sources of error in the deter-
mination of vehicle acceleration which include the accelerometer measurement
errors and any lack of knowledge about the local gravity vector deflection.¹
Consequently, other navigation sensors are usually considered to correct for
these effects. A stellar inertial system retains all the positive attributes of the
inertial system-self-contained, stealthy, and virtually impossible to deceive.
The use of the tracker can only be denied by adverse weather conditions.

3.4 INTEGRATED DOPPLER-INERTIAL SYSTEMS

In the early days of inertial and stellar-inertial system development, one


approach to decreasing the effect of divergent navigation error was the uti-
lization of a speed sensor such as a Doppler radar (Chapter 10 describes the
theory and performance of Doppler radars). A Doppler radar, similar to a stellar-
tracker, has a natural use with an inertial system in that it requires the refer-
ence coordinate system (attitude and heading) provided by the inertial system
to refer its speed measurements to Earth-fixed coordinates to achieve the nav-
igation function.
An understanding of the value in using a Doppler radar in conjunction with

1
As a rule of thumb, the uncompensated effect of the deflection of the vertical results in an inertial
system position error divergence of about 0.1 nmi in one hour, thereafter being in proportion to
the square root of time.

an inertial system is obtained by referring to the schematic error block diagram


of Figure 3.6. For simplicity in this figure the error in the Doppler radar veloc-
ity measurement is assumed to be only a constant bias, δV_R. The observation
(δV_R − δV) in Figure 3.6 is obtained from the difference between the reference
Doppler radar velocity measurement and the inertial system computed veloc-
ity. This first requires the transformation from the Doppler radar-sensing axes
to the Earth-referenced navigation coordinates using the inertial system deter-
mined orientation. Thus, both the zero and nonzero mean Schuler oscillations
(Figure 3.6), due to, respectively, a step of acceleration measurement error and
gyro drift rate bias, are observable by using the reference Doppler radar mea-
surement. However, the bias in the inertial system velocity is not fully deter-
minable, since the Doppler radar also has a bias error and only the sum of these
two biases is observable.
Since the Schuler error oscillations due to inertial system error sources are
observable with the Doppler radar measurement, it is possible to "damp" these
Schuler error effects by feeding back the observed difference. This was done in
the early days by adding a fixed feedback damping gain K_D back to the inertial
velocity integrator as shown in Figure 3.7. Additionally, a fixed feed-forward
gain K_F was employed in more sophisticated mechanizations to provide two
control parameters (damping factor and natural frequency) for this second-order
mechanization.
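The effect of the feedback damping gain can be illustrated by extending the simplified error loop of Section 3.2. The Python sketch below applies only the feedback gain K_D of Figure 3.7 (the feed-forward gain K_F is omitted); the gain value and the Euler integration are illustrative assumptions.

```python
import math

G = 32.2                     # gravity, ft/s^2
R = 20.9e6                   # Earth radius, ft

def damped_loop(accel_bias, gyro_drift, doppler_bias, K_D=0.002, hours=8.0, dt=1.0):
    """Level-axis Schuler error loop with fixed-gain Doppler damping.
    The observation is (dV_R - dV), the reference velocity error minus the inertial
    velocity error; feeding it back through K_D damps the Schuler oscillation,
    although constant sensor biases still leave steady-state errors."""
    dV, phi, dtheta = 0.0, 0.0, 0.0
    hist = []
    for k in range(int(hours * 3600 / dt)):
        obs = doppler_bias - dV                       # observed velocity difference
        dV += (accel_bias - G * phi + K_D * obs) * dt  # damped velocity integrator
        phi += (dV / R + gyro_drift) * dt              # tilt
        dtheta += (dV / R) * dt                        # angular position error
        hist.append((k * dt, dV, phi, dtheta))
    return hist
```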
For the older configurations of Doppler radars in which the Doppler antenna
was physically stabilized using the gyro measurements, variants of the conven-
tional mechanization described above were possible. For example, if the gyro
drift rate biases significantly exceeded the Doppler radar measurement error
on an equivalent basis, then a feed forward integrator could be added to com-
pensate for the inertial system gyro drift rate error. An error in this type of
compensation would of course be caused by any nonzero Doppler radar bias.
More sophisticated mechanizations for error removal are possible. For exam-
ple, if the coordinate frame of the Doppler sensing axes rotates with respect
to the inertial system sensing axes and the Doppler measurement error rotates
accordingly, then a basis for observation and calibration of these error sources
exists, as the sums of errors described above change value as a function of the
angle of rotation. The calibration of errors that become observable as a func-
tion of measured changes in orientation is obtained with ease through the use
of Kalman filtering techniques, which capitalize on the computed real-time cor-
relation of the modeled system error sources to the errors observed when com-
mensurate variables from different navigation sensors are compared.
An important caveat to the properties of Doppler-inertial systems
is that changes in orientation and acceleration of the vehicle do not make the
errors in the inertial system orientation directly observable. Changes in the error
in both the inertial system velocity and transformed Doppler measurements due
to tilt and azimuth misalignment always cancel. This is because both systems
use the inertial orientation as the reference and are hence unobservable in the
direct comparison of velocity measurements. Tilt can be damped out indirectly,
~
~

GYRO DRIFT RATE


VELOCITY ERROR b STEP

v-.-j I I 1\v I <j> TILT

ACCELERATION
ljJ
BIAS STEP
0V R e-----+j

OBSERVATION I POSITION ERROR


09

ACCELERATION STEP, V • GYRO DRIFT STEP, b

1.3 FPS PER 20 ·10-6 g


[ Js]
1
-- · OV(t) = [R · b][cos(w 5t)-1]
0 V I '• I A ., (wst} I ()V(t) = [Js] sin (wst)
I
-1 I ' I

~ 2.02 FPS PER 0.01o/HR


-2
[R · b]

Figure 3.6 Integration of a reference speed sensor with an inertial system. A reference speed
sensor is used to observe the Schuler oscillations induced by acceleration measurement error
and gyro drift rate and the velocity bias error induced by constant gyro drift rate.
AN AIRSPEED-DAMPED INERTIAL SYSTEM 67

tlA f--.._-¢ Tilt


Acceleration
Measurement Speed Sensor
Error Measurement
Error
f----4---.... lJO
tlVR
Position Error

Figure 3.7 Block diagram of a fixed-gain (KD, Kp) speed sensor damped-inertial sys-
tem mechanization.

since it induces a longer-term observable Schuler oscillation. Inertial azimuth


misalignment can also be corrected indirectly by the normal gyrocompassing
process (as described in Chapter 7) through the Schuler oscillation it induces.
Because a Doppler radar measures the Doppler shift of a radiated signal with
respect to the scattering surface, it has error characteristics that must be considered
in the system design. In overwater operation, the Doppler measurement is affected
by major water currents as well as by the behavior of the surface due to wind.
Variations in sea and terrain surface roughness also cause measurement error
variations due to variations in the scattering of the signal. These effects are discussed
in detail in Section 10.1.4.
In summary, the addition of a speed sensor to an inertial system provides the
capability to remove the Schuler oscillations in the errors in the inertial sys-
tem computed position and velocity. Furthermore the orientation errors in tilt
and azimuth can be removed in much the same manner as in normal ground
alignment (as discussed in Chapter 7). Finally, a Doppler-inertial system is
self-contained but not completely stealthy in that it radiates a signal to obtain
the Doppler measurement. However, for all practical purposes, the Doppler
radar cannot be deceived or jammed, since narrow beams at steep angles are
employed.

3.5 AN AIRSPEED-DAMPED INERTIAL SYSTEM

Another way to obtain some decrease in the divergent navigation errors of an


inertial system is to employ the outputs of the air-data system on the aircraft.
Typically, what this sensor provides is the speed of the aircraft with respect
to the air mass along with the angle of attack and sideslip angle of the vehi-
cle (as discussed in Chapter 8). Consequently using the orientation information
from the inertial system, the velocity of the aircraft with respect to the air mass

can be expressed in the Earth-referenced navigation coordinate system and be


compared with the inertial system computed components of velocity. The major
error source in this comparison is the uncertainty in knowledge of the wind
(air mass motion relative to the Earth), which is generally quite large. Several
knots of error in the wind components along the east and north navigation axes
make it practically impossible to calibrate the inertial system gyro
drift rates. As a rule of thumb, 1 foot per second (fps) of reference velocity
error is equivalent to 0.01 deg/hr of gyro drift rate, which is typical of a navi-
gation-grade inertial navigation system. Consequently, reference velocity mea-
surements accurate to the order of 1 fps are required to obtain useful gyro cal-
ibration of such inertial systems.
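As a rough check of this rule of thumb (an added calculation, using an assumed
Earth radius of 2.09 × 10^7 ft), the velocity-error amplitude associated with a
constant gyro drift rate ε is R·ε (see Figure 3.6); for ε = 0.01 deg/hr
≈ 4.85 × 10^-8 rad/sec, R·ε ≈ (2.09 × 10^7 ft)(4.85 × 10^-8 rad/sec) ≈ 1 fps.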
When a navigation grade inertial system and an air data sensor are inte-
grated, the inertial system can be used to calibrate (measure) the relative wind
to the order of the gyro drift rate. The calibrated vehicle speed measurement can
then be used to reduce the Schuler error oscillations in the velocity components
computed by the inertial system.
Since an airspeed sensor is available on any aircraft, it is worth consider-
ing combining its measurements with an inertial system to obtain the synergy
described above. However, the parameters for a fixed gain mechanization or
the measurement noise parameters for a Kalman filter mechanization need to
be selected judiciously to reflect short-term error fluctuations in the airspeed
data.

3.6 AN INTEGRATED STELLAR-INERTIAL-DOPPLER SYSTEM

When a Doppler radar (or air data sensor) is combined with a stellar-inertial
system, significant benefits are obtained. Stellar measurements are used to cali-
brate the inertial system gyro drift rates, thereby reducing any mean error in the
inertial system computed velocity components. Consequently, when the stellar-
inertial system velocity components are compared with commensurate velocity
components from a reference velocity sensor, the mean errors in the reference
velocity sensor components can be calibrated. Effects of the errors in the mea-
surement of acceleration (accelerometer measurement error and uncertainty in
the gravity vector) can now be more effectively removed using the calibrated
reference velocity sensor. Recall that the errors induced by an error in the vehi-
cle acceleration measurement cause divergent velocity, position, and orientation
errors in a stellar-inertial system that are not observable with the star tracker.
The most attractive attribute of a stellar-inertial system when combined with
a reference velocity sensor is that the resulting errors in all the navigation vari-
ables are bounded. The residual error levels in these variables are of course a
function of the accuracy of the individual sensors employed. Accuracies on the
order of a hundred feet in position, a fraction of a foot per second in velocity
and subarcsecond in orientation are achievable with such a system. An added
attraction of such a system is that it is fully autonomous.

3.7 POSITION UPDATE OF AN INERTIAL SYSTEM

The simplest method of correcting the error drift in position computed by an


inertial system is to reset the position to the coordinates of a point observable by
the pilot on the surface of the Earth over which the aircraft can fly. Automatic
sources of more frequent position measurement include Loran and Omega with
their Earth-fixed radio transmitter stations and the Global Positioning System
(GPS) using radio transmitters on the satellites.
Due to the time-variant error dynamics associated with an inertial system,
principally due to acceleration effects, it has been cumbersome to design fixed
gain mechanizations to feed back the observed position differences between the
inertial system computation and the reference position measurements to cor-
rect inertial system computed velocity and orientation and the sources of these
errors (gyro drift rate and acceleration measurement error). Fortunately, with the
development of recursive filtering theory by R. E. Kalman in 1960
[1] and sufficiently powerful digital computers to effect its mechanization, this
problem has become moot. With Kalman filtering theory, the weighting gains
between the observed position error and the other system errors are based on
error correlation information computed in real time, in what is called the error
covariance matrix. Correction of system errors through the use of such gains
obtains the theoretically optimal utilization of the measurements for minimiza-
tion of errors. (See Section 3.10 for Kalman filter basics.)
The most important integrated inertial and positioning measurement system
in use in 1996 is that which combines GPS with an inertial sensor (Section
3.13). Not only does GPS provide position along with time but also velocity
in geodetic coordinates. Mechanizations based on Kalman filtering theory are
invariably employed to effect a synergistic combination of GPS and the inertial
system (as discussed in Sections 3.13.3 and 3.13.4). An inertial system periodi-
cally corrected with external position measurements will have bounded errors
in all the navigation variables of position, velocity, and orientation. Such a sys-
tem is obviously not autonomous and is subject to intentional and unintentional
denial, which is unattractive in military applications.

3.8 NONINERTIAL GPS MULTISENSOR NAVIGATION SYSTEMS

Although most integrated navigation systems are inertially based, there are
exceptions. In particular, GPS receiver module cards have been developed that
can be embedded in the chassis of other sensors such as a Doppler radar [12] or
Loran [11, 14, 15]. In the case of Loran, the integration was motivated by the
fact that the GPS satellites may not be sufficient in number at all times to pro-
vide positioning data with guaranteed integrity worldwide. A further attraction
of Loran is that it does not suffer from the line-of-sight limitation of GPS. It
employs a low-frequency signal that propagates along the surface of the Earth
(the ground wave), and it is time-synchronized using atomic clock controlled

transmitters. In the case of Doppler radar, the motivation for GPS integration is
to significantly reduce the effect of Doppler associated error sources on navi-
gation performance. These include speed measurement errors over water, scale
factor error and bias, and the errors in a low-accuracy inertial attitude and head-
ing reference system (AHRS). Of course, the Doppler-AHRS system provides
the ability to dead-reckon position during GPS outages.

3.9 FILTERING OF MEASUREMENTS

3.9.1 Simple Sensor, Stationary Vehicle


The simplest example of measurement filtering is the case where a position sensor
provides a sequence of measurements

x_i = x_a + \epsilon_{d_i} + \xi_i \qquad (3.1)

where
x_a is the actual position of the stationary vehicle
ε_di is a deterministic error in the ith measurement
ξ_i is a random error in the ith measurement

The deterministic error can be caused, for example, by a known sensitivity to


temperature and determined a priori by calibration over the temperature range
of operation. The random error is usually due to unknown causes or causes not
worth the trouble to investigate. The mean value of the random error ξ is zero;
otherwise, it would be compensated. For ease of analysis, we assume that these
random errors have a fixed standard deviation σ_ξ over time and are independent
from one measurement to the next. The error in each position measurement x_i
is seen to be the random error

\delta x_i = (x_i - \epsilon_{d_i}) - x_a = \xi_i \qquad (3.2)

The best estimate of position in this case is the average of the measurements

\hat{x}_N = \frac{1}{N} \sum_{i=1}^{N} (x_i - \epsilon_{d_i}) \qquad (3.3)

where the standard deviation of the error in the estimated position

\sigma_{\delta\hat{x}_N} = \left\{ E\left[ (\hat{x}_N - x_a)^2 \right] \right\}^{1/2} \qquad (3.4)

is the standard deviation of the random error σ_ξ divided by the square root of

the number of measurements

\sigma_{\delta\hat{x}_N} = \frac{\sigma_\xi}{\sqrt{N}} \qquad (3.5)

Hence the sequence of measurements can be processed to obtain an ever-


improving estimate of the vehicle's position. To avoid storing all past mea-
surements, the Nth estimate can be written in the recursive form

\hat{x}_N = \hat{x}_{N-1} + \frac{1}{N} \left[ (x_N - \epsilon_{d_N}) - \hat{x}_{N-1} \right] \qquad (3.6)

where only the (N − 1)th estimate and the Nth measurement are required.
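The recursive form of Equation 3.6 is easily programmed. The following sketch (an
illustration only; the true position, error statistics, and variable names are assumed
values, and the deterministic error is taken as already compensated) accumulates the
running average without storing past measurements:

import random

x_true = 1000.0      # actual position of the stationary vehicle (unknown to the filter)
sigma_xi = 5.0       # standard deviation of the random measurement error
x_hat = 0.0          # running estimate; overwritten by the first measurement when N = 1

for N in range(1, 101):
    x_meas = x_true + random.gauss(0.0, sigma_xi)   # ith measurement, deterministic error removed
    x_hat = x_hat + (x_meas - x_hat) / N            # recursive form of Equation 3.6

# After 100 measurements the estimation error standard deviation is about
# sigma_xi / sqrt(100) = 0.5, per Equation 3.5.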

3.9.2 Multiple Sensors, Stationary Vehicle


A more complex algorithm is employed if the position of a stationary vehicle is
measured using sequences from multiple position sensors. In this case, assume
that the optimal estimate from the Kth sensor using N measurements is denoted
x̂_NK and the standard deviation of its zero-mean error is σ_δx̂_NK. For the case of
two position sensors, the optimal minimum-error estimate of the position of the
vehicle after the Nth measurement from each of the two independent sensors
is

\hat{x}_N = k_1\,\hat{x}_{N1} + k_2\,\hat{x}_{N2} \qquad (3.7)

where the weighting factors are

k_1 = \frac{\sigma^2_{\delta\hat{x}_{N2}}}{\sigma^2_{\delta\hat{x}_{N1}} + \sigma^2_{\delta\hat{x}_{N2}}}, \qquad
k_2 = \frac{\sigma^2_{\delta\hat{x}_{N1}}}{\sigma^2_{\delta\hat{x}_{N1}} + \sigma^2_{\delta\hat{x}_{N2}}} \qquad (3.8)

The minimum error in this least-squares estimate is the weighted sum of the
independent errors δx̂_1N and δx̂_2N:

\delta\hat{x}_N = k_1\,\delta\hat{x}_{1N} + k_2\,\delta\hat{x}_{2N} \qquad (3.9)

which has variance

\sigma^2_{\delta\hat{x}_N} = \frac{\sigma^2_{\delta\hat{x}_{1N}}\,\sigma^2_{\delta\hat{x}_{2N}}}{\sigma^2_{\delta\hat{x}_{1N}} + \sigma^2_{\delta\hat{x}_{2N}}} \qquad (3.10)

Alternately, this relation can be written

\frac{1}{\sigma^2_{\delta\hat{x}_N}} = \frac{1}{\sigma^2_{\delta\hat{x}_{1N}}} + \frac{1}{\sigma^2_{\delta\hat{x}_{2N}}} \qquad (3.11)

where it is evident that the variance of the error in the combined estimate δx̂_N
is less than the error variance obtained from either of the two independent position
sensors.
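A brief numerical illustration of Equations 3.7 through 3.11 (the sensor values and
accuracies below are assumed, not taken from the text) shows how two independent
estimates are blended and how the combined variance shrinks:

import math

def blend(x1, sigma1, x2, sigma2):
    # Weighting factors of Equation 3.8
    k1 = sigma2**2 / (sigma1**2 + sigma2**2)
    k2 = sigma1**2 / (sigma1**2 + sigma2**2)
    x = k1 * x1 + k2 * x2                                     # Equation 3.7
    var = (sigma1**2 * sigma2**2) / (sigma1**2 + sigma2**2)   # Equation 3.10
    return x, math.sqrt(var)

x, sigma = blend(100.0, 10.0, 112.0, 20.0)
# x = 102.4 and sigma ~ 8.9, smaller than either 10 or 20, as Equation 3.11 indicates.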

3.9.3 Multiple Sensors, Moving Vehicle


The least-squares technique described above can also be applied to obtain an
instantaneous estimate of position from independent sensors for a moving vehi-
cle. In this case, simply assume that the position measurement of the first sensor
at the Nth instant of time is XJN and that of the second sensor is x2N. Equa-
tion 3.7 is then the formula for a minimal error estimate of position using both
sensors.
Unfortunately, this approach is far from optimum. It does not take advantage
of the capability of calibrating the errors of one sensor using measurements from
another sensor. For example, in the discussion of stellar-inertial and Doppler-
inertial navigation systems, the calibration in real time of gyro and Doppler bias
errors was shown to be possible. Improved accuracy is obtained in the sensor
outputs when such calibration can take place, which means that the errors δx̂_1N
and δx̂_2N in the estimates are smaller than they are in Equation 3.9 above.
Until the early 1960s ad hoc mechanizations were employed to combine the
measurements from independent navigation sensors to obtain an improved over-
all solution. The development of Kalman filtering theory [1] provided a rigor-
ous approach to obtain truly optimal estimates for systems whose errors are
described by linear time-variant differential equations. Since the late 1960s,
most integrated navigation systems have been mechanized based
on this fundamentally superior method.

3.10 KALMAN FILTER BASICS

The Kalman filter requires that all error states are modelable as zero mean noise
processes with known variances, power spectral densities, and time correlation
parameters. Thus, the various error quantities to be estimated and the associ-

ated measurement noises are all random processes whose correlation structure is
assumed to be known. The Kalman filter then obtains estimates of the states of
these stochastic processes, which are described by a linear or linearized mathe-
matical model. It accomplishes this goal by capitalizing on the known correla-
tion structure of the various processes involved and the measurements of linear
combinations of the error states. To do this, both the error propagation in time
and the measurement processes are expressed in vector form. This provides
a convenient way with linear matrix algebra to keep track of relatively com-
plex relationships among all the quantities of interest. This is one of the main
features that distinguishes a Kalman filter from most digital signal processing
applications. The navigation system integration problem is multiple-input and
multiple-output in nature, and matrix algebra is essential in keeping track of
the relationships among the variables.
In the multisensor navigation system application, the behavior of the sen-
sor error states is described by linear differential or equivalent finite dif-
ference equations. Comparisons of measurements between navigation sensors
are described by linear combinations of these error states. The filter accom-
plishes the task of error state estimation by a more complex recursive procedure,
rather than forming a simple weighted sum of the individual measurements as
described above.
Under the assumption of Gaussian noise distributions, the Kalman filter min-
imizes the mean square error in its estimates of the modeled state variables
[5]. The Gaussian assumption is usually reasonable in navigation
applications because the noise effects that arise in the measurements are often
due to a summation of many smaller random contributions. Thus, by the cen-
tral limit theorem of statistics, there is then a tendency toward the Gaussian
distribution regardless of the distribution of the individual contributions. Many
multisensor navigation systems have successfully implemented Kalman filters
while implicitly making this assumption.

3.10.1 The Process and Measurement Models


In Kalman filtering, the equations describing the process to be estimated and the
measurement relationships must first be formulated. This is sometimes referred
to as the modeling part of the problem. These equations will now be described.
The first assumption is that the time propagation of the state vector to be
estimated can be described by the linear finite difference equation

x_{k+1} = \Phi_k\,x_k + \xi_k \qquad (3.12)

where
x_k is the n × 1 process state vector at time t_k, consisting of errors in position,
velocity, and attitude, and sensor errors

ξ_k is the n × 1 process white-noise vector

Φ_k is the n × n state vector transition matrix from time instant k
to time instant k + 1

The process noise vector ξ_k is assumed to be a white-noise vector sequence
with known covariance matrix Q_k:

E[\xi_k\,\xi_k^T] = Q_k \qquad (3.13)

where E is the expectation operator, ξ_k is a column vector, and ξ_k^T is a row vector.


The comparison of commensurate measurements from the navigation sensors
must satisfy the following linear relationship:

z_k = H_k\,x_k + \eta_k \qquad (3.14)

where
z_k is the m × 1 measurement vector at time t_k
η_k is an m × 1 measurement white-noise vector at time t_k
H_k is the m × n matrix giving the ideal relationship between z_k and x_k
when no noise is present

The measurement noise covariance matrix Rk is assumed to be known:

E[\eta_k\,\eta_k^T] = R_k \qquad (3.15)

Note that there are now four known matrices that describe the process and mea-
surement models: Φ_k, Q_k, H_k, and R_k. Thus, the design of a Kalman filter
requires considerable a priori knowledge about the dynamics and statistics of
the various processes to be modeled. The state transition matrix Φ_k describes
how the state vector x_k would propagate from one step to the next in the absence
of a driving function. The Q_k parameter tells us something about the noise in
the x_k process. If the elements of Q_k are large, this means that a large amount of
randomness is inserted into the process with each step. The H_k matrix describes
the linear relationship between the sensor measurements z_k and the error states
x_k to be estimated. Note that relatively complicated relationships can be accom-
modated as long as they are known and linear. Finally, R_k describes the mean-
square measurement noise errors. Generally, large values in the R_k matrices
mean poor measurements. Note that all four of the key matrices are permitted
to vary with time (i.e., with the index k).
Perhaps the most challenging aspect of applying Kalman filtering in multi-
sensor navigation applications is that of establishing the mathematical equations
that describe the physical situation at hand and casting the equations in the form
dictated by Equations 3.12 and 3.14. Fortunately, this can be done with a rea-

sonable degree of accuracy in a wide variety of integrated navigation system


applications. However, one should always remember that the Kalman filter in
its basic form is a model-dependent filter and not adaptive. This means that, if
the model does not fit the physical situation under consideration, the filter may
yield poor results.

3.1 0.2 The Error Covariance Matrix


The error covariance matrix defines the probable error x̃ in the filter estimate x̂
of the (error) state vector x. Its elements vary with time. The error covariance
matrix must be computed in real time as part of the calculation of the optimum
gain to be applied to measurements to update the estimates of the error state
vector. Formally, the error covariance matrix is defined as

P = E[\tilde{x}\,\tilde{x}^T] = E[(x - \hat{x})(x - \hat{x})^T] \qquad (3.16)

The diagonal elements of the error covariance matrix are the variances of the
error in the filter estimate of the navigation system error state vector. The off-
diagonal elements of the P matrix are covariances between different error states
in the vector, and they contain important information as to the degree of corre-
lation of one error state with another. The use of such correlation information
in the gain computation is what distinguishes the Kalman filter from simpler
mechanizations. In Kalman filtering, a measurement is used not only to update
estimates of navigation error variables directly involved in the observation (e.g.,
position) but also to update estimates of error variables not directly involved
(velocity, sensor errors, etc.).

3.10.3 The Recursive Filter


The derivation of the Kalman recursive filter equations can be found in ref-
erences [5, 6, 7]. A schematic of equation flow is summarized in Figure 3.8.
In words, the steps are as follows, beginning with the first measurement z_0 at
k = 0. Note that an initial estimate x̂_0^- and its associated error covariance matrix
P_0^- must be assumed to start the procedure.

1. Compute the Kalman gain K_0.
2. Update the a priori estimate x̂_0^- to obtain the a posteriori estimate x̂_0. This
   step assimilates the measurement z_0.
3. Update the a priori error covariance matrix P_0^-, and obtain the error
   covariance matrix P_0 associated with the updated estimate x̂_0.
4. Project both x̂_0 and P_0 ahead to the next step where a measurement is avail-
   able. The resulting x̂_1^- and P_1^- are the a priori state estimate and its error
   covariance matrix at k = 1 just prior to assimilating the measurement z_1.

Steps 1 through 4 are then repeated for k = 1, 2, and so on.
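The four steps map almost directly into code. The sketch below (a minimal
illustration using NumPy; the scalar random-walk model and all numerical values are
assumptions chosen only to make it runnable) follows the equation flow of Figure 3.8:

import numpy as np

# Assumed toy model: a scalar random-walk state observed directly.
phi = np.array([[1.0]])     # state transition matrix
Q   = np.array([[0.01]])    # process noise covariance
H   = np.array([[1.0]])     # measurement matrix
R   = np.array([[4.0]])     # measurement noise covariance

x = np.array([[0.0]])       # a priori estimate
P = np.array([[100.0]])     # a priori error covariance

for z in [1.2, 0.9, 1.4, 1.1]:                        # measurement sequence z0, z1, ...
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # 1. compute the Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)             # 2. update the estimate with the measurement
    P = (np.eye(1) - K @ H) @ P                       # 3. error covariance of the updated estimate
    x = phi @ x                                       # 4. project the estimate ahead
    P = phi @ P @ phi.T + Q                           #    and project the covariance ahead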


[Figure 3.8 contents:
Initialize the filter with the a priori estimate x̂_0^- and its corresponding error covariance matrix P_0^-.
Input measurement sequence: z_0, z_1, ..., z_k.
Compute the Kalman gain: K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}.
Update the estimate with the measurement z_k: x̂_k = x̂_k^- + K_k (z_k - H_k x̂_k^-).
Compute the error covariance matrix for the updated estimate; form for the optimal gain:
P_k = [I - K_k H_k] P_k^-; general form for an arbitrary gain:
P_k = [I - K_k H_k] P_k^- [I - K_k H_k]^T + K_k R_k K_k^T.
Propagate the estimate and the error covariance matrix: x̂_{k+1}^- = Φ_k x̂_k, P_{k+1}^- = Φ_k P_k Φ_k^T + Q_k.
Output sequence of estimates: x̂_0, x̂_1, ..., x̂_k.]
Figure 3.8 Schematic flow diagram of the discrete Kalman filter mechanization equations.

It can be seen that programming the Kalman filter equations is straightfor-


ward; it only requires a few lines of code in matrix form. Once the initial con-
ditions and the four key matrices are defined, the resulting gain sequence is
determined; that is, the sequence of gains that determine how the past mea-
surements are weighted does not depend on the measurement sequences but
rather only on the assumed model parameters and the initial error covariance
matrix. The gain matrix computation at each step depends on the error covari-
ance matrix, so the error covariance matrix must be propagated in the recursive
process as a necessary adjunct for the gain matrix calculation. The error covari-
ance matrix is useful in real time because it provides an assessment of the qual-
ity of the total navigation solution. It is of interest in off-line analysis because
it can reflect the mean-square estimation error if the error model on which the
Kalman filter is based is accurate. In this case, it is useful for performance
analysis.

3.11 OPEN-LOOP KALMAN FILTER MECHANIZATION

The most common application of Kalman filtering in a suite of avionics nav-


igation equipment is that of integrating the navigation data from an inertial
navigation system (INS) with the navigation data from other sensors. Figure
3.9 shows a block diagram of one method for performing such an integration.
The configuration shown is called the open-loop configuration [2] because the
corrections to all the navigation sensors are made to the outputs and not fed
back to internally correct these sensors. By this mechanization, the INS is used
to provide the navigation variables for use in the Kalman filter transition and
observation matrix equations. The filter acts on the observable deviations of the
corrected inertial system outputs from the corrected aiding sensor measurements
to effect system corrections. The aiding sensors are depicted schematically in
one block in Figure 3.9.
This system integration scheme has been used successfully in a wide vari-
ety of applications since the mid-1960s [3, 4]. There are a number of good
reasons for selecting a method that employs the corrected inertial navigation
variables as the basis for the filter equations and the navigation system outputs.
First of all, the method allows utilization of measurements from a wide variety
of aiding sensors.
may vary during a mission and these sensors typically provide measurements
only at discrete points in time. Another reason for this integration method has
to do with the restrictions placed on the Kalman filter model. Recall that the
process dynamics and measurement relationship must both be linear. In gen-
eral, the whole value state variables (total position, velocity, etc.) do not satisfy
this requirement. The range measurement from electronic distance measuring
equipment is proportional to the square root of the sum of the squares of carte-
sian components, which is not a linear relationship. Therefore the problem is to
derive a linear representation of the situation to obtain correct application of the

[Figure 3.9: the inertial navigation system produces uncorrected inertial position,
velocity, and platform orientation, which are corrected by the propagated inertial
system error estimates to form the corrected inertial system outputs and the
prediction of the aiding sensor measurements. The aiding sensors (such as GPS,
Doppler, or Loran) produce uncorrected measurements that are corrected by the
propagated aiding sensor error estimates; the difference between the predicted and
the corrected aiding sensor measurements is the observable measurement error
processed by the Kalman filter, whose error estimates correct the outputs only.]
Figure 3.9 Open-loop Kalman filter architecture.

Kalman filter theory. The details as to exactly how this linearization is gener-
ally done are given in references [2, 6]. The basic idea is to choose a reference
trajectory in state space that is close to the true trajectory and then write equa-
tions for the perturbation variables that represent the difference between the true
and reference trajectory for use in the filter equations. The equations in terms
of the perturbation variables are linear when higher-order terms are neglected.
In the block diagram of Figure 3.9, the INS-corrected output is used for the
reference trajectory as a matter of convenience, since it provides all the basic
navigation quantities of vehicle position, velocity, acceleration, and orientation
in a continuous manner. None of the other navigation sensors, when consid-
ered individually, provide the same complement of information in a continuous
manner. Thus, the INS is the logical choice for the reference, even though its
accuracy may be less than that of some of the other sensors at the discrete instants
in time when their measurements are available.
The final reason for choosing the INS-based integration methodology has
to do with maintaining high dynamic response in the position, velocity, and
attitude state variables available from the inertial system without filtering. The
usual price associated with filtering is time delay or sluggish response. This

lag is undesirable in most real-time navigation applications because the nav-


igation system should follow the dynamics of the aircraft faithfully regard-
less of the rapidity of the change. The mechanization block diagram shown
in Figure 3.9 accomplishes this and at the same time provides filtering of the
measurement noise. At first glance, this may seem to be a contradiction, but
note that the corrections produced by the filter are based only on the errors in
the inertial system outputs and the aiding sensors. This approach is sometimes
called distortionless filtering or dynamically exact system integration. With this
approach, the total dynamical quantities of interest (i.e., position, velocity, and
orientation) do not need to be modeled as random processes, and the filter mech-
anization equations can be based on a linear model in terms of the navigation
variable errors.
The implementation in Figure 3.9 is an example of the so-called extended
Kalman filter, wherein the best estimates of the state vector are used as reference
values for linearization at each filter step, rather than the true values of the
trajectory which are unknown. Note that the block diagram shown in Figure
3.9 is conceptual and not literal. It is tacitly assumed that the appropriate INS
outputs are converted to the aiding source frame of reference before performing
the differencing operation shown. Commensurate sensor measurements must be
compared in the same reference frame.
The Kalman filter might reside within the INS computer in some applica-
tions where a tightly integrated mechanization is desired. In other cases, the
Kalman filter could reside within the computer associated with an aiding sen-
sor, or it might reside in a separate mission computer. After all, the filter is just
a digital computer program that accepts certain inputs to yield another set of
outputs. In some system integration applications it is not possible to perform
the filtering operation in a single centralized filter as shown. Equipment con-
straints may dictate that the outputs of filters in individual sensors be merged
in a subsequent filter. This leads to a cascading of filters that is a theoretically
more complicated situation discussed further in Section 3.15. The cascading of
Kalman filters generally leads to some degree of suboptimality. However, sys-
tem engineers sometimes have to live with this situation because of the given
equipment configuration.
The estimated inertial and aiding sensor errors could be fed back to inter-
nally correct the INS and the aiding sensors as opposed to just correcting their
respective outputs. This leads to the closed-loop Kalman filter mechanization
which is discussed in Section 3.12.

3.12 CLOSED-LOOP KALMAN FILTER MECHANIZATION

In the mechanization of the last section, the INS is not corrected internally
throughout the time span of the mission. Clearly, if the internally computed
inertial system navigation variables diverge too far from their true values, the
linearization assumption becomes suspect, and the associated modeling inaccu-

racy can lead to difficulties. In the early days of Kalman filtering, system engi-
neers discovered that such divergence could be avoided by feeding the filter
error estimates back to correct internally the inertial system at each time point
where an aiding sensor measurement was available. This will then reduce the
difference between the corrected real-time computed navigation variables and
the true values, provided that the Kalman filter is producing good error esti-
mates. When the filter error estimates are fed back in this manner, the mecha-
nization is called the closed-loop Kalman filter mechanization [2]. This method
has been used extensively and successfully in a variety of actual navigation
applications. It is especially important to use the closed-loop Kalman mecha-
nization in applications where the mission length is relatively long and the error
model on which the filter equations are based is a simplification of the actual
linear error model of the system. Such simplifications are made to reduce the
computational burden on the real-time computer.
Figure 3.10 illustrates schematically the mechanization of the integrated INS
system for the closed-loop mechanization. This diagram and that of Figure 3.9
are both conceptual in that the INS correction that takes place is usually just a
matter of correcting certain numerical values in the inertial system computer. If
the error modeling assumptions are valid, the performance of the open-loop and
closed-loop Kalman filter mechanizations become essentially equivalent. Both
mechanizations of the Kalman filter have been used extensively, so the deci-
sion as to which should be used will depend on the particular situation at hand.
The closed-loop mechanization is generally used except when it is desirable to
preserve the pure INS output undisturbed by internal corrections. In this case,

[Figure 3.10: the inertial navigation system provides the corrected inertial system
outputs and the prediction of the aiding sensor measurements; the aiding sensors
provide the corrected aiding sensor measurements. Their difference, the observable
measurement error, is processed by the Kalman filter, whose inertial system
corrections and aiding sensor corrections are fed back internally to the INS and the
aiding sensors.]
Figure 3.10 Closed-loop Kalman filter architecture.

the open-loop mechanization is employed or a separate inertial navigation solu-


tion mechanized in addition to the closed-loop corrected solution. When the
filter is operating closed-loop, the inertial computer maintains the optimal esti-
mates of the navigation variables in terms of the corrected position, velocity, and
so on, rather than in terms of the uncorrected navigation variables from which
the estimates of their errors are subtracted. The closed-loop configuration is
more robust in that it is less sensitive to parameter variations or error-modeling
simplifications.

3.13 GPS-INS MECHANIZATION

One of the most important multisensor navigation systems is that which inte-
grates an inertial navigation system (INS) with a Global Positioning System
(GPS) receiver. GPS (Section 5.5) is a very accurate, worldwide satellite nav-
igation system using ranging that will be operating into the twenty-first cen-
tury. The GPS-INS integration is nearly always done using a Kalman filter. A
simplified Kalman filter error model for accomplishing such an integration is
presented in Section 3.13.3.

3.13.1 Linearizing a Nonlinear Range Measurement


Recall that, to apply Kalman filter theory, the observation that is processed by
the filter to obtain error estimate updates must have a linear relationship to the
system error state vector that is being estimated. A range measurement does not
satisfy this criterion. This is illustrated in Figure 3.11 a with a simple two-di-
mensional sketch where a range measurement from the vehicle to a ground sta-
tion is made using distance measuring equipment (DME). Slant range is approx-
imated as being equal to horizontal range in this example. The aircraft is at (x,
y), and the DME station is at a known location (x_1, y_1). A perfect range mea-

[Figure 3.11: (a) true geometry, with the DME station at (x_1, y_1), the aircraft at
(x, y), range ρ, and line-of-sight angle θ; (b) geometry with small perturbations
Δx, Δy about (x, y), with the reference position (x*, y*).]
Figure 3.11 Geometry for linearized range measurement error.

surement ρ would then be related to the unknown x and y coordinates through
the equation

\rho = \sqrt{(x - x_1)^2 + (y - y_1)^2} \qquad (3.17)

Clearly, ρ is a nonlinear function of x and y, which are to be estimated, and it


violates the basic assumptions of the linear Kalman filter model.
Suppose that the aircraft knows its approximate position, denoted as (x*, y*),
which might come from an INS. Then for a small perturbation of these values
by (Δx, Δy) from the true aircraft position (x, y), the perturbation in the range
is

\Delta\rho = \rho^* - \rho = \Delta x\,\cos\theta + \Delta y\,\sin\theta \qquad (3.18)

From Figure 3.11b, ρ is the measured range within some measurement error
and ρ* is the predicted range based on the aircraft's assumed reference position,
which is slightly incorrect. Note that there is a linear relationship between Δρ
and the perturbations Δx and Δy. Thus, using Δρ as the observation (difference
of the commensurate measurements of range from the DME and estimated range
from the INS) processed by the Kalman filter and choosing the incremental
quantities Δx and Δy as error state variables rather than the total position states
x and y, the required linear measurement relationship for Kalman filter theory
is satisfied.
Referring again to Figures 3.9 and 3.10, when the corrected INS position
computation is used to estimate the range to the DME station, the differencing
operation produces Δρ using the corrected range measurement from the aid-
ing sensor with the addition of some measurement noise. This example accom-
plishes exactly what small-perturbation theory says is necessary to linearize the
measurement process. Note that the coefficients of Δx and Δy are just the direction
cosines between the line of sight to the DME station and the respective x- and
y-axes. The GPS measurement situation is similar in this regard, and the direc-
tion cosines also appear in the modeling details discussed below.
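A numerical sketch of this linearization (station location, reference position, and
measured range are hypothetical values) shows how the observation Δρ and the
direction-cosine coefficients would be formed:

import math

x1, y1 = 0.0, 0.0                 # DME station location
x_ref, y_ref = 30000.0, 40000.0   # reference (corrected INS) aircraft position
rho_meas = 49970.0                # range measured by the DME, including its error

rho_ref = math.hypot(x_ref - x1, y_ref - y1)   # predicted range from the reference position
cos_theta = (x_ref - x1) / rho_ref             # direction cosines of the line of sight
sin_theta = (y_ref - y1) / rho_ref

delta_rho = rho_ref - rho_meas     # observation of Equation 3.18, ~ dx*cos + dy*sin + noise
H_row = [cos_theta, sin_theta]     # coefficients of the position error states in the H matrix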

3.13.2 GPS Clock Error Model


A GPS receiver obtains a measure of the difference between the time of a satel-
lite transmission and the time of its receipt at the receiver. The time difference
is interpreted in terms of distance via a presumed value for the speed of light with
various corrections for propagation through the Earth's atmosphere. The local
clock in the GPS receiver is usually a crystal clock whose stability is much
poorer than that of the atomic clocks in the satellites. Thus, the drift of the local clock
relative to GPS time maintained by the satellite clocks must be modeled in the
Kalman filter. The time offset between GPS time and the local receiver clock

[Figure 3.12: white noise u_d drives an integrator whose output is the clock drift d;
the drift plus white noise u_b drives a second integrator whose output is the clock
bias b.]


Figure 3.12 GPS clock error model.

is called clock bias. This bias in seconds can be scaled by the speed of light
to obtain a range bias in meters. Similarly, the rate of change of clock bias is
usually given in meters per second.
It is difficult to model crystal clock errors with a first-order state model; see
reference [8] or Chapter 10 of reference [6]. The two-state model shown in Figure
3.12, used in most Kalman filter implementations in GPS receivers, allows for
the estimation of both clock bias and drift. Numerical values for the spectral
densities of the white-noise forcing functions u_d and u_b depend on the quality
of the crystal clock. The references show how to convert from the usual clock
specifications to the Kalman filter parameters. This two-state model is embedded
in the larger 11-state error model for the whole GPS/INS system. (For additional
discussion of clock characteristics, see Section 5.3.2.)
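For the two-state model of Figure 3.12, the discrete transition and process noise
matrices have a simple closed form. The sketch below assumes constant spectral
densities S_b and S_d over the update interval; the numerical values are placeholders,
since real values must come from the clock specification as noted above:

import numpy as np

dt  = 1.0    # Kalman filter update interval, sec
S_b = 1.0    # placeholder spectral density of u_b, in units of range
S_d = 0.1    # placeholder spectral density of u_d, in units of range rate

# Clock bias b is the integral of drift d plus white noise u_b; drift d integrates white noise u_d.
phi_clock = np.array([[1.0, dt],
                      [0.0, 1.0]])
Q_clock = np.array([[S_b*dt + S_d*dt**3/3.0, S_d*dt**2/2.0],
                    [S_d*dt**2/2.0,          S_d*dt      ]])

In the full model described below, these two rows and columns occupy the clock
error states x_10 and x_11.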

3.13.3 11-State GPS-INS Linear Error Model


The 11-state model presented here is a minimal error model of a generic GPS
receiver integrated with any INS whose accelerometer force measurements are
resolved into a local-level coordinate frame. The error equations presented are
referred to this coordinate system. The 11-state model accounts only for plat-
form misalignment (or equivalently the error in the knowledge of the orienta-
tion of the inertial instrument cluster), velocity and position errors, and GPS
receiver clock error. It does not allow for instrument bias errors for either the
accelerometers or gyros. The instrument errors are grossly simplified and are
modeled only as white noise forcing functions that drive the INS error dynam-
ics. This simple model was employed in many systems in the 1990s. There
are always two parts to any Kalman filter model: the process model and the
measurement model.
The error state variables in the random process model are the INS errors
plus those errors in the reference source that have nontrivial time-correlation
structure and need to be estimated. The block diagrams in Figures 3.13, 3.14,
and 3.15 show how small errors propagate along each of the three navigation
coordinate axes of the INS. The INS coordinate frame is local level and can be
viewed as the x-axis pointing east, y north, and z up. The continuous differential
equations for the INS errors may be obtained directly from the block diagrams.
These equations are as follows:
[Figure 3.13: x-axis error model, with azimuth gyro white noise ε_z, gyro white noise
ε_y, and acceleration white noise δA_x as inputs, producing tilt φ_y, velocity error, and
position error. a_y, a_z are the vehicle accelerations along the y and z axes; ω_x is the
platform angular rate about the x axis; R is the radius of the Earth; g is the intensity
of the gravity vector.]
Figure 3.13 Block diagram of a simplified inertial system x-axis error model for the 11-state
GPS-INS Kalman filter.
[Figure 3.14: y-axis error model, with azimuth misalignment φ_z, gyro white noise
ε_x, and acceleration white noise δA_y as inputs, producing tilt φ_x, velocity error, and
position error. a_x, a_z are the vehicle accelerations along the x and z axes; ω_y is the
platform angular rate about the y axis; R is the radius of the Earth; g is the intensity
of the gravity vector.]
Figure 3.14 Block diagram of a simplified inertial system y-axis error model for the 11-state
GPS-INS Kalman filter.


[Figure 3.15: z-axis error model, with acceleration white noise δA_z as input,
producing vertical velocity error and vertical position error. R is the radius of the
Earth; g is the intensity of the gravity vector.]
Figure 3.15 Block diagram of a simplified inertial system z-axis error model for the
11-state GPS-INS Kalman filter.

For east channel errors,

\Delta\ddot{x} = \delta A_x - (A_z + g)\,\phi_y + A_y\,\phi_z
\dot{\phi}_y = \left(\frac{1}{R}\right)\Delta\dot{x} + \omega_x\,\phi_z + \epsilon_y \qquad (3.19)

For north channel errors,

\Delta\ddot{y} = \delta A_y + (A_z + g)\,\phi_x - A_x\,\phi_z

\dot{\phi}_x = -\left(\frac{1}{R}\right)\Delta\dot{y} - \omega_y\,\phi_z + \epsilon_x \qquad (3.20)

For vertical channel errors,

A ••
'-lZ = o''A ,- + ( -2g ) A~ (3.21)
R '-1.(

For platform azimuth misalignment,

\dot{\phi}_z = \epsilon_z \qquad (3.22)

The accelerometer measurement errors δA_x, δA_y, and δA_z and the gyro drift
rates ε_x, ε_y, and ε_z are mutually independent white-noise processes with known
spectral densities. Finally, the differential equations for the clock errors are from
the block diagram of Figure 3.12.
For GPS clock errors,

\dot{b} = d + u_b
\dot{d} = u_d \qquad (3.23)

where u_b and u_d are independent white-noise processes.


The next step in the process modeling is to rewrite Equations 3.19 through
3.23 in state-space form. The 11-state variables are defined as follows:

x_1 is the east position error (meters), Δx
x_2 is the east velocity error (meters/sec), Δẋ
x_3 is the platform tilt about the y-axis (rad), φ_y
x_4 is the north position error (meters), Δy
x_5 is the north velocity error (meters/sec), Δẏ
x_6 is the platform tilt about the x-axis (rad), φ_x
x_7 is the vertical position error (meters), Δz
x_8 is the vertical velocity error (meters/sec), Δż
x_9 is the platform azimuth error (rad), φ_z
x_10 is the user clock error in units of range (meters), b
x_11 is the user clock drift in units of range rate (meters/sec), d

Once the state variables are defined, it is a routine matter to put the differential
equations into state-space form:

\dot{x} = F\,x + \xi \qquad (3.24)

where F is the system state vector dynamics matrix describing the dynamic
coupling between the (error) states. In expanded form Equation 3.24 is

0 0 0 0 0 0 0 0 0 0
0 0 -(g + Az) 0 0 0 0 0 Ay 0 0

X] 0 0 0 0 0 0 0 Wx 0 0 X!
R
X2 X2
X3 0 0 0 0 0 0 0 0 0 0 X3
X4 0 0 0 0 0 (g + Az) 0 0 -Ax 0 0 X4
xs xs
I
X6 0 0 0 0 --- 0 0 0 -Wy 0 0 X6
X7 R X7
Xg 0 0 0 0 0 0 0 0 0 0 Xg
Xg Xg
XJO
XJj
0 0 0 0 0 0 2( !) 0 0 0 0 XJ()
Xjj
0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0
F

0
oAx
Ey
0
oAy
+ Ex (3.25)
0
OAz
Ez
Uh
UJ

The final step in determining the process model is to specify the Φ_k and Q_k
matrices. If the update interval Δt is relatively small, Φ_k can be approximated
with just the first two terms of the Taylor series expansion of e^{FΔt}:

\Phi_k \cong I + F\,\Delta t \qquad (3.26)

This approximation must be used with care. For example, if the aircraft expe-
riences high dynamics, then Δt will have to be quite small or else additional
terms must be included [6].
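As an illustration, the F matrix of Equation 3.25 can be filled in element by element
and the transition matrix formed from the two-term approximation of Equation 3.26.
The sketch below is generic; the flight-condition values (g, R, accelerations, and
platform angular rates) are placeholders that a real mechanization would supply at
each update:

import numpy as np

def make_F(g, R, Ax, Ay, Az, wx, wy):
    # 11 x 11 dynamics matrix of Equation 3.25; rows and columns follow states x1...x11.
    F = np.zeros((11, 11))
    F[0, 1] = 1.0                          # east position error driven by east velocity error
    F[1, 2] = -(g + Az); F[1, 8] = Ay      # east velocity error (tilt and azimuth coupling)
    F[2, 1] = 1.0 / R;   F[2, 8] = wx      # tilt about the y-axis
    F[3, 4] = 1.0                          # north position error driven by north velocity error
    F[4, 5] = (g + Az);  F[4, 8] = -Ax     # north velocity error
    F[5, 4] = -1.0 / R;  F[5, 8] = -wy     # tilt about the x-axis
    F[6, 7] = 1.0                          # vertical position error driven by vertical velocity error
    F[7, 6] = 2.0 * g / R                  # unstable vertical channel
    F[9, 10] = 1.0                         # clock bias driven by clock drift
    return F

dt = 1.0                                   # assumed update interval, sec
F = make_F(g=9.81, R=6.37e6, Ax=0.0, Ay=0.0, Az=0.0, wx=0.0, wy=0.0)
Phi = np.eye(11) + F * dt                  # Equation 3.26, adequate only for small dt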
Computation of the Qk matrix in a high-dimensional system is usually an
onerous task, especially if there is nontrivial coupling among the various state

variables as in this example. However, once the F matrix, the spectral densities
of the forcing functions, and the Δt propagation interval are specified, then Qk
is numerically computable [6, 7].
Four pseudorange measurements (one per satellite) are sufficient for a stand-
alone GPS solution. Each pseudorange measurement is of the general form

Total measured pseudorange = True range + Range bias

                             + White measurement noise

To use pseudorange in the Kalman filter mechanization, it must be linearized.


The observation processed by the Kalman filter is the difference between the
receiver-measured pseudorange and the estimated pseudorange based on the
corrected INS computed position and a corrected GPS receiver clock time. Only
the incremental perturbations in the INS and clock estimate errors then appear
on the right side of the measurement equation along with a white measure-
ment noise. Direction cosines will appear as coefficients of the INS position
errors, just as in the previous DME example. Clock bias enters as an additive
term in units of range. There will be a linearized pseudorange measurement
for each satellite, so the final form of the measurement model for the Kalman
filter is

z = Actual receiver measurement


- Predicted measurement based on corrected INS position and

corrected receiver time

The resulting H matrix and observation z are

z = H\,x + v \qquad (3.27)

z = \begin{bmatrix} z_1 \\ z_2 \\ z_3 \\ z_4 \end{bmatrix} =
\begin{bmatrix}
h_{11} & 0 & 0 & h_{14} & 0 & 0 & h_{17} & 0 & 0 & 1 & 0 \\
h_{21} & 0 & 0 & h_{24} & 0 & 0 & h_{27} & 0 & 0 & 1 & 0 \\
h_{31} & 0 & 0 & h_{34} & 0 & 0 & h_{37} & 0 & 0 & 1 & 0 \\
h_{41} & 0 & 0 & h_{44} & 0 & 0 & h_{47} & 0 & 0 & 1 & 0
\end{bmatrix}
\begin{bmatrix}
x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \\ x_7 \\ x_8 \\ x_9 \\ x_{10} \\ x_{11}
\end{bmatrix}
+ v
\qquad (3.28)

where the h_ij are the respective direction cosines between the lines of sight to the
various satellites and the navigation coordinate axes. The R matrix reflects the
additive white measurement noise components, which in this model would
be a 4 × 4 matrix:

R = E[v\,v^T] \qquad (3.29)

R would usually be specified as diagonal with all terms along the diagonal being
equal. The numerical values of the terms are chosen to match the expected
variances of the white pseudorange measurement errors. As a practical matter,
these terms are usually specified to be larger than the expected error variances
to compensate for the inaccuracy in modeling the measurement errors as white
noise. The GPS measurement frequency (usually about 1 Hz) is very high rel-
ative to the characteristic Schuler time constant of the inertial system errors,
so implementing an R matrix with large values does not significantly degrade
performance due to the filtering provided by the INS.
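Each row of the H matrix of Equation 3.28 can be formed from the unit
line-of-sight vector to the corresponding satellite, with a 1 in the clock-bias column.
The sketch below is illustrative only: the satellite and user positions are arbitrary,
a common navigation frame is assumed for the INS errors and the lines of sight,
and the sign convention depends on how the observable difference is formed.

import numpy as np

def h_row(user_pos, sat_pos):
    los = np.asarray(sat_pos, dtype=float) - np.asarray(user_pos, dtype=float)
    u = los / np.linalg.norm(los)               # unit line-of-sight vector to the satellite
    row = np.zeros(11)
    row[0], row[3], row[6] = u[0], u[1], u[2]   # direction cosines in the position-error columns
    row[9] = 1.0                                # receiver clock bias, in units of range
    return row

sats = [[2.0e7, 5.0e6, 1.0e7], [-1.5e7, 1.0e7, 1.5e7],
        [5.0e6, -2.0e7, 1.2e7], [1.0e7, 1.8e7, 8.0e6]]
H = np.vstack([h_row([0.0, 0.0, 0.0], s) for s in sats])   # 4 x 11 measurement matrix
R = 25.0 * np.eye(4)   # equal diagonal terms, set somewhat larger than the expected error variance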

3.13.4 Elaboration of the 11-State GPS-INS Error Model


The only way to improve the performance of the 11-state model is to add more
error states. This increases the dimensionality of the error state vector and asso-
ciated matrices, which in turn increases the computational burden on the system
computer. Better performance is possible, but a specific application trade-off
needs to be made to decide whether the added complexity will be of sufficient
benefit. Additional error states might be those discussed below:

Inertial Error States The inertial system error states that are generally the
next in importance are the "biases" associated with the accelerometers and
gyros. These are the random forcing functions that are calibrated at the start of a
mission but then slowly wander away from their initial values during the course
of the flight. It is especially important to keep calibrating these forcing func-
tions continuously if it is anticipated that the INS might have to dead-reckon in
the free-inertial mode for a significant time period without GPS measurements.
If the biases are estimated in flight, the system errors grow more slowly during
the free-inertial period. Usually a first-order Markov or a random-walk process
is used to represent an inertial instrument error source, so only one error state is
added to the model for each source. Such an elaboration of the inertial system
error model would add an additional three error states for three gyro drift rate
errors and an additional three error states for three accelerometer measurement
errors.
The primary trend is to employ low-cost, low-to-medium performance iner-
tial components to bridge the periods of GPS outage in a cost-effective manner.
These instruments will likely employ micro-machined silicon technology for
the accelerometers and fiber optics for the gyros. In 1996 these technologies

did not yet obtain the high-accuracy performance levels of the more expen-
sive technologies used in the past. These instruments require more elaborate
error modeling, including such additional states as scale factor error, mutual
mechanical misalignments of instrument sensing axes, and bias changes as a
function of measured environmental variations to obtain good calibration.
In addition to the instrument error states discussed above, the effects of
unknown variations in the gravity vector should be considered as error states in
some Kalman filter designs that include an inertial system. This consideration is
important because the gravity disturbance vector introduces errors in the force
measurements made by the accelerometers that can significantly affect some
applications.

GPS Error States Besides the receiver clock error model, filter designers have
also been concerned with the error in the pseudorange measurement and the
error in the Doppler or integrated Doppler (delta-range) measurement to each
satellite due to residual satellite clock and orbit errors and transmission path
effects. Inclusion of an error state for each of these measurements can obtain a
calibration of the measurement under certain conditions. For example, precise
knowledge of the vehicle location and velocity at the measurement time can
provide such a condition. Further, measurements from satellites currently being
tracked can be used to calibrate the measurements from a new satellite when
an initial track is established.
Other refinements of the basic 11-state filter are also possible. Refinements
in the error dynamics of the model itself may be required in some applications.
The example used a north-oriented coordinate frame, but a different frame can
be used (e.g., the azimuth-wander frame discussed in Chapter 7). The 11-state
model includes all the basic quantities that need to be estimated since the filter
estimates position, velocity, instrument or platform coordinate frame orienta-
tion, and GPS receiver clock errors. All of these quantities are observable in
the 11-state filter once measurements to four or more satellites have been made
available. The observability implies that the usual problems of platform level-
ing, gyrocompassing, and damping the Schuler oscillation are all taken care of
automatically by the Kalman filter without any special ad hoc procedures. In
the past, the principal constraint on the elaboration of the error model has been
limited computer resources. However, in the 1990s, Kalman filter designs based
on models with several tens of error states were being implemented.

3.14 PRACTICAL CONSIDERATIONS

In practice, the implementation and validation of a Kalman filter design needs


to address a number of topics:

• Measurement synchronization

• Measurement editing
• Tuning parameter adjustment
• Filter equation implementation

Measurements must be synchronized between an aiding sensor such as the GPS


receiver and the INS. Any difference between the time when the GPS mea-
surement is made and the time at which the inertial system estimate of that
measurement is valid can introduce significant errors in the observable difference processed
by the Kalman filter.
Measurement editing, sometimes called reasonability testing, is implemented
in most systems to avoid spurious errors in the observable difference being inad-
vertently processed by the Kalman filter and consequently contaminating system
performance. Typically, the observable difference is compared with some mul-
tiple of its standard deviation as computed from the error covariance matrix
propagated in the real-time Kalman filter solution. Excessively large observ-
able differences are discarded. Measurements are usually processed one at a
time in any practical system rather than, for example, simultaneously processing
the four GPS pseudorange measurements as defined above. With this "scalar"
versus "vector" approach, Kalman corrections are made sequentially until all
available satellite measurements are processed.
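The editing test and the scalar, one-at-a-time processing described above might be
sketched as follows (the three-sigma gate, the function name, and the data layout are
assumptions, not a prescription from the text):

import numpy as np

def process_scalars(x, P, measurements, gate_sigmas=3.0):
    # x is the n x 1 error-state estimate, P its n x n covariance;
    # measurements is a list of (z, h, r), with h a length-n row and r the scalar noise variance.
    for z, h, r in measurements:
        h = np.asarray(h, dtype=float).reshape(1, -1)
        innov = z - (h @ x).item()               # observable difference
        s = (h @ P @ h.T).item() + r             # predicted variance of the observable difference
        if innov**2 > (gate_sigmas**2) * s:      # reasonability test: discard spurious measurements
            continue
        K = (P @ h.T) / s                        # scalar Kalman gain
        x = x + K * innov
        P = (np.eye(P.shape[0]) - K @ h) @ P
    return x, P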
In the validation of any filter design, the testing with the actual hardware
results in an adjustment of tuning parameters to refine performance. Such a
process is necessary to compensate for the numerous error effects that are not
accounted for in the filter design even when very elaborate simulation programs
are employed in the design task. The parameters that are adjusted in the filter
"tuning" to accommodate these errors are usually the magnitudes of the distur-
bance process noise power spectral densities (Q matrix) and the random obser-
vation error variances (R matrix).
Since the Kalman filter is a recursive algorithm performed on a digital com-
puter, truncation or round-off error can lead to numerical difficulties as the num-
ber of iterations becomes large. Fortunately such problems can nearly always
be avoided if proper safeguards are taken. The Kalman filter has a degree of natural
stability if the system is completely observable and there is nonzero process noise
driving each of the state variables at each recursive step. If a steady-state solution
for the error covariance matrix exists, then this matrix will tend to converge to
steady state after a small perturbation, provided that the matrix is positive definite.
Some techniques that have been found useful in preventing numerical prob-
lems are the following:

1. Use of high-precision arithmetic. The advance in real-time digital com-
   puter chips (to 32 and 64 bit arithmetic) facilitates this technique.
2. Symmetrize the P_k and P_k^- matrices at each step of the recursive process.
   Symmetry is automatically obtained if only the upper triangular portion of
   the error covariance matrix is employed in the implementation.

3. Avoid undriven state variables in the process model (random constants).
   This is the equivalent of ensuring that Q_k is positive definite, and it
   ensures that P_k^- will be positive definite, even if some of the measure-
   ments at the prior step are treated as perfect.
4. If the measurement data are sparse, propagate Pk through smaller time
steps. Also be sure that the model parameters accurately represent the true
dynamics of the processes being modeled. Otherwise, incorrect cross-cor-
relations can be introduced in the propagation step, and they can adversely
affect the gain computation.
5. At measurement time points, use the following general quadratic form
   [2], which is correct for an arbitrary gain matrix K, to update the error
   covariance matrix when a measurement is processed (a code sketch of this
   update follows this list):

   P_k = [I - K_k H_k]\,P_k^-\,[I - K_k H_k]^T + K_k R_k K_k^T \qquad (3.30)

This form preserves the positive definiteness of the error covariance


matrix, whereas the theoretical form:

P_k = [I - K_k H_k]\,P_k^- \qquad (3.31)

only guarantees this condition when the gain is optimal. Note that even
though the equation implemented in the computer is for the optimal gain

K_k = P_k^- H_k^T\,[H_k P_k^- H_k^T + R_k]^{-1} \qquad (3.32)

the numerical result may not realize the optimal gain.
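A sketch of the Joseph-form update of Equation 3.30, with an explicit
symmetrization step in the spirit of technique 2 above (the function and its
arguments are illustrative, not a required interface):

import numpy as np

def joseph_update(P, K, H, R):
    # Equation 3.30: valid for an arbitrary (not necessarily optimal) gain K.
    I = np.eye(P.shape[0])
    A = I - K @ H
    P_new = A @ P @ A.T + K @ R @ K.T
    return 0.5 * (P_new + P_new.T)   # force symmetry to suppress round-off asymmetry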

In the past when digital computers of limited precision were available, an


alternative form of the Kalman filter known as U-D factorization [6, 7, 10],
a square-root formulation, was sometimes employed. The filter recursive equa-
tions are more difficult to program in U-D form than in the normal form shown
in Figure 3.8. The U-D formulation has better numerical behavior than the usual
equations, since the dynamic range for the equivalent error covariance infor-
mation is reduced by a factor of two. As high-precision processor chips have
become available, this technique has diminished in importance.

3.15 FEDERATED SYSTEM ARCHITECTURE

The architecture of avionics systems first started as decentralized, with each


sensor having its own analog computer. Since the first digital computers in the

1960s were so expensive, centralized systems became universal. The develop-


ment of less expensive microprocessors led back to decentralized ("federated")
digital systems in the 1980s. In the 1990s, trends toward both centralized and
federated systems have been noticeable.
The federated system architecture that results in a cascaded Kalman filter
situation is shown in Figure 3.16. Federated system architectures have occurred
because, in the past, individual "black boxes" have been procured from different
specialized suppliers, each of whom has tended to implement a self-contained
function. Specialists in each technology develop their own software. However,
the system engineer is then faced with the task of integrating the outputs of these
individual sensor subsystems into a master Kalman filter. Unfortunately, there is
no simple optimal methodology for dealing with this problem. For an optimum
solution, the system designer must process the raw measurements provided by
each of the sensor subsystems and implement a single "centralized" Kalman
filter as opposed to the "decentralized" filter approach illustrated in Figure 3.16.
Another difficulty in the decentralized filter problem is the significant time
correlation structure of the estimation errors in the outputs of the sensor fil-
ters. One of the assumptions of the basic Kalman filter theory is that the mea-
surement error sequence contains a component of error that is random (uncorrelated)
from one measurement to the next. Two remedies to obtain this condition exist:

1. If the output data rate of a subsystem is relatively high, it can often be
   sampled at a much lower rate for the measurement that is input to the master
   filter (see the sketch following this list). There may be some loss of
   information in doing this, but often this is not severe. For example, suppose
   that one of the sensor subsystems provides data at 50 Hz and that the
   correlation time of the estimation error is about 1 sec. If the sampling rate
   of the master filter were reduced to 1 Hz, the measurement errors would be
   reasonably decorrelated, satisfying the theoretical requirement.
2. The filter parameters in the subsystem filter can be readjusted so that the
output estimation errors are nearly white. This means that this filter will
be operating suboptimally but optimization can be obtained by the master
filter. The usual way of decorrelating the output errors of a Kalman filter
is to make the Qk matrix diagonal values artificially large and/or make
those in Rk small. This gives the filter a short memory and results in light
filtering of the measurement stream, which presumably contains uncorre-
lated errors. This method is often a viable option, because a change in
parameter values is usually a minor software change.
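The effect of remedy 1 can be illustrated with a short Python sketch (not from the text). It simulates a first-order Gauss-Markov error sequence with a 1-sec correlation time sampled at 50 Hz and compares the adjacent-sample correlation before and after decimating to 1 Hz; the sequence model, sample count, and the helper lag1_corr are illustrative assumptions.

    # Sketch: subsampling a 50-Hz error sequence with 1-sec correlation time.
    import numpy as np

    rng = np.random.default_rng(0)
    dt, tau, n = 0.02, 1.0, 100_000          # 50-Hz samples, 1-sec correlation time
    phi = np.exp(-dt / tau)                  # one-step correlation coefficient
    w = rng.standard_normal(n) * np.sqrt(1.0 - phi**2)
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for k in range(1, n):                    # x_k = phi * x_{k-1} + w_k
        x[k] = phi * x[k - 1] + w[k]

    def lag1_corr(seq):
        """Sample correlation between successive elements of a sequence."""
        return np.corrcoef(seq[:-1], seq[1:])[0, 1]

    print("adjacent-sample correlation at 50 Hz:", round(lag1_corr(x), 3))       # ~0.98
    print("adjacent-sample correlation at  1 Hz:", round(lag1_corr(x[::50]), 3)) # ~0.37 (= e^-1)

At the 1-Hz rate the residual correlation is roughly e^-1, which is what the text means by "reasonably decorrelated."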

With the advent of more centralized computer management of all sensors


in future avionics system architectures with the utilization of high-speed data buses, cascaded Kalman filters should fall into disuse. The synergistic utilization of all sensor measurements should be realized more easily, including not only the optimal Kalman filtering of all sensor measurements but also the cross-use of sensor measurements between sensors for dynamic compensation. Further reinforcing this trend is the fact that the software of centralized systems once had to be recompiled and revalidated after any design change. In the 1990s, open software operating systems have permitted changes to one subsystem's software without affecting that of any other, greatly simplifying the design-change and validation problem.

Figure 3.16 Block diagram of a federated system architecture resulting in a "cascading" of Kalman filters.

3.16 FUTURE TRENDS

The future will see significant activity focused on integration of the naviga-
tion sensors with other avionics sensors on the aircraft. In military systems, the
information collected by many sensors is being fused into a central data base to
ensure their optimal use. Civil aircraft systems are likely to see a slower pace
of multisensor integration, largely because of the procurement practices of the
airlines. They purchase equipment from different suppliers based on compet-
itive prices and a desire to interchange "black boxes." Furthermore, reliabil-
ity and safety considerations may lead to a preference for functional redundancy and a partially federated approach.
dancy and a partially federated approach. In 1995, the civil aviation industry
began to investigate centrally fused data in a program called Integrated Modu-
lar Avionics (IMA), which is somewhat equivalent to the functional integration
and resource-sharing programs represented by the military ICNIA and ICNIS
programs. These efforts are likely to continue and accelerate in the twenty-first
century.
High-powered, low-cost digital processors will facilitate large-scale integration and enable redundant sensors to achieve a high level of fault tolerance. The self-contained black-box subsystems that drove the federated avionics systems architecture in the past will eventually disappear as the more information-rich centralized system architecture evolves. Significant savings should be realized in cost, size, weight, and power. The one major disadvantage of the centralized system is that the central-computer software engineers must be expert in each sensor. Libraries of standard sensor modules may appear near the turn of the twenty-first century.
Because of its significant cost/performance benefits, the combination of a
low-cost, low-to-moderate performance inertial measurement unit and a GPS
receiver will be a widely used multisensor system for many types of air vehicles
for years to come.

PROBLEMS

3.1. Refer to the simplified inertial system error model shown in Figure 3.14, and make the following assumptions:

     ε_x = 0.01°/hr     constant drift rate
     δA_y = 25 μg       constant accelerometer bias
     A_x,z = 0          no acceleration
     ω_y = Ω cos Φ      the north component of Earth rate, where the latitude Φ = 45°

     Compute the steady-state values of (east) tilt φ_x and azimuth misalignment φ_z that will result from observing (north) velocity error Δv during the process of initial alignment of the inertial system.
     Hint: Also see Chapter 7. Ans.: φ_z = 200 arcsec, φ_x = 5 arcsec
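A quick numerical check of the stated answers (a Python sketch using the standard steady-state alignment relations, not necessarily the derivation of Chapter 7):

    # Check of Problem 3.1: in steady state the east tilt cancels the north
    # accelerometer bias, and the azimuth misalignment cancels the east gyro
    # drift through the north Earth-rate component.
    import math

    g = 32.2                                   # ft/s^2
    dA = 25e-6 * g                             # 25 micro-g accelerometer bias
    eps_x = math.radians(0.01) / 3600.0        # 0.01 deg/hr gyro drift, rad/s
    omega_e = 7.292115e-5                      # Earth rate, rad/s
    lat = math.radians(45.0)

    phi_x = dA / g                             # rad
    phi_z = eps_x / (omega_e * math.cos(lat))  # rad

    to_arcsec = math.degrees(1.0) * 3600.0
    print(f"east tilt  phi_x ~ {phi_x * to_arcsec:.1f} arcsec")   # ~5 arcsec
    print(f"azimuth    phi_z ~ {phi_z * to_arcsec:.0f} arcsec")   # ~200 arcsec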
3.2. Assume an inertial system has been initially aligned as in Problem 3.1 (equilibrium conditions obtained), with a variance in (east) tilt of [5 arcsec]² and a variance in azimuth misalignment of [200 arcsec]², and, additionally, there exists an independent error in (north) position, with a variance of [500 ft]². What are the variances in tilt, azimuth misalignment, and position error when a star tracker is integrated with the inertial system and a star is observed directly overhead (along the local vertical), if the tracker measurement error is zero and processed by a Kalman filter?
     Ans.: σ²_φx = [3.5 arcsec]², σ²_y = [353 ft]², and σ²_φz = [200 arcsec]²
     What are these variances if a star is observed on the horizon along the local north line?
     Ans.: σ²_φx = [5 arcsec]², σ²_y = [500 ft]², and σ²_φz = 0
3.3. Consider an inertial system initially aligned as in Problem 3.1 but integrated with a reference speed sensor (e.g., a Doppler radar), where the vehicle accelerates in the eastward direction at A_x = 10 fps² for 10 seconds to a velocity of V_x = 100 fps. What are the steady-state values of the tilt and azimuth misalignment variances if Doppler speed measurements with no error are processed by a Kalman filter?
     Ans.: σ²_φx = [5 arcsec]² and σ²_φz = [200 arcsec]²
     What are the steady-state values of tilt and azimuth misalignment variances if y (north) position change measurements (e.g., as obtained from a GPS receiver) with no error are processed by a Kalman filter?
     Ans.: σ²_φx = [5 arcsec]² and σ²_φz = 0
3.4. For an integrated reference speed sensor and inertial system, where the speed sensor has a constant bias error with variance of [1 fps]² and the inertial system has a constant gyro drift rate with variance of [0.01°/hr]², compute the steady-state variance of the system velocity error, σ²_Δv.
     Hint: Refer to Figure 3.6. Ans.: σ²_Δv = [0.707 fps]²
     What is the steady-state variance of this velocity error if, in addition, a constant step change of acceleration measurement error with variance of [25 μg]² is introduced?
     Ans.: σ²_Δv = [0.707 fps]²
3.5. Suppose you wanted to measure the deflection of the vertical (unknown
deviation of the gravity vector from the local normal to the ellipsoid of rev-
olution, which approximates the gravitational potential field of the earth)
at a fixed point on the earth's surface for which the position is exactly
known. Which two navigation sensors discussed in this chapter are nec-
essary to perform this task, and what would the sources of residual error
be in the measurements?
Ans.: A stationary inertial system (after an initial alignment using
observations of velocity error) establishes the direction of the local
gravity vector with an error primarily due to the accelerometer mea-
surement error. A star tracker provides a measurement of the total
tilt of the inertial system due to both the accelerometer measurement
error and the deflection of the vertical. Error in this determination
of tilt is due to the error in the tracker measurement of star position
if the position on the surface of the Earth is exactly known.
If you wanted to measure the total deflection of the vertical in an airborne
vehicle moving over the surface of the Earth, which of the navigation sen-
sors discussed in this chapter would be employed?
Ans.: As the vehicle moves over the surface of the earth, error in the
navigated position of a stellar-inertial system will increase, degrad-
ing the ability of the system to determine deflection of the vertical as
achieved above for the stationary system. Consequently, the addition
of a navigation sensor that provides highly accurate positions, such
as obtained with a GPS receiver, is necessary.
4 Terrestrial Radio-Navigation
Systems

4.1 INTRODUCTION

This chapter discusses the basic principles of terrestrial radio navigation sys-
tems, the radio propagation and noise characteristics and the major system per-
formance parameters. The systems described in detail include all the impor-
tant point source systems, such as direction finders, VOR, DME, and Tacan;
and the hyperbolic systems, such as Loran-C, Decca, and Omega. All of these
systems have been used worldwide and have provided accurate and reliable
positioning and navigation in one or two dimensions for many years. In 1996
hundreds of thousands of civil and military aircraft throughout the world were
equipped with these systems, and many of these will be used for years to come.
Satellite radio navigation systems (e.g., GPS) are discussed in Chapter 5 and
military integrated radio communication-navigation systems are described in
Chapter 6.

4.2 GENERAL PRINCIPLES

4.2.1 Radio Transmission and Reception


Figure 4.1 shows an elementary radio system. If a wire (antenna) is placed in
space and excited with an alternating current of such frequency as to make the
length of the wire equal to half a wavelength, almost all the applied ac power
that is not dissipated in the wire will be radiated into space. A similar wire,
some distance away and parallel to the first wire, will intercept some of the
radiated power, and an appropriate detector connected to this receiving wire can
indicate the magnitude, frequency, phase or time of arrival of the transmitted
energy. This is the basis of all radio-navigation systems. Half-wavelength wires
are called resonant dipole antennas.
Figure 4.1 Elementary radio-navigation system (modulator, transmitter, receiver, processor, display or data-bus interface).

To communicate from the transmitter to the receiver, it is necessary to modulate the alternating current in some manner. Early systems merely turned the alternating current on and off in accordance with the Morse code. Numerous other modulation systems have since come into use; the most important distinction, from a navigation standpoint, is whether the alternating current is left on all the time (continuous wave) or whether it is turned off most of the time and
transmitted only as pulses. Within these broad categories there are many varia-
tions. For instance, the continuous wave signal may be modulated in amplitude,
frequency, or phase; pulses may be modulated in amplitude, time, or arranged
into various codes.
Early radio experiments were hampered by the difficulty of generating suf-
ficiently high frequencies or building sufficiently large antennas to secure effi-
cient transmission. They were also hampered by the low sensitivity of available
detectors. However, the invention and development of the vacuum tube and
solid state devices, including transistors, varactors, and many others, greatly
extended the frequency spectrum that could be used, with the result that prac-
tical radio systems use frequencies from 10 kHz (30,000-meter wavelength) up to 100 GHz (3-mm wavelength), and progress is constantly being made toward
the use of still higher frequencies.
By general agreement, radio frequencies have long been categorized as fol-
lows [36]:

Name                       Abbreviation   Frequency          Wavelength
Very low frequency         VLF            3 to 30 kHz        100 to 10 km
Low frequency              LF             30 to 300 kHz      10 to 1 km
Medium frequency           MF             300 to 3000 kHz    1 km to 100 m
High frequency             HF             3 to 30 MHz        100 to 10 m
Very high frequency        VHF            30 to 300 MHz      10 to 1 m
Ultrahigh frequency        UHF            300 to 3000 MHz    1 m to 10 cm
Superhigh frequency        SHF            3 to 30 GHz        10 to 1 cm
Extremely high frequency   EHF            30 to 300 GHz      10 to 1 mm
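As a small programming aid (not part of the text), the band limits in the table can be captured in a lookup. The example frequencies in the usage loop are typical values for a few systems discussed in this chapter and are included only for illustration.

    # Map a frequency (Hz) onto the standard band names tabulated above.
    BANDS = [
        ("VLF", 3e3, 30e3), ("LF", 30e3, 300e3), ("MF", 300e3, 3e6),
        ("HF", 3e6, 30e6), ("VHF", 30e6, 300e6), ("UHF", 300e6, 3e9),
        ("SHF", 3e9, 30e9), ("EHF", 30e9, 300e9),
    ]

    def band_of(freq_hz: float) -> str:
        """Return the standard band abbreviation for a frequency in hertz."""
        for name, lo, hi in BANDS:
            if lo <= freq_hz < hi:
                return name
        return "outside the tabulated bands"

    # e.g., Omega, Loran-C, VOR, DME, and a typical weather radar
    for f in (10.2e3, 100e3, 1.09e8, 1.03e9, 9.4e9):
        print(f"{f/1e6:10.4f} MHz -> {band_of(f)}")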

At the higher frequencies, the following letter designations for certain fre-
quency bands have been widely accepted, although they do not have official
status (they are frequently related to standard wave-guide sizes) [35]:

Letter Designation   Frequency Range       Letter Designation   Frequency Range

L                    0.39 to 1.55 GHz      Xh                   6.25 to 6.90 GHz
Ls                   0.90 to 0.95 GHz      K¹                   10.90 to 36.00 GHz
S                    1.55 to 5.20 GHz      Ku                   15.35 to 17.25 GHz
C                    3.90 to 6.20 GHz      Ka                   33.00 to 36.00 GHz
X                    5.20 to 10.90 GHz     Q                    36.00 to 46.00 GHz
¹Includes Ke band, which is centered at 13.3 GHz.

With progress constantly being made at still higher frequencies, other sys-
tems of nomenclature will be required. However, radio navigation, as defined in
this chapter, is confined primarily to the bands lying between VLF and UHF,
where the above nomenclature is likely to remain in use. Regardless of fre-
quency, the following general rules apply in free space:

1. The propagation speed of radio waves in a vacuum is the speed of light: 299,792.5 ± 0.3 km/sec (usually taken as 300,000 km/sec for all but the most precise measurements).
2. The received energy is a function of the area of the receiving antenna.
If transmission is omnidirectional, the received energy is proportional to
the area of the receiving antenna divided by the area of a sphere of radius
equal to the distance from the transmitter:

Received power / Transmitted power = (Receiver antenna area) / (Area of a sphere = 4πR²)     (4.1)

where R is the range between antennas in the same units as those for the antenna area.
3. Multiple antennas may be used at both ends of the path to increase the
effective antenna area. Such increases in area produce an increase in
directivity or gain and result in more of the transmitted power reach-
ing the receiver. The gain G of an antenna (in the direction of maxi-
mum response) is equal to its directivity D times its efficiency. The maxi-
mum effective aperture or effective area of an antenna is equal to Dλ²/4π. It is defined as the ratio of the power in the terminating impedance to the power density of the incident wave, when the antenna is oriented for maximum response and under conditions of maximum power transfer [19]. It is also defined as the physical area times the antenna aperture efficiency (or absorption ratio) [19, 36]. The directivity or gain of an antenna is usually expressed as a ratio with respect to either a hypothetical isotropic radiator or a half-wave dipole. A dipole has an effective area of about 0.13 times the square of the wavelength [36]. A transmitter

of power P and antenna gain G has an effective radiated power (ERP) of


PG along its axis of maximum gain.
For practical purposes, Equation 4.1 can be rewritten [19]

Received power / Transmitted power = A_r A_t / (R² λ²)     (4.2)

where
A_r is the effective area of the receiving antenna
A_t is the effective area of the transmitting antenna
R is the range between antennas
λ is the wavelength (the speed of light/frequency)
Thus, for fixed effective antenna areas, the power transferred from trans-
mitter to receiver increases as the square of the frequency. However, this
is accompanied by a corresponding increase in directivity. Such direc-
tivity is of no concern in fixed point-to-point service and is of advan-
tage in reducing external noise pickup. In many moving-platform appli-
cations, such as aircraft, a high level of directivity is a distinct disad-
vantage. However, when the use of tuned dipole antennas is assumed,
the power transferred decreases as the square of the frequency. This is
seen from Equation 4.1 where the receiver antenna area for a dipole (i.e., 0.13λ²) is substituted in the numerator (see the sketch following this list).
4. The minimum power that a receiver can detect is referred to as its sensitivity.
Where unlimited amplification is possible, sensitivity is limited by the noise
existing at the input of the receiver. Such noise is of two main types:
a. External. Due to other unwanted transmitters, electrical-machinery
interference, atmospheric noise, cosmic noise, and the like.
b. Internal. Depending on the state of the art and approaching, as a lower
limit, the thermal noise across the input impedance of the receiver,
which is given by

N_P = k T Δf     (4.3)

where
N_P is the noise power (in watts)
k is Boltzmann's constant (1.38 × 10⁻²³ Joules/Kelvin)
T is the temperature (in Kelvin)
Δf is the bandwidth (in Hertz)

The factor by which a receiver fails to reach this theoretical internal-noise


limit is often expressed as a ratio, in decibels, and is known as the noise
figure (NF) of the receiver [37]:

NF = N_o / (k T_0 Δf G_a)     (4.4)

where N_o is the noise power out of a practical receiver and N_1 = k T_0 Δf G_a is the noise power out of an ideal receiver at standard temperature T_0, of available gain G_a, and of bandwidth Δf.
5. The minimum bandwidth occupied by the system is proportional to the
information rate. For most navigational purposes, the necessary informa-
tion rate is quite low. For instance, to navigate in a given direction to an
accuracy of 500 ft with an aircraft that cannot change its position more
than 500 ft in that direction in any one second, new information is needed
only once per second. However, most practical systems have employed
many times this minimum bandwidth. The reasons include (a) the need for
other services, such as communications on the same channel, (b) the use
of pulse techniques to aid in resolving multiple targets and to reduce the
effects of multipath transmission, and (c) the use of spectrum-spreading
techniques to improve signal-to-noise ratio (S/N), accuracy of range measurements, and reduction of effects due to interference or site errors. (Spread-
ing the spectrum beyond that needed by the information rate itself has the
same effect as increasing the power, provided that optimum techniques
are used at each end of the link [2, 27].)
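A small Python sketch (not from the book) of the free-space relations in rules 2 and 3 above: Equation 4.1 with an aperture-type receiving antenna, and Equation 4.2 with either fixed effective apertures or tuned half-wave dipoles. The range, frequencies, and aperture sizes are illustrative assumptions.

    import math

    c = 3.0e8                      # speed of light, m/s
    R = 100e3                      # 100-km range (assumed)

    def frac_received_eq41(A_r, R):
        """Equation 4.1: omnidirectional transmitter, receiving aperture A_r (m^2)."""
        return A_r / (4.0 * math.pi * R**2)

    def frac_received_eq42(A_r, A_t, R, wavelength):
        """Equation 4.2: both antennas characterized by effective apertures (m^2)."""
        return A_r * A_t / (R**2 * wavelength**2)

    print("Eq. 4.1, 1-m^2 aperture, isotropic transmitter:",
          f"{frac_received_eq41(1.0, R):.3e}")

    for f in (100e6, 1e9, 10e9):
        lam = c / f
        dipole_area = 0.13 * lam**2            # effective area of a half-wave dipole
        fixed = frac_received_eq42(1.0, 1.0, R, lam)      # two fixed 1-m^2 apertures
        dipoles = frac_received_eq42(dipole_area, dipole_area, R, lam)
        print(f"f = {f/1e9:5.2f} GHz: fixed apertures {fixed:.3e},  tuned dipoles {dipoles:.3e}")
    # Fixed apertures: transfer grows as f^2; tuned half-wave dipoles: falls as f^2.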

In summary, to assess the free-space range of a radio system, it is necessary to have at least the following facts: transmitter power and antenna gain, receiver antenna gain and noise figure, the effective bandwidth of the system, and the effect on system performance of external or internal noise. Combining the fundamental relations in Equations 4.1 to 4.4 results in the following generalized link budget expression for the required radio transmitter power of a radio system as a function of key system parameters [35]:

P_T = (S/N)_REQ · P_N · NF · L_p / (F_N · G_T · G_R)

where
P_T is the transmitter power
P_N is the noise power in the receiver
(S/N)_REQ is the required signal-to-noise ratio in the receiver
NF is the receiver noise figure
F_N is the noise improvement factor due to modulation method and bandwidth spreading (e.g., frequency modulation)
G_T is the transmitter antenna gain

G_R is the receiver antenna gain
L_p is the propagation path attenuation loss

It is assumed that the polarizations of the transmitting and receiving antennas are the same.
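The pieces can be combined in a minimal numerical link-budget sketch (not from the book) that evaluates the expression above, taking the receiver noise power from Equation 4.3 and using the free-space spreading loss for L_p; every numerical value (frequency, range, bandwidth, gains, noise figure, required S/N) is an illustrative assumption.

    import math

    def db(x):            # power ratio -> decibels
        return 10.0 * math.log10(x)

    def undb(x_db):       # decibels -> power ratio
        return 10.0 ** (x_db / 10.0)

    k = 1.38e-23                       # Boltzmann's constant, J/K
    T0 = 290.0                         # standard temperature, K
    f = 1.0e9                          # 1-GHz carrier (assumed)
    wavelength = 3.0e8 / f
    R = 200e3                          # 200-km range (assumed)
    bw = 10e3                          # 10-kHz bandwidth (assumed)

    P_N = k * T0 * bw                  # receiver thermal-noise floor (Eq. 4.3)
    NF = undb(6.0)                     # 6-dB noise figure (assumed)
    SN_req = undb(12.0)                # 12-dB required S/N (assumed)
    F_N = 1.0                          # no spreading/modulation improvement (assumed)
    G_T = undb(3.0)                    # transmit antenna gain (assumed)
    G_R = undb(3.0)                    # receive antenna gain (assumed)
    L_p = (4.0 * math.pi * R / wavelength) ** 2   # free-space path loss (isotropic reference)

    P_T = SN_req * P_N * NF * L_p / (F_N * G_T * G_R)
    print(f"noise floor  : {db(P_N / 1e-3):.1f} dBm")
    print(f"path loss    : {db(L_p):.1f} dB")
    print(f"required P_T : {db(P_T / 1e-3):.1f} dBm ({P_T:.3f} W)")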
The radiation pattern from half-wave wires is a maximum along their per-
pendicular bisectors and a minimum along the axis of the wire, the equisignal
pattern thus forming a "doughnut." (An isotropic radiator converts the volume
of this doughnut into a ball, with uniform radiation in all directions. Such a radi-
ator would have to be a point source and is theoretically impossible at coherent
frequencies. However, in illumination engineering and in optics, from which
radio theory borrows, such point sources are assumed to exist.)
At the lower frequencies, along the surface of the Earth, vertical polariza-
tion is universally used (the wires being vertical) with minimum signal being
radiated into the ground. At frequencies where the antenna can conveniently
be placed half a wavelength or more above the Earth (generally in the high-
frequency band and above), either vertical or horizontal polarization is used,
depending on other factors.

4.2.2 Propagation and Noise Characteristics


In free space, all radio waves, regardless of frequency, are propagated in straight
lines at the speed of light. Along the surface of the earth, however, two other
methods of propagation are of importance: Up to about 3 MHz, an appreciable
amount of energy follows the curvature of the Earth, called the ground wave.
Up to about 30 MHz, appreciable energy is reflected from the ionosphere; this
is called the sky wave. The sky wave makes some types of long-range com-
munication feasible, but it is of less value to navigation systems because its
transmission path is unpredictable. At frequencies where the ground wave is
useful, such ionospheric reflection actually detracts from the value of the sys-
tem and requires special treatment.

Ground Waves  Ground waves are familiar as those normally received when


listening to a standard AM broadcast transmitter during daylight hours. Daytime
range is better at the low-frequency end of the band (550 kHz) than at the
high end of the band (1650 kHz). Nighttime coverage is greater over the whole
band, but depends on the sky-wave transmission. Ground-wave propagation is
the primary propagation mode used in a number of modern radio navigation
systems (e.g., Loran-C; Section 4.5.1).
Propagation of ground waves is dependent on several additional factors.
First, ground-wave propagation is dependent on the conductivity and dielec-
tric constant of the Earth in such a complex manner as to make the received
power more nearly a function of the inverse fourth power of the distance, rather

TABLE 4.1 Typical ground-wave attenuation values (dB)ᵃ

                       Over Landᵇ                 Over Seawaterᶜ
Frequency (kHz)    100 s. mi   1000 s. mi     100 s. mi   1000 s. mi
10                    37           63            37           62
100                   58           99            57           92
500                   87          195            71          125
1000                 110          245            79          145
2000                 132                         86          165

ᵃLossless isotropic antennas 30 ft above the surface, vertical polarization.
ᵇPastoral land: σ = 0.005 mho/m, ε = 15.
ᶜSeawater: σ = 5 mho/m, ε = 80.
Source: Reference [35].

than the inverse square, as would occur in free space. In addition, further losses,
increasing with frequency, are encountered. Table 4.1 gives some typical exam-
ples. At low frequencies, ranges up to 5000 miles or more are obtainable, if suf-
ficient power can be generated to overcome atmospheric noise, path attenuation
and to compensate for low antenna efficiencies [42].
Second, at low frequencies, it is physically difficult to construct a vertical
transmitting antenna large enough to be half a wavelength (or its electrical
equivalent, i.e., a quarter-wave antenna above a perfectly conducting plane).
Therefore, the antenna is generally much shorter than the ideal and is res-
onated to the operating frequency by external series inductance of the lowest
possible losses. The result, despite the best engineering practice, is the radiation
of considerably less power over a very narrow bandwidth than that generated
by the transmitter. Nevertheless, ground-wave service is sufficiently attractive
for many applications so that low efficiencies are tolerated in some lower-fre-
quency applications.
Third, in most parts of the world and at most times of the year, atmospheric
noise at low frequencies is so much greater than receiver noise that additional
transmitter power must be used. This noise is generated mostly by lightning
flashes. As shown in Table 4.2, at the latitude of the United States, atmospheric

TABLE 4.2 Nighttime atmospheric noise in the United States: 10-kHz bandwidth

Frequency (kHz)    Noise (μV/m)
10                 2000
100                250
1000               20
10,000             0.8
100,000            Below receiver noise

noise power at the input to a receiver is typically a million times greater at


10 kHz than at 10 MHz (where it approximates typical receiver noise). At
the equator, atmospheric noise power is from 10 to 25 times greater than at U.S. latitudes, whereas at the poles it is about 100 times less. The additional
transmitter power required to overcome atmospheric noise levels produces one
redeeming effect: Since a larger receiving-antenna area picks up more atmo-
spheric noise along with more signal, no benefit is derived by making antennas
larger than necessary. In fact, to nominally balance the effects of internal and
external noises on the overall received signal, the typical antenna requires an
effective area of less than one square meter.
Fourth, a characteristic of ground waves is that their propagation velocity
is not entirely constant. While the variation is quite small (in percent), it is
sufficient to limit the ability to obtain fixes at extreme ranges as good as the
instrumentation might otherwise permit [17].
Last, despite all the handicaps listed, ground waves at low frequencies offer
the only long-range radio communication means to vehicles that are not depen-
dent on the ionosphere or airborne or satellite-borne relay stations. For this
reason, there is a worldwide demand for frequencies in the VLF and LF bands.
The optimum frequencies cover only approximately 100 kHz and are limited
at the low end by antenna efficiency and by atmospheric noise and at the high
end by poor propagation characteristics. Because of the long-range coverage,
almost every single station or system in the world requires a unique frequency.
In addition to the handicaps already listed, systems at these frequencies are
dependent on national policies of frequency assignment.

Sky Waves In a region lying between 50 and 500 km above the Earth's sur-
face, radiation from the sun produces a set of ionized layers called the iono-
sphere [38, 42]. The location and density of these layers depends on the time of
day and, to a lesser extent, the season and the 11-year sunspot cycle. The iono-
sphere acts as a refractive medium; when the refractive index is high enough
in relation to the frequency of a radio wave, it bends the radio wave and will,
under favorable conditions, return the wave back to Earth.
Figure 4.2 shows a simplified picture of the geometry involved. At A, the
radio wave strikes the refractive layer at too steep an angle and, although it is
bent, is not sufficiently affected to return to Earth; it continues out into space
(unless it encounters a more heavily ionized layer further out). At B, the radio
wave strikes the refractive layer at a more oblique angle, is bent sufficiently to
travel somewhat parallel to Earth, and is finally bent sufficiently to return to
Earth. At C, the wave arrives at the refractive layer with glancing incidence and
immediately returns to Earth. At D, the refractive index is too low in relation
to frequency to seriously deflect the radio wave, which then travels on out to
space; generally, this happens at frequencies above 30 MHz.
From this geometry it is evident that return to Earth occurs only at some
minimum distance for a given frequency and degree of ionization. This is called
the maximum usable frequency for that distance. Signals at higher frequencies,


Figure 4.2 Effects of ionosphere on radio waves.

if returned at all, will be returned only at greater distances. This critical distance
is known as skip distance; inside it there is no return to Earth at the particular
operating frequency. If more than one ionizing layer is present, there may be various skip distances for the same frequency.
At those frequencies and distances where ionospheric reflection occurs, the
attenuation of the radio signal is only that due to the spreading out of the power
over the surface of the Earth and is, consequently, proportional to distance.
Conversely, as indicated in the previous paragraphs, ground-wave attenuation
is very much greater, except at the lowest frequencies. At frequencies of around 1 MHz, the signal level produced at the receiver by the two types of transmis-
sions is likely to look like that shown in Figure 4.3.
As the frequency increases, the ground-wave curve will move to the left and
the sky-wave curve to the right, leaving a gap (due to skip distance) where nei-
ther wave produces a usable received signal. In the region where the ground
wave and sky wave are about equal, severe fading will occur due to the ran-
domly varying phase of the sky-wave signal with respect to the ground-wave
signal. Even when sky-wave signal strength is adequate, serious distortion of
its modulation may occur due to the different paths simultaneously traveled by
the signal between transmitter and receiver. These are called multipath effects.
The differential time delay between these paths may reach several milliseconds,
thus preventing faithful reproduction of modulation frequencies above a few
hundred Hertz.
Therefore, sky-wave transmission is quite variable, and its efficacy is highly
dependent on the distance to the receiver, the frequency used, and the time of
day. For these reasons, the general practice in the 3- to 30-MHz communica-
tion band has been to use receivers and transmitters that would readily tune
over the whole band and to change frequencies from hour to hour, depending
on the distances required and on the condition of the ionosphere. Much work
Figure 4.3 Ground-wave and sky-wave attenuation of radio signals (signal level versus distance from transmitter).

has gone into the creation of charts predicting maximum usable frequencies and
skip distances [42]. More recent developments include propagation-frequency
evaluators that quickly evaluate (by frequency-scanning techniques) the best
path to be used for communication between two points at a given time. By
use of such techniques, ionospheric reflection has been a major long-distance
communication aid. Until the advent of wideband submarine cables and com-
munication satellites, these frequencies were the mainstay of the transoceanic
telephone and radio networks. The highly variable characteristics of the iono-
sphere, which cause different frequencies to travel by different paths, led to the
development of many ingenious steerable antenna systems for this service. The
use of ionospheric reflection for navigation systems has been confined almost
exclusively to ground-based direction finders.
Conversely, sky-wave transmission is considered a handicap, rather than an
aid, to those navigation systems that depend on groundwaves. In such systems,
the almost direct, reasonably predictable ground wave is contaminated by sky-
wave energy that has arrived by a devious path. The mixture of the two often
produces serious errors not only in distance but also in bearing measurements,
since the effective reflecting point is not necessarily on the vertical plane join-
ing the transmitting and receiving stations. Methods for reducing such sky-wave
contamination include (1) the use of tall antenna structures for improved ver-
tical directivity resulting in transmission fields being concentrated along the
ground and less toward the sky, and (2) the use of only the leading edge of
pulse transmission, since this edge arrives sooner by ground wave than by sky
wave and is, therefore, uncontaminated (this usually requires greater bandwidth
than that required by the information rate).

Line-of-Sight Waves Above approximately 30 MHz, propagation follows the


free-space laws listed in Section 4.2, modified by the reflecting effects of vari-
ous objects on Earth. In general, the transmission path is predictable, and the
wavelengths are so short as to readily permit almost any desired antenna struc-
ture; engineering for a given performance is consequently relatively straight-
forward. Some anomalous sky-wave effects occasionally occur up to 100 MHz,

TABLE 4.3 Attenuation (loss) versus frequency due to fog

                          Visible Distance
Loss (dB/m)      100 ft    200 ft    500 ft    1000 ft
10⁻³               20
10⁻⁴                7        12        20
10⁻⁵                          4         7        12
10⁻⁶                                               3

Note: Table entries are frequency in GHz.

but from approximately 100 MHz to 3 GHz, the transmission path is highly pre-
dictable and is unaffected by time of day, season, precipitation, or atmospher-
ics. Above 3 GHz, absorption and scattering by precipitation and by the atmo-
sphere begin to be noticed, and they become limiting factors above 10 GHz.
Furthermore, above that frequency, atmospheric absorption does not increase in
a smooth manner but rather is characterized by narrow peak-absorption bands
and by narrow "windows" of relatively reduced absorption. Tables 4.3 and
4.4 [14] show attenuation effects due to fog and rain (in addition to free-
space loss).
Because of absorption above 10 GHz, transmission at such frequencies is
severely limited within the Earth's atmosphere. High-flying aircraft and space
vehicles of course are under no such restrictions.
In designing antenna systems for line-of-sight frequencies, it often happens
that due to the relatively short wavelengths, the antenna is spaced away from a
reflecting object such as the ground by a critical number of wavelengths, which
has a marked effect on the overall antenna pattern. For instance, at 1 GHz, the
wavelength is about 1 ft. If this practice were used at 1 GHz, a quarter-wave
structure might be built with its base on the ground. Since even nearby blades
of grass would seriously mar its performance (not to mention persons and vehi-
cles moving about nearby), this would obviously be impractical. Instead, such
an antenna would likely be mounted on a pole, say, 10 ft high, so as to clear the

TABLE 4.4 Attenuation (loss) versus frequency due to rain

                 Heavy         Moderate      Light        Drizzle
Loss (dB/m)      (16 mm/hr)    (4 mm/hr)     (1 mm/hr)    (0.25 mm/hr)
10⁻³             15            37            100
10⁻⁴              7            12             20          43
10⁻⁵              3             6              9          20
10⁻⁶                            3              4           8
10⁻⁷                                                       4

Note: Table entries are frequency in GHz.



Figure 4.4 Vertical reflection paths.

immediately surrounding structures. A reflection phenomenon, lacking at lower


frequencies, would now be encountered, as shown in Figure 4.4.
A receiver, at a point in space, now receives a direct ray from the trans-
mitter and a reflected ray from the ground. Because of the short wavelength,
the path difference is sufficient to cause addition or cancellation (for perfect
ground reflection) as the receiver moves up and down in elevation. The result-
ing vertical pattern (assuming perfect ground reflection and with an exagger-
ated scale for clarity) is shown in Figure 4.5. Deep nulls, of virtually zero signal
strength, are produced at those vertical angles at which the direct wave path and
the reflected wave path differ by exactly an odd multiple of half-wavelengths.
Maxima of signal strength occur where the two path lengths produce in-phase
signals. The number of nulls per vertical degree of elevation increases with the
height of the antenna and with frequency.
Figure 4.5 Vertical lobing.

Such deep nulls, of course, occur only if the ground is smooth and perfectly reflecting. However, even when this is not the case, the null structure can produce serious loss of signal. A number of corrective steps can be used to reduce vertical nulls:

1. Raising the antenna high enough above the ground, in relation to frequency, to make the null structure so fine that the slightest irregularities in the reflecting surface will break up the null pattern. At heights of 100 wavelengths or more, the problem can usually be ignored.
2. Placing a horizontal counterpoise immediately below the antenna so as
to make its effective height quite small. The counterpoise shortens the
path of the reflected ray, thereby raising the angle at which cancellation
occurs. To be effective, such a counterpoise must be many wavelengths
in diameter.
3. Using high vertical antenna directivity (either by antenna arrays or by
parabolic reflectors) and then pointing the resulting narrow antenna beam
slightly above the horizon. The uptilt reduces the energy striking the
ground and, therefore, reduces the reflected wave.
4. Making the null on one frequency occur at the same time that a maximum
is occurring on another by using frequency diversity or wide-spectrum
modulation which allows several frequencies to be used simultaneously
[1].
5. Introducing vertical diversity up to an appreciable vertical angle via two
antenna systems, one at half the height of the other with the null of one
corresponding to the maximum of the other. However, the frequencies fed
to these two antenna systems cannot be coherent; otherwise a new null
pattern will appear at angles where the signals are otherwise equal. This
method is, in general, limited to receiving systems where two separate
receivers can be used.
6. Attempting other forms of diversity that make use of two or more paths
simultaneously. The term "diversity" in radio propagation refers to this
use of paths with different frequencies, polarizations, and so on in order
to make reflections occur at different points on each path.
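The null structure described above can be computed directly. The following Python sketch (not from the book) assumes a flat, perfectly reflecting ground and the text's null condition (direct and reflected paths differing by an odd multiple of a half wavelength); the antenna height and frequency are illustrative assumptions.

    # Elevation angles of the vertical-lobing nulls for an antenna at height h.
    import math

    f = 1.0e9                      # 1 GHz (assumed)
    h = 10.0 * 0.3048              # 10-ft antenna height (assumed), in meters
    lam = 3.0e8 / f                # wavelength, m

    null_angles = []
    n = 0
    while True:
        s = (2 * n + 1) * lam / (4.0 * h)   # sin(theta) at the n-th null, from 2*h*sin(theta) = (2n+1)*lam/2
        if s > 1.0:
            break
        null_angles.append(math.degrees(math.asin(s)))
        n += 1

    print(f"h = {h:.2f} m, wavelength = {lam:.2f} m")
    print("null elevation angles (deg):",
          ", ".join(f"{a:.1f}" for a in null_angles))
    # Raising the antenna (larger h) packs the nulls more closely in angle,
    # which is why corrective step 1 works.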

Line-of-sight systems on Earth are, of course, subject to the limitations of the horizon. The maximum range that can be obtained is illustrated in Figure 4.6. Beyond the line of sight, signal strength at these frequencies drops off almost as suddenly as does visible light when passing from day to night. Very large powers and antenna gains are, therefore, needed to produce significant performance beyond the line of sight, and such systems have not been found to be of much value in aircraft communications and navigation systems.

Figure 4.6 Line-of-sight range (R in nautical miles, h in feet).
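A minimal sketch of the line-of-sight range suggested by Figure 4.6, using the common 4/3-earth rule of thumb R (nmi) ≈ 1.23 √h (ft); this standard approximation is an assumption here, not necessarily the exact relation plotted in the figure.

    import math

    def radio_horizon_nmi(h_ft: float) -> float:
        """Approximate radio horizon (nautical miles) for an antenna h_ft above the surface."""
        return 1.23 * math.sqrt(h_ft)

    for h in (50, 1_000, 10_000, 35_000):          # antenna/aircraft heights in feet
        print(f"h = {h:>6} ft  ->  R ~ {radio_horizon_nmi(h):6.1f} nmi")
    # For two elevated terminals, the individual horizon ranges add:
    # R_total ~ 1.23 * (sqrt(h1) + sqrt(h2)).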

4.3 SYSTEM DESIGN CONSIDERATIONS

4.3.1 Radio-Navigation System Types


From the beginning use of radio, the known (nearly constant) speed of radio-wave propagation, coupled with an accurate measurement of time, has led to the use of radio for the measurement of distance. Furthermore, with a measurement of the differential distance of two receiving antennas from one transmitter, the direction of the transmitter can be determined. In some systems, the mea-
surement of time (and hence distance) and the angular measurement of direction
are combined to determine user position, as discussed later in this section.
In Figure 4.7, two receivers (A and B) are arranged on a known baseline,
which is assumed to be short with respect to the distance to the transmitter,
so that the transmission paths to the two receivers can be considered parallel.
A right triangle ABC may then be constructed and 8 readily calculated if r is
known. The value of r is found by comparing the outputs of the two receivers,
noting the time delay between the arrival of identical parts of the signal, and
dividing this time delay by the known speed of propagation. Such an arrange-
ment is commonly called a direction-finder and is a widely used radio-navi-
gation aid at all frequencies. (In many practical systems, the baseline AB is
physically rotated until the delay is zero; the direction of arrival is then on the
perpendicular bisector of AB.)
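The Figure 4.7 geometry can be exercised numerically. In the Python sketch below (not from the book), θ is measured from the perpendicular bisector of the baseline, so the delay is Δt = d sin θ / c and θ = arcsin(r/d); the baseline length, arrival angle, and timing resolution are illustrative assumptions.

    import math

    c = 299_792_458.0            # speed of propagation, m/s
    d = 30.0                     # baseline AB, m (assumed)

    true_theta = math.radians(25.0)            # assumed direction of arrival
    dt = d * math.sin(true_theta) / c          # time delay between A and B
    r = c * dt                                 # differential distance
    theta_est = math.degrees(math.asin(r / d))
    print(f"delay = {dt*1e9:.2f} ns, r = {r:.3f} m, estimated bearing = {theta_est:.2f} deg")

    # Sensitivity to timing error: a 1-ns measurement error shifts the bearing by
    d_err = c * 1e-9                           # 1 ns -> ~0.3 m of differential distance
    theta_err = math.degrees(math.asin(min(1.0, (r + d_err) / d))) - theta_est
    print(f"bearing shift for a 1-ns timing error: ~{theta_err:.2f} deg")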
Some of the more common position determination methods are shown in
Figure 4.8. With knowledge of distance (rho) or direction (theta) to a ground
station, lines of position (LOPs) may be plotted on a map. The LOP of constant
direction is a radial from the station; the LOP of constant distance is a circle
centered on the station. The intersection of two LOPs provides a fix. It will be
seen that the greatest geometrical accuracy occurs when LOPs cross at right
angles.

Figure 4.7 Direction-finder principle (receivers A and B on a known baseline; r is the differential distance corresponding to the known delay between arrival of the signal at A and B).
Figure 4.8 Common geometric position-fixing schemes: (a) rho-theta (ρ-θ); (b) theta-theta (θ-θ); (c) rho-rho (ρ-ρ); (d) hyperbolic (BD - AD = k on any one line; three pairs of stations provide a fix).

Rho-theta systems provide a unique fix from a single station, and the LOPs
always cross at right angles. Theta-theta systems provide a unique fix from
two stations. The geometric accuracy is highest when the lines cross at right
angles and is poor on a line connecting the stations. Rho-rho systems provide
an ambiguous fix from two stations and a unique fix from three stations. Geo-
metric accuracy is greatest within the triangle formed by the three stations and
gradually decreases as the vehicle moves outside and away from the triangle.
The hyperbolic system uses LOPs that each define a constant difference in
distance to two stations. Such systems operate under conditions where the deter-
mination of absolute distance to the station is impractical. Three pairs of stations
are needed for a unique fix; however, for many practical applications, two pairs
suffice. Geometric accuracy is very much a function of the relative station loca-
tions. Poor geometry leads to a property frequently called geometric dilution of
precision (GDOP).
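The geometric-dilution effect for a theta-theta fix can be demonstrated with a short Python sketch (not from the book): the fix is the intersection of two bearing lines, and the same small bearing error produces a far larger position error when the aircraft lies nearly on the line connecting the stations. Station positions, targets, and the 0.5° bearing error are illustrative assumptions.

    import math

    def theta_theta_fix(s1, b1, s2, b2):
        """Intersect two bearing lines (bearings in radians from the +x axis)."""
        (x1, y1), (x2, y2) = s1, s2
        det = math.sin(b1 - b2)                      # parallel lines -> det = 0
        if abs(det) < 1e-12:
            raise ValueError("bearing lines are parallel: no fix")
        t1 = (-(x2 - x1) * math.sin(b2) + (y2 - y1) * math.cos(b2)) / det
        return (x1 + t1 * math.cos(b1), y1 + t1 * math.sin(b1))

    def bearing(station, point):
        return math.atan2(point[1] - station[1], point[0] - station[0])

    s1, s2 = (0.0, 0.0), (100.0, 0.0)                # two stations 100 km apart (assumed)
    err = math.radians(0.5)                          # 0.5-deg bearing error on station 1

    for target in [(50.0, 80.0), (300.0, 40.0)]:     # good geometry vs. nearly in line
        fix = theta_theta_fix(s1, bearing(s1, target) + err, s2, bearing(s2, target))
        miss = math.hypot(fix[0] - target[0], fix[1] - target[1])
        print(f"target {target}: miss for a 0.5-deg bearing error ~ {miss:.1f} km")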
Another method, called pseudoranging, has been developed and is used in
certain modern radio navigation systems, such as GPS (Chapter 5) and one

mode of JTIDS-RelNav (Chapter 6). In this method the user receiver and the
reference station(s) are assumed not to be synchronized in time. By measur-
ing several (in general, at least four) such pseudoranges (versus true ranges
when time synchronization does exist), the user's three-dimensional position
and its time offset (from the transmitter or system time) can be determined
(Section 2.5). Three such pseudoranges are sufficient to determine the user's
two-dimensional position (provided that the user's altitude is known). Finally,
the term direct ranging (rho-rho and rho-rho-rho) has been applied to a hyper-
bolic system used in a true ranging mode by achieving some form of time syn-
chronization (see Sections 4.5.1 and 4.5.2).
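A minimal Python sketch (not from the book) of two-dimensional pseudoranging: three stations, three pseudoranges sharing one unknown receiver clock offset, solved by iterated linearized least squares for position and clock bias. The station layout, user position, and clock offset are illustrative assumptions.

    import numpy as np

    stations = np.array([[0.0, 0.0], [200.0, 0.0], [0.0, 200.0]])   # station coordinates, km (assumed)
    truth = np.array([70.0, 90.0])        # true user position, km (assumed)
    bias = 12.0                           # receiver clock offset expressed as c*dt, km (assumed)

    pseudoranges = np.linalg.norm(stations - truth, axis=1) + bias

    est = np.array([50.0, 50.0, 0.0])     # initial guess: x, y, c*dt
    for _ in range(10):                   # Gauss-Newton iterations
        ranges = np.linalg.norm(stations - est[:2], axis=1)
        residual = pseudoranges - (ranges + est[2])
        # Jacobian rows: d(pseudorange)/d(x, y, c*dt) = [-(sx - x)/r, -(sy - y)/r, 1]
        H = np.column_stack([-(stations - est[:2]) / ranges[:, None],
                             np.ones(len(stations))])
        est = est + np.linalg.lstsq(H, residual, rcond=None)[0]

    print("estimated position (km):", np.round(est[:2], 3))          # ~[70, 90]
    print("estimated clock bias c*dt (km):", round(float(est[2]), 3))  # ~12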
The measurement of distance and direction by radio gives accurate results
only if the radio path between the points being measured is direct and the propa-
gation speed is known. In practice, the path between a transmitter and a receiver
on the Earth's surface may be quite devious and may, in fact, be a combination
of many paths. The problems of reducing such multipath transmissions and of
recognizing the direct path are major reasons for the multiplicity of radio nav-
igation systems that have been proposed or are in use.

4.3.2 System Performance Parameters


The utility of a radio-navigation system to a user is reflected by at least the fol-
lowing factors: accuracy, coverage, availability, integrity, ambiguity, and capac-
ity [48]. These factors may be summarized as follows:

1. Accuracy. The accuracy of a radio-navigation system is a statistical mea-


sure of performance and is typically given as a root-mean-square (rms)
measurement of its position error over some time interval. Many rms
errors are expressed in two dimensions; however, some systems (e.g.,
GPS) can provide three-dimensional rms errors. System accuracy mea-
sures are defined in Chapter 2. The major factors affecting radio-naviga-
tion system accuracy are as follows:
• Absolute error. The absolute or predictable error is the error in deter-
mining position relative to an Earth-referenced coordinate system. The
Earth frame [5] is one such coordinate system that has its origin at the
Earth's center of mass and its axes fixed in the Earth.
• Repeatable errors. Repeatability reflects the accuracy with which a user
can return to a position whose coordinates have been measured at a
previous time with the same radio-navigation system.
• Relative errors. This is the user's error with respect to a known point or
to another user of the same radio-navigation system at the same time.
The latter may also be expressed as a function of the distance between
the two users.
• Differential errors. These are the residual errors that remain after a user has applied the corrections broadcast by a differential reference station located in the general vicinity. The reference station's position is precisely known, and the position it calculates from normal system signals is compared with that known position to derive the corrections. Depending on the distance between the user and
the reference station, common errors, such as errors due to propagation
effects, are eliminated or greatly reduced, resulting in a differential error
that is much smaller than the absolute error. (Differential operation is
widely used in such radio navigation systems as Loran-C, Omega and
GPS.)
• Propagation effects. Radio-navigation system accuracy is affected by
the transmission of the radio signal through the atmosphere. Error
sources include reflection and refraction from the ionosphere and tro-
posphere, variations in the conductivity of the Earth, anisotropic sig-
nal propagation, and the like. Also, the transmitter to receiver signal
can simultaneously follow more than one path giving rise to multipath
effects.
• Instrumentation errors. Instrumentation errors are the errors introduced
by the radio and display equipment. These may include errors due to
receiver noise, time-of-arrival (TOA) measurement circuits or angular
measurement circuits, readout tolerance, and display resolution.
• Geometry effects. Geometry effects are typically expressed by a quan-
tity called the geometric dilution of precision (GDOP). GDOP maps the
basic (range or angle) measurements into position error and depends
solely on the user-to-system geometry (see also Chapters 5 and 6, and
Sections 4.5.1 and 4.5.2).
2. Coverage. The coverage area served by a radio navigation system is
defined in terms of the specified performance of the system. In general,
the coverage limit is defined by a requirement that a navigation receiver
be able to acquire the radio navigation signal as well as use it and that the
navigation solution meets a specific accuracy at a specified signal-to-noise
ratio (S/N) value.
3. Availability. Radio-navigation system availability is the probability that
the system is available for navigation by a user. In the United States,
navigation system availabilities below 99.7% will not meet the require-
ments of the Federal Radio Navigation Plan [48] for safety of navigation
purposes.
4. Integrity. Integrity in a radio-navigation system is the ability of the system
to provide the user with warnings when it should not be used for naviga-
tion. For example, VOR and Tacan perform integrity monitoring using an
independent receiver. When an out-of-tolerance condition is detected by
the integrity monitors, the VOR and Tacan receivers remove their signal
from use within ten seconds of this detection. Systems like GPS use a
variety of integrity monitoring methods (see Chapter 5).
5. Ambiguity. System ambiguity exists when the radio-navigation system

identifies two or more possible positions of the vehicle, with the same
set of measurements, with no indication of which is the most nearly cor-
rect position. (This is not a problem with Loran-C, since the ambiguous
fix is a great distance from the desired fix. Ambiguous lines of position
(LOP) occur in the Omega system, since there is no means to identify
particular points of constant phase (lanes) that recur throughout the coverage area. Because of this ambiguity, Omega receivers must be initialized to a known position, and the lanes counted as they are crossed.)
6. Capacity. Capacity is the number of users that the radio-navigation system
can accommodate simultaneously. For example, there is no restriction on
the number of receivers that may use Loran-C, Omega, or a VOR station
simultaneously; on the other hand, DME and Tacan are currently limited
to about 110 users for traffic handling.

Therefore, in considering overall accuracy, most systems must be compared on the probability of a certain accuracy being obtained under specified conditions of use. In the final analysis, such a probability must be a mixture of many probabilities, including the probability that the user will read the instrument correctly (typically called flight technical error) and the probability that the entire radio equipment is functioning properly. In this latter connection the fail-safe concept is generally strived for, on the theory that no information is preferable to false information. Alternatively, redundancy may be implemented, with the outputs of several systems compared and the most likely selected either by human or automatic means.

4.4 POINT SOURCE SYSTEMS [5, 15, 16, 17]

4.4.1 Direction-Finding
Direction-finding represents the earliest use of radio for navigational purposes;
it continues to perform a useful function, particularly in those parts of the world
that have not yet adopted the more specialized navigation aids. Its chief attrac-
tion lies in the fact that, with the proper receiving equipment, the direction
of a transmitter can be found. Such transmitters do not necessarily have to be
specially designed for direction-finding; they can be broadcast stations, com-
munication stations, navigation stations, or any other kind of radiating system.
The chief drawback of direction-finding is that quite elaborate receiving
equipment must be used if the best accuracy is to be obtained. Most aircraft
are unable to accommodate such equipment. Direction-finders for aircraft nav-
igation may, therefore, be grouped into two broad classes:

1. Ground-based direction-finders. These take bearings on airborne trans-


mitters and then advise the aircraft of its bearing from the ground sta-
tion. Such stations can afford the necessary complex equipment, but the

operation is cumbersome and time-consuming, and requires an airborne


transmitter and communication link.
2. Airborne direction-finders and homing adapters. These take bearings on
ground transmitters. These direction-finders typically can afford only the
simplest of systems and must, therefore, tolerate large errors. However,
even the largest bearing errors will not prevent an aircraft from homing
in on the source of that bearing, though not necessarily by the most direct
route. Direction-finding therefore continues to be used as a backup aid to
more accurate systems.

With simple antenna systems, reliable directional information can be


obtained only from ground waves or from line-of-sight waves. In the low-
and medium-frequency bands, ground waves are useful to hundreds of miles;
however, they are subject to sky-wave contamination (especially at night) at
much shorter distances. This night effect is now recognized by users of low-
and medium-frequency direction-finders. Reliance is placed on the readings
only when the direction-finder is close enough to the station to be within good
ground-wave coverage (typically 200 mi at 200 kHz and 50 mi at 1600 kHz).
During thunderstorms, these distances are further reduced.
The state-of-the-art has progressed through the following stages:

1. Fixed loop. Intended for flying radial courses to and from the ground
station by orienting the aircraft for minimum signal
2. Rotatable loop. Hand-operated systems that were abandoned because of
the work load they imposed on the pilot
3. Rotating loop. Driven by a motor and forming part of a servo system
that automatically rotates the loop until a null is found and then stops,
sometimes referred to as a radio compass. Early loops were about nine
inches in diameter and were housed in teardrop-shaped plastic enclosures
about one foot away from the aircraft skin.
4. Fixed, crossed loops, with a motor-driven goniometer. Forming part of a
servo system that automatically displays bearing in the cockpit. The prime
advantage of this system over those using the physically rotating loop is
that all moving parts (except the indicator) are in the radio-receiver box.
Antenna projection from the aircraft with such a system is as low as one
inch, with horizontal dimensions of about one foot. Typical airline-type
equipment weighs less than 20 lb.

Loop Antenna Direction-Finder Principles. This type of receiver is no longer


in production, but its basic principles still apply to the current generation of
equipment. The basic principle of direction finding is the measurement of dif-
ferential distance to a transmitter from two or more known points. To reduce
instrument errors, it is desirable to use common circuitry at both of the measur-

Figure 4.9 Direction-finding loop (direction of the transmitter shown relative to the plane of the loop).

ing points. Furthermore, for operational convenience, it is desirable to have the


two points close together. The loop antenna fulfills both of these requirements
admirably.
The loop antenna in Figure 4.9 is a rectangular loop of wire whose inductance
is resonated by a variable capacitor to the frequency to be received. The signal
is assumed to be vertically polarized, and, consequently, it induces voltage in
the arms AB and CD of the loop. If the loop were constructed accurately, these
currents would be equal in amplitude and phase when the plane of the loop is
exactly 90° to the direction of arrival of the signal. This is referred to as the
null position of the loop (zero loop current). Physically rotating the loop to the
null position indicates the direction to the transmitting station (i.e., the station
is 90° from the plane of the loop).
If the correct amplitudes and phases are maintained, the horizontal antenna
pattern is a figure of eight as shown in Figure 4.10 by circles A and B. This
pattern has two null positions 180° apart. This ambiguity will cause the antenna
system to give the same indication whether it is pointing toward a station or

Figure 4.10 Loop and sense antenna patterns: A = left-hand loop pattern; B = right-hand loop pattern, 180° out of phase with A; C = omnidirectional pattern, 180° out of phase with A; D = C + B - A.

away from it. A sense antenna can be added when the signal ambiguity must
be resolved. The sense antenna adds an additional 90° phase shift. As the loop
changes direction, its phase will vary with respect to the constant sense antenna
voltage resulting in the cardioid pattern shown in Figure 4.10. The combined
pattern has only one null position. Since the omnidirectional antenna, and its
phase and amplitude relation to the loop, are less precisely definable than the
loop itself, it is customary to use the loop alone for precise directional measure-
ment. The sense of the bearing can then be determined by coupling the vertical
or sense antenna to the loop and rotating the loop 90° in a specified direction,
noting whether the signal increases or decreases as the loop antenna is rotated.
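The loop-plus-sense combination can be sketched numerically (a Python sketch, not from the book): the loop alone has a cos(bearing) figure-eight response with two nulls 180° apart, while adding an equal-amplitude sense voltage in the proper phase yields a cardioid with a single null. The equal-amplitude assumption and the 5° sampling step are illustrative.

    import math

    def loop_response(bearing_deg):
        """Figure-eight: the sign carries the phase reversal between the two lobes."""
        return math.cos(math.radians(bearing_deg))

    def combined_response(bearing_deg):
        """Cardioid: loop plus equal-amplitude sense-antenna voltage."""
        return 1.0 + loop_response(bearing_deg)

    nulls_loop = [b for b in range(0, 360, 5) if abs(loop_response(b)) < 1e-9]
    nulls_combined = [b for b in range(0, 360, 5) if abs(combined_response(b)) < 1e-9]
    print("loop-alone nulls (deg)  :", nulls_loop)        # [90, 270] -> ambiguous
    print("loop + sense nulls (deg):", nulls_combined)    # [180]     -> unambiguous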

Goniometer Direction-Finder Principles It is not strictly necessary to rotate


the loop mechanically in order to find the null. Instead, two fixed loops may
be placed at right angles to each other and their electrical outputs combined in
a goniometer. The goniometer has two sets of fixed windings at right angles
to each other, each set connected to one loop. In the magnetic field of these
windings is a rotor, with a winding connected to the receiver. The goniometer, in
effect, translates the received radio field at the loops into a miniature magnetic
field in which the rotor can operate. The nonrotating antenna can be attached
to the skin of the aircraft, thereby reducing drag and increasing reliability. The
accuracy of these systems is of the order of 2°, exclusive of errors induced
by aircraft structure. These errors are of considerable magnitude, except in the
fore-and-aft directions, where aircraft symmetry helps to minimize them. Some
of these errors can be calibrated out for a given airframe, but this calibration is
correct only for one frequency and/or condition of pitch and roll. As a result,
low- and medium-frequency airborne direction-finders that use ground waves
cannot produce reliable results better than ±5°. Sky-wave contamination can
raise this figure to 30° or more.

Airborne VHF /UHF Direction-Finder Systems Civil-aviation communica-


tion over land is conducted chiefly in the 118- to 156-MHz band, whereas the
military use the 225- to 400-MHz band. Since virtually every aircraft carries a
receiver for one or the other of these bands, a direction-finding "attachment"
for these frequencies could be of interest to many aircraft operators. Consider-
able work was done during World War II in both of these bands, much of it
involving variations on the Adcock principle. However, it was apparent that,
in order to avoid prohibitively high site errors, the antennas would have to be
of large aperture and project into the slipstream, thus generating drag. All that
survives from these efforts is some VHF equipment used by the Coast Guard
for air-sea rescue on the 121.5 MHz distress frequency and a military direction-
finding attachment in the 225- to 400-MHz communication band on the distress
frequency of 343 MHz. This equipment is of two possible types, depending on
whether it is designed strictly for homing or for both homing and direction-
finding. Equipment designed only for homing may use a fixed-antenna sys-
tem that generates two sequentially switched cardioid patterns whose equisig-

nal crossover direction is found by turning the aircraft toward the transmitting
station. Equipment designed for both direction finding and homing uses a rotat-
ing antenna that generates a similar pair of cardioid patterns, whose equisignal
crossover direction is found. Accuracy is about 5° along the axis of the aircraft
but reaches 30° broadside. The direction-finding attachment is carried by many
U.S. military aircraft and is useful for air-to-air direction finding and homing
during rendezvous and refueling. It is also of value in locating downed flyers
who carry small UHF rescue beacons.

4.4.2 Nondirectional Beacons


The universal use of low- and medium-frequency airborne direction-finders in
commercial aircraft has prompted the installation of special transmitters whose
sole function is to act as omnidirectional transmitters for such direction-find-
ers. Aircraft use radio beacons to aid in finding the initial approach point of
an instrument landing system as well as for nonprecision approaches at low-
traffic airports without convenient nonprecision or precision approach systems
(Chapter 13). Operating in the 200- to 1600-kHz bands, they have output power
ranging from as low as 20 watts up to several kilowatts. Modern designs are
100% solid state.
Nondirectional beacons are connected to a single vertical antenna and pro-
duce a vertical pattern, as shown in Figure 4.11. In addition to the directional
information given to direction-finders some distance away, such beacons have
another useful property; namely, there is a sharp reduction in signal strength
as the aircraft flies directly over the beacon, thereby providing a specifically
defined fix. The accuracy of the fix produced by this "cone of silence" is
somewhat dependent on the airborne antenna. It is improved if the airborne-
antenna pattern contains a null in the downward direction and is degraded if
the airborne-antenna null is off to the side.
Generally, purity of signal is obtained only from ground waves uncontam-
inated by sky waves; even in the absence of skywaves, considerable trouble
has been experienced with the ground wave in terrain of nonuniform character,
particularly near mountains. These two drawbacks (night effect and mountain
effect) limit the usefulness of nondirectional beacons. They have retained their

Figure 4.11 Nondirectional beacon, vertical pattern (showing the cone of silence directly above the antenna).

popularity because (1) they are inexpensive, (2) they are omnidirectional, and
(3) they place responsibility for accuracy entirely on the airborne receiver.
Nondirectional beacons are probably the least expensive way by which a
government can claim that it has equipped its airways with "radio aids to nav-
igation." In 1996, many thousands were in service around the world, and the
United States maintains approximately 177,000 nondirectional beacons for civil
aviation use. This number is expected to increase by about 7000 a year for the
next ten years [48].

4.4.3 Marker Beacons


Aside from the null measured by flying directly over the station, all the facilities
so far described provide only directional information to the aircraft. To provide
better fixes along the airways, so-called marker beacons were developed. These
marker beacons all operate at 75 MHz and radiate a narrow pattern upward from
the ground, with little horizontal strength, so that interference between marker
beacons is negligible. Each beacon generates a fan-shaped pattern, the axis of
the fan being at right angles to the airway, as shown in Figure 4.12. The antenna
pattern can be generated by an array of the type shown in Figure 4.13.
Four horizontal half-wave radiators are arranged in line with the airway and
are fed so that their radio-frequency currents are all in phase. At right angles to
their own axis, they generate a narrow vertical beam. A wire-mesh counterpoise
below this array reinforces the upward beam. By placing the counterpoise a few
feet above ground, the effects of vegetation and snow are reduced.
The transmitter is crystal-controlled and delivers up to 100 w. It is tone mod-
ulated, with its identity in Morse code indicated by gaps in the tone. The air-
borne equipment is a crystal-controlled superheterodyne receiver, with its out-
put supplying the cockpit audio system and an indicator lamp. Automatic gain
control is required to prevent saturation when the aircraft is passing directly
over the marker station at low altitude. Some receivers also provide a high-low
switch to increase the receiver sensitivity at higher altitudes [6].
The airborne antenna comprises a quarter-wave element on the bottom of
the airplane, parallel to the axis of flight and as far aft as possible. Its own
pattern consequently has directivity downward, increasing the directivity of the
fan marker when crossing it. On high-speed aircraft, the antenna is foreshortened

Figure 4.12 Fan-marker pattern, viewed from above (the fan axis is at right angles to the airway leading to the four-course range, VOR, or ILS).

Figure 4.13 Fan-marker antenna (antenna patterns along and across the airway).

by capacitive loading, recessed into the aircraft, and covered by a dielec-
tric sheet. Streamlined antenna packages are 3½ by 6 in., weighing 18 oz. The
marker beacons are gradually being phased out as an en-route aid in view of the
implementation of area-coverage systems, such as VOR/DME RNAV, Loran-C,
and GPS. However, along instrument landing approaches, the 75 MHz marker
remains a standard piece of equipment (Chapter 13).

4.4.4 VHF Omnidirectional Range (VOR)


Since the VHF band was being adopted for voice communication, it was only
natural to consider the combination of communication and navigation within
one band. Some early schemes involved VHF two- and four-course ranges.
However, by 1946, the VHF Omnidirectional Range (VOR) had become the
U.S. standard and was later adopted by the International Civil Aviation
Organization (ICAO) as an international standard.
The VOR [5, 17] operates in the 108- to 118-MHz band, with channels
spaced 100 kHz apart. As soon as present low-selectivity airborne receivers
(mostly used by small aircraft) can be served by better channel arrangement,
the number of available channels will be doubled by spacing them 50 kHz apart.
The principle of operation is simple and straightforward. The ground station
radiates a cardioid pattern that rotates at 30 rps, generating a 30-Hz sine wave
at the airborne receiver. The ground station also radiates an omnidirectional
signal, which is frequency modulated with a fixed 30-Hz reference tone. The
phase difference between the two 30-Hz tones varies directly with the bearing of
the aircraft. Since there is no sky-wave contamination at very high frequencies
and no interference from stations beyond the horizon, performance is relatively

consistent and is limited by only two major factors: (1) propagation effects,
including vertical pattern effects, and site and terrain errors, and (2) instrument
errors in reading 30-Hz phase differences in the airborne equipment.
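As an illustration of the phase-comparison principle (a sketch only, not a description of any particular receiver implementation), the following Python fragment extracts the 30-Hz phases of the variable and reference tones with a single-bin DFT and differences them. The sign convention, and the calibration that makes the two tones agree at magnetic north, are assumptions made here for simplicity.

```python
import numpy as np

def tone_phase_deg(x, freq_hz, fs):
    """Phase (degrees) of the sinusoidal component of x at freq_hz, using a
    single-bin discrete Fourier transform."""
    n = np.arange(len(x))
    c = np.sum(x * np.exp(-2j * np.pi * freq_hz * n / fs))
    return np.degrees(np.angle(c))

def vor_bearing_deg(variable_tone, reference_tone, fs):
    """Bearing estimate: phase of the 30-Hz variable (rotating-pattern) tone
    minus the phase of the 30-Hz reference tone, modulo 360 degrees."""
    return (tone_phase_deg(variable_tone, 30.0, fs)
            - tone_phase_deg(reference_tone, 30.0, fs)) % 360.0

# Synthetic check: shift the variable tone 135 degrees relative to the reference.
fs = 3000.0
t = np.arange(0.0, 1.0, 1.0 / fs)                 # one second of detected audio
reference = np.cos(2 * np.pi * 30.0 * t)
variable = np.cos(2 * np.pi * 30.0 * t + np.radians(135.0))
print(round(vor_bearing_deg(variable, reference, fs), 1))   # 135.0
```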

Transmitter Characteristics VOR adopted horizontal polarization, even
though aircraft VHF communication uses vertical polarization. Each radiator
in the ground station transmitter is an Alford loop. The Alford loop generates a
horizontally polarized signal having the same field pattern as a vertical dipole
[2] and is shown in Figure 4.14.
Four radiators are arranged in a square, whose plane is horizontal. Each radi-
ator is less than a quarter-wave long and is end loaded with capacity so as to
place the maximum current at the center of the radiators. At any one instant,
the currents in all the radiators are equal and alternately move clockwise and
counterclockwise. The result is a doughnut-shaped field pattern, with zero radi-
ation in the upward and downward directions, exactly like a vertical dipole (but
horizontally polarized). Four such loops generate a rotating figure eight.
The omnidirectional signal comprises a continuous wave that can be voice
modulated and carries Morse-code identity keying of a 1020-Hz tone. Present at
all times is another tone of 9960 Hz, which is sinusoidally varied (±480 Hz)
at a rate of 30 Hz. This is the bearing reference frequency. A simplified block
diagram is shown in Figure 4.15.
The transmitter is crystal-controlled, with a power output of 200 w. It is
amplitude modulated to a depth of 30% by the output of a mechanically driven
tone wheel, which has 332 teeth and is driven by an 1800-rpm motor. The
teeth are slightly staggered so as to impart a cyclical variation from 9480 to
10,440 Hz with the rotation of the motor. On the same shaft is a capacitive
goniometer, fed by the same transmitter via a modulation eliminator that strips
off the amplitude modulation by means of diodes. About one-quarter of the
applied power appears at the output of the modulation eliminator.
The goniometer feeds the unmodulated transmitter power first to the

Figure 4.14 Alford loop (feed point and antenna patterns, viewed from above and from the side).

Figure 4.15 VOR block diagram (four Alford loops fed via bridges from the goniometer and modulation eliminator; the continuous wave is modulated by the tone wheel, voice, and 1020-Hz identity tone, with the 9960-Hz subcarrier frequency modulated ±480 Hz at 30 Hz).

northwest-southeast pair of Alford loops and then, 90° later, to the northeast-
southwest pair of Alford loops. When combined with the modulated energy
applied simultaneously to all loops, this variation generates a rotating cardioid.
Each pair of loops is fed via a balanced bridge network. Each bridge has three
arms that are each about one-quarter wavelength long, the fourth arm being half
a wavelength longer. Energy fed into one corner of the bridge does not appear at
the diagonally opposite corner. The bridge, therefore, allows the mixing of two
signals and application of the result to two loads without the loads affecting
each other and without the signal sources affecting each other. The phasing
between tone wheel and goniometer and the physical placement of the Alford
loops are such that the two 30-Hz signals are exactly in phase when viewed
from magnetic north.
This seemingly elaborate arrangement serves two main purposes:

1. The division of power between the antennas is a function of passive ele-
ments only. The power of the transmitter can go up or down without
affecting bearing information.
2. The two 30-Hz signals are rigidly locked together by being derived from
a common rotating shaft. Motor-speed variation can alter their frequency
slightly, but their phase relationship will not change.

The four Alford loops are arranged in a tight square and then placed half
a wavelength above a metal-mesh counterpoise about 39 ft in diameter. This
counterpoise also acts as the roof of the transmitter house. The loops are pro-
tected from the weather by a plastic radome, often hemispherical in shape. If
a Tacan antenna is collocated with the VOR, the radome is conical in shape,
somewhat resembling an Indian tepee.

Receiver Characteristics The airborne equipment comprises a horizontally
polarized receiving antenna and a receiver. This receiver detects the 30-Hz
amplitude modulation produced by the rotating pattern and compares it with the
30-Hz frequency-modulated reference. The basic receiver functions are shown
in Figure 4.16.
At the output of the receiver is an amplitude-modulation detector. Its out-
put comprises (1) a 30-Hz tone produced by the rotating cardioid, (2) voice
modulation (if used at the transmitter), (3) a Morse-code-modulated 1020-Hz
identity tone, and (4) a 9960-Hz tone, frequency modulated (±480 Hz) by the
30-Hz reference tone. The voice frequencies and the identity tone are fed to the
aircraft's audio-distribution system. The 30-Hz information is filtered to remove
the other components and fed to the phase-comparison circuitry. The 9960-Hz
information is filtered out, limited (to remove the 30-Hz amplitude modulation),
and then applied to a frequency-modulation detector whose output is the 30-Hz

Figure 4.16 VOR receiver (108- to 118-MHz receiver with audio and bearing outputs).



reference frequency. After filtering, this is compared with the variable phase.
Several grades of receivers are currently in use.
The airline type of equipment uses a remotely tuned crystal-controlled super-
heterodyne receiver and has at least two types of display. One display compares
one 30-Hz sine wave with the other 30-Hz sine wave, the two signals being
brought into phase by a motor-driven phase shifter forming part of a servo loop.
The shaft position of this motor, therefore, displays bearing directly and may
be remoted by selsyns to other parts of the aircraft and to autopilots. Another
display shows (on a vertical left-right needle) the phase difference between
one 30-Hz signal and a manually phase-shifted 30-Hz signal representing the
desired bearing. The sensitivity of the vertical needle is usually arranged for
a full-scale deflection of ±10° around the manually selected bearing and thus
shows angular deviation from the desired track.
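The left-right display described above amounts to a bounded angular-error computation. A minimal sketch follows, assuming the ±10° full-scale sensitivity quoted above; the function name and normalization are illustrative only.

```python
def course_deviation(bearing_deg, selected_course_deg, full_scale_deg=10.0):
    """Normalized left-right needle deflection (-1.0 to +1.0) for an angular
    deviation display with the +/-10-degree full scale described above."""
    error = (bearing_deg - selected_course_deg + 180.0) % 360.0 - 180.0
    return max(-1.0, min(1.0, error / full_scale_deg))

# 7 degrees off the selected course gives 0.7 of full-scale deflection.
print(course_deviation(97.0, 90.0))
```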
The simplest types of receivers use manual tuning and only the left-right
type of display around a manually selected bearing. Both types of receivers are
commonly arranged to also receive the 108- to 112-MHz instrument-landing-
system localizer signals. Typical receivers weigh 20 lb for the airline type and
5 lb for the simplest type, exclusive of antenna. Over 200,000 airborne sets
have been installed, about half of them for light aircraft.
It was previously mentioned that one of the problems of the VOR is the
difficulty of accurately measuring phase shifts at 30 Hz. Much circuit refinement
has taken place for the better grades of receiver. This includes, for instance,
the use of identical circuits for both 30-Hz signal paths wherever possible so
that temperature effects will be common to both. The result is that instrument
accuracy of better than 1° is achieved in airline-type equipment.

4.4.5 Doppler VOR


Doppler VOR [3] applies the principles of wide antenna aperture to the reduc-
tion of site error. The solution used in the United States by the Federal Avia-
tion Agency involves a 44-ft diameter circle of 52 Alford loops, together with
a single Alford loop in the center. In the Doppler VOR, the role of the cen-
tral radiator and the role of the array are reversed; however, the phase relations
remain the same, allowing a standard airborne receiver to operate without any
modification. A simplified description is given below.
The central Alford loop radiates an omnidirectional continuous wave that
is amplitude modulated at 30 Hz by any conventional means; this forms the
reference phase. The circle of 52 Alford loops is fed by a capacitive commutator
so as to simulate the rotation of a single antenna at a radius of 22 ft. Rotation
is at 30 rps, and a carrier frequency 9960 Hz higher than that in the central
antenna is fed to the commutator. This 9960-Hz higher frequency is frequency
modulated by the simulated rotation of the antenna, increasing in frequency as
the antenna appears to move toward the receiver and decreasing in frequency
as it recedes from the receiver. With a 44-ft diameter and a rotation speed of
30 rps, the peripheral speed is about 1260 meters per second, or about

480 wavelengths per second at VOR radio frequencies. The 9960-Hz frequency
difference is consequently varied by ±480 Hz at a 30-Hz rate, with a phase
dependent on the bearing of the receiver.
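The ±480-Hz deviation quoted above follows directly from the simulated rotation: the peak Doppler shift is the peripheral speed of the simulated antenna divided by the carrier wavelength. The short calculation below is illustrative only; the mid-band 113-MHz carrier is an assumption.

```python
import math

SPEED_OF_LIGHT = 299.792458e6        # m/s

def doppler_deviation_hz(diameter_m=13.4, rotation_hz=30.0, carrier_hz=113.0e6):
    """Peak Doppler shift of the simulated rotating antenna: peripheral speed
    divided by the carrier wavelength (44-ft diameter, 30 rps, 113 MHz assumed)."""
    peripheral_speed = math.pi * diameter_m * rotation_hz    # m/s
    wavelength = SPEED_OF_LIGHT / carrier_hz                 # m
    return peripheral_speed / wavelength

print(round(doppler_deviation_hz()))   # about 476 Hz, close to the nominal 480 Hz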
In the receiver, the output of the amplitude-modulation detector contains all
the signals present with the conventional VOR. Phase comparison between the
two 30-Hz sine waves is performed as before, the only difference being that the
30-Hz amplitude-modulated signal is the reference and the 30-Hz frequency-
modulated signal is the variable. Since the instrumentation is concerned only
with the difference between the two, normal operation results with a standard
VOR receiver.
However, since the aperture of the ground antenna is approximately five
wavelengths, as compared with less than half a wavelength with the four Alford
loops in a standard VOR ground station, a tenfold reduction in site error is the-
oretically possible. Actual measurements at formerly "impossible" sites verify
this. At a good site, maximum deviations measured during a 20-mi orbital flight
were reduced from 2.8° with a standard VOR to 0.4° with a Doppler VOR [3].
Residual errors can probably be reduced to 0.1°.
The importance of the Doppler VOR lies in the improvement it provides
without any change being made to the airborne equipment. Every airborne set
can benefit from it.

4.4.6 Distance-Measuring Equipment (DME)


DME is an internationally standardized pulse-ranging system for aircraft, oper-
ating in the 960- to 1215-MHz band. When the ground station is collocated
with a VOR station, the resulting combination forms the standard ICAO rho-
theta short-range navigation system [21]. In the United States in 1996, there are
over 4600 sets in use by scheduled airlines and about 90,000 sets by general
aviation.
The operation of DME can be described by means of Figure 4.17 where the
aircraft interrogator transmits pulses on one of 126 frequencies, spaced 1 MHz
apart, in the 1025- to 1150-MHz band. The pulses are in pairs, 12 μsec apart,
each pulse lasting 3.5 μsec, with the pulse-pair-repetition rate ranging between
5 pulse-pairs per sec up to a maximum of 150 pulse-pairs per sec. The peak
pulse power is on the order of 50 w to 2 kw. Paired pulses are used in order
to reduce interference from other pulse systems.
The ground beacon (or transponder) receives these pulses and, after a
50-μsec fixed delay, retransmits them back to the aircraft on a frequency 63
MHz below or above the airborne transmitting frequency. The peak power of
this beacon is in the range of 1 to 20 kw. The airborne interrogator automati-
cally compares the elapsed time between transmission and reception, subtracts
out the fixed 50-μsec delay, and displays the result on a meter calibrated in nau-
tical miles, each nautical mile representing about 12 μsec of elapsed round-trip
time.
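The range computation itself is simply the measured round-trip time, less the fixed beacon delay, scaled by the speed of light. A minimal sketch (the constant and function name are illustrative, not from any equipment specification):

```python
NMI_PER_USEC = 0.161875   # one-way light travel, nautical miles per microsecond

def dme_range_nmi(round_trip_usec, beacon_delay_usec=50.0):
    """Slant range from the measured interrogation-to-reply time, after removing
    the fixed transponder delay; about 12.36 usec of round-trip time per nmi."""
    return (round_trip_usec - beacon_delay_usec) * NMI_PER_USEC / 2.0

# A 1286-usec round trip corresponds to roughly 100 nmi of slant range.
print(round(dme_range_nmi(1286.0), 1))
```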
Each beacon is designed to handle at least 50 aircraft simultaneously, with

Distance
reading
Airborne
interrogator

Aircraft skin

Ground
transponder
or beacon

Figure 4.17 DME operation.

100 being a more typical number. The pulse-repetition rate of the interrogators
is deliberately made somewhat unstable, and the interrogator is designed to
recognize only those replies whose pulse-repetition rate and phase are exactly
the same as its own.
In any line-of-sight geographical area, there is the possibility of providing
126 beacons, each handling 100 or more aircraft. Since each beacon's duty cycle
is only 2% under these conditions, room exists to expand the system to handle
heavier traffic. Modern techniques permit the airborne interrogation rate to be
decreased substantially.

Receiver Characteristics Since the airborne receiver always operates 63
MHz away from its own transmitter frequency, a common crystal-controlled
local-oscillator chain may be used for the local oscillator during reception
and for transmission, provided that the receiver intermediate frequency is 63
MHz. A typical block diagram is shown in Figure 4.18. A photograph of a
DME interrogator/receiver for commercial aviation applications is depicted in
Figure 4.19.
Frequency generation commences with a crystal oscillator at about 45 MHz.
This may be a single oscillator with 126 crystals or a mixture of two oscillators,
one with 13 and the other with 10 crystals. This frequency is multiplied by 24
and furnishes about one milliwatt of energy to the receiver mixer crystal. It is
also amplified in a chain of triodes that are pulsed during transmission to a level
of 50 watts in the simplest equipment (intended for 100-mi range) or to one

Figure 4.18 Airborne DME functional block diagram (antenna, beacon identity code, Tacan bearing, and distance display interfaces).

kilowatt in airline equipment (intended for 300-mi range). A common antenna
is used for transmission and reception; mixer-crystal overload is prevented by
the preselector, which is tuned 63 MHz away from the transmitter.
Amplification occurs in a 63-MHz intermediate-frequency amplifier with
automatic gain control. The reply is then compared with the transmitted signal
in the ranging circuit. This ranging circuit also sees all other pulses transmitted
from the ground beacon (about 3000 pps), and it therefore must perform at least
two major functions: (1) Recognize its own replies and reject all others (this is
called searching) and (2) convert these into a meaningful display (this is called
tracking).

Figure 4.19 Typical airborne DME interrogator/receiver.


Figure 4.20 Received DME pulses (five consecutive 2400-μsec sweeps showing the aircraft's own interrogations and replies at a nearly fixed delay among randomly spaced beacon pulses and receiver noise).

Many different forms of circuit have been devised for these functions. They
all depend on the sequence of wave forms shown in Figure 4.20. This figure
shows five consecutive snapshots of an imaginary oscilloscope whose sweep is
started by the airborne interrogation from a single aircraft and whose deflection
circuit is connected to the output of the receiver in that aircraft.
In this instance, if one assumes a maximum desired range of 200 nm, the
sweep is 2400 μsec long. Since the ground beacon is transmitting an average of
3000 pulse-pairs per sec, each sweep will display, on the average, about seven
pulses. These will be quite randomly spaced, except those generated in response
to our own interrogation. At an interrogation rate of 30 per sec, even the fastest
aircraft does not move by as much as a pulse width from one interrogation to
the next. The desired replies therefore occupy an almost fixed position on the
oscilloscope display, whereas those intended for other aircraft move in a random
manner. The dotted line shows the fixed (or slowly changing) position of the
desired reply. On scan 3, the desired reply is missing; this is because the beacon
has just replied to another aircraft and has not yet "recovered." Recovery time is
typically on the order of 100 μsec. Desired replies may also be missing because
of other random effects. However, all airborne DME ranging circuits are based
on the principle that, within a given time slot, many more desired replies will
be received than undesired replies.
The basic objective of all DME ranging circuits is to locate the time slot
in which the desired replies are actually occurring. This is the search process,
and it is usually conducted at the highest permissible pulse-repetition rate (150
pulse-pairs per second) in order to save time, which, depending on the tech-
nique, may vary from 1 to 20 sec. Once this time slot has been found, the track
mode commences and can be conducted at a much lower pulse-repetition rate,
usually between 5 and 25 pulse-pairs per second.
Search is typically performed as follows: A gate is generated, 20 μsec wide,
at the transmitter-interrogation rate. This gate is slowly moved outward, from
a 0- to 300-mi delay with respect to the transmitted pulse. This gate strobes
the received pulses; only when there is coincidence between the gate and a
received pulse is the received pulse passed to an integrating circuit. During the
search period, the interrogation rate is held at 150 pulse-pairs per sec. The gate
is therefore opened 150 times per sec, for 20 μsec at a time, a total of 3000
μsec/sec, or 1/333 of the total time. If the beacon is transmitting 3000 random
pps, 9 random pps pass through the gate, on the average. However, if the gate
is moved at about 10 mi per sec, almost 30 desired pulses will pass through the
gate per second when the gate delay corresponds to the round-trip delay time.
This ratio of 9 to 30 allows ample margin to switch the operation automatically
from search to track.
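The search logic can be caricatured in a few lines. The following toy model is illustrative only (all names, thresholds, and the synthetic data are assumptions, not a description of any actual interrogator): it slides a 20-μsec gate across the delay range and declares lock when far more pulses fall in the gate than the random-reply rate alone would explain.

```python
import random

def search_for_reply_delay(sweeps_usec, gate_width=20.0, step=10.0,
                           max_delay=3700.0, hits_to_lock=30):
    """Toy model of DME search: slide a gate across the delay range and stop at
    the first position that collects many more pulses than random replies would.

    sweeps_usec : one list of received-pulse delays per interrogation; the
                  aircraft's own reply sits at a nearly constant delay, while
                  replies meant for other aircraft land at random delays.
    """
    delay = 0.0
    while delay < max_delay:
        hits = sum(1 for sweep in sweeps_usec
                   for d in sweep if delay <= d < delay + gate_width)
        if hits >= hits_to_lock:
            return delay          # gate brackets our own replies: switch to track
        delay += step
    return None                   # no lock; restart the search

# Synthetic data: 150 interrogations, own reply near 1286 usec plus random clutter.
sweeps = [[1286.0 + random.uniform(-1.0, 1.0)]
          + [random.uniform(0.0, 3700.0) for _ in range(7)] for _ in range(150)]
print(search_for_reply_delay(sweeps))   # a gate position bracketing the true delay
```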
In this example, a gate movement of 10 mi per sec would mean that 30 sec
would be needed to search over the 3600-μsec period corresponding to 300 mi.
Other circuits have been built in which gate movement is very fast until a pulse
is received; with such circuits, 300 mi may be searched in less than a second.
In the track mode, the same 20-μsec gate follows the desired reply, con-
trolled by the integrator. If the reply falls in the early part of the gate, the gate
advances; if it falls in the late part of the gate, the gate is delayed. Since the
possible change in aircraft position is quite small from one pulse to the next,
the interrogation rate can be safely reduced during track. Typically, this is about
25 pps; however, with newer equipment, rates as low as 2 pps have been used,
even at tracking rates of 3000 knots. The gate is usually arranged to have some
memory so that it does not immediately go into search upon loss of signal. For
10 sec or so, it is arranged to stay at its last position (called static memory)
or to move at its last rate (called velocity memory). The position of the gate
is determined in the simplest equipment by an analog voltage that operates a
voltmeter, calibrated in miles. Such equipment has accuracies of about 3% of
full scale and often uses two ranges, such as 0 to 20 and 0 to 100 mi. In more
elaborate equipment, the position of the gate is a function of an analog shaft
rotation involving fine and coarse time delays; these have accuracies of 0.1 mi,
independent of distance. Still other equipment uses digital counting techniques,
with accuracies of 0.01 mi. In the latter equipment, overall accuracy is limited
by system accuracy, which includes stability of beacon delay, accuracy of pulse
rise times, and so on.
In 1996, typical high-performance airborne equipment had a weight of less
than 13 lb. All circuits were solid state and used digital microprocessors, with
the exception of the pulsed transmitter-amplifier chain. Airborne antennas are
typically quarter-wave stubs (about 3 in. long) projecting from the bottom of
the aircraft. Vertical polarization is used.

Transmitter Characteristics Whereas the airborne equipment must operate on
126 channels, the ground beacon usually stays on one channel for long peri-
ods of time. The beacon consequently may, for a given state of the art, use a
more sensitive receiver and a more powerful transmitter. One typical set pro-

vides 3-kw peak output, together with an antenna gain of 9 dB. Otherwise, the
ground circuits follow the same principles as the airborne ones, with a 63-MHz
intermediate-frequency receiver amplifier being used. The number of aircraft
that a beacon can handle is usually based on the assumption that 95% of the
aircraft will be in the track mode at not over 25 interrogations per sec; 5% are
in the search mode, at not over 150 interrogations per sec. For 100 aircraft, this
means about 3000 pulse-pairs per sec.
The duty cycle of the ground transmitter is therefore much greater than that
of the airborne equipment, and the average power consumption is also greater.
Most beacons are operated on the constant-duty-cycle principle, whereby
receiver gain is increased until 3000 pps appear at the output of the receiver. In
the absence of interrogation, these pulses will all be due to receiver noise; with
interrogation from less than I 00 aircraft, they are a mixture of noise and inter-
rogations; with interrogation from more than I 00 aircraft, they are the inter-
rogations from the 100 nearest aircraft. After the 3000-pulse limit is passed,
the gain is automatically reduced. This constant-duty cycle has the following
advantages:

1. The beacon is automatically maintained in its most sensitive condition.
2. The transmitter duty cycle is maintained within safe limits.
3. The airborne automatic gain control circuit always has a constant number
of pulses to work on, thereby simplifying its design.
4. In case of interrogation by too many aircraft, the nearest aircraft are the
last to be deprived of service.

For the simplest, most reliable circuitry, the beacon is arranged not to
receive while transmitting (self-oscillation could otherwise result); furthermore,
to reduce interrogation by multipath echoes of strong interrogation pulses, it is
desirable to reduce receiver gain for a short while after each genuine interroga-
tion. Some interrogations are consequently lost; the amount of this countdown
is typically on the order of 20%. Thus, an airborne equipment interrogating at
25 pps receives only 20 replies. Airborne tracking circuits are, however,
designed to operate at this reduced rate.
The delay between transmission and reception is nominally 50 μsec. For
greatest accuracy, this must be maintained constant; considerable circuit refine-
ment is used to retain this value, independent of interrogation strength and envi-
ronmental effects. Typical en-route-type beacons exhibit a total variation of
±0.5 μsec, corresponding to a distance error of ±0.04 mi [30]. Beacons asso-
ciated with instrument-landing systems may be designed to be more accurate,
due to the smaller spread of interrogation-signal levels. The ICAO requires an
overall system accuracy of 0.5 mi or 3%, whichever is greater.
Under the control of an external keyer, usually common to the associated
VOR, the beacon transmits an identity signal. Typically, this occurs for about
3 sec every 37 sec. During this time the random pulses are replaced by regu-

larly spaced pulses at 1350 pulse-pairs per sec. These activate a 1350-Hz tuned
circuit in the aircraft and are keyed with a three-letter Morse code, ~ sec per
dot and ~ sec per dash. During this time, the airborne ranging circuit is in the
memory condition.
Since the DME system, unlike the VOR system, is not a passive system, it
has an inherent capacity limitation. The value generally quoted is 110 aircraft
per beacon.

4.4.7 Tactical Air Navigation (Tacan)


Tacan [7] is a military omnibearing and distance measurement system using
the same pulses and frequencies for the distance measurement function as the
standard DME system. A Tacan beacon comprises a constant-duty-cycle DME
beacon and antenna, to which the following additions are made:

1. A parasitic element rotating around the antenna at 900 rpm, generating
an amplitude-modulated pattern at 15 Hz, with phase proportional to the
bearing of the receiver.
2. Nine other parasitic elements, also rotating at 900 rpm, generating a mul-
tilobe pattern at 135 Hz, to improve the bearing accuracy.
3. Reference pulses at 15 and 135 Hz to which the above variable phases
are compared in the aircraft, to establish its bearing.

The Tacan airborne equipment comprises a DME interrogator to which
Tacan bearing circuits have been added. All Tacan beacons provide full ser-
vice to all DME interrogators, and all DME beacons provide distance readings
to all airborne Tacan sets. The principal advantages of Tacan over VOR/DME
are the following:

1. Because of its higher frequency (960 to 1215 MHz versus 108 to 118
MHz), the Tacan beacon antenna can be smaller; it is therefore more suit-
able for shipboard and mobile use.
2. The multilobe principle, to enhance bearing accuracy, is built into all
equipment, ground based and airborne.
3. Both distance and bearing are obtained via the same radio-frequency
channel, providing certain equipment economies.

The system is in general use by the U.S. Navy and Air Force, and by NATO
military forces. In 1996, over 800 facilities were maintained for the U.S. DoD
with a DoD user population of 13,000 [48].

Transmitter Characteristics A diagram of the Tacan ground beacon is given
in Figure 4.21. The antenna comprises a central radiator, broadbanded to cover

Figure 4.21 Tacan ground beacon (stationary central radiator; one parasitic element and nine parasitic elements on plastic cylinders rotating at 15 rps; receiver, speed control, and 1350-Hz identity tone).

the 960- to 1215-MHz range. Equipment has been built with from 1 to 11 ver-
tical elements, depending on the kind of site for which the set is intended.
All transmission and reception is by this central radiator. At a radius of about
3 in. and usually mounted on a plastic cylinder is the 15-Hz parasitic rotat-
ing element. At a radius of about 18 in. is another plastic cylinder on which
are mounted nine parasitic elements, 40° apart. These superimpose a 135-Hz
amplitude modulation on the transmitted signal. Depth of modulation is about
20% for each of these signals. On the same shaft that rotates the parasitic ele-
ments are three reference-pulse disks. These generate 1, 9, and 90 low-level
pulses per revolution, respectively, by varying the magnetic inductance of a
solenoid. These pulses are fed down to the transponder. The motor that rotates
this whole assembly is usually of ac type, its speed controlled to better than
1% by a servo system in which the reference-pulse frequency is compared to
a frequency standard, such as a tuning fork.
When installed aboard a ship, the Tacan antenna is stabilized in two planes.
In the horizontal plane, compensation is provided to ensure that the reference
pulses do not shift with the heading of the ship but remain oriented to north.
In the vertical plane, compensation is provided for the roll of the ship. (Early
systems also provided for pitch compensation, but this was subsequently found
to be unnecessary.)
The transponder is a constant-duty-cycle DME beacon to which the bearing-
reference pulses have been added. Once per revolution, coincident with the
maximum of the antenna pattern pointing east, a so-called north reference pulse

Figure 4.22 Transmitted Tacan signal (north and auxiliary reference bursts, with the spaces between bursts filled with 2700 random DME replies per second).

code is emitted. This comprises 24 pulses, the spacing between pulses being
alternately 12 and 18 μsec. When these pulses are decoded in the airborne
equipment, they become 12 pulses, spaced 30 μsec apart. This pulse train is
initiated by the one-per-revolution reference from the antenna.
Eight times per revolution, the 135-Hz reference pulse group is emitted. (The
ninth group coincides with the north pulse and is intentionally omitted.) This
comprises 12 pulses spaced 12 μsec apart. The circuitry of the transponder is
arranged in such a way that the reference pulse groups take priority over the
normal constant-duty-cycle pulses. The overall transmitted pulse envelope is
shown in Figure 4.22.
The 1350-Hz identity tone, transmitted every 30 sec, is derived from the 90
pulses-per-revolution disk on the antenna shaft, thus producing phase coherence
between identity and reference pulses and allowing each to be received without
interference from the other. The identity code comprises 1350 groups per sec,
each composed of four pulses spaced 12, 100, and 12 μsec, respectively. The
reason for the 100-μsec spacing between the 12-μsec pairs is that this combi-
nation produces the least bearing error during identity transmissions, reducing
the necessity for bearing memory circuits in the airborne equipment.
The DME interrogations are amplitude modulated by the rotating antenna,
reducing the effective sensitivity of the Tacan beacon about 3 dB below that of
an ordinary DME beacon. Although the use of a separate, nonmodulated receiv-
ing antenna would avoid this loss, such an arrangement has not been found
necessary in actual practice.

Airborne Receiver Characteristics The airborne receiver comprises a DME
interrogator to which the Tacan bearing circuitry has been added. The DME
interrogator must have an effective automatic gain control, so as to preserve

the amplitude modulation of the pulses over the required range of expected
signal strengths. This is usually taken to vary from minimum usable signal up
to about 1 mw of signal at the receiving antenna.
Figure 4.23 is a generic block diagram of the airborne Tacan bearing circuit.
Following decoding, the amplitude-modulated signal is filtered into two sine
waves, one at 15 and one at 135 Hz. The "north" pulse activates a 33.3 kHz
ringing circuit, whereas the 135 Hz reference pulse group activates an 83.3
kHz ringing circuit. These reference pulses are continually compared with the
two sine waves and actuate two motor-driven servo systems, geared together
9:1. Whenever the 135-Hz signal is present and the 15-Hz signal is within
±20° of its correct position, the 135-Hz signal controls the servo. In effect, the
bearing accuracy is determined by the nine-lobe antenna pattern of the ground
beacon, with the one-lobe pattern used to resolve ambiguity, which otherwise
would occur every 40". As with DME, both static and velocity memories have
been applied to airborne bearing circuits to carry them through short-term signal
dropouts. Solid-state airborne equipment typically weighs 20 lb and occupies
about 1 ft³. Modern receivers incorporate digital implementations of some of
the receiver functions depicted in Figure 4.23.
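The coarse/fine combination performed by the 9:1-geared servos can be expressed arithmetically: the 15-Hz measurement selects the 40° lobe and the 135-Hz measurement refines the bearing within it. A minimal, idealized sketch (the function name is an assumption, and the measurements are taken as already converted to degrees):

```python
def tacan_bearing_deg(coarse_deg, fine_deg):
    """Combine the unambiguous 15-Hz (coarse) measurement with the 135-Hz
    (fine) measurement, which repeats every 40 degrees, into one bearing.

    coarse_deg : coarse bearing, 0-360 degrees
    fine_deg   : fine measurement reduced to 0-40 degrees within its lobe
    """
    lobe = round((coarse_deg - fine_deg) / 40.0)   # lobe selected by the coarse value
    return (lobe * 40.0 + fine_deg) % 360.0

# A coarse reading of 123 deg selects the lobe in which the fine value of
# 5.5 deg corresponds to a bearing of 125.5 deg.
print(tacan_bearing_deg(123.0, 5.5))
```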

System Considerations Tacan was the first operational rho-theta system to
exploit the multilobe bearing principle. At "perfect" sites, bearing errors mea-
sured under carefully controlled conditions were on the order of ±0.1° for 77%
of the readings and ±0.2° for 93% of readings. Compared with previous sys-
tems, these results were sensational and led many users to plan applications for
which the system was not designed. The chief misconception concerning the
performance of Tacan stemmed from the basic fact that the nine-lobe system
gives an improvement only if the one-lobe system is functioning properly. Since
the airborne equipment is controlled most of the time by the nine-lobe system,
there was little opportunity to evaluate the performance of the one-lobe system
by itself.
The one-lobe system suffers from many of the siting problems common to
other point-source bearing measuring systems. Chief among these are the effects
of reflecting objects near the transmitting antenna. This problem is greatly
increased if the antenna is mounted relatively close to the ground and has lit-
tle or no vertical directivity. The resulting strong vertical lobe-and-null struc-
ture may then create the condition where the aircraft is in the direction of a
null, whereas an unwanted reflection is in the direction of a lobe. High vertical
directivity, with uptilt, has been the most effective means to reduce this prob-
lem. Being a pulse system, Tacan is somewhat less susceptible to site errors
than continuous-wave systems, since reflections from objects farther away than
the distance corresponding to the pulse duration (about 1 mile) are of much less importance. The U.S. 1994
Federal Radio Navigation Plan [48] cites the signal-in-space Tacan 2σ (95%
probability) azimuth accuracy to be ±1.0° (±63 m at 3.75 km) and the distance
accuracy to be 185 m (±0.1 nmi). The capacity for distance measurement is
110 aircraft, and it is unlimited for azimuth measurement.
Figure 4.23 Airborne Tacan bearing circuit (auxiliary-burst and north-burst decoders, 135-Hz and 15-Hz filters, phase shifters, comparators, and the 9:1-geared servo driving the bearing display).

4.4.8 VORTAC
Since Tacan beacons can be more readily installed on ships and at tactical sites
than VOR beacons, large numbers of military aircraft are equipped with Tacan.
To save these aircraft the cost of carrying additional equipment for navigating
the ICAO VOR/DME airways, several countries, including the United States,
use the VORTAC system. In this system each VOR station, instead of being
collocated with a DME, is collocated with a Tacan beacon (which also provides
DME service) to provide rho-theta navigation to both civil and military aircraft.
Civil aircraft read distance from the Tacan beacon and bearing from the VOR
beacon. Military aircraft read both distance and bearing from the Tacan beacon.
Thus each type of aircraft fits into the same air-traffic management system,
regardless of which type of airborne equipment it carries. In 1996, it is estimated
that there are more than 200,000 users in the United States alone.
At the ground station, the VOR central antenna is housed in a plastic cone
that supports the Tacan antenna. Leads to the Tacan antenna pass through the
middle of the VOR antenna, along its line of minimum radiation, and do not dis-
turb the VOR pattern. In the case of Doppler VOR (Section 4.4.5), the antennas
are arranged in a circle outside the cone.

4.5 HYPERBOLIC SYSTEMS

Hyperbolic navigation systems are so designated because of the hyperbolic lines
of position (LOP) that they produce, rather than the circles and radial lines asso-
ciated with the systems that measure distance and bearing (see Figure 4.8d)
[4]. The Loran-C, Omega, Decca, and Chayka systems are described in this
section. They differ in that Loran-C and Chayka measure the time-difference
between the signal from two or more transmitting stations, while Omega and
Decca measure the phase-differences between the signals transmitted from pairs
of stations.

4.5.1 Loran
Loran (long-range navigation) is a hyperbolic radio-navigation system that has
evolved over a period of years, beginning just before the outbreak of World
War II in Europe. The Loran-C system [8, 11, 44] has benefited greatly from
analysis of the shortcomings of previous systems. First, it uses ground waves at low
frequencies, thereby securing an operating range of over 1000 mi, independent
of line of sight. Second, it uses pulse techniques to avoid sky-wave contami-
nation. Third, being a hyperbolic system, it is not subject to the site errors of
point-source systems. Fourth, it uses a form of cycle (phase) measurement to
improve precision. It inherently provides a fine-coarse readout with low
ambiguity. All modern Loran systems are of the Loran-C variety. (Loran-A and
Loran-D configurations no longer exist.)

Loran-C users fall into two general categories: navigation users and pre-
cise time and time interval (PTTI) users. By far the larger population of direct
users is in the navigation category. An even larger group of indirect users ben-
efits from a PTTI application of Loran-C, in which digital switching, signaling,
and timing of the nation's telephone system is accomplished using Loran-C.
Every telephone subscriber in the United States is an indirect beneficiary of the
Loran-C system.

Principles and System Configuration Loran-C is a low-frequency radio-nav-
igation aid operating in the radio spectrum of 90 to 110 kHz. It consists of
transmitting stations in groups forming chains. At least three transmitter sta-
tions make up a chain. One station is designated as master, while others are
called secondaries. The chain coverage area is determined by the transmitted
power from each station, the geometry of the stations, including the distance
between them and their orientation. Within the coverage area, propagation of
the Loran-C signal is affected by physical conditions of the Earth's surface
and atmosphere, which must be considered when using the system. Natural
and man-made noise is added to the signal and must be taken into account.
Receivers determine the applied coverage area by their signal-processing tech-
niques and can derive position, velocity, and time information from the time
difference (TD) between the time of arrival (TOA) of a radio wave from a sec-
ondary minus the TOA of a radio wave from the master station. Methods of
application provide for conversion of basic signal time of arrival to geographic
coordinates, bearing and distance, along track distance and cross-track error,
velocity, and time and frequency reference.
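As a numerical illustration of the basic observable (a sketch only; the station TOAs and the emission-delay value are hypothetical), each measured TD, after removal of the secondary's fixed emission delay, corresponds to a constant range difference that defines a hyperbolic LOP:

```python
METERS_PER_USEC = 299.792458   # free-space propagation, meters per microsecond

def time_difference_usec(toa_secondary_usec, toa_master_usec):
    """Loran-C observable: TOA of the secondary signal minus TOA of the master."""
    return toa_secondary_usec - toa_master_usec

def range_difference_m(td_usec, emission_delay_usec):
    """Constant range difference (meters) defining the hyperbolic LOP, after the
    secondary's fixed emission delay is removed. Real ground-wave paths travel
    slightly more slowly than free space (see Propagation Effects below)."""
    return (td_usec - emission_delay_usec) * METERS_PER_USEC

# Hypothetical numbers: a 13,000-usec TD with an 11,000-usec emission delay.
td = time_difference_usec(toa_secondary_usec=81000.0, toa_master_usec=68000.0)
print(round(range_difference_m(td, emission_delay_usec=11000.0) / 1000.0, 1), "km")
```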
Each of the stations in all Loran-C chains transmits pulses that have standard
characteristics. The pulse consists of a 100-kHz carrier that rapidly increases in
amplitude in a carefully controlled manner and then decays at a specified rate
forming an envelope of the signal. Each station in a chain repetitively transmits
a series of closely spaced pulses called a pulse group at the group repetition
interval (GRI) of the chain. The GRI uniquely identifies the chain. When the
chain is synchronized to universal time (UT) the master station also sets the time
reference for the chain. Other stations of the chain are secondaries and transmit
in turn after the master. Each secondary pulse transmission is delayed in time so
that nowhere in the coverage area will signals from one station overlap another.
The number of pulses in a group, pulse spacing in a group, carrier phase
code of each pulse, time of transmission, the time between repetition of pulse
groups from a station, and the delay of secondary station pulse groups with
respect to the master signals constitute the signal format. Each station in a chain
is assigned a signal format based on its function.
The signal format is modified by blinking certain pulses to notify the user of
faulty signal transmission. The signal format is also modified to accommodate
a single transmitter station in two chains. This is accomplished by permitting
transmission for one of the chains to take precedence over the other when the

signal format calls for simultaneous transmission in both chains. This function
is called blanking.

Wave Form and Signals in Space Each station transmits signals that have
standard pulse leading-edge characteristics. Each pulse consists of a 100-kHz
carrier that rapidly increases in amplitude in a prescribed manner and then
decays at a rate that depends upon the particular transmitter and transmitting
antenna characteristics. The leading edge of the standard Loran-C pulse antenna
current wave form, against which the actual antenna current wave form is com-
pared, is defined as i(t):

i(t) = A(t - \tau)^2 \exp\!\left[\frac{-2(t - \tau)}{65}\right]\sin(0.2\pi t + \phi), \qquad \tau < t < 65 + \tau,   (4.6)

where

i(t) = 0 for t < \tau,

and A is a normalization constant related to the magnitude of the peak antenna
current in amperes, t is time in μsec, τ is the envelope-to-cycle difference (ECD)
in μsec, and φ is the phase-code parameter (in radians), which is 0 for positive phase
code and π for negative phase code. ECD is the displacement between the start
of the Loran-C pulse envelope and the third zero crossing of the Loran-C car-
rier (phase). ECD arises because the phase velocity and the group velocity of
the signal differ. As the signal propagates over seawater, the ECD decreases
because the phase velocity exceeds the group velocity.
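Equation 4.6 is straightforward to evaluate directly. The following sketch is illustrative only (the normalization and ECD values are arbitrary): it computes the leading-edge current and samples it at a few carrier crests on the rising envelope, which peaks 65 μsec after the pulse start.

```python
import math

def loran_pulse_current(t_usec, ecd_usec=0.0, positive_phase_code=True, A=1.0):
    """Leading-edge antenna current of the standard Loran-C pulse per Eq. 4.6
    (i(t) = 0 for t < ECD); normalization A and ECD are arbitrary here."""
    tau = ecd_usec
    phi = 0.0 if positive_phase_code else math.pi
    if t_usec < tau:
        return 0.0
    dt = t_usec - tau
    envelope = A * dt**2 * math.exp(-2.0 * dt / 65.0)
    return envelope * math.sin(0.2 * math.pi * t_usec + phi)   # 100-kHz carrier

# Sample a few carrier crests on the rising edge; the envelope peaks 65 usec
# after the pulse starts.
for t in (12.5, 32.5, 62.5):
    print(t, round(loran_pulse_current(t), 1))
```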
The pulse trailing edge is defined as that portion of the Loran-C pulse fol-
lowing the peak of the pulse, or 65 μsec after the pulse is initiated. The pulse
trailing edge is controlled in order to maintain spectrum requirements. At differ-
ent transmitting sites, or with different transmitting equipment, the pulse trailing
edge may differ significantly in appearance and characteristics. Regardless of
these differences, for each pulse the antenna current i(t) will be less than or
equal to 1.4 mamps or 16 mamps, depending on transmitter type.
To prevent contamination of the rising edge of a Loran-C pulse by the tail
of the previous pulse, ideally, the amplitude of the tail should be well attenu-
ated before the next pulse starts. Because of the sky-wave effects, a Loran-C
pulse should be attenuated as fast as possible after attaining its peak amplitude.
Unfortunately, a serious constraint in the form of the frequency spectrum bound
must be considered. A compromise between these two requirements leads to a
pulse length of 500 μsec. By requiring the amplitude of the pulse at 500 μsec to
be 0.001A (−60 dB), where A is the peak amplitude of the pulse, the spectrum
specification can be met and the pulse tail/sky-wave contamination problem
can, in most cases, be avoided.
Figure 4.24 shows a Loran-C pulse [44]. Zero crossing stability is important

Figure 4.24 Single Loran-C pulse (the standard sampling point is at the 50% amplitude level, about 30 μsec into the pulse).

because it affects the system phase accuracy. The standard sampling point for
a Loran-C receiver is the positive-going zero crossing of the phase-decoded
pulse on its third cycle (approximately 30 μsec) after the arrival of the ground
wave. This tracking is accomplished by a phase-locked loop. In addition, it
affects the apparent signal-to-noise ratio as seen by the receiver and, therefore,
the available receiver accuracy at a given averaging time. Amplitude stability
is important, because it affects the ECD of a transmitted Loran-C pulse and
thereby affects the ability of a receiver to lock on and track the correct cycle.

Propagation Effects The low-frequency propagation at 100 kHz is influenced
by the properties of the Earth's surface as well as the ionosphere. Because of the
ionospheric changes, the portion of the propagated wave that is reflected from
the ionosphere (sky wave) is not very stable. To make the received 100 kHz
signal more stable and reliable within a given coverage area, the Loran-C radio
navigation system is designed as a pulse system, which enables it to separate
the ground wave from the sky wave. The ground wave consists of a space wave
and a surface wave. The space wave is made up of the direct wave (a signal
traveling between the transmitter and the receiver via the direct path) and the
ground-reflected wave (a signal that arrives at the receiver after being reflected
from the surface of the Earth). The space wave also includes the diffracted
waves around the Earth and the refracted waves in the upper atmosphere. The
surface wave is guided along the surface of the Earth, which absorbs energy
from the wave causing its attenuation. When both antennas (transmitter and
receiver) are located very near the surface of the Earth, the direct and ground
reflected terms in the space wave cancel each other, and the transmission of
Loran-C signal takes place entirely by means of the surface wave (assuming that
no sky wave exists). Figure 4.25 shows ground-wave and sky-wave modes of
propagation [34].
Propagation-induced errors arise from variations in the ground-wave signal
propagation velocity, which are caused by the Earth's ground conductivity along
the signal path and to a lesser extent by atmospheric effects. These propagation
anomaly errors represent a major contributor to the total Loran-C error budget.
Figure 4.25 Ground-wave and sky-wave modes of propagation (direct, ground-reflected, surface, and ionospherically reflected sky-wave paths between transmitter and receiver).

Over the chain coverage area, the propagation anomalies exhibit both spatial
and temporal variations.
The temporal variations fall into two primary categories: diurnal and sea-
sonal. The diurnal variations are short-term propagation effects caused primar-
ily by local weather changes and day /night transitions along the signal path.
Variations in the refractive index of the atmosphere versus height from the
ground (vertical lapse rate) contribute to the short-term propagation errors. The
diurnal time-difference (TD) variations tend to be relatively small, on the order
of tens of nanoseconds.
The larger category of temporal variations are the seasonal effects, which
are most pronounced over land paths. These long-term errors tend to be peri-

odic with an annual cycle and can result in peak-to-peak TD excursions of up


to several hundred nanoseconds. The seasonal variations are caused primarily
by changes in ground conductivity and vertical lapse rate between winter and
summer conditions.
The other major category of propagation anomaly is the spatial variation. The
spatial portion of the Loran-C propagation error is caused by fixed topographical
and surface conductivity properties along the signal path. The most uniform
propagation occurs over all seawater paths. Using the known, constant speed of
Loran-C signals over seawater, the predicted Loran-C signal phase variation can
be computed as a function of path length, commonly called the secondary phase
factor (SF). Signals over land travel more slowly, by an amount that varies with
the conductivity and terrain irregularity. Differences in conductivity vary from
0.5 × 10⁻³ mhos/meter for snow-covered mountains to 5.0 mhos/meter for
seawater. The additional phase error effect due to the land mass is commonly
referred to as the additional secondary phase factor (ASF). As in the case of
the temporal errors, the ASF errors tend to have long correlation distances, on
the order of 90 to 100 nm.
The previously discussed characteristics of signal phase were related to
ground-ground-type measurements. If the receiver is raised to any altitude, as
in the airborne application, then the altitude effect must be considered. Theo-
retical predictions yield altitude, conductivity, and distance from transmitter as
the important parameters. Also, beyond a distance of 250 mi from the trans-
mitter, the correction is essentially constant. In this case, if one is operating
at this distance or greater from two transmitters and trying to form hyperbolic
LOPs, then the altitude effect on both paths will cancel. If one of the ranges is
much shorter than the other, errors of a few tenths of a microsecond can easily
result if this height gain function is not considered. A variety of formulations
for predicted Loran time difference as a function of propagation effects have
been used in receivers. Examples of these are given in Chapter 2, Section 2.5.2
of the first edition.

Absolute Accuracy Performance Within a published coverage area, Loran-C
will provide the user who employs an adequate receiver with predictable accu-
racy of 0.25 nm 2drms or better. The repeatable and relative accuracy of Loran-
C is usually between 18 and 90 meters. The total accuracy is dependent upon
the geometric dilution of precision (GDOP) factor at the user's location within
the coverage area.
GDOP is a dimensionless factor that expresses the sensitivity of position fix
accuracy to errors in TD measurement. As GDOP increases in a given area, the
impact of atmospheric noise, interference, and propagation vagaries inherently
increases. GDOP is a function of the gradient of each LOP and the angle at
which LOPs cross. Lines of constant GDOP are lines on which fix accuracy is
expected to be equal.
For a triad, GDOP is defined as

GDOP = \frac{1}{\sin(\phi_1 + \phi_2)} \sqrt{\frac{1}{\sin^2\phi_1} + \frac{1}{\sin^2\phi_2} + \frac{2r\cos(\phi_1 + \phi_2)}{\sin\phi_1\,\sin\phi_2}}   (4.7)

where \phi_i is the half-angle subtended by station i and r is the correlation coeffi-
cient between two LOPs, which is taken to be 0.5. Note that \phi_1 + \phi_2 is equal to
the crossing angle of the LOPs (\theta). The relationship between 2drms and GDOP
is

2drms = \frac{2\sigma K_0}{\sin\theta} \sqrt{\frac{1}{\sin^2\phi_1} + \frac{1}{\sin^2\phi_2} + \frac{2r\cos\theta}{\sin\phi_1\,\sin\phi_2}}   (4.8)

where σ is the timing error and K_0 is the constant 500 ft/μsec.
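Equations 4.7 and 4.8 translate directly into a short calculation. The sketch below is illustrative (the function names and the example geometry are assumptions): it evaluates GDOP for a triad and the corresponding 2drms error for a given timing error.

```python
import math

def loran_gdop(phi1_deg, phi2_deg, r=0.5):
    """GDOP of a Loran-C triad per Eq. 4.7; phi1 and phi2 are the half-angles
    subtended by the stations, r the correlation coefficient between the LOPs."""
    p1, p2 = math.radians(phi1_deg), math.radians(phi2_deg)
    radicand = (1.0 / math.sin(p1)**2 + 1.0 / math.sin(p2)**2
                + 2.0 * r * math.cos(p1 + p2) / (math.sin(p1) * math.sin(p2)))
    return math.sqrt(radicand) / math.sin(p1 + p2)

def loran_2drms_ft(phi1_deg, phi2_deg, sigma_usec, r=0.5, k0_ft_per_usec=500.0):
    """2drms position error per Eq. 4.8, i.e., 2 * sigma * K0 * GDOP."""
    return 2.0 * sigma_usec * k0_ft_per_usec * loran_gdop(phi1_deg, phi2_deg, r)

# Example geometry (assumed): 30-degree half-angles, 0.1-usec timing error.
print(round(loran_gdop(30.0, 30.0), 2), round(loran_2drms_ft(30.0, 30.0, 0.1)))
```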

Availability Although individual Loran-C transmitting equipment is very reli-
able, redundant equipment is used to reduce system downtime. Loran-C trans-
mitting station signal availability is therefore greater than 99.9%.

Reliability Reliability is a measure applied to system equipment such as
receivers, timers, and transmitters. The weakest link in Loran-C system reliabil-
ity is the highest-stressed component, the ground transmitter. Redundant equip-
ment at tube-type transmitting stations, and a "graceful degradation" capability
at solid-state transmitter stations, keep the system in an almost fail-safe mode.
The only significant failures in service have occurred when transmitting anten-
nas have collapsed or a severe lightning strike has completely destroyed the
output modules in a solid-state transmitter.

Repeatability The Loran-C system repeatability is excellent in terms of days to weeks or longer. This means that, once the location of a reference point or waypoint is known in the Loran-C frame of reference (grid), a navigator can return to that point with very high accuracy. The frame of reference can be either Loran-C TDs or latitude/longitude coordinates. Repeatability is a particular strength of the Loran-C system for the majority of uses. Repeatability declines as the period of time between measurement of the reference point location and return to that point increases, due to seasonal effects on Loran-C signal propagation. If a plot of time difference or Loran-C latitude/longitude is made for a fixed user location over a period of several years, a definite periodicity in the data is clearly seen. The data have a sinusoidal pattern with a period of one year and are generally repeatable year to year. Figure 4.26 is an example of the seasonal variation in TD data and clearly shows why repeatability of the Loran-C system is good over the days-to-weeks time period but is poorer over a period of months, reaching its worst over the winter-to-summer time interval. Over one-year intervals or over the spring-to-fall interval, repeatability is very good. The 1994 Federal Radionavigation Plan [48] cites a Loran-C repeatability error range of 60-300 ft 2drms.

Figure 4.26 Seasonal variation of repeatable accuracy.

Integrity Loran-C stations are constantly monitored to detect signal abnormalities that would render the system unusable for navigation purposes. The secondary stations blink to notify the user that a master-secondary pair is unusable. Blink is a repetitive on-off pattern (approximately 0.25 sec on, 3.75 sec off) of the first two pulses of the secondary signal, which indicates that the baseline is unusable for one of the following reasons:

• TD out of tolerance
• ECD out of tolerance
• Improper phase code or GRI
• Master or secondary station output power or master station off-air

When a secondary station is blinking, it continues to transmit its normal eight pulses at the normal GRI. However, it is only during a 0.25-sec period that all eight pulses are present. During the next 3.75 sec only the last six of the eight pulses are present, the first two having been turned off. The on-off cycle is repeated until normal operations are resumed. This blink period should be sufficient to permit automatic blink detection circuits in receivers to activate and warn users that the baseline is unusable. In 1996, the USCG and the FAA were pursuing an "aviation blink," based on factors consistent with aviation use.

Direct Ranging There are some Loran-C users who do not employ Loran-C in
the hyperbolic mode but rather in the direct range rho-rho-rho mode or the rho-
rho mode. The rho-rho-rho process involves a minimum of three transmitters
and use of an iterative computation to obtain a fix. Direct range rho-rho mode
requires two stations as a minimum, a highly stable user frequency standard,
and precise knowledge of the time of transmission of the signal. A direct range
from each station is developed, and the intersection of these circles produces a
fix (actually two possible fixes that must be resolved by either a third range or
prior knowledge of assumed position). Direct ranging can be used in situations
where the user is within reception range of individual stations but beyond the
hyperbolic coverage area. In 1996, the high cost of the stable frequency standard
limited the use of this mode.
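The sketch below illustrates the rho-rho geometry on a flat-Earth, local x-y grid: two range circles are intersected, and the two-fix ambiguity is resolved with an assumed prior position. The station coordinates, ranges, and function names are hypothetical; an operational solution would use geodetic range computations rather than planar geometry.

```python
# Minimal flat-Earth sketch of a rho-rho fix: intersect two range circles
# centered on the stations and pick the candidate nearest an assumed prior
# position. Station coordinates and ranges are illustrative, not real data.
import math

def rho_rho_fix(sta1, r1, sta2, r2, prior):
    """Return the circle-circle intersection closest to `prior` (x, y in nmi)."""
    (x1, y1), (x2, y2) = sta1, sta2
    d = math.hypot(x2 - x1, y2 - y1)          # distance between the stations
    if d > r1 + r2 or d < abs(r1 - r2):
        raise ValueError("range circles do not intersect")
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))
    xm = x1 + a * (x2 - x1) / d               # foot of the perpendicular
    ym = y1 + a * (y2 - y1) / d
    cands = [(xm + h * (y2 - y1) / d, ym - h * (x2 - x1) / d),
             (xm - h * (y2 - y1) / d, ym + h * (x2 - x1) / d)]
    # Resolve the two-fix ambiguity with prior knowledge of assumed position.
    return min(cands, key=lambda p: math.hypot(p[0] - prior[0], p[1] - prior[1]))

if __name__ == "__main__":
    fix = rho_rho_fix((0.0, 0.0), 500.0, (600.0, 0.0), 400.0, prior=(300.0, 350.0))
    print("estimated fix (nmi):", fix)
```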

Differential Loran-C In using differential Loran-C, a reference station is established, and a nominal set of TDs is determined. Thereafter, the reference station broadcasts the offset of the measured TDs from the established nominal values. The user's equipment incorporates the differential values to produce highly stable, long-term, high-precision position accuracy in the vicinity of
the reference station. These corrections are generally valid for the "correlation
distance" of approximately 100 nmi from the reference station. Real-time cor-
rections to remove both seasonal and diurnal errors can be broadcast. For most
users, correction of the seasonal variation alone would be sufficient. Diurnal
variations tend to be small enough so that, within areas of good GDOP, suf-
ficient accuracy is obtained to meet most requirements. Studies have shown
that publishing the previous day's correction to the baseline TDs is entirely
satisfactory. This approach reduces the electronic equipment requirements and
complexity for both the user and the provider of the service and is a process
that may fit within the envelope of aviation flight planning.

Grid Calibration Techniques A variety of theoretical methods can be used to calibrate the spatial propagation variations over the Loran-C coverage area. One such technique, known as Millington's method [34], breaks the signal path between the transmitter and receiver into finite segments of different conductivity levels, based on conductivity maps. The incremental phase delay is then computed as a function of range and conductivity for each path segment in both the forward and backward propagation directions, summed, and averaged to provide an estimate of the ASF.
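The following sketch shows the forward/backward bookkeeping of Millington's method in schematic form only. The per-segment phase-delay function used here is a placeholder chosen simply so the example runs; the published method uses curves of secondary phase versus range and conductivity.

```python
# Schematic sketch of Millington's method for estimating ASF: the path is
# broken into segments of different conductivity, the incremental phase delay
# is accumulated in the forward and backward directions, and the two sums are
# averaged. `phase_delay` below is a stand-in, not the published model.
def phase_delay(cumulative_range_km, conductivity_mho_per_m):
    """Hypothetical secondary-phase delay (usec) at a given range/conductivity."""
    # Lower conductivity -> larger delay; grows slowly with range (illustrative).
    return 0.05 * (cumulative_range_km / 100.0) / (conductivity_mho_per_m ** 0.25)

def millington_asf(segments):
    """segments: list of (length_km, conductivity) from transmitter to receiver."""
    def one_way(segs):
        total_range, delay = 0.0, 0.0
        for length, sigma in segs:
            total_range += length
            # Incremental delay attributed to this segment at its end range.
            delay += (phase_delay(total_range, sigma)
                      - phase_delay(total_range - length, sigma))
        return delay
    forward = one_way(segments)
    backward = one_way(list(reversed(segments)))
    return 0.5 * (forward + backward)     # average of the two directions

if __name__ == "__main__":
    # Illustrative mixed path: seawater, farmland, snow-covered mountains.
    path = [(300.0, 5.0), (150.0, 5e-3), (80.0, 0.5e-3)]
    print("estimated ASF ~", round(millington_asf(path), 3), "usec")
```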
While purely theoretical models of ASF calibration can substantially improve
the overall accuracy of the Loran-C solution, combining theoretical models with
empirical data can improve accuracy further. This semiempirical grid calibra-
tion approach has been used successfully to calibrate coefficients of a Loran-C
signal propagation error model using surveyed data points of empirically mea-
sured TD errors. Because of the long correlation distances of the spatial prop-
agation errors, only a few widely separated survey data points are sufficient to
provide a reasonably good grid calibration using this semiempirical modeling
approach.
Modern Loran-C receivers often include some type of grid calibration algorithm or table lookup process to correct for estimated ASF errors. Such calibration can substantially minimize the effects of spatial propagation anomalies on accuracy.

Transmitter Characteristics The Loran-C transmitting system has evolved over the years from the original tube-type transmitters, some of which are still operating, to higher-powered tube transmitters and eventually to solid-state transmitters. Highly precise cesium frequency standards are employed at each transmitting station to time the transmitted signals.
The modern solid-state transmitters are described in this section. Each transmitter station is physically divided into two groups of units to provide system redundancy; switching units are provided at the appropriate interfaces between them. Figure 4.27 is a simplified block diagram of one set of the redundant equipment.
The timer provides all timing signals to the transmitter, including 5-MHz clock signals. Dual redundant pulse amplitude and timing controllers (PATCOs) accept timing signals from the timer and derive from them all the signals needed by the transmitter. Signals generated by the PATCO include start triggers, charging triggers, digital amplitude reference signals, amplitude compensation signals, and the megatron reference trigger. Each PATCO contains an ECD module to make small changes in the amplitude of each pulse group and allow fine adjustment to be made to the pulse shape. The transmitter power level is also monitored in the PATCO to identify problems with the half-cycle generators (HCGs).
The HCGs are the basic building blocks of the solid-state transmitter. Each HCG contributes a portion of the power contained in the Loran-C pulse. Thirty-two HCGs comprise the standard set. The basic set can be expanded in multiples of eight HCGs, with associated ancillary equipment, to transmitting stations of 40, 48, 56, and 64 HCGs. The power output by the transmitting station depends on the HCG configuration and the type of transmitting antenna used. When used with standard transmitting antennas of 625 to 1000 ft, power outputs can range between 400 kw and 1000 kw for the 32 through 64 HCG configurations.
Each HCG takes the PATCO signal and generates a 5-μsec, 4000-amp peak pulse in the megatron that is shaped like a half-cycle of a 100-kHz sine function. These pulses are sent to the coupling and output networks during each Loran-C pulse. The coupling network unit is a passive pulse-shaping network that contains coupling capacitors, coupling inductors, and tailbiter modules. The coupling network receives four 5-μsec pulses and transforms them into the required Loran-C pulse with a peak at 65 μsec. After the peak of the pulse, the tailbiter module controls the shape of the pulse so that it exponentially decays to zero. The output network presents the correct impedance to the coupling network for the transmitting antenna and provides isolation between the coupling network and the antenna ground system.
Figure 4.27 Loran-C transmitter block diagram.

The transmitter operational control (TOPCO) and display units perform several primary functions. They permit selection between redundant PATCOs, coupling networks, and output networks. The TOPCO and display units contain built-in monitoring and fault-detection circuitry, and if a fault is detected the TOPCO automatically switches coupling and output networks. The TOPCO and display units serve as centralized alarm panels and status displays for the transmitting station.

Receiver Characteristics In the past, navigation users employed receivers that read out Loran-C TD coordinates, but by the 1970s Loran-C receiver designers had automated much of this process to the extent of selecting the best triads, ensuring correct signal lock-on, and computing a latitude and longitude from the time differences. Many receivers contain correction tables in memory to make the navigation solution as accurate as possible with relation to the geodetic grid. By 1996, computational and processor power had resulted in user equipment with the capability to store multiple waypoints and indicate data to the user that include course and distance to go to the waypoint, and course and speed made good. Very few users operate in a time-difference output environment today. An airborne Loran-C receiver block diagram is shown in Figure 4.28; a photograph of a Loran-C receiver for commercial aviation applications is depicted in Figure 4.29.
Loran-C receivers are commonly described by the rate (number of chains tracked), the source of the time reference, the number of stations tracked, and the measurement type. For example, a single-rate, master-referenced, two-pair, time-difference receiver tracks a single chain selected by the user; time initialization is obtained from the master station; and two stations are tracked to obtain a TD measurement. Dual-rated receivers track two chains to produce a single position solution, while cross-chain receivers use stations from two chains to define LOPs. In 1996, master-independent cross-chain receivers, which use a priori information to define LOPs between secondary stations in different chains, were being investigated.
All Loran-C receivers used for navigation pass through four functional states: initialization, acquisition, pulse group time reference (PGTR) identification, and tracking. Initialization is the process of providing the set with all the a priori knowledge of the signals to be tracked and adjusting the set to minimize the effects of interference. Initialization may include GRI selection, estimates of secondary time differences, adjustment of interference filters, setting of clipping levels, and the determination to search for the strongest path.
Acquisition is the process of searching for and locating the signals identified
during initialization. Generally, a receiver will locate the signals from each sta-
tion in time slots, called intervals, that repeat at the GRI. The search mechaniza-
tion is dependent on the extent of automation (manual or automatic search) and
displays built into the set. There are two general forms of manual search: pulse
alignment and phase code. Automatic search invariably operates on the phase
code of the signals and cross-correlates an approximation of the known signals
with the received signals. Generally, automatic search will operate at lower S/N
ratios than manual search and can be designed to search faster, using multipoint search for the master and limited-range search for secondaries. When the correlation reaches a certain threshold, after ensuring that no false locks have occurred on secondary signals, acquisition is complete.

Figure 4.28 Airborne Loran receiver block diagram.

Figure 4.29 Airborne Loran receiver.
PGTR identification is the process of ensuring that the receiver is operating on the ground wave of the signals. Ensuring operation on the ground wave, sometimes called guard sampling or ground-wave location, operates on the principle that the ground-wave signal from a station always arrives at a receiver before the sky wave because of the longer sky-wave path. It is necessary to find the ground wave because its timing, and hence its position-locating quality, is stable, while that of the sky wave is not. Typically, the acquisition process locates the sky wave because of its much larger amplitude. Ground-wave location proceeds by applying signal detection algorithms to the signal 30 to 60 μsec ahead of the receiver's reference time. If signals are found, the receiver timing is advanced and the process repeated. This continues until no signals are found at two or more successive locations. Often, multiple independent tests are made after no signal is detected to account for the possibility of the ground-wave and sky-wave signals summing out of phase and creating a null that might otherwise be presumed to be the start of the ground wave.
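A toy version of this guard-sampling logic is sketched below: the reference time is stepped earlier while signal energy is still detected, and the search stops after two successive empty samples. The detection callback, step size, and timing numbers are illustrative only, not taken from any particular receiver design.

```python
# Toy sketch of ground-wave location ("guard sampling"): starting from the
# (probably sky-wave) acquisition time, look 30-60 usec earlier; while signal
# is still detected, keep advancing the reference; stop only after two or more
# successive empty guard samples. The detector and values are illustrative.
def locate_ground_wave(signal_present, t_acquired_usec,
                       step_usec=40.0, misses_required=2):
    """signal_present(t) -> bool; return the estimated ground-wave start (usec)."""
    t = t_acquired_usec
    probe = t - step_usec
    misses = 0
    while misses < misses_required:
        if signal_present(probe):
            t = probe                 # earlier energy found: advance reference
            probe = t - step_usec
            misses = 0
        else:
            misses += 1               # require several successive empty samples,
            probe -= step_usec        # guarding against a ground/sky-wave null
    return t

if __name__ == "__main__":
    true_groundwave_start = 1000.0    # usec, illustrative
    present = lambda t: t >= true_groundwave_start
    print("estimated start:", locate_ground_wave(present, t_acquired_usec=1150.0))
```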
Tracking is the process of maintaining a constant timing relationship between
the receiver's time reference and the PGTR for each signal being tracked. In an
automatic tracking receiver, circuits within the equipment automatically adjust
the time reference and update the display to provide continuous readings. These
receivers also provide alarms or warnings advising the operator of undesirable
signal conditions or transmitter blinking.
Loran-C signal reception can be impaired by interference from other signals broadcast on slightly different frequencies (generally low-frequency communications). To avoid the degradation in S/N associated with these interfering sources, Loran-C sets are equipped with notch filters that can be used to attenuate the interfering signal. Some receivers are equipped with preset notch filters, others with adjustable notch filters, and yet others automatically search for interfering signals near the Loran-C band and dynamically notch out any interference.
TABLE 4.5  Existing continental U.S. Loran-C chains (excluding Alaska)

Chain Name                                GRI (μsec)
Canadian east coast (5930)                59,300
North-east United States (9960)           99,600
South-east United States (7980)           79,800
U.S. west coast (9940)                    99,400
Canadian west coast (5990)                59,900
Great Lakes (8970)                        89,700
North central United States (8920)        89,200
South central United States (9610)        96,100

Global Coverage Loran-C was used worldwide in 1996; it covered maritime Canada, the North Atlantic, the Norwegian Sea, the Mediterranean Sea, the Bay of Biscay, an area of Russia, China, and India, South Korea and the Sea of Japan, the Northwest Pacific, the Gulf of Alaska, and the entire continental United States. New system developments are underway or in the planning stages for northwest Europe, Argentina, Brazil, and inland Canada. A Loran-C chain consists of a master station and a number of secondaries, usually no more than four. Each chain is uniquely identified by its GRI, which represents the number of microseconds between subsequent transmissions of the master station signal. Table 4.5 lists the continental U.S. chains and their GRIs.
Charts are published by the U.S. National Oceanic and Atmospheric Administration (NOAA) that depict the geographic coverage area served by the Loran-C system. The depicted coverage contours define the geographic limits at which a receiver with a 20-kHz bandwidth will acquire and track a master and two secondary stations, each providing a signal-to-noise ratio (S/N) better than −10 dB and a fix accuracy of better than 0.25 nm 2drms (95% of the time).
The difference between acquisition of the signal by the receiver and tracking of the signal is important. Since a receiver should always be able to track a signal it has previously acquired (under the same S/N environment) and also to continue to track a signal at a much lower S/N than that at which it acquired the signal, acquisition is the more difficult process for the receiver and is the limiting factor in receiver performance. Coverage area must therefore be defined in terms of acquisition, since that process defines the operational limits at which a navigation solution can be initiated. S/N is the major factor in determining a receiver's ability to acquire the signal. This factor is the ratio of the field strength of the Loran-C signal, attenuated over the propagation path from transmitter to user, to the field strength of noise in the receiver's bandwidth at the user location. Noise in this context is generally assumed to be atmospheric noise.
Figure 4.30 Hyperbolic lines of constant TD for a typical master-secondary pair.

Chain Geometry Figure 4.30 shows schematically a set of hyperbolic LOPs for a typical master-secondary Loran-C transmitter pair. Each hyperbolic line contains all points having the same time difference between arrival of signals from the master and secondary. Along the baseline itself, the distance between lines of equal TD is smallest, and it increases to each side of the baseline. The term applied to this parameter is gradient, which has units of feet per microsecond. The gradient is at a minimum along the baseline and deteriorates as the user moves away from it. Since the LOPs are much closer together along the baseline than they are at large distances away from it, a 100-ns standard deviation of the TD estimate represents much less position error near the baseline than at the extremities of the coverage area. Refer to Figure 4.31, which shows two secondaries and a common master, along with hyperbolic LOPs for each master-secondary pair. If the location of each LOP is assumed to be normally distributed with a standard deviation of 100 ns, then a minimum area will be covered at the intersection of two LOPs when they cross at right angles. This area (the area of fix uncertainty) increases as the crossing angle decreases (see Figures 4.32 and 4.33). The combined effect of crossing angle and gradient is called GDOP. (See the previous discussion of the GDOP expression.)
Simple analysis reveals that along the baseline extensions a singularity exists: the measured TD does not change (being equal to the time from master to secondary), and hence no solution can be found.
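The flat-Earth sketch below illustrates the gradient numerically: it computes the TD seen from a master-secondary pair and estimates how far (in feet) the receiver must move to change the TD by one microsecond, yielding roughly 500 ft/μsec on the baseline (consistent with the constant K_0 used earlier) and a larger, poorer value far from it. The coordinates, emission delay, and step size are illustrative, and a real implementation would use spheroidal geodesy.

```python
# Flat-Earth sketch of the TD observed from a master-secondary pair and of the
# local gradient (feet per microsecond) found by stepping the receiver position.
# Coordinates, the emission delay, and the step size are illustrative.
import math

C_NMI_PER_USEC = 0.161875      # approximate propagation speed, nmi per usec
FT_PER_NMI = 6076.1

def td_usec(receiver, master, secondary, emission_delay_usec=11000.0):
    """Time difference (usec) seen at `receiver` for one master-secondary pair."""
    d_m = math.dist(receiver, master)        # receiver-to-master distance, nmi
    d_s = math.dist(receiver, secondary)     # receiver-to-secondary distance, nmi
    return emission_delay_usec + (d_s - d_m) / C_NMI_PER_USEC

def gradient_ft_per_usec(receiver, master, secondary, step_nmi=1.0):
    """Feet of receiver motion per usec of TD change, estimated numerically."""
    # Probe many directions and keep the fastest TD change (perpendicular to LOP).
    best = 0.0
    for ang in range(0, 360, 5):
        dx = step_nmi * math.cos(math.radians(ang))
        dy = step_nmi * math.sin(math.radians(ang))
        moved = (receiver[0] + dx, receiver[1] + dy)
        dtd = abs(td_usec(moved, master, secondary)
                  - td_usec(receiver, master, secondary))
        best = max(best, dtd)
    return step_nmi * FT_PER_NMI / best

if __name__ == "__main__":
    master, secondary = (0.0, 0.0), (400.0, 0.0)          # 400-nmi baseline
    on_baseline, far_away = (200.0, 0.0), (200.0, 600.0)
    print("gradient on baseline :",
          round(gradient_ft_per_usec(on_baseline, master, secondary)), "ft/usec")
    print("gradient far away    :",
          round(gradient_ft_per_usec(far_away, master, secondary)), "ft/usec")
```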
The U.S. Coast Guard publishes predictions of the Loran-C ground-wave coverage and geometric fix accuracy limits. These coverage diagrams describe the area over which the signal-to-noise ratio of the master and two secondary stations is −10 dB or better for a nominal expected atmospheric noise level in that geographic region, and the position accuracy is 0.25 nm 2drms (95% of the time).
Figure 4.31 Hyperbolic lines of constant TD for a typical triad.

Figure 4.32 90° crossing angle.

Figure 4.33 Shallow crossing angle.

4.5.2 Omega
Principles and System Configuration In 1996, the Omega VLF radio-naviga-
tion system comprised eight transmitting stations located throughout the world.
At each station, continuous-wave (CW) signals are transmitted on four com-
mon frequencies and one station-unique frequency. The signal frequencies are
time-shared among the stations so that a given frequency is transmitted by only
one station at any given time.
To support medium-accuracy navigation, the signal transmissions from all stations are phase-synchronized to about 1 μsec. For purposes of time transfer and to facilitate the systemwide synchronization procedure, the signal timing is maintained to within an accuracy of about 0.5 μsec with respect to coordinated universal time (UTC).
Omega signals are subionospheric; that is, they are propagated between the Earth's surface and the D-region of the ionosphere. Because VLF signal attenuation is low, the signals are propagated to great ranges, typically 5000 to 15,000 nmi. Signals with amplitudes as low as 10 μV/meter can often be detected and used for navigation. Of primary interest to navigation users is the signal phase, which provides a measure of transmitter-receiver distance. The fractional part of a cycle (or lane, which is the equivalent distance measure) is generally the only measurable component of the signal phase, thus leading to lane ambiguity. However, the lane ambiguity problem is reduced through the use of multiple frequencies and is resolved for navigation through a process of continuous lane count.
When used as a stand-alone system for navigation, an Omega receiver provides an accuracy of 2 to 4 nmi 95% of the time [43, 48]. In the differential mode of operation, where a receiver utilizes Omega signal phase corrections transmitted from a nearby monitor station, a position accuracy of about 500 meters can be attained. Because Omega is a continuous VLF phase-measuring system, it has been appropriately integrated with noncontinuous, high-accuracy sensors. The resulting system has an accuracy that is comparable to the high-accuracy navigation aid and degrades relatively slowly in time when the high-accuracy aid is unavailable. As commonly used in overocean civil airline configurations, an Omega receiver is combined with an inertial navigation system, so that the Omega system error effectively "bounds" the error of the inertial system.
The signals from the eight Omega transmitting stations shown in Figure 4.34 provide continuous signal coverage over most of the globe. The suite of electronics equipment (mainly signal generation, control, and amplification units) is virtually the same for all stations in the system, but the station antennas differ substantially. Because they radiate long-wavelength VLF signals, the antennas are the largest physical structures at the stations. Three types of antennas are employed in the Omega system: (1) grounded tower, (2) insulated tower, and (3) valley-span. Each has an associated signal monitoring facility about 20 to 50 km from the effective phase center of the antenna. These unmanned facilities perform several functions, including monitoring the performance of the associated station, providing data necessary to phase-synchronize the stations, and detecting solar-terrestrial events that cause anomalous shifts of the propagated signal phase.

Figure 4.34 Omega station configuration.
The Omega signal transmission format is illustrated in Figure 4.35. Across each of the eight rows in the figure is a 10-sec sample of the signal frequencies transmitted by a particular station. Important features of this time/frequency multiplex format include these four:

1. Four common transmitted signal frequencies: 10.2, 11 1/3, 13.6, and 11.05 kHz.
2. One unique signal frequency transmitted by each station.
3. A separation interval of 0.2 sec between each of the eight transmissions.
4. Variable-length transmission periods.

Figure 4.35 Omega system signal transmission format (stations: A Norway, B Liberia, C Hawaii, D North Dakota, E La Reunion, F Argentina, G Australia, H Japan).

The fourth feature makes it possible to synchronize an Omega receiver to the signal format with no additional external information. For example, if a user determines that a 10.2-kHz transmission segment (repeated every 10 sec) is 1.2 sec in duration, then according to Figure 4.35, the transmitting station could be either station D (North Dakota) or station G (Australia). However, a measurement of the duration of the succeeding transmission segment at a frequency of 13.6 kHz would discriminate between station D (1.1 sec) and station G (1.0 sec).
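The station-identification logic of this example can be written out as below. Only the segment durations quoted in the text for stations D and G are included; the rest of the table is omitted, and a real receiver would carry the complete transmission format.

```python
# Sketch of the receiver-synchronization idea in the text: the variable-length
# transmission segments let a receiver identify the station with no external
# data. The (partial) duration table below contains only the values quoted in
# the text for stations D and G; it is otherwise hypothetical.
PARTIAL_FORMAT = {
    "D (North Dakota)": {"10.2 kHz": 1.2, "next (13.6 kHz)": 1.1},
    "G (Australia)":    {"10.2 kHz": 1.2, "next (13.6 kHz)": 1.0},
}

def identify_station(measured, table=PARTIAL_FORMAT, tol=0.05):
    """Return stations whose segment durations match the measured ones (sec)."""
    matches = []
    for station, durations in table.items():
        if all(abs(durations[k] - measured[k]) <= tol for k in measured):
            matches.append(station)
    return matches

if __name__ == "__main__":
    # A 1.2-sec segment at 10.2 kHz alone is ambiguous (D or G) ...
    print(identify_station({"10.2 kHz": 1.2}))
    # ... but the duration of the following 13.6-kHz segment resolves it.
    print(identify_station({"10.2 kHz": 1.2, "next (13.6 kHz)": 1.1}))
```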
Navigational use of Omega is based almost entirely on measurement of the
signal phase transmitted from one or more stations. This is because the total
signal phase is closely related to the distance from a transmitting station with
known coordinates. The total phase, or cumulative phase, developed by a signal
between a station and a receiver is composed of a whole number of cycles and
a fraction of a cycle. In virtually all cases, the whole number component of
the cumulative phase is not measurable. Since only the fractional part of the
phase is measured, an ambiguity (termed lane ambiguity) exists with regard to
the whole-cycle count. In terms of distance-unit quantities, the equivalent of
the phase cycle is the signal wavelength. If a marker is made at the station and
at each wavelength on the station-receiver path, the intervals between markers
serve to define lanes, which are one wavelength in width and over which the
phase varies from 0 to 2π rad.
The existence of lane ambiguity is not generally a problem for a user if
Omega is used in a navigation mode from a known initial position. Succes-
sive positions are computed from corresponding incremental phase change mea-
surements. Also, the presence of multiple signals reduces the chances of incor-
rect lane identification (or count). However, if lane count is temporarily lost
(e.g., because of a receiver outage or propagation anomaly), the Omega for-
mat is designed to help resolve the ambiguity by using difference frequency
techniques.
To determine a receiver's position based on the measurements of phase
from the multiple station signals/frequencies, two basic methods are employed:
hyperbolic and direct ranging. Hyperbolic methods use phase difference as the
unit of measurement, while ranging techniques utilize phase measurements.
Since about 1980, the ranging method has been employed almost exclusively
in airborne receivers, while the hyperbolic method is reserved for specialized
applications, such as submarine navigation.

Wave Form and Signal in Space The principles of Omega navigation usage
depend almost entirely on the assumed relationship between the signal phase
received from a transmitting station and the station-receiver distance.
Ideally, changes in Omega signal phase, as measured on a moving platform,
bear a fixed linear relationship to corresponding changes in the position of
the platform over the surface of the Earth. With this idealization, navigation
and positioning become relatively simple procedures requiring only the sta-
tion locations as external knowledge, in addition to measured quantities
(internal to the receiver) such as the signal frequency and phase. However,
since Omega signals propagate to very long ranges, they are substantially influ-
enced by electromagnetic and geophysical variations in the Earth and iono-
sphere. These effects on the signal phase lead to a marked departure from a
linear dependence on distance, thus complicating direct use of the signals for
navigation.
Several methods have been developed for eliminating or reducing the com-
plex signal propagation effects on navigation and positioning. One method is
simply to subtract the signal phases at two of the frequencies (e.g., 10.2 and 13.6
kHz) transmitted from the same station. To eliminate propagation effects, this
procedure (which is similar to the method for resolving ambiguities) relies on
the assumption that propagation effects at the two frequencies are completely
correlated. In reality, the propagation effects at the two frequencies are only
partially correlated, so the complexity of the resulting signal is lessened but
not eliminated. A related, but improved method is to take an appropriate lin-
ear combination of the signal phases at the two frequencies that minimize the
variation over 24 hours (diurnal variation). This technique, known as compos-
ite Omega [29], reduces the diurnal variation but does little to reduce the wide
variation in phase behavior exhibited by paths of equal length over substantially
different electromagnetic/geophysical environments. By far the most common
method in use is the application of propagation corrections (PPCs) to the mea-
sured signal phase. In contrast to PPCs (which are predicted variations from the
nominal phase based on semiempirical models of geophysical effects and the
"normal" ionosphere), real-time corrections are provided by differential Omega
systems in local operating areas.
Omega PPCs are those predicted phase values that, when applied to the
received Omega signal phase measurements, provide an idealized phase func-
tion (at spatially distinct points) which depends linearly on distance. Thus, for
a receiver on a moving vehicle, the difference between two successive phase measurements is proportional to the distance traveled: Δφ = kΔr, where Δφ is the phase difference corrected by the PPC and Δr is the corresponding distance difference. The pro-
portionality constant k is called the wave number. The reduction of the phase
measurement to a linear function of distance can be traced back to when naviga-
tional charts were used for manual plotting of Omega lines of position (LOPs).
It is much easier to plot these LOPs if the phase (difference) is linearly related
to the distance (difference) from the transmitting station(s). The particular wave
number used to construct the charts is known as the "nominal" wave number,
which is simply the ratio of the cumulative "idealized" phase developed by a
signal to the distance over which the signal is propagated.
In free space, the wave number is given by¹ k₀ = f/c, where f is the frequency of the signal and c is the speed of light. The nominal wave number is given by k_nom = 0.9974 k₀, which is chosen as an intermediate value between observed night and day wave numbers on seawater paths. The exact value of the nominal wave number is not critical; it is only important that the value be near the average over all time and space conditions.
The Omega PPC may be thought of as the variation of the "true" Omega signal phase (φ) from the nominal phase (φ_nom):

PPC = \phi_{\mathrm{nom}} - \phi     (4.9)

where φ_nom = k_nom r, and where r is the distance between a transmitter and a
receiver over the surface of the Earth. The assumption of a fixed wave number
that relates nominal phase and distance over the surface of the Earth is the basis
for the so-called nominal model of Omega signal phase/distance relationships.
The calculation of the PPC is based on a semiempirical model of phase vari-
ation as a function of the electromagnetic characteristics of a signal path from
transmitter to receiver [22-23, 24-25].
Thus, by Equation (4.9), the PPC and the nominal model together determine
the predicted phase for a given station, signal frequency, position, and time.
In this relation the nominal phase is the "dominant" term in the sense that it
accounts for approximately 99% of the cumulative phase from the signal source,
that is, the distance between the transmitting station and the receiver in units
of wavelength. Measured in cycles of nominal signal wavelength (somewhat
larger than a free-space wavelength) at 10.2 kHz, the nominal phase is 100 to
500 for typical paths, whereas the PPC is usually between -3.00 and +3.00
cycles, with a resolution of 0.01 cycle (a unit referred to as a centicycle). The
predicted phase has a typical diurnal variation of 0.5 to 2 cycles, amounting to
about 0.2 to 2% of the nominal phase.
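A small numerical sketch of these relationships follows: the nominal phase is computed from the nominal wave number and range, and a PPC is applied per Equation 4.9. The path length and the PPC value are illustrative; real PPCs come from the semiempirical propagation model cited in the text.

```python
# Numerical sketch of the nominal phase model and Equation 4.9. The path
# length, frequency, and PPC value are illustrative examples only.
C_M_PER_S = 299_792_458.0

def nominal_phase_cycles(range_m, freq_hz):
    """phi_nom = k_nom * r, with k_nom = 0.9974 * f / c (result in cycles)."""
    k_nominal = 0.9974 * freq_hz / C_M_PER_S       # cycles per meter
    return k_nominal * range_m

def predicted_phase_cycles(range_m, freq_hz, ppc_cycles):
    """Predicted 'true' phase: phi = phi_nom - PPC (rearranging Eq. 4.9)."""
    return nominal_phase_cycles(range_m, freq_hz) - ppc_cycles

if __name__ == "__main__":
    r = 5.0e6                  # 5000-km path (illustrative)
    f = 10.2e3                 # 10.2-kHz common frequency
    ppc = 1.25                 # illustrative PPC, within the -3..+3 cycle span
    print("nominal phase  :", round(nominal_phase_cycles(r, f), 2), "cycles")
    print("predicted phase:", round(predicted_phase_cycles(r, f, ppc), 2), "cycles")
```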
Figure 4.36 shows a typical diurnal observed phase profile (measured with
respect to a precise time or frequency standard) in which the path illumina-
tion conditions, nominal phase, and two sample PPC values are identified. The
figure illustrates the higher (retarded) phase during path night and the lower
(advanced) phase in path day, with a total diurnal shift of about 0.65 cycle.
Since the phase is a function of effective ionospheric height which varies with
the relative sun angle (solar zenith angle), the observed phase exhibits a "bowl-
shaped" profile during the clay with less variation at night. The phase profile
changes from day to night behavior during path transition when the sunset ter-
minator cuts the path. The figure illustrates the time-independence of the nom-
inal phase and the consequent time-dependence of the PPC values.

¹An alternative definition of the free-space wave number used in many texts is k₀ = 2πf/c.

Figure 4.36 Typical diurnal phase behavior (observed and nominal phase versus UT, with the night, transition, and day portions of the path indicated).

Propagation Effects In the wave-guide model of VLF wave propagation [40], the region in which the Omega signals are confined is known as the Earth-ionosphere (EI) wave guide. Propagation of Omega signals is mostly confined in the EI wave guide for three principal reasons:

1. The lower boundary (the Earth's surface) has a relatively high conductivity (greater than 10⁻³ mho/m over most areas of the Earth), so waves do not readily penetrate the surface.
2. The Earth's atmosphere (at altitudes between 0 and 70 km) has an extremely low concentration of charged particles and thus acts as a vacuum to VLF waves.
3. The D and E regions of the ionosphere (70 to 110 km) have low average conductivity (about 10⁻⁵ mho/m) but have a steep conductivity gradient between 70 and 100 km, which serves to reflect VLF waves.

The above conditions also lead to low attenuation of the propagated signal; for
example, over a range of about 1000 km the signal amplitude is reduced (on
average) by a factor of two.
Factors affecting Omega signal propagation include the action of the Earth's magnetic field, the structure of the ionosphere, solar control and the effects of the 11-year sunspot cycle, and the presence of two or more propagation modes.
The Earth's magnetic field introduces an anisotropy into the behavior of
VLF waves interacting with the ionosphere. That is to say, signal propagation
depends upon the direction of propagation. This anisotropy is strongest on paths
perpendicular to the geomagnetic field (east-west paths). The presence of two
or more signal propagation modes with comparable amplitude will cause the
phase to become a strongly oscillatory function of distance, thus rendering the
signals unusable for navigation/positioning.
The ionosphere is quite sensitive to the net incident solar illumination. Dur-
ing the day, solar photoionization maintains a small, but stable ionized compo-
nent which is not present in nighttime regions. Solar control of the ionosphere
introduces a strong diurnal dependence on Omega signal propagation.

Position Determination and Accuracy Two methods of successive position determination are currently used in Omega receivers: hyperbolic and direct ranging. In the hyperbolic mode of position determination, the difference in the phase of signals received from two distinct transmitting stations is measured. In this mode the navigation equipment is assumed to know the identity of the hyperbolic lane in which it is located, either from initialization at a known starting point or from the results of continuous navigation. Since phase difference is equivalent to distance difference, the phase-difference measurement locates the receiver on a hyperbolic curve within the known lane (see Figure 4.37).
A hyperbolic lane has a width that is one-half the signal wavelength on the baseline between the two stations located at the foci of the hyperbolas. This can be seen by first noting that a lane is defined as those locations for which the phase-difference measurement (φ_A − φ_B) with respect to two stations (A and B) varies between 0 and 1 cycle. Let the distance corresponding to this phase-difference change be Δr, and assume that the wave number k is constant. Without loss of generality, it can be assumed that the initial phase difference (before moving the distance Δr) is zero. However, after moving the distance Δr toward station B along the baseline, the lane phase difference is given by

\text{Lane phase difference} = 1 \text{ (cycle)} = (\phi_A + k\Delta r) - (\phi_B - k\Delta r) = \phi_A - \phi_B + 2k\Delta r = 2k\Delta r     (4.10)

Figure 4.37 Hyperbolic geometry for phase-difference measurements. (Each curve is parameterized by a unique phase difference φ_AB between signals from stations A and B.)

Since the wave number is given by k = 1/λ, where λ is the signal wavelength, it follows from the above that Δr = 1/(2k) = λ/2. At points away from the baseline, the lane width increases with the diverging hyperbolic curves as λ/(2 sin(ψ_AB/2)), where ψ_AB is the angle subtended by the two stations at the receiver location.
receiver location.
A hyperbolic lane is actually the family of all hyperbolic curves with phase differences between 0 and 1 cycle that lie within the lane boundaries. A hyperbolic curve normally has two branches, corresponding to the positive and negative values of the range difference. In the case of Omega, the sign of the phase difference is known from measurement, which limits the receiver location to a single branch. In the vicinity of the receiver (near the baseline), the hyperbolic curve resembles a planar hyperbola with foci located at the two associated transmitting stations. On scales of 1 to 10 nmi, these hyperbolic curves are well approximated by straight lines. At points well away from the baseline, however, the spherical shape of the Earth causes the hyperbolic curve to close on itself in a quasi-elliptical shape, with one of the stations at one focus and the antipode of the other station at the other focus.
If the appropriate lane is known for a second pair of stations (which may include a station common to the first pair), then a phase-difference measurement with respect to these two stations establishes a second hyperbolic curve, whose intersection with the first curve determines the receiver position (see Figure 4.38). It is possible that these two curves could intersect in two locations, but the correct intersection is easily resolved for one or more of the following reasons:

1. One of the two intersections is usually relatively far from the known approximate location of the receiver.
2. An independent third pair of stations (if available) provides another hyperbolic curve that passes near one of the two intersections.
3. A moving receiver shows successive fixes consistent with vehicle speed for one of the intersections and inconsistent for the other.

In cases where more than two independent hyperbolic curves are available
(i.e., more than four usable Omega station signals are accessible), the multiple
curves do not, in general, intersect at a single point due to the effects of noise
and station signal-dependent prediction errors. In these cases, a least-squares technique is often used to obtain a good estimate of the true position.

Figure 4.38 Hyperbolic fixing for three stations.
Direct ranging techniques for Omega position determination are divided into
two principal types: rho-rho and rho-rho-rho. These types are differentiated
because of the different receiver equipment required for each. In the ranging
mode, individual phase measurements on each station signal are made so that
the lane width is a full signal wavelength at all locations.

Rho-Rho Method The rho-rho technique requires only two range measure-
ments for a fix. As in the hyperbolic case, it is assumed that the correct lane is
initially known and successive measurements are processed over small enough
distance/time intervals so that lane changes are readily tracked. To obtain an
accurate estimate of the distance traveled based on successive phase measure-
ments, the processor must have access to a frequency/time reference (clock)
of sufficient stability so that the reference is effectively synchronized to the
Omega station during the period between precision updates.
The change in station-receiver distance, obtained from two successive phase
measurements of the station signal, places the receiver's new position on a cir-
cular curve (centered on the station location) within the appropriate lane. Since
the receiver's previous position is assumed known, some points on the new
circle are more likely candidates than others for the new position, based on
platform velocity and maneuvering limits. However, the new position is accu-
rately determined only when a distance change to a second station is obtained
from successive phase measurements. In this case, a second circle is established
that intersects the first at the new receiver location. Although two intersections
are possible, the correct intersection can be resolved using methods listed above
for the hyperbolic case. This method corresponds to a system of two equations
and two unknowns. In cases in which more than two usable station signals are
available, a least-squares technique can be used to resolve the multiple inter-
sections that arise as a result of measurement noise or phase prediction error.

Rho-Rho-Rho Method The rho-rho-rho type of ranging is similar in principle to the rho-rho method except that it requires less precision in the onboard clock, essentially permitting oscillator (clock) self-calibration. In particular, the method assumes that the clock has a fixed frequency offset from its "correct" value during the period between precision updates. A fixed offset in frequency implies that the phase reference supplied by the clock drifts linearly in time away from the correct phase reference with a slope Δφ/Δt proportional to the frequency offset. The unknown slope of this clock drift introduces a third variable into the problem of determining the two components (e.g., north and east) of the incremental position change. For this reason, three phase-change measurements (corresponding to three independent equations) are required during successive updates. Though nearly as accurate as rho-rho ranging, rho-rho-rho has the disadvantage that usable signals from three stations are required, instead of two. When more than three usable station signals are available, the redundant information is best handled by using a least-squares method.
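A flat-Earth sketch of this idea is given below: each phase change is modeled as the projection of the position change onto the station line of sight plus a common clock term, and the three unknowns are recovered by least squares. The bearings, wavelength, and simulated measurements are illustrative, and the simple normal-equation solver stands in for whatever estimator a real receiver uses.

```python
# Flat-Earth sketch of the rho-rho-rho idea: each measured phase change is
# modeled as k * (line-of-sight unit vector . position change) plus a common
# clock term; the three unknowns (dx, dy, clock drift) are found by least
# squares. Bearings, wavelength, and measurements are illustrative.
import math

def rho_rho_rho_solve(bearings_deg, delta_phase_cycles, wavelength_km):
    """Return (dx_km, dy_km, clock_cycles) from >= 3 phase-change measurements."""
    k = 1.0 / wavelength_km                    # cycles per km
    rows, b = [], []
    for brg, dphi in zip(bearings_deg, delta_phase_cycles):
        ux, uy = math.sin(math.radians(brg)), math.cos(math.radians(brg))
        rows.append([k * ux, k * uy, 1.0])     # last column: common clock term
        b.append(dphi)
    n = 3
    # Normal equations A^T A x = A^T b, then Gauss-Jordan elimination (3x3).
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    atb = [sum(r[i] * bi for r, bi in zip(rows, b)) for i in range(n)]
    for col in range(n):
        piv = ata[col][col]
        for j in range(col, n):
            ata[col][j] /= piv
        atb[col] /= piv
        for row in range(n):
            if row != col:
                f = ata[row][col]
                for j in range(col, n):
                    ata[row][j] -= f * ata[col][j]
                atb[row] -= f * atb[col]
    return tuple(atb)

if __name__ == "__main__":
    # Simulate a 3-km-east move with a 0.02-cycle clock drift, lambda ~ 29.4 km.
    lam, truth = 29.4, (3.0, 0.0, 0.02)
    brgs = [20.0, 140.0, 260.0]
    meas = [(math.sin(math.radians(b)) * truth[0]
             + math.cos(math.radians(b)) * truth[1]) / lam + truth[2]
            for b in brgs]
    print(rho_rho_rho_solve(brgs, meas, lam))   # recovers (3.0, 0.0, 0.02)
```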

Position Accuracy Total Omega position error can be traced to a variety of sources, including station synchronization offset, receiver processing (e.g., lane slip/jump), operator mistakes (e.g., initializing with a coordinate insertion error), and temporal anomalies (e.g., a polar cap disturbance, or PCD). The predominant error source, however, is the PPC.
The PPCs are obtained from a semiempirical model of Omega signal phase behavior which is calibrated largely from phase measurements at globally distributed fixed Omega monitor sites. Analysis of these measurements reveals important features of Omega phase behavior as well as providing insights into Omega PPC error. A basic property indicated by these measurements is that, at a fixed site, Omega phase (and phase error) generally varies more over 24 consecutive hours (see Figure 4.36) than over a year at a fixed hour. Because the observed phase measurements show little systematic change over a month at a fixed hour, the average observed phase over 15 to 30 consecutive days is a robust aggregate measure of the phase for a given hour and specific month. The predicted phase (obtained from the PPCs) over the same time period is
nearly constant but often differs significantly from the average observed phase.
This difference is referred to as the PPC bias error, which varies in magnitude from 0 to 30 centicycles (cecs). Also occurring in this 15-30 day period at a fixed hour are random (nonsystematic) day-to-day variations in the observed phase on the order of 1 to 5 cecs. Since these random variations (which are due to ionospheric fluctuations) are not reflected in the PPCs, they make up the random component of PPC error.
When converting phase measurements to position, the bias and random com-
ponents of phase error produce corresponding bias and random components
of position error. The transformation of the phase error components to posi-
tion error components depends upon the individual phase errors of all signals
received and the geometrical configuration of the receiver and stations corre-
sponding to the received signals. If the magnitude of the random phase errors
is assumed to be the same for all ~ ignals received and the bias error is assumed
to be zero, then the radial position error standard deviation (a r) can be obtained
by multiplying the phase error standard deviation (aq,) by a scalar factor known
as the geometric dilution of precision (GDOP). For a least-squares method of
position determination, used when multiple redundant signals are present, the
following form 2 of GDOP [24, App. B] is obtained:

1
GDOP=
2
q I 'I 1/2

L ~~ 2
sin (((3;- (3i )/2)

q-2 q-1 q
L L L sin 2
(((3k- ~~j )/2) sin 2 (((3;- (3k)j2) sin 2 (((3j- (3;)/2)
i= I j=i+ I k=j+ I

(4.11)

where q is the number of usable signals received and β_i is the bearing to the ith station (corresponding to the ith usable signal). The GDOP becomes very large whenever at least q − 1 stations have bearings which are nearly equal. Another property of the GDOP is that the GDOP for q station signals is never greater than the GDOP for any subset (of three or more stations) of q. This means that for least-squares position processing, the use of additional (usable) signals does not degrade, and typically improves, the resulting position accuracy.

²GDOP = σ_r/(λσ_φ), where λ = signal wavelength.
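Equation 4.11 can be evaluated directly from the station bearings, as in the sketch below; the two bearing sets are illustrative and simply contrast a well-spread geometry with a poorly spread one.

```python
# Sketch of Equation 4.11: Omega GDOP from the bearings of the usable station
# signals (least-squares position solution). Bearings are illustrative.
import math
from itertools import combinations

def omega_gdop(bearings_deg):
    """Dimensionless GDOP for q >= 3 usable signals with the given bearings."""
    b = [math.radians(x) for x in bearings_deg]
    s2 = lambda a1, a2: math.sin((a1 - a2) / 2.0) ** 2
    num = sum(s2(b[i], b[j]) for i, j in combinations(range(len(b)), 2))
    den = sum(s2(b[k], b[j]) * s2(b[i], b[k]) * s2(b[j], b[i])
              for i, j, k in combinations(range(len(b)), 3))
    return 0.5 * math.sqrt(num / den)

if __name__ == "__main__":
    # Well-spread stations versus three stations bunched toward the north.
    print("spread geometry :", round(omega_gdop([0, 90, 180, 270]), 2))
    print("poor geometry   :", round(omega_gdop([350, 0, 10]), 2))
```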
For moving vehicles performing navigation, the bias error is effectively
removed at initialization, leaving only phase error due to noise (typically less
than 1 cec). However, the paths from the station to the receiver eventually
change (both in space and time) enough so as to become decorrelated with
the original configuration of station signal paths to the receiver and the initial

2 GDOP = a,j('Acr&) where 'A= signal wavelength.


HYPERBOLIC SYSTEMS 167

correction no longer applies. From this decorrelation time until the next preci-
sion update, the Omega receiver is subject to PPC bias and random errors and
the effect of GDOP. Omega-only accuracies have been reported for aircraft of
2.7 to 3.3 nmi 95% of the time [32, 33].

Differential Omega Like the differential techniques associated with many other radio-navigation systems, differential Omega systems provide a way of enhancing the position accuracy in a local region through the transmission of local corrections. The corrections are obtained from a central monitoring facility
enhancing the position accuracy in a local region through the transmission of
local corrections. The corrections are obtained from a central monitoring facility
that compares observed signal phase readings from each of the Omega stations
with the "correct" phase (using a nominal model) based on the location of the
monitor and the signal frequency. The accuracy of the correction depends on the
spatial correlation of the Omega signal phase between the position of the mon-
itor and the user's position. Within a radius of about 50 km from the monitor,
the correlation peak is within about 1 cec (for typical time constant receivers);
for greater distances, the degree of correlation gradually degrades.
Operational differential Omega systems in place in 1993 were tailored pri-
marily to marine users, although a number of experimental differential systems
for aircraft have been tested. The correction information for marine use is nor-
mally broadcast to all users in the local area (having a typical radius of 200
to 500 nm) using a 20-Hz modulation of LF beacon signals with frequencies
between 285 and 415 kHz. Measured position errors vary from 0.3 nm (100 nm from the monitor station) to about 1 nm (500 nm from the monitor station) 95% of the time [26]. As of 1996, 30 differential Omega systems were in operation
throughout the world, including the Atlantic coasts of Europe and Africa, the
Mediterranean Sea, the Caribbean, Eastern Canada, India, and Indonesia [6].

Omega/VLF Operation The most common external radio-navigation signals integrated with Omega in aircraft receiver systems arise from the network of VLF communication stations. Unlike the Omega stations, the VLF communication stations are not synchronized, so only phase changes from each station can
tion stations are not synchronized, so only phase changes from each station can
be processed in a navigation mode. This means that VLF signal processing is
used to supplement Omega navigation rather than act as a substitute. Moreover,
these communication signals are broadcast for national/international security
purposes, so stations can switch frequency, change modulation, or temporarily
cease operation with no advance warning. Thus, although VLF signals serve a
very useful supplementary function in many airborne modern Omega receivers,
they do not play a primary navigational role, because the VLF communication
signals are not intended for navigation.
One important feature of Omega/VLF receivers is the difference in the algo-
rithms for processing of Omega and VLF signals. Some of these distinctions
arise from inherent differences in the two transmitting systems. For example,
since the stations in the VLF network are not synchronized (although the car-
rier signals are synthesized from precise standards), no receiver acquisition of
a time-frequency pattern is required as for Omega signals. This also means
that signal phase from different stations cannot be compared (in an absolute
sense) to determine position. Because the received VLF signal is generally sta-
ble in time, VLF navigation requires an initial calibration to permit subsequent
phase tracking of the signals from selected VLF stations. Accurate phase track-
ing requires an on-board precise frequency standard or a correction based on
an estimate of the frequency/time offset of the receiver's internal clock. This
estimate is usually obtained from Omega signal processing in the rho-rho-rho
mode.
In addition to internal differences in signal processing, signals from the two
systems are processed differently regarding external information. For example,
all known Omega/VLF receivers use externally supplied PPCs to correct the
measured Omega phase prior to navigation use, whereas few, if any, currently
operational receivers correct VLF signal phase measurements. This means that,
for most receivers, the received VLF signal phase is not accurately related to
distance over the Earth's surface, a problem that is not necessarily amelio-
rated by redundant measurements. External deselection data regarding modal
and long-path signals are available for Omega but not for VLF. Failure to de-
select VLF modal signals is potentially a more serious problem for navigation
than the lack of VLF PPCs, since modal phase excursions can be large and
sudden, often resulting in cycle slips or advances.
As a result of the signal-processing differences, due to the internal and exter-
nal information bases, receiver-processing algorithms treat Omega and VLF sig-
nals differently. Once acquired and initialized, Omega signal-processing alone is
robust and will fail only under unusual circumstances (e.g., cycle shifts or fewer
than three signals above the minimum S/N). VLF signal-processing schemes
generally rely on the presence of Omega signals and other aids in the receiver's
navigation filter. In most receivers VLF signals are closely monitored with
frequent cross-consistency checks. Normally, Omega/VLF receivers are programmed to exclude initialization with VLF signals alone, since this represents a "degraded" mode. Current FAA certification procedures require that an Omega/VLF receiver system operate satisfactorily with Omega signals alone.

Transmitting Station Characteristics The transmitting equipment at each station is generally described as belonging to one of three functional groups, or subsystems:

1. Timing and control subsystem
2. Transmitter subsystem
3. Antenna-tuning subsystem

The principal functions of the timing and control subsystem are signal gener-
ation and phase control. The signal source is a precision cesium beam frequency
standard of 9.193 GHz with a stability of 5 parts in 10¹². Three cesium standards are used for frequency drift comparison and control, and are maintained
as reserves in the event of failure of the on-line standard. Phase control is main-
tained by comparing the RF signal phase to the phase of the antenna current
reference signal fed back from the antenna tuning subsystem. The signal phase
is advanced or retarded to insure that its phasing at the antenna coincides with
the appropriate UTC epoch.
The transmitter subsystem consists of those devices that amplify the signal
generated in the timing and control subsystem. The RF signal from the timing
and control subsystem is first raised to a level of 160-V RMS by the input
amplifier. The driver amplifier further raises the signal level to a nominal 520-
V RMS, and the final amplification is performed by the power amplifier that
boosts the signal voltage and current to a peak power of 150 kw. Following
this final amplification stage, the signal is fed to the antenna tuning subsystem.
The antenna-tuning subsystem is designed to tune the antenna at the RF sig-
nal frequency by impedance matching the antenna to the input circuit. This
ensures the maximum effective radiated power at the antenna for a given input
signal power. Based on the long keying pulses from the timing and control
subsystem and the current samples received from the current transformer, the
antenna-tuning control first implements fine inductive tuning through the var-
iometers. The antenna-tuning control signal activates a mechanical drive that
moves the variometer coil to the appropriate position for matching impedance.
The long keying pulses activate antenna relays that connect the appropriate var-
iometer into the main antenna circuit. The RF signal is then transferred to the
"helix," a large helical coil that acts as a coarse tuning device for the antenna.
The helix is equipped with separate taps for each signal frequency transmitted.
Finally, the RF signal is conducted to the antenna structure itself from which the
signal is radiated. The structural feature which principally differentiates Omega
stations is the antenna structure. Two basic designs are utilized: tower and val-
ley span. The tower antennas are further classified as either the grounded or
insulated type.

Receiver Characteristics Narrow-band Omega signals and noise from all


sources (including harmonic interference) are received at the £-field (probe) or
H-field (loop) antenna having a bandwidth of about 4kHz. The electromagnetic
energy is passed to the detector stage of the Omega receiver for conditioning.
Filtering is performed at the front end of the detector, reducing the bandwidth
to about 100 Hz. The signal is also amplitude limited at this stage to prevent
swamping from large impulsive noise spikes. In successive stages of the detec-
tion process, the signal is either processed at its original frequency (tuned RF)
or mixed with a reference signal to produce a lower-frequency signal (hetero-
dyning). In either case, the signal is compared with a reference signal having
sufficient stability over one or more 10-sec Omega format periods.
In most modern Omega receivers, the signal is acquired and tracked by
means of digital techniques. With these techniques, phase is usually measured
as the interval between a reference clock pulse and the next zero crossing of
the input signal in units of clock cycles. 3 A digital phase tracking technique
used in many Omega receivers is the phase lock loop in which the reference
phase is shifted by an amount that depends on the previous phase measure-
ment and the time-averaged phase computed at the previous measurement time.
The time-averaging refers to a moving average that differs from the average at
the previous loop cycle by a weighted value of the previous measurement. A
second-order phase lock loop is designed to track the time rate of change of
phase in a manner similar to that of the first-order loop.
Like most signal-tracking circuitry, the basic function of the phase lock loop
is to reduce the effective bandwidth (inversely proportional to the effective time
constant) so as to best reproduce the desired signal. For aircraft receivers, time
constants typically range from 100 to 200 sec. Shorter time constants do not
provide sufficient averaging or noise rejection, and longer time constants may
exceed the time required for aircraft maneuvers, such as sharp turns. Since the
duty cycle for each of the common frequency Omega signals is 10%, the effective
phase measurement time constant is 10 to 20 sec. Using standard assumptions
[24], these time constants correspond to noise equivalent bandwidths of
0.025 to 0.013 Hz. When compared to the input bandwidth of 100 Hz, these
narrow output bandwidths correspond to gains of better than 35 dB. Thus, signals
with S/N as low as -20 to -30 dB in the 100-Hz receiver input bandwidth
can be effectively utilized in aircraft Omega receivers.
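The bandwidth and gain figures quoted above can be checked numerically. The short sketch below (not from the original text) assumes the common first-order-loop relation B_n ≈ 1/(4τ) between the effective time constant and the noise-equivalent bandwidth; the function name and the printout format are the author's own choices.

import math

def omega_loop_gain(loop_tau_sec, duty_cycle=0.10, input_bw_hz=100.0):
    """Return (effective time constant, noise-equivalent bandwidth in Hz, gain in dB)."""
    tau_eff = loop_tau_sec * duty_cycle            # signal present only 10% of the time
    noise_bw = 1.0 / (4.0 * tau_eff)               # assumed first-order-loop relation
    gain_db = 10.0 * math.log10(input_bw_hz / noise_bw)
    return tau_eff, noise_bw, gain_db

for tau in (100.0, 200.0):                         # typical aircraft-receiver time constants
    tau_eff, bn, gain = omega_loop_gain(tau)
    print(f"tau = {tau:5.0f} s: tau_eff = {tau_eff:4.0f} s, B_n = {bn:.4f} Hz, gain = {gain:.1f} dB")

Running this reproduces the 0.025- to 0.013-Hz bandwidths and the roughly 36- to 39-dB gains quoted in the text.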
After the signal phase measurements are made, PPCs are computed using an
appropriate model/algorithm and added to the measured phase to produce an
"idealized" phase value that can be readily used in the subsequent positioning
calculations. Although the PPCs require receiver position as an input, the PPCs
are not sensitive to precise position, since they vary less than 0.05 cycle over
ranges of 50 to 100 km. Thus, the PPCs can be accurately computed from only
approximate knowledge of position.
Before determining position, the (idealized) signal phases are usually
weighted based on the expected relative accuracy of the phase measurement.
This accuracy is most commonly determined by the estimated S/N, which, for
phase lock loop receivers, is closely related to the rms loop error. If the estimated
S/N is below a preset threshold, such as -30 dB in a 100-Hz bandwidth,
the signal phase is usually excluded (given zero weight) in the position solution.
In addition to these weighting and exclusion procedures based on internally
derived S/N data, the signal phases are edited by invoking external information
concerning the signals. External information usually refers to signal deselection
data that are generally extracted from known coverage information, including
modal "maps" and data on the occurrence of long-path signals. The resulting
signals that are not deselected or excluded are further screened for acceptable
geometry. In some receiver mechanizations, all common frequency signals from
a station must be acceptable to be used in the position fix; in others, only a single
acceptable signal frequency from a station is necessary for inclusion in the
fix algorithm. Position change estimates are then formed from the weighted
and edited Omega signal phase data at the common Omega signal frequencies.
The estimates are computed by means of a least-squares or Kalman estimation
technique (see Chapter 3). The Omega-based calculation of position change
is frequently combined with the aircraft-supplied true airspeed and heading or
inertial system information to furnish the best position estimate.

³The reference clock/oscillator commonly has a frequency of 1 to 5 MHz but may be converted
to a lower frequency.
An airborne Omega receiver block diagram is shown in Figure 4.39; a pho-
tograph of an Omega/VLF receiver for commercial aviation applications is
depicted in Figure 4.40.

4.5.3 Decca
Hyperbolic systems other than Loran and Omega exist and are used for nav-
igation. One such example is the Decca system [31] developed by the British
and used extensively during the later stages of World War II. In 1996, its major
area of implementation is in northwestern Europe where it is primarily used by
shipping companies.
Decca is based on the measurement of differential arrival times (at the vehicular
receiver) of transmissions from two or more synchronized stations (typically
70 mi apart). As an illustration, consider two stations (A and B) 10 mi apart,
each radiating synchronized radio-frequency carriers of 100 kHz. Assume
that there is some way by which each station can be identified. The wavelength
at this frequency is 3000 meters, or about 2 mi. On a line between the stations,
the movement of a vehicle D one mile toward one station and one mile away
from the other station will cause the vehicle to traverse one cycle of differential
radio-frequency phase. There will, therefore, be 10 places along the line AB
where the signals from the two stations will be in phase. As the vehicle moves
laterally away from this line, isophase LOPs can be formed (each line being a
hyperbola) with the stations as foci and BD - AD as a constant for each LOP.
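The lane geometry in this two-station example can be restated numerically. The sketch below is illustrative only: it uses an approximate statute-mile speed of light, and the "number of lanes" is simply the baseline divided by the half-wavelength lane width (the text's rounded 2-mi wavelength gives exactly the 10 in-phase places quoted).

C_MI_PER_SEC = 186_000.0          # approximate free-space propagation, statute miles per second

def decca_lanes(baseline_mi, freq_hz):
    """Lane width and lane count along the baseline of a two-station phase-comparison pair."""
    wavelength_mi = C_MI_PER_SEC / freq_hz
    lane_width_mi = wavelength_mi / 2.0     # adjacent isophase lines are half a wavelength apart on the baseline
    n_lanes = baseline_mi / lane_width_mi
    return wavelength_mi, lane_width_mi, n_lanes

wl, lane, n = decca_lanes(baseline_mi=10.0, freq_hz=100e3)
print(f"wavelength ~{wl:.2f} mi, lane width ~{lane:.2f} mi, ~{n:.0f} lanes along the baseline")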
Site error virtually vanishes in such a system, and the accuracy depends
entirely on the constancy of propagation between the stations and the vehicle.
In an effort to avoid line-of-sight limitations, Decca uses a low frequency (70
to 130 kHz), which is subject to sky-wave contamination, and uses continuous
waves, which preclude the separation of ground waves from sky waves. Thus,
despite the low frequency (whose ground-wave range is on the order of 1000
mi), practical Decca coverage is limited to areas where sky-wave strength does
not exceed about 50% of ground-wave strength. This is typically 200 mi.
A typical Decca chain consists of a master station and three slave stations.
A typical station has a 2-kW crystal-controlled transmitter feeding a 300-ft
antenna. The slave stations are referred to by the color of the phase meter asso-
ciated with each at the receiver. Each station transmits a stable continuous wave
frequency that bears a fixed relationship to the frequencies of the other three
stations. Phase comparison therefore produces a family of hyperbolic LOPs of
[Block diagram: antenna input with atmospheric, local, and internal noise; signal conditioning (gain, limiting, filtering); phase tracking loop with reference phase, editing/weighting, and aiding (heading, TAS); position and time estimates output to CDU, instruments, autopilot, and mission computer.]
Figure 4.39 Airborne Omega receiver block diagram.
Figure 4.40 Typical airborne Omega/VLF receiver.

constant phase. The spaces between these lines are called lanes. The intersection
of two LOPs provides a position fix.

4.5.4 Chayka
Chayka (meaning "sea gull") is a pulse-phase radio-navigation system similar
to the Loran-C system. It is used in Russia and surrounding territories and seas.
By using ground waves at low frequencies, the operating range is over 1000 mi;
by using pulse techniques, sky-wave contamination can be avoided. The system
is designed to provide both a means of determining an accurate user position
and a source of high-accuracy time signals. The system can support an unlimited
number of users, since the computations are performed at the user receiver
and position determination is possible at any time of day or year, regardless of
meteorological conditions.

Principles and System Configuration Chayka is analogous to the Loran-C
system employed in the United States and other parts of the world (see Section
4.5.1). Chayka is a low-frequency pulse-phase radio-navigation system with
a pulse-modulated frequency of 100 kHz. Contrary to the multiple chains of
Loran-C, the Chayka system consists of only two "networks" of stations, each
with a master and four slave stations. Receivers measure the time difference
between the arrivals of a given wave form from the master and any particular
slave station. This time-difference information can then be converted into position,
velocity, and time and frequency reference information. Additional processing
can produce bearing, distance, and along-track and cross-track errors.

Each of the stations in the Chayka networks transmits pulses with standard
characteristics. The pulse consists of a 100-kHz carrier wave that increases from
zero to a maximum and then decays at a specific rate to form the envelope of
the signal. All slave stations transmit signals in packets of eight pulses; the
masters emit a ninth pulse for identification. The interval between pulse onsets
is 1.0 ± 0.05 μsec. In addition, in order to provide the possibility for automatic
detection and identification of signals and to reduce the influence of multiply
reflected signals, the signals are phase-coded, with the slaves all having the same
phase (i.e., 0° or 180°) and the master phase differing by exactly 180°.
The repetition periods of the radio signals are selected based on a trade-off:
maximizing the average power of the signals at the receivers while preventing
any signal overlap within the network operating region. Since all slave stations
transmit signals with identical phase codes, each slave station transmits with
its own specific code delay relative to the master signal. The magnitudes of the
code delays are selected such that the order of reception of slave station signals
is identical everywhere within the network operating region.

Wave Form and Signals in Space Each station transmits signals with stan-
dard pulse modulation characteristics. Each pulse consists of a 100-kHz carrier
wave modulated by an envelope that depends on the specific transmitting sta-
tion equipment. Two types of radio transmitter (RT) stations are currently in
use: those with vacuum tubes and those using impact excitation of the linear
output circuit. The envelope of the vacuum tube RTs can be approximated by

U(t) = U_m [ (t/t_m) e^(1 - t/t_m) ]²                    (4.12)

where U_m is the pulse amplitude and t_m is the time interval from the onset to
the peak of the pulse.
The envelope of the impact excitation RTs can be approximated by

U(t) = U_max [ (sin βt / β) e^(-αt) ]^m                    (4.13)

where m = 1 or 2 for two or three coupled circuits, respectively, including the
circuit of the transmitting antenna, and α and β are approximation parameters
chosen to provide a given steepness of growth and speed of attenuation of the
radio wave envelope. Existing impact excitation RTs use linear circuits consisting
of three coupled circuits (i.e., m = 2).
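The two envelope approximations can be evaluated directly. The sketch below is only illustrative: the values chosen for t_m, α, and β are placeholders, not published station parameters.

import math

def envelope_vacuum_tube(t, u_m, t_m):
    """Eq. 4.12: U(t) = U_m * [ (t/t_m) * exp(1 - t/t_m) ]**2"""
    return u_m * ((t / t_m) * math.exp(1.0 - t / t_m)) ** 2

def envelope_impact_excitation(t, u_max, alpha, beta, m=2):
    """Eq. 4.13: U(t) = U_max * [ (sin(beta*t)/beta) * exp(-alpha*t) ]**m"""
    return u_max * ((math.sin(beta * t) / beta) * math.exp(-alpha * t)) ** m

# Illustrative values only: unit amplitude, envelope peak assumed at t_m = 65 microseconds.
for t_us in (20, 65, 130, 200):
    t = t_us * 1e-6
    print(t_us, envelope_vacuum_tube(t, u_m=1.0, t_m=65e-6))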

Coverage In 1996 the Chayka system consisted of only two networks, Euro-
pean and Eastern. Coverage from the European network is centered near
Moscow and includes most of the area between 5° and 50° East longitude and
40° and 65° North latitude (e.g., Eastern Europe, the western portion of the former
Soviet Union, the Black Sea, and part of the Caspian Sea). Coverage from the
Eastern network includes most of the area between 135° and 160° East longitude
and 35° and 65° North latitude (e.g., the eastern shoreline of the former
USSR, portions of Japan, and the surrounding areas of the Pacific Ocean).

4.6 FUTURE TRENDS

Terrestrial radio-navigation systems will continue to play a major role for aircraft
navigation throughout the world for many years. Since the U.S. satellite-based
GPS had achieved full operational capability (FOC) in 1995, followed
in 1996 by the Russian GLONASS, there had been expectations that these
satellite systems would quickly replace the terrestrial systems such as VOR,
VOR/DME, Loran, and Omega. However, this was not the case and is not
likely to occur in the near future. The reasons for this include (1) the widespread
implementation of equipment by aircraft owners and the cost of replacement by
satellite receivers, (2) the lack of available air-traffic management operational
procedures compatible with satellite-based systems, (3) the absence of full sole-
means navigation system status of GPS, and (4) the fact that issues involving
system accuracy, integrity, availability, and continuity of service of the satellite
systems had not been fully resolved (Chapters 5, 13, and 14). Therefore,
the terrestrial radio-navigation systems will continue to be used for many years
on a global basis. In the more distant future, some of these systems will be
decommissioned when their utility will have been fully replaced by that of the
satellite systems.
By 1993, the U.S. Coast Guard had implemented full coverage of the conti-
nental United States by Loran-C chains and the FAA had authorized Loran-C
for supplemental navigation for en-route and nonprecision approaches. At least
ten U.S. airports had received approval for Loran-C approaches. As a result,
there was extensive use of airborne Loran-C receivers on U.S. General Aviation
aircraft and that usage is likely to continue for some time until GPS receivers
are widely implemented on General Aviation aircraft.
Since 1990, a number of major studies have been conducted and published
that show the advantages and discuss techniques of combining data from Loran
and GPS for aircraft navigation [45, 46, 47]. Among the major advantages are
the mitigation of the effects of GPS coverage outages caused by satellite shutdowns
or poor geometry and, conversely, that of Loran coverage outages due to
ground station shutdowns, high atmospheric noise levels, or precipitation static.
In addition, Loran data could provide on-board fault detection and isolation of
GPS satellites, in connection with GPS Receiver Autonomous Integrity Monitoring
(RAIM, Section 5.7.2). The combining of GPS and Loran data (e.g.,
with a Kalman filter) can be at the pseudorange level, and mutual time synchronization
can also be included [46]. Therefore, research and development on the
integration of Loran and GPS is likely to continue in the future.

In 1995, a considerable number of countries had indicated that they would
continue or increase use of Loran-C. These included countries in Northern
Europe, the Mediterranean and the Far East, including Russia, France, Saudi
Arabia, the People's Republic of China, and India. Also, Russia promoted the
use of their Chayka and their VLF Radio-Navigation System (called Alpha) for
civil use.
Cooperative efforts between the United States and Russia resulted in imple-
menting a Loran-C/Chayka chain to provide aviation and marine coverage over
the five-hundred-mile coverage gap that existed in the North Pacific, between
the North Pacific Loran chain and the Northwest Pacific and Eastern Russian
chains [41]. These cooperative efforts are likely to continue in the future.
Several options were being considered to integrate the Russian Alpha and the
Omega system [28], and this effort may also continue for some time. However,
direct participation by the United States in Omega may terminate in the not too
distant future, possibly as early as September 1997.

PROBLEMS

4.1. A 1000-MHz DME transponder on the ground is triggered by a signal
10 dB above its receiver-noise level. The receiver-noise figure is 10 dB.
What transmitter power is needed on an aircraft to produce triggering from
a distance of 100 nm? Assume simple dipoles at each end of the link, no
transmission-line losses, a transponder-receiver bandwidth of 1 MHz, and
a temperature of 293° Kelvin.
Ans.: 8 W.
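One way (not from the text) to check this answer is a free-space link budget with half-wave-dipole gains of 1.64; rounding of the constants easily moves the result between roughly 8 and 9 W, consistent with the stated answer.

import math

k_boltzmann = 1.38e-23            # J/K
T = 293.0                         # K
B = 1.0e6                         # Hz, transponder receiver bandwidth
noise_figure_db = 10.0
required_snr_db = 10.0            # trigger level above receiver noise
f = 1.0e9                         # Hz
d = 100.0 * 1852.0                # 100 nm in meters
g_dipole = 1.64                   # assumed gain of a simple half-wave dipole

wavelength = 3.0e8 / f
noise_power = k_boltzmann * T * B * 10 ** (noise_figure_db / 10.0)
required_rx_power = noise_power * 10 ** (required_snr_db / 10.0)

# Friis transmission equation solved for the transmitter power
p_t = required_rx_power * (4.0 * math.pi * d) ** 2 / (g_dipole ** 2 * wavelength ** 2)
print(f"required transmitter power ~ {p_t:.1f} W")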
4.2. What is the principal advantage of an omnidirectional range on the ground
versus a direction-finder in an aircraft?
4.3. What are some techniques used to reduce site errors in directional-antenna
systems?
4.4. What magnetic variation must be used to plot the true course of a VOR
radial?
(a) Variation at the aircraft location
(b) Variation shown on the chart for the VOR station location
(c) Variation used to calibrate the VOR station by maintenance personnel.
Ans.: c.
4.5. How does magnetic variation affect the accuracy of a Loran-C position?
Ans.: It has no effect.
4.6. What variation should be used to plot an initial course from an aircraft's
Loran-C position to an airport 200 mi away?
Ans.: The magnetic variation shown on the chart for the aircraft's position.

4.7. What are the factors that impact the accuracy of a Loran-C fix?
Ans.: Signal-to-noise ratio at the receiver, crossing angle of the
Loran-C lines of position, calibration of the Loran-C time difference
to latitude/longitude coordinate converter.
4.8. What are the two categories of Loran-C system accuracy? What do they
mean?
Ans.: Repeatable accuracy to which one can return to a point vis-
ited before; absolute or predictable accuracy of the fix against some
external reference grid such as latitude and longitude.
4.9. Suppose that an Omega receiver processes four 10.2-kHz signals from
stations with geographic bearing angles (at the receiver) of 31°, 121°, 211°,
and 301°, using a least-squares algorithm to estimate position change.
(a) What is the GDOP?
Ans.: 3/(2√2).
(b) If the signal phase error associated with each of the four stations is
4 cecs, what is the corresponding position error in kilometers?
Ans.: σ_r = 1.25 km.
(c) If one of the station signals becomes unusable (e.g., due to modal interference
as the path becomes dark), by what factor is the position accuracy
degraded?
Ans.: σ_r becomes larger by a factor of 4/3.
5 Satellite Radio Navigation

5.1 INTRODUCTION

Since the 1960s, the use of satellites was established as an important means of
navigation on Earth. The earliest systems were designed primarily for position
updates of ships, but were also found useful for the navigation of land vehicles.
Beginning in the early 1970s, satellite-navigation systems for aircraft (as well
as other platforms) were under intense development. Those efforts benefited
from the techniques used and the experience gained with the earlier systems.
In the 1980s, systems suitable for aircraft became mature and by 1996 their use
for aircraft navigation was increasing at a widespread and rapid pace.
The satellite-navigation systems described in this chapter are comprised of a
system of satellites that transmit radio signals. Appropriately equipped aircraft
receiving these transmitted signals can derive their three-dimensional position
and velocity and time. Two systems are described in detail, namely the U.S.
Department of Defense's NAVSTAR Global Positioning System (GPS) and the
Russian Federation's Global Orbiting Navigation Satellite System (GLONASS).
The International Civil Aviation Organization (ICAO) and RTCA, Inc. have
defined a more global system that includes these two systems, geostationary
overlay satellites, along with any future satellite navigation systems, in what
has been named the Global Navigation Satellite System (GNSS) [1, 2]. A third
major system, the United States Navy's Transit System, also called the Navy
Navigation Satellite System (NNSS), is a low-altitude Doppler satellite radio
navigation system. In Russia, a similar system was developed, called Tsikada.
Since GPS was fully operational, after 32 years the U.S. Navy ceased operations
of Transit on December 31, 1996 [124]. It will not be discussed here further.
(Design details are given in references [3] and [4].)
The systems described in this chapter provide users with a passive means
of navigation; that is, there is no requirement for their equipment to transmit,
only to receive. Both GPS and GLONASS are ranging systems. They provide
both range and range rate (or change in range) measurements. Once initialized,
they provide an instantaneous and continuous navigation solution in a dynamic
environment. Details of these solutions are described later in this chapter.
The advantage of satellite navigation systems is that they provide an accurate
all-weather worldwide navigation capability. The major disadvantages are that
they can be vulnerable to intentional or unintentional interference and tempo-
rary unavailability due to signal masking or lack of visibility coverage. In some

critical applications, external augmentation is required. Various means of
augmentation are also described later in this chapter.

5.1.1 System Configuration


The overall system configurations of these two systems are similar. Both consist
of three system segments: a space segment, a control segment, and
a user segment (Figure 5.1). The space segment is comprised of the satellite
constellation made up of multiple satellites. The satellites provide the basic
navigation frame of reference and transmit the radio signals from which the
user can collect measurements required for his navigation solution. Knowledge
of the satellites' position and time history (ephemeris and time) is also required
for the user's solutions. The satellites also transmit that information via data
modulation of the signals.
The control segment consists of three major elements:

1. Monitor stations that track the satellites' transmitted signals and collect
measurements similar to those that the users collect for their navigation.
2. A master control station that uses these measurements to determine and
predict the satellites' ephemeris and time history and subsequently to
upload parameters that the satellites modulate on the transmitted signals.

Figure 5.1 Satellite radio-navigation system configuration.



3. Ground antennas that perform the upload and general control of the satel-
lites.

A secondary, but very important, purpose of these elements is to monitor the
satellites' health and to control their operation. These elements are connected
via appropriate communication links, which themselves may be via communi-
cation satellites. Normally, orbit injection of the satellites is controlled via an
independent system.
The user segment is comprised of the receiving equipment and processors
that perform the navigation solution. These equipments come in a variety of
forms and functions, depending upon the navigation application. Details of the
user equipment appear later in this chapter.

5.2 THE BASICS OF SATELLITE RADIO NAVIGATION

The concept of satellite radio navigation is illustrated in Figure 5.2. Although
types of user equipment may differ, they all solve a basic set of equations for
their solutions, using the ranging and/or range rate (or change in range) mea-
surements as inputs to a least-squares, a sequential least-squares, or a Kalman

Figure 5.2 Ranging satellite radio-navigation solution.

filter algorithm. The measurements are not range and range rate (or change
in range), but quantities described as pseudorange and pseudorange rate (or
change in pseudorange). This is because they contain errors, dominated by
timing errors, that are part of the solution. For example, if only ranging type
measurements are made, the actual measurement is of the form

PR_i = R_i + c(Δt_si - Δt_u) + ε_PRi                    (5.1)

where PR_i is the measured pseudorange from satellite i, R_i is the geometric
range to that satellite, Δt_si is the clock error in satellite i, Δt_u is the user's
clock error, c is the speed of light, and ε_PRi is the sum of various correctable
or uncorrectable measurement errors. These errors and/or corrections are
comprised of atmospheric delays, the Earth's rotation correction, and multipath and
receiver noise. These errors will be discussed in more detail later in this chapter.
Equations for pseudorange rate or change in pseudorange are derived from Equation
5.1 by differentiating or differencing.
Neglecting for the moment the clock and other measurement errors, the range
to satellite i is given as

R_i = [(X_si - X_u)² + (Y_si - Y_u)² + (Z_si - Z_u)²]^(1/2)                    (5.2)

where X_si, Y_si, and Z_si are the Earth-centered, Earth-fixed (ECEF) position
components of the satellite at the time of transmission and X_u, Y_u, and Z_u are the
ECEF user position components at that time. For the three satellites in Fig-
ure 5.2, Equation 5.2 represents the equations for spheres whose centers are
located at the satellites. The user position is the reasonable intersection of the
three spheres. (There is another solution, but not near the Earth.)

5.2.1 Ranging Equations


Equation 5.2 is obviously nonlinear. The standard solution is a linearization of
that equation, resulting in

δR_i = R_im - R̂_i = l_xi δX_u + l_yi δY_u + l_zi δZ_u = 1_i · δX_u                    (5.3)

where δR_i is the range measurement residual, R_im is the range measurement
to satellite i, R̂_i is the estimated range to satellite i, and l_xi, l_yi, and l_zi are the
components (directional cosines) of the estimated line-of-sight (LOS) unit vector
1_i between the user and satellite i. δX_u, δY_u, and δZ_u are the components of
the vector δX_u of differences between the position solution X_u and the estimated
position X̂_u. This vector represents an offset from the intersection of the three
spheres. Figure 5.3 illustrates this linearization in two dimensions, where the
inner circles represent the measured ranges and the outer circles represent the
"computed" ranges based on the estimated position. The shaded areas represent
the range measurement residuals.
Solving three equations representing range measurement residuals from three
satellites gives a solution for the position correction vector, provided that the
geometry is sufficient (i.e., the solution exists). If the differences are large, as in
the exaggerated example of Figure 5.3, so that they exceed the range of the lin-
earization, an iterative solution is generally required. This can be accomplished
by using either the same set of measurements or subsequent measurements,

Figure 5.3 Navigation solution linearization.



where the user position in the computation of new LOS vectors is propagated
from the previous solution.
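The iteration just described can be sketched in a few lines of code. The example below is not from the text: it assumes error-free range measurements, a known user clock, and invented satellite and user positions, and it solves Equations 5.2 and 5.3 by repeated linearized least squares.

import numpy as np

def solve_position(sat_pos, measured_ranges, x0, iterations=6):
    """Iterate delta_R_i = 1_i . delta_X_u from an initial ECEF estimate x0 (meters)."""
    x = np.array(x0, dtype=float)
    for _ in range(iterations):
        diff = sat_pos - x                         # vectors from the estimate to each satellite
        est_ranges = np.linalg.norm(diff, axis=1)  # R-hat_i of Eq. 5.2
        los = diff / est_ranges[:, None]           # unit LOS vectors 1_i
        residuals = est_ranges - measured_ranges   # computed minus measured range
        # residual_i ~ 1_i . (X_u - X-hat_u); solve for the position correction
        delta_x, *_ = np.linalg.lstsq(los, residuals, rcond=None)
        x = x + delta_x
    return x

# Invented geometry: four satellites at a GPS-like orbit radius, user on the Earth's surface.
R_SAT = 26_560e3
c45 = np.cos(np.radians(45.0))
sats = R_SAT * np.array([[1.0, 0.0, 0.0],
                         [c45,  c45, 0.0],
                         [c45, -c45, 0.0],
                         [c45,  0.0, c45]])
user_true = np.array([6_378e3, 0.0, 0.0])
ranges = np.linalg.norm(sats - user_true, axis=1)   # noiseless measurements for the sketch
print(solve_position(sats, ranges, x0=[0.0, 0.0, 0.0]))   # converges to user_true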

5.2.2 Range-Rate (Change-in-Range) Equations


In a ranging system, the Doppler (range-rate) measurement can be used as a
measure of user velocity. The range-rate measurements can be derived from
Equation 5.1 in three ways: by differentiating Equation 5.2 and linearizing,
or by differentiating or differencing Equation 5.3. Differentiating Equation 5.2
yields

Ṙ_i = 1_i · (V_i - V_u) = [(X_i - X_u)/|X_i - X_u|] · (V_i - V_u)                    (5.4)

where
V_i = [Ẋ_i Ẏ_i Ż_i]^T is the known satellite i velocity vector
V_u = [Ẋ_u Ẏ_u Ż_u]^T is the unknown user velocity vector
X_i = [X_i Y_i Z_i]^T is the known satellite i position vector
X_u = [X_u Y_u Z_u]^T is the unknown user's position vector

Note that Equation 5.4 is also nonlinear because the LOS vector is also a func-
tion of the user's unknown position. However, linearizing about an estimate of
position and velocity yields

δṘ_i = 1_i · δV_u                    (5.5)

where δV_u is the perturbation of user velocity.


Differentiating Equation 5.3 yields

δṘ_i = 1_i · δẊ_u + (d1_i/dt) · δX_u ≈ 1_i · δV_u                    (5.6)

The second term of the equation can be neglected under normal circumstances
because the rate of change of the LOS vector, d1_i/dt, is small, which is due to the
large distance to the satellite.
In some precision landing applications, the measurements may be Doppler
count measurements. Then, change in range can be computed by differencing
Equation 5.3 over a time interval [t_j-1, t_j], resulting in the equation for a change
in range as

ΔδR_ij = δR_ij - δR_i,j-1 = 1_i,j · δX_u,j - 1_i,j-1 · δX_u,j-1                    (5.7)

where the subscript j indicates measurements taken at time t_j. Note that over
short time intervals, this change-in-range measurement can be used to estimate
range rate by dividing by t_j - t_j-1.

5.2.3 Clock Errors


Equations 5.2 through 5.6 are based on the assumption that epoch times in the
satellites and the user equipment are known. The epoch times in the satellites
are generally known to the users to the required accuracy, because the control
segment determines their offsets in time and frequency. These values are sub-
sequently loaded into the satellites and broadcast to the users as part of the
modulated data stream. The user's time error, however, is usually not known to
the accuracy required for a good navigation solution. Thus, if it is included in
Equations 5.2 through 5.6, the term -cΔt_u is added to Equations 5.2 and 5.3,
and the term

-c(Δt_u,j - Δt_u,j-1) = -c (Δf_u/f_0)(t_j - t_j-1)                    (5.8)

is added to Equation 5.7, where Δf_u/f_0 is the fractional frequency offset of
the user equipment oscillator with respect to system time. In the case of the
pseudorange rate measurements of Equations 5.5 and 5.6, the added term due
to clock drift is simply Equation 5.8 divided by the short time interval t_j - t_j-1,
resulting in a solution for the oscillator fractional frequency offset.
The result of these time and frequency errors is that one more linearly inde-
pendent measurement is required for a solution for the one more unknown. The
effect of not doing this is that the spheres of Figure 5.2 or circles of Figure 5.3
no longer intersect at the user, unless the clock error correction is made. A
fourth satellite (or fourth set of measurements) provides the ability to make
this correction.
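With the user clock included, the linearized solution gains a fourth unknown and the geometry matrix a fourth column. The sketch below is illustrative only: the function, geometry, and numbers are invented, and the sign of the clock column follows the -cΔt_u convention noted above.

import numpy as np

def position_clock_step(sat_pos, pseudoranges, x_est, clk_est_m):
    """One Gauss-Newton step for [x, y, z, c*delta_t_u], with the clock bias in meters."""
    diff = sat_pos - x_est
    est_ranges = np.linalg.norm(diff, axis=1)
    los = diff / est_ranges[:, None]
    predicted = est_ranges - clk_est_m                # model: PR_i = R_i - c*delta_t_u
    innovation = pseudoranges - predicted             # measured minus predicted
    H = np.hstack([-los, -np.ones((len(pseudoranges), 1))])   # partials w.r.t. [X_u, c*dt_u]
    correction, *_ = np.linalg.lstsq(H, innovation, rcond=None)
    return x_est + correction[:3], clk_est_m + correction[3]

# Illustrative use with an invented geometry and a 3-km (10-microsecond) clock bias.
R = 26_560e3
c45 = np.cos(np.radians(45.0))
sats = R * np.array([[1, 0, 0], [c45, c45, 0], [c45, -c45, 0], [c45, 0, c45]], dtype=float)
truth, clk_true = np.array([6_378e3, 0.0, 0.0]), 3_000.0
pr = np.linalg.norm(sats - truth, axis=1) - clk_true
x, b = np.array([6_000e3, 100e3, -100e3]), 0.0
for _ in range(5):
    x, b = position_clock_step(sats, pr, x, b)
print(x, b)      # approaches truth and clk_true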

5.3 ORBITAL MECHANICS AND CLOCK CHARACTERISTICS

5.3.1 Orbital Mechanics


All of the equations in Section 5.2 have the satellite's position and velocity
in ECEF coordinates as variables, either directly or as part of the LOS vector.
The linear independence of the equations, which dictates the observability
of the navigation solution, is a function of the relative positions of the satellites
in orbit. Thus, the placement of these orbits in a constellation and the evaluation
of the satellites' positions and velocities is of primary importance, and orbit
mechanics is fundamental to the navigation problem.

Orbital Elements The orbit of an Earth satellite is nominally a plane (Figure
5.4) [5]. The satellite moves in a nearly elliptical orbit in the plane, with
perturbations, in inertial space (Figure 5.5). In the satellite systems described in
this chapter, the orbits are nominally circular, which is simply a special case of
an elliptical orbit. However, to maintain the accuracy of the description of the
orbits, they must be represented as elliptical orbits.
The line of nodes in Figure 5.4 is the intersection of the orbit plane with
the Earth's equatorial plane. The ascending node is the point where the satellite
crosses the equatorial plane from southerly latitude to northerly. The
inclination of the orbit is the angle between the orbital plane and the Earth's
equatorial plane.
Six independent constants are needed to specify the nominal orbit. These can
be the three components of position and velocity at any instant of time, as used
in the equations of Section 5.2, or the classical orbital elements. The advantage
of the latter is that they represent the total orbit, rather than one point in the
orbit, from which the position and velocity at any time can be derived. Three
of these orbital elements are the orientation parameters shown in Figures 5.4
and 5.5:

[Figure shows the orbital plane, its inclination i to the Earth's equatorial plane, the ascending node, the Earth's polar axis, and the vernal-equinox reference direction.]
Figure 5.4 T he orhital plan e.

[Figure shows the elliptical orbit: apogee, perigee, the line of apsides, the ascending node, and the line of nodes.]
Figure 5.5 The elliptical orbit.

1. Geocentric longitude of the ascending line of nodes, Ω (Figure 5.4). This
is the angle in the equatorial plane from some arbitrary reference to the
ascending line of nodes. The reference X_1 is conventionally taken as the
direction of the vernal equinox, but, in practice, it is completely arbitrary
as long as it is appropriately defined for the application. (See Equation
5.17 below.)

2. Inclination of the orbital plane with the equatorial plane, i (Figure 5.4).

3. The argument of perigee ω, which is the angle between the direction of
the ascending node and the direction of perigee measured in the plane of
the orbit (Figure 5.5). The axis of the ellipse connecting the apogee and
perigee is called the line of apsides.

Two of the remaining orbital elements are the dimensional parameters,
namely the semimajor axis of the ellipse, a (Figure 5.5), and the eccentricity
of the orbit, e, where

e = (1 - b²/a²)^(1/2)                    (5.9)

where b is the semiminor axis. Note that for a circular orbit, where the two
axes are equal, the eccentricity is zero. The sixth orbital element is the time
of perigee passage, t_p, measured with respect to some arbitrary time scale. t_p
establishes the phase of the satellite along the geometric path defined by the
other elements.

Useful Orbital Parameters and Equations The six orbital elements describe
the path of the satellite in its unperturbed orbit. To perform a navigation solu-
tion, however, it is usually better to describe the satellite's position and velocity
in ECEF coordinates. The relationship between the six coordinates and satellite
position is as follows [6]:
The mean motion is the average angular rate of the satellite radius vector r.
It is defined as

n = (μ/a³)^(1/2)                    (5.10)

in radians per second, where μ = 3.98605 × 10^14 meters³/sec² is the WGS 84
value of the Earth's universal gravitational parameter [7, 8]. The period of the
orbit is then

P = 2π/n                    (5.11)

in seconds. The mean motion is used to compute the mean anomaly

M = n(t - t_p)                    (5.12)

in radians, which is the basis for the solution of Kepler's equation [6]. Kepler's
equation defines the relationship for the eccentric anomaly, where

E - e sin E = M                    (5.13)

in radians. This relationship is illustrated in Figure 5.5, where E is the angle


between a line from the center of the ellipse to a point projected from the posi-
tion of the satellite to a circumscribing circle perpendicular to the major axis
and the line of apsides. The eccentric anomaly can be used to compute the
instantaneous radius of the orbit from the center of the Earth as

r = a(1 - e cos E) = a(1 - e²)/(1 + e cos ν)                    (5.14)

where ν is the true anomaly in radians. Note from Figure 5.5 that to compute
the cartesian components of the satellite's position (X_p, Y_p) in the orbit plane,
the cosine and sine of the true anomaly are required. They can be computed as

sin ν = (1 - e²)^(1/2) sin E / (1 - e cos E)                    (5.15)

cos ν = (cos E - e) / (1 - e cos E)                    (5.16)

This position in the orbit plane can then be transformed into ECEF coordinates
by performing an Euler transformation through the orientation parameters ω, i,
and Ω, in that order. Note, however, that Ω is not constant, because the Earth
is rotating. It varies from some predefined epoch value Ω_0 (right ascension) at
some time t_0 as

Ω = Ω_0 - Ω̇_e(t - t_0)                    (5.17)

in radians, where Ω̇_e is the Earth's rotational rate, which has a WGS 84 value
of 7.2921151467 × 10^-5 rad/sec [7, 8].
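The chain of Equations 5.10 through 5.17 can be followed step by step in code. The sketch below is illustrative only: the orbital-element values, iteration count, and function name are assumptions, not a real ephemeris or the operational algorithm.

import numpy as np

MU = 3.98605e14                  # m^3/s^2, Earth's gravitational parameter (WGS 84)
OMEGA_E_DOT = 7.2921151467e-5    # rad/s, Earth rotation rate (WGS 84)

def satellite_ecef(a, e, i, omega, Omega0, t_p, t, t0=0.0):
    n = np.sqrt(MU / a**3)                    # Eq. 5.10, mean motion
    M = n * (t - t_p)                         # Eq. 5.12, mean anomaly
    E = M
    for _ in range(10):                       # Eq. 5.13, Kepler's equation by fixed-point iteration
        E = M + e * np.sin(E)
    sin_v = np.sqrt(1 - e**2) * np.sin(E) / (1 - e * np.cos(E))   # Eq. 5.15
    cos_v = (np.cos(E) - e) / (1 - e * np.cos(E))                 # Eq. 5.16
    r = a * (1 - e * np.cos(E))               # Eq. 5.14, orbit radius
    x_p, y_p = r * cos_v, r * sin_v           # position in the orbital plane
    Omega = Omega0 - OMEGA_E_DOT * (t - t0)   # Eq. 5.17, node longitude in ECEF
    # Euler rotation through omega, i, and Omega, in that order
    cw, sw = np.cos(omega), np.sin(omega)
    ci, si = np.cos(i), np.sin(i)
    cO, sO = np.cos(Omega), np.sin(Omega)
    R = np.array([[cO*cw - sO*sw*ci, -cO*sw - sO*cw*ci,  sO*si],
                  [sO*cw + cO*sw*ci, -sO*sw + cO*cw*ci, -cO*si],
                  [sw*si,             cw*si,             ci   ]])
    return R @ np.array([x_p, y_p, 0.0])

# Example: a GPS-like circular orbit, element values chosen only for illustration.
print(satellite_ecef(a=26_559.8e3, e=0.0, i=np.radians(55.0),
                     omega=0.0, Omega0=np.radians(272.847), t_p=0.0, t=3600.0))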

The Perturbed Orbit If the Earth were a spherically symmetrical body in


empty space, the parameters described above would stay fixed. However, other
forces, called perturbations, cause the orbital plane to rotate and oscillate and
the satellite's path to vary from its elliptical path. These forces include (1)
spherically asymmetrical components of the Earth's gravitational field, (2) luni-
solar perturbations, (3) air drag, and (4) magnetic and static-electric forces.
Other perturbations are due to the variations in the Earth's rotation rate and
polar wander. Such perturbations do not affect the orbit in inertial space but
rather the orientation parameters as they relate to ECEF coordinates.
The effects of these forces vary, depending upon the altitude of the orbit. For
example, higher orbits are less affected by the Earth's gravitational field vari-
ations and air drag than the lower orbits are, but are more affected by forces
such as lunar and solar gravity and solar radiation pressure. In any case, since
the reference coordinate system is ECEF, the Earth's gravitational field varia-
tions are the most significant, of which the second zonal harmonic is by far the
most dominant. If a truly inertial coordinate system were used, the solar gravity
would be the dominant perturbation force. The gravity potential of the Earth
developed in zonal harmonics is given by [9]

U(r, λ, φ) = (μ/r)[1 + Σ_{n=2}^{∞} (A_E/r)^n Σ_{m=0}^{n} (C_n,m cos mλ + S_n,m sin mλ) P_n,m(sin φ)]                    (5.18)

where
A_E is the WGS-84 semimajor axis of the Earth's ellipsoid = 6378.137 km [9]
n, m are degree and order
φ, λ are geocentric latitude and longitude
P_n,m(sin φ) are Legendre polynomials
C_n,m, S_n,m are geopotential coefficients

Neglecting the effects of longitude (m = 0), which are relatively small compared
to the effects of latitude, the gravity potential due to the second zonal harmonic
is [6]

U_2(r, φ) = (μ A_E² C_2,0 / r³)[(3/2) sin² φ - 1/2]                    (5.19)

where the WGS-84 value for C_2,0 is -1.08263 × 10^-3 [9].


The radial and meridional gravity forces of the second zonal harmonic are
then

(3 μ A_E² C_2,0 / 4r⁴)(3 cos 2φ - 1)                    (5.20)

(3 μ A_E² C_2,0 / 2r⁴) sin 2φ                    (5.21)

Note that the radial force has two components: a constant that adds to the
nominal gravitational force and one that oscillates as a function of the satellite's
latitude φ, where

φ = sin⁻¹[sin(ν + ω) sin i]                    (5.22)

based on the argument of latitude, the argument of perigee, and the inclination
angle of the satellite's orbit. Note that the period of the oscillating force is one-
half the orbit's period. The force in the direction of latitude also oscillates with
latitude.

These oscillating perturbations are important for defining parameters to
represent the satellites' orbits for the users.
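The second-zonal-harmonic terms of Equations 5.20 through 5.22 are easy to evaluate numerically. The sketch below uses the WGS-84 constants quoted in the text; the function names and the example latitude and orbit radius are the author's own illustrative choices.

import numpy as np

MU = 3.98605e14          # m^3/s^2
A_E = 6378.137e3         # m, semimajor axis of the Earth ellipsoid
C20 = -1.08263e-3        # second zonal harmonic coefficient

def j2_forces(r, lat):
    """Radial and meridional specific-force terms (m/s^2) of Eqs. 5.20 and 5.21."""
    radial = 3.0 * MU * A_E**2 * C20 * (3.0 * np.cos(2.0 * lat) - 1.0) / (4.0 * r**4)
    meridional = 3.0 * MU * A_E**2 * C20 * np.sin(2.0 * lat) / (2.0 * r**4)
    return radial, meridional

def geocentric_latitude(true_anomaly, arg_perigee, inclination):
    """Eq. 5.22: latitude from the argument of latitude and the inclination."""
    return np.arcsin(np.sin(true_anomaly + arg_perigee) * np.sin(inclination))

r = 26_559.8e3                                        # GPS-like orbit radius, m
lat = geocentric_latitude(np.radians(45.0), 0.0, np.radians(55.0))
print(j2_forces(r, lat))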

5.3.2 Clock Characteristics


As indicated in Section 5.2, the candidate measurements used in the naviga-
tion solution include terms representing the satellite and user clock time offsets
and/or clock drift. Thus, the characteristics of these clocks are important as they
are a potential navigation error source in two ways. First of all, the navigation
message from a satellite includes parameters describing the satellite's clock off-
set and drift, which are predicted by the control segment. Any instability in the
satellite's clock causes this prediction to be in error, thus resulting in range and
range rate errors that degrade the user's navigation solution. Second, the user's
own clock may drift between, or during, navigation solution updates, which
also results in a solution degradation. This degradation is due to an estimation
of a "moving target," preventing averaging of the clock solution.
Clock errors are characterized in terms of a time offset, a frequency off-
set, aging (frequency drift), and measures of clock instability. Time offset, fre-
quency offset, and sometimes frequency drift are part of the solution, whether
it be the control segment's solution and prediction of the satellite clocks or the
user's solution for his own time and frequency offset. Clock instability, on the
other hand, hampers the capability to perform these functions. The most com-
mon measure of clock instability is in the form of the square root of an Allan
variance [10], which is defined as

σ_y(τ) = [ (1/(2(M-1))) Σ_{k=1}^{M-1} (y_{k+1} - y_k)² ]^(1/2)                    (5.23)

where

y_k = [φ(t_k + τ) - φ(t_k)] / (2π f_0 τ)                    (5.24)

is the fractional frequency offset averaged over τ seconds after the systematic
frequency offset and frequency drift have been removed. Δφ(t_k) is the change
in clock phase, measured in radians over τ seconds, as illustrated in Figure 5.6.
f_0 is the nominal frequency of the oscillator in Hertz, and M is the number of
samples used in the computation.
A typical square root of Allan variance for a good quality crystal oscillator is
shown in Figure 5.7. Three typical stability characteristics are shown, depend-
ing upon averaging time: white frequency noise, flicker frequency noise, and
random walk frequency noise. They are defined using the coefficients hex of
ORBITAL MECHANICS AND CLOCK CHARACTERISTICS 191

[Plot of clock phase (radians) versus time (seconds), with successive averaging intervals of length τ marked.]
Figure 5.6 Measuring clock stability.

[Log-log plot of the square root of the Allan variance versus averaging time τ (0.01 to 10,000 sec), showing white frequency noise, a flicker frequency noise floor at √(2 ln 2 · h_-1), and random walk frequency noise regions.]
Figure 5.7 Typical square root of Allan variance.



a single-sided spectral density of fractional frequency fluctuations of the form
[11]

S_y(f) = h_2 f² + h_1 f + h_0 + h_-1/f + h_-2/f²                    (5.25)

The first two terms define high-frequency phase noise, which can affect signal
tracking, but they are not considered stability terms. The square root of the
Allan variance is a measure of frequency stability. An estimate of clock phase
(or time) stability, in seconds, can be obtained by multiplying the ordinate axis
of Figure 5.7 times the abscissa. However, a more accurate representation in
terms of the coefficients h_α is given as a function of time since last measured
or estimated as [12]

σ_Δt(t) = [(h_0/2)t + 2h_-1 t² + (2π²/3)h_-2 t³]^(1/2)                    (5.26)

which can be used to determine the ability to predict time.
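The Allan-deviation estimator of Equations 5.23 and 5.24 can be sketched directly from phase samples. The example below is illustrative only: the simulated white-frequency-noise clock, the 10.23-MHz nominal frequency, and the noise level are assumptions chosen just to exercise the estimator.

import numpy as np

def allan_deviation(phase_rad, f0, tau, dt):
    """Square root of the Allan variance from clock-phase samples spaced dt seconds apart."""
    m = int(round(tau / dt))                        # samples per averaging interval
    phi = phase_rad[::m]                            # phase at tau boundaries
    y = np.diff(phi) / (2.0 * np.pi * f0 * tau)     # Eq. 5.24, fractional frequency offsets
    return np.sqrt(0.5 * np.mean(np.diff(y) ** 2))  # Eq. 5.23

# Illustrative use: an oscillator with white frequency noise only.
rng = np.random.default_rng(0)
dt, f0 = 1.0, 10.23e6
y_true = 1e-11 * rng.standard_normal(100_000)           # fractional frequency, one sample per second
phase = 2.0 * np.pi * f0 * np.cumsum(y_true) * dt       # integrate to clock phase (radians)
for tau in (1.0, 10.0, 100.0):
    print(tau, allan_deviation(phase, f0, tau, dt))     # falls off roughly as 1/sqrt(tau)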

5.4 ATMOSPHERIC EFFECTS ON SATELLITE SIGNALS

A major source of errors in satellite navigation signal measurements (components
of ε_PRi of Equation 5.1) is signal refraction through the atmosphere,
which includes the ionosphere and the troposphere. In the ionosphere,
the larger of these two error sources, the signal interacts with free electrons
and ions, causing an increase in its phase velocity with a corresponding
trons and ions, causing an increase in its phase velocity with a corresponding
decrease in its group velocity. The troposphere causes the propagation velocity
of the signal to be slowed, compressing the signal wavelength. In this section,
the effects of these phenomena on measurement accuracy and quality will be
examined.

5.4.1 Ionospheric Refraction


The ionosphere is a shell of electrons and electrically charged atoms and
molecules that surrounds the Earth, stretching from a height of about 50 km to
more than 1000 km [13]. It owes its existence primarily to ultraviolet radiation
from the sun. The photons making up the radiation possess a certain amount
of energy. When the photons impinge on the atoms and molecules in the upper
atmosphere, the photons' energy breaks some of the bonds that hold electrons
to their parent atoms. The result is a large number of free, negatively charged,
electrons and positively charged atoms and molecules called ions.
The free electrons in the ionosphere affect the propagation of radio waves.

At frequencies below about 30 MHz the ionosphere acts almost like a mirror,
bending the path traveled by a radio wave back toward the Earth, thereby allow-
ing long-distance communication. At higher frequencies, such as those used in
satellite radio navigation, radio waves pass through the ionosphere. They are,
nevertheless, affected by it.

The Refractive Index and Phase and Group Velocity The velocity of prop-
agation of a radio wave at some point in the ionosphere is determined by the
density of electrons there. The velocity of a carrier, the pure sinusoidal radio
wave conveying the signal, is actually increased by the presence of the elec-
trons. The greater the density of electrons, the greater the velocity. The net
effect on a radio wave is obtained by integrating the electron density along the
whole path that the signal travels from the satellite to a receiver. The result is
that a particular phase of the carrier arrives at the receiver earlier than it would
have had the signal traveled in complete vacuum. The early arrival is termed a
phase advance.
The increased phase velocity of propagation is related to what is called the
refractive index n by the expression [14]

v_p = c/n                    (5.27)

where c is the velocity of light. The refractive index is given as [5]

n = [1 - (f_n/f)²]^(1/2)                    (5.28)

where the critical frequency f_n, or plasma frequency, below which complete
reflection occurs, is, in Hertz [14],

f_n = (1/2π)[N_e e²/(m ε_0)]^(1/2)                    (5.29)

where N_e is the electron density in electrons per cubic meter, e is the electron
charge, m is the mass of an electron, ε_0 is the permittivity of free space, and f is
the signal's carrier frequency. Note that the higher this carrier frequency is, the
closer n is to 1, and thus the less the ionosphere affects signal propagation. Note
also that the higher the electron density is, the higher the plasma frequency is,
and thus the more the ionosphere affects signal propagation.
On the other hand, the signal that is modulating the carrier (e.g., pseudo-
random noise codes and navigation data) is delayed by the ionosphere. Since
the composite signal can be thought of as being formed by the superposition
of a large group of pure sinusoids of slightly different frequencies, the delay of
the modulation is called the group delay. The magnitude of this group delay is
identical to the phase advance. The group refractive index is given as [14]

n_g = d(nω)/dω = 1/n                    (5.30)

Thus, the relationship of the phase velocity and group velocity (the rate of
energy propagation) satisfies that for a signal passed through a wave guide,
which is

v_p v_g = c²                    (5.31)

Electron Density and Total Electron Content The electron density is quan-
tified by counting the number of electrons in a vertical column with a cross
section of one square meter [13]. This number is called the total electron con-
tent (TEC). The TEC is a function of the amount of incident solar radiation.
On the night side of the Earth, the free electrons have a tendency to recombine
with the ions, thereby reducing the TEC. As a consequence, the TEC above a
particular spot on the Earth has a strong diurnal variation.
Changes in TEC can also occur on much shorter time scales. One of the phe-
nomena responsible for such changes is the traveling ionospheric disturbance
(TID). TIDs, which have characteristic periods on the order of 10 minutes, are
manifestations of waves in the upper atmosphere believed to be caused in part
by severe weather fronts and volcanic eruptions. There are also seasonal varia-
tions in TEC and variations that follow the sun's 27-day rotational period and
the roughly 11-year cycle of solar activity.

Dual Frequency Corrections All these changes in TEC make it difficult to


consistently predict the phase advance and group delay accurately using mod-
els. Thus, all of the satellites of the satellite radio navigation systems described
herein transmit signals at two frequencies, allowing some users to measure and
correct for these quantities. Note that the refraction index defined in Equation
5.28 is a function of the inverse of the carrier frequency. A Taylor series expan-
sion of that equation yields

n ≈ 1 - f_n²/(2f²) + f_n⁴/(4f⁴) ≈ 1 - 40.5N_e/f² + 1640.25N_e²/f⁴                    (5.32)

If the carrier frequency is chosen to be high enough so that the fourth-order term
is negligible, then the deviation of n from a free space value of one is inversely
proportional to f². Thus, by making measurements on two widely spaced frequencies
and combining them, the electron density N_e can be determined, and

almost all of the ionospheric effect can be removed. This is true whether the
measurements are pseudorange, Doppler, or integrated Doppler measurements,
since they all have an error component that is a function of the refraction
index. However, the correction is applied with a different sign, depending upon
whether the measurement is obtained from the carrier or the modulated signal.
Applications of ionospheric corrections using models or dual frequency mea-
surements are peculiar to each satellite radio-navigation system, and thus will be
described later in this chapter.
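The first-order dual-frequency correction implied by Equation 5.32 can be sketched as follows: because the group delay scales as 1/f², pseudoranges measured at two frequencies can be combined so that the first-order ionospheric term cancels. The L1/L2 values are the GPS carrier frequencies quoted later in this chapter; the pseudoranges and delay below are made-up numbers.

F_L1 = 1575.42e6      # Hz
F_L2 = 1227.60e6      # Hz

def iono_free_pseudorange(pr_l1, pr_l2):
    """Combine L1/L2 pseudoranges so the 1/f^2 ionospheric term cancels."""
    gamma = (F_L1 / F_L2) ** 2
    return (gamma * pr_l1 - pr_l2) / (gamma - 1.0)

# Made-up example: a 20,000-km geometric range with a 10-m group delay at L1.
true_range = 20_000e3
delay_l1 = 10.0
delay_l2 = delay_l1 * (F_L1 / F_L2) ** 2              # delay scales as 1/f^2
pr1, pr2 = true_range + delay_l1, true_range + delay_l2
print(iono_free_pseudorange(pr1, pr2) - true_range)   # ~0: first-order term removed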

Ionospheric Scintillation Effects Irregularities in the Earth's ionosphere can


produce both diffraction and refraction effects causing short-term signal fad-
ing that can severely stress the tracking capabilities of a satellite-navigation
receiver [13]. The fading can be so severe that the signal level will drop completely
below the receiver's lock threshold. The geographic regions where scintillation
effects normally occur are in the equatorial region (±30° either side of
the geomagnetic equator) and the polar regions. Scintillation is the strongest in
the equatorial regions. Strong fading can also be accompanied by rapid carrier
phase changes. Fortunately, strong scintillation effects are rare or localized at
certain times of the night, and usually only during periods of high solar activity.
Also, by design, a satellite navigation receiver can be made to track through the
severe fading, although the collection of navigation data can be disrupted with
parity errors. Fortunately these data are periodically repeated, so the disruption
is only temporary.

5.4.2 Tropospheric Refraction


Refractivity Unlike the refractivity of the ionosphere, the refractivity N of
the troposphere is not a function of carrier frequency. At a given altitude it is
commonly determined from the following equation [14, 15]:

N = 10⁶(n - 1) = (77.6/T)(P + 4810e/T)                    (5.33)

where P is total pressure in millibars, T is absolute temperature in K, and e is
the partial pressure of water vapor in millibars, where one definition of e is given
as [15]

e = 6.1 (RH/100) × 10^(7.4475 T_c/(234.7 + T_c))                    (5.34)

where RH is relative humidity in % and T_c is temperature in °C. The first
term in Equation 5.33, which does not depend on relative humidity, is called
the dry term, while the second is called the wet term. Equation 5.33 provides
the refractivity at any altitude given the variables measured at that altitude.

Different scientists have modeled refractivity as a function of altitude h but
have done so differently for the dry and wet terms, yielding the relationship

N(h) = (77.6/T) P f_d(h) + (373256/T²) e f_w(h)                    (5.35)

Propagation Delay Through the Troposphere The propagation times for


waves traveling through the troposphere between a satellite and the user are
longer than those for free space for two reasons [14]:

1. The path does not follow a straight line. The consequence of this is small
and can be neglected except for very small elevation angles.
2. The wave velocity is slightly lower than it is in a vacuum, producing an
apparent increase in the length of the path given as

ΔL = ∫_0^R (n - 1) ds                    (5.36)

where s is the curved abscissa on the path and R is the distance to the
satellite, which can be treated as infinite.

Since the real path does not deviate much from a straight line for all but very
small elevation angles, Equation 5.36 can be made a function of elevation angle
and integrated with respect to altitude. The result is a correction model for
ranging measurements.
In general, the dry and wet terms need to be integrated separately because
they include different functions of altitude and elevation angle. This is the case
when actual surface measurements are used and the ultimate accuracy is desired,
such as in the control segment of the radio-navigation system. However, in the
case of avionics applications, those measurements are not generally available,
and a standard day is used to define the correction model. Then, commensurate
with the accuracy of that standard day, Equation 5.36 becomes

ΔL = (1/sin φ_0) ∫_{h_0}^∞ (n - 1) dh                    (5.37)

where h_0 is the altitude of the user and φ_0 is the elevation angle of the signal
path to the satellite. An approximation of ΔL can be obtained by assuming that the
atmosphere is exponential. That is, let

n - 1 = (n_0 - 1)e^(-bh)                    (5.38)



so that

ΔL = [(n_0 - 1)/(b sin φ_0)] e^(-b h_0)                    (5.39)

For a typical standard day model, n_0 is 1.00032 and b is 0.000145 meters⁻¹,
resulting in a zenith delay equivalent to 2.208 meters at sea level. At a 5° elevation
angle, the equivalent delay becomes 25.33 meters.
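The exponential-atmosphere approximation of Equations 5.38 and 5.39 is simple enough to check directly. The sketch below uses the standard-day constants quoted above; the function name is the author's own, and the model ignores the wet/dry split discussed earlier.

import math

N0_MINUS_1 = 0.00032      # n0 - 1 for the standard day
B = 0.000145              # 1/m, exponential decay constant

def tropo_delay_m(elevation_deg, user_altitude_m=0.0):
    """Eq. 5.39: apparent path-length increase through the troposphere, in meters."""
    return (N0_MINUS_1 / (B * math.sin(math.radians(elevation_deg)))) \
        * math.exp(-B * user_altitude_m)

print(tropo_delay_m(90.0))   # ~2.21 m at zenith, sea level
print(tropo_delay_m(5.0))    # ~25.3 m at a 5-degree elevation angle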

5.5 NAVSTAR GLOBAL POSITIONING SYSTEM

The NAVSTAR Global Positioning System (GPS) was conceived as a U.S.
Department of Defense (DoD) multi-service program in 1973, bearing some
resemblance to and consisting of the best elements of two predecessor develop-
ment programs: the U.S. Navy's TIMATION program and the U.S. Air Force's
Program 621B [16, 121 (Chapter 1)]. The success of Transit had stimulated both
of these programs. The Air Force, as the executive service, manages the overall
program at the GPS Joint Program Office (JPO) located at the Space Division
Headquarters in Los Angeles, CA. The other U.S. military services, as well as
representatives from the Defense Mapping Agency, Department of Transporta-
tion, NATO, and Australia maintain active participation at the JPO. The result
of this development program is an all-weather global radio-navigation system.
GPS is a passive, survivable, continuous, space-based system that provides any
suitably equipped user with highly accurate three-dimensional position, veloc-
ity, and time information anywhere on or near the Earth.

5.5.1 Principles of GPS and System Operation


GPS is basically a ranging system, although precise Doppler measurements
are also available. To provide accurate ranging measurements, which are
time-of-arrival measurements, very accurate timing is required in the satel-
lites. Thus, the GPS satellites contain redundant atomic frequency standards
[17]. Second, to provide continuous three-dimensional navigation solutions to
dynamic users, a sufficient number of satellites are required to provide geo-
metrically spaced simultaneous measurements. Third, to provide those geomet-
rically spaced simultaneous measurements on a worldwide continuous basis,
relatively high-altitude satellite orbits are required. These three capabilities are
all related, since they all are necessary to provide the high-dynamic user navi-
gation capability in three dimensions.

General System Characteristics The GPS satellites are in approximately 12-


hour orbits (11 hours, 57 minutes, and 57.27 seconds) at an altitude of approxi-
mately 11,000 nmi. The total number of satellites in the constellation has changed
over the years, the number being tied to budget constraints. The intent, how-
ever, is to provide coverage at all locations on the Earth as nearly to 100% of
the time as possible. Each satellite transmits signals at two frequencies at L-Band
[1575.42 (L1) and 1227.6 (L2) MHz] to permit ionospheric refraction corrections
by properly equipped users [18]. These signals are modulated with synchronized,
satellite-unique, pseudorandom noise (PRN) codes that provide the instantaneous
ranging capability. Those codes are modulated with satellite position, clock, and
other information, in order to provide the user with that information. Details on
the constellation and signal structure appear in later sections of this chapter.
All equations of Section 5.2 apply to the GPS navigation solution in
that all three measurement capabilities-ranging, Doppler, and integrated
Doppler-exist, and, in general, the solution for the user's clock and clock drift
are required. It is not uncommon for a specific user equipment to use at least
two of the three measurement capabilities to simultaneously solve for position,
velocity, clock offset, and clock drift, in some cases using all satellites in view
[19, 20]. That number can be as high as 12.

System Accuracy GPS provides two positioning services, the Precise Positioning
Service (PPS) and the Standard Positioning Service (SPS) [21]. The PPS
can be denied to unauthorized users, but the SPS is available free of charge to
any user worldwide. Users that are crypto capable are authorized to use crypto
keys to always have access to the PPS. These users are normally military users,
including NATO and other friendly countries. These keys allow the authorized
user to acquire and track the encrypted precise (P) code on both frequencies
and to correct for intentional degradation of the signal.
Encryption of the precise code provides GPS with an anti-spoofing (A-S)
capability. A-S is not meant to deny the P code to unauthorized users but to pre-
vent the spoofing of the precise code by an unfriendly force. Unfortunately, A-S
denies the P code to unauthorized users. Thus. A-S prevents these users from
correcting for ionospheric refraction, since the L2 signal only carries the P code,
although there are "codeless cross-correlation" techniques that do allow this
measurement [22, 23]. A-S does not prevent the use of the coarse/acquisition
(Cj A) code, which is only carried on the L I signal.
The intentional degradation. on the other hand, is meant to deny accuracy
to an unfriendly force. It is called selective availabilit_v (SA). Unfortunately,
SA also denies accuracy to unauthorized users that are friendly, which is the
entire civil community. The peao~-time policy of the DoD is to provide an
unauthorized accuracy of I 00 meters, 2drms (horizontal accuracy) [24].
Either A-S or SA. or both, may be turned on. If neither is turned on, SPS
accuracy is the same as PPS. In 1996, the U.S. stated that it is its intention to
turn off SA within a decade [135]. More details on GPS system accuracy are
provided later in this chapter.

The GPS Segments GPS has the basic system configuration illustrated in Figure 5.8. The monitoring and satellite control sites are dispersed around the world. GPS is comprised of three segments, as illustrated in Figure 5.8. The functions of these three segments are summarized in Table 5.1.

Figure 5.8 GPS system configuration. (L1 and L2 navigation signals, monitor stations, and ground antennas are shown.)

TABLE 5.1 GPS segment functions

Segment   Input                        Function                            Product
Space     Satellite commands;          Provide atomic time scale;          PRN RF signals;
          navigation messages          generate PRN RF signals;            navigation message;
                                       store and forward the               telemetry
                                       navigation message
Control   PRN RF signals;              Estimate time and ephemeris;        Navigation message;
          telemetry; universal         predict time and ephemeris;         satellite commands
          coordinated time (UTC)       manage space assets
User      PRN RF signals;              Solve navigation equations          Position, velocity,
          navigation messages                                              and time

5.5.2 GPS Satellite Constellation and Coverage


GPS Satellite Constellation The fully operational GPS satellite constellation, as described in the 1992 Federal Radio Navigation Plan [24], comprises 24 satellites, four each in six 55°-inclined orbit planes spaced 60° apart in longitude. The nominal GPS 24-satellite constellation is given as orbit parameters for an epoch of July 1, 1993, at 0000 GMT [25] as follows: semimajor axis A = 26,559.8 km; eccentricity e = 0; inclination i = 55°; argument of perigee ω = 0°; right ascension of ascending nodes Ω = 272.847° (plane A), 332.847° (plane B), 32.847° (plane C), 92.847° (plane D), 152.847° (plane E), 212.847° (plane F); and mean anomalies M₀ that are nonuniform but nominally 90° apart between satellites within a plane and staggered from plane to plane. The actual mean anomalies vary significantly from those nominal values in order to provide optimum coverage over regions of interest.
Because of the altitude of the orbits, the satellite paths, as projected onto the surface of the Earth along a line to the center of the Earth, repeat once a day. However, the time at which a satellite passes any point along the path occurs three minutes and 56 seconds earlier each day; that is, the repeat period is one sidereal day, even though the orbit period is approximately 12 hours, because of the daily rotation of the Earth.
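As a quick consistency check, the roughly 12-hour period quoted above follows from the semimajor axis via Kepler's third law, T = 2π√(A³/μ). The short sketch below assumes the value of μ given later in Equation 5.60 and is illustrative only.

```python
import math

MU = 3.986005e14      # Earth's gravitational parameter, m^3/s^2 (Equation 5.60)
A = 26559.8e3         # GPS orbit semimajor axis, m

T = 2 * math.pi * math.sqrt(A**3 / MU)   # orbital period, seconds
print(T / 3600.0)                        # about 11.97 h, i.e., roughly 11 h 58 min
```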

Coverage As stated earlier, the coverage, defined as providing a satisfactory instantaneous navigation solution, is not 100%. In turn, a satisfactory instantaneous navigation solution is one where there are at least four satellites visible to the user above a desired elevation angle, usually taken to be 5°, and the line-of-sight vectors to those satellites provide an adequate geometric solution. Simply having four satellites visible is not sufficient. There are even times when more than four satellites are visible and the geometry is not adequate. To measure this adequacy, we introduce the concept of geometric dilution of precision (GDOP) [26]. GDOP is defined from the linearized navigation/time solution derived from a vector of N (N ≥ 4) pseudorange residuals:

δρ = H [δx  δy  δz  cΔt_u]ᵀ + ε_PR   (5.40)

where the ith row (for satellite i) of the measurement matrix H (h_i) is given as

h_i = [1_xi  1_yi  1_zi  −1]   (5.41)

in terms of Equation 5.3. (The first three elements of Equation 5.41 are the directional cosines from the user position to the satellite position.) GDOP is defined as

GDOP = √( trace [HᵀH]⁻¹ )   (5.42)

where trace[·] indicates the sum of the diagonal elements of [·]. If the pseudorange residuals defined in Equation 5.40 are statistically uncorrelated with equal 1-sigma errors of σ_PR, then the 1-sigma position/time error is

σ_x,t = GDOP · σ_PR = √(σ_x² + σ_y² + σ_z² + c²σ_t²)   (5.43)

Position dilution of precision (PDOP) is computed by deleting the fourth diagonal element in the trace of Equation 5.42 (of which the square root represents time dilution of precision, TDOP).
To compute horizontal and vertical dilution of precision (HDOP and VDOP), the ECEF coordinate residuals of Equation 5.40 must first be transformed to local tangent plane (LTP) residuals, defined as north, east, and up residuals at the estimated location. This results in the new measurement matrix

H_LTP = H T_ECEF→LTP =

⎡ cos El₁ cos Az₁    cos El₁ sin Az₁    sin El₁    −1 ⎤
⎢ cos El₂ cos Az₂    cos El₂ sin Az₂    sin El₂    −1 ⎥
⎢        ⋮                   ⋮               ⋮        ⋮ ⎥
⎣ cos El_N cos Az_N  cos El_N sin Az_N  sin El_N   −1 ⎦   (5.44)

where El_i and Az_i are the elevation and azimuth angles from the user to satellite i and where
T_ECEF→LTP =

⎡ −sin φ_u cos λ_u   −sin λ_u   cos φ_u cos λ_u   0 ⎤
⎢ −sin φ_u sin λ_u    cos λ_u   cos φ_u sin λ_u   0 ⎥
⎢  cos φ_u            0         sin φ_u           0 ⎥
⎣  0                  0         0                 1 ⎦   (5.45)

where φ_u and λ_u are the estimated latitude and longitude of the user. This new H matrix then replaces the one in Equation 5.42, and δx, δy, and δz are replaced with δN, δE, and δh, respectively. Then, only the sum of the first two diagonal elements in the trace of Equation 5.42 is included for computing HDOP, and only the third diagonal element is included for computing VDOP. The linear transformation of Equations 5.44 and 5.45 is only appropriate for the transformation of location residuals and is not appropriate for converting total-state ECEF coordinates to latitude, longitude, and altitude, which requires a nonlinear transformation (see References [9] and [23] of Chapter 2).
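To make Equations 5.42 through 5.45 concrete, the sketch below computes the DOP values directly from satellite elevation and azimuth angles using the local-tangent-plane measurement matrix; the function name and the example geometry are illustrative, not taken from the text.

```python
import numpy as np

def dilution_of_precision(el_az_deg):
    """GDOP, PDOP, TDOP, HDOP, and VDOP from (elevation, azimuth) pairs in
    degrees, using the H matrix of Equation 5.44 and the trace of [H^T H]^-1
    as in Equation 5.42."""
    el, az = np.radians(np.asarray(el_az_deg, dtype=float)).T
    H = np.column_stack([np.cos(el) * np.cos(az),   # north direction cosine
                         np.cos(el) * np.sin(az),   # east direction cosine
                         np.sin(el),                # up direction cosine
                         -np.ones_like(el)])        # clock column
    cov = np.linalg.inv(H.T @ H)
    gdop = np.sqrt(np.trace(cov))
    pdop = np.sqrt(cov[0, 0] + cov[1, 1] + cov[2, 2])
    tdop = np.sqrt(cov[3, 3])
    hdop = np.sqrt(cov[0, 0] + cov[1, 1])
    vdop = np.sqrt(cov[2, 2])
    return gdop, pdop, tdop, hdop, vdop

# Illustrative geometry: one satellite overhead and three at 20 deg elevation
print(dilution_of_precision([(90, 0), (20, 0), (20, 120), (20, 240)]))
```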
Adequate coverage is usually defined by the U.S. DoD as occurring when PDOP is less than 6 for elevation angles greater than 5°. However, in some applications of GPS, such as civil and commercial aviation, this is not adequate coverage. In these applications, augmentation using pseudolites or geostationary satellites is necessary. These augmentations are discussed later in this chapter. Because of these advanced applications, the DOP concept has been replaced with the concept of availability of accuracy, accounting for satellite failures. This availability is based upon analytical procedures using the mean-time-between-failures (MTBF) and mean-time-to-repair (MTTR) characteristics of satellites (and appropriate augmentations), a concept originally developed by the French space agency CNES [27, 28] and later extended [29, 30]. With these extensions, coverage is more appropriately defined in terms of probabilities (or availability).

An Example of Coverage in Terms of Availability of DOP Using all nonfailed satellites in view (above 5°) of the nominal GPS satellite constellation [25], the average and worst-case-location availability of HDOP and VDOP over the continental United States (CONUS) is illustrated in Figure 5.9. This availability was computed with a 5-minute time granularity and a 2° grid granularity. Note that HDOP is at least twice as good as VDOP, especially at the worst-case location. Since the HDOP and VDOP values plotted in Figure 5.9 are probabilistic and do not necessarily occur at the same location, they cannot be root-sum-squared to obtain PDOP. The inset in Figure 5.9 is a plot of the underlying joint probability of the unavailability of exactly N satellites in the constellation of 24 satellites for 0 ≤ N ≤ 24.
As described later in Section 5.7.4, availability of 99.999% of at least a finite HDOP is required for civil and commercial aviation. In addition, availability of 99.9% of good VDOP is required for precision approach applications. The 99.999% available HDOP cannot be observed in Figure 5.9, and, in fact, a finite HDOP is not available with that probability. The 99.9% available VDOP is 4.29 averaged over CONUS and 5.62 at the worst-case location.
Figure 5.9 Availability of HDOP and VDOP over the continental United States (CONUS). (Curves: CONUS HDOP, HDOP at N36 W111, CONUS VDOP, and VDOP at N40 W115 for the 24-satellite GPS constellation; the inset shows the probability of the number of simultaneous SV failures.)
None of these values is acceptable, especially since real-time integrity of the signals is not guaranteed. Thus, some type of augmentation to GPS is required for civil and commercial aviation. Possible augmentations are described later in Sections 5.7.3, 5.7.4, and 5.8.

5.5.3 Space Vehicle Configuration


The various generations of NAVSTAR GPS satellites that are in orbit are three-axis stabilized vehicles with the navigation subsystem L-band antenna pointing toward the Earth. The satellites have been designed to track the sun about their yaw axis, which allows the solar array panels to have only a single degree of freedom. This concept also simplifies the thermal control environment for the satellite and its navigation payload, since one side is exposed to the sun and the other two sides always face deep space.
In addition to the navigation subsystem, which from a user's perspective is the primary mission payload, the satellite consists of seven other subsystems: electrical power; attitude and velocity control; reaction control; thermal control; telemetry, tracking and command; orbit injection; and structure.

Navigation Payload The navigation payload consists of the following key components and assemblies: transmitting antenna array, redundant atomic clocks, digital processing or baseband assemblies, and RF equipment. The GPS antenna array, formed by inner quad helices encircled by a ring of eight outer helices, provides near-equal power density to all terrestrial users. This arrangement results in a shaped coverage beam with a 28.6° field of view. A block diagram of the payload electronics for the Block IIR satellites is shown in Figure 5.10.
The atomic clocks used as the 10.23-MHz frequency standards are the heart of the GPS navigation concept. Initial Block I satellites employed three rubidium (Rb) atomic standards. The subsequent Block IA versions incorporated an additional cesium (Cs) atomic standard. Both standards provide equivalent short-term stability of about 1 part in 10¹³, but the Rb standards tend to wander over periods of days, requiring a drift rate term to be uploaded by the control segment. The Block II satellites employ two Cs and two Rb frequency standards. The Block IIR satellites use three frequency standards made up of a combination of Cs and Rb standards.
Baseband GPS processing to generate the specific satellite pseudorandom noise (PRN) codes (P and C/A) and the 50-bit-per-second digital navigation message data is controlled by redundant processors. These processors also generate the time-of-week (TOW) count (or Z-count), which can be aligned to the GPS system time by the control segment. The primary functions of the processors are to store, format, and modulate the navigation message data onto the signals transmitted via the L-band subsystem.
The L-band subsystem consists of carrier frequency synthesizers, modulators, and intermediate-power and high-power amplifiers. These RF equipments are totally redundant and switchable. The two L-band carriers at 1575.42 and 1227.6 MHz are combined in a triplexer/filter, which then feeds energy into the 12-element helix antenna array. (A third L3 signal is also generated, when needed, for another non-navigation satellite function.) The transmitted power is such that the received power level for a user, defined at the output of a 3 dBi linearly polarized antenna, is given in Table 5.2.

Figure 5.10 Block diagram of the Block IIR GPS satellite payload electronics (courtesy, ITT Aerospace/Communications Division, ITT Defense).

TABLE 5.2 Received RF signal levels (minimum level received) [7]

Frequency     P-Code        C/A-Code
L1            −163 dBw      −160 dBw
L2            −166 dBw      N/A
In Figure 5.10 an intersatellite communications function is also shown. On
the Block IIR satellites, this function includes an intersatellite ranging capability
that will be used for a future autonomous navigation capability.

Block Characterization Summary There are to date six generations of GPS satellites: Block I, IA, II, IIA, IIR, and IIF. To a great extent, the variations in these vehicles are transparent to the navigation users. The differences (especially in satellite weight, power, and complexity) simply reflect the addition and evolution of military requirements for enhanced survivability and auxiliary payload growth. As an example, the Block I satellites consisted of approximately 33,000 individual parts with limited radiation hardening requirements. The Block II production satellites, on the other hand, contain almost 65,000 parts, have a much larger form factor, and are planned to achieve a 7.5-year design life. With the Block IIR replenishment launches, the navigation user should be assured of continued and unchanging navigation coverage and performance well into the next century. In 1996, a contract was awarded for Block IIF (follow-on) satellites with initial deliveries scheduled for the year 2001 [125]. Two of the important new features of the Block IIF satellites are a design life of 11.5 years and the capability for inclusion of a new, second, civil-transmitted frequency (L5). That frequency would permit civil users to perform automatic ionospheric delay error correction that is normally only done by authorized (military) users (Sections 5.4.1 and 5.5.7). The frequencies under consideration for L5 are in the ranges of 1207 to 1217 MHz and 1309 to 1319 MHz. The frequency should be at least 200 MHz away from L1 for optimum ionospheric corrections, and it must not be subject to interference by other systems operating in the same band, such as JTIDS, DME, and L-band radars [134].

5.5.4 The GPS Control Segment [31]


The principal product of the GPS control segment (CS) is the GPS navigation
message data representing the predicted state of each GPS satellite, which is
put into a standardized format and periodically uploaded to each satellite mem-
ory for continuous retransmission to the user. The CS function of controlling
and maintaining the status, health, and configuration of the space segment (SS)
assets is of equal importance to total GPS integrity. Scheduling the contacts
required for the full constellation while not compromising the basic naviga-
tion service is a CS challenge second only to ensuring the navigation service
integrity. The third CS function of continuously monitoring the navigation ser-
vice availability and accuracy as available on L-band is essential for quality
control. The fourth CS function is to monitor and manage ground-based assets
so as to provide uninterrupted GPS support.

CS Performance Requirements The following are the basic CS performance requirements:

1. Navigation range service. A six-meter RMS user range error (URE) is required at the GPS satellite to support the 15-meter spherical error probable (SEP) navigation service to full capability users. This requirement is derived from a constellation geometry criterion that the PDOP be no greater than six. The responsibility for the URE budget component is shared by the CS and SS, since both satellite process predictability and CS process state estimate fidelity are significant performance factors.
2. Time transfer service. A 90-nsec (1-sigma) calibration of the GPS time scale relative to universal coordinated time (UTC) (USNO) is required to meet a 97-nsec (1-sigma) apparent uncertainty at the satellite. GPS time (modulo 1 sec) is aligned to UTC within 1 μsec for user convenience.
3. Constellation accommodation. The CS must accommodate up to 24 operational satellites providing full navigation service. In addition, up to six spare satellites must be maintained within the S-band command and telemetry processing resource capacities.
4. Orbital operations. The CS must support pre-launch compatibility validation tests prior to launch of each satellite. Full CS responsibility commences after the satellite is three-axis stabilized on-orbit with an active L-band capability.
5. Space vehicle communications. The CS must provide full command, telemetry, and GPS navigation message upload support to satellites of any Block design. This includes memory-map management and provisions for telemetry data analysis and archiving. Assets to support three navigation message uploads per day for each satellite are required to achieve the 6-meter URE performance requirement with specification satellite atomic clocks (Allan standard deviation not exceeding two parts in 10¹³ beyond one day).

CS Configuration The CS consists of a centralized master control station


(MCS) and remote RF facilities coupled by dedicated communication services.
Establishing locations for these RF facilities was a compromise between utility
and the availability of adequate facilities to establish global satellite visibility
so as to strengthen tracking geometry, to provide maximum monitoring oppor-
tunity, and to preserve operational flexibility.
Monitor stations (MSs) are the L-band facilities that receive the same signal as the user. These passive stations measure pseudoranges and collect the navigation messages for two purposes. First, the pseudorange histories are required to estimate the satellite trajectories and clock calibrations. Second, both are required to faithfully monitor the navigation service as provided to the user community. Colorado Springs, Ascension Island, Diego Garcia, Kwajalein, and Hawaii comprise the CS MS sites. Track coverage is provided by these sites for satellites above 5° elevation angle. The ground projection latitude of the satellites will never exceed the 55° orbit inclination angle, and thus better than 90% average constellation coverage is provided. Contact with each of the six ground tracks is different, varying from 90% to 100%. Segments of uncovered ground tracks occur west of South America. Regions of coverage overlap are important, since they result in common MS viewing of satellites, which enables the direct MS-to-MS time transfers within the estimation process that stabilize the relative MS time scales.
The MCS is the operations center for GPS. It is composed of the personnel and the facilities to provide overall system management. Here the processing to support the navigation mission and to maintain the satellite constellation is performed. Both the equipment configuration and the procedures were designed to assure continuous integrity of the total system. The Consolidated Space Operations Center (CSOC) is the permanent MCS site.
Ground antennas (GAs) are the S-band facilities that provide duplex com-
munication with the multiple satellites by receiving telemetry and transmitting
both commands and upload data.
Contact opportunities are provided by the GA sites of Ascension Island, Diego
Garcia, and Kwajalein. Regions of coverage overlap provide scheduling flexibil-
ity to best meet the MCS contact requirements. A fourth GA without redundancy
is located at the Eastern Launch Site to support segment compatibility verification
but is of limited operational utility because of range transmission restrictions.
Figure 5.11 illustrates the total CS configuration. External operational interfaces to the Air Force Satellite Control Facility (AFSCF), United States Naval Observatory (USNO), and the Defense Mapping Agency (DMA) are implemented to permit initial satellite handover, to provide UTC time coordination, and to provide Earth orientation data, respectively. Less formal interfaces also exist which import Jet Propulsion Laboratory (JPL) sun/moon data and exchange pertinent information with other GPS segments. The following is a brief description of the various components of the CS.

Figure 5.11 GPS control segment configuration. (Master control station, monitor stations, ground antennas, and external interfaces to the AFSCF, USNO, and DMA are shown.)

Monitor Stations Navigation visibility is provided to the CS by the globally distributed MSs, which are unmanned. The MSs are comprised of antenna electronics, a multichannel dual-frequency receiver, a cesium frequency standard, a test and calibration signal generator, meteorological sensors, a communications subsystem, a central processor, and backup power supplies. The antenna electronics consists of a single beam-pattern antenna that receives both the L1 and L2 frequencies, a conical ground plane with annular chokes for multipath signal rejection, and a low-noise amplifier.
The MS receiver has 12 dual-frequency channels (24 demodulators) that acquire and track up to 12 GPS satellites simultaneously. These channels each provide accurate pseudorange and accumulated delta-range (phase) measurements on both the L1 and L2 frequencies every 1.5 seconds. They also demodulate the GPS navigation message and measure carrier-to-noise density as a signal quality measurement. All of these quantities are set into a message format for transfer to the central processor for subsequent transfer to the MCS. The receiver uses as its frequency reference the output of one of two cesium frequency standards, which provide the basis for the highly stable GPS time.

The receiver's channels are periodically calibrated using a dual frequency GPS
signal generator that is slaved to the same frequency reference. This signal gen-
erator is also used to provide signals for fault isolation.
One of the two frequency standards is on hot standby to provide a quick
change-over in case the operational standard exhibits faulty operation. A phase
comparator compares the phases of the two standards to help detect faulty oper-
ation, as well as to provide an initial frequency estimate of the backup standard
for a smooth transition. Both frequency standards are kept alive with a backup
power supply in case of prime power failure.
Meteorological sensors (barometric pressure, temperature, and relative
humidity) are polled by the central processor to permit accurate correction for
tropospheric delay at the MCS. These data, along with the receiver measure-
ments and status information, are forwarded to the MCS over a dedicated secure
communication channel in response to tracking orders received over the same
duplex channels. This channel utilizes a commercially developed SDLC proto-
col to provide error detection and data block re-transmission.

Master Control Station The MCS consists of the processing complex and controller facility to completely manage and control the operational GPS space assets and to provide navigation messages. The navigation mission requires an upload availability of 98%, so redundancy is provided for all mission-critical equipment. Dual processors with communication controllers and the customary complement of peripherals are configured to permit the on-line navigation processing and satellite control functions to be performed with either unit. Personnel at the MCS control all navigation processing, constellation, and CS assets and are responsible for the system integrity. This requires established procedures and efficient access to critical mission data.
The navigation process is illustrated in Figure 5.12. It is based upon a linear expansion about a reference trajectory, which is obtained by integrating the equations of motion forward in time from an initial position and velocity state. The satellite force model used to accomplish this integration includes mathematical expressions for the following effects: (1) WGS-84 geopotential expansion, (2) sun gravitational attraction, (3) moon gravitational attraction, (4) Earth gravitational tides, (5) solar flux reaction (including eclipse), and (6) satellite y-axis acceleration.
Partial derivatives with respect to the epoch states of inertial position and velocity are generated to reduce the estimation and prediction process to linear mathematical relationships. Given values for estimated state residuals, evaluation of the first-order expansion provides the position trajectory used to process measurements or to generate the navigation message. The inertial-to-ECEF coordinate transformation matrix is generated to be consistent with externally supplied Earth orientation data.
Figure 5.12 Control segment navigation data processing.

The measurement update process consists of data editing, smoothing, measurement model transformation, and estimation steps. The raw MS data are examined and correlated with receiver fault indicators to ensure track continuity. Both first- and second-difference histories are compared with threshold values to detect any data inconsistencies. Smoothing is applied to refine the MS measurements, combining the 1.5-sec measurements into one low-noise data
point every 15 min. Corrections for the ionosphere- and troposphere-induced delays and interchannel receiver delays are made. Then the measurement model accounts for the Earth's rotation during the signal propagation time, the effects due to crustal tides, relativistic time distortions, and the a priori knowledge of estimated states to form measurement residuals. Measurement partials with respect to the estimated states are also evaluated to support the estimation process.
A Kalman filter recursive estimator (Chapter 3) is implemented to estimate the following states: (1) satellite position at epoch time, (2) satellite velocity at epoch time, (3) satellite clock phase at epoch time, (4) satellite clock frequency at epoch time, (5) satellite clock aging (rubidium frequency standards only), (6) solar flux, (7) satellite y-axis acceleration bias, (8) MS clock phase at epoch time, (9) MS clock frequency at epoch time, and (10) MS wet troposphere height. The estimation process is partitioned to conserve computational resources. A common time scale ensemble of MS clock states is determined, and this common time scale holds the partitions together. The partitioned estimation performance penalty is insignificant when an adequate number of satellites exist in each partition to maintain a solid MS time-transfer network.
Every eight hours the state of each satellite is used to generate a prediction of the time-scale correction and trajectory in ECEF coordinates. The same linear expansion and coordinate transformations are used. Navigation message parameters are obtained as a weighted least-squares fit to these data in accordance with ICD-GPS-200B-PR and the GPS SPS signal specification [7, 8], and the generated user information is put into a standard format and forwarded to the GAs for upload. This process is time-phased across satellites to evenly distribute the MCS work load.
Navigation service integrity is monitored using MS data for four-satellite position solutions and by comparing the pseudorange and received message data with MCS expectations. Measurement residuals are evaluated by the MCS whenever tracking data exist, and performance statistics are maintained as information available to system operator personnel and the user community. Each bit of the received navigation message is compared with the corresponding upload data-base value to verify proper dissemination.

Ground Antennas Communications with the satellite constellation is provided to the CS by the globally distributed, unmanned GA facilities. These installations consist of a 10-meter S-band antenna and extensive dual-string electronics consisting of an RF exciter, high-power transmitter, receiver, servo equipment, communication channels, and a processor. Command, telemetry, and navigation upload traffic is handled by these GA installations. Tracking orders, equipment configuration commands, and data are received over secure dedicated duplex communication channels, using SDLC for the error detection and retransmission features. Data from the MCS are buffered on disk prior to satellite contact, as are telemetry data from the satellite prior to transmission to the MCS.

5.5.5 GPS Signal Structure


The GPS satellites broadcast two signals: Link 1, L1, at a center frequency of 1575.42 MHz and Link 2, L2, at a center frequency of 1227.6 MHz [6, 17]. Each of these frequencies is an integer multiple of the 10.23-MHz clock, 154 for L1 and 120 for L2. The purpose of broadcasting signals at two frequencies is to allow the appropriately equipped user to correct for the ionospheric refraction described in Section 5.4.1.
The L1 signal consists of two carrier components: One carries a precise (P) pseudorandom noise (PRN) code, while the other, transmitted in quadrature, carries a coarse/acquisition (C/A) PRN code. The L2 signal consists of only one carrier, which carries only the P code. Both codes are modulated with a 50-bit-per-second (bps) data message. The format and contents of that message are described in Section 5.5.6.

Signal Modulation The PRN codes and data are modulated onto the carriers using binary phase shift keying (BPSK). This modulation shifts the phase of the carrier 180° each time there is a change in state of the digitally defined code and data. First of all, the PRN code is a sequence of 1's and 0's, as are the message data. The message data sequence is modulo-2 added to the code sequence, which is nothing more than an exclusive-or of a code bit and a data bit, although the data bits transition at a much slower rate than do the codes. The resulting sequence of 1's and 0's is converted to 180° and 0° phase shifts of the carrier, respectively. Since the 180° phase shifts simply change the sign of the carrier, an equivalent representation is simply an amplitude modulation of ±1's. The result is a mathematical representation of the signals as follows:

s_L1(t) = A P(t)D(t) cos(2πf₁t + φ₀₁) + √2 A C(t)D(t) sin(2πf₁t + φ₀₁)   (5.46)

s_L2(t) = (A/√2) P(t)D(t) cos(2πf₂t + φ₀₂)   (5.47)

where A is the L1 P signal amplitude, P(t) and C(t) are the ±1 P and C/A code PRN sequences, D(t) is the ±1 message data bit sequence, f₁ and f₂ are the L1 and L2 carrier frequencies, and φ₀₁ and φ₀₂ are the ambiguous L1 and L2 carrier phases. Note that the two signals are not phase coherent, even though they are derived from the same frequency reference. There are also slight group (code) delay differences between the P code and C/A code modulations and between the P code modulations at the two frequencies [7].
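The signal model of Equation 5.46 can be written directly as a sampled expression. The sketch below assumes P, C, and D are supplied as ±1 sample streams aligned with the time vector; the amplitude and carrier values are placeholders (a receiver simulation would normally work at a much lower intermediate frequency).

```python
import numpy as np

def l1_signal(t, P, C, D, A=1.0, f1=1575.42e6, phi01=0.0):
    """Equation 5.46 evaluated at sample times t (seconds); P, C, and D are
    +/-1 arrays already sampled onto the same time grid as t."""
    return (A * P * D * np.cos(2 * np.pi * f1 * t + phi01)
            + np.sqrt(2.0) * A * C * D * np.sin(2 * np.pi * f1 * t + phi01))
```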

PRN Code Properties The PRN codes are generated as products (modulo-2 sums, if expressed as 1's and 0's) of two other codes clocked at the same chipping rate, where

C_i(t) = G1(t) G2(t + n_i T_c)   (5.48)

P_i(t) = X1(t) X2[t + (i − 1)T_P]   (5.49)

for satellite i, where T_c is the C/A code chip width, or the inverse of the C/A code chipping rate of 1.023 MHz, T_P is the P code chip width, or the inverse of the P code chipping rate of 10.23 MHz, and n_i is an integer assigned to satellite i for the C/A code. In the case of the P code, i takes on a value between 1 and 37, with 32 values assigned to satellites and 5 reserved for ground transmitters (GTs) [7, 8]. In the case of the C/A code, the n_i are selected values between 1 and 1023 for codes exhibiting desirable properties [32].
These PRN codes provide the desirable code-division, multiple-access
(CDMA) property that, to an extent, the codes received from the various satel-
lites do not correlate with each other, nor do they correlate with a reference
code in the user's receiver unless the state of the received code matches that of
the reference code. Thus, all satellite signals can be received at the same fre-
quency (except for Doppler differences) and selectively acquired and tracked,
depending upon the selection of the code in the reference code generator. These
PRN codes also exhibit the property that, to an extent, they spread interfer-
ence signals over the signal's bandwidth, providing a degree of interference
rejection. These spread spectrum properties of the GPS PRN codes will be dis-
cussed later in this chapter. Here we will discuss the correlation properties of the
codes.

1. C/A codes. The C/A codes are Gold codes [18], where the G1 and G2 codes are generated in maximal-length, 10-stage linear feedback shift registers, each of which generates a repeating maximal-length code of length 1023 chips. Figure 5.13 represents an implementation of a C/A coder [32]. In this implementation the initial state of the G2 register represents the delayed state of that maximal-length code (the n_i in Equation 5.48) from its initial state of all 1's. The resulting C/A (Gold) code is not a maximal-length code.

Figure 5.13 C/A coder implemented with the initial G2 state.

A maximal-length code x(t) has the autocorrelation property that [33]

Σ_{j=1}^{2^M−1} x(t_j) x(t_j + kT_c) = −1,   k ≠ 0   (5.50)

where k is an integer number and M is the number of stages in the shift register. This is partly true for the C/A codes, but they do not have perfect correlation properties. On the average, a C/A code has the autocorrelation property that, for k ≠ 0 [17],

Σ_{j=1}^{1023} C(j) C(j + kT_c) = −1                        with probability 0.75
                               = 2^((10+2)/2) − 1 = 63      with probability 0.125
                               = −2^((10+2)/2) − 1 = −65    with probability 0.125   (5.51)

although the probabilities vary somewhat, depending upon which code is evaluated [32]. Also, whereas the maximal-length codes are always balanced, 256 of the 1023 C/A codes are not. That is [32],

Σ_{j=1}^{1023} C(j) ≠ −1   (5.52)

All of the 32 codes selected for the GPS satellites and the four codes selected for the ground transmitters are balanced.
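The C/A code construction and the balance and autocorrelation properties just described can be reproduced with a short register-level simulation. The sketch below uses the G1 and G2 feedback taps of [7, 8]; the delay value passed to ca_code is illustrative, since the per-satellite delays n_i (and the sign convention of the shift) are tabulated in those documents.

```python
import numpy as np

def mls(taps, nstages=10):
    """One period (2**nstages - 1 chips) of a maximal-length sequence from a
    linear feedback shift register initialized to all 1's; `taps` are the
    1-indexed stages summed modulo 2 to form the feedback."""
    reg = [1] * nstages
    out = []
    for _ in range(2**nstages - 1):
        out.append(reg[-1])               # output is taken from the last stage
        fb = 0
        for tap in taps:
            fb ^= reg[tap - 1]
        reg = [fb] + reg[:-1]             # shift and insert the feedback bit
    return np.array(out, dtype=np.uint8)

def ca_code(n_i):
    """C/A (Gold) code chips (0/1) per Equation 5.48: G1(t) modulo-2 added
    to a delayed replica of G2(t)."""
    g1 = mls([3, 10])                     # G1: 1 + x^3 + x^10
    g2 = mls([2, 3, 6, 8, 9, 10])         # G2: 1 + x^2 + x^3 + x^6 + x^8 + x^9 + x^10
    return g1 ^ np.roll(g2, -n_i)

chips = 1 - 2 * ca_code(5).astype(int)    # map 0/1 chips to +1/-1
print(chips.sum())                        # equals -1 if this particular code is balanced
acf = {int(np.dot(chips, np.roll(chips, k))) for k in range(1, 1023)}
print(sorted(acf))                        # values fall in the three-valued set of Equation 5.51
```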
The autocorrelation property described above carries over to the cross-correlation between the different C/A codes. This means that the C/A codes are not quite orthogonal, and care must be taken during acquisition of the satellite signals to prevent false acquisitions and false alarms. At zero Doppler difference between satellite signals, Equation 5.51 above represents a separation of 23.9 dB between signals of equal power. There is also a degree of cross-correlation between signals at other Doppler differences. This property will be described later when we discuss the spectral characteristics of the PRN codes.
2. P codes. Whereas the C/A codes are linear codes, the P codes are nonlinear. That is, the C/A codes are made up of two maximal-length codes that are clocked synchronously and are allowed to proceed through all of their 1023 states. This is not true for the P codes. The underlying linear codes of the P code are short-cycled before creating the product of the codes. This has the effect of creating an extremely long code. In fact, the 37 individual P codes are simply one-week pieces of a long code that is approximately 38 weeks long.
A typical P code implementation is presented in Figure 5.14 [7]. Note that there are four shift registers, two each for the X1 and X2 code generators. Each of these 12-stage shift registers, which have 2¹² − 1 = 4095 possible states, is short-cycled, either at 4092 or 4093 states, and reset. Both of the X1 shift registers are reset on X1 epochs (every 1.5 seconds), while the X2 shift registers are reset every 1.5 seconds plus 37 chip clock cycles. All shift registers are reset at the end of the week. Although the X1 and X2 coders repeat every 1.5 and 1.5+ sec, because they run asynchronously at 10.23 MHz, their modulo-2 addition generates an extremely long code. Delaying the X2 code with respect to the X1 code an additional i − 1 chips for the ith satellite provides codes for each satellite. The count of the X1 epochs provides a Z-count that is used as basic timing for the system, to which the data message and the C/A coder are synchronized. If the coder were not reset at the end of the week, the code would eventually run into the code of the other satellites and return to the beginning almost 38 weeks later. Unlike the C/A codes, the P codes have excellent cross-correlation properties.
Figure 5.14 Typical P coder implementation.

3. Spectral characteristics of the PRN codes. In the frequency domain the spectral density of the signal is the spectral density of the PRN codes centered at ±f_i, i = 1, 2. At baseband, the spectral density of the P code is

S_Pi(f) = P_Pi T_P [sin(πf T_P)/(πf T_P)]²,   −∞ < f < ∞   (5.53)

where T_P is the P code chip width, or the inverse of the P code chipping rate of 10.23 MHz, and P_Pi is the appropriate P code carrier power. Actually, this spectral density is bandlimited in the GPS satellites in order to protect radio astronomers, so its range is not infinite. This could prevent a user's receiver from achieving full correlation. However, all P code receivers are also bandlimited. The resulting effect is known as correlation loss due to filtering.
The C/A code for satellite i is a short, one-millisecond repeating code whose spectral density is a line spectrum with components c_ji, j = −∞ to +∞, where j represents spectral lines 1 kHz apart and

Σ_{j=−∞}^{∞} c_ji = P_ci   (5.54)

where P_ci is the C/A code carrier power. The envelope of this line spectrum takes on the form

c_ji ≅ (P_ci/1023) [sin(πj/1023)/(πj/1023)]²   (5.55)

However, the spectral lines deviate from this envelope significantly. Figure 5.15 illustrates this for the first two lobes of the spectrum. As stated earlier, the C/A codes do have a level of cross-correlation at Doppler differences. It has been shown that this cross-correlation level is approximately equal to the magnitude of the spectral line component of another C/A code in the family of 1023 [18]. Since the line spectrum shown in Figure 5.15 is typical of all the codes as far as variation about the envelope is concerned, that spectrum shows typical levels for this type of cross-correlation: on the order of 21 dB below the carrier power. Thus, the cross-correlation at Doppler differences can be higher than at zero Doppler offset. However, its occurrence is quite rare, and a receiver would never track it because it would disappear as the relative codes move past each other. These occurrences can cause false alarms during initial signal acquisition.

Figure 5.15 PRN 2 C/A code spectral density (code spectral density in dBc/Hz versus frequency offset in kHz).

Navigation Data Synchronization The 50-bps navigation data stream is modulo-2 added to the codes prior to their modulation onto the carrier [7, 8]. The data bits are synchronized to the P code and C/A code epochs. The data frames, described in the next section, are synchronized to the X1 epochs, and the data bits are synchronized to the C/A code 1-msec epochs. Each data bit is 20 milliseconds in length, encompassing 20 C/A code epochs. The leading edge of every 75th bit is synchronous with an X1 epoch.

5.5.6 The GPS Navigation Message


The GPS navigation message is the information supplied to the GPS users from a GPS satellite. These data are provided via the 50-bps data bit stream modulated on the PRN codes described in Section 5.5.5, providing the user with the information needed to navigate [34]. Among other data, the user is provided with information from which can be computed the position and velocity of the satellite and the time and frequency offset of its clock, as well as information to resolve ambiguities in the received C/A code. The other information includes almanacs for determining the position, velocity, and clock offsets of the other satellites, an ionosphere model, and a description of the time offset between GPS system time and universal coordinated time (UTC).

Frames, Subframes, and TLM and HOW Words The GPS navigation message consists of a frame of five 300-bit subframes spanning 30 seconds of time, as illustrated in Figure 5.16 [7, 8]. Each six-second subframe consists of ten 30-bit words, the first two of which repeat in each subframe. These words, also illustrated in Figure 5.16, are the Telemetry Word (TLM Word) and the Hand-Over Word (HOW Word). These two words are generated by the satellite, while the other eight words are generated by the CS and uploaded to the satellite as described in Section 5.5.4.

Figure 5.16 GPS navigation message frame and TLM and HOW words. (Subframe 1: L2 flags, URA, week number, and clock correction parameters; subframes 2 and 3: ephemeris parameters; subframe 4: special messages, ionospheric model, UTC parameters, satellite almanacs and health, in 25 pages repeating every 12.5 minutes; subframe 5: satellite almanacs and health and week number, in 25 pages repeating every 12.5 minutes. TLM Word: preamble, telemetry message, and 6 parity bits; HOW Word: truncated Z-count, alert flag, A-S flag, subframe ID, and parity.)

The TLM Word consists of an 8-bit preamble and a satellite telemetry message. The preamble (10001011) allows the user's receiver to synchronize with the subframes to establish time of reception and subsequent resolution of the C/A code ambiguity. This ambiguity exists because the signal transit time from the satellite is on the order of 80 msec, while the length of the repeating C/A code is only 1 msec. For the most part, the telemetry message is of no use to the user; it primarily provides real-time telemetry information from the satellite to the CS.
The HOW Word contains a 17-bit truncated Z-count indicating the time of the start of the next subframe in units of six seconds, or four X1 epochs. It also contains two flags, an A-S on/off flag and an alert flag, and a subframe identification (ID) (1 to 5 for subframes 1 through 5). The Z-count is used by the user's receiver to establish GPS time. For P code users the Z-count also provides the time information required to hand over to the P code, hence the name of the word.
The A-S on/off flag alerts non-PPS users as to whether or not they are able to acquire the P code, and it alerts PPS users as to whether or not they should encrypt their reference P code [7]. The alert flag indicates to unauthorized users that the satellite user range accuracy (URA) may be worse than indicated in subframe 1 and that they should use the satellite at their own risk [7].
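As an illustration of how a receiver uses the HOW Word, the sketch below unpacks the truncated Z-count, the two flags, and the subframe ID from a 30-bit word supplied as an integer. The function name is arbitrary, and the assumed bit positions (bits 1 through 17 for the Z-count, 18 for the alert flag, 19 for A-S, and 20 through 22 for the subframe ID) are taken from [7, 8].

```python
def decode_how(word30):
    """Unpack a 30-bit HOW Word given as an integer with bit 1 as the MSB."""
    bits = [(word30 >> (30 - k)) & 1 for k in range(1, 31)]
    z_count = 0
    for b in bits[:17]:                          # 17-bit truncated Z-count
        z_count = (z_count << 1) | b
    alert_flag = bits[17]                        # bit 18
    as_flag = bits[18]                           # bit 19 (A-S on/off)
    subframe_id = (bits[19] << 2) | (bits[20] << 1) | bits[21]   # bits 20-22
    next_subframe_start_s = z_count * 6          # units of six seconds (four X1 epochs)
    return next_subframe_start_s, alert_flag, as_flag, subframe_id
```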

Parity Both the TLM and HOW Words contain satellite-generated parity, six bits per word. The other eight words of each subframe contain MCS-generated parity. The HOW Word contains two noninformation-bearing bits so that the last two parity bits can be forced to 0's, since the parity algorithm overlaps word boundaries.
The parity algorithm is a (32, 26) Hamming error detection algorithm. It will detect 1-, 2-, or 3-bit errors. As stated above, the parity overlaps the 30-bit words, since it is based upon 32 bits, always including the last two bits of the previous word. Thus, since the parity in the TLM and HOW Words is generated in the satellite, and the parity on the other words is generated in the MCS, two bits in the HOW Word and in Word 10 of the MCS-generated words are wasted. This is so that both the satellite and the MCS know the state of the last two bits of the previous word (set to 0) when generating the TLM Word (satellite) and Word 3 (MCS). The Hamming error detection algorithm used is specified in [7, 8].
Words 3 through 10 of each subframe contain the message data. Subframes 1, 2, and 3 repeat every 30 seconds, carrying the same data for nominally one hour, although the data can change more often or less frequently. Subframes 4 and 5 subcommutate 25 times each, so that a complete data message requires the transmission of 25 full 1500-bit frames, or 12.5 minutes. Thus, every 30 seconds one page of subframes 4 and 5 is transmitted. These pages consist of less timely data that change whenever the satellite is uploaded, while subframes 1, 2, and 3 consist of more timely data that change periodically. This periodicity is either once per hour, on the hour, or once per 4, 6, 12, or 24 (or more) hours on 4-, 6-, 12-, or 24-hour (or more) boundaries. Any period greater than an hour represents a case when a satellite could not be uploaded within a day of a previous upload, which would be rare. In addition, the data could change at any time with a quick upload. Again, this is a rare condition required to change substandard or erroneous data. One-hour data sets are valid for three additional hours. All other data sets are valid for two hours past their period of transmission.

Subframes 1 and 2 Data Content The primary content of subframe 1 consists of clock parameters describing satellite time during the valid time interval. Other contents include the GPS week number, URA, health of the satellite, various flags, and data reserved for authorized users. Content details are given in Table 5.3.

TABLE 5.3 Subframe 1 parameters

Parameter         Description
Code on L2 flag   C/A or P or reserved; nominally P
Week number       GPS week since 1 January 1980, mod 1024
L2 P data flag    Navigation data on or off; nominally on
URA parameter     Exponent parameter for URA evaluation
Satellite health  Discretes describing the satellite's health
T_GD              Additional clock correction for L1-only users
IODC              Issue of data, clock
t_oc              Reference time for clock corrections
a_f0              Satellite clock offset relative to GPS time
a_f1              Satellite clock fractional frequency offset
a_f2              Satellite clock drift rate

1. Satellite clock corrections. The satellite clock parameters describe a polynomial that allows the user to determine the effective satellite PRN code phase offset and carrier frequency offset, Δt_s, referenced to the phase center of the satellite antennas and with respect to GPS system time t. The equation for correcting the time t_s received from the satellite, in seconds, is

t = t_s − Δt_s   (5.56)

where

Δt_s = a_f0 + a_f1(t − t_oc) + a_f2(t − t_oc)² + Δt_r   (5.57)

where a_f0, a_f1, and a_f2 are the polynomial coefficients transmitted in subframe 1, t_oc is the clock data reference time in seconds, and Δt_r is the relativistic term in seconds computed as

Δt_r = −(2 X_s · V_s)/c² = F e √A sin E_k   (5.58)

where the orbit parameters e, A, and E_k are derived from the data contained in subframes 2 and 3, and
F = −2√μ/c² = −4.442807633 × 10⁻¹⁰ sec/(meter)^(1/2)   (5.59)

where

μ = 3.986005 × 10¹⁴ meters³/sec²   (5.60)

is the value of the Earth's universal gravitational parameter, and

c = 2.99792458 × 10⁸ meters/sec   (5.61)

is the speed of light. These relativistic effects on the GPS satellite clocks are described in [35]. It suffices to use the uncorrected value of t_s in the evaluation of the above equations, since, prior to correction, it is within 1 msec of the corrected value.
Since the clock correction parameters estimated by the MCS are based upon L1 and L2 measurements combined for correction of the delay through the ionosphere, and since there is a potential L1/L2 differential bias in the satellite, the L1-only user must revise his computation of the satellite's clock offset with [7, 8, 36]

(Δt_s)_L1 = Δt_s − T_GD   (5.62)

By specification [7, 8], this correction can be as large as 23.19 nsec.
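Equations 5.56 through 5.58, together with the L1-only adjustment of Equation 5.62, combine into a few lines of code. The sketch below assumes the eccentric anomaly E_k is already available from the ephemeris computations of Table 5.5; the parameter names are illustrative.

```python
import math

F = -4.442807633e-10      # sec/(meter)^(1/2), Equation 5.59

def corrected_gps_time(t_s, a_f0, a_f1, a_f2, t_oc, e, sqrt_A, E_k,
                       T_GD=0.0, l1_only=False):
    """Return GPS system time t from the received satellite time t_s using
    Equations 5.56-5.58, with the Equation 5.62 adjustment for L1-only users."""
    dt_r = F * e * sqrt_A * math.sin(E_k)                               # relativistic term (5.58)
    dt_s = a_f0 + a_f1 * (t_s - t_oc) + a_f2 * (t_s - t_oc)**2 + dt_r   # clock polynomial (5.57)
    if l1_only:
        dt_s -= T_GD                                                    # L1-only correction (5.62)
    return t_s - dt_s                                                   # Equation 5.56
```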


The issue of data clock (IODC) indicates changes in the subframe 1 data,
which occur nominally once per hour. It alerts the user if new data are being
transmitted but does not indicate the age of the data. It should also be used
by the user's receiver to match the issue of the ephemeris data provided in
subframes 2 and 3.
2. User range accuracy. URA is the MCS's prediction of the pseudorange accuracy provided to the user for the satellite, indicating an accuracy no better than X meters, where

X = 2^(1+N/2),   N ≤ 6
X = 2^(N−2),     N > 6   (5.63)

where 0 ≤ N ≤ 14 is the parameter provided in subframe 1. N = 15 indicates the absence of an accuracy prediction. This accuracy does not include the effects of ionospheric delay modeling performed by a single-frequency user.
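A one-line implementation of Equation 5.63 is sketched below; the function name is arbitrary.

```python
def ura_bound_meters(N):
    """User range accuracy bound X of Equation 5.63; N = 15 means no prediction."""
    if N == 15:
        return None
    return 2.0**(1 + N / 2) if N <= 6 else 2.0**(N - 2)
```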
3. Satellite position computations. Subframes 2 and 3 contain the ephemeris parameters required to compute the position X_s and velocity V_s of the satellite in ECEF coordinates. Table 5.4 lists these parameters. These parameters are those of a curve fit over the time of validity (fit interval) and not necessarily an entire orbit. A fit interval flag in subframe 2 indicates whether the interval is four hours or greater than four hours. IODE has a similar meaning for the ephemeris parameters as IODC has for the subframe 1 parameters.

TABLE 5.4 Parameters of subframes 2 and 3

Parameter   Description
M_0         Mean anomaly at reference time
Δn          Mean motion difference from computed value
e           Eccentricity
√A          Square root of the semimajor axis
Ω_0         Longitude of ascending node of orbit plane at weekly epoch
i_0         Inclination angle at reference time
ω           Argument of perigee
Ω̇           Rate of right ascension
di/dt       Rate of inclination angle
C_uc        Amplitude of the cosine harmonic correction term to the argument of latitude
C_us        Amplitude of the sine harmonic correction term to the argument of latitude
C_rc        Amplitude of the cosine harmonic correction term to the orbit radius
C_rs        Amplitude of the sine harmonic correction term to the orbit radius
C_ic        Amplitude of the cosine harmonic correction term to the angle of inclination
C_is        Amplitude of the sine harmonic correction term to the angle of inclination
t_oe        Reference time for ephemeris computations
IODE        Issue of data, ephemeris
The user receiver applies the parameters to a variation on the equations given in Section 5.3. However, not all the parameters are indicated in that section. These additional parameters reflect orbit drift and other orbit perturbations such as the sun and moon gravitational forces and solar radiation pressure. Typical equations for the computation of the satellite's position in ECEF coordinates are given in Table 5.5 [7, 8].
The accuracy of the curve fit used in generating these parameters and subsequent truncation is quite good. For the four-hour fit, the user range error (URE) based on a projection of the curve-fit error onto the user range is less than 0.4 meters, 1-sigma. For a six-hour fit, the URE degrades to 1.6 meters, 1-sigma. The equations provide the satellite's antenna phase center position in the WGS-84 ECEF reference frame defined in [7, 8].

TABLE 5.5 Typical satellite position computations

Equation                                             Computation
t_k = t − t_oe                                       Time from ephemeris reference epoch t_oe
n = n_0 + Δn                                         Corrected mean motion
M_k = M_0 + n t_k                                    Mean anomaly at time t_k
E_k = M_k + e sin E_k                                Eccentric anomaly at time t_k
ν_k = tan⁻¹[(√(1 − e²) sin E_k)/(cos E_k − e)]       True anomaly at time t_k
Φ_k = ν_k + ω                                        Argument of latitude at time t_k
δu_k = C_us sin 2Φ_k + C_uc cos 2Φ_k                 Second harmonic perturbation to argument of latitude at time t_k
δr_k = C_rs sin 2Φ_k + C_rc cos 2Φ_k                 Second harmonic perturbation to orbit radius at time t_k
δi_k = C_is sin 2Φ_k + C_ic cos 2Φ_k                 Second harmonic perturbation to inclination angle at time t_k
u_k = Φ_k + δu_k                                     Corrected argument of latitude at time t_k
r_k = A(1 − e cos E_k) + δr_k                        Corrected orbit radius at time t_k
i_k = i_0 + δi_k + (di/dt) t_k                       Corrected inclination angle at time t_k
x_k′ = r_k cos u_k                                   Position in orbital plane at time t_k
y_k′ = r_k sin u_k
Ω_k = Ω_0 + (Ω̇ − Ω̇_e) t_k − Ω̇_e t_oe                 Corrected longitude of ascending node at time t_k, accounting for the Earth's rotation rate Ω̇_e
x_k = x_k′ cos Ω_k − y_k′ sin Ω_k cos i_k            ECEF coordinates at time t_k
y_k = x_k′ sin Ω_k + y_k′ cos Ω_k cos i_k
z_k = y_k′ sin i_k
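The computations of Table 5.5 translate directly into code. The sketch below assumes the Table 5.4 parameters are supplied in a dictionary with illustrative key names; the mean motion n_0 = sqrt(μ/A³) follows from the Keplerian relations of Section 5.3, and the WGS-84 Earth rotation rate used for the node correction is a standard value not restated in the table.

```python
import math

MU = 3.986005e14                  # m^3/s^2 (Equation 5.60)
OMEGA_E_DOT = 7.2921151467e-5     # WGS-84 Earth rotation rate, rad/s (assumed standard value)

def sv_position_ecef(eph, t):
    """ECEF position of the satellite antenna phase center at GPS time t (s),
    following Table 5.5. Keys of `eph` (illustrative names): sqrt_A, e, M0,
    delta_n, omega, Omega0, Omega_dot, i0, i_dot, Cuc, Cus, Crc, Crs, Cic,
    Cis, toe."""
    A = eph["sqrt_A"]**2
    tk = t - eph["toe"]                                    # time from ephemeris epoch
    n = math.sqrt(MU / A**3) + eph["delta_n"]              # corrected mean motion
    Mk = eph["M0"] + n * tk                                # mean anomaly
    Ek = Mk
    for _ in range(10):                                    # E_k = M_k + e sin E_k, by iteration
        Ek = Mk + eph["e"] * math.sin(Ek)
    nuk = math.atan2(math.sqrt(1.0 - eph["e"]**2) * math.sin(Ek),
                     math.cos(Ek) - eph["e"])              # true anomaly
    Phik = nuk + eph["omega"]                              # argument of latitude
    duk = eph["Cus"] * math.sin(2*Phik) + eph["Cuc"] * math.cos(2*Phik)
    drk = eph["Crs"] * math.sin(2*Phik) + eph["Crc"] * math.cos(2*Phik)
    dik = eph["Cis"] * math.sin(2*Phik) + eph["Cic"] * math.cos(2*Phik)
    uk = Phik + duk                                        # corrected argument of latitude
    rk = A * (1.0 - eph["e"] * math.cos(Ek)) + drk         # corrected orbit radius
    ik = eph["i0"] + dik + eph["i_dot"] * tk               # corrected inclination
    xk_p, yk_p = rk * math.cos(uk), rk * math.sin(uk)      # in-plane coordinates
    Omegak = (eph["Omega0"] + (eph["Omega_dot"] - OMEGA_E_DOT) * tk
              - OMEGA_E_DOT * eph["toe"])                  # node corrected for Earth rotation
    xk = xk_p * math.cos(Omegak) - yk_p * math.sin(Omegak) * math.cos(ik)
    yk = xk_p * math.sin(Omegak) + yk_p * math.cos(Omegak) * math.cos(ik)
    zk = yk_p * math.sin(ik)
    return xk, yk, zk
```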

Subframe 4 and 5 Data Content All of the data contained in subframes 4 and 5 are of the almanac variety; that is, the data describe long-term parameters associated with all GPS satellites and the GPS system itself. These subframes consist of 50 pages, not all of which are in use. Table 5.6 presents a summary of the data contained by page number.

TABLE 5.6 Page content of subframes 4 and 5

Pages                                           Content
Subframe 4, pages 1, 6, 11, 12, 16, 19, 20,     Reserved for authorized users
  21, 22, 23, and 24
Subframe 4, pages 2, 3, 4, 5, 7, 8, 9, and 10   Ephemeris and clock data for satellites with PRNs 25 through 32
Subframe 4, pages 13, 14, and 15                Spare pages
Subframe 4, page 17                             Special messages
Subframe 4, page 18                             L1-only user's ionospheric delay model and GPS time/UTC conversion parameters
Subframe 4, page 25                             A-S flags and satellite configurations for 32 satellites; health data for satellites 25 through 32
Subframe 5, pages 1 through 24                  Ephemeris and clock data for satellites with PRNs 1 through 24
Subframe 5, page 25                             Almanac reference week number and health data for satellites 1 through 24

1. Satellite almanacs. The ephemeris and clock almanac data are a subset of that provided in subframes 1 and 2, describing the complete orbits of all satellites in the constellation. The ephemeris parameters effectively consist of only the six basic Keplerian parameters plus the rate of right ascension. The clock parameters consist of only the clock offset and fractional frequency offset. All of the parameters are referenced to a common time 3.5 days in advance of the start of transmission. These parameters can be used for an extended period of time in the future provided that the number of weeks (since the almanac reference week number transmitted on page 25 of subframe 5) is accounted for in projection of the parameters to that time.
2. Ionospheric delay model. The ionospheric delay model is intended for the L1-only users. Eight model parameters are provided on page 18 of subframe 4 that serve as polynomial coefficients describing the maximum zenith amplitude and the time dependency of the model as a function of the geomagnetic latitude of the Earth's projection of the ionospheric intersection point (IIP), assuming a mean ionospheric height of 350 km. This latitude is computed as a function of the user's geodetic latitude and longitude. Details of the algorithm are given in [7, 8] and [36]. The algorithm reduces the delay error for single-frequency users by on the order of 50% to 60% [37].
3. UTC parameters. Page 18 of subframe 4 also provides the parameters
needed to relate GPS time to UTC and notices to the users regarding the
scheduled future or recent past changes due to leap seconds. The param-
eters include the number of integer seconds between UTC and GPS time,
plus first-order polynomial coefficients describing the drift between GPS
time and UTC. However, this drift is kept to a minimum by the MCS by
steering GPS time toward UTC modulo 1 second. Details of this relationship are described in [7, 8] (a sketch of the conversion appears below).
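The conversion just described reduces to a few lines. The sketch below is a simplified illustration that assumes generic names for the broadcast quantities (an integer leap-second count, two first-order polynomial coefficients, and their reference time); the exact parameter definitions and the week-number bookkeeping of [7, 8] are omitted.

```python
def gps_to_utc(t_gps, leap_seconds, a0, a1, t_ref):
    """Convert a GPS time (seconds) to UTC using the integer leap-second
    count and the first-order drift polynomial broadcast on page 18 of
    subframe 4.  Parameter names are illustrative, not ICD nomenclature."""
    drift = a0 + a1 * (t_gps - t_ref)      # fractional GPS-UTC offset
    return t_gps - leap_seconds - drift    # subtract whole seconds and drift
```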

5.5.7 GPS Measurements and the Navigation Solution


Measurements The basic GPS measurements are pseudoranges from N satellites, as described in Equation 5.1, where N ≥ 4 unless the solution is augmented with another type of measurement, such as barometric altitude. Two other types of measurements are available to the avionics user: pseudorange rate (Doppler) and delta-pseudorange (integrated Doppler). These measurements are used to strengthen the determination of velocity and user clock drift, although the integrated Doppler measurement is sometimes used as a change in pseudorange, and thus a change of position, in the precise kinematic modes of operation being considered for precision landing applications [2].
The pseudorange rate is simply the derivative of Equation 5.1:

$$\frac{dPR_i}{dt} = \dot R_i + c\,\Delta\dot t_{si} - c\,\Delta\dot t_u + \epsilon_{\dot{PR}_i} \qquad (5.64)$$

In a linearized sense, the perturbation of the pseudorange rate is

$$\delta\,\frac{dPR_i}{dt} = \mathbf{h}_i \cdot \delta\dot{X}_u - c\,\delta\Delta\dot t_u + \epsilon_{\dot{PR}_i} \qquad (5.65)$$

where $\mathbf{h}_i$ is as defined in Equation 5.41.


Delta-pseudorange is usually used by an avionics receiver to approximate a pseudorange-rate measurement using a relatively short integration time [38] (0.1 to 1 second, depending upon dynamics), where, in a linearized sense, the perturbation equation is simply Equation 5.65 multiplied by $\Delta t = t_j - t_{j-1}$, or, equivalently, the components of $\mathbf{h}_i$ and the error term multiplied by that quantity. In the case of the kinematic mode of operation, Equations 5.7 and 5.8 hold.

Measurement Errors The error term is handled with a combination of corrections and modeled uncorrected error sources. The corrections for pseudorange measurements are as follows:

1. Satellite clock relativity correction. This correction is described by Equation 5.58. It arises from the difference in kinetic and potential energy between the satellite and the user. The correction is based on the theory of general relativity [35].
2. Ionospheric refraction correction. This correction is made either by measuring the difference in pseudorange $(PR_{1i} - PR_{2i})$ at two different frequencies and applying a dual-frequency correction, or by applying the single-frequency model defined in Section 5.5.6 and in reference [36]. The dual-frequency correction is based on Equation 5.32 [18]. The correction for an L1 pseudorange is

$$\Delta PR_{1i} = \left(\frac{f_2^2}{f_1^2 - f_2^2}\right)(PR_{1i} - PR_{2i}) = 1.546\,(PR_{1i} - PR_{2i}) \qquad (5.66)$$

Since the L2 delay is greater than the L1 delay, the correction is negative, decreasing the pseudorange. (A sketch of this correction appears after this list.)
3. Tropospheric refraction correction. This correction is made by subtracting
the effects of Equation 5.36. There are many variations to this correction,
depending upon the desired accuracy or complexity.

4. Earth's rotation correction. The error in pseudorange caused by the Earth's rotation is due to the fact that the Earth is rotating while the signal is propagating from the satellite to the user. The pseudorange error can be as large as 30 meters, with a sign dependent on the user's position with respect to the satellite. The correction can be done in one of two ways [6]. The first is a correction in pseudorange, which is

(5.67)

The second correction is realized by rotating the satellite back in time about the Earth's axis to compute its position for a slant range rather than a geometric range. This is accomplished by modifying the equation for the corrected longitude of ascending node described in Table 5.5 by the value

(5.68)

where $R_i$ is the estimated range to the satellite.


5. Selective availability errors. The authorized user (PPS mode) is also
allowed to correct for system-induced SA errors.
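As referenced in the ionospheric correction item above, the dual-frequency correction of Equation 5.66 can be applied directly to an L1 pseudorange. The sketch below is a minimal illustration; the frequency constants are the standard L1 and L2 carriers, and the function name is this sketch's own.

```python
F_L1 = 1575.42e6   # L1 carrier frequency, Hz
F_L2 = 1227.60e6   # L2 carrier frequency, Hz

def iono_corrected_l1_pseudorange(pr_l1, pr_l2):
    """Apply the dual-frequency ionospheric correction of Equation 5.66
    to an L1 pseudorange (all quantities in meters)."""
    # f2^2 / (f1^2 - f2^2) evaluates to approximately 1.546
    k = F_L2**2 / (F_L1**2 - F_L2**2)
    delta_pr_l1 = k * (pr_l1 - pr_l2)   # negative, since the L2 delay exceeds the L1 delay
    return pr_l1 + delta_pr_l1
```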

Uncorrected Error Sources After all the corrections stated above, residual error sources remain. Among the most predominant are SA errors for the unauthorized user (SPS mode). The next most predominant error source is the residual ionospheric refraction error for the L1-only users. Next are the standard system errors in the satellite clock parameters, followed by multipath errors, the tropospheric refraction, the satellite ephemeris, and finally receiver noise. New receiver technology has reduced the effects of multipath and receiver noise, except when operating in a jamming environment.

The Navigation Solution The standard methods for producing a navigation solution (position and velocity) are Kalman filtering and sequential least squares (see Chapter 3). Military applications generally use Kalman filtering, while commercial applications generally use least squares. A general least-squares method is illustrated here.
The basic assumption is that an initial (or previous) solution estimate is
known (which could initially simply be the center of the Earth). Then residuals
are computed with respect to that initial (or previous) estimate and sequentially
updated based upon the latest projected solution estimate. The resulting mea-
surement residuals are then defined as

$$\delta M = H^T \big[\,\delta X_u^T \;\; c\,\delta\Delta t_u \;\; \delta\dot X_u^T \;\; c\,\delta\Delta\dot t_u\,\big]^T = H^T\,\delta PV_u \qquad (5.69)$$

where $\delta M$ is a 2N vector of pseudorange and pseudorange-rate (or delta-pseudorange) residuals (differences between estimated and measured values) for N satellites, $\delta PV_u$ is an 8 × 1 state vector made up of position, velocity, and time state residuals, and $H^T$ (Equation 5.70) is a 2N × 8 matrix mapping the position, velocity, and time state residuals into the 2N measurement residuals. The $H$ matrix could be the $H_{LTP}$ matrix of Equation 5.44 if the solution is in the north, east, and up domain. Since $H^T$ is generally not a square matrix, the solution is overdetermined. In that case, the solution for $\delta PV_u$ in a weighted least-squares sense is

$$\delta PV_u = (H\,W\,H^T)^{-1} H\,W\,\delta M \qquad (5.71)$$

where W is a matrix weighting the measurements from the various satellites according to their accuracy or other weighting criteria. This perturbation solution is then added to the projected estimate previously used to compute the residuals. If the weights are inversely proportional to the variances of the measurement errors, the position, velocity, and time estimation error covariance matrix is then

$$(H\,W\,H^T)^{-1} \qquad (5.72)$$
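To make the least-squares procedure concrete, the following sketch performs one weighted least-squares iteration in the pattern of Equations 5.69 through 5.72, reduced to a pseudorange-only, four-state (position and clock) case; the eight-state position/velocity/time form adds the rate measurements in the same way. It is an illustrative simplification under assumed interfaces, not the formulation required of any particular receiver.

```python
import numpy as np

def ls_iteration(sat_pos, pr_meas, x_est, W=None):
    """One weighted least-squares update for a pseudorange-only solution.
    sat_pos: (N,3) satellite ECEF positions, pr_meas: (N,) measured
    pseudoranges, x_est: current estimate [x, y, z, c*dt_u].  Returns the
    updated estimate and its covariance (valid when W is the inverse of the
    measurement error covariance)."""
    pos, cdtu = x_est[:3], x_est[3]
    los = sat_pos - pos                       # vectors from user to satellites
    ranges = np.linalg.norm(los, axis=1)
    h = -los / ranges[:, None]                # rows of unit LOS partials d(range)/d(position)
    pr_pred = ranges + cdtu                   # predicted pseudoranges
    dM = pr_meas - pr_pred                    # measurement residuals

    HT = np.hstack([h, np.ones((len(pr_meas), 1))])   # N x 4 observation matrix
    if W is None:
        W = np.eye(len(pr_meas))
    A = HT.T @ W @ HT                         # normal-equation matrix (H W H^T)
    dX = np.linalg.solve(A, HT.T @ W @ dM)    # perturbation solution
    cov = np.linalg.inv(A)                    # estimation error covariance
    return x_est + dX, cov
```

Iterating this update a few times from a coarse initial estimate (even the center of the Earth, as noted above) normally converges to a solution consistent with the measurements.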

5.5.8 Aviation Receiver Characteristics [121 (Chapter 8)]


GPS avionics receiver technology has evolved significantly since the start of
development in 1974 and is continuing to evolve for military avionics appli-
cations and especially for commercial avionics applications. The GPS avionics
receiver characteristics vary considerably, from the low-end general aviation
receiver to the high-performance military aircraft and missile avionics receivers.
Only the high-end commercial and military avionics receiver characteristics are
described in this chapter.

General Characteristics The architecture of commercial and military avionics receivers is generally the same. They differ only in design parameters related to environmental, dynamic-capability, anti-jam (AJ), integrity, and security requirements. Except for the integrity requirements (see Sections 4.3.1, 5.7.1, and 5.7.3), the military avionics receiver dynamic performance requirements are usually more stringent, while commercial avionics applications are more concerned with safety-of-flight integrity requirements. Prior to describing the differences related to these requirements, a summary description of a generic GPS avionics receiver is given. More details follow in the descriptions of commercial and military avionics receivers, respectively.
A system level functional block diagram of a generic GPS avionics receiver
is shown in Figure 5.17. The receiver consists of the following major functions:
(1) antenna, (2) preamplifier, (3) reference oscillator, (4) frequency synthesizer,
(5) downconverter, (6) an intermediate frequency (IF) section, (7) signal pro-
cessing, and (8) navigation processing.
The antenna may consist of one or more elements and associated control
electronics, and it may be passive or active, depending upon its performance
requirements. Its function is to receive the GPS satellite signals while rejecting
multipath and, in some military applications, to reject interference signals. The
passive antenna usually has a hemispheric coverage pattern.
The parameters that dictate the antenna requirements are as follows: gain
versus azimuth and elevation, multipath rejection, interference rejection, pro-
file, size and environmental conditions. The gain requirements are a function
of satellite visibility requirements and are closely related to multipath rejection
and somewhat related to interference rejection. The goal is to have near-uni-
form gain toward all satellites above a specified elevation angle while reject-
ing multipath signals and interference typically present at low-elevation angles.
These are usually conflicting requirements. Some multipath rejection can also
be achieved by reducing the left-hand circularly polarized (LHCP) gain of the
antenna without reducing the right-hand circularly polarized (RHCP) gain. This
is because the satellite signals are RHCP signals, whereas reflected multipath
signals usually tend to be either linearly polarized (LP) or LHCP, depending
upon the dielectric constant of the reflecting surface.
Interference rejection can also be achieved using a phased-array antenna,
where the relative phase received from each antenna element is controlled to "null" out the interference in the combined reception. This type of antenna is called a controlled reception pattern antenna (CRPA), and it is usually reserved for mil-
itary applications. A low antenna profile is important in the avionics application
to minimize drag, but this is traded off against a desired gain pattern.
Environmental conditions dictate the type of material used for the antenna
and whether a radome is required. Some materials change their dielectric prop-
erties as a function of temperature.
The minimum operational performance standards for commercial avionics
antennas are given in [122b].
Figure 5.17 Generic GPS avionics receiver functional block diagram.

The preamplifier generally consists of burnout protection, filtering, and a
low-noise amplifier (LNA). Its primary function is to set the receiver's noise
figure (see Section 4.2.1) and to reject out-of-band interference. The parameters
that dictate the preamplifier requirements are the unwanted RF environment as received through the antenna, the losses that precede and follow the preamplifier, and the desired system noise figure (or noise temperature) derived from overall receiver performance requirements. The gain of the preamplifier is not a system-level requirement per se, but a derived requirement that satisfies the system-level requirement.
The unwanted RF environment as received through the antenna affects the
preamplifier in two ways. It can cause damage to the preamplifier electronics,
or it can cause saturation of the preamplifier and circuitry that follows. Nor-
mally, except for damage prevention, one can do nothing to suppress the RF
environment, as passed by the antenna, at frequencies that are in the bandwidth
of the desired GPS signal. That environment is considered to be either jam-
ming or unintentional interference. There are, however, more advanced tem-
poral interference suppression techniques that can be used to suppress narrow-
band interference [121, (Chapter 20)]. Suppression of the RF environment out
of the desired GPS signal band can be accomplished by filtering before, during, and/or after amplification. When it is accomplished, it is based upon a trade-off between system noise figure requirements, filter insertion loss, and bandwidth efficiency. Suppression of in-band and out-of-band damaging interference is usually accomplished with diodes that provide a ground path for strong signals. In the case of lightning protection, more complex lightning arrestors may be used.
The system noise figure is set using a low-noise amplifier (LNA) that pro-
vides enough gain to cause any losses inserted after the LNA to have a negli-
gible effect. An LNA cannot account for losses inserted before its operation or
for its own noise floor.
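The behavior described above, where LNA gain masks losses that follow it but not losses that precede it, is exactly what the cascaded (Friis) noise-figure formula predicts. The sketch below is a generic illustration with assumed example values, not a specification of any particular preamplifier.

```python
import math

def cascade_noise_figure_db(stages):
    """Cascaded noise figure via the Friis formula.
    `stages` is a list of (noise_figure_dB, gain_dB) tuples in signal order."""
    f_total = 0.0
    gain_product = 1.0
    for i, (nf_db, g_db) in enumerate(stages):
        f = 10 ** (nf_db / 10.0)
        if i == 0:
            f_total = f                          # first stage dominates
        else:
            f_total += (f - 1.0) / gain_product  # later stages divided by preceding gain
        gain_product *= 10 ** (g_db / 10.0)
    return 10 * math.log10(f_total)

# Example: a 1-dB cable loss ahead of a 26-dB-gain, 2-dB-NF LNA, followed by a
# lossy downconverter.  The leading loss adds directly; the trailing loss barely matters:
# cascade_noise_figure_db([(1.0, -1.0), (2.0, 26.0), (10.0, -10.0)])  ->  about 3.1 dB
```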
The reference oscillator provides the time and frequency reference for the
receiver. Since GPS receiver measurements are based on the time of arrival
of PRN code phase and received carrier phase and frequency information, the
reference oscillator is a key function of the receiver. Its output is used by the
frequency synthesizer, which converts the oscillator output to local oscillators
(LOs) and clocks used by the receiver. One or more of those LOs are used by
the downconverter to convert the radio frequency (RF) inputs to IF frequencies.
The signals are easier to process in the IF section of the receiver.
The requirements on reference oscillators for avionics receivers vary depend-
ing upon the avionics application. A high-quality oscillator can be the most sig-
nificant cost item of a modern receiver. Thus, there are compromises made on
oscillator performance. There are some commercial and military applications
where reference oscillator performance is critical.
Typical requirements parameters applied to reference oscillators are as fol-
lows:

• Size. Stable oven-controlled crystal oscillators (OCXOs) and rubidium oscillators can be quite large. Temperature-compensated crystal oscillators (TCXOs) are relatively small. Larger oscillators have more temperature inertia.
• Power. OCXOs and rubidium oscillators consume significant power.
• Short-term stability (less than 10 sec) due to temperature, power supply, and natural characteristics. Short-term stability affects the ability to estimate and predict time and frequency in the receiver.
• Long-term stability (greater than 10 sec, up to hours and days) due to natural characteristics, including crystal aging.
• Sensitivity to acceleration: g-force and vibration sensitivity. Vibration causes phase noise, and dynamic g-forces affect the ability to estimate time and frequency in the receiver.
• Phase noise: high-frequency stability (at frequency offsets above 1 Hz from the nominal frequency).

Mostly, the requirements placed on the frequency synthesizer are derived requirements and are the receiver designer's choice. Its design is based on the designer's frequency plan, which defines the receiver's IF frequencies, sampling clocks, signal-processing clocks, and so on. The frequency plan requires careful analysis to ensure adequate rejection of mixer harmonics, LO feedthrough, unwanted sidebands, and images. A key design parameter for the synthesizer is the minimization of phase noise generated in the synthesizer. The frequency synthesizer is also required to generate local clocks for signal processing and interrupts for the navigation processing. These local clocks comprise the receiver's time base.
The downconverter mixes LOs generated by the frequency synthesizer with the amplified RF input to an IF frequency and, if so designed, the IF frequency to lower IF frequencies. This process implements the frequency plan, which again is the receiver designer's choice.
The purpose of the IF section is to provide further filtering of out-of-band
noise and interference and to increase the amplitude of the signal-plus-noise to
a workable signal-processing level. The IF section may also contain automatic
gain control (AGC) circuits to increase the dynamic range of the receiver and
to suppress pulse-type interference.
The requirements on the IF section are as follows:

1. Final rejection of out-of-band interference, unwanted sidebands, LO feedthrough, and harmonics. The bandwidth of this rejection is a trade-off against correlation loss due to filtering. In addition, rejection of wideband noise is required to minimize aliasing in the signal-processing sampling process.
2. Increase the amplitude of the signal-plus-noise to workable levels for signal processing and control that amplitude, as required, for signal processing (AGC).

3. Suppress pulse-type interference.
4. Convert the IF signal to a baseband signal comprised of in-phase (I) and quadraphase (Q) signals.

The signal-processing function is the core of a GPS receiver. It performs the following functions: (1) splits the signal-plus-noise into multiple signal-processing channels for processing of multiple satellite signals simultaneously, (2) generates the reference PRN codes of the signals, (3) acquires the satellite signals, (4) tracks the code and the carrier of the satellite signals, (5) demodulates the navigation message data from the satellite signals, (6) extracts code phase (pseudorange) measurements from the PRN code of the satellite signals, (7) extracts carrier frequency (pseudorange-rate) and carrier phase (delta-pseudorange) measurements from the carrier of the satellite signals, (8) extracts signal-to-noise ratio information from the satellite signals, and (9) maintains a relationship to GPS system time.
The prime requirement of the signal processing is to provide the GPS mea-
surements and navigation message data from selected satellites required to per-
form the navigation-processing function. The outputs of the signal process-
ing function are pseudoranges, pseudorange rates and/or delta-pseudoranges,
signal-to-noise ratios, local receiver time-tags, and GPS system data for each
of the GPS satellites being tracked.
The navigation-processing function controls the signal-processing function
and uses its outputs to satisfy the navigation requirements. It does this by per-
forming some or all of the following functions:

1. Selects satellites to be acquired and tracked by the signal-processing function.
2. Computes signal acquisition and tracking aiding information for the signal-processing function.
3. Reinitializes the signal-processing function in case of loss of lock.
4. Collects measurements and navigation message data from the signal-processing function and maintains a system data base.
5. Computes the satellites' positions, velocities, and time corrections as described in Section 5.5.6.
6. Corrects the measurements as described in Section 5.5.7.
7. Accepts and processes external navigation data to aid the navigation processing.
8. Solves for position, velocity, and time as described in Section 5.5.7.
9. Determines the integrity of the position and velocity solutions.
10. Performs area navigation as described in Chapters 2 and 14.
11. Performs input/output processing. In the case of the commercial avionics receivers, this processing is usually in accordance with ARINC 429 standards. In the case of the military avionics receivers, the usual interface is in accordance with MIL-STD-1553 bus standards exercising GPS 1553 protocols.
12. Accepts appropriate crypto keys in military avionics receivers and performs computations required for SA and A-S.

Commercial GPS Aviation Receivers In 1996, GPS receivers for commercial avionics are still in the developmental stage, awaiting the definition of required navigation performance (RNP) for sole means navigation. However, a properly designed and tested receiver can be used for supplemental means navigation. Requirements for a supplemental means GPS receiver are defined in the Federal Aviation Administration's (FAA) Technical Standard Order TSO-C129 [39] for all phases of flight except precision approaches, which is based upon RTCA's Minimum Operational Performance Standards (MOPS) for Airborne Supplemental Navigation Equipment Using Global Positioning System (GPS), RTCA/DO-208 [40]. The European minimum operational performance specifications for airborne GPS receiving equipment are given in [123]. Similar standards are being developed for sole means navigation, including those for precision approach [41]. Precision approach requires the concept of differential GPS (DGPS), which is described in Section 5.5.9.
The characteristics of a GPS avionics receiver (GPS sensor) for commercial transport applications are specified in ARINC Characteristic 743A-1, GNSS Sensor [42]. These characteristics are specified for two configurations, one where the receiver and LNA are packaged separately (2 MCU configuration) and one where the receiver and LNA are packaged together for installation near the antenna (alternate configuration).
Commercial GPS avionics receivers are specified in ARINC 743A-1 to use the C/A code only, which limits them to the L1 frequency. Since some of these receivers are being developed with precision landing capabilities in mind, high-accuracy C/A code tracking is required. Thus, in this section, the receiver used as a model for the description of commercial GPS avionics receivers is one that uses state-of-the-art technology for this high accuracy, namely the 12-channel NovAtel GPSCard™. This receiver uses the narrow correlator spacing technology that reduces the effects of ambient noise and multipath [43, 44]. An OEM card version is illustrated in Figure 5.18. This card is being upgraded for the commercial aviation environment. A functional block diagram of this receiver card is illustrated in Figure 5.19. The following is a description of those functions:

Figure 5.18 OEM GPS receiver card (courtesy, NovAtel Communications, Ltd.)

1. Antenna/LNA. The antenna shown in Figure 5.20 is not peculiar to the described receiver card. It was developed and is manufactured by Sensor Systems (P/N S67-1575-39). It is an FAA TSO-C115a certified airborne radome-protected antenna that includes a built-in 26-dB LNA, band-pass filtering, and lightning protection. The LNA is powered through the coaxial cable with a dc voltage between 4 and 24 V at a maximum of 100 mW. The antenna is 3.5 in. in diameter and weighs 5 oz. It meets the requirements specified in references [42] and [122].
2. Reference oscillator. Like most commercial receivers, this receiver uses a small TCXO as its reference oscillator (on the center of the board shown in Figure 5.18). These small TCXOs have marginal stability and phase noise characteristics for some GPS applications, but they are satisfactory when used in conjunction with a multiple-parallel-channel receiver with tracking loop bandwidths commensurate with dynamic applications. Note that the reference oscillator output (20.473 MHz) is used directly as clocks for sampling and signal processing.
3. Synthesizer/downconverter. A block diagram of the synthesizer and downconverter is shown in Figure 5.21 [43]. A commercial synthesizer chip with a programmable divider (prescaler) is used to phase lock a voltage-controlled oscillator (VCO) to the reference oscillator. The VCO output frequency is doubled to provide the downconverter LO.
4. Filtering. IF filtering is realized with an 8-MHz bandwidth surface acoustic wave (SAW) filter centered at the IF frequency of 35 MHz [43]. Although this SAW filter has a significant group delay, the delay is stable, and the filter provides excellent rejection of out-of-band noise and interference.
Figure 5.19 Functional block diagram of commercial receiver (courtesy, NovAtel Communications, Ltd.)

Figure 5.20 Civil aviation antenna (courtesy, Sensor Systems)

Figure 5.21 Synthesizer/downconverter block diagram (courtesy, NovAtel Communications, Ltd.)

Figure 5.22 Typical IF sampling and A/D conversion process.

5. IF sampling and A/D conversion. The IF sampling and A/D conversion process is illustrated in Figure 5.22. This processing includes a precorrelation automatic gain control (AGC) [43] that controls the level of the signal-plus-noise entering the 2.5-bit A/D converter for optimum quantization in the presence of both noise and narrowband interference. Typical lower-performance commercial GPS receivers use 1-bit quantization with no AGC, which suffer 2 to 3 dB loss of signal-to-noise ratio, depending upon the precorrelation bandwidth, in the presence of noise, and up to 6 to 8 dB loss in the presence of narrowband interference [45-50]. The multi-bit quantization coupled with AGC is important in aviation applications, in which good receiver sensitivity and interference rejection are required characteristics [42]. The IF sampling rate of $4f_{IF}/N_s$, where $N_s$ is an odd number (designer's choice), is a method of converting in-phase (I) and quadraphase (Q) samples directly, without first converting the IF signal to baseband. The result is a sequence of samples of the signal components [43]:

$$I_{s,k_1},\; Q_{s,k_2},\; -I_{s,k_3},\; -Q_{s,k_4},\; I_{s,k_5},\; Q_{s,k_6},\; \ldots \qquad (5.73)$$

or

$$I_{s,k_1},\; -Q_{s,k_2},\; -I_{s,k_3},\; Q_{s,k_4},\; I_{s,k_5},\; -Q_{s,k_6},\; -I_{s,k_7},\; Q_{s,k_8},\; \ldots \qquad (5.74)$$

depending upon the value of $N_s$, where

$$I_{s,k} = A\,C_k D_k \cos\phi_k \qquad (5.75)$$

$$Q_{s,k} = A\,C_k D_k \sin\phi_k \qquad (5.76)$$

where $C_k$, $D_k$, and $\phi_k$ are the code, data bit, and signal phase at sample time $t_k$.
6. Doppler removal (phase rotation). The Doppler removal process of Figure 5.19 is part of the signal-phase or frequency-tracking function, which is a complex multiplication between the signal I and Q samples and reference I and Q samples generated by the carrier number-controlled oscillator (NCO). This NCO is controlled by the microprocessor's portion of the carrier-tracking loop. Since the C/A PRN code Doppler is related to the carrier Doppler by a factor of 1540, the carrier NCO also outputs the basic code clock Doppler correction, which is further corrected by the microprocessor's code-tracking loop function with code phase corrections. In some receiver implementations a completely separate NCO is used to derive the code Doppler and phase.
7. Coder. The implementation of a typical C/A coder is shown in Figure 5.13. This implementation, in which the G2 state is initialized, allows the generation of all 1023 codes in the C/A code family, which is important for future implementations [32].
8. Correlators. The correlation process for narrow correlator processing is illustrated in Figure 5.23 [43, 44]. In this process early, punctual, and late codes are derived in a shift register that shifts the (early) C/A code from the coder at a clocking rate defined by the desired early/late correlator spacing (in a fraction of a C/A code chip). The dual correlation process is realized by performing a multi-bit exclusive-or between the single-bit PRN codes and the multi-bit I and Q samples. A discriminator selection process allows the selection of either early and late or early-minus-late and punctual correlation. Early and late correlation is used during the signal acquisition process, using the maximum (approximately one chip) spacing (N = 10 in Figure 5.23) for rapid acquisition. Early-minus-late and punctual correlation is used during tracking, using a 0.1-chip spacing (N = 1) for optimum parallel code (early-minus-late times punctual) and carrier (punctual) tracking. This dynamic spacing concept provides fast acquisition and C/A code-tracking performance that approaches that of conventional P code-tracking performance. This is because the code-tracking error variance as a function of correlator spacing d and signal-to-noise density $S/N_0$ is given as [44]

(5.77)

where $B_L$ is the single-sided tracking loop bandwidth and $B_{IF}$ is the two-sided predetection bandwidth. The d for N = 1 is one-tenth that for the conventional N = 10, while the signal-to-noise density for C/A code tracking is twice that for P code tracking.

Figure 5.23 Dynamic correlation spacing process [43].
9. Postcorrelation filtering (accumulators). After correlation, the two sets of I and Q samples are accumulated and dumped to the microprocessor (Figure 5.19) at the C/A code epoch rate to provide 1-kHz predetection-bandwidth I and Q samples for further processing at that rate. That rate is used directly for wideband signal acquisition and for bit synchronization after acquisition. Bit synchronization determines bit timing, after which the predetection bandwidth is reduced to the navigation message data rate of 50 Hz for optimum signal processing.
10. Microprocessor signal processing. The receiver in Figure 5.18 uses a 32-bit microprocessor with a built-in math coprocessor to perform both the signal processing and the navigation processing [43]. The different signal-processing functions are indicated in Figure 5.19. In summary, these are as follows (a sketch of the acquisition and tracking discriminators appears after this list):
• Signal acquisition is accomplished by computing the signal-plus-noise power $\sum_k (I_k^2 + Q_k^2)$ at one-half C/A code chip increments until it exceeds a threshold based upon an estimate of noise power.
• Carrier phase tracking is accomplished by minimizing $\tan^{-1}(Q_k/I_k)$.
• Code tracking is accomplished by minimizing either a dot-product discriminator

$$(I_E - I_L)\,I_P + (Q_E - Q_L)\,Q_P \qquad (5.78)$$

or an early-minus-late power discriminator

$$(I_E^2 + Q_E^2) - (I_L^2 + Q_L^2) \qquad (5.79)$$

both of which minimize the early-minus-late correlation amplitude, where the I and Q options and the early/punctual/late correlator spacing are controlled as indicated in Figure 5.23.
• Data demodulation is accomplished by sampling the sign of $I_k$ while tracking the carrier phase.
• Bit synchronization is accomplished by sensing sign changes in the 1-kHz data samples.
• Frame synchronization is accomplished by correlating with the navigation message preamble at the beginning of the TLM word (see Section 5.5.6).
• Phase lock is verified with the computation of the correct data parity or through the computation of $\sum_k (I_k^2 - Q_k^2)$, which is an estimate of the cosine of carrier phase.
• Signal-to-noise density computations.
• Formulation of pseudorange and delta-pseudorange measurements.

In the receiver of Figure 5.18, signal processing is optimized to accommodate up to 6 g of acceleration.
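As referenced in the list above, the following sketch shows simplified forms of the acquisition power test, the carrier phase discriminator, and the two code discriminators operating on correlator outputs. The function and variable names are this sketch's own, and the logic is illustrative rather than any receiver's actual firmware.

```python
import math

def acquisition_power(i_samples, q_samples):
    """Signal-plus-noise power over an accumulation interval, to be compared
    against a threshold derived from an estimate of the noise power."""
    return sum(i * i + q * q for i, q in zip(i_samples, q_samples))

def carrier_phase_error(i_p, q_p):
    """Costas-style carrier phase discriminator, tan^-1(Q/I), minimized by
    the carrier-tracking loop."""
    return math.atan(q_p / i_p) if i_p != 0.0 else math.copysign(math.pi / 2, q_p)

def dot_product_discriminator(i_e, q_e, i_l, q_l, i_p, q_p):
    """Conventional dot-product code discriminator formed from early, late,
    and punctual correlations."""
    return (i_e - i_l) * i_p + (q_e - q_l) * q_p

def early_minus_late_power(i_e, q_e, i_l, q_l):
    """Conventional early-minus-late power code discriminator."""
    return (i_e**2 + q_e**2) - (i_l**2 + q_l**2)
```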

Military GPS Aviation Receivers In 1996, the standard GPS receiver for military avionics was the miniature airborne GPS receiver (MAGR) produced by Rockwell International, which also produces another variety intended for embedded applications, where the entire receiver (the miniature GPS receiver, MGR), less antenna, is housed inside another avionics assembly such as an inertial navigation system [51]. The requirements for the MAGR are defined in [52]. Guidelines for embedded military receivers are specified in [53] and described in [54]. The MAGR receiver is illustrated in Figure 5.24.

Figure 5.24 MAGR receiver (courtesy, Rockwell International).

Figure 5.25 Functional block diagram of MAGR receiver functions [55].



The MAGR is a stand-alone five-channel PPS L1/L2 receiver, although there are two configurations (Air Force and Navy). The Air Force configuration does not include the Antenna Electronics (AE), which consists of the LNA and downconverter, while the Navy configuration does [52]. In the Air Force configuration, the AE is remote at the antenna. The Navy configuration is intended to be located near the antenna.
The functional description to follow will be for the Navy MAGR configuration only. Its functional block diagram is given in Figure 5.25 [55]. Note that it is functionally similar to the commercial receiver illustrated in Figure 5.19. The following is a description of those functions:

1. Antenna. The antenna shown in Figure 5.26, a version of the military Fixed Radiation Pattern Antenna (FRPA-3), is the Dorne & Margolin DM C146-10-2 L1/L2 antenna being used on the F-18, the AV-8B Harrier, and the TR-1, the new version of the Lockheed U-2 [56]. It is a low-profile antenna (1.5 in. high) with a diameter of less than 5 in. It weighs 0.5 lb.

Figure 5.26 FRPA antenna (courtesy, Dorne & Margolin).

2. Preamplification, downconversion, reference oscillator, and synthesizer. The pre-select filtering, burnout protection, and LNA are made up of conventional discrete components [50]. The downconversion and synthesizer comprise two custom silicon bipolar integrated circuit chips as shown in Figure 5.27.
Dual downconversions are accomplished, one for L1 and one for L2. The LO for these downconversions is common, converting both RF frequencies to identical IF frequencies. This LO is derived in the synthesizer, which is driven with the output of an ovenized reference oscillator at a frequency of approximately 10.95 MHz. The synthesizer also generates in-phase and quadraphase LOs for later conversion to baseband and clocks for the signal-processing function.

Figure 5.27 MAGR downconversion/synthesizer silicon bipolar chips [55].

Figure 5.28 MAGR wideband IF chip [55].

3. AGC, conversion to baseband, and A/D conversion. This process is applied to both the L1 and the L2 IF signals through identical wideband IF silicon bipolar chips, whose block diagram is shown in Figure 5.28

[55]. Conversion to baseband is realized through mixing the L1 and L2 IF signals with in-phase and quadraphase LOs from the synthesizer. Filtering follows to limit the bandwidth to 20 MHz, the bandwidth of the P code, to prevent aliasing in the sampling process. The AGC is a rapid wideband AGC, whose purpose is to suppress pulsing interference as well as to provide a large dynamic range (70 dB) to accommodate high levels of jamming. The sampling process, at twice the code clock rate (20.46 MHz), provides 1.5-bit I and Q samples to the signal-processing function. The thresholds (R and −R) are controlled by the signal processing for optimum performance in the presence of noise and narrowband interference.

4. Doppler removal, coder, correlators, and postcorrelation filtering. These functions are performed in the MAGR in a similar manner to that of the receiver shown in Figure 5.19, with the following exceptions:
• The code Doppler removal is realized with an independent code NCO.
• The coder function consists of both the C/A and P coders as well as the P code encryption function to produce the Y code.
• The early-minus-late correlator spacing is fixed at one C/A or one P chip, depending upon which coder is being used.
• Rather than two sets of I and Q correlators and accumulators in each of twelve channels, the MAGR has ten sets of I and Q correlators and accumulators in each of five channels. This configuration provides fast signal acquisition and reacquisition in the presence of jamming.
5. Microprocessor signal processing. The MAGR uses Rockwell's AAMP-2
microprocessor to perform both the signal processing and the navigation
processing [57]. The signal-processing functions are indicated in Figure
5.25. This includes variations on all the functions depicted in Figure 5.19
for the commercial receiver, with the following exceptions [58]:

• Because of the very high dynamic requirements (9 g, 10 g/sec [59]), the MAGR performs carrier frequency tracking, instead of carrier phase tracking, by minimizing $\tan^{-1}(Q_k/I_k) - \tan^{-1}(Q_{k-1}/I_{k-1})$ (the time difference of the carrier phase error). Data are demodulated differentially by observing the changes in the sign of $I_k$ (see the sketch following this list).
• Instead of verification of phase lock, measurement validity is determined by comparing the signal-plus-noise power $\sum_k (I_k^2 + Q_k^2)$ to a threshold based upon an estimate of noise power.
• Code-tracking loop aiding is processed from corrected external inputs from an inertial navigation system to achieve a high-AJ tracking capability [52, 60].
• L2 tracking for ionospheric delay corrections is normally performed sequentially on the fifth channel. However, if it is determined that L1 is jammed, L2 tracking can be performed on some or all channels [52, 60].
6. A-S and SA processing. The MAGR incorporates a precise positioning service-security module (PPS-SM) and auxiliary output chip (AOC) devices to perform A-S and SA processing [52, 59]. The AOC devices are provided for each channel to allow tracking of the encrypted code when the receiver is properly authorized. The PPS-SM performs crypto-key processing and management for the A-S and SA processing. It operates on battery power so that keys may be loaded or zeroized without receiver prime power. A dedicated data path from the PPS-SM to the AOC devices prevents sensitive data from being handled by the other processors.
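As referenced in the list above, a minimal sketch of frequency tracking by phase differencing and of differential data demodulation follows. The names and the scaling are illustrative assumptions, not the MAGR implementation.

```python
import math

def frequency_error_hz(i_k, q_k, i_prev, q_prev, dt):
    """Frequency discriminator used in place of a phase discriminator under
    high dynamics: the time difference of the carrier phase error tan^-1(Q/I)
    between successive samples, scaled to hertz.  (Zero-I guards and pull-in
    logic are omitted from this sketch.)"""
    dphi = math.atan(q_k / i_k) - math.atan(q_prev / i_prev)
    return dphi / (2.0 * math.pi * dt)

def data_bit_transition(i_k, i_prev):
    """Differential data demodulation: a change in the sign of I between
    successive samples indicates a data-bit transition."""
    return (i_k >= 0.0) != (i_prev >= 0.0)
```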

5.5.9 Differential GPS


The concept of differential GPS (DGPS) is illustrated in Figure 5.29 [61,
62, 21]. DGPS requires a reference station at a known location that receives
the same GPS signals as does the avionics user. This reference station pro-
cesses its GPS measurements, deriving pseudorange, delta-pseudorange, and
pseudorange-rate errors with respect to its accurately known location and then
transmits these corrections to participating users in the area. The avionics user
then applies these corrections to his measurements, thus canceling all common errors. Accuracies ranging from sub-meter to 10 meters have been experienced using DGPS, depending upon the techniques used and the distance from the reference station.
This differential technique works if the common errors are bias errors due
to causes outside of the receiver. The major sources of common errors are the
following [63, 64]:

Figure 5.29 Differential GPS concept.

1. Selective availability errors. Although these errors are not biases, they have correlation times that are long enough to be eliminated if the correction update rate is high enough. Typical pseudorange errors are about 30 meters, 1-sigma, but they have the potential to be higher.
2. Ionospheric delays. These propagation group-delay errors can range from as high as 20 to 30 meters during the afternoon hours down to 1 to 6 meters at night, if not removed using two-frequency corrections. The single-frequency model will reduce this by approximately 50%. These errors are slowly varying biases but spatially decorrelate over larger distances.
3. Tropospheric delays. These propagation delays can be as much as 30 meters to a low-elevation satellite but are quite consistent and can be modeled. However, variations in the index of refraction can cause differences between the reference station and the user of 1 to 3 meters for low-elevation satellites. They are also slowly varying biases, and they spatially decorrelate over larger distances.
4. Ephemeris errors. Normally, the difference between the actual satellite location and the location computed from the broadcast ephemeris is small, less than 1 to 3 meters, but this error can be increased significantly with selective availability. Ephemeris errors are very slowly varying biases but can spatially decorrelate over large distances.
5. Satellite clock errors. The differences between the actual satellite clock time and that computed from the broadcast corrections can become large if a satellite's clock is misbehaving.

Satellite clock errors, including those caused by SA dithering, are completely eliminated by DGPS, except for the SA dithering effects due to delays in estimating, broadcasting, and applying the DGPS corrections. As noted above, all of the other errors may not be completely eliminated as the distance between the reference station and the user increases. The following errors are not eliminated: multipath and receiver noise at the reference station and the user.
Multipath has been found to be the dominant error source that limits the accu-
racy in a local DGPS environment, while the ionospheric delay is usually the
limiting factor for achieving the best accuracy over large distances. Care must
also be taken to ensure that both the reference station and the user are perform-
ing their computations using the same satellite navigation messages, and that
their computations are performed using accurate algorithms.

RTCM Recommended Standards for Differential NAVSTAR GPS Service


The Radio Technical Commission for Maritime Services (RTCM) Special Com-
mittee SC104 took on the task of defining a standard set of broadcast messages
for disseminating GPS differential corrections [63, 64]. Although these standards were established primarily for maritime users, they have been applied successfully to aeronautical uses as well. However, the aeronautical community is in the process of defining its own standards for precision landing applications [41]. The RTCM SC104 standard has, however, provided the framework for other applications, including differential techniques similar to those used by surveyors for kinematic surveying [65]. Data links and data link protocols
have not been standardized.
The key standardized message types that have been fixed include the follow-
ing [63, 64 ]:

1. Differential GPS corrections, made up of pseudorange and pseudorange-rate corrections for all satellites in view of the reference station. A user differential range error (UDRE) indicating the accuracy of each correction, plus the issue of satellite navigation data (IOD), are also included. The corrected pseudorange for satellite i is then (see the sketch following this list)

$$PR_{ic}(t) = PR_i(t) + PRC_i(t_0) + \frac{dPRC_i(t_0)}{dt}\,(t - t_0) \qquad (5.80)$$

where $PRC_i(t_0)$ and $dPRC_i(t_0)/dt$ are the broadcast corrections for satellite i at their time of applicability $t_0$.
2. Delta-differential GPS corrections, made up of corrections to the broadcast corrections applicable to the previous issue of satellite navigation data (IOD) for a period of time after an IOD change. These delta corrections for the pseudorange and pseudorange-rate corrections for satellite i, respectively, are

$$\Delta PRC_i(t_0) = PRC_i(t_0,\,\mathrm{IOD}_{old}) - PRC_i(t_0,\,\mathrm{IOD}_{new}) \qquad (5.81)$$

$$\Delta\,\frac{dPRC_i(t_0)}{dt} = \frac{dPRC_i(t_0,\,\mathrm{IOD}_{old})}{dt} - \frac{dPRC_i(t_0,\,\mathrm{IOD}_{new})}{dt} \qquad (5.82)$$

3. Reference station parameters, made up of the WGS 84 ECEF coordinates of the reference station with a resolution accuracy of 0.01 meters.
4. High-rate differential GPS corrections made up of the same contents as
the differential GPS corrections, but for only those satellites having high
rates of change of differential corrections. This message is used in the case
when the normal broadcast rate of transmission of differential GPS correc-
tions is not high enough to maintain the desired accuracy for those satel-
lites. The cause of this would be excessively high-bandwidth SA errors,
coupled with a low-bandwidth differential broadcast link.
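As referenced in message type 1 above, applying a broadcast correction per Equation 5.80 is a short computation. The sketch below is illustrative only and ignores message decoding, IOD matching, and UDRE screening.

```python
def apply_dgps_correction(pr_measured, prc_t0, prc_rate_t0, t, t0):
    """Corrected pseudorange per Equation 5.80: the broadcast correction
    PRC(t0) is propagated to the measurement time t with its rate and added
    to the measured pseudorange (all quantities in meters and seconds)."""
    return pr_measured + prc_t0 + prc_rate_t0 * (t - t0)
```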

Although DGPS provides a significant increase in accuracy over standard GPS, especially when SA is invoked, the user time solution is no longer a
solution with respect to GPS time unless the reference station's time is syn-
chronized to GPS time. The time solution is now with respect to the time base
of the reference station. Normally, the practice is to maintain a time solution
at the station that drives the average of the corrections of all satellites in view to zero, under the assumption that the mean of all errors is zero. Depending
upon how this average is filtered and the quality of the frequency standard or
oscillator used in the reference station, the effective time base will vary with
respect to GPS time. Furthermore, any biases present in the reference station's
receiver will be an offset with respect to GPS time. Receiver calibration can
minimize this offset.

Special Category I Precision Approach Operations Using DGPS At the request of the FAA, RTCA, Inc.'s Special Committee SC159 established a spe-
cial ad hoc development team to prepare Minimum Aviation System Perfor-
mance Standards (MASPS) for Special Category I Precision Approach Opera-
tions Using DGPS [41]. This special capability is intended for designated air-
craft at special use airports and later at public use airports to provide standards
for the operational use and evaluation of DGPS techniques in actual precision
approach and landing conditions. Such a capability has been demonstrated using
the high-performance receiver described in Section 5.5.8, Figure 5.18, for both
the reference station and the airborne receiver using the RTCM DGPS mes-
sages described above with a three-second update rate. Ninety-fifth-percentile accuracies of 1 meter, horizontal, and 2 meters, vertical, were obtained over 68 approaches using a laser tracker as a reference [66, 67, 68]. The MASPS deviates slightly from the RTCM messages for this application for three reasons:

(1) to ensure the higher update rates using existing data links, (2) to provide
additional information required for the precision approach and landing appli-
cation, and (3) to increase the integrity of the broadcast with a much stronger
parity algorithm. The flight test results described above suggest that DGPS,
using differential pseudorange corrections, can meet even Category III preci-
sion approach and landing requirements [68].

Wide Area DGPS RTCA Special Committee SC159 is also preparing requirements for the use of Wide Area DGPS (WADGPS) as part of the FAA's future Wide Area Augmentation System (WAAS) to achieve a Category I precision approach and landing capability [69]. WAAS uses a broadcast through a geostationary satellite to provide corrections over a very wide area, such as the continental United States (CONUS). The accuracy of this approach suffers somewhat because of spatial decorrelation and limited broadcast bandwidth but is expected to provide the required Category I precision approach accuracy over a region such as CONUS. To achieve this, the broadcast messages differ significantly from the RTCM messages because information on ephemeris and ionospheric errors must be provided to correct for spatial decorrelation of these errors. More detail on the WAAS is provided in Section 5.7.3.

Differential Carrier Phase Techniques There is some belief that differential carrier phase techniques are required to achieve Category III precision approach and landing accuracy performance using DGPS [2]. This application of DGPS has promoted the development of differential GPS kinematic surveying techniques toward achieving Category III performance. These techniques resolve the carrier phase ambiguities to provide dynamic accuracies on the order of a few centimeters. This is accomplished by augmenting the pseudoranges of Equation 5.1 with the term $\lambda N_i$, where $\lambda N_i$ is the ambiguity ($N_i$ is an integer number of carrier wavelengths) for satellite i, and solving for these ambiguities [65]. Generally, these ambiguities are resolved using double-difference techniques, where the double differences are computed between the user and the reference station and between pairs of satellites, canceling all of the common errors [70]. Unfortunately, within the initialization accuracy, there are many possible solutions for the ambiguities, although only one solution is correct [71].
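The double differences mentioned above are formed by differencing a measurement between the user and reference receivers and then between a pair of satellites. The sketch below shows only this bookkeeping, with assumed dictionary inputs keyed by satellite; the ambiguity search itself is beyond a few lines.

```python
def double_difference(meas_user, meas_ref, sat_i, sat_j):
    """Double difference of carrier phase (or pseudorange) measurements
    between user and reference receivers and between satellites i and j.
    Receiver clock errors and common satellite errors cancel in the result."""
    single_diff_i = meas_user[sat_i] - meas_ref[sat_i]   # between-receiver difference, satellite i
    single_diff_j = meas_user[sat_j] - meas_ref[sat_j]   # between-receiver difference, satellite j
    return single_diff_i - single_diff_j                 # between-satellite difference
```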
The general technique for solving for the correct ambiguity is to search the
uncertainty region for the correct solution using error minimization techniques.
This requires a minimum amount of geometry change between the user and
the satellites. The amount of change required depends upon the initial uncer-
tainty, which is usually that of the double-differenced pseudorange measurement solutions. Unfortunately, because of multipath and receiver noise, the change required can take on the order of 30 seconds, even under the best conditions using enhanced L1-only C/A code performance [72]. Once the ambiguities are resolved, resolution after cycle slips or signal outages is instantaneous, provided that there are redundant satellites being tracked. There are a few techniques available to improve this initialization process. Two of the most promising are the use of dual-frequency receivers [72] and the use of local near-L1 transmitting pseudolites (pseudosatellites) [73]. The former approach uses the differential carrier between L1 and L2 to first resolve ambiguities using a larger beat-frequency wavelength, and then transfers the solution to initialize the L1 ambiguity resolution. The larger wavelength ambiguity takes much less
time to resolve. The second approach takes advantage of the rapidly changing
geometry between the pseudolite and the user, providing more leverage to the
resolution problem. The use of pseudolites for DGPS is discussed further in
Section 5.7.4.
The use of carrier phase techniques for precision approach and landing is still
in development. However, carrier-smoothed-code techniques are much more
robust. These techniques solve for the ambiguity not as an integer but as a
floating-point number. The ultimate accuracy of DGPS for precision approach
still remains unknown.

5.5.10 GPS Accuracy


Error Budgets GPS accuracy depends upon user receiver implementation. There are PPS and SPS implementations, there are P code and C/A code users, there are L1/L2 and L1-only users, there are DGPS users, there are differential carrier phase users, and there are combinations of all of the above. Table 5.7 presents error budgets for three of these implementations: one for the authorized P code L1/L2 PPS user from the MAGR (see Section 5.5.8, Figure 5.24) specification [52], one for the unauthorized C/A code L1-only user of airborne supplemental navigation equipment [40], with and without SA, and one for the special Category I precision approach and landing DGPS user [41].
The position errors in Table 5.7 are obtained from the system pseudorange errors (UEREs) as follows:

1. The horizontal CEP (circular error probable) budgeted for the MAGR is given as [52] (see the sketch following this list)

$$\mathrm{CEP} = 0.8326 \times \mathrm{rms}_{hor} = 0.8326 \times \mathrm{HDOP} \times \mathrm{UERE} \qquad (5.83)$$

where HDOP was taken to be approximately 1.39 with an elevation mask angle of 5°.

2. The vertical LEP (linear error probable) budgeted for the MAGR is given as [52]

$$\mathrm{LEP} = 0.6745 \times \mathrm{rms}_{ver} = 0.6745 \times \mathrm{VDOP} \times \mathrm{UERE} \qquad (5.84)$$

where VDOP was taken to be approximately 1.97 with an elevation mask angle of 5°.
TABLE 5.7 GPS avionics user error budgets (errors in meters)

Error Source                        Authorized       Unauthorized C/A    Unauthorized C/A    DGPS Special
                                    L1/L2 User       Code L1 User,       Code L1 User,       Category I
                                                     with SA             without SA          User
Space/control segment/
  reference station                 6.0              30.8                6.0                 1.21
User:
  Ionosphere compensation           2.2              10.0                10.0                0.0
  Troposphere compensation          2.0              2.0                 2.0                 0.02
  Multipath                         1.2              1.2                 1.2                 1.2
  Receiver noise and resolution     1.47             7.5¹                7.5¹                0.5
  Other                             0.5              0.5                 0.5                 0.05
System UERE (RSS)                   6.98             33.33               14.07               1.78
Horizontal position error           8.10 CEP         100 2drms           42.2 2drms
Vertical position error             9.28 LEP                                                 5.52 (95%)
Time error (UTC)                    100 ns, 1-sigma

¹Lower receiver noise errors are obtainable using carrier aiding of the code loop, resulting in lower code loop bandwidth [121 (Chapter 8), 133 (Chapter 5), 135].

3. The 2drms budgeted for the commercial avionics user is given as [40]

$$2\mathrm{drms} = 2 \times \mathrm{rms}_{hor} = 2 \times \mathrm{HDOP} \times \mathrm{UERE} \qquad (5.85)$$

where HDOP was taken to be approximately 1.5 with an elevation mask angle of 7.5°.
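As referenced in item 1 above, the error statistics of Equations 5.83 through 5.86 follow directly from a system UERE and the dilution-of-precision factors. The sketch below is illustrative; the example values approximately reproduce the authorized-user column of Table 5.7.

```python
C = 299_792_458.0  # speed of light, m/s

def position_time_errors(uere, hdop, vdop, tdop):
    """Error statistics from a system UERE (meters) and DOP factors."""
    cep = 0.8326 * hdop * uere          # horizontal circular error probable, m
    lep = 0.6745 * vdop * uere          # vertical linear error probable, m
    two_drms = 2.0 * hdop * uere        # horizontal 2drms, m
    time_sigma = tdop * uere / C        # 1-sigma time error, seconds
    return cep, lep, two_drms, time_sigma

# Example (authorized MAGR budget): UERE = 6.98 m, HDOP = 1.39, VDOP = 1.97, TDOP = 1.12
# gives roughly 8.1 m CEP, 9.3 m LEP, and about 26 ns of time error.
```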

For the special Category I DGPS user [41], the space/control segment error budget is reduced to residual SA, residual clock, and spatial decorrelation errors amounting to 0.5, 0.01, and 0.01 meters, respectively. The reference station budget is set at 1.1 meters, resulting in a total space/control segment/reference station error budget of 1.21 meters, root-sum-squared. The user's error budget presented in Table 5.7 is an example. The special Category I user has the choice in allotting his budget between sensor (GPS receiver) error and flight technical error (FTE), which defines the pilot's or autopilot's ability to fly the prescribed flight path. The total vertical error budget is 9.76 meters, with a probability of 95%. Thus, the GPS receiver error budget depends upon the assigned FTE for a given aircraft. Using a 95% probability VDOP (the ratio of the 95% vertical navigation error to a 1-meter rms pseudorange error) of 3.1, the 95% probability vertical error for the pseudorange error budget in Table 5.7 is 5.52 meters, leaving a 95% probability budget for FTE of 8.05 meters. An aircraft with a good autopilot could use a receiver with larger errors. The HDOP is not specified for special Category I DGPS users because the 95% probability horizontal navigation error budget is so large (33.54 meters) that it is easy to achieve using DGPS techniques.

Time Accuracy   The time error budget with respect to universal coordinated time (UTC) for the MAGR is a GPS system-level specification for time transfer [52]. The control segment's budget for maintaining the difference between GPS time and UTC is 90 nsec, 1-sigma. The 1-sigma GPS time error due to pseudorange error is

   σ_Δt,1 = TDOP × UERE/c = 1.12 × (6.98/c) = 26.1 ns   (5.86)

assuming a TDOP of 1.12 with an elevation mask angle of 5°. The MAGR is required to output time via a one-pulse-per-second (1-PPS) signal accurate to the specified 100 nsec, 1-sigma. The remaining error budget to achieve this is

   √(100² − 90² − 26.1²) = 34.9 ns   (5.87)

The unauthorized user time-transfer accuracy is dominated by the SA errors. Using the same value of TDOP (1.12) yields the following:

   √[(1.12 × 33.33/c)² + (90 × 10⁻⁹)²] = 153.6 ns   (5.88)
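These time-transfer budgets are root-sum-square combinations of independent error contributions. A brief sketch reproducing Equations 5.86 through 5.88 (variable and function names are illustrative):

```python
import math

C = 299792458.0   # speed of light, m/s

def time_error_ns(tdop, uere_m):
    """1-sigma GPS time error due to pseudorange error, Eq. 5.86 (in ns)."""
    return tdop * uere_m / C * 1e9

# Authorized user: UERE = 6.98 m, TDOP = 1.12
t_pr = time_error_ns(1.12, 6.98)                      # ~26.1 ns
t_rcvr = math.sqrt(100.0**2 - 90.0**2 - t_pr**2)      # ~34.9 ns, Eq. 5.87

# Unauthorized user: UERE = 33.33 m with SA, plus the 90-ns GPS-UTC budget
t_sps = math.sqrt(time_error_ns(1.12, 33.33)**2 + 90.0**2)   # ~153.6 ns, Eq. 5.88

print(t_pr, t_rcvr, t_sps)
```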

Velocity Accuracy   GPS velocity accuracy is not guaranteed, nor is it usually specified, except possibly in classified military specifications. However, measured results have been published and are very much a function of the dynamics of the host vehicle at the time of the measurement. Tests performed on the authorized MAGR have yielded velocity accuracies better than 0.1 meter/sec in constant dynamic or inertially aided maximum dynamic conditions, and better than 1 meter/sec if unaided [60, 74]. This velocity accuracy is generally accepted as the norm under these conditions. For the unauthorized user, on the other hand, the velocity accuracy is dominated by SA pseudorange-rate errors, which have been measured to be between 0.3 and 0.9 meter/sec, rms, horizontal, under stationary conditions [75]. Carefully implemented DGPS yields the same velocity accuracy as the authorized user. Receiver implementation (P or C/A, and/or L1/L2 or L1-only) has no effect on velocity accuracy. The use of differential carrier phase DGPS should yield much better velocity accuracy.

GPS Accuracy Summary   Figure 5.30 provides a summary of GPS position accuracy for the various receiver implementations. The moving survey accuracies are indicative of what might be achieved if differential carrier phase techniques are developed for precision landing applications. Otherwise, accuracies indicated for differential GPS are more applicable.

[Figure: accuracy in meters (logarithmic scale, 0.001 to 100 m), relative and absolute, for static survey, moving survey, differential GPS, PPS/A-S, SPS without SA, and SPS (degraded).]
Figure 5.30   GPS position accuracy summary.

5.6 GLOBAL ORBITING NAVIGATION SATELLITE SYSTEM (GLONASS)

The GLONASS satellite navigation system was developed by the Russian Federation; it was started by the former Soviet Union (USSR) [76]. It was declared operational in 1996 with a full constellation of satellites. GLONASS offers many features in common with the NAVSTAR GPS, but with significant implementation differences [77, 78]. In particular, the orbital plan also consists of 24 satellites. However, rather than 4 in each of 6 planes, GLONASS has a plan with 8 in each of 3 planes (designated planes 1-3) separated by 120° and with spacing of 45° within each plane. The GPS spacing is not uniform. The orbit altitudes also differ from those of GPS; the ground tracks repeat approximately every eight days rather than approximately one day for GPS. GLONASS satellites also transmit two spread-spectrum signals in the L-band (L1 and L2) at around the same power levels as GPS, at frequency bands that are approximately 20 to 30 MHz higher than GPS [79]. However, satellites are distinguished by radio-frequency channel rather than by spread-spectrum code (Frequency Division Multiple Access, FDMA, instead of CDMA). Common codes are used for all of the satellites. Both a C/A code and a P code are transmitted in quadrature on the L1 signal [80]. Otherwise, the basic principle of operation is identical to that of GPS.

5.6.1 GLONASS Orbits


Table 5.8 provides a summary of the GLONASS orbital parameters compared to those of GPS [77, 78]. Note that the orbit period and semimajor axis are less than those of GPS, which are the values that cause the GPS satellites to repeat their ground track each day. Because of this, the GLONASS ground tracks precess around the Earth and repeat every 17 orbits, lasting 8 whole days plus 32.56 minutes. This is equivalent to 16 GPS orbits.

TABLE 5.8   GPS and GLONASS nominal orbit parameters

Parameter                        NAVSTAR GPS    GLONASS
Period (minutes)                 717.94         675.73
Inclination                      55°            64.8°
Semimajor axis (kilometers)      26,560         25,510
Orbit plane separation           60°            120°
Phase within planes              Irregular      ±30°
Ground track repeat (orbits)     2              17
Longitude drift per orbit        180°           169.4°

5.6.2 GLONASS Signal Structure


Broadcast Frequencies   The GLONASS satellites broadcast two signals: Link 1 (L1) and Link 2 (L2). According to figures made available to the International Frequency Registration Board (IFRB) in Geneva [81], and updated in November 1994 for GLONASS-M [82], GLONASS transmits a maximum power spectral density of −44 dBW/Hz in the frequency band of 1597-1617 MHz (L1) and −57 dBW/Hz in the frequency band of 1240-1260 MHz (L2). A shaped-beam antenna is used to produce uniform power spectral density on the ground [77]. The L1 C/A and P code signals are transmitted at the same frequency, in quadrature, just as the NAVSTAR GPS L1 signals are.
In the initial plan for GLONASS-M, each satellite was to be assigned a unique frequency according to the following equation [82]:

   f_1i = f_1 + 0.5625 i MHz,   i = 0, 1, ..., 24   (5.89)

for satellite i of 24 satellites (i = 0 is for testing), where the base L1 frequency is

   f_1 = 1602.0 MHz   (5.90)

That is, when all 24 satellites were to be in the constellation, each would have a frequency assigned to it with 562.5-kHz separation between satellite signals. Similarly, the L2 P-code signal is transmitted at a frequency assigned to the satellite. Each satellite is assigned a unique frequency according to the following equation [82]:

   f_2i = f_2 + 0.4375 i MHz,   i = 0, 1, ..., 24   (5.91)

for satellite i of 24 satellites (i = 0 is for testing), where the base L2 frequency is

   f_2 = 1246.0 MHz   (5.92)

That is, when all 24 satellites are in the constellation, each will have a frequency assigned to it with 437.5-kHz separation between satellite signals. Note that the ratio of the L1 and L2 frequencies is 9/7, including the frequency separations. Also note that they are integer multiples of a common frequency of 62.5 kHz.
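Equations 5.89 through 5.92 map a channel number directly to the pair of carrier frequencies. A small sketch of that mapping, with a check of the 9/7 ratio noted above (function names are illustrative):

```python
def glonass_l1_mhz(i):
    """L1 carrier frequency for frequency channel i (Eq. 5.89)."""
    return 1602.0 + 0.5625 * i

def glonass_l2_mhz(i):
    """L2 carrier frequency for frequency channel i (Eq. 5.91)."""
    return 1246.0 + 0.4375 * i

for i in (0, 1, 12, 24):
    f1, f2 = glonass_l1_mhz(i), glonass_l2_mhz(i)
    # The L1/L2 ratio is 9/7 for every channel, and both frequencies are
    # integer multiples of 62.5 kHz (0.0625 MHz).
    assert abs(7.0 * f1 - 9.0 * f2) < 1e-9
    print(i, f1, f2)
```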

However, because of interference issues, the Russian Federation revised the frequency plan as follows for GLONASS-M [79, 83]:

1. Until 1998, GLONASS-M will not use carrier frequencies for i = 16 through 20 for normal operations. Frequencies for i = 0, 1, ..., 12, 22, 23, and 24 will be used. Frequencies for i = 13, 14, and 21 will only be used under exceptional circumstances. This revision is to prevent transmission into radio-astronomy antennas in that band and will be realized by reusing frequencies on antipodal satellites (satellites visible on the opposite side of the Earth).
2. From 1998 to 2005, GLONASS-M will use frequency channels i = −7 through +12 for normal operation and use i = 13 only under exceptional circumstances.
3. After 2005, GLONASS-M will use frequency channels i = −7 through +4 for normal operation and use i = +5 and +6 as technical channels only for limited periods of time during orbital insertion or other periods of exceptional circumstances.

The shift down in frequency is to avoid interference from future Mobile Satellite Services (MSS) terminals.

Signal Modulation   The L1 and L2 signals are both bi-phase modulated with the PRN codes and navigation data. The PRN code and navigation data characteristics are as follows:

1. C/A code. The GLONASS C/A code is comprised of a nine-stage shift register with tap feedback that produces a 511-bit maximal-length sequence. It is clocked at a rate of 511 kHz so that it repeats every millisecond. A functional block diagram of the C/A code generator is shown in Figure 5.31; a short code sketch of such a generator follows this list.

[Figure: 9-bit shift register clocked at 511 kHz, with feedback taps at stages 5 and 9 and the C/A code output taken from stage 7.]
Figure 5.31   GLONASS C/A code generator.

Every satellite generates the same C/A code. The 1-msec C/A code epochs are coherently synchronized to the satellite's time, which is maintained to within 1.953 msec of GLONASS system time. The resultant signal spectrum is a line spectrum centered at the assigned satellite frequency with an envelope equal to that given in Equation 5.54 with a Tc of 1/511,000 sec, where the lines are spaced 1 kHz apart and a spectral null occurs at multiples of 511 kHz. Since the assigned satellite frequencies are spaced only 562.5 kHz apart, there is an overlap of signal spectra.
Even though the spectra of the C/A codes of the different satellites overlap, this has very little effect on signal acquisition and tracking, because the user receivers, when correlating with the code, will track the correct carrier. Spectral interference will occur but will be well below the thermal noise level. Adjacent frequency-numbered satellite signals will have a cross-correlation level not to exceed −48 dB [79]. Because of the separation in frequency, even if full code correlation between signals occurred for an instant, postcorrelation integration reduces the effect to that level. The C/A code only appears on the L1 signals [77, 80].
2. P code. The CIS has never published the GLONASS P code. However, it has been determined independently [80]. The GLONASS P code is comprised of a 25-stage shift register with tap feedback that would produce a 33,554,431-bit maximal-length sequence, except for the fact that it is short-cycled to 5,110,000 bits and reset to all 1's. It is clocked at a rate of 5.11 MHz so that it repeats once per second. A functional block diagram of the P code generator is shown in Figure 5.32. Every satellite generates the same P code. The 1-sec code epochs are synchronized to the 1-msec C/A code epochs to ease the handover from one code to the other.

[Figure: P code generator with a 5.11-MHz P clock and a C/A code handover input.]
Figure 5.32   GLONASS P code generator.

The resultant P code signal spectrum is a line spectrum centered at the assigned satellite frequency with an envelope equal to that given in Equation 5.53 with a Tc of 1/5,110,000 sec, where the lines are spaced 1 Hz apart, which essentially makes it a continuous spectrum. A spectral null occurs at odd multiples of 5.11 MHz. Since the assigned satellite frequencies are only 562.5 kHz and 437.5 kHz apart for the L1 and L2 frequencies, respectively, the P code spectra of the different satellites overlap a great deal. For the same reasons as with the C/A code, this does not pose a problem when acquiring and tracking the signals. The P code is modulated on both the L1 and the L2 signals [80].
3. Navigation data. The navigation data is modulo-2 added to both of the
codes prior to the bi-phase modulation of the carriers. Because of that, these data
do not alter the spectrum of the signals. The effective data rate is 50 bps. How-
ever, it is differential and return-to-zero encoded, so the modulation is actually
at 100 symbols per second [84].
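As referenced in item 1 above, a shift-register generator of the C/A-code type can be sketched in a few lines of code. The tap positions used here (feedback from stages 5 and 9, output from stage 7) and the all-ones initial state follow the block diagram of Figure 5.31, but the exact chip sequence should be verified against the GLONASS ICD [79] before use.

```python
def glonass_ca_code():
    """One 511-chip period of a GLONASS-style C/A code (sketch).

    9-stage shift register, feedback = stage 5 XOR stage 9, output taken
    from stage 7, register initialized to all ones (assumed initial state).
    """
    reg = [1] * 9                    # reg[0] is stage 1, ..., reg[8] is stage 9
    chips = []
    for _ in range(511):
        chips.append(reg[6])         # output from stage 7
        feedback = reg[4] ^ reg[8]   # taps at stages 5 and 9
        reg = [feedback] + reg[:-1]  # shift by one stage
    return chips

code = glonass_ca_code()
assert len(code) == 511 and sum(code) == 256   # maximal-length balance property
```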

Signal Power   The GLONASS ICD for the L1 C/A code signal indicates a minimum received power of −161 dBW, which is 1 dB less than specified for the GPS L1 C/A code [79], although this level has been updated to −160 dBW [82], which may be the total received C/A code plus P code power. The L1 P code signal level is not published. The L2 P code received signal power is −166 dBW [82].

5.6.3 The GLONASS Navigation Message


The GLONASS navigation message differs significantly from its GPS counterpart. It is made up of lines, frames, and superframes [79]. Each line is 2 seconds long, containing 100 bits: 85 bits of digital data in 1.7 seconds, including 8 bits of a Hamming (85,77) parity code, followed by a 30-symbol time mark at the 100-bps rate. Each frame contains 15 lines over 30 seconds. A superframe is 5 frames over 2.5 minutes.
   The first four lines of each frame contain ephemeris and time information for the satellite broadcasting the message. The fifth line contains a day number and system time correction. Lines 6 through 15 contain almanacs for five satellites, two lines per almanac. The fifth frame contains only four almanacs.

Ephemeris Data   The GLONASS ephemeris data are broadcast as ECEF Cartesian coordinates in position and velocity, with lunar/solar acceleration perturbation parameters, that are valid over about 0.5 hour [84]. The assumption is that the user integrates, via a fourth-order Runge-Kutta method, the motion equations that include the second zonal geopotential harmonic coefficient. Details of these equations are given in the GLONASS ICD [79].
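A highly simplified sketch of the kind of integration the ICD assumes is shown below. It uses a fourth-order Runge-Kutta step with point-mass gravity plus the second zonal harmonic (J2) only, in an inertial frame; the Earth-rotation and lunar/solar acceleration terms of the actual GLONASS model [79] are omitted, and the constants and starting state are generic illustrations rather than PZ-90 values.

```python
import numpy as np

MU = 398600.44e9   # Earth gravitational parameter, m^3/s^2 (generic value)
J2 = 1.0826e-3     # second zonal harmonic coefficient (generic value)
RE = 6378136.0     # Earth equatorial radius, m (generic value)

def accel(r):
    """Point-mass gravity plus the J2 zonal term, inertial frame (the
    Earth-rotation and lunisolar terms of the ICD model are omitted)."""
    x, y, z = r
    rn = np.linalg.norm(r)
    k = -MU / rn**3
    j2k = 1.5 * J2 * MU * RE**2 / rn**5
    zr = 5.0 * z**2 / rn**2
    return np.array([k * x + j2k * x * (zr - 1.0),
                     k * y + j2k * y * (zr - 1.0),
                     k * z + j2k * z * (zr - 3.0)])

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta step of the state [x, y, z, vx, vy, vz]."""
    def deriv(s):
        return np.concatenate((s[3:], accel(s[:3])))
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Propagate an illustrative broadcast position/velocity forward in 60-s steps
state = np.array([11.0e6, 16.0e6, 16.0e6, -2.0e3, 1.0e3, 1.0e3])
for _ in range(30):
    state = rk4_step(state, 60.0)
print(state[:3])
```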

Almanacs   Even though the ephemeris data differ completely from those of GPS, the almanac parameters are quite similar, being expressed as modified Keplerian parameters.

Clock Corrections   Clock correction parameters are also similar to those of GPS in terms of clock offsets and clock drift.

5.6.4 Time and Coordinate Systems


GLONASS Time   The GPS control segment provides corrections so that GPS time can be related to UTC(USNO), modulo 1 sec, to within 90 nsec, whereas the GLONASS control segment provides corrections so that GLONASS time can be related to UTC(Moscow) to within 1 μsec [79]. GPS time does not follow the leap second corrections that UTC occasionally makes; GLONASS time does [79].

GLONASS Coordinate System   The GLONASS system transmits ephemeris and almanac data describing the satellite's antenna phase center in the Earth-fixed reference frame PZ-90, which differs from WGS 84 by under 15 meters. A preliminary estimate of the coordinate transformation between the two systems (ECEF) is a translation of 2.5 meters in the y direction and a rotation of 0.4 minutes of arc about the z-axis [85].
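Applied to an ECEF position vector, that preliminary estimate amounts to a small rotation plus a translation. The sketch below assumes one particular sign convention for the 0.4 arc-minute z-rotation and the 2.5-m y-translation; reference [85] should be consulted for the authoritative definition.

```python
import numpy as np

def pz90_to_wgs84(r_pz90):
    """Approximate PZ-90 -> WGS 84 conversion of an ECEF position (meters):
    a 0.4 arc-minute rotation about z plus a 2.5-m translation along y
    (sign conventions assumed for this sketch)."""
    theta = np.radians(0.4 / 60.0)              # 0.4 minutes of arc
    rz = np.array([[ np.cos(theta), np.sin(theta), 0.0],
                   [-np.sin(theta), np.cos(theta), 0.0],
                   [ 0.0,           0.0,           1.0]])
    return rz @ np.asarray(r_pz90, dtype=float) + np.array([0.0, 2.5, 0.0])

print(pz90_to_wgs84([2.55e6, -4.45e6, 3.85e6]))   # illustrative surface point
```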

5.6.5 GLONASS Constellation [86, 87]


At times, the GLONASS system has had a full constellation of 24 satellites. In 1996 the system was usually operating with 21 or 22 satellites transmitting healthy signals [87]. However, none of these satellites were of the new GLONASS-M variety.

5.7 GNSS INTEGRITY AND AVAILABILITY

Acceptance of a global navigation satellite system (GNSS) as a sole-means navigation aid in the U.S. National Airspace System (NAS) will necessitate meeting stringent availability and continuity of function requirements that are usually unachievable without some sort of augmentation. Key issues are safety and performance assurance that relate to satisfying accuracy, integrity, availability, and continuity of function requirements. The following definitions pertain to these issues:

1. Accuracy pertains to the capability of the system, with or without augmentation, to meet the navigation accuracy requirements specified by phase of flight in the U.S. Federal Radio Navigation Plan [24].
2. Integrity relates to the probability of detecting anomalous signals that could induce navigation errors beyond defined protection limits and to providing timely warnings to the users [40, 41].
3. Availability of a navigation system is the ability of the system to provide required guidance at the initiation of the intended operation. It is an indication of the ability of the system to provide usable service within the specified coverage area. Signal availability is the percentage of time that navigation signal broadcasts are available for use. Availability is a function of both the physical characteristics of the environment and the technical capabilities of the transmitter facilities [40, 41].
4. Continuity of a system is the ability of the total system (comprising all elements necessary to maintain aircraft position within the defined airspace) to perform its function without interruption during the intended operation. More specifically, continuity is the probability that the system will be available for the duration of a phase of operation, presuming that the system was available at the beginning of that phase of operation [41].

However, current GNSS systems (GPS or GLONASS) do not meet these requirements for most phases of flight, especially for the more stringent precision approaches, without augmentation. In this regard, the FAA has defined the following GNSS user services [88]:

1. Multisensor system implies that the GNSS and any augmentations can be used for navigation, but only after it has been compared for integrity with another approved navigation system in the aircraft.
2. Supplemental system implies that the GNSS and any augmentations can be used alone without comparison to another approved navigation system. However, another approved navigation system must be on board the aircraft and usable when the GNSS is not available.
3. Required navigation performance (RNP) system is one that meets all the requirements without need for any other navigation equipment on board the aircraft. An RNP system may include one or more navigation sensors in its definition (e.g., GPS with an inertial reference system, IRS).

GNSS does not add much to the aircraft's navigation system if it is only certified as a multisensor system service. It can add accuracy as long as the system it is being compared with meets RNP requirements. This service is also useful for test purposes. GNSS can, by itself, be certified as a supplemental system through the use of receiver autonomous integrity monitoring (RAIM) [40], and possibly for oceanic en route use [69, 89]. In 1996, based on FAA requirements, as an RNP system, GNSS requires augmentation, such as combining GPS with GLONASS, an independent WAAS-type system, and pseudolites, or with another type of sensor, such as an IRS [69, 89].

5.7.1 Receiver Autonomous Integrity Monitoring (RAIM)


All GNSS RAIM schemes are based on making self-consistency checks of some sort. This idea is not new; prudent navigators have used redundant observations to verify their position fixes since antiquity. The thing that is different now is that computer technology has made it possible to use relatively sophisticated mathematical methods in performing the consistency checks. Before getting into the details of one RAIM method, it should be noted that catastrophic failures are easy to detect with primitive methods, so they are not discussed further here. It is the more subtle or incipient failures that are treated here: those where a somewhat out-of-tolerance signal in space causes the user position error to wander outside some specified limit for the phase of flight in progress. If the GNSS is a supplemental system, it is sufficient for RAIM to simply detect the failure and sound an alarm accordingly. If the GNSS is an RNP system, it is necessary for RAIM to both detect the failure and isolate and exclude the failed source. This added burden of isolation and exclusion complicates the RAIM (now called fault detection and exclusion, FDE) problem considerably [90, 91, 135].

RAIM Basics   For tutorial purposes it is useful to begin with a simple two-dimensional example. Suppose that we have three range measurements, each defining a line of position (LOP) in a plane. Three possible situations are shown in Figure 5.33. In Figure 5.33a we have the usual situation with good geometry and consistent measurements (at least within the expected measurement noise). The result is three intersections (fixes) that are close together. The observer would then conclude "no failure" in this situation. In Figure 5.33b we see another possible situation in which we have favorable geometry but the fixes are relatively far apart. The observer must conclude that something is wrong here, and the decision is "failure." Note, though, that the information is insufficient to tell us which measurement is at fault; that is, we can do simple error detection here, but we cannot do fault isolation with just one redundant measurement. Finally, in Figure 5.33c we see an extreme case of poor geometry. Two of the LOPs are parallel. We can conclude here that measurements 1 and 2 are consistent (their LOPs are close together), but there is no valid check on measurement 3. An error in it would go unnoticed. Thus the decision as to a possible failure in this case is inconclusive. The observer must simply say, "No valid integrity check is possible because of poor geometry."

(a) Consistent measurements. (b) Inconsistent measurements. (c) Poor geometry situation.
Figure 5.33   Examples of three LOP intersections.

All three situations
shown in Figure 5.33 have their counterparts in the more complex GNSS RAIM
setting. Of course the meanings of "close together" and "far apart" need to be
quantified. Also, statistical performance criteria relative to the reliability of the
observer's decision need to be developed. More will be said of these items later.
Work on autonomous means of GNSS failure detection began in earnest dur-
ing the latter part of the 1980s. It was also during this period that the acronym
RAIM (for receiver autonomous integrity monitoring) was coined. We will not
attempt to document all of the technical papers on RAIM that appeared during
this period. One has only to browse through the proceedings of the meetings of
the Institute of Navigation to assess the degree of activity that took place dur-
ing this period and on into the 1990s. A summary of three different methods
is given in [92].
One of many RAIM schemes will now be described; it is easily understood
and can be thought of as a baseline or reference method. While it is a good
scheme, there is no claim that it is the best.

RAIM Detection Algorithm   In 1987 a RAIM technique that is known as the least-squares-residuals method was presented [93]. It begins with the assumption that the receiver has simultaneous redundant pseudorange measurements (five or more), and that the position-fixing problem has been linearized in the usual manner (see Sections 5.5.2 and 5.5.7). First, the all-in-view least-squares solution is formed. It is well known and is given by Equation 5.71. The sum of the squares (weighted, in general) of the components of the measurement residuals vector δM (from Equations 5.69 and 5.71) is the scalar quantity

   SSE = δM^T W δM   (5.93)

SSE is the basic observable in the sum-of-squared-residual-errors RAIM method. The decision rule is as follows: If, for a predetermined threshold TH,

   SSE ≤ TH   (5.94)

decide "no failure," but if

   SSE > TH   (5.95)

decide "failure."
decide "failure."
The intuitive rationale for this rule is simply that if the measurements are
consistent, we can expect the residuals to be small; on the other hand. if the
measurements are inconsistent. we can expect SSE to be large because of a poor
least-squares fit. Once the threshold value THis set, the decision rule is quanti-
fied. With this RAIM algorithm, it is easy to set the threshold to yield an alarm
rate that is independent of geometry in the absence of a satellite malfunction.
266 SATELLITE RADIO NAVIGATION

This is usually set at the maximum allowable rate. The RAIM algorithm then
accepts whatever detection probability that results from this threshold setting.
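The detection rule translates almost directly into code. The sketch below forms the weighted all-in-view least-squares solution, the residual vector, and the SSE statistic of Equation 5.93, and applies the threshold test of Equations 5.94 and 5.95; the geometry matrix, weight matrix, and threshold are assumed to be supplied by the caller, and the threshold-setting computation itself is not shown.

```python
import numpy as np

def raim_sse_test(G, dz, W, threshold):
    """Least-squares-residuals RAIM detection (Eqs. 5.93-5.95), as a sketch.

    G:  n x 4 linearized measurement (geometry) matrix, n >= 5
    dz: n-vector of linearized pseudorange measurements
    W:  n x n measurement weighting matrix
    threshold: detection threshold TH chosen for the allowed false-alarm rate
    """
    # Weighted least-squares all-in-view solution (position plus clock)
    dx = np.linalg.solve(G.T @ W @ G, G.T @ W @ dz)
    residual = dz - G @ dx                 # measurement residual vector dM
    sse = float(residual @ W @ residual)   # SSE = dM^T W dM, Eq. 5.93
    return ("failure" if sse > threshold else "no failure"), sse
```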

RAIM Specifications There are four key parameters that must be included in
the RAIM specifications:

1. Alarm limit (also called alert limit). Alarm limit refers to the maximum
allowable radial error before the alarm is sounded.
2. Time response of the alarm (i.e., delay to alarm time). Too much delay
can be disastrous in critical situations.
3. Maximum allowable alarm rate in the absence of a satellite malfunction.
There must be a limit to nuisance alarms.
4. Detection probability. This must be close to unity if the RAIM algorithm
is to be effective.

These specifications compete with each other to some extent. For example,
tightening the false alarm rate specification makes it more difficult to meet
the detection probability requirement. The "elastic" in the system that makes it
possible to meet all of the stated four requirements is availability. The RAIM
algorithm can (and indeed must) reject poor detection geometries. RAIM avail-
ability, of course, suffers from such rejections.

RAIM for a Supplemental System   Extremely high integrity availability is not essential for use of GNSS as a supplemental system. However, nearly 100%
availability and continuity of function are needed for an RNP system. Many
studies have shown that RAIM alone will not provide this high degree of avail-
ability when operating with just the GPS (or GLONASS) 24-satellite configu-
ration. This is especially true when one considers the extra burden on RAIM
in having to isolate the faulty satellite as well as detect the failure. But GPS
alone can be certified as a supplemental system, with an availability approach-
ing 94% for the nonprecision approach phase of flight and up to 99% if aided
with barometric altitude [94]. For the less stringent phases of flight, the avail-
ability is much higher. For a supplemental system, continuity of service is not
required because, upon the sound of an alarm, the other approved navigation
system can be used. However, because of the length of outage times, continuity of service requirements could never be met for an RNP system without some sort of augmentation.

RAIM for an RNP System In the RNP application, RAIM will have to be aug-
mented with additional measurement information from outside GPS. Many such
possibilities exist, such as using the combination of GPS and GLONASS, and
the marketplace (and perhaps politics) will determine the mix of sensor infor-
mation to be used in any particular application. Also, it is likely that the ultimate
GPS integrity protection will be provided by a combined WAAS/RAIM system. The two systems are complementary, and there is much to be gained by having the two systems work together [95]. (For a discussion of the WAAS, see Section 5.7.3.)

5.7.2 Combined GPS/GLONASS


To improve the availability and continuity of RNP service using GNSS, augmenting GPS with GLONASS has been suggested. This would essentially guarantee the signal redundancy required for RAIM, even for fault isolation. However, there are problems with combining the two systems.

Technical Problems   There are numerous technical problems with combining the two systems that need to be resolved by the user. These include the fact
that the two systems operate on different time scales and that they are ref-
erenced to different geodetic systems. The combined approach also increases
the cost of the user avionics receivers, which now must receive signals from
both systems and process two different sets of navigation data using different
ephemeris algorithms. Even beyond the fact that the GLONASS system oper-
ates at a different frequency, receiving the FDMA signals results in a more
complex receiver design than the receipt of the CDMA GPS signals.

Institutional Problems   Institutional problems also exist. First, the GLONASS system has not yet proved to be reliable. Over the years, there have been more
GLONASS satellite failures than the number of GPS satellites that have been
launched [96]. In addition, it is more susceptible to satellite communications
transmissions that exist in frequency bands near and above the band allocated
for GLONASS [97, 126].

GPS/GLONASS RAIM   The fact that the two systems operate on different
time scales can be solved by the avionics user by simply adding the time dif-
ference to his solution state vector. This does, however, require an additional
satellite signal source because it adds another unknown to the solution. Further-
more, at least two satellites are required from both systems in the solution. If
there is only one satellite, any error in its pseudorange will simply be assigned
to the solution for the time difference based upon the position solution deter-
mined from the other system. However, this requirement for an additional satel-
lite would not exist continuously, since the time scales of both systems are quite
stable and a reliable time difference solution would remain valid over a long
time. Continuous monitoring may be required for solution integrity, however,
as well as detecting interfrequency errors in the receiver.
The problem with operating with two coordinate systems can be solved over
a period of time and updated as necessary with data-base parameters. For most
phases of flight, the differences appear to be small enough that they do not matter [85]. For precision approach applications, the use of differential corrections
would cancel the differences, including the differences in the time scales.

The combined system can also be used in conjunction with the WAAS described in Section 5.7.3, in which case the differences can be broadcast via the WAAS. If there are a number of failures in either system, the availability of RAIM and continuity of service could also suffer because, as stated at the beginning of this section, the two systems' orbits are not synchronized. That is, if the GLONASS system were to augment the GPS system on one day, it may not on the next, because the ground tracks of the satellites have moved with respect to each other. This could be a problem if the GLONASS system continues to be unreliable and the number of satellites in orbit does not maintain an operational status.

5.7.3 Wide Area Augmentation System (WAAS)


The WAAS Concept   The WAAS is being developed by the FAA and is expected to provide a test signal by 1998 [125]. In parallel, the Europeans are developing the European Geostationary Navigation Overlay Service (EGNOS) [127] and Japan is developing the MTSAT Satellite-Based Augmentation System (MSAS) [128]. Both of these systems will be very similar to the WAAS. Japan will use its own satellites (MTSAT-1 and MTSAT-2). The Europeans will share the Inmarsat-3 satellites with the FAA. ICAO has named the generic WAAS-type system a Satellite-Based Augmentation System (SBAS). It is a safety-critical system consisting of a signal-in-space and a ground network to support en-route through precision approach air navigation. The WAAS augments GPS with the following three services: a ground integrity broadcast that will meet the RNP integrity requirements for all phases of flight down to Category I precision approach, wide area differential GPS (WADGPS) corrections that will provide accuracy for GPS users so as to meet RNP accuracy requirements for all phases of flight down to Category I precision approach, and a ranging function that will provide additional availability and reliability that will help satisfy the RNP availability requirements for all phases of flight down to Category I precision approaches [69].
   Figure 5.34 illustrates the WAAS concept [69, 98]. The WAAS uses geostationary satellites (GEOs, the Inmarsat-3's and successors) to broadcast the integrity and correction data to users for all of the GPS (and GEO) satellites visible to the WAAS network. This broadcast is at the GPS L1 frequency, modulated with a C/A code in the same family as the GPS C/A codes. This family of codes contains 1023 codes, of which all but 256 are balanced codes and of which 36 are assigned or reserved for GPS [32]. Nineteen of the remaining codes have been reserved for the wide area augmentation broadcasts [98]. Thus, a slightly modified GPS avionics receiver can receive these broadcasts. Since these codes will be synchronized to the WAAS network time, which is the reference time of the WADGPS corrections, the signals can also be used for ranging. A sufficient number of GEOs provides enough augmentation to satisfy RNP availability and reliability requirements.
[Figure: the WAAS concept, including a wide-area reference station (WRS).]
Figure 5.34   WAAS concept.

The first two launches of Inmarsat-3 satellites, the first such satellites available for wide area augmentation at the L1 frequency, were in 1996 [87]. Four
satellites are planned, with an edge-of-Earth coverage shown in Figure 5.35. Note that many areas have double coverage, while some areas (e.g., Europe) have triple coverage. However, at least double coverage is required everywhere in the service volume to provide the required RNP availability and reliability [29, 30]. Thus, for CONUS, at least one or two additional GEOs are required [30]. Unlike the Inmarsat-3 communications satellites, these additional GEOs may be small single-mission navigation satellites [100].
In the WAAS concept, a network of monitoring stations (wide area reference stations, WRSs) continuously tracks the GPS (and GEO) satellites and relays the tracking information to a central processing facility [69, 98]. The central processing facility (wide area master station, WMS), in turn, determines the health and WADGPS corrections for each signal in space and relays this information, via the broadcast messages, to the ground Earth stations (GESs) for uplink to the GEOs. The WMS also determines and relays the GEO ephemeris and clock state messages to the GEOs. The signal is converted to the L1 frequency on the GEO satellite and is then broadcast to the avionics user by the GEO satellite.

Figure 5.35   Inmarsat-3 four ocean-region deployment showing 5° elevation contours.

WAAS Navigation Payloads   The navigation payload of Inmarsat-3 is added to the normal communications payload to provide the wide area broadcast. This payload is simply a bent-pipe transponder that converts a C-band uplink to both a C-band downlink and the L1 broadcast using the normal communications C-band receivers and transmitters [32]. In the payload, input from the C-band receiver is converted to IF, filtered, and then converted to the frequency required for the C-band transmitter, plus converted to L1 for continuous transmission via the L1 HPA and antenna. Uplink power and the L1 gain are controlled so that the L1 HPA always operates in saturation. This minimizes transmitted power variations and, along with the requirement for a strong uplink signal, reduces the effects of uplink interference and maximizes the ability to sense it [101]. Power control is realized via an encrypted TT&C link.
   Future WAAS navigation payloads may take on a different form, more like that of the GPS satellites as described in Section 5.5.3 but without the military mission features of the GPS satellites [100]. Instead of being a simple signal transponder, the payload will act as a data transponder, incorporating its own stable clock. In this way, the uplinked data can be encrypted and, thus, provide a more secure data and signal channel.

Message Format and Content [102, 129]   The integrity message contains the status of each GPS satellite as use/don't-use information as well as WADGPS error corrections and GEO ephemeris and clock data. The messages are set into a format with the capability to include both GPS and GLONASS data, although GLONASS data are not planned in the FAA system. The magnitude of the WADGPS corrections can also be used as error statistics by users that are not applying the corrections in the appropriate phases of flight.
The message data rate differs from that of the standard GPS signal [102, 103, 104, 129]. It has a symbol rate of 500 symbols per second. A rate-1/2 forward error-correcting (FEC) convolutional code of length seven is used to reduce the effective data rate to 250 bps, while allowing a 5-dB gain in effective energy-to-bit ratio over an uncoded 250-bps transmission [105]. Each message block, shown in Figure 5.36, contains 250 bits, lasting one second. Each frame contains 24 bits of parity to provide the strong burst-error detection capability required for high integrity. The higher data rate provides two capabilities. The first is a required capability to provide an integrity alarm within 5.2 sec of a signal-in-space fault during Category I precision approach [98]. The second is to broadcast WADGPS corrections at a rate commensurate with SA and ionospheric delay errors.
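The fixed block layout (8-bit preamble, 6-bit message type, 212-bit data field, 24-bit parity; see Figure 5.36) can be separated with simple bit slicing. A sketch, assuming the 250 decoded bits arrive as a Python string of '0'/'1' characters with the preamble first; the function name is illustrative, and the parity check itself is not shown.

```python
def split_waas_block(bits):
    """Split one 250-bit decoded WAAS message block into its fields (sketch).

    bits: string of 250 '0'/'1' characters, preamble first.
    Returns (preamble, message_type, data_field_bits, parity_bits).
    """
    assert len(bits) == 250
    preamble = bits[0:8]            # 8 bits of the 24-bit distributed preamble
    msg_type = int(bits[8:14], 2)   # 6-bit message type identifier (0-63)
    data_field = bits[14:226]       # 212-bit data field
    parity = bits[226:250]          # 24 parity bits
    return preamble, msg_type, data_field, parity
```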
The various message types are listed in Table 5.9 [129]. There are two types of correction data: fast and slow. The types 2 through 5 fast corrections are intended to correct for rapidly changing errors such as GPS SA clock errors, while the slow corrections are for slower changing errors due to the atmospheric and long-term satellite clock and ephemeris errors. The fast GPS clock errors are common to all users and will be broadcast as such. Corrections designated with the maximum positive number indicate not-monitored satellites, while those designated with the maximum-amplitude negative numbers indicate don't-use satellites, which is the integrity indication. Procedures for using these messages are given in the RTCA MOPS [104, 129].
   For the slower corrections, the users are provided with ephemeris and clock error estimates for each satellite in view (message types 24 and 25). Users are separately provided with a wide area ionospheric delay model and sufficient real-time data to evaluate the ionospheric delays for each satellite using that model (message types 18 and 26). This model is comprised of vertical ionospheric delays at a set of grid points that a user can interpolate to the ionospheric pierce points of his pseudorange observations.
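As an illustration of how such a grid might be used, the sketch below bilinearly interpolates broadcast vertical delays at the four surrounding grid points to a pierce point, assuming a uniform 5° grid. This is a generic interpolation only; the weighting, grid-selection, and obliquity procedures actually required are those of the RTCA MOPS [104, 129].

```python
import math

def vertical_delay_at_pierce_point(lat, lon, grid):
    """Bilinear interpolation of vertical ionospheric delay (sketch).

    lat, lon: ionospheric pierce point, degrees
    grid: dict mapping (lat_deg, lon_deg) of grid points on an assumed
          uniform 5-degree grid to broadcast vertical delays (meters)."""
    lat0 = 5 * math.floor(lat / 5.0)     # southwest corner of the cell
    lon0 = 5 * math.floor(lon / 5.0)
    x = (lon - lon0) / 5.0               # fractional position within the cell
    y = (lat - lat0) / 5.0
    return ((1 - x) * (1 - y) * grid[(lat0, lon0)]
            + x * (1 - y) * grid[(lat0, lon0 + 5)]
            + (1 - x) * y * grid[(lat0 + 5, lon0)]
            + x * y * grid[(lat0 + 5, lon0 + 5)])

# Example: pierce point at 39.7 N, -105.2 E with four surrounding grid delays
grid = {(35, -110): 3.1, (35, -105): 3.4, (40, -110): 2.8, (40, -105): 3.0}
print(vertical_delay_at_pierce_point(39.7, -105.2, grid))
```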

[Figure: 250-bit, 1-second message block consisting of an 8-bit preamble (of 24 bits total in 3 contiguous blocks), a 6-bit message type identifier (0-63), a 212-bit data field, and 24 bits of parity.]
Figure 5.36   WAAS message data block format.

TABLE 5.9   WAAS message types

Type     Contents
0        Do not use this GEO for anything (for WAAS testing)
1        PRN mask assignments, set up to 52 of 210 bits
2-5      Fast corrections
6        Integrity information
7        UDRE acceleration information
8        Estimated standard deviation message
9        GEO navigation message (X, Y, Z, time, etc.)
10-11    Reserved for future messages
12       WAAS network/UTC offset parameters
13-16    Reserved for future messages
17       GEO satellite almanacs
18       Ionospheric grid point masks
19-23    Reserved for future messages
24       Mixed fast corrections/long-term satellite error corrections
25       Long-term satellite error corrections
26       Ionospheric delay corrections
27       WAAS service message
28-62    Reserved for future messages
63       Null message

Since tropospheric refraction is a local phenomenon, all users must compute their own tropospheric delay corrections using a standardized model. The GEO broadcast messages will not include any explicit tropospheric corrections.
PRN masks are used to designate which PRN belongs to which correction slot. These masks improve the efficiency of the broadcast by preventing the continual inclusion of PRNs for the integrity data and corrections. The integrity data and corrections are provided sequentially, based upon PRN numbers that are assigned to various types of satellites (GPS, GLONASS, GEO, and future GNSS satellites).

WAAS/Fault Detection/Exclusion Interaction   The users are only required to use fault detection (RAIM) or fault detection and exclusion (FDE) in conjunction with the WAAS in two cases: fault detection is required during Category I precision approach, if available, and fault exclusion is required anytime WAAS integrity is not available in the other phases of flight [104]. Otherwise, neither fault detection nor fault exclusion is required, since these functions are provided by the WAAS broadcast. The WAAS provides the following enhancements to RAIM availability and performance [95, 106]:
   First, in providing use/don't-use information, the WAAS broadcast eliminates the requirement for the RAIM or FDE to perform fault detection and isolation (or satellite exclusion). The WAAS network is isolating the faults.

Availability is increased because it removes the requirement for a fifth or sixth satellite with good geometry.
   Second, during precision approach, RAIM must be used if it is available, that is, if enough satellites are available with good geometry. Its purpose is not to detect satellite failures but to detect rare anomalous propagation events such as local ionospheric, tropospheric, and interference effects. If RAIM is not available, the performance of the signal in space indicated by the WAAS can be used for integrity. The probability of the nonavailability of RAIM coupled with the rare events is small enough to provide the necessary integrity.
   Using GPS alone, these enhancements still would not increase the availability and continuity of function to that required for RNP service, primarily because of satellite coverage. However, the WAAS can provide one more enhancement that will do so: additional satellites with a ranging capability, which is the subject of the next section.

WAAS Ranging   Since the signals broadcast by the WAAS geostationary satellites are modulated with a C/A code, they can also be used for ranging if the timing of the signals is controlled with enough accuracy. Even if the signals are not controlled exactly, pseudorange corrections from their own data messages will provide the required accuracy. The effect of this ranging capability is a very good GPS constellation augmentation, although geostationary satellites in addition to the Inmarsat-3 satellites will be required for some areas of the Earth, including the central part of CONUS. The effect of adding four ranging geostationary satellites on the coverage over CONUS is shown in Figure 5.37. These four satellites are the three (not including the one over the Indian Ocean) illustrated in Figure 5.35 plus one located at W120°. This availability can be compared to that presented earlier in Figure 5.9. The 99.999% availability of HDOP (2.37 average and 3.03 worst case) meets nonprecision approach availability requirements, although double coverage over all locations is required to obtain reliable coverage in the case of a long-term geostationary satellite failure. The availability presented in Figures 5.9 and 5.35 does not include the availability of continuity. That is, they only present instantaneous availability and do not take into account possible loss of availability over the entire flight phase, a subject discussed in reference [130].
The 99.9% availability of VDOP is 2.52 average and 3.13 worst case. These
values may not appear to be acceptable for Category I precision approach.
However, it is not the availability of VDOP that is important but the avail-
ability of vertical accuracy [106]. When a weighted-least-squares solution is applied, as described in Section 5.5.7, the concept of straight VDOP is no longer valid. However, the square root of the vertical component of the covariance matrix
given in Equation 5.72 is valid. It has been shown that the 99.9% availabil-
ity of vertical accuracy can be met using the four GEOs described above
[30]. Furthermore, the weighted-least-squares approach can also be extended to
RAIM [106].
[Figure: availability versus all-in-view dilution of precision over the continental United States, with curves for CONUS HDOP, HDOP at N36 W105, CONUS VDOP, and VDOP at N34 W093 for 24 GPS plus 4 GEO satellites; inset: probability versus number of simultaneous SV failures for 24 GPS satellites and 4 GEOs.]
Figure 5.37   Availability of HDOP and VDOP when GPS is augmented with four GEOs.

[Figure: GNSS space segment, a reference station, and differential corrections broadcast from the pseudolite.]
Figure 5.38   Pseudolite DGPS concept.

5.7.4 Pseudolite Augmentation


Pseudolite (PL) is an acronym for pseudosatellite. A PL is comprised of a
GNSS-like signal generator at a fixed known location that broadcasts DGPS
corrections as well as a ranging code. The concept is illustrated in Figure 5.38
and is analogous to the WAAS approach described in Section 5.7.3 with the
exception that the signal is generated on the ground. The advantages of PLs for
precision approach and landing applications are as follows:

1. They provide a data link for local DGPS corrections and integrity information that can be received with a slightly modified GPS avionics receiver [107, 108].
2. They provide additional ranging signals, just as the WAAS does, resulting in a significant VDOP availability enhancement [109, 110, 111, 112, 113]. This VDOP enhancement is illustrated in Figure 5.39 for two PLs augmenting GPS at the FAA Technical Center in Atlantic City, New Jersey. VDOP is reduced from 2.3 for GPS only to less than 1 in the area of the runways for GPS augmented with two PLs [113]. Similar enhancements are shown in [112] for runways at O'Hare International Airport in Chicago.
   [Figure: VDOP contours versus RW13 crosstrack distance (km), showing the reduction near the runways.]
   Figure 5.39   Illustration of VDOP reduction with 2 PLs at the FAA Technical Center.

3. They provide a rapid change in geometry that is extremely important for

kinematic carrier phase ambiguity (both integer and floating point) resolu-
tion techniques [73] and multipath mitigation. This approach is described
in Section 5.5.9.
4. Their signals in space can be more accurate than satellite signals because of the absence of ephemeris and ionospheric delay errors and reduced tropospheric delay errors [111].

Along with these advantages there are two significant disadvantages that require attention: 1. The proximity of the PL to the avionics user causes potential interference to the reception of satellite signals, the well-known near/far problem [107, 108, 110, 114]. The PL signals become quite strong when the avionics receiver is near the PL if the PL power is set for reception at a distance. For example, the received PL signal power increases inversely with the square of the distance from the PL. For precision approach and landing applications, the PL power would be set for reception at about 20 nmi. Thus, at 0.1 mile, the received signal is 40,000 times stronger (46 dB). 2. A PL's location on the ground could present a problem with the antenna location on the aircraft for simultaneous reception of the pseudolite and the satellites [113, 115].
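The near/far numbers in item 1, and the duty-cycle loss discussed in the next subsection, follow from inverse-square spreading and simple decibel arithmetic. A brief sketch (the 0.1-mile close-in distance is interpreted here as 0.1 nmi, consistent with the 40,000:1 figure):

```python
import math

def pl_power_increase_db(far_nmi, near_nmi):
    """Increase in received PL power (dB) moving from far_nmi to near_nmi,
    assuming free-space inverse-square spreading."""
    return 10.0 * math.log10((far_nmi / near_nmi) ** 2)

print(pl_power_increase_db(20.0, 0.1))       # ~46 dB (a factor of 40,000)

# Satellite-signal loss from a pulsed PL with 10% duty cycle "punching holes"
duty_cycle = 0.1
print(10.0 * math.log10(1.0 - duty_cycle))   # ~ -0.46 dB
```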

Solutions to the Near/Far Interference Problem   The interference problem can be solved by altering the signal structure of the PL [113, 115]. Two signal diversity techniques have been tested: the use of a pulsed signal and an offset in frequency from that of the received satellite signals. Pulsing is required to prevent the capture of a receiver due to the excessive dynamic range required for a close-in signal [108]. The pulsing with a relatively low duty cycle reduces the interference to any signal in the reception band. The strong signal simply punches holes in the lower-powered signals, reducing their received power by only the duty-cycle percentage. For example, if the duty cycle is set at 0.1, the signal loss of the satellite signal is only 10 log10 0.9 = 0.458 dB, although an additional loss in C/N0 is realized because the pulse power also enters the correlator. This pulse power is reduced significantly by precorrelation pulse clipping or pulse suppression, techniques that are already implemented in GPS receivers. Pulsing is also required to prevent multiple PLs from interfering with each other. This adds a requirement of pulse timing so that pulses from two different PLs are not received simultaneously, in addition to the requirement that received pulses must be asynchronous with the reception of GPS data bit edges. This type of timing is possible [113, 115].
Even though pulsing can reduce interference significantly, the strong signal within the clipped pulses can still cross-correlate with the received satellite signals, if indeed the PL signals carry GPS-like C/A codes [108, 110, 113, 114, 115, 116]. This causes another dynamic range problem because the cross-correlation margin between C/A codes is only on the order of 22 to 24 dB [18, 32]. The 10-dB reduction in average power of the pseudolite signal due to a 10% duty-cycle pulsing does little to prevent cross-correlation [114]. This is where the frequency offset helps. It has been shown that the cross-correlation can also occur at frequency offsets but is proportional to the 1-kHz spectral line component levels of another C/A code in the same family [18, 32]. As it happens, however, the spectral line components near the null of the spectrum are down on the order of 70 to 80 dB, as can be observed in Figure 5.15. Thus, if the PL were to transmit in the null of the GPS satellite C/A code spectra, which are all at the same frequency to within 5 kHz due to Doppler differences, the cross-correlation would be insignificant [113, 115]. Cross-correlation peaks within 4 dB of the 0-offset case can still occur [116]. However, if the carrier-frequency/code-frequency ratio of 1540 is maintained, these peaks disappear very rapidly and simply create an interfering noise that the pulsing mitigates.
[Figure: degradation in GPS C/N0 versus average PL to GPS power (dB), 20 to 80 dB, for the four signal-structure cases.]
Figure 5.40   Effects of signal structure on PL interference to GPS satellite signal reception.

Test results back up these interference mitigation theories [113]. Figure 5.40 presents the degradation in GPS signal reception C/N0 as a function of average received PL to GPS power ratio for four cases: no PL pulsing or frequency offset, either pulsing or frequency offset, and both. The improvements using the
mitigation techniques are obvious. Using only a frequency offset buys very little mitigation. Using pulsing alone results in a degradation of about 3.5 dB for average PL to GPS power ratios of 16 to 70 dB (peak power is 10.5 dB higher). Cross-correlation could occur in this case. This could be part of the degradation, but it is mostly due to the fact that the receiver uses multibit sampling that allows some of the pulses to pass through the correlator. The best performance is achieved when both techniques are used, resulting in about 1.5-dB degradation. This is very acceptable considering the advantage gained in navigation performance using PLs. The effect on low-cost "hard-limiting" GPS receivers is even less. With this signal structure, the differential correction/integrity data rate can be as high as 1 kbit/sec with standard BPSK modulation, and up to 2 kbits/sec using quadrature phase shift keying (QPSK) data modulation [113, 115, 131].

Solutions to the Antenna Location Problem   A combination of solutions needs evaluation to solve this problem, if it is indeed a problem [113, 115].
First of all, if the signal diversity techniques are used to solve the interference
problem, then the receivers become insensitive to the PL's transmitted power
level as long as the average power is strong enough to receive at the maxi-
mum distance (e.g., 20 nmi). Thus, it might be possible to increase the power
of the PL so that it can be received via the reduced antenna gain at negative
elevation angles. Second, PL placement (high and off to the side) may be pos-
sible so that the elevation angle is near zero rather than large negative. Third,
although not desirable, an additional antenna could be added to the bottom of
the aircraft. This is not desirable for three reasons: an additional antenna also
requires another hole in the aircraft's fuselage, it requires an additional LNA and
associated cables, and it adds an additional lever arm and delay to the GNSS
solution.
Initial test results have shown that the antenna location problem can be solved without an additional antenna [113, 132]. These results show that PL signal data message dropouts, with no loss of lock, occur only at larger distances when the aircraft is maneuvering. No message dropouts occurred when the aircraft was on final approach. These preliminary test results are promising.

5.8 FUTURE TRENDS

In this chapter, satellite radio navigation is described as it existed in 1996, except for some developments to enhance GNSS integrity, availability, and accuracy for commercial aviation. However, there are other developments that will enhance GNSS even further. Two of these developments are discussed in this section: the relationship of future personal communications systems to GNSS and the evolution of a future civil GNSS.
With the proposed development of satellite-based personal communication
systems such as Iridium, the possibility of position reporting via these systems
is attractive. This brought about a concept in which the communication system
signals themselves could be used as navigation signals. However, this concept
failed to mature for two reasons. First, the number of satellites required for oper-
ation of the communication system was not enough to provide adequate con-
tinuous navigation, especially for dynamic users. Second, since GPS receivers
have become so much smaller and less expensive, such a receiver could eas-
ily be embedded into the personal communication receiver/transmitter, with-
out the development of communications signals with a navigation signal capa-
bility.
However, this does not mean that a future civil GNSS system could not take
advantage of these future communications systems. With a suitable orbit con-
figuration from a geometric point of view, the same satellites could be used for
both purposes [99, 100, 117, 118, 119, 120]. The investigations of the feasibility
of a future civil GNSS, at least as an augmentation to GPS, have been recom-
mended by the RTCA Task Force 1 [2] and the FANS GNSS subgroup of ICAO
[120]. Early investigations by Inmarsat recognized substantial cost savings for
such a system if navigation payloads were to be hosted on future communi-
cation satellites, provided that they were placed in intermediate circular orbits
(ICOs) [99, 100]. In fact, a navigation payload in each of these communication
satellites (12-15) would provide an excellent augmentation to GPS, increasing
accuracy, integrity, availability, and continuity of service significantly. Then, as
a future option, additional low-cost navigation satellites could be launched to
eventually provide a stand-alone civil GNSS as a future replacement for GPS,
in the event that GPS is no longer available.

PROBLEMS

5.1. A GPS user receiver is tracking four GPS satellites: PRNs 1, 13, 19, and 22. Via the reception of navigation data, the receiver receives the following ephemeris parameters:
(a) Parameters common to all four satellites:

   √A = 5153.619629 (meters)^1/2
   e = 0
   i₀ = 0.3055555556 semicircles
   ω = 0 semicircles
   t_oe = 345,600 seconds in the GPS week

(b) Individual parameters:

   Satellite PRN    Ω₀ (semicircles)    M₀ (semicircles)
   1                -0.3239929371       -0.1055577595
   13                0.6760070629        0.5808311294
   19               -0.9906596038       -0.3166133151
   22               -0.6573262705       -0.2525022039

All other received parameters are zero. This user's measurements were taken such that a common time of applicability of the satellites' ephemerides is 531,000 sec in the GPS week. Evaluate the position of each satellite at that time of week.
Ans.: The satellite positions are as follows:

   X₁ = 13,672.46475 km
   Y₁ = -6,720.41440 km
   Z₁ = 21,755.97535 km
   X₁₃ = -2,370.46666 km
   Y₁₃ = -23,498.04734 km
   Z₁₃ = -12,150.94171 km
   X₁₉ = -18,962.99343 km
   Y₁₉ = 6,971.55345 km
   Z₁₉ = 17,240.21601 km
   X₂₂ = -10,899.89991 km
   Y₂₂ = -14,301.92165 km
   Z₂₂ = 19,546.60953 km

5.2. The user's estimated position (near Denver) in ECEF coordinates is as follows:

   X = -1,268.4451896 km
   Y = -4,739.4160255 km
   Z = 4,078.0482708 km

which corresponds to a position of N39° 44′, W104° 59′, at an altitude of 1,609.344 meters. (Assume a spherical earth with a radius of 6378.163 kilometers.) Compute the slant ranges and azimuth and elevation angles from his estimated position to the four satellites described in Problem 5.1. Compute the HDOP, VDOP, PDOP, TDOP, and GDOP at the user's estimated position.

Ans.: The slant ranges are as follows:

R1 = 23,230.69260 km
R13 = 24,829.03623 km
R19 = 24,969.68770 km
R22 = 20,578.69009 km

The azimuth and elevation angles are as follows:

Az1 = 45.201°     El1 = 24.955°
Az13 = 171.127°   El13 = 8.759°
Az19 = 305.645°   El19 = 7.436°
Az22 = 302.780°   El22 = 66.743°

The DOPs are as follows:

HDOP = 1.241
VDOP = 1.631
PDOP = 2.050
TDOP = 0.823
GDOP = 2.208
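
The angle and DOP computations of this problem follow from the unit line-of-sight vectors to each satellite, resolved into local east-north-up axes, and from the geometry matrix discussed in Chapter 5. A hedged Python sketch is shown below; the function names are illustrative, and a spherical earth is assumed as in the problem statement.

import numpy as np

def ecef_to_enu_matrix(lat_deg, lon_deg):
    """Rotation matrix from ECEF to local east-north-up axes."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([
        [-np.sin(lon),                np.cos(lon),               0.0],
        [-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)],
        [ np.cos(lat) * np.cos(lon),  np.cos(lat) * np.sin(lon), np.sin(lat)]])

def az_el_and_dops(user_ecef, sat_ecef_list, lat_deg, lon_deg):
    """Azimuth/elevation (degrees) to each satellite and the DOP values."""
    R = ecef_to_enu_matrix(lat_deg, lon_deg)
    rows, angles = [], []
    for sat in sat_ecef_list:
        los = np.asarray(sat, float) - np.asarray(user_ecef, float)
        e, n, u = R @ (los / np.linalg.norm(los))      # unit line of sight in ENU
        angles.append((np.degrees(np.arctan2(e, n)) % 360.0,
                       np.degrees(np.arcsin(u))))
        rows.append([-e, -n, -u, 1.0])                 # geometry-matrix row
    G = np.array(rows)
    Q = np.linalg.inv(G.T @ G)                         # covariance of a unit-variance fix
    dops = {"HDOP": np.sqrt(Q[0, 0] + Q[1, 1]),
            "VDOP": np.sqrt(Q[2, 2]),
            "PDOP": np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2]),
            "TDOP": np.sqrt(Q[3, 3]),
            "GDOP": np.sqrt(np.trace(Q))}
    return angles, dops

# Denver user of Problem 5.2 (km) and the satellite positions of Problem 5.1 (km)
user = [-1268.4451896, -4739.4160255, 4078.0482708]
sats = [[13672.46475, -6720.41440, 21755.97535],
        [-2370.46666, -23498.04734, -12150.94171],
        [-18962.99343, 6971.55345, 17240.21601],
        [-10899.89991, -14301.92165, 19546.60953]]
print(az_el_and_dops(user, sats, 39.7333, -104.9833))  # compare with the answers above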

5.3. After traveling some distance the user then measures the following actual
ranges to the four satellites:

R1 = 22,280,304.178 meters
R13 = 25,351,375.133 meters
R19 = 25,373,230.135 meters
R22 = 20,867,137.653 meters

What is the user's new position in ECEF coordinates? What are the
azimuth and elevation angles to the satellites and the HDOP, VDOP, PDOP,
TDOP, and GDOP at this new position, which has an approximate position
of N44° 58', W93° 15', at an altitude of 200 meters? Assume a spherical
earth with a radius of 6378.163 kilometers.
Ans.: The user's position solution (near Minneapolis) is as follows:

X = -255.843602 km
Y = -4,505.54881 km
Z = 4,507.55905 km

The azimuth and elevation angles are as follows:

Az1 = 51.406°     El1 = 36.316°
Az13 = 182.344°   El13 = 3.909°
Az19 = 310.241°   El19 = 3.709°
Az22 = 288.152°   El22 = 59.474°
The DOPs are as follows:

HDOP = 1.304
VDOP = 1.603
PDOP = 2.066
TDOP = 0.8432
GDOP = 2.231
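
The new position can be found by iterative linearized least squares, adjusting an initial estimate until the predicted ranges match the measurements; the sketch below solves simultaneously for position and a common range (clock) bias. It is an illustration only, and the names are not taken from any particular implementation.

import numpy as np

def least_squares_fix(measured_ranges_m, sat_ecef_m, initial_ecef_m, iterations=10):
    """Iterative linearized least-squares solution for ECEF position plus a
    common range bias, from measured ranges to satellites at known positions."""
    sats = np.asarray(sat_ecef_m, dtype=float)
    x = np.append(np.asarray(initial_ecef_m, dtype=float), 0.0)  # [X, Y, Z, bias]
    for _ in range(iterations):
        diff = sats - x[:3]
        rho = np.linalg.norm(diff, axis=1)                # predicted geometric ranges
        resid = np.asarray(measured_ranges_m) - (rho + x[3])
        G = np.hstack([-diff / rho[:, None], np.ones((len(sats), 1))])
        dx = np.linalg.lstsq(G, resid, rcond=None)[0]     # correction to the estimate
        x += dx
        if np.linalg.norm(dx[:3]) < 1e-4:                 # converged below 0.1 mm
            break
    return x[:3], x[3]

# Ranges of Problem 5.3 (m), satellites of Problem 5.1 (m), Denver starting point (m)
ranges = [22280304.178, 25351375.133, 25373230.135, 20867137.653]
sats = [[13672464.75, -6720414.40, 21755975.35],
        [-2370466.66, -23498047.34, -12150941.71],
        [-18962993.43, 6971553.45, 17240216.01],
        [-10899899.91, -14301921.65, 19546609.53]]
position, bias = least_squares_fix(ranges, sats, [-1268445.19, -4739416.03, 4078048.27])
print(position / 1000.0, bias)   # kilometers, plus any common range bias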
6 Terrestrial Integrated Radio
Communication-Navigation
Systems

6.1 INTRODUCTION

Since the 1970s, many radio communication and navigation systems have
used the same portion of the frequency spectrum and common technology,
such as time synchronous operation, digital modulation, spread spectrum wave
forms, coding and user-borne clock oscillators. Synchronous operation, in con-
junction with signal time-of-arrival measurement, has led to a direct method
for measuring the range between transmitter and receiver locations in sys-
tems using this technology. For these reasons, integrated relative and absolute
communication-navigation systems, which provide both digital communication
and navigation functions by means of the same wave form, have been widely
developed. These systems typically use the content of digital data and the time
of arrival of the messages measured by the receiver, to determine the receiver
platform's position, through some form of multilateration. In general, the posi-
tions are determined in a relative sense within an arbitrary grid, although the
unit positions can be referenced to an absolute, geodetic coordinate system,
such as latitude, longitude, and altitude, through the use of reference stations
whose positions are independently known in the absolute coordinate system. In
addition, the position data may be combined in a Kalman filter with dead-reck-
oning sensor data, such as from an inertial platform, for the purpose of position
extrapolation and calibration of the dead-reckoning sensor errors.
Several types of terrestrial integrated communication-navigation systems
have been developed. One is a decentralized system, in which the operation
is not dependent on any central site or node, and each user in a community of
members determines its own position. Such a system is also called nodeless. A
second type is a centralized system, wherein the operation is dependent on a
central site (node) and may be controlled by it and wherein the determination
of the positions of the users in the community is performed by that central site.
Frequently, it is desired to have the positions of a large number of users known
and tracked at the central site, such as in military or civil command and con-
trol systems. Typically, in such a system, users may obtain their positions by
automatic, periodic, or occasional requests from the central node; hence such

a system is considered nodal. Systems are being developed that exhibit both
nodal and nodeless characteristics and thus become hybrid systems. However,
the fundamental design of these systems is typically based on either the decen-
tralized or centralized concepts.
Typical examples of these systems are represented by the Joint Tactical Infor-
mation Distribution System Relative Navigation (JTIDS RelNav) function and
the position location reporting system (PLRS) and its enhanced versions, whose
principles of operation are described in this chapter. The former is representa-
tive of a decentralized system and the latter is representative of a centralized
system. Applications of these types of systems cover a wide spectrum, includ-
ing the handover of targets between units operating within a common grid,
rendezvous of aircraft or other units, command and control from the viewpoint
of a military commander having knowledge of the position of his forces, and
such specialized purposes as search and rescue and medical evacuation.
The systems described in this chapter were mature and operational in 1996.
For example, by 1996, over 3600 PLRS and enhanced PLRS user units had been
produced and deployed on a variety of U.S. Army and Marine Corps vehicles,
including tanks and helicopters and as manpack units, and about 1500 more
were planned for the future. By 1996, about 500 airborne JTIDS terminals had
been installed on such aircraft as the U.S. E3A, E2C, B-1, F-14D, F-15C, and
JSTARS, as well as on several aircraft of other NATO countries. About 400
more such terminals had been planned for later installation in various military
aircraft and ships. Also in 1996, a major development was under way by a
consortium of several countries for a JTIDS-like smaller and modular MIDS
terminal that includes the relative navigation function. This reduction in termi-
nal size will make it possible to install it in a large variety of other aircraft.

6.2 JTIDS RELATIVE NAVIGATION

6.2.1 General Principles


The relative navigation (RelNav) function of JTIDS is a decentralized position
location and navigation system wherein each user independently determines its
position, velocity, and altitude from data received from other users. Member
units in a JTIDS community make transmissions in time slots assigned on a pre-
cise common time base maintained by on-board synchronized clocks. Among
the many transmitted message types are round-trip timing (RTT) messages and
precise position location and identification (PPLI) messages. The RTT mes-
sages provide maintenance of the precise clock synchronism that supports one-
way radio ranging, and PPLI messages provide the time-of-arrival (TOA) range
measurements and the source position information that are the foundation of the
RelNav function.
JTIDS RelNav is based fundamentally on trilateration, which may be visual-
ized geometrically as scribing arcs of known radius (derived from the TOA of
PPLI messages) centered on the positions of the transmitters and intersecting at


the position of the receiver. The algebraically equivalent process involves the
solution of three simultaneous quadratic equations in two unknowns (the third
equation serving to resolve the ambiguity in the solution of the first two). The
solution may also be obtained by an iterative linear process in which an initial
position estimate is adjusted in the direction of first one and then another of the
sources until a position satisfying the ranges to all three is found. This essen-
tially is the process mechanized in the JTIDS RelNav Kalman filter. Each PPLI
message is processed independently and the observations need not be simultane-
ous. The importance of this is obvious if the user is moving and the observations
are sequential as in JTIDS time slots.
JTIDS units typically transmit PPLI messages at intervals ranging from 3 to
12 seconds. A navigating user's processor employs an estimate of its velocity to
extrapolate travel during the time between received PPLI observations. It uses
the extrapolated position estimate at the instant of each observation to com-
pute a range error vector, namely, the difference between the measured range
(TOA) and the range computed from the estimated position and the received
source position. The velocity estimate is obtained from the aircraft dead-reck-
oner system, such as inertial or air data, and the errors of the dead-reckoner
system contribute to the observed range error vectors so that, over time, the
dead-reckoner errors can be estimated. JTIDS RelNav is, therefore, typically
operated as a hybrid multisensor navigation system in which the range mea-
surements are used to derive corrections to the dead reckoner in a multi-state
Kalman filter. Chapter 3 describes the basic concepts of hybrid multisensor nav-
igation systems.
JTIDS RelNav may also operate without input from a dead reckoner in what
is called a TOA-only mode, but in this mode the extrapolated position estimate
is based simply on the immediately preceding two positions and is suitable only
for very low-dynamic platforms, such as surface ships.

6.2.2 JTIDS System Characteristics


JTIDS is a synchronous, time-division multiple-access digital communication
system operating in the 960- to 1215-MHz band. The time slot and message
structure are shown in Figure 6.1. The first 32 pulses of each message are a
synchronization preamble to establish precise receiver sampling times for the
information chips modulated on the following data pulses. The unoccupied por-
tion of the time slot allows the transmission of certain longer message types and
guard time for RF propagation to all users before the beginning of the next time
slot.
The preamble and its digital matched filter detector establish message start
time to a precision of a few hundredths of a microsecond. Precise determina-
tion of message start is necessary to the sampling and decoding of the 200-ns
data chips and synchronized clocks are necessary for slot number definition and
crypto-decoding. With message TOA very precisely known on the receiver's
Figure 6.1 JTIDS signal structure. (One time slot contains one standard message:
258 pulses of 6.4-microsecond duration, spaced 6.6 microseconds apart; 32 chips per
pulse encode five information bits, continuous phase shift modulated and
pseudo-randomly encoded.)

clock, very precise synchronization of all the individual clocks, although not
necessary for the communications function of JTIDS, allows the receiver to
convert message TOA to an accurate one-way radio range to each of the trans-
mitters and thus support a precise multilateration navigation function.

6.2.3 Clock Synchronization


Two means of maintaining clock synchronism are provided: round-trip timing
(RTT), which operates independently of RelNav, and a passive technique intrin-
sic to the RelNav function. There is also provision for synchronization to an
on-board external time reference (ETR) such as an atomic clock or a global
positioning system (GPS) receiver.
The net time reference (NTR) transmits first in any new net and establishes
the system time to which all other units synchronize by the exchange of round-
trip timing interrogation (RTTI) and round-trip timing reply (RTTR) messages
either directly with the NTR or with another unit already synchronized to the
NTR. RTTis are very short messages containing only the addresses of the inter-
rogator and of the desired donor. The donor responds at a fixed time later in
the same time slot with an RTTR message containing the address of the inter-
rogator and the time of arrival (TOA) of the RTTI as measured on the donor's
clock. The interrogator measures the TOA of the reply and computes the adjust-
ment to its own clock necessary to make the donor's reported TOA equal the
TOA of the reply at the interrogator. Figure 6.2 illustrates the message exchange
and the associated calculation. A series of RTT transactions over a period of
a few minutes provides an estimate of the interrogator's clock drift rate; i.e.,
the frequency error of its clock oscillator and the frequency error estimate is
then used to retune the oscillator driving the clock. The estimation of clock bias
and frequency errors is carried out in a small Kalman filter that provides error
Figure 6.2 The round-trip timing (RTT) process. err(i) = [TOA(s) - TOA(i) + t(d)] / 2,
where err(i) = interrogator clock error, TOA(s) = time of arrival of the interrogation
on the source clock, TOA(i) = time of arrival of the reply on the interrogator clock,
t(d) = standard reply delay time, and t(o) = slot start time.

uncertainties in its covariance matrix. The clock bias uncertainty (variance) is


converted to a time quality number that is transmitted in the PPLI messages.
A synchronizing user performs RTT transactions with the donor of the highest
time quality exceeding its own.
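
The arithmetic behind such an exchange can be illustrated with the generic two-way time-transfer relations shown below. The four timestamps used here are an assumed convention for illustration; the JTIDS implementation conveys the equivalent information through TOA(s), TOA(i), and the fixed reply delay t(d) of Figure 6.2.

def two_way_time_transfer(t1, t2, t3, t4):
    """Clock offset and one-way propagation delay from a two-way exchange.

    t1: interrogation transmit time on the interrogator's clock
    t2: interrogation receive time on the donor's clock
    t3: reply transmit time on the donor's clock
    t4: reply receive time on the interrogator's clock
    Returns (offset, delay), where offset = donor clock minus interrogator clock.
    Assumes the propagation delay is the same in both directions.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example: donor clock 40 ns ahead of the interrogator, 100-microsecond one-way path
offset, delay = two_way_time_transfer(0.0, 100.040e-6, 150.040e-6, 250.000e-6)
print(offset, delay)   # about 4.0e-8 s offset and 1.0e-4 s delay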
A technique of passive synchronization is provided for users constrained
to radio silence. A RelNav user's clock error adds linearly and equally to the
observed one-way radio ranges to all the PPLI sources. The RelNav Kalman
filter of passive users minimizes this common range bias by assigning it to the
clock state carried in the RelNav Kalman filter. The clock errors of passive
users are, therefore, correlated with source time and position errors and are
magnified by geometric dilution of precision (GDOP). (See Chapters 4 and 5
for a discussion of GDOP.) Generally, the time and position errors of passive
users are greater than those of active users because the simultaneous solution
for both time and position is the equivalent of hyperbolation, rather than tri-
lateration, and the associated GDOP is greater. Passive users with an external
time reference, of course, do not suffer this effect.
Synchronization to an external time reference (ETR) follows essentially the
same procedure as RTT. The ETR supplies a time pulse and a data message
declaring the time of the pulse. The JTIDS terminal observes the difference
between the TOA of the pulse and the ETR's declared time of the pulse and
adjusts its clock and frequency models accordingly. Synchronization to ETRs
allows widely separated JTIDS ground units to maintain synchronism and readi-
ness to communicate and support RelNav over extended periods without radio
contact with other RTT sources.

6.2.4 Coordinate Frames and Community Organization


Coordinate Frames The primary coordinate frame for RelNav calculations is
geodetic latitude and longitude; however, JTIDS may simultaneously support
a secondary purely relative grid (RelGrid) coordinate frame unique to JTIDS.
Operation in the RelGrid is at times useful for the exchange of target coor-
dinates between units operating in areas with poor reference to local geodetic
coordinates, such as over ocean or deep-penetration missions.
The RelGrid is a cartesian frame tangent to the Earth at its origin with
U-coordinate east, V-coordinate north, and W-coordinate upward at the ori-
gin. The RelGrid is typically established by a single moving unit called the
navigation controller (NC) that serves the same function in the RelGrid as do
surveyed ground references or GPS-equipped users in the geodetic frame; it
establishes a reference baseline from which other units make trilateration mea-
surements. There must, therefore, be relative motion between the single NC
and the dependent users. The NC uses a specified geodetic location for the grid
origin to transform latitude and longitude from its on-board dead-reckoner sys-
tem to RelGrid coordinates, and it appends the RelGrid coordinates to its PPLI
message. Position and velocity errors of the NC dead reckoner lead to offset
and drift of the RelGrid relative to the true Earth and azimuth errors introduce
a rotation of the RelGrid about the W-axis. However, the objective of RelGrid
operation is the calibration of the dead reckoners of the users relative to the
dead reckoner of the NC rather than calibration with respect to the Earth.
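
A much-simplified version of that geodetic-to-grid transformation is sketched below, assuming a spherical earth and a flat tangent plane near the grid origin; the function and constants are illustrative, not the terminal's actual mechanization.

import math

EARTH_RADIUS_M = 6378163.0   # spherical-earth radius, as used in the Chapter 5 problems

def geodetic_to_relgrid(lat_deg, lon_deg, alt_m, origin_lat_deg, origin_lon_deg,
                        origin_alt_m=0.0):
    """Approximate U (east), V (north), W (up) coordinates in a tangent-plane
    grid whose origin is at the specified geodetic location.  Valid only for
    points near the origin; a spherical earth is assumed."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    lat0, lon0 = math.radians(origin_lat_deg), math.radians(origin_lon_deg)
    r = EARTH_RADIUS_M + origin_alt_m
    u = r * math.cos(lat0) * (lon - lon0)   # east displacement
    v = r * (lat - lat0)                    # north displacement
    w = alt_m - origin_alt_m                # up displacement
    return u, v, w

# Example: a unit 0.1 degree north and east of a grid origin at 35 N, 40 E
print(geodetic_to_relgrid(35.1, 40.1, 3000.0, 35.0, 40.0, 0.0))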
RelGrid user units acquire an estimate of the RelGrid origin by computation
from the geodetic and RelGrid coordinates in a PPLI message from the NC or
from another unit established in the RelGrid. The new grid entrant derives its
initial RelGrid coordinates from its own geodetic position estimate and the com-
puted origin location. Thereafter, RelGrid and geodetic navigation computations
proceed independently using the same source selection and Kalman filtering
logics as described in Section 6.2.6. Though computations in the two coordinate
frames are essentially independent, in a community with both geodetic and Rel-
Grid sources there is some interaction through the common dead-reckoner error
terms of the RelNav Kalman state vector (Table 6.1) as it attempts to reconcile
own-unit dead-reckoner errors and the NC dead-reckoner errors reflected in the
grid drift. If well disposed accurate geodetic references are available, there is
no need to establish a relative grid.

Community Organization A unique characteristic of JTIDS decentralized


RelNav is that users navigating in the system become navigation reference
sources for other users, and it is highly interactive in that the navigation errors
of one user propagate to other users. RelNav employs a dynamic covariance-
based user hierarchy to control these interactions and prevent reciprocal ranging
and regenerative circulation of errors. Rank in the hierarchy is transmitted in
the position quality and time quality fields of the PPLI message. Each user
independently estimates position and velocity corrections on the basis of PPLI

TABLE 6.1 Kalman filter state vector for RelNav inertial
dead-reckoner system

State-Vector
Element       Description

 1            Relative grid U-position error
 2            Relative grid V-position error
 3            Geodetic quaternion error-element 1
 4            Geodetic quaternion error-element 2
 5            Altitude error correction
 6            RelGrid controller azimuth angle error
 7            Clock bias
 8            Platform Z-axis angle error
 9            X-velocity error
10            Y-velocity error
11            X-axis tilt error
12            Y-axis tilt error
13            Altitude scale factor error correction
14            RelGrid controller U-axis velocity error
15            RelGrid controller V-axis velocity error
16            Clock frequency error

messages received from users of superior quality and establishes its own rank in
the hierarchy on the basis of the qualities of its sources and the measurement
geometry as reflected in the RelNav Kalman filter covariance matrix. Some
units in the community must, of course, have independently known positions
and qualities to get things started.
In the time hierarchy, the net time reference (NTR) transmits the highest
time quality ( 15). Other units transmit time qualities derived from the time vari-
ance developed in their synchronization Kalman filters. Primary users employ
only the RTT technique, while secondary users employ primarily passive syn-
chronization, making recourse to RTT only under certain conditions of poor
geometry. Explicit designation of secondary users is seldom made. Radio-silent
users, unable to participate in RTT message exchanges, automatically assume
secondary user status.
Within the RelNav hierarchy, the equivalent of the NTR is the RelGrid
navigation controller (NC). RelGrid coordinates transmitted by the NC are by
definition perfect as indicated by transmission of the highest relative position
quality of 15. There is no equivalent of the NC in the geodetic frame; that
is, there is no equivalent arbitrarily perfect reference designator. Designation
as a position reference disables geodetic position update; however, this desig-
nation does not connote perfection. The accuracy of the geodetic position in
PPLI messages is characterized by the geodetic position quality which, for all
units, is determined initially by operator entry and subsequently by the RelNav
Kalman filter covariance. The maintenance of the covariance based hierarchy

in both time and position is the function of the source selection logic described
in Section 6.2.6.

6.2.5 Operational Utility


JTIDS RelNav provides precise position registration in geodetic coordinates and
(optionally) in its own unique U/V/W relative coordinate frame while requiring
no additional hardware beyond the JTIDS data link terminal and its data bus
interface with the host system. It provides calibration of on-board dead-reck-
oner errors (continuous in-air alignment) with consequently improved platform
alignment for referencing on-board weapon launch and guidance systems and
improved navigation accuracy during excursions out of JTIDS net coverage,
such as low-level missions in forward areas.
Precise registration of user positions and calibration of dead-reckoner errors
lead to improved registration between targets acquired by an on-board radar
and target track reports received from surveillance systems, especially those
from JTIDS-equipped airborne surveillance systems such as the E3 AWACS,
E2C Hawkeye, and JSTARS. More importantly, accurate relative position and
platform alignment provide improved registration of locally acquired targets
exchanged between the mission elements themselves. Position accuracy of one-
tenth to one-quarter mile is typically required to support reliable correlation
of these target reports. This exceeds the relative accuracy of unaided iner-
tial navigators after an hour of flight but is well within JTIDS RelNav accu-
racy. Accurate correlation of target reports between mission elements results
in more efficient weapon/target allocation. Accurate knowledge of the loca-
tions of the cooperating mission elements via the exchange of PPLI messages
also contributes to reduced risk of fratricide and allows greater freedom and
precision of maneuver between supporting elements in low-visibility condi-
tions.

6.2.6 Mechanization
The overall diagram of the RelNav function in Figure 6.3 shows its three major
subfunctions: source selection, Kalman filter and navigation processing. Each
received PPLI message is processed by the source selection function immedi-
ately following its reception. Host dead-reckoner data are processed in the nav-
igation function to provide an estimated own-unit position at the time of receipt
of each selected PPLI message. Source selection stores the selected PPLI obser-
vations and the associated navigation data to await processing by the RelNav
Kalman filter. The computed range and direction to the source and the mea-
sured range from the TOA of the received PPLI message provide a range error
vector which the filter uses to estimate position, velocity, and other dead-reck-
oner error states. These error estimates are applied to the internal dead-reck-
oner model in the navigation function and corrections are supplied to the host
platform.
Figure 6.3 Overall relative navigation flow diagram. (Received PPLI messages (latitude,
longitude, altitude, time of arrival, and qualities) and host dead-reckoner data enter the
source selection function; the selected observations, the predicted range, and the direction
cosines feed the Kalman filter state and covariance update; the resulting corrections are
applied to the internal dead-reckoner model in the navigation function and supplied to
the host.)


Source Selection The source selection function is crucial to the decentralized


design of JTIDS RelNav; in fact, it is the fundamental instrument for the main-
tenance of community stability. It enforces a position and time quality hierarchy
to prevent the regenerative circulation of errors and it also performs geometric
tests to give preference to sources from the directions most likely to improve
the user's position estimate.
Usually more PPLI messages are received than can be processed by the
Kalman filter function and some must be rejected. The JTIDS computer performs
many of its tasks on a slot-by-slot basis. However, there are tasks that do not need
to be completed within one time slot, and, indeed, there is usually insufficient time
for completion of all the tasks within a single slot. The Kalman filter involves
many trigonometric and matrix operations requiring considerable computer time,
but dead-reckoner error states are only slowly variable so the filter performance
is not very time sensitive. It is assigned a low task priority, and the processing
of a single observation may require many time slots for completion. Meanwhile,
the source selection function operates on each PPLI message as it is received and
buffers those selected to await processing by the filter. When the Kalman filter
completes processing the batch of observations gathered during the preceding
cycle, the current source selection buffer is transferred to the filter's input buffer
and a new source selection cycle is begun. The duration of the filter cycle, and
hence also of the source selection cycle, is variable (usually from 3 to 20 seconds)
because it is dependent on both the total processing load on the computer and on
the number of observations processed by the filter.
Figure 6.4 shows the functions and interfaces of the source selection pro-
cess. The minimum range test rejects observations from sources so near as
to threaten the linearity assumptions of the Kalman algorithm. The qual-
ity screening test is fundamental and maintains the quality-based hierarchy
by a comparison of the time and position qualities of the received PPLI

I Communications
Management Function
I PPLI
IMessage -----4_ Minimum Range Test I

I Kalman Filter Function Posrtion


!
l"ovar1anoe
--i-1
Quality Screening
I
- rl
1
I Time Base Function Time
vanance
Rank Tests
I
.j
1
Source Buffer I
I Navigation Function lNavigation
!Lata I
Management
I
Selected
.;:,ources
Kalman Filter
I
Figure 6.4 Relative navigation source selection flow diagram.

messages with those of the receiving user. In general, both the time and position
qualities of the source must exceed the position quality of the receiving user.
This test is, in a sense, antithetical to the concept of the Kalman filter which
was designed to derive low-variance estimates from higher-variance observa-
tions; however, it has been found to be essential to the operation of the network
of interactive filters in a RelNav community. It recognizes that the simple qual-
ities transmitted in the PPLI messages do not represent true variances (noises)
in the estimation sense; rather, they represent uncertainties of estimation errors
that are predominantly correlated bias errors. The limited PPLI message bits
available to the RelNav function precluded the more sophisticated approach
of transmitting the separate position terms of the RelNav filter covariance
matrix.
Users enter the network with an operator-assigned initial position quality
indicating the uncertainty of the initial position estimate. In airborne users, posi-
tion quality will degrade with time in accordance with the error signature of the
user's dead reckoner as modeled in the Kalman filter's time propagation of the
covariance matrix. The Kalman filter processes PPLI observations to estimate
the dead-reckoner errors (and reduce their covariance terms) and this will be
reflected in a decreased rate of degradation of position quality. A dynamic bal-
ance supportable by the quality and geometry available from the PPLI message
sources is soon established.
The geometric rank tests of Figure 6.4 recognize that the value of a PPLI
observation is related not only to its position and time qualities but also to the
direction from which it was received. Several sources of equally high quality
all in approximately the same direction provide little more information than
one such source, but each will consume source selection buffer locations and
filter processing time and will crowd out observations of lesser quality but of
greater value by virtue of their directions. The rank test uses the orientation,
eccentricity, and semimajor axis of the bivariate error ellipse defined by the
horizontal position terms of the RelNav Kalman filter covariance matrix and
the quality-based variance and direction of a received observation to compute a
numerical rank that is stored with each observation. The rank is the approximate
variance of a hypothetical observation lying directly on the extended major axis
that would provide the same benefit as the received observation at its angle off-
axis. In this context, benefit implies the reduction in the major axis of the error
ellipse to be expected of Kalman filter processing of the received observation.
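
The ellipse quantities used by the rank test (and later by the filter characterization function) follow from an eigen-decomposition of the 2 x 2 horizontal-position covariance. The sketch below is an illustration only; the names and the orientation convention are assumptions.

import numpy as np

def horizontal_error_ellipse(cov2x2):
    """Semimajor and semiminor axes (1-sigma) and the orientation of the major
    axis, measured from the first coordinate axis, for a 2 x 2 position covariance."""
    P = np.asarray(cov2x2, dtype=float)
    eigvals, eigvecs = np.linalg.eigh(P)                    # eigenvalues in ascending order
    semiminor, semimajor = np.sqrt(np.maximum(eigvals, 0.0))
    major_axis = eigvecs[:, 1]                              # direction of largest eigenvalue
    orientation_deg = np.degrees(np.arctan2(major_axis[1], major_axis[0]))
    return semimajor, semiminor, orientation_deg

# Example: 100 m^2 and 25 m^2 variances with 30 m^2 cross-correlation
print(horizontal_error_ellipse([[100.0, 30.0], [30.0, 25.0]]))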

Kalman Filter The JTIDS RelNav Kalman filter is an extended Kalman filter
that estimates linear error states of the navigation process. Table 6.1 presents
a typical state vector for use with an inertial dead reckoner. Two of the state-
vector elements (7, 16) are time and frequency states used only by passive
users. For active users, these two states are carried in a separate synchronization
filter that uses RTT or ETR data. Five states (1, 2, 6, 14, 15) are relative grid
states. Of the remaining nine states, the two horizontal geodetic position error
terms are carried as quaternions, while the third dimension is carried as altitude

error and an altimeter instrument scale factor term. The inertial filter models
velocity errors in the north and west directions in the geodetic frame and the
three platform misalignment or tilt states in the local-level frame. A filter for use
with air-data computer/attitude and heading reference system (ADC/ AHRS)
inputs is also included and differs in that it models the dead-reckoner errors
as two wind components, an airspeed instrument scale factor, and azimuth bias
and azimuth gyro drift rate errors.
Figure 6.5 is a flow diagram of the Kalman filter function. Only the mea-
surement geometry, measurement innovation, measurement validity, and filter
characterization features that are peculiar to JTIDS RelNav will be discussed
here. See Chapter 3 for a general discussion of Kalman filters used in multi-
sensor navigation systems.
The source position and the own-unit position stored with it are subtracted
vectorially to obtain a predicted range and three-dimensional direction cosines.
The range error (measurement innovation) is obtained by subtracting the pre-
dicted range from the measured range. The measurement variance derived from
the source qualities and the direction cosines are supplied to the Kalman gain
function, and the range error is supplied to the state and covariance update
function.
The validity tests are intended to detect divergence of the filter and to pro-
vide protection against inconsistent observations. This is particularly impor-
tant to JTIDS RelNav, as compared to other radio-ranging systems because the
ranging sources are typically other, sometimes erroneous, navigating users. The
dilemma facing any validity test is whether the error lies with the local esti-
mate or with the input data. The source selection function makes this decision
more difficult by narrowing the group of sources to the few, typically three to
five, with the best announced qualities (whether true or false). An observation
is rejected if the measurement innovation (the computed range error) is large
compared to the receiving user's filter variances (3-sigma reasonableness test).
A series of observations exhibiting an average innovation exceeding 2-sigma
for this test triggers a proportional increase of the position terms of the filter
covariance matrix. Eventually, if the recurrence rate of these covariance expan-
sions exceeds a threshold, the process is abandoned and the filter is restarted.
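
A simplified scalar form of this measurement processing is sketched below: the innovation is formed from the measured and predicted ranges, gated by a 3-sigma reasonableness test, and then applied through a Kalman gain. The state ordering, names, and single-measurement structure are assumptions for illustration, not the terminal's actual filter code.

import numpy as np

def process_range_observation(x, P, own_pos, source_pos, measured_range, meas_var,
                              gate_sigma=3.0):
    """One scalar range update of a small position-error filter.

    x, P           : error-state vector and covariance (the first three states are
                     assumed here to be position errors, an illustrative convention)
    own_pos        : extrapolated own position at the time of arrival (m)
    source_pos     : position reported in the PPLI message (m)
    measured_range : range derived from the message TOA (m)
    meas_var       : variance derived from the source qualities (m^2)
    """
    x = np.asarray(x, dtype=float)
    P = np.asarray(P, dtype=float)
    los = np.asarray(source_pos, float) - np.asarray(own_pos, float)
    predicted_range = np.linalg.norm(los)
    h = np.zeros_like(x)
    h[:3] = -los / predicted_range                 # direction cosines into the H row
    innovation = measured_range - predicted_range  # range error (innovation)
    s = h @ P @ h + meas_var                       # predicted innovation variance
    if innovation ** 2 > (gate_sigma ** 2) * s:    # 3-sigma reasonableness test
        return x, P, False                         # reject an inconsistent observation
    k = (P @ h) / s                                # Kalman gain
    x = x + k * innovation
    P = P - np.outer(k, h @ P)
    return x, P, True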
The filter characterization function serves only to convert filter covariance
data to forms more convenient to the source selection and PPLI generation
functions. Position quality in the PPLI message is defined by the JTJDS mes-
sage standard as representing the semimajor axis of the horizontal position error
ellipse and the rank computation requires the semimajor and semiminor axes
and the orientation angle of the error ellipse. Each Kalman cycle, the filter char-
acterization module computes these terms and stores them for use during the
next cycle.

Navigation Processing Figure 6.6 is a diagram of the inertial navigation


system implemented within the JTIDS terminal. The particular model (north
slaved, wander azimuth, unipolar, or free azimuth) installed in a given terminal
Figure 6.5 Relative navigation Kalman filter flow diagram. (Time-tagged navigation data
and the selected observations (PPLI position and measurement TOA) from the source
selection function drive the geometry, innovation, and measurement validity computations;
error state and covariance propagation, Kalman gain computation, and the error state and
covariance update produce corrections to the navigation model; the filter characterization
module supplies qualities and error-ellipse data for PPLI message generation.)


Figure 6.6 JTIDS relative navigation inertial navigation model. (Velocity and altitude from
the host navigation system interface drive the misalignment angle, velocity, geodetic quater-
nion, direction cosine and position, and RelGrid updates; the Kalman filter supplies mis-
alignment, velocity, and quaternion and altitude corrections; geodetic and grid position and
velocity are passed to PPLI message generation and source selection, and position, velocity,
and tilt corrections are returned to the host.)



is selected to correspond with that of the host system. JTIDS initial position
is obtained from the host inertial system, but thereafter JTIDS uses only the
velocity and baro-inertial altitude inputs to compute delta-velocities for use in
the internal inertial mechanization. This model continuously applies the Rel-
Nav filter corrections to velocity and acceleration (via platform misalignment
calibration) to improve upon the solution provided by the host inertial system.
Corrections to position, velocity, and tilts are also returned to the host data
system.
As mentioned earlier, an air-data model is available for use in aircraft having
only an ADC/AHRS, but JTIDS RelNav has seldom been installed in such
aircraft. Its primary use is as a backup mode to continue RelNav operation and
PPLI transmissions should the host's inertial system fail and force a switch to
air data.

Integration with GPS JTIDS RelNav processes position fixes from an inter-
connected GPS receiver as two uncorrelated, one-dimensional Kalman updates
in the north and east directions. The variance data in the GPS input are used in
RelNav source selection and Kalman gain computations in the same manner as
the qualities in received PPLI messages. If GPS data are of high quality-that
is, if GPS GDOP and signal availability (Chapter 5) are within typical GPS sys-
tem performance criteria-the JTIDS terminal will tend to operate exclusively
on GPS data, and the transmitted PPLI messages will reflect GPS position accu-
racy. A few aircraft with interconnected JTIDS and GPS equipments can, via
JTIDS RelNav, extend the benefits of GPS-based position fixing to an entire
community of JTIDS users.

6.2. 7 Error Characteristics


JTIDS RelNav is subject to errors of the dead reckoner, errors in the positions
of the reference sources, errors of range measurement and RF propagation, and
the amplifying effect of GDOP.

Dead-Reckoner Errors The inertial filter explicitly models velocity and plat-
form misalignment errors, but higher-order terms are modeled only as dynamic
process noise. The air-data filter models wind and azimuth errors as Markov
processes. To the extent that actual dead-reckoner errors depart from these
assumptions, JTIDS RelNav will experience errors. As one example, winds aloft
typically vary with altitude so a RelNav user coupled with an ADC/AHRS can
be expected to exhibit temporarily increased errors following a substantial alti-
tude change.

Equipment Delay Errors Installation-specific data are used to compensate


transmission times and received message TOAs for equipment and cable delays
between the antenna and the signal processor. These delays are determined for
each installation and are subject to measurement error, but they are usually sig-

nificant only in ground or shipboard installations where the antenna may be


remote from the terminal.

Clock Synchronization Errors A typical JTIDS terminal clock runs at 80


MHz (clock quantization of 12.5 ns) and the JTIDS RF wave form uses a
phase-modulation chip of 200 ns (5-MHz chip rate). These parameters and sim-
ple double-oven crystal oscillators have proved able to provide consistent clock
synchronism of ±25 ns using RTT exchanges approximately every two minutes.
Clock synchronization errors have not been found to contribute appreciably to
RelNav error; however, oscillator stability under the extremes of temperature,
pressure, and acceleration of fighter aircraft is an important consideration as
has been demonstrated in several flight tests.
The clock error of a passive user is typically greater than that of an active
RTT user; however, it does not contribute independently to position error. It is
simply a fully correlated manifestation of the same source errors and GDOP.

Source Position Errors The best PPLI position quality (15) implies a site sur-
vey of better than 50 ft, 1-sigma. More important than accurate survey, however,
are the position qualities assigned at the ground reference sites. These must
reflect a conservative estimate of the position uncertainty. Optimistic position
qualities can result in user validity failures and recurrent filter resets leading to
instability rather than just increased position error.

RF Propagation Errors JTIDS RelNav applies an approximate compensation


for atmospheric index of refraction (speed of propagation) by calculating an
average index over the signal path of each selected PPLI message using the
altitudes of transmitter and receiver and a standard atmosphere lapse rate for
index. Nonstandard atmospheres introduce significant error only over the very
longest of paths between high-altitude aircraft.
The pseudorandom coding of the phase-modulated chips of the JTIDS signal
wave form conveys strong resistance to multipath ranging errors. The digital
matched filter of the preamble detector will ignore as noise a delayed replica
signal arriving more than 200 or 300 ns after the direct signal. Delays of less
than 200 ns (chip overlap) cause slight broadening and delay of the peak of the
preamble correlator output.

Geometric Dilution of Precision (GDOP) GDOP is a source of error in all


multilateration and hyperbolation systems (see Chapters 4 and 5) but is particu-
larly important to JTIDS RelNav. JTIDS RelNav GDOP is much more variable,
both geographically and temporally than, for example, that of GPS or Loran-
C. It may range from unity to over one hundred within the service area of
a JTIDS community. JTIDS ground reference sites are typically major com-
mand and control centers or airfields whose locations result more from tacti-
cal, political, and geographic considerations than from an intent to support the
JTIDS RelNav function. Also, local GDOP is a function of user altitude as the

community of ground references within line-of-sight changes and as transient


airborne sources temporarily contribute to filling holes in the basic GDOP con-
tours provided by the reference sites. Furthermore, the GDOP at a given user
position is determined by the geometry to only those few sources selected by
the quality, rank, and validity tests of the source selection logic, and, despite
their intended purpose, these tests may or may not always choose the geomet-
rically optimum set of sources. Thus it is impractical to describe a generally
meaningful GDOP contour map as can be done for other less dynamic systems.
In many instances, however, GDOP is the dominant error contributor and must
be explicitly included in JTIDS RelNav performance analysis.

6.2.8 System Accuracy


The JTIDS system specification defines RelNav performance tests that compare
the measured position errors to a criterion called the available position quality
(APQ). Short track segments are selected, and, from all the received PPLI mes-
sages recorded by the test unit during that segment, the best a posteriori solution
and its error ellipse (the APQ) are computed for each point along the segment
at which the test unit transmitted a PPLI message. The deviations of the test
unit's computed positions from the true positions, as measured by the test-range
tracking system, are compared to the available position quality as representing
the 1-sigma bound of the error under the immediate local conditions.
Results of flight tests by the military services have not yet been published in
the open literature; however, the accuracy can be quite closely predicted from
the system design parameters and conservative estimates of the error sources.
Results of several computer simulations have been reported [1, 9, 13, 14], and
indicate that JTIDS RelNav can, with high confidence, be expected to achieve
airborne user position accuracies of 100 to 300 ft over a range of reasonably
assumed error budgets, flight scenarios, and GDOP.

6.3 POSITION LOCATION REPORTING SYSTEM

6.3.1 General Principles


The position location reporting system (PLRS) and its derivative systems pro-
vide centralized position location and reporting and data communications for
communities of hundreds of cooperating users in a tactical environment. Time-
of-arrival (TOA) measurements between units in a community, aided by baro-
metric pressure measurements, are processed at a central site to establish posi-
tion tracking of a large number of users. The positions of a few participants are
used as grid references. At the central site, both ranges and clock offsets are
derived from mutual pairs ofTOA measurements (Section 6.3.5). With the clock
offsets established, additional ranges are derived from one-way TOA measure-
ments. The positions of users are then tracked, using adaptive predictor-correc-
tor filtering. All positions are available to the cooperating users and to command
centers.
PLRS also provides short message data exchange for both manual and auto-
mated users. All control, measurement reporting, and data exchange are crypto-
graphically secured in a synchronous, anti-jam communications network. Mas-
ter stations (MSs) establish control circuits between radio sets (RSs) and the
MSs via a control network. This control network supports the position location,
navigation aid, and friendly unit identification. From a message flow standpoint,
the control network provides an "order-wire" capability that can be used for
data exchange between users. In addition to the order wire, the control network
also supports user access to a wide range of position location, navigation, and
identification information.
The system has a range of capabilities which support the conduct of coordi-
nated military operations. For the individual tactical user on foot, in a surface
vehicle, or in an aircraft, the system determines and displays to him his accurate
position in real time. It alerts him if he enters a restricted area. It also provides
the user with guidance to predesignated points, to other users, or along corri-
dors in accordance with requests, as well as providing a free text data exchange
capability.
For a tactical commander, the system provides the identification, location,
and movement of all cooperating users within an assigned area of responsibility.
In addition to allowing the commander to monitor the movement of forces,
the system also has the ability to input and modify control measures such as
coordination points, safe corridors, and restricted zones.
For all participants, the system (which operates beyond the line of sight
via integral relays) incorporates electronic counter-counter measures (ECCM)
and provides cryptographically secure digital data communications. Each user
has the capability of sending preassigned short messages to provide data to or
request information from the system and to exchange short free text messages.
A single synchronous community can support over 900 users with a varied
distribution of manpack, surface vehicle, and airborne platforms. System per-
formance is provided within the primary ground operating area, and airborne
users can be located and tracked within a 300-km square extended operating
area. It can interoperate with other communities in adjacent or overlapping geo-
graphical areas. It operates in the UHF band at frequencies from 420 to 450
MHz.

6.3.2 Major System Elements


PLRS employs two categories of hardware: master station (MS) and radio sets
(RS). The MS provides centralized network management as well as automatic
processing and reporting of position, navigation, and identification information
for each participating RS. RSs, which are individually identifiable to the MS,
perform reception, transmission (including relay), range measurement, and var-

ious signal-processing and message-processing functions necessary for position


location and communication operations within the system.
PLRS is usually deployed with two or more identically equipped MSs. The
MSs monitor each other's operation, cooperate in the network position location
and communications functions, and assume adjacent community control either
by planned action, directed by the MS operators, or automatically upon an MS
failure.
The identification and position determination of RSs by PLRS is fully auto-
matic. When the RS operator turns on the equipment, it automatically becomes
and remains a member of the PLRS network. However, to permit the RS oper-
ator to provide data and requests to the MS and to receive and display informa-
tion from the MS, a separate user input/output (I/O) device is employed with
each RS. For manpack and surface vehicular RSs, this I/O device is a small
hand-held device called the user readout (URO) module. For airborne RSs the
I/O device is a pilot control display panel (PCDP). The PCDP provides a larger,
brighter display and an interface to a bearing indicator so that the pilot can "fly
the needle" in response to automatic bearing updates from the MS.

6.3.3 Control Network Structure


Tactical deployments require operations beyond line of sight (LOS) from the
MS. The approach taken in PLRS to satisfy this non-LOS requirement is to use
relays. In most deployments one and sometimes two or three relay levels are
needed to establish a path between a remote RS and a MS. To satisfy this need,
an integral relay capability is built into every RS. Any RS can be automatically
utilized to maintain contact with any other RS. This reduces the need for ded-
icated relays and improves speed of adaptation to changing deployments. To
maintain communications and provide organized reporting of data for position
location calculation, a concept of control network organization, called a PORT
structure, is used. This is a communications structure (Figure 6.7) consisting of

,--' ' _--


....
...,.. .....;
;
_..........
' ,,,'"

TOA LINK
PORT LINK ••
Figure 6.7
NODE (RADIO SET)
GRID REFERENCE NODE

PLRS network structure.


MASTER
STATION

a set of PORT links that connects RSs (nodes) to the MS either directly or
via one to three relay nodes. Network control and measurement reporting is
transferred over the PORT path. In addition to the bilateral PORT links, one-way
TOA links are utilized to provide the additional multilateration structure needed
for position location and tracking. Since the timing of RS clocks is established
using paired TOA data along the PORT paths (Section 6.3.5), one-way TOAs
can be converted to true range estimates. In a typical PLRS deployment over
half of the range measurements are based on one-way TOAs.
To initialize the position location function and to maintain a relationship
between the internal coordinates and the external military grid reference system
(MGRS) coordinates, the MGRS positions of three or more cooperating RSs are
input to PLRS. These are normally input as three-dimensional fixed reference
positions, and the RSs then become grid reference nodes. The MS may be, but
is not necessarily, one of the grid reference nodes. In addition the system can
operate without any fixed reference RSs as long as the positions of three or
more RSs are regularly input to the position tracking function. In this latter
case the positions may be input and updated by RSs which are moving. This
is termed a dynamic baseline operation. External position sources such as the
global positioning system (GPS) (Chapter 5) can be used to provide the position
reference information to PLRS, but external data sources require the appropriate
coordinate conversion from the respective geoids and datums to MGRS.

6.3.4 Waveform Architecture


PLRS performs its functions in the face of either deliberate or accidental inter-
ference. One of the design characteristics that makes this possible is the use
of a spread spectrum type signaling wave form. Specifically, the information
transmitted is spread to a bandwidth of approximately 3 MHz by modulation
with a pseudonoise (PN) code sequence. The chip width of the PN code is 200
nsec. Each time a RS or a MS transmits a burst, that signal burst is spread by
the code. The spread spectrum signaling format provides a low-density signal
spectrum that reduces detection by would-be interceptors and offers minimum
interference to other co-channel users. In addition, the effect of this modula-
tion is to encode the signal and thus help protect it from those who might try
to extract information if the signal is detected. The burst of signal that is sent
by a PLRS RS consists of two portions: a preamble portion and a message por-
tion (Figure 6.8). The receiver examines the preamble, using a digital matched
filter that accepts or rejects the signal on the basis of the degree of correlation
between the pseudonoise code received and that expected. If the preamble is
accepted, then the message, which consists of addresses, commands, measure-
ments, queries, and/or replies, can be decoded.
All of the burst transmissions appear to be identical. That is, whether the
particular transmission being sent is a reply to a query, a request for information,
or whatever, it has the same bandwidth and burst duration, and it cannot be
distinguished from any other transmission without having the proper receiver

Figure 6.8 PLRS timeslot signal structure. (A preamble portion is followed by a data
portion of 182 symbols, including 94 information bits and 10 parity bits.)

and crypto key. Also, ranging may be accomplished with any of the bursts sent,
no matter what their purpose, as far as the message portion of the transmission
is concerned.
PLRS is a synchronous time division multiple access (TDMA) system, which
also employs frequency division multiple access and a spread spectrum wave
form (Figure 6.9). PLRS employs a network that is fully synchronous in three
respects: all RSs maintain timing such that cryptographic resynchronization is
seldom required; all RSs perform actions in a programmed cyclic manner such
that reprogramming RS assignments for relay, ranging, and reporting is seldom
required; and each RS's time base is maintained with sufficient accuracy such
that one-way time-of-arrival measurements can be translated to ranges by the
MS. Each of these aspects reduces the number of required control transmissions
and makes time available for other system functions or for increased system
capacity.
PLRS employs time division multiplexing to permit a large number of users
to utilize the same frequency. Each of the RSs in a network takes turns trans-
mitting its burst while other RSs listen. These timeslots are assigned by the
MS, based on the particular requirements of each user. For example, for a given

Figure 6.9 PLRS time division multiple access (TDMA) organization. (One 64-second
epoch = 256 frames; each active RS cycles at least once per epoch. One 0.25-second frame =
128 timeslots; each active RS cycles at most once per frame. One timeslot = 2 msec, com-
prising an 800-μsec transmission burst and 600 μsec of propagation time.)

tracking accuracy, an aircraft-mounted RS needs more timeslots than a manpack


RS because of its higher dynamics.
In the synchronous TDMA approach, every RS has its own time-base gen-
erator that keeps track of the time that it should receive, transmit, or perform
other programmed operations. Once this time base is synchronized with the
network time base, then messages can be sent or received and range measure-
ments made. The MS's time base normally acts as the prime timing source for
the RSs under its control, and the MS corrects each of the RS clocks whenever
they require it. In this way, the timing oscillators included in the RSs need to
have only moderate stability.
The network utilizes the resources of time, frequency, and code to multi-
plex the many operations necessary for system operation. The structured use
of time allows a convenient and efficient method for gathering time-of-arrival
data and for managing the multiple relay levels within the network. The use of
the frequency resource provides additional anti-jam protection and allows for
noninterfering, coordinated operation to increase system data capacity.

The Time Resource Each RS is commanded by the MS to perform transmis-


sions and receptions at specific times. The time division structure simultane-
ously accommodates both minimum and maximum network access rate require-
ments. For position location and control, these vary from manpack RSs requir-
ing update rates approximately once every minute to high-performance fixed-
wing aircraft with desired updates 30 times per minute.

Timeslot The fundamental time division is the timeslot. The timeslot length is
1.95 msec. The burst transmission accounts for 800 μsec, and 600 μsec is allo-
cated to RF propagation delay. The remaining time is required for processing
overhead such as message encoding, validation, and guard time.

The Frequency Resource Each transmission occurs on a particular frequency


channel. When operating in the hop mode, each channel is pseudorandomly
hopped across the 420- to 450-MHz band to provide the network with additional
anti-jam performance. There are eight frequency channels in the PLRS, thereby
increasing the capacity of the network by allowing simultaneous transmissions
to occur with a minimum of mutual interference.

The Code Resource In addition to the time and frequency separation, PLRS
uses a pseudonoise code resource. These codes provide a different spread spec-
trum pattern for each transmission in the network, thereby eliminating cross talk
and reducing interference.

6.3.5 Measurements
TOA Measurements One-way ranging is made possible by the fully syn-
chronous nature of the PLRS network. Each RS employs a set of time markers

Figure 6.10 PLRS synchronous ranging. (The transmission burst lies between timeslot
markers; the receiving RS measures the delay from the end of the reception to the marker
at the end of the timeslot, which reflects the range (propagation) delay.)

to designate when a transmission is to start and when a reception must be com-


pleted. Thus a RS needs only to measure the time delay from the end of the
reception to a time marker at the end of the timeslot. This yields a digital num-
ber (TOA measurement) precisely related to the range between the two RSs
(Figure 6.1 0). These TOA measurements are compensated for local equipment
delays and sent to the controlling MS.
Spread spectrum signals are especially amenable to range estimation because
of the high chip rate codes employed in their modulation and because those
code sequences have excellent correlation functions. Over the entire length of
a code sequence, there is only one point at which a code will correlate with
itself, and that point is only about one chip increment in length. Standard TOA
measurement methods are used for spread spectrum systems that allow them to
resolve time to a small fraction of a chip.
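As a rough illustration of how a synchronous TOA measurement maps to range, the Python sketch below converts a measured delay to distance. The end-of-burst marker convention and the single lumped equipment-delay term are simplifying assumptions, not the actual PLRS signal processing.

# Sketch: converting a synchronous one-way TOA measurement to range, assuming
# perfect clock synchronization and a single lumped equipment-delay calibration.
C = 299_792_458.0  # speed of light, m/s

def one_way_range(toa_to_marker_s, zero_range_toa_s, equipment_delay_s=0.0):
    """Range implied by the time measured from end of reception to the timeslot marker.

    zero_range_toa_s is the value that measurement would take at zero range
    (a hypothetical calibration constant used here only for illustration).
    """
    propagation_s = zero_range_toa_s - toa_to_marker_s - equipment_delay_s
    return C * propagation_s

# A burst arriving 2 usec later than it would at zero range implies ~600 m.
print(one_way_range(toa_to_marker_s=598e-6, zero_range_toa_s=600e-6))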

Time Offset and Range Measurements To provide time synchronization, the system performs time-difference measurements. When one RS transmits, a second RS measures the TOA (Figure 6.11). The RSs report their TOA measure-
ments to the MS. The MS uses these two TOA measurements to determine the
timing offset between the two RSs, and the MS commands the RS to correct
its clock timing, when required. All timing information is stored in the MS's
computer.
The MS also uses these pairs of TOA measurements to estimate the range
between the two RSs. These two-way range estimates are more accurate than
one-way range estimates, since the clock offset errors are eliminated.
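The cancellation of clock offset can be seen in a simple model. In the sketch below, each RS's measured TOA equals the true propagation time plus or minus the clock offset between the two units; averaging the pair recovers range, and half the difference recovers the timing offset. The sign convention and function names are illustrative assumptions, not the MS's algorithm.

# Idealized model of pairwise TOA processing (not the fielded MS algorithm).
# With tau the true propagation time and dt the amount RS#2's clock leads RS#1's:
#   TOA of RS#2's burst measured at RS#1:  toa_12 = tau + dt
#   TOA of RS#1's burst measured at RS#2:  toa_21 = tau - dt
C = 299_792_458.0  # m/s

def two_way_estimates(toa_12, toa_21):
    tau_est = 0.5 * (toa_12 + toa_21)   # clock offset cancels in the sum
    dt_est = 0.5 * (toa_12 - toa_21)    # timing offset recovered from the difference
    return C * tau_est, dt_est

# Example: ~3 km separation (tau = 10.007 usec), RS#2 clock 50 nsec ahead.
rng, dt = two_way_estimates(10.007e-6 + 50e-9, 10.007e-6 - 50e-9)
print(f"range = {rng:.1f} m, clock offset = {dt * 1e9:.0f} nsec")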

Altitude Measurements Each PLRS RS contains a barometric transducer that measures air pressure (Chapter 8). These air pressure measurements are sent to
the MS, along with the TOA measurements. The MS then converts these pres-
sure measurements to altitude estimates. The use of relative barometric pressure
to aid vertical location is especially useful when the line-of-sight range vectors
have small vertical components.
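For illustration only, the sketch below converts a pressure reading to pressure altitude with the ICAO standard atmosphere (troposphere only). The MS's actual processing (Chapter 8 and the MSL filter of Section 6.3.7) calibrates nonreference transducers against reference RSs at known altitudes rather than relying on absolute pressure alone.

# Sketch: pressure to pressure-altitude with the ICAO standard atmosphere
# (troposphere only); the MS additionally applies the MSL-filter calibration.
P0 = 101325.0     # sea-level standard pressure, Pa
T0 = 288.15       # sea-level standard temperature, K
LAPSE = 0.0065    # temperature lapse rate, K/m
G0 = 9.80665      # standard gravity, m/s^2
R_AIR = 287.053   # gas constant for air, J/(kg K)

def pressure_altitude_m(p_pa):
    return (T0 / LAPSE) * (1.0 - (p_pa / P0) ** (R_AIR * LAPSE / G0))

print(f"{pressure_altitude_m(89875.0):.0f} m")   # roughly 1000 m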

p"
-RS#1

I
I~ TOA2 .I RS#2 F"~/
LEGEND
~T= TOA2-TOA1
2 ~TRANSMISSION ~I - TIMING OFFSET

RANGE = 1 k - TOA 2;TOA 1 ] c=J RECEPTION B - DURATION OF BURST

P- PROPAGATION TIME

Figure 6.11 Time difference and range measurement technique.

6.3.6 Position Location and Tracking

The initial position location within PLRS is based on the use of three ranges
and an altitude to unambiguously locate a new RS in three dimensions (Figure
6.12). The altitude, based on a barometric measurement, establishes the RS on
a horizontal surface; two ranges, based on TOA measurements, are then used to
establish a pair of points on that surface; and the third range is used to resolve
the ambiguity. Velocity is then established using a sliding three-point method
for filter initialization. All of the position locations within PLRS are established
and tracked at the MS based on measurements taken by the RSs. These data
are provided to the MS via user measurement reports. Each user measurement report message may contain up to three TOA measurements and a barometric measurement.

Figure 6.12 Position location by multilateration in PLRS. [Figure: ranges 1-4, 2-4, and 3-4 from RS #1, RS #2, and RS #3 at known positions, combined with the barometric elevation of RS #4, determine the position of RS #4; the third range resolves the two-point ambiguity.]
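To make the three-range-plus-altitude geometry concrete, the sketch below solves for a position with a small Gauss-Newton iteration, holding the vertical coordinate at the barometric altitude. The anchor coordinates, initial guess, and least-squares formulation are illustrative assumptions, not the PLRS initialization algorithm.

# Sketch: initial position fix from three ranges and a barometric altitude,
# using an illustrative Gauss-Newton least-squares iteration.
import numpy as np

def fix_position(anchors, ranges, altitude, guess=(1000.0, 1000.0)):
    """anchors: (3, 3) known RS positions; ranges: (3,) measured ranges, meters."""
    x = np.array(guess, dtype=float)            # horizontal unknowns; z fixed by altitude
    for _ in range(10):
        p = np.array([x[0], x[1], altitude])
        deltas = p - anchors                    # vectors from each anchor to the estimate
        predicted = np.linalg.norm(deltas, axis=1)
        H = deltas[:, :2] / predicted[:, None]  # Jacobian of range w.r.t. horizontal position
        dx, *_ = np.linalg.lstsq(H, ranges - predicted, rcond=None)
        x += dx
    return np.array([x[0], x[1], altitude])

anchors = np.array([[0.0, 0.0, 30.0], [5000.0, 0.0, 45.0], [0.0, 5000.0, 20.0]])
truth = np.array([3200.0, 2700.0, 150.0])
ranges = np.linalg.norm(anchors - truth, axis=1)
print(fix_position(anchors, ranges, altitude=150.0))   # ~[3200, 2700, 150]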
Choice of an algorithm for position location update is strongly influenced
by the RS dynamics and by the multi-level relay requirements. The majority of
RSs are typically not in line of sight (LOS) to the MS and in a large network it
is unlikely that most RSs have LOS to any known reference RSs. Thus a typical
RS must be located by using measurements from other RSs which are them-
selves being tracked. Since all position measurements cannot be made simulta-
neously, it is necessary to extrapolate the position of one (or both) RSs cooper-
ating in a range measurement. A portion of a position correction is ascribed to
each RS involved. The amount of correction applied is a function of the track
uncertainty of each RS along the line of path between the two RSs.
Because of the availability of multiple links and their usefulness in tracking other RSs, a single RS may have ten or more TOA measurements, taken at various times, utilized in a single position report period. To maintain a
simple algorithm that adapts to the variable data base and minimizes computer
memory requirements, each TOA measurement is processed as it is received
at the MS. This provides a sequence of partial updates and makes the tracking
algorithm relatively independent of which RS is the primary beneficiary of a
given TOA.

6.3.7 Tracking Filter


There are four adaptive predictor-corrector filters used in PLRS. All of these
filters are simplified versions of the discrete Kalman filter (Chapter 3) and are
implemented in the software of the MSs. Together the filters take the raw mea-
surement data from each RS's measurement report message, consisting of TOA
and barometric pressure transducer values, and convert them to updated esti-
mates of the user's three-dimensional position and velocity. Figure 6.13 depicts
the interconnectivity of the four filters. All filters take one piece of input data
at a time. The mean sea level (MSL) filter's purpose is to furnish an offset cal-
ibration for all the nonreference RS barometric pressure transducers by using
reference RS barometric pressure transducer data as input. Reference RSs are
at known altitudes. The output of the MSL filter is used as a constant by the
altitude filter. The purpose of the altitude filter is to obtain vertical position
and vertical velocity estimates based on barometric pressure transducer input
when there is little or no vertical information in the TOA data (because of unfa-
vorable geometry between users). The altitude filter output is also required to
aid the position initialization algorithm in initialization of the track review and
correction estimation (TRACE) filter. The position initialization algorithm pro-
vides checking of entries, initial track acquisition, ambiguity resolution, and
initial position and initial velocity estimation for the TRACE filter. The cen-
tral logic oscillator control (CLOC) filter is used to obtain estimates of each
RS's clock offset and drift rate with respect to the MS's clock. CLOC pro-
vides, as required, commanded corrections to each RS's clock offset and/or frequency to keep all RSs nominally synchronized with the MS, and thereby with one another.

Figure 6.13 Tracking filters interconnectivity and data sources. [Figure: reference-RS barometric pressure measurements and altitude references feed the barometric mean sea level (MSL) filter; nonreference-RS pressure measurements feed the altitude filter, which outputs updated altitude and vertical-velocity estimates; single or paired TOA measurements and fixed position references feed the TRACE filter, which outputs updated 3-D position and velocity estimates for both RSs; pairs of TOA measurements and the master station clock feed the CLOC filter, which outputs each RS's clock offset and drift rate relative to the MS.]
Finally, the purpose of the TRACE filter is to take each TOA measurement
and partially update the two cooperating RS's position and velocity estimates in
three-dimensional space. Without the supporting processing from the other three
filters, TRACE would not be able to operate successfully. Link value account-
ing is one of the unique byproducts of the TRACE processing. If a particular
TOA link assignment is not aiding the position location accuracy (due to geom-
etry or excess TOAs in a particular direction) of either RS then that TOA link
assignment is replaced by one that may be more beneficial to overall system
accuracy.
These filters provide the partially updated positions necessary to permit the MS to report a fully updated position about half a second after MS receipt of each user's measurement report.
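To make the partial-update idea concrete, the sketch below processes one range-equivalent TOA at a time and splits the correction between the two cooperating RSs in proportion to their track uncertainties along the line joining them. It is a heavily simplified stand-in for the TRACE filter, not its actual mechanization; the variance bookkeeping in particular is only a crude illustration.

# Heavily simplified stand-in for TRACE-style partial updates: each range
# measurement nudges both cooperating RS position estimates along the line
# between them, weighted by each track's uncertainty (illustration only).
import numpy as np

def partial_update(p1, var1, p2, var2, measured_range, meas_var=25.0):
    u = (p2 - p1) / np.linalg.norm(p2 - p1)        # unit vector from RS#1 toward RS#2
    residual = measured_range - np.linalg.norm(p2 - p1)
    total = var1 + var2 + meas_var
    p1_new = p1 - (var1 / total) * residual * u    # less certain track absorbs more correction
    p2_new = p2 + (var2 / total) * residual * u
    var1_new = var1 * (1.0 - var1 / total)         # crude, illustrative covariance shrink
    var2_new = var2 * (1.0 - var2 / total)
    return p1_new, var1_new, p2_new, var2_new

p1, var1 = np.array([0.0, 0.0, 0.0]), 100.0        # well-tracked RS
p2, var2 = np.array([4000.0, 0.0, 0.0]), 900.0     # poorly tracked RS
print(partial_update(p1, var1, p2, var2, measured_range=4030.0))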

6.3.8 Network and Traffic Management


The MS is a processing facility providing centralized technical control and mon-
itoring of a community of PLRS users. The MS performs dynamic network
management of all RSs under its control. The MS is the central control point
of the PLRS network, allowing a technical control operator to maintain a real
time overview of the network within his area of responsibility. The RSs accept
and implement MS-issued commands and report status and communicant data
(i.e., information on mutual RS connectivity). These reports are used to support
automatic adaptive routing. The MS operator, as the communications technical
controller, is responsible for network initialization and monitoring community

performance, using a graphic display. Normal operation requires only minimal operator intervention.

Control Network Traffic Four types of messages are transmitted over the con-
trol network: user data input, user data output, network control, and measure-
ment report messages. All messages are sent to their appropriate destination,
through relays as necessary, without data content change. Error control is used
in order to ensure a low (< 10⁻⁵) message error rate. User data input messages originate at any RS via an input/output (I/O) device. An I/O device may also be used to originate queries requiring user data output messages to be sent back to the RS. This establishes two-way communication between any RS and the MS. Two-way communications between any pair of RSs can also be established using these message types.
Network control messages and measurement report messages are used by the
MSs to maintain communication with, and exercise control over, each RS. The
network control messages contain commands to the RS (e.g., link assignments
and timing correction commands). Measurement report messages contain TOA
and altitude measurements and status information. The actual routing of mes-
sages to the proper destination is implicit in the network structure. The MS,
with its knowledge of the connectivity, assigns transmit and receive times to
provide proper relaying of messages to their destinations. RSs performing a
relay function make no distinction between different message types.

User-to-User Traffic The basic PLRS provides for user-to-user data exchange
using two distinct approaches. The first is by way of the MS, using the data
input and data output messages mentioned above. This approach can be used
by any RS operator to send short alerting or coordination messages to any other
user in the community. These messages are stored and forwarded by the MS(s).
The other approach provides local groups of up to eight users the ability to send
short messages to each other, without going through the MSs.

6.3.9 System Capacity and Accuracy


Each MS contains a data base which defines navigation aids and user parameters
for the entire multiple MS community. This data base contains the identifica-
tion, configuration, data access authorization, position, and current status for
each participating RS. Up to 900 RSs can reside in the data base. A nominal
community consists of 125 to 250 RSs controlled by each MS, but an MS is
capable of controlling a maximum community of 460 RSs with some reduc-
tion in support rate and in positional accuracy. An MS can produce 50 position
updates per second, spread across the RSs under its control.
In deployments with static RS positions, minimal clutter from buildings and trees, and surveyed grid reference RS locations, the average RS radial position location error is about 5 meters. This ideal accuracy has been repeatedly demonstrated in system testing with communities of 150 RSs. For most deployments, the average radial error is 5 to 15 meters for static ground-based users, 10 to 30 meters for mobile ground-based users, and 15 to 50 meters for airborne users.

6.3.10 PLRS User Equipment Characteristics


The PLRS user equipment is the radio set (RS). Each RS consists of a receiver-
transmitter, a user readout device, a power source, and an antenna. The user
readout device serves as a control panel for the RS, and for limited data
exchange. The RS generates and processes PLRS messages. Centralized control
of the RS by a microprocessor within the message processor supports partition-
ing into the RS functions shown in Figure 6.14.
The RF/IF function performs frequency conversion, amplification, and filtering of the transmitted and received signals. During receive, this function performs an adaptive A/D conversion of the incoming signals. During transmit, the digital output of the signal processor is used to generate a continuous phase shift modulation (CPSM). The transmitted power of the RS is over 100 W. The signal processor function performs preamble detection/generation,
interleaving/deinterleaving, error correction/error detection encoding/decoding,
pseudonoise code generation, data correlation, and time-of-arrival measure-
ment. The secure data unit performs encrypting/decrypting of transmitted and
received data, message validation, and provides outputs for transmission secu-
rity. The message processor contains a microprocessor that is the central con-
troller of the RS. It controls all processes done within the RS. Additionally, the
message processor generates and decodes link messages and provides the data
interface format for the user readout.

Figure 6.14 PLRS radio set functions. [Figure: RS functional block diagram with AC or DC power and an air pressure input; the functions shown are the RF/IF, signal processor, secure data unit, and message processor described in the text.]

6.3.11 System Enhancements

While several derivatives of PLRS have been developed, the most advanced
functional extension is the enhanced position location reporting system
(EPLRS). EPLRS maintains all of the basic PLRS capabilities while greatly
increasing the user-to-user data communications capability. The EPLRS utilizes
the PLRS control network for monitoring and controlling large communities
of user RSs. In addition to the positioning, position-reporting, navigation aid,
cryptographic key distribution, and status-reporting functions, the control net-
work is also utilized for distributing communications circuit assignments and
monitoring user-to-user communications performance. The EPLRS RSs support
communication network management by implementing the commands, moni-
toring and reporting circuit status, establishing new circuit paths, and controlling
the flow of data packets into and out of the network.
Both duplex (point to point) and group-addressed (broadcast) types of service
are available via the same user RS. Each RS can support up to 30 user circuits
(needlines) simultaneously with a composite (receive plus transmit) information
rate of 4 kbps. Each duplex circuit is capable of supporting acknowledged data
rates of up to 640 bps in each direction. Each group addressed circuit is capable
of supporting nonacknowledged data rates up to 1280 bps. The primary user
interface is via the Army Data Distribution System Interface (ADDSI), which
uses permanent virtual circuit protocols based on CCITT X.25. A 1553B data
bus interface is also used for compatibility with existing aircraft and vehicular
systems.
The EPLRS concepts have been proven through live testing with 160 RSs.
In addition, extensive computer modeling and large-scale user community sim-
ulation have been used to confirm extension of performance to operations with
over 500 user RSs. EPLRS is interoperable with the basic PLRS, allowing for
mutual support and coordinated operations between basic PLRS- and EPLRS-
equipped users.

6.4 FUTURE TRENDS

In most JTIDS-equipped aircraft, there are similarities and redundancies in the computations performed separately by JTIDS RelNav, GPS, and INS units. Cou-
pled with the continuing miniaturization and cost reduction of digital process-
ing, this suggests the future development of fully integrated navigation systems
embodying all the functions of JTIDS, GPS, INS, and the air-data computer in
one unit, occupying less space and consuming less power than the individual
separate units, and also providing the optimum combination of the measure-
ments from these multiple sensors (Chapter 3).
Miniaturization and lower equipment cost will also lead to small, expend-
able, receive-only JTIDS units for use in such vehicles as cruise missiles and
unmanned reconnaissance aircraft. These units will perform JTIDS RelNav
functions for midcourse self-positioning, in addition to any midcourse correc-
tion signals from control centers.
In 1996, an intensive international effort, called MIDS, was underway toward
further miniaturization and modularization of JTIDS terminals which, in turn,
will lead to a trend of much wider implementation of JTIDS and its relative

navigation function on military aircraft worldwide, in view of the low weight and size of these JTIDS terminals.
In the PLRS system, there is a strong trend toward distributing the position
and range/bearing calculations from the central processors to the individual user
units. This trend is driven by the decreasing cost of digital processing and by the worldwide availability of GPS signals, along with low-cost GPS receiver
modules. Using Kalman filtering techniques, the user units could then combine
the GPS-based positions with the PLRS TOA range-based positions to improve
both the accuracy and consistency of the positioning and navigation functions
for the user.

PROBLEMS

6.1. A JTIDS RelNav RTT exchange indicates an accumulated clock error of +60 nsec (ahead) since the last clock update two minutes earlier. If the
terminal clock oscillator operates at 80 MHz, what is its frequency error
relative to the oscillator of the RTT source? Is the frequency high or low?
Ans.: 0.04 Hz high.

6.2. The PLRS time of arrival signal processing splits a PN chip into 16 equal
parts. What is the precision of a one-way range measurement in meters?
Ans.: 3.7 meters.

6.3. If the clock offset error between two PLRS RSs involved in a range mea-
surement is 15 nsec, what error in one way range measurement does this
cause?
Ans.: 4.5 meters.

6.4. In Problem 6.2, what error would result if this were a two-way range mea-
surement?
Ans.: 19 meters.

6.5. In Problem 6.3, what error would result if this were a two-way range mea-
surement?
Ans.: None.

6.6. In the military grid reference system used for reporting positions to a user readout in PLRS, a location is reported as 4 decimal digits of easting and 4 decimal digits of northing within a designated 100-km grid square. What is the maximum error introduced due to the precision of this report?
Ans.: 5√2 meters.
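The numerical answers above can be spot-checked with a few lines of arithmetic. The sketch below verifies Problems 6.1, 6.3, and 6.6 and back-solves Problem 6.2 under the assumption, not stated in this chapter excerpt, of a 5-Mchip/sec spreading code.

# Spot-checks of the problem answers (Problem 6.2 assumes a 5-Mchip/sec code,
# a value not given in this excerpt).
C = 299_792_458.0   # m/s

# Problem 6.1: 60 nsec accumulated over 2 minutes on an 80-MHz oscillator.
fractional_error = 60e-9 / 120.0
print("6.1:", fractional_error * 80e6, "Hz (high, since the clock ran ahead)")

# Problem 6.2: one chip split into 16 parts (assumed 5-Mchip/sec code).
chip_s = 1.0 / 5e6
print("6.2:", C * chip_s / 16.0, "m")

# Problem 6.3: 15-nsec clock offset in a one-way range measurement.
print("6.3:", C * 15e-9, "m")

# Problem 6.6: 4-digit easting/northing in a 100-km square gives 10-m resolution,
# so the worst case is 5 m in each axis, or 5*sqrt(2) radially.
print("6.6:", 5.0 * 2 ** 0.5, "m")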
7 Inertial Navigation

7.1 INTRODUCTION

Inertial navigation is a technique for determining a vehicle's position and velocity by measuring its acceleration and processing the acceleration information in
a computer. Compared with other methods of navigation, an inertial navigator
has the following advantages:

1. Its indications of position and velocity are instantaneous and continuous. High data rates and bandwidths are easily achieved.
2. It is completely self-contained, since it is based on measurements of accel-
eration and angular rate made within the vehicle itself. It is nonradiating
and nonjammable.
3. Navigation information (including azimuth) is obtainable at all latitudes
(including the polar regions), in all weather, without the need for ground
stations.
4. The inertial system provides outputs of position, ground speed, azimuth,
and vertical. It is the most accurate means of measuring azimuth and ver-
tical on a moving vehicle.

The disadvantages of inertial navigators are the following:

1. The position and velocity information degrades with time. This is true
whether the vehicle is moving or stationary.
2. The equipment is expensive ($50,000 to $120,000 for the airborne sys-
tems in 1996).
3. Initial alignment is necessary. Alignment is simple on a stationary vehicle
at moderate latitudes, but it degrades at latitudes greater than 75° and on
moving vehicles.
4. The accuracy of navigation information is somewhat dependent on vehicle
maneuvers.

The techniques of inertial navigation evolved from fire-control technology, the marine gyrocompass, and conventional aircraft instrumentation (Chapter 9) [9]. The earliest practical applications, and the heaviest expenditure of funds, were for ballistic-missile-guidance systems and for ship's inertial navigation systems (SINS). In the late 1950s, increased procurement of military aircraft led to the development of aircraft inertial navigators. Many of the disad-
vantages of inertial systems can be overcome through aiding with other sensors
such as GPS [54], radars, or star-trackers [29]. Chapter 3 discusses multisensor
navigation systems.
In 1996, inertial navigation systems were widely used in military vehicles. Many ships, submarines, guided missiles, space vehicles, and virtually all modern military aircraft are equipped with inertial navigation systems because they cannot be jammed or spoofed. Large commercial airliners routinely
make use of inertial systems for navigation and steering [60].

7.2 THE SYSTEM

In the earliest inertial navigation systems, gimballed platforms isolated the instruments from the angular motions of the vehicle. The gyroscopes acted as
null-sensors, driving gimbal servos that held the gyroscopes and accelerome-
ters at a fixed orientation relative to the Earth. This permitted the accelerometer
outputs to be integrated into velocity and position. In the late 1970s and early
1980s, the invention of large-dynamic-range gyroscopes and of more power-
ful airborne computers permitted the development of "strapdown" inertial sys-
tems in which the gyroscopes and accelerometers were mounted directly on
the vehicle. The gyroscopes track the rotation of the vehicle, and algorithms in
the computer (Section 7.4.1) transform accelerometer measurements from vehi-
cle coordinates to the navigation coordinates where they can be integrated. In
strapdown systems, the transformation generated by the computer performs the
angular-stabilization function of the gimbal set in a platform system. In effect,
the attitude integration algorithms permit the construction of an "analytic" plat-
form.
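A minimal illustration of the analytic platform follows: the body-to-navigation direction cosine matrix is integrated from gyroscope rates and then used to resolve accelerometer outputs into navigation axes. The sketch uses only a first-order attitude update and omits Earth rate, transport rate, coning and sculling corrections, and gravity modeling, all of which a real mechanization (Section 7.4.1) must include.

# Minimal "analytic platform" sketch: propagate the body-to-navigation direction
# cosine matrix from gyro rates, then resolve specific force into navigation axes.
# First-order update only; Earth rate, transport rate, coning/sculling, and gravity
# compensation are omitted (see Section 7.4.1 for the real algorithms).
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def strapdown_step(C_bn, gyro_rad_s, accel_body, dt):
    C_bn = C_bn @ (np.eye(3) + skew(gyro_rad_s) * dt)   # attitude integration
    u, _, vt = np.linalg.svd(C_bn)                      # re-orthogonalize the DCM
    C_bn = u @ vt
    f_nav = C_bn @ accel_body                           # specific force in nav axes
    return C_bn, f_nav

C_bn = np.eye(3)                                        # start aligned with the nav frame
gyro = np.array([0.0, 0.0, np.radians(3.0)])            # 3 deg/sec yaw rate
accel = np.array([1.0, 0.0, 0.0])                       # 1 m/s^2 along the body x-axis
for _ in range(100):                                    # one second at 100 Hz
    C_bn, f_nav = strapdown_step(C_bn, gyro, accel, 0.01)
print(f_nav)   # the body-x acceleration now appears partly along nav y (~3 deg of yaw)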
Figure 7.1 shows a block diagram of a terrestrial inertial navigator. A plat-
form (either gimballed or analytic) measures acceleration in a coordinate frame
that has a prescribed orientation relative to the Earth. Usually, the stabilized
coordinate frame is locally level (two horizontal axes, one vertical). The com-
puter, which may be the aircraft's central computer or a navigation computer,
calculates the aircraft's position and velocity from the outputs of the two hori-
zontal accelerometers. The computer also calculates gyroscope torquing signals
that maintain the platform in the desired orientation relative to the Earth. In a
strapdown system, the analytic platform is "torqued" computationally. A verti-
cal accelerometer is usually added in order to smooth the indication of altitude,
as measured by a barometric altimeter or air-data computer (Chapter 8). The
calculation of velocity from the accelerometer outputs is described in Section
7.5; the calculation of position from the velocities is described in Section 2.4.
In a platform system, the gimbal-isolated structure, on which the gyroscopes
and accelerometers are mounted, is called the stable element. The gimbals (Fig-
ure 7.2) allow the aircraft to rotate without disturbing the attitude of the stable element. The gimbal angles are measured by transducers, usually resolvers (Section 7.4.2), whose outputs indicate the aircraft's roll, pitch, and heading to the displays, auto-pilot, and sometimes to the computer. In strapdown systems, attitude angles are mathematically extracted from the analytic platform transformation matrix (Section 7.4.1).

Figure 7.1 Block diagram of an inertial navigator. [Figure: the platform (spatially stabilized or analytic) feeds horizontal acceleration channels to the inertial navigation computer, whose horizontal position and velocity outputs drive the course-line computer (Section 2.7) for range and bearing to the destination; the vertical acceleration channel is blended with barometric altitude in the baro-inertial computations; the gyros receive platform stabilization commands; gimbal angles (pickoffs), together with the platform wander angle if the platform is not north pointing, yield vehicle roll, pitch, and azimuth from north.]

Figure 7.2 Four-axis stable platform of an inertial navigator. [Figure: the stable element carries the gyroscopes and accelerometers; gyro error signals drive the azimuth, pitch, roll, and redundant (inner) roll gimbal servos (Section 7.4.2), and the gimbal-angle readouts provide the vehicle attitude angles.]
When the inertial system is turned on, it must be aligned so that the computer
knows the initial position and groundspeed of the vehicle and so that the plat-
form (gimballed or analytic) has the correct initial orientation relative to the
Earth. The platform is typically aligned in such a way that its accelerometer
input axes are horizontal, often with one of them pointed north. As the vehi-
cle accelerates, maneuvers, and cruises, the accelerometers measure changes in
velocity, and the computer faithfully records the position and velocity.
The inertial navigator also contains power supplies for the instruments, a
computer, often a battery to protect against power transients, and interfaces
to a display-and-control unit. The system may be packaged in one or more
modules. Typical gimballed systems in 1968 weighed 50 to 75 lb (excluding
cables), of which 20 lb were for the platform. Steady-state power consumption
was approximately 200 w. First-generation strapdown navigators (early 1980s)
weighed 40 to 50 lb and consumed l 00 to 150 w. In 1996, strapdown systems
weighed 20 to 30 lb and consumed approximately 30 w.

7.3 INSTRUMENTS

This section discusses the sensing instruments (gyroscopes and accelerometers) as they relate to stable platforms and strapdown systems.

7.3.1 Accelerometers
Purpose An accelerometer is a device that measures the force required to
accelerate a proof mass; thus, it measures the acceleration of the vehicle con-
taining the accelerometer. Figure 7.3 shows a black-box accelerometer whose
input axis is indicated. The instrument will supply an electrical output propor-
tional to (or some other determinate function of) the component along its input
axis of the inertial acceleration minus gravitation. If the instrument is mounted
in a vehicle whose inertial acceleration is a and if the vehicle travels in a New-
tonian gravitational field G (Section 2.2), then the force acting on the proof
mass mp is

$$\mathbf{F} = m_p\mathbf{a} = \mathbf{F}_R + m_p\mathbf{G} + \mathbf{F}_D$$
$$\frac{\mathbf{F}_R}{m_p} = \mathbf{a} - \mathbf{G} - \frac{\mathbf{F}_D}{m_p} = \mathbf{f} \qquad \text{(accelerometer output)} \tag{7.1}$$
mp mp

where F_R is the force exerted on the proof mass by the restoring spring or restoring amplifier, as shown in Figure 7.4, and F_D is the unwanted disturbing force caused by friction, hysteresis, mechanical damping, and the like. Thus, if the instrument is designed with negligible disturbing forces, the restoring force is a measure of (a - G) along the instrument's input axis. As explained in Section 7.5, accelerometers are used to calculate the vehicle's acceleration a; their outputs must be corrected for gravitation G in the computer.

Figure 7.3 Black-box diagram of an accelerometer. [Figure: the electrical output is proportional to the component of (a - G) along the instrument's input axis, where a is the inertial acceleration of the accelerometer and G is gravitation.]

Figure 7.4 Flexure-pivoted accelerometer. [Figure: a pendulum and torquer coil, supported on a flexure pivot (flat metal spring) adjacent to a permanent magnet, with a capacitive pickoff plate sensing displacement from null.]
If the accelerometer rests on a table, then a = 0 (neglecting the rotation of the Earth) and the unit measures -G. If the accelerometer is falling in a vacuum, then a = G, and the output is zero. If the instrument is being accelerated upward with an acceleration of 7 g, then a - G = 7 g - (-1 g), and the instrument reads 8 g (1 g is a unit of acceleration equal to approximately 32.2 ft/sec² = 981 cm/sec²; if an acceleration must be specified more exactly than 0.5%, it should be stated in fundamental units of length and time).
On the rotating Earth, a stationary accelerometer at a position R is accelerating centripetally at Ω × (Ω × R) in inertial space due to the Earth's rotation rate Ω. The accelerometer's output therefore measures Ω × (Ω × R) - G = -g, which is the ordinary definition of gravity, as discussed in Section 2.2. A stationary plumb bob on the Earth's surface points in the direction of g, not G [24].
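The sign conventions of these examples follow directly from Equation 7.1. The one-dimensional sketch below takes up as positive, so that G = -1 g along the input axis; the numbers reproduce the cases in the text.

# Reproducing the worked cases above from f = a - G on a single vertical axis
# (up positive, so gravitation is G = -1 g along the input axis).
G_ALONG_AXIS = -1.0   # g

def accelerometer_output_g(a_g):
    """Specific-force output f = a - G, in g, for inertial acceleration a (in g)."""
    return a_g - G_ALONG_AXIS

print(accelerometer_output_g(0.0))    # resting on a table:        +1 g (measures -G)
print(accelerometer_output_g(-1.0))   # falling freely (a = G):     0 g
print(accelerometer_output_g(7.0))    # accelerating upward at 7 g: 8 g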

Construction Several accelerometer designs are used in aircraft inertial navigators:

1. Pendulum, supported on flexure pivots, electrically restrained to null.
2. Micro-machined (silicon) accelerometer with electrostatic nulling.
3. Vibrating beam accelerometer whose frequency of vibration is a measure
of tensile force and hence acceleration.

The flexure-pivot accelerometer, shown schematically in Figure 7.4, is most commonly used in aircraft systems. The sensitive element consists of a pen-
dulum with a torquer coil and pickoff supported by a torsional spring or flex-
ure. The pickoff measures displacement of the pendulum from null and is often
mechanized with an optical sensor and shadow mask or with capacitors. The
torquer coils restore the pendulum to null, the torquer current being a mea-
sure of the restoring torque and, hence, of the acceleration. Mathematically, let
f = a - G. The torque T on the pendulum is

$$T = T_R - k\theta + mb\,f_1 - mb\,f_2\,\theta = I(\ddot{\theta} + \ddot{\phi}) \tag{7.2}$$

where
T_R is the residual torque applied to the pendulum by friction in the supports and connecting wires and by electrical forces, Newton-meters

θ is the pickoff angle, rad
k is the spring stiffness, mechanical and electrical, Newton-meters/rad
mb is the pendulosity, kg-meters
I is the moment of inertia of the pendulum about the pivot axis, kg-meters²
φ̈ is the angular acceleration of the case about the pivot axis, rad/sec²
f₁, f₂ are the two components of the linear acceleration of the case relative to inertial space, meters/sec²; f₁ is along the input axis

The damping is neglected for illustration. In the steady state

$$\theta = \frac{mb}{k}\left[\frac{f_1 + T_R/mb - I\ddot{\phi}/mb}{1 + mb\,f_2/k}\right] \tag{7.3}$$

If the stiffness k is high enough, θ is small, and the instrument measures only f₁, independent of the presence of a cross-axis acceleration f₂. Sensitivity to f₂ is called cross-coupling and is most serious in a vibration environment when θ and f₂ oscillate in phase and rectify. This rectification is often referred to as vibropendulous error. The term T_R/k is the angle offset due to the presence of an unwanted torque on the pendulum; it causes an accelerometer bias. Iφ̈/k is an angular offset of the pendulum due to angular acceleration of the case around the pivot axis; φ̈ is negligible when the accelerometer is mounted on a mechanical platform, but it is an important source of error in strapdown systems where φ̈ and θ oscillations can rectify. If position calculations are referred to the center of percussion of the pendulum, the sensitivity to angular acceleration is reduced (see size effect, Section 7.4.1). The distance from the center of mass to the center of percussion is I/mb.
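A numerical evaluation of Equation 7.3 shows how the error terms scale. The parameter values below are arbitrary illustrations chosen only to exercise the formula; they do not describe any particular instrument.

# Evaluating Eq. 7.3 with arbitrary illustrative parameters (not a real instrument).
k = 50.0          # effective spring stiffness, N-m/rad (restoring loop dominant)
mb = 2.0e-3       # pendulosity, kg-m
I = 4.0e-7        # pendulum moment of inertia, kg-m^2
T_R = 1.0e-7      # residual torque, N-m
f1 = 9.8          # input-axis acceleration, m/s^2
f2 = 2.0          # cross-axis acceleration, m/s^2
phi_ddot = 0.0    # case angular acceleration, rad/s^2

theta = (mb / k) * (f1 + T_R / mb - I * phi_ddot / mb) / (1.0 + mb * f2 / k)
bias = T_R / mb   # acceleration bias equivalent of the residual torque, m/s^2
print(f"theta = {theta:.3e} rad, bias = {bias / 9.8 * 1e6:.1f} micro-g")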
Flexure-pivoted accelerometers are simpler to construct than the older
floated instruments since they do not require adjustment for buoyancy [23, pp.
288-289]. Because they are undamped, they exhibit high-frequency mechanical
resonances. Resonances must therefore be controlled relative to both vibration
inputs and rebalance-loop characteristics. Undamped accelerometers offer the
greatest bandwidth (important for strapdown systems) but must almost always
be supported on a shock-mounted sensor block (Section 7 .4.1) in order to sup-
press high-frequency vibration and to prevent shock damage. Accelerometers
that include fluid damping exhibit reduced bandwidth and additional thermal
sensitivity due to changes in the fluid characteristics.
In navigation-grade accelerometers, a restoring loop maintains the pendu-
lum near null. The restoring servo must be linear and repeatable from 10 to 25 μg to 40 g, a range of six to seven orders of magnitude. A digital output can be obtained either by digitizing the analog output (the current in the torquer coil) or by pulse-rebalancing with a digital restoring servo. When rebalancing with pulses of uniform height, pulse width measures incremental velocity ΔV.

In either case, a properly initialized digital counter accumulates the pulses and
stores the velocity change. The rebalance pulse train must not excite accelerom-
eter resonances.
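The pulse-rebalance output can be pictured as a counter: each pulse of uniform weight represents a fixed velocity increment, and an initialized counter accumulates ΔV. The scale factor in the sketch below is an arbitrary illustrative value.

# Sketch of pulse-rebalance accumulation: each pulse carries a fixed velocity
# increment; the scale factor is chosen arbitrarily for illustration.
PULSE_WEIGHT_MPS = 0.0005   # delta-V per pulse, m/s

def accumulate_delta_v(pulse_counts):
    """Sum signed per-interval pulse counts into an accumulated velocity change."""
    return sum(n * PULSE_WEIGHT_MPS for n in pulse_counts)

# 100 intervals of about 196 pulses each corresponds to ~9.8 m/s of accumulated velocity.
print(accumulate_delta_v([196] * 100))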
The pivot or flexure supporting the pendulum must provide minimal restraint
for the pendulum in the direction of the input axis while exhibiting high stiffness
in the other two directions. The spring constant of the pivot/flexure generates a
restoring force that reduces the gain of the electronic restoring loop. The spring
constant should be repeatable in order to ensure accuracy, but the high-stiffness
restoring loop dominates. The pivot must not exhibit hysteresis, which may
cause accelerometer biases. Generally, high-quality accelerometers ca