Expert Advice: Availability Gaps: Solutions for Aviation

December 1, 2009

Directions 2010

By James L. Farrell

Recent attention given to aging GPS satellites and to availability gaps from lagging constellation replenishment has provoked deep concern, particularly within the aviation community. Available remedies include exploitation of well-known but unused methods plus new techniques; those discussed here have future relevance, with or without availability gaps.

Even with far greater coverage from multiple GNSS constellations, crises could emerge from sharply elevated interference levels or other unforeseen events. Advance preparation for any such occurrence would avoid the waste, confusion, and blind alleys that generally arise with the sudden appearance of an emergency.

GPS lives up to expectations, brilliantly performing as advertised. Even that best-ever performance must and does have tolerance for occasional error; examples, though rare, are well documented. To live with less than perfect performance, the industry relies on integrity testing: comparison checks using extra satellites to detect inconsistencies and exclude questionable data.

Nevertheless, it is universally recognized that GNSS, even with existing fault detection and isolation or exclusion (FDI/FDE), is still not perfect. The ramifications of growing dependence on GPS have thus attracted more attention. The overall subject can be subdivided into general areas involving the likelihood of:

  • reduced availability and
  • reduced dependability (integrity, its verification, plus backup).

Although I mainly address the first topic here, the second unavoidably intertwines itself, making the two difficult to keep separate. Despite wide acclaim for the excellent 2001 Volpe Report, commitment to a key means of backup for GPS remains unclear at this time. The possibility of a shortfall calls for a review of existing methods and procedures, and of possible means for closing the gap.

Current Methods

Today’s air traffic management designs demand constant replenishment of instantaneous position by full fixes.

Full Fix + RAIM. When each data vector must be a self-sufficient source of instantaneous position, a requirement arises for enough satellite sightline directions with geometric spread at all times. That interdependence is magnified when more satellites are added to provide FDI/FDE, requiring every subset of four within the enlarged group to support the requisite geometry. With this all-or-nothing posture, data lapses form a major stumbling block. A data gap that is only partial equates to a loss of GPS.

Position-Oriented Approach. Especially at high speeds, as in flight, instantaneous position is highly perishable. With little or no emphasis placed on accurate dynamics (beginning with velocity), demand for continuously accurate instantaneous position is highly dependent on abundant data. That abundance includes sufficiently high data rates, since latency becomes a significant liability without usage of a dynamic file.

Carrier Phase (Classical). Successful use of carrier-phase information is decades old. Although ambiguity resolution is not required in all carrier-phase applications, requirements for cycle-slip detection are quite common. More common yet — in fact, virtually ubiquitous — is the need to maintain phase continuity via a carrier-track loop. When those needs are satisfied, sub-wavelength instantaneous position is obtainable. Challenges involved, however, have produced among users a wide variation in perception of value. Some negative perceptions have arisen due to cutting corners in formation of carrier phase, or merely settling for delta range, by some receivers. Further, a cycle slip, even if only rarely overlooked, can be catastrophic in some operations.

Imperfect Validation. As already noted, verification is not my main topic here, but the issue is inescapable. Shortcomings include hard evidence of certification improperly bestowed, and severe limitations of go/no-go criteria (as with an automobile’s dashboard warning lights, we can learn if a performance trait is unsatisfactory — but a trivial excess produces the same indication as an imminent danger).

Necessary Changes

Extremely powerful and versatile means to improve performance have been available for a very long time. Kalman’s original paper, half a century ago, formalized an optimal way to achieve such performance. While Kalman estimation is commonly used today, its effective reach is almost invariably limited to data resident within each proprietary box of equipment.

The resources are well known for providing centrally processed solutions using data from every available source of information, any combination of sources, any subset that excludes a given sensor or group, or any individual source in a federated configuration. Every conceivable choice among these solutions can be made concurrently available; note the inherent backup.
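The mechanics of central processing over any subset of sources are illustrated by the sketch below: a Kalman measurement update applied to scalar observations one at a time. Because each observation carries its own sightline row and error variance, dropping any sensor or group simply means omitting its entries; the remaining data still refine the estimate. This is an illustrative minimal implementation, not any particular fielded design; the function name and interface are my own assumptions.

```python
import numpy as np

def sequential_update(x, P, measurements):
    """Fuse any subset of scalar measurements, one at a time.

    Each measurement is a tuple (h, z, r): observation row h,
    observed value z, and error variance r.  Processing them
    individually means any sensor can be excluded without
    reformulating the problem -- whatever remains is still used.
    """
    for h, z, r in measurements:
        h = np.asarray(h, dtype=float)
        y = z - h @ x                 # innovation (measured minus predicted)
        s = h @ P @ h + r             # innovation variance
        k = P @ h / s                 # Kalman gain for this one measurement
        x = x + k * y                 # state update
        P = P - np.outer(k, h @ P)    # covariance update
    return x, P
```

Passing a list with one entry, several entries, or all entries yields the concurrent family of solutions described above, each with its own covariance.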

However, all this capability is forsaken or lost by continued use of:

  • interfaces chosen poorly or from outdated standards;
  • undue consolidation within isolated equipment packaging;
  • overextended proprietary rights; and
  • limited, demonstrably flawed validation methods.

Drop Demands for Full Fix. An immediate explosion of benefits can follow from acceptance of partial information. Countless examples could be cited, but two obvious ones suffice:

  • Within GPS or GNSS, not all space vehicles (SVs) would be simultaneously affected by scintillation; ionospheric disturbance effects vary with both location and time. A similar case holds for multipath. Data from some SVs could be rejected, by decisions made external to a receiver, without forcing rejection of all.
  • Central processing — not within any one equipment box — has always offered potential for other sources (distance-measuring equipment or DME, and so on) to make up for incomplete sets of SV data.

My broad goal here is to take advantage of information not currently used and to prescribe corrective strategies. That objective has not been widely pursued due to perceived lack of urgency. GPS availability has thus far been more than satisfactory to a multitude of users — but that could change.

Availability Enhancements. For about two decades, the industry was effectively guided by a strong preference for the trait whereby every data refresh event was self-sufficient. A major reason for this was protection against gradual veering: a snapshot sequence is less sensitive than a continuously evolving path estimate. The cost, of course, is forfeit of benefits conferred by the sequence’s history. More recently, a middle ground was sought to mitigate the resulting loss; subfilters used as much new data as possible while making some use of knowledge from an estimator’s covariance matrix.

I promptly endorsed that approach and sought to carry it to the limit. A single-measurement receiver-autonomous integrity monitoring (RAIM) method resulted, offering an independent integrity test for each separate observation. Despite its rigorous derivation, the technique is quite simple in practice. Further, it bridges a gap that formerly separated integrity testing from optimal estimation, while also having significant advantages over conventional RAIM:

  • separation translates to independence from other satellites, and therefore from geometry (effective DOP of unity)
  • ability to use different error variances for different observations (for example, with nonuniformity in signal strength and/or elevation).
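The flavor of a per-observation test can be conveyed in a few lines: each residual is normalized by its own predicted 1-sigma value, so no other satellites and no geometry enter the decision. This is a generic innovation-screening sketch of the idea, not the rigorous derivation referenced above; the function name and the 3-sigma gate are illustrative assumptions.

```python
import numpy as np

def innovation_test(z, z_pred, h, P, r, n_sigma=3.0):
    """Screen one observation against its own predicted statistics.

    h is the observation row, P the state covariance, r this
    observation's error variance.  Per-observation variances let
    weak or low-elevation signals carry looser gates.  The
    3-sigma threshold is an illustrative choice, not a standard.
    """
    h = np.asarray(h, dtype=float)
    residual = z - z_pred
    sigma = np.sqrt(h @ P @ h + r)     # predicted residual 1-sigma
    return abs(residual) <= n_sigma * sigma
```

Since each observation is accepted or rejected on its own, a questionable satellite is excluded without discarding the rest of the data.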

With this discussion, we have clearly left the realm of well-known subjects with self-evident prescriptions. Much of what follows likewise falls into the category of relatively obscure methods.

Beyond Position-Oriented. A time history of GNSS observations, with or without an inertial measurement unit (IMU), inherently carries dynamic information. A file with observational history from multiple sources of course enables the aforementioned explosion of benefits. The obvious immediate offerings include:

  • closing of data lapses via information sharing;
  • intrinsic backup with automatic activation;
  • vast reduction of latency effects (for example, from 200 meters to less than 1 meter at 400 knots after 1 second, with easily obtainable velocity accuracy below 1 meter/second);
  • formation of 1-sigma projected future error (within reason).

Beyond these lie, once again, some lesser known techniques, including a few that are virtually nonexistent in operation at the time of this writing. With GNSS, the full potential of dynamics calls for a revisit of carrier phase.

Carrier-Phase Developments. Rather than pursuit of unnecessary sub-wavelength fixes for aircraft (for example, with 20-meter wing span moving at 400 knots), the true value of carrier phase in flight lies in enhanced dependability. Sequential changes in carrier phase over 1 second provide excellent dynamics information, with or without an IMU.

Recognition of this opportunity led to the concept of segmentation, whereby position is determined separately from dynamics. Carrier-phase sequential changes with ambiguities unresolved can provide precise (1-centimeter/second RMS with IMU; decimeter/second without) streaming velocity independent of position. Dead reckoning then provides a priori position correctible by pseudoranges.
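The velocity half of segmentation can be sketched as a small least-squares problem: each 1-second phase change (in meters) is modeled as the projection of the displacement onto the sightline plus a common receiver-clock drift term. The ambiguity cancels in the difference, so neither resolution nor a position fix is needed. This is an illustrative geometry-only model under my own naming; real processing also accounts for SV motion and propagation terms.

```python
import numpy as np

def velocity_from_phase_deltas(unit_vectors, delta_phase_m, dt=1.0):
    """Estimate velocity from 1-second carrier-phase changes.

    unit_vectors: (n, 3) sightline unit vectors to n satellites.
    delta_phase_m: phase change over dt for each sightline, meters.
    Solves for the 3 displacement components plus one common
    clock-drift term; at least four sightlines are required.
    """
    U = np.asarray(unit_vectors, dtype=float)
    A = np.hstack([U, np.ones((U.shape[0], 1))])   # clock-drift column
    d = np.asarray(delta_phase_m, dtype=float)
    sol, *_ = np.linalg.lstsq(A, d, rcond=None)
    return sol[:3] / dt                            # velocity, m/s
```

Dead reckoning on this streaming velocity then supplies the a priori position that pseudoranges correct, exactly the separation the segmentation concept describes.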

One advantage of this scheme is subtle: with 1-second phase change propagation effects generally at 1 centimeter or less, no mask is needed. The geometry benefit is obvious, and flight experience has verified it. This raises another segmentation characteristic: the single-measurement integrity testing is applicable to each carrier-phase sequential change and to each pseudorange, separately and independently.

These capabilities are untapped in essentially all operational systems — air, land, and sea — and all stand to gain. Yet another opportunity can be added: ability to sustain operation even if every SV has repetitive data gaps. This advantage is best exploited with receivers described next.

FFT-Based Processing. Correlators and track loops in GNSS receivers can be replaced. The theory is age-old: multiplication in the frequency domain corresponds to convolution in time (and vice-versa). Thus a term-by-term product of a digitized receiver input’s fast Fourier transform (FFT) with the reference pattern’s FFT can, after an inverse FFT, provide outputs equivalent to full sets of correlator responses. Today’s processing and analog-to-digital converter capabilities offer feasibility.
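The frequency-domain identity just described reduces to a few lines of code: a term-by-term product of FFTs, followed by an inverse FFT, delivers the circular cross-correlation at every code phase at once, with no track loop involved. A minimal sketch (omitting Doppler search and real receiver front-end detail):

```python
import numpy as np

def fft_correlate(received, code):
    """All code-phase correlations at once via the FFT.

    Multiplying the received block's FFT by the conjugate FFT of
    the reference code and inverse-transforming yields circular
    cross-correlation -- every cell of the search space, not just
    a track loop's neighborhood.
    """
    R = np.fft.fft(received)
    C = np.fft.fft(code)
    return np.real(np.fft.ifft(R * np.conj(C)))
```

With a 1023-chip pseudorandom code delayed by some number of chips, the correlation peak lands at exactly that delay, illustrating the guaranteed access to all cells listed below.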

In addition to reduced vulnerability to jamming (not covered here), advantages include:

  • access to all cells (not only a track loop’s subset)
  • guaranteed access (stability is not conditional)
  • linear phase-versus-frequency; no phase distortion.

Features from the preceding section, combined with these traits, offer extreme robustness.

Extension to Surveillance. The practice of transmitting responses to RF interrogations has, for many decades, been quite vulnerable to overload (garble; one user’s information is everyone else’s interference). One report described the unsurprisingly poor performance during the first Gulf War, and identified a remedy: squitters with separate assigned time slots, spontaneously firing the transponder transmitter without interrogation. The result is a sea change in capability: every participant gains the opportunity to track every other participant. With no interrogations, garble would disappear.

This dramatic increase in capacity has been successfully demonstrated with the use of an existing communication link and existing airborne equipment: GPS receivers and Mode S squitters. Subsequently I enthusiastically advocated adoption of the technique with one fundamental modification: replace the data bits of the transmitted messages with measurements instead of coordinates.
Additional improvements include small shifts in time (reducing bits needed for time tags) and recomputation of measurements that would have occurred at the center of gravity (to mitigate rotation effects). Collectively, the full set of procedures offers a vast and compelling list of benefits.

Conclusions

Capability and dependability of navigation and surveillance can be enormously increased. The key lies not in new inventions or provisions, but in the use of newer methods (among them FFT-based receivers, segmented estimation, and 1-second carrier-phase changes) while abandoning habits such as:

  • dismissal of partial fix data
  • preoccupation with full fixes for instantaneous position irrespective of dynamics
  • preference for location pseudomeasurements rather than the measurements themselves
  • reliance on proprietary software in equipment boxes
  • RF interrogation/response sequences instead of squitters.

The industry can either adopt changes or continue to settle for performance levels at a minor fraction of the intrinsic capabilities available from our present and future systems.


James L. Farrell worked for 31 years at Westinghouse in design, simulation, and validation of navigation and tracking programs. He continues teaching and consulting for private industry, the Department of Defense, and university research through Vigil, Inc.
