

Innovation: Realistic Randomization

March 1, 2011

A New Way to Test GNSS Receivers

By Alexander Mitelman

INNOVATION INSIGHTS by Richard Langley

GNSS RECEIVER TESTING SHOULD NEVER BE LEFT TO CHANCE. Or should it? There are two common approaches to testing GNSS receivers: synthetic and realistic. In synthetic testing, a signal simulator is programmed with specific satellite orbits, receiver positions, and signal propagation conditions such as atmospheric effects, signal blockage, and multipath. A disadvantage of such testing is that the models used to generate the synthetic signals are not always consistent with the behavior of receivers processing real GNSS signals. Realistic testing, on the other hand, endeavors to assess receiver performance directly using the signals actually transmitted by satellites. The signals may be recorded digitally and played back to receivers any number of times. While no modeling is used, the testing is specific to the particular observing scenario under which the data was recorded, including the satellite geometry, atmospheric conditions, multipath behavior, and so on. To fully examine the performance of a receiver using data collected under a wide variety of scenarios would likely be prohibitively time-consuming and expensive. So, neither testing approach is ideal. Is there a practical alternative? The roulette tables in Monte Carlo suggest an answer.

Both of the commonly used testing procedures lack a certain characteristic that would better assess receiver performance: randomness. What is needed is an approach that would easily provide a random selection of realistic observing conditions. Scientists and engineers often use repeated random samples when studying systems with a large number of inputs, especially when those inputs have a high degree of uncertainty or variability. And mathematicians use such methods to obtain solutions when it is impossible or difficult to calculate an exact result, as in the integration of some complicated functions. The approach is called the Monte Carlo method after the principality’s famous casino. Although the method had been used earlier, its name was introduced by physicists studying random neutron diffusion in fissile material at the Los Alamos National Laboratory during the Second World War.

In this month’s article, we look at an approach to GNSS receiver testing that uses realistic randomization of signal amplitudes based on histograms of carrier-to-noise-density ratios observed in real-world environments. It can be applied to any simulator scenario, independent of scenario details (position, date, time, motion trajectory, and so on), making it possible to control relevant parameters such as the number of satellites in view and the resulting dilution of precision independent of signal-strength distribution. The method is amenable to standardization and could help the industry to improve the testing methodology for positioning devices — to one that is more meaningfully related to real-world performance and user experience.


Virtually all GNSS receiver testing can be classified into one of two broad categories: synthetic or realistic. The former typically involves simulator-based trials, using a pre-defined collection of satellite orbits, receiver positions, and signal propagation models (ionosphere, multipath, and so on). Examples of this type of testing include the 3rd Generation Partnership Project (3GPP) mobile phone performance specifications for assisted GPS, as well as the “apples-to-apples” methodology described in an earlier GPS World article (see Further Reading).

The primary advantage of synthetic testing is that it is tightly controllable and completely repeatable; where a high degree of statistical confidence is required, the same scenario can be run many times until sufficient data has been collected. Also, this type of testing is inherently self-contained, and thus amenable to testing facilities with modest equipment and resources.

Synthetic approaches have significant limitations, however, particularly when it comes to predicting receiver performance in challenging real-world environments. Experience shows that tests in which signal levels are fixed at predetermined values are not always predictive of actual receiver behavior. For example, a receiver’s coherent integration time could in principle be tuned to optimize acquisition at those levels, resulting in a device that passes the required tests but whose performance may degrade in other cases. More generally, it is useful to observe that the real world is full of randomness, whereas apart from intentional variations in receiver initialization, the primary source of randomness in most synthetic tests is simply thermal noise.

By comparison, most realistic testing approaches are designed to measure real-world performance directly. Examples include conventional drive testing and so-called “RF playback” systems, both of which have also been described in recent literature (see Further Reading). Here, no modeling or approximation is involved; the receiver or recording instrument is physically operated within the signal environment of interest, and its performance in that environment is observed directly. The accuracy and fidelity of such tests come with a price, however. All measurements of this type are inherently literal: the results of a given test are inseparably linked to the specific multipath profile, satellite geometry, atmospheric conditions, and antenna profile under which the raw data was gathered. In this respect, the direct approach resembles the synthetic methods outlined above — little randomness exists within the test setup to fully explore a given receiver’s performance space.

Designing a practical alternative to the existing GNSS tests, particularly one intended to be easy to standardize, represents a challenging balancing act. If a proposed test is too simple, it can be easily standardized, but it may fall well short of capturing the complexities of real-world signals. On the other hand, a test laden with many special corner cases, or one that requires users to deploy significant additional data storage or non-standard hardware, may yield realistic results for a wide variety of signal conditions, but it may also be impractically difficult to standardize.

With those constraints in mind, this article attempts to bridge the gap between the two approaches described above. It describes a novel method for generating synthetic scenarios in which the distribution of signal levels closely approximates that observed in real-world data sets, but with an element of randomness that can be leveraged to significantly expand testing coverage through Monte Carlo methods. Also, the test setup requires only modest data storage and is easily implemented on existing, widely deployed hardware, making it attractive as a potential candidate for standardization.

The approach consists of several steps. First, signal data is gathered in an environment of interest and used to generate a histogram of carrier-to-noise-density (C/N0) ratios as reported by a reference receiver, paying particular attention to satellite masking to ensure that the probability of signal blockage is calculated accurately. The histogram is then combined with a randomized timing model to create a synthetic scenario for a conventional GNSS simulator, whose output is fed into the receiver(s) under test (RUTs). The performance of the RUTs in response to live and simulated signals is compared in order to validate the fidelity and usefulness of the histogram-based simulation. This hybrid approach combines the benefits of synthetic testing (repeatability, full control, and compactness) with those of live testing (realistic, non-static distribution of signal levels), while avoiding many of the drawbacks of each.

Histograms

The method explored in this article relies on cumulative histograms of C/N0 values reported by a receiver in a homogeneous signal environment. This representation is compact and easy to implement with existing simulator-based test setups, and provides information that can be particularly useful in tuning acquisition algorithms.

Motivation and Theoretical Considerations. To motivate the proposed approach, consider an example histogram constructed from real-world data, gathered in an environment (urban canyon) where A-GPS would typically be required. This is shown in FIGURE 1, together with a representative histogram of a standard “coarse-time assistance” test case (as described in the 3GPP Technical Specification 34.171, Section 5.2.1) for comparison. (Note that the x-axis is actually discontinuous toward the left side of each plot: the “B” column designates blocked signals, and thus corresponds to C/N0 = –∞.)

From the standpoint of signal distributions, it is evident that existing test standards may not always model the real world very accurately.

FIGURE 1a. Example histogram of a real-world urban canyon (San Francisco financial district).

FIGURE 1b. Example histogram of the 3GPP TS 34.171 “coarse-time assistance” test case.

The histogram is useful in other ways as well. Since the data set is normalized (the sum of all bin heights is 1.0), it represents a proper probability mass function (PMF) of signal levels for the environment in question. As such, several potentially useful parameters can be extracted directly from the plot: the probability of a given signal being blocked (simply the height of the leftmost bin); the lower and upper limits of observed signal levels (the positions of the leftmost and rightmost non-zero bins, respectively, excluding the “blocked” bin); and the center of mass, defined here as

\bar{x} = \frac{\sum_{n \neq \text{“B”}} x[n]\, y[n]}{\sum_{n \neq \text{“B”}} y[n]}    (1)

where y[n] is the height of the nth bin (dimensionless), x[n] is the corresponding C/N0 value (in dB-Hz), and x[“B”] = –∞ by definition.
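
To make these definitions concrete, the short sketch below extracts all three parameters from a toy normalized histogram. The bin values and heights are hypothetical, and excluding the blocked bin from the center-of-mass sum, as in equation (1), keeps the result finite despite the x[“B”] = –∞ convention.

```python
import numpy as np

# A minimal sketch, assuming a normalized histogram with the "blocked" ("B")
# bin stored separately. All bin values and heights are hypothetical.
cn0_bins = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0])   # x[n], dB-Hz
heights  = np.array([0.05, 0.10, 0.20, 0.30, 0.20, 0.05])   # y[n]
p_blocked = 0.10                                            # height of the "B" bin

assert np.isclose(heights.sum() + p_blocked, 1.0)  # a proper PMF

# Probability of a given signal being blocked: height of the leftmost ("B") bin.
print(f"P(blocked) = {p_blocked:.2f}")

# Lower/upper limits of observed signal levels: positions of the leftmost and
# rightmost non-zero bins, excluding the "blocked" bin.
nonzero = np.nonzero(heights)[0]
print(f"limits: {cn0_bins[nonzero[0]]} to {cn0_bins[nonzero[-1]]} dB-Hz")

# Center of mass (equation 1), taken over the finite-valued bins only.
x_bar = np.dot(cn0_bins, heights) / heights.sum()
print(f"center of mass = {x_bar:.1f} dB-Hz")
```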

Finally, representing environmental data as a PMF enables one additional theoretical calculation. The design of the 3GPP “coarse-time assistance” test case illustrated above assumes that a receiver will be able to acquire the one relatively strong signal (the so-called “lead space vehicle (SV)” at –142 dBm) using only the assistance provided, and will subsequently use information derivable from the acquired signal (such as the approximate local clock offset) to find the rest of the satellites and compute a fix. Suppose that for a given receiver, the threshold for acquisition of such a lead signal given coarse assistance is P_i (expressed in dB-Hz). Then the probability of finding a lead satellite on a given acquisition attempt can be estimated directly from the histogram:

P_{\text{lead}} = 1 - \left( \sum_{x[n] < P_i} y[n] \right)^{\bar{N}}    (2)

where \bar{N} is the average number of satellites in view over the course of the data set. A similar combinatorial calculation can be made for the conditional probability of finding at least three “follower” satellites (that is, those whose signals are above the receiver’s threshold for acquisition when a lead satellite is already available).

The product of these two values represents the approximate probability that a receiver will be able to get a fix in a given signal environment, expressed solely as a function of the receiver’s design parameters and the histogram itself. When combined with empirical data on acquisition yield from a large number of start attempts in an environment of interest, this calculation provides a useful way of checking whether a particular histogram properly captures the essential features of that environment. This validation may prove especially useful during future standardization efforts.
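
The lead-acquisition factor in this product follows directly from the histogram, as the sketch below illustrates. The threshold P_i, the value of \bar{N}, and the histogram itself are all hypothetical here, and the follower-satellite calculation is omitted for brevity.

```python
import numpy as np

# A minimal sketch of equation (2), assuming the same normalized histogram
# representation as above; threshold and N-bar values are hypothetical.
cn0_bins = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0])   # x[n], dB-Hz
heights  = np.array([0.05, 0.10, 0.20, 0.30, 0.20, 0.05])   # y[n]
p_blocked = 0.10                                            # "B" bin, x = -inf

P_i   = 33.0   # receiver's coarse-assistance acquisition threshold, dB-Hz
N_bar = 8.0    # average number of satellites in view over the data set

# Probability that one satellite drawn from the PMF is below the lead
# threshold; the blocked bin (x = -inf) lies below any finite threshold.
p_below = p_blocked + heights[cn0_bins < P_i].sum()

# Probability that at least one of N-bar satellites exceeds the threshold.
p_lead = 1.0 - p_below ** N_bar
print(f"P(lead acquired) = {p_lead:.3f}")
```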

Application to Acquisition Tuning. In addition to the calculations based on the parameters discussed above, histograms also provide useful information for designing acquisition algorithms, as follows.

Conventionally, the acquisition problem for GNSS is framed as a search over a three-dimensional space: SV pseudorandom noise code, Doppler frequency offset, and code phase. But in weak-signal environments, a fourth parameter, dwell time (the predetection integration period), plays a significant role in determining acquisition performance. Regardless of how a given receiver’s acquisition algorithm is designed, dwell time (or, equivalently, search depth) and the associated signal detection threshold represent a compromise between acquisition speed and performance (specifically, the probabilities of false lock and missed detection on a given search). Accordingly, any acquisition routine designed to adjust its default search depth as a function of extant environmental conditions may be optimized by making use of the a priori signal-level PMF provided by the corresponding histogram(s).
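
As a purely illustrative sketch of this idea, the fragment below selects a default dwell time from a design point on the environment PMF. Both the chosen percentile and the C/N0-to-dwell mapping are invented for illustration; a real receiver would derive them from its own detection-probability model.

```python
import numpy as np

# A hypothetical sketch of PMF-driven dwell-time selection; the histogram,
# percentile, and dwell lookup are all illustrative assumptions.
cn0_bins = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0])
heights  = np.array([0.05, 0.10, 0.20, 0.30, 0.20, 0.05])

def percentile_cn0(q):
    """C/N0 value at cumulative probability q, from the environment PMF."""
    cmf = np.cumsum(heights) / heights.sum()
    return cn0_bins[np.searchsorted(cmf, q)]

# Size the default dwell so that, say, the weakest quartile of expected
# signals remains detectable: lower expected C/N0 -> longer integration.
design_cn0 = percentile_cn0(0.25)
dwell_ms = 1.0 if design_cn0 >= 40 else 20.0 if design_cn0 >= 30 else 100.0
print(f"design point {design_cn0} dB-Hz -> dwell {dwell_ms} ms")
```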

Data Collection

The hardware used to collect reference data for histogram generation is simple, but care must be taken to ensure that the data is processed correctly. The basic setup is shown in FIGURE 2.

Figure 2. Data collection setup with a reference receiver generating NMEA 0183 sentences or in-phase and quadrature (I/Q) raw data and one or more test receivers performing multiple time-to-first-fix (TTFF) measurements.

It is important to note that the individual components in the data-collection setup are deliberately drawn here as generic receivers, to emphasize that the procedure itself is fundamentally generic. Indeed, as noted below, future efforts toward standardizing this testing methodology will require that it generate sensible results for a wide variety of RUTs, ideally from different manufacturers. Thus, the intention is that multiple receivers should eventually be used for the time-to-first-fix (TTFF) measurements at bottom right in the figure. For simplicity, however, a single test receiver is considered in this article.

Procedure. The experiment begins with a test walk or drive through an environment of interest. Since an open-sky environment is unlikely to present a significant challenge to any modern receiver, a moderately difficult urban canyon route through the narrow alleyways of Stockholm’s Gamla Stan (Old Town) was chosen for the initial results presented in this article. The route, approximately 5 kilometers long, is shown in FIGURE 3 (top). For the TTFF trials gathered along this route, assisted starts with coarse-time aiding (±2 seconds) were used to generate a large number of start attempts during the walk, ensuring reasonable statistical significance in the results (115 attempts in approximately 60 minutes, including randomized idle intervals between successive starts).

Once the data collection is complete, the reference data set is processed with a current almanac and an assumed elevation angle mask (typically 5 degrees) to produce an individual histogram for each satellite in view, along with a cumulative histogram for the entire set, as shown in Figure 3 (bottom). The masking calculation is particularly important in properly classifying which non-reported C/N0 values should be ignored because the satellite in question is below the elevation angle mask at that location and time, and which should be counted as blocked signals.
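
The following sketch shows one plausible form of this bookkeeping. The function and threshold names are hypothetical, but the three-way classification mirrors the logic just described: satellites below the elevation mask are ignored, while unreported satellites above the mask count as blocked.

```python
# A minimal sketch of the masking bookkeeping described above. Elevations
# would come from the current almanac; the function name is hypothetical.
ELEV_MASK_DEG = 5.0

def classify(elevation_deg, reported_cn0):
    """Classify one satellite at one epoch for histogram building."""
    if reported_cn0 is not None:
        return ("tracked", reported_cn0)   # contributes to a C/N0 bin
    if elevation_deg < ELEV_MASK_DEG:
        return ("masked", None)            # below the mask: ignore entirely
    return ("blocked", None)               # above the mask, unreported: "B" bin

# Example: a satellite at 3 deg elevation with no C/N0 report is ignored,
# while one at 40 deg with no report counts toward the blocked bin.
print(classify(3.0, None))    # ('masked', None)
print(classify(40.0, None))   # ('blocked', None)
```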

Figure 3. Data collection, Gamla Stan (Old Town), Stockholm (route and street view).

In addition to proper accounting for satellite masking, the raw source data should also be manually trimmed to ensure that all data points used to build the histogram are taken homogeneously from the environment in question. Thus the file used to generate the histogram in Figure 3 was truncated to exclude the section of “open sky” conditions between the start of the file and the southeast corner of the test area, and similarly between the exit from the test area and the end of the file.

Finally, the resulting histogram is combined with a randomized timing model to create a simulator scenario, which is used to re-test the same RUTs shown in Figure 2.

Reference Receiver Considerations. The accuracy of the data collection described above is fundamentally limited by the performance of the reference receiver in several ways.

First, the default output format for GNSS data in many receivers is that of the National Marine Electronics Association (NMEA) 0183 standard (the histograms presented in this article were derived from NMEA data). This is imperfect in that the standard, non-proprietary NMEA GSV sentence requires C/N0 values to be quantized to the nearest whole dB-Hz, which introduces small rounding errors to the bin heights in the histograms. (In this study, this effect was addressed by applying a uniformly distributed ±0.5 dB-Hz dither to all values in the corresponding simulated scenario, as discussed below.) If finer-grained histogram plots are required, an alternative data format must be used instead.
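
For reference, a minimal sketch of this processing step is shown below: it extracts whole-dB-Hz C/N0 values from standard GSV sentences and applies the ±0.5 dB-Hz dither on the simulation side. Checksum verification and multi-sentence bookkeeping are omitted, and the example sentence (including its checksum) is fabricated for illustration.

```python
import random

# A minimal sketch: pull C/N0 values out of standard (non-proprietary) GSV
# sentences. Checksum handling is omitted for brevity.
def gsv_cn0(sentence):
    """Return {prn: cn0} for one $..GSV sentence (C/N0 in whole dB-Hz)."""
    fields = sentence.split("*")[0].split(",")
    out = {}
    # Satellite blocks of four fields (PRN, elevation, azimuth, SNR) start
    # at index 4; SNR is empty when the satellite is not being tracked.
    for i in range(4, len(fields) - 3, 4):
        prn, snr = fields[i], fields[i + 3]
        if prn and snr:
            out[int(prn)] = int(snr)
    return out

def dither(cn0):
    """Counter quantization bias with a uniform +/-0.5 dB-Hz dither."""
    return cn0 + random.uniform(-0.5, 0.5)

# Fabricated example sentence (checksum not valid):
print(gsv_cn0("$GPGSV,3,1,11,03,45,111,38,04,15,270,00,06,01,010,,13,06,292,31*7F"))
```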

Second, many receivers produce data outputs at 1 Hz, limiting the ability to model temporal variations in C/N0 to frequencies below 0.5 Hz, owing to simple Nyquist considerations. While the raw data for this study was obtained at walking speeds (1 to 2 meters per second), and is thus unlikely to significantly misrepresent rapid C/N0 fading, studies done at higher speeds (such as test drives) may require a reference receiver capable of producing C/N0 measurements at a higher rate.

A third limitation is the sensitivity of the reference receiver. Ideally, the reference device would be able to track all signals present during data gathering regardless of signal strength, and would instantaneously reacquire any blocked signals as soon as they became visible again. Such a receiver would fully explore the space of all available signals present in the test environment. Unfortunately, no receiver is infinitely sensitive, so a conventional commercial-grade high-sensitivity receiver was used in this context. Thus the resulting histogram is, at best, a reasonable but imperfect approximation of the true signal environment.

Finally, a potentially significant error source may be introduced if the net effects of the reference receiver’s noise figure plus implementation loss (NF+IL) are not properly accounted for in preparing the histograms. (If an active antenna is used, the NF of the antenna’s low-noise amplifier essentially determines the first term.) The effect of incorrectly modeling these losses is that the entire histogram, with the exception of the “blocked” column, is shifted sideways by a constant offset.

The correction applied to the histogram to account for this effect must be verified prior to further acquisition testing. This can be done by generating a simulator scenario from the histogram of interest, as described below, and recording a sufficiently long continuous data set using this scenario and the reference receiver. A corresponding histogram is then built from the reference receiver’s output, as before, and compared to the histogram of the original source data. The amplitude of the “blocked” column and the center of mass are two simple metrics to check; a more general way of comparing histograms is the two-sided Kolmogorov-Smirnov test (see “Results”).
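
A minimal sketch of this comparison, using the center of mass as the metric, is shown below. The histogram values are hypothetical, and a full validation would also check the blocked-bin amplitude and the K-S statistic discussed under “Results.”

```python
import numpy as np

# A minimal sketch of the NF+IL verification step: an uncorrected offset
# shifts every finite bin sideways, so comparing the centers of mass of the
# source and replayed histograms exposes it. All values are hypothetical.
def center_of_mass(bins, heights):
    """Weighted mean C/N0 over the finite-valued (non-blocked) bins."""
    return np.dot(bins, heights) / heights.sum()

bins_src = np.array([20.0, 25.0, 30.0, 35.0, 40.0, 45.0])
h_src    = np.array([0.05, 0.10, 0.20, 0.30, 0.20, 0.05])

# Replayed histogram, as reported by the reference receiver; if NF+IL were
# modeled correctly, this would match the source to within noise.
bins_rep = bins_src.copy()
h_rep    = np.array([0.06, 0.11, 0.21, 0.29, 0.19, 0.04])

offset = center_of_mass(bins_rep, h_rep) - center_of_mass(bins_src, h_src)
print(f"apparent NF+IL modeling error: {offset:+.2f} dB")
```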

Timing Models

The histograms described in the preceding section specify the amplitude distribution of satellite signals in a given environment, but they contain no information about the temporal characteristics of those signals. This section briefly describes the timing models used in the current study, as well as alternatives that may merit further investigation.

In real-world conditions, the temporal characteristics of a given satellite signal depend on many factors, including the physical features of the test environment, multipath fading, and the velocity of the user during data collection. Various timing models can be used to simulate those temporal characteristics in laboratory scenarios.

Perhaps the simplest model is one in which signal levels are changed at fixed intervals. This is trivial to implement on the simulator side, but it is clearly unlikely to resemble the real-world conditions mentioned above. A second alternative would be to generate timing intervals based on the Allan (or two-sample) variance of the C/N0 readings observed during data collection, as a measure of their stability. While this is more physically realistic than an arbitrarily chosen interval as described above, it is still a fixed interval. These observations suggest that a timing model including some measure of randomness may represent a more realistic approach.
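
For illustration, the sketch below computes the two-sample (Allan) variance of a C/N0 series at several averaging intervals; the input series is synthetic, standing in for real 1-Hz reference-receiver output.

```python
import numpy as np

# A minimal sketch of the two-sample (Allan) variance of a 1-Hz C/N0 series;
# the series below is a synthetic stand-in for real receiver output.
rng = np.random.default_rng(0)
cn0 = 35.0 + rng.normal(0.0, 2.0, 600).cumsum() * 0.1  # hypothetical 10-min track

def allan_variance(y, m=1):
    """Two-sample variance at averaging factor m (tau = m x sample period)."""
    means = y[: len(y) // m * m].reshape(-1, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

for m in (1, 2, 5, 10):
    print(f"tau = {m:2d} s : sigma^2 = {allan_variance(cn0, m):.3f} (dB-Hz)^2")
```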

One statistical function commonly used for real-world modeling of discrete events (radioactive decay, customers arriving at a restaurant, and so on) is the Poisson arrival process. This process is completely described by a single non-negative parameter, λ, which characterizes the rate at which random events occur. Equivalently, the time between successive events in such a process is itself a random variable described by the exponential probability density function:

f(t) = \lambda e^{-\lambda t}, \qquad t \geq 0    (3)

The inter-event times described by this function are strictly non-negative, which is physically reasonable, and their mean (1/λ) is directly controllable by varying the timing parameter λ. For simplicity, then, the Poisson/exponential timing model was chosen as an initial attempt at temporal modeling, and used to generate the results presented in this article.

Two variants of the Poisson/exponential timing model are considered. In the first, defined herein as the “Multi SV” case, a single thread determines the timing of fluctuation events, and the power levels of one or more satellites are adjusted at each event. In the second variant, defined as the “Indiv SV” case, each simulator channel receives its own individual timing thread, and all fluctuation events are interleaved in constructing the timing file for the simulator. These two variants are shown schematically in FIGURE 4.

Figure 4. Fluctuation timing models (top: “Multi SV” variant; bottom: “Indiv SV” variant).
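
A compact sketch of how the two variants’ event timelines might be generated is given below. The channel count, scenario duration, and rate parameter are hypothetical, and the output format is invented rather than tied to any particular simulator.

```python
import numpy as np

# A minimal sketch of the two Poisson/exponential timing variants; channel
# count, scenario duration, and event rate are hypothetical.
rng = np.random.default_rng(1)
N_CH, DURATION_S, LAM = 8, 600.0, 1.0 / 3.0   # mean inter-event time: 3.0 s

def event_times(lam, duration):
    """Event epochs whose inter-event times are exponentially distributed."""
    t, out = 0.0, []
    while (t := t + rng.exponential(1.0 / lam)) < duration:
        out.append(t)
    return out

# "Multi SV": a single timing thread; the power levels of one or more
# (randomly chosen) satellites are adjusted at each event.
multi_sv = [(t, sorted(rng.choice(N_CH, rng.integers(1, N_CH + 1), replace=False)))
            for t in event_times(LAM, DURATION_S)]

# "Indiv SV": one timing thread per simulator channel; all fluctuation
# events are interleaved into a single timeline for the simulator.
indiv_sv = sorted((t, ch) for ch in range(N_CH)
                  for t in event_times(LAM, DURATION_S))

print(f"Multi SV: {len(multi_sv)} events; Indiv SV: {len(indiv_sv)} events")
```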

Constructing Scenarios

Once a target histogram is available, it is necessary to generate random signal amplitudes for use with a simulator scenario. This is done by means of a technique known as the probability integral transform (PIT). This approach uses the cumulative distribution function (or, in the discrete case considered here, a modified formulation based on the cumulative mass function) of a probability distribution to transform a sequence of uniformly distributed random numbers into a sequence whose distribution matches the target function.
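
A minimal sketch of the discrete version of this transform is shown below, using an inverse lookup on the cumulative mass function (CMF); the histogram values are hypothetical.

```python
import numpy as np

# A minimal sketch of the discrete probability integral transform: uniform
# random numbers are mapped through the cumulative mass function so that the
# output levels follow the target histogram. Values are hypothetical.
rng = np.random.default_rng(7)
cn0_bins = np.array([-np.inf, 20.0, 25.0, 30.0, 35.0, 40.0, 45.0])  # "B" first
heights  = np.array([0.10, 0.05, 0.10, 0.20, 0.30, 0.20, 0.05])

cmf = np.cumsum(heights)                      # cumulative mass function
u = rng.uniform(size=10_000)                  # uniform draws on [0, 1)
levels = cn0_bins[np.searchsorted(cmf, u)]    # inverse-CMF mapping

# The empirical distribution of the draws should match the target PMF.
for x, y in zip(cn0_bins, heights):
    print(f"x = {x:>6}: target {y:.2f}, sampled {np.mean(levels == x):.2f}")
```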

Finally, the random signal levels generated by the PIT process are assigned to individual simulator channels according to a set of timed events as described in the preceding section, completing the randomized scenario to be used for testing.

Results

Given a simulator scenario constructed as described above, the RUTs originally included in the data collection campaign are again used to conduct acquisition tests, this time driven from the simulator.

To validate that a particular fluctuating scenario properly represents the live data, it is necessary to quantify two things: how well a generated histogram matches the source data, and how well a receiver’s acquisition performance under simulated signals matches its behavior in the field. At first these may appear to be two qualitatively different problems, but a mathematical tool known as the two-sided Kolmogorov-Smirnov (K-S) test can be used for both tasks.
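
For reference, the two-sample D statistic is simple to compute directly (scipy.stats.ks_2samp provides an equivalent off-the-shelf implementation); the sketch below uses synthetic TTFF samples as stand-ins for measured data.

```python
import numpy as np

# A minimal sketch of the two-sided, two-sample Kolmogorov-Smirnov statistic
# D (the greatest vertical distance between two empirical CDFs), usable both
# for comparing C/N0 histograms and TTFF distributions. Samples are synthetic.
rng = np.random.default_rng(42)
live = rng.exponential(8.0, size=115)   # e.g., 115 live TTFF values, seconds
sim  = rng.exponential(8.5, size=115)   # simulated TTFF values, seconds

def ks_statistic(a, b):
    """Two-sample K-S D = max |F_a(x) - F_b(x)| over all x."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

print(f"D = {ks_statistic(live, sim):.3f}")  # scipy.stats.ks_2samp agrees
```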

Validation of Experimental Setup. As a first step toward validating that the C/N0 profile of the simulated signals matches that of the reference data, TABLE 1 gives the values of the two-sided K-S test statistic, D (a measure of the greatest discrepancy between a sample and the reference distribution), for histograms generated with the reference receiver for the two timing-thread models described above and several values of the Poisson/exponential parameter, λ. The reference cumulative mass function (CMF) for each test was derived from the histogram generated for the raw (empirically collected) data set.

These results illustrate good agreement (D < 0.05) between the overall signal distribution profile in the empirical data set and that in each of the six simulated fluctuating scenarios.

As a further check, TABLE 2 shows the same K-S statistic for the histogram generated from the “Multi SV” timing model as a function of several NF+IL values. As before, the reference CMF comes from the raw (empirically collected) data set, and the same reference receiver was used to generate data from the simulator scenario. Evidently, an NF+IL value of 4 dB gives good agreement between empirical and simulated data sets.


Validation of Receiver Performance. Finally, TTFF tests with the simulated scenarios described above are conducted with the same receiver(s) used in the original data gathering session. Here, the K-S test is used to compare the live and simulated TTFF results rather than signal distributions. An example result, illustrating cumulative distribution functions of TTFF, is shown in FIGURE 5 for the live data set collected during the original data gathering session, alongside three results from the “Multi SV” fluctuating model, generated with NF+IL = 4 dB and several different values of the Poisson/exponential timing parameter, λ. While agreement with live data is not exact for any of the simulated scenarios, the λ⁻¹ = 3.0 seconds case appears to correspond reasonably well (D < 0.10).

FIGURE 5. Time-to-first-fix cumulative distribution functions from live and simulated data (“Multi SV” variant with NF+IL = 4 dB).

Conclusions and Future Work

This article has introduced a novel approach to testing GNSS receivers based on histograms of C/N0 values observed in real-world environments.

Much additional work remains. For the proposed method to be amenable to standardization, it is obviously necessary to gather data from many additional environments. Indeed, it appears likely that no one histogram will encapsulate all environments of a particular type (such as urban canyons), so significant additional experimentation and data collection will be required here. Also, as mentioned at the beginning of the article, the proposed method will need to be tested with multiple receivers to verify that a particular result is not unique to any specific brand or architecture. Finally, higher rate C/N0 source data may also be necessary to capture the rapid fades that may be encountered in dynamic scenarios, such as drive tests, and the fluctuation timing models will need to be revisited once such data becomes available.

Acknowledgments

The author gratefully acknowledges the assistance of Jakob Almqvist, David Karlsson, James Tidd, and Christer Weinigel in conducting the experiments described in this article. Thanks also to Ronald Walken for valuable insights on the accurate treatment of the source environment in calculating target histograms. This article is based on the paper “Fluctuation: A Novel Approach to GNSS Receiver Testing” presented at ION GNSS 2010.


Alexander Mitelman is the GNSS research manager at Cambridge Silicon Radio, headquartered in Cambridge, U.K. He earned his S.B. degree from the Massachusetts Institute of Technology and M.S. and Ph.D. degrees from Stanford University, all in electrical engineering. His research interests include signal-quality monitoring and the development of algorithms and testing methodologies for GNSS.


FURTHER READING

• GNSS Receiver Testing in General
GPS Receiver Testing, Application Note by Agilent Technologies. Available online at http://cp.literature.agilent.com/litweb/pdf/5990-4943EN.pdf.

• Synthetic GNSS Receiver Testing
“Apples to Apples: Standardized Testing for High-Sensitivity Receivers” by A. Mitelman, P.-L. Normark, M. Reidevall, and S. Strickland in GPS World, Vol. 19, No. 1, January 2008, pp. 16–33.

Universal Mobile Telecommunications System (UMTS); Terminal conformance specification; Assisted Global Positioning System (A-GPS); Frequency Division Duplex (FDD), 3GPP Technical Specification 34.171, Release 7, Version 7.0.1, July 2007, published by the European Telecommunications Standards Institute, Sophia Antipolis, France. Available online at http://www.3gpp.org/.

• Realistic GNSS Receiver Testing
“Record, Replay, Rewind: Testing GNSS Receivers with Record and Playback Techniques” by D.A. Hall in GPS World, Vol. 21, No. 10, October 2010, pp. 28–34.

“Proper GPS/GNSS Receiver Testing” by E. Vinande, B. Weinstein, and D. Akos in Proceedings of ION GNSS 2009, the 22nd International Technical Meeting of the Satellite Division of The Institute of Navigation, Savannah, Georgia, September 22–25, 2009, pp. 2251–2258.

“Advanced GPS Hybrid Simulator Architecture” by A. Brown and N. Gerein in Proceedings of The Institute of Navigation 57th Annual Meeting/CIGTF 20th Guidance Test Symposium, Albuquerque, New Mexico, June 11–13, 2001, pp. 564–571.

• Receiver Noise
“Measuring GNSS Signal Strength: What is the Difference Between SNR and C/N0?” by A. Joseph in Inside GNSS, Vol. 5, No. 8, November/December 2010, pp. 20–25.

“GPS Receiver System Noise” by R.B. Langley in GPS World, Vol. 8, No. 6, June 1997, pp. 40–45.

Global Positioning System: Theory and Applications, Vol. I, edited by B.W. Parkinson and J.J. Spilker Jr., published by the American Institute of Aeronautics and Astronautics, Inc., Washington, D.C., 1996.

• Test Statistics
“The Probability Integral Transform and Related Results” by J.E. Angus in SIAM Review (a publication of the Society for Industrial and Applied Mathematics), Vol. 36, No. 4, December 1994, pp. 652–654, doi:10.1137/1036146.

“Kolmogorov-Smirnov Test” by T.W. Kirkman on the College of Saint Benedict and Saint John’s University Statistics to Use website: http://www.physics.csbsju.edu/stats/KS-test.html.

• NMEA 0183
NMEA 0183, The Standard for Interfacing Marine Electronic Devices, Ver. 4.00, published by the National Marine Electronics Association, Severna Park, Maryland, November 2008.

“NMEA 0183: A GPS Receiver Interface Standard” by R.B. Langley in GPS World, Vol. 6, No. 7, July 1995, pp. 54–57.

Unofficial online NMEA 0183 descriptions: NMEA data; NMEA Revealed by E.S. Raymond, Ver. 2.3, March 2010.


About the Author: Richard B. Langley

Richard B. Langley is a professor in the Department of Geodesy and Geomatics Engineering at the University of New Brunswick (UNB) in Fredericton, Canada, where he has been teaching and conducting research since 1981. He has a B.Sc. in applied physics from the University of Waterloo and a Ph.D. in experimental space science from York University, Toronto. He spent two years at MIT as a postdoctoral fellow, researching geodetic applications of lunar laser ranging and VLBI. For work in VLBI, he shared two NASA Group Achievement Awards. Professor Langley has worked extensively with the Global Positioning System. He has been active in the development of GPS error models since the early 1980s and is a co-author of the venerable “Guide to GPS Positioning” and a columnist and contributing editor of GPS World magazine. His research team is currently working on a number of GPS-related projects, including the study of atmospheric effects on wide-area augmentation systems, the adaptation of techniques for spaceborne GPS, and the development of GPS-based systems for machine control and deformation monitoring. Professor Langley is a collaborator in UNB’s Canadian High Arctic Ionospheric Network project and is the principal investigator for the GPS instrument on the Canadian CASSIOPE research satellite now in orbit. Professor Langley is a fellow of The Institute of Navigation (ION), the Royal Institute of Navigation, and the International Association of Geodesy. He shared the ION 2003 Burka Award with Don Kim and received the ION’s Johannes Kepler Award in 2007.