Innovation: A PET Project from Finland

March 1, 2014

Automating GNSS Receiver Testing

By Sarang Thombre, Jussi Raasakka, Tommi Paakki, Francescantonio Della Rosa, Mikko Valkama, Laura Ruotsalainen, Heidi Kuusniemi, and Jari Nurmi


INNOVATION INSIGHTS by Richard Langley

WE HAVE A CAT. My wife and I do, that is. One with a voracious appetite. She likes to be fed on demand, even at the most inopportune times. Like three o’clock in the morning. No, it doesn’t help to close the bedroom door. Her squeaking (yes, some cats squeak) still wakes us up. I was designated as the one to get up in the night to feed her. Sometimes twice. Each night, every night. That got tiresome (literally) very quickly. Automation came to the rescue. We now have a microprocessor-controlled cat feeder, which rotates food compartments into the feeding position at pre-programmed times. Just fill up one or two of the compartments with “crunchies” before retiring, set the activation time to 3:00 a.m., say, and no more middle-of-the-night squeaking interrupting blissful sleep.

This is just one example of how automation — machines replacing (or supplementing) human activity to perform repetitive, difficult, undesirable, or even humanly-impossible tasks — can affect (and benefit) our everyday lives.

As noted on Wikipedia, two common types of automation are ones that involve feedback control, which is usually continuous and involves making measurements using one or more sensors and computing adjustments to keep the measured variables within a set range, and those that involve sequence control, in which a programmed sequence of discrete operations is performed, often based on system logic. An aircraft autopilot is an example of the former while our cat-feeding machine is an example of the latter. Some systems, such as Earth-orbiting satellites, can involve both types.

Automation applications range from the (now) mundane (such as point-and-shoot cameras, smart phones, home control, and factory assembly lines) to the (now) exotic (such as robots to assist the elderly and the infirm and robots to explore space). Laboratories have also benefited from increasing automation, making rapid clinical and analytical testing, for example, possible.

The testing of GNSS receivers can also benefit from automation. This work typically requires the active participation of humans to initiate, control, monitor, and terminate test cases. These manual operations are often inefficient and inaccurate, rendering the test results unreliable.  Furthermore, accessing the internal signals of a receiver at different stages of processing is necessary to pinpoint the exact location of any anomalies. Using traditional black-box testing techniques, it is only possible to test the final outputs of a receiver. In this month’s column, we take a look at an automated test bench for analyzing the overall performance of multi-frequency, multi-constellation GNSS receivers. The system includes a data-capture tool to extract internal process information and controlling software, called the Automated Performance Evaluation Tool or AutoPET, which is able to communicate between all modules of the system for hands-free, one-button-click testing of GNSS receivers. Would my cat appreciate the benefit? Likely not, but GNSS engineers and scientists certainly will.

“Innovation” is a regular feature that discusses advances in GPS technology and its applications as well as the fundamentals of GPS positioning. The column is coordinated by Richard Langley of the Department of Geodesy and Geomatics Engineering, University of New Brunswick. He welcomes comments and topic ideas.


The prototype GNSS receiver developed at the Department of Electronics and Communications Engineering of Tampere University of Technology (TUT), called TUTGNSS, is now in the performance-testing phase. TUTGNSS is a GPS L1/L5 + Galileo E1/E5a dual-frequency dual-constellation receiver jointly developed by TUT and its international partners under two European Union Framework Programme research grants.

During the manual testing of the receiver, it was noticed that the results were often contaminated with errors due to imprecise time-keeping and inconsistent test environments.

It was also strenuous and time consuming to perform repetitive tests over multiple iterations, with decreasing personnel efficiency as the number of iterations increased. The aforementioned problems led to the results being deemed unreliable and unrepeatable. There was thus a need to innovate and automate the testing process and environment. In addition, there was also the need to study the signals as they flowed through the internal signal processing chain, so that the exact location of anomalies could be detected.

Currently, few solutions in the commercial or academic domains can perform end-to-end, fully automated, yet customizable testing of GNSS receivers. A couple of commercial testing tools were recently unveiled that claim to perform similar automated testing of GNSS receivers. However, they are not fully customizable by the end user, with the limitation that they can be used only with their parent company’s proprietary signal simulators. Other commercial automated testing tools are available, but they are targeted at electronic systems other than GNSS receivers. For these reasons, we decided to implement an in-house solution. Consequently, we devised the Automated Performance Evaluation Tool (AutoPET), along with a data capture tool.

AutoPET is implemented completely in software (Qt, with C++) and communicates with the receiver under test (RUT) via RS-232 and a National Marine Electronics Association (NMEA) protocol and with a commercial GNSS signal simulator via an RS-232 link. It handles the GNSS test cases with user-defined iterations and other system settings. AutoPET has already been used for making test runs on the TUTGNSS receiver with positive results. It is possible to initiate the overall testing of the receiver with a single button-click and the results are stored in the computer without any human intervention. Test scenarios currently included in the tool’s library are: time-to-first-fix (TTFF), position accuracy, acquisition sensitivity, tracking sensitivity, and reacquisition time. By changing the scenarios in this library, the tool can be used with different simulator models. Another innovative aspect of AutoPET is that it uses multi-threading to perform the receiver testing. Multiple software processing threads are necessary to keep track of the receiver operations and simulator feeds simultaneously, so that an appropriate interrupt can be generated when the receiver has performed the desired operation. This feature is explained in further detail later on.

Data Capture Tool (dCAP) is a hybrid (software-controlled hardware) entity capable of extracting user-defined internal process data from the different modules (acquisition, tracking, bit decoding, and so on) of the GNSS RUT and storing it in a computer via a 100-Mbps Ethernet link. The dCAP hardware is independent of the receiver module (although implemented on the same softcore) and operates with minimal interference to the receiver operation. This data can then be post-processed to monitor and record the behavior of the receiver and to investigate any anomalies in its intermediate stages. An experimental version of dCAP has already been used to monitor the carrier-to-noise-density ratio (C/N0), carrier Doppler, and code delay from the internal tracking channels, and the raw GNSS signals in I/Q format entering the baseband processing unit (BPU) of the TUTGNSS receiver from its radio front end.

The benefits of AutoPET over state-of-the-art approaches are that it is portable (software platform independent), easy to use, and suitable for testing most receivers using a variety of simulators (provided each of them can communicate with the outside world using some form of communication protocol), and that its operational parameters are easy to modify through an external configuration file. dCAP is designed specifically for the TUTGNSS receiver; however, it can be easily replicated for most experimental embedded-system receivers. Once implemented, dCAP offers a clear view of the internal operation of the receiver by accessing intermediate signals between the input and output terminals. The speed and size of data capture are limited only by the type of Ethernet connection and the size of the internal and external memories. Additional details of AutoPET and dCAP are provided in the next two sections of this article, while the third section describes the application of these tools in testing the GPS L1 operation of the TUTGNSS receiver.
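
To illustrate the configuration-file approach just mentioned, the following minimal sketch shows how Qt's QSettings class could expose such operational parameters to the test logic. Whether AutoPET actually uses QSettings is not stated in the article, and the file layout and parameter names (iterations, receiverPort, powerLevel_dBm) are hypothetical illustrations, not the actual AutoPET format.

// Minimal sketch of reading AutoPET-style operational parameters from an
// external configuration file. Key names and the INI layout are hypothetical.
#include <QSettings>
#include <QString>
#include <QDebug>

struct TestConfig {
    int     iterations;      // number of test iterations to run
    QString receiverPort;    // serial port of the receiver under test
    double  powerLevelDbm;   // simulated satellite power level
};

TestConfig loadConfig(const QString &path)
{
    QSettings ini(path, QSettings::IniFormat);
    TestConfig cfg;
    cfg.iterations    = ini.value("test/iterations", 100).toInt();
    cfg.receiverPort  = ini.value("links/receiverPort", "COM1").toString();
    cfg.powerLevelDbm = ini.value("test/powerLevel_dBm", -130.0).toDouble();
    return cfg;
}

int main()
{
    TestConfig cfg = loadConfig("autopet.ini");
    qDebug() << "Iterations:" << cfg.iterations
             << "Receiver port:" << cfg.receiverPort
             << "Power (dBm):" << cfg.powerLevelDbm;
    return 0;
}

Keeping such values outside the compiled tool is what allows a new receiver, simulator, or test plan to be accommodated without touching the tool's source code.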

Automated Performance Evaluation Tool

AutoPET is a software program developed using the Qt platform and the C++ language. It is housed on a remote PC and communicates with the GNSS receiver and with the signal simulator through the simulator’s associated computer. The set-up is shown in FIGURE 1, which also denotes the different communication protocols used between the modules.

FIGURE 1. Block schematic of the AutoPET assembly.

At the center is the GNSS receiver, which accepts RF signals from the GNSS signal simulator. These signals emulate signals from the sky in accordance with the scenario loaded in the simulator, so this link is unidirectional. On the other hand, the receiver communicates with the remote PC housing AutoPET using the NMEA-0183 protocol. This is bidirectional communication, as the receiver continuously updates its status via NMEA messages to AutoPET and, in turn, AutoPET sends a response or control command to the receiver. The receiver sends the $GPGGA NMEA message every second, and by reading this message, AutoPET can determine the current status (acquisition, tracking, position fix, and so on) of the receiver.

The TUTGNSS receiver has the capability to perform a cold start to initiate the next test iteration when commanded by AutoPET. For this purpose, we have designed a simple custom message string, which can be identified by the TUTGNSS receiver as a cold-start command. In response, the receiver sends a custom NMEA message, $GPTXT, which identifies that it has successfully performed a cold start. Performing a cold start involves erasing all a priori navigation-related information from the receiver memory. This includes erasing the ephemeris, almanac, and timing information, and ensuring that all satellite tracking is lost.
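
To make the message exchange concrete, here is a minimal sketch, in plain C++ rather than the actual AutoPET source, of the kind of NMEA handling described above: it reads the fix-quality field of a $GPGGA sentence and recognizes a $GPTXT acknowledgment after a cold-start command. The custom cold-start command and acknowledgment text are not published, so the strings used below are placeholders.

// Minimal sketch of the NMEA exchange described above (not the actual
// AutoPET source). It parses the fix-quality field of a $GPGGA sentence
// and checks for a $GPTXT acknowledgment of a cold-start command.
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Split an NMEA sentence into comma-separated fields (checksum ignored here).
static std::vector<std::string> splitNmea(const std::string &sentence)
{
    std::vector<std::string> fields;
    std::stringstream ss(sentence);
    std::string field;
    while (std::getline(ss, field, ','))
        fields.push_back(field);
    return fields;
}

// Field 6 of $GPGGA is the fix quality: 0 = no fix, 1 = GPS fix, and so on.
static bool hasPositionFix(const std::string &gga)
{
    std::vector<std::string> f = splitNmea(gga);
    return f.size() > 6 && f[0] == "$GPGGA" && !f[6].empty() && f[6] != "0";
}

// The receiver acknowledges a cold start with a custom $GPTXT message;
// the text searched for here is a placeholder, not the real TUTGNSS string.
static bool isColdStartAck(const std::string &txt)
{
    return txt.rfind("$GPTXT", 0) == 0 &&
           txt.find("COLD START") != std::string::npos;
}

int main()
{
    // Illustrative sentences only (checksums not verified).
    std::string gga = "$GPGGA,120000.00,6130.00,N,02345.00,E,1,07,1.2,120.0,M,19.0,M,,*5C";
    std::cout << "Fix achieved: " << std::boolalpha << hasPositionFix(gga) << "\n";
    std::cout << "Cold-start ack: " << isColdStartAck("$GPTXT,01,01,02,COLD START OK*00") << "\n";
    return 0;
}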

AutoPET communicates with the GNSS signal simulator through its controlling computer, called the Sim-PC (which runs the control software for the simulator). This communication is bidirectional using a 100-Mbps Ethernet link. The AutoPET library holds the scenario files, through which it remotely controls the simulator. In turn, the Sim-PC returns responses or error messages in the form of Extensible Markup Language (XML) strings to the AutoPET. The communication between the Sim-PC and the simulator is through its proprietary protocols.

AutoPET makes extensive use of multi-threading. The receiver, AutoPET, and the simulator operate independently of each other and hence are each controlled using their own processing thread, running in parallel. Examples of these processing threads are:

  • Thread 1 – monitors the receiver operation through the received NMEA messages. This thread is responsible for identifying, for example, if the receiver achieves a position fix or if it performs a successful cold start.
  • Thread 2 – monitors the simulator through the received XML error messages and response messages from the Sim-PC. It is responsible for identifying, for example, if the simulator scenario is successfully set up or if the satellite signals are turned on and off when demanded by the test case.
  • Thread 3 – monitors the internal operation of AutoPET itself to check, for example, if a timer has expired or if the user performs any operation on the GUI during the progress of a test.

Each thread generates an internal software interrupt within AutoPET, on the basis of which the future course of action is dynamically determined.
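
A minimal Qt sketch of this threading pattern is given below. The class, signal, and slot names are hypothetical, and a timer stands in for the NMEA and XML event sources; the point is simply that each monitor runs in its own thread and emits a signal that plays the role of the internal software interrupt handled by the main controller.

// Minimal sketch of the multi-threaded monitoring pattern described above
// (hypothetical names; not the actual AutoPET source).
#include <QCoreApplication>
#include <QObject>
#include <QThread>
#include <QTimer>
#include <QDebug>
#include <utility>

// Worker that watches one external entity (receiver, simulator, or an
// internal timer) and emits a signal acting as the software interrupt.
class Monitor : public QObject {
    Q_OBJECT          // requires moc, as usual for QObject subclasses
public:
    explicit Monitor(QString name) : m_name(std::move(name)) {}
public slots:
    void start() {
        // Placeholder for polling NMEA or XML traffic.
        QTimer::singleShot(1000, this, [this]() { emit eventDetected(m_name); });
    }
signals:
    void eventDetected(const QString &source);
private:
    QString m_name;
};

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QThread rxThread, simThread;
    Monitor receiverMonitor("receiver");    // e.g. position fix, cold-start ack
    Monitor simulatorMonitor("simulator");  // e.g. scenario loaded, signals switched
    receiverMonitor.moveToThread(&rxThread);
    simulatorMonitor.moveToThread(&simThread);

    // The "interrupt" handlers run in the main thread: decide the next test
    // step based on which monitor raised the event.
    auto onEvent = [](const QString &s) { qDebug() << "Event from" << s; };
    QObject::connect(&receiverMonitor, &Monitor::eventDetected, &app, onEvent);
    QObject::connect(&simulatorMonitor, &Monitor::eventDetected, &app, onEvent);

    QObject::connect(&rxThread, &QThread::started, &receiverMonitor, &Monitor::start);
    QObject::connect(&simThread, &QThread::started, &simulatorMonitor, &Monitor::start);
    rxThread.start();
    simThread.start();

    QTimer::singleShot(2000, &app, [&app]() { app.quit(); });  // end the demo
    int rc = app.exec();
    rxThread.quit(); rxThread.wait();
    simThread.quit(); simThread.wait();
    return rc;
}

#include "main.moc"  // needed because Monitor is defined in this .cpp file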

Later in the article, the application of AutoPET to single-frequency, single-constellation testing of the TUTGNSS receiver is described. However, it can just as easily be applied to more complex, multi-frequency, multi-constellation testing. The scenarios are stored in the library of AutoPET, and they can be easily updated without requiring any changes to the tool itself. On the other hand, the receiver operation needs to be updated to perform position fixes with multiple signals and constellations. If the receiver allows updating of its operating mode using software commands, as is the case with TUTGNSS, these commands can also be included within AutoPET.

In the case of TUTGNSS, two configuration settings control the mode of operation and the manner in which the receiver is turned on (cold, warm, or hot start) via a 32-bit control word. Table 1 describes the various options and the control word bits corresponding to each option. There are eight possible modes of operation, which would require three bits to be uniquely represented; however, we have assigned five bits to accommodate any planned future increase in operating modes. Similarly, there are three ways to turn on the TUTGNSS receiver, and they can be uniquely represented by two bits. Therefore, of the 32 available bits, only seven are currently utilized; the rest are reserved for future use. The mode-selection bits occupy the least significant bit positions of the control word. For example, if the receiver should perform a position fix after a warm start using GPS L1 and Galileo E1 signals, the 32-bit control word would be 00000000_00000000_00000000_00100010. Using this control word at the beginning of every test, AutoPET can be used for simple single-constellation or more advanced multi-constellation testing of the receiver.

TABLE 1. Control words for multi-frequency, multi-constellation testing of TUTGNSS.
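
The assembly of such a control word can be sketched as follows. The bit layout (five mode bits in the least significant positions followed by two start-type bits) follows the description above, but the numeric codes assigned to the mode and start type below are inferred only from the single worked example quoted in the text; the complete assignments are defined in Table 1.

// Minimal sketch of assembling the 32-bit TUTGNSS control word described
// above: bits 0-4 carry the operating mode, bits 5-6 the start type, and
// bits 7-31 are reserved. The enumerator values are inferred from the
// worked example (warm start, GPS L1 + Galileo E1 -> 0x22), not from the
// full Table 1.
#include <cstdint>
#include <cstdio>

enum class Mode : uint32_t {
    GpsL1_GalileoE1 = 0b00010   // inferred from the example control word
};

enum class StartType : uint32_t {
    Warm = 0b01                 // inferred from the example control word
};

constexpr uint32_t makeControlWord(Mode mode, StartType start)
{
    // Mode in the least significant bits, start type in the next two bits.
    return (static_cast<uint32_t>(start) << 5) | static_cast<uint32_t>(mode);
}

int main()
{
    uint32_t word = makeControlWord(Mode::GpsL1_GalileoE1, StartType::Warm);
    std::printf("Control word: 0x%08X\n", word);  // prints 0x00000022
    return 0;
}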

Data Capture Tool

The overall set-up of dCAP is shown in FIGURE 2. The TUTGNSS receiver consists of the radio front end and the BPU implemented on an Altera Stratix-II development board. This board contains the NIOS-II softcore embedded processor, controlled by the MicroC operating system, within a field-programmable gate array (FPGA). The hardware is programmed using VHSIC Hardware Description Language (VHDL) and consists of the system entity and a few peripheral entities, such as a phase-locked loop (PLL), which are not shown in the figure for the sake of simplicity. The system entity consists of (among others) two software-controlled hardware entities, one for the TUTGNSS receiver BPU and the other for the dCAP server, called CPU-0 and CPU-1 respectively. The Control-PC is responsible for the overall programming of the FPGA board through a USB link. It also holds a Qt-based user interface acting as the dCAP client implementation.

FIGURE 2. Overall block schematic of the dCAP assembly.

The dCAP client (in the Control-PC) establishes an Ethernet connection with the dCAP server (on the FPGA) and requests a user-specified internal data sample. As an example, let us assume the user requests raw I/Q samples input to the TUTGNSS BPU from the radio front end. The dCAP server software communicates with the TUTGNSS software, which in turn allows the dCAP server hardware access to the requested data from the appropriate region of the TUTGNSS hardware, similar to how a signal across a resistor on a dense printed circuit board is viewed by placing oscilloscope probes across it. The only limitation with dCAP is that the user has to predict, in advance, which internal data parameters are of interest and create access points within the correct hardware entities. The dCAP server hardware will connect to the respective access point when demanded by the client.

This data snapshot is first buffered in the local shared memory entity on the FPGA board due to the requirements of speed, size, and time synchronization. The dCAP server software is responsible for transferring this data from the internal memory to the Control-PC through the Ethernet link. The data is stored on the Control-PC hard drive in the form of a *.bin file. Therefore, the size of each data-packet that can be accessed at a time is limited by the size of the FPGA memory entity, while the total data size is limited only by the size of the hard drive of the Control-PC. The speed of data capture is restricted by the maximum speed of the Ethernet link between the dCAP client and server.
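
The client side of this transfer can be pictured with the following minimal sketch: it opens a TCP connection to the dCAP server, sends a capture request, and streams the returned bytes into a *.bin file. The request string, port number, server address, and framing are hypothetical; the real dCAP protocol is not reproduced here.

// Minimal sketch of a dCAP-style client: request an internal data capture
// over Ethernet and store the returned samples in a *.bin file. The request
// format and addresses are hypothetical placeholders.
#include <QCoreApplication>
#include <QTcpSocket>
#include <QFile>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QTcpSocket socket;
    socket.connectToHost("192.168.0.10", 5000);   // hypothetical server address
    if (!socket.waitForConnected(3000)) {
        qWarning() << "dCAP server not reachable:" << socket.errorString();
        return 1;
    }

    // Hypothetical plain-text request for raw I/Q samples at the BCU input.
    socket.write("CAPTURE RAW_IQ 1048576\n");
    socket.flush();

    QFile out("capture_raw_iq.bin");
    if (!out.open(QIODevice::WriteOnly)) {
        qWarning() << "Cannot open output file";
        return 1;
    }

    // Stream whatever the server sends into the .bin file until it closes
    // the connection or stops sending.
    qint64 total = 0;
    while (socket.waitForReadyRead(5000))
        total += out.write(socket.readAll());

    out.close();
    qDebug() << "Stored" << total << "bytes";
    return 0;
}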

In FIGURE 3, the internal operation of the dCAP server is illustrated, assuming that we would like to access the raw samples from the radio front end. The first block that the samples enter inside the TUTGNSS BPU is the baseband converter unit (BCU). This is where the dCAP hardware probes listen in on the signal samples. Through these probes, the signals are diverted to the first-in-first-out (FIFO) data collector on the dCAP server (CPU-1) in addition to their usual route through the further baseband processing blocks of the receiver. After the FIFO collector, the data undergoes clock arbitration, time synchronization, and master-slave synchronization, before being buffered into the on-chip Synchronous Dynamic Random Access Memory (SDRAM), where it waits until the dCAP server transfers it through the Ethernet-based local network to the requesting dCAP client within the Control-PC. In the case where different internal data has to be monitored, the probes simply reorient to the correct access point within the correct hardware entity (for example, to monitor the signal C/N0, the probes access the tracking loops).

FIGURE 3. Block schematic of an example of the dCAP internal operation.

TUTGNSS Receiver Performance Testing

During the GPS L1 performance testing of the TUTGNSS receiver, the reference receiver position in the simulator was set randomly. Ionosphere and troposphere errors were turned off in the simulator. On average, 100 iterations were performed for each test, and the total duration to complete all tests was two weeks. dCAP was used in monitoring the tracking channels and extracting information such as the C/N0, carrier Doppler, and code-delay estimates for the satellites being tracked. Access to these parameters enabled testing the acquisition and tracking sensitivity of the TUTGNSS receiver, thus confirming the results of the tests performed using AutoPET.

Acquisition Sensitivity. Acquisition sensitivity for the TUTGNSS receiver was measured to be -141.5 dBm via AutoPET and -141 dBm via dCAP. Each coherent integration interval was 4 milliseconds, and 256 such intervals were integrated non-coherently. Using AutoPET, 100 acquisition iterations were performed at every power level, and the average number of satellites acquired was recorded. It was observed that no satellites were acquired at -142 dBm. The acquisition sensitivity test using dCAP involved extracting the carrier Doppler and code-delay estimates. A successful acquisition was assumed only if the code-delay estimate error was less than ±1 chip (300 meters) and the carrier Doppler estimate error was less than ±150 Hz. Based on these criteria, 96.72% of acquisitions were found to be successful when the satellite power was maintained at -141 dBm in the simulator as shown in the histograms in FIGURES 4 and 5.

FIGURE 4. Code-delay estimate within ±1 chip (300 meters).

FIGURE 5. Carrier Doppler estimate within ±150 Hz.
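
The success criterion applied to the extracted estimates can be sketched as follows, with a hypothetical data structure standing in for the actual post-processing scripts: an acquisition is declared successful if the code-delay error is within ±1 chip and the Doppler error within ±150 Hz, and the success percentage is taken over all iterations.

// Minimal sketch of the acquisition success criterion applied to dCAP
// output: code-delay error within +/-1 chip and carrier Doppler error
// within +/-150 Hz. The data structure and truth source are hypothetical.
#include <cmath>
#include <cstdio>
#include <vector>

struct AcquisitionResult {
    double codeDelayErrorChips;  // estimated minus true code delay, in chips
    double dopplerErrorHz;       // estimated minus true carrier Doppler, in Hz
};

double successRate(const std::vector<AcquisitionResult> &results)
{
    int successes = 0;
    for (const AcquisitionResult &r : results) {
        if (std::fabs(r.codeDelayErrorChips) < 1.0 &&   // +/-1 chip (~300 m)
            std::fabs(r.dopplerErrorHz) < 150.0)        // +/-150 Hz
            ++successes;
    }
    return results.empty() ? 0.0 : 100.0 * successes / results.size();
}

int main()
{
    // Toy values for illustration only.
    std::vector<AcquisitionResult> demo = {
        {0.2, 40.0}, {-0.8, -120.0}, {1.3, 60.0}   // third one fails the chip test
    };
    std::printf("Successful acquisitions: %.2f %%\n", successRate(demo));
    return 0;
}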

Tracking Sensitivity. Tracking sensitivity for the TUTGNSS receiver was measured to be -151 dBm via both tools, assuming a coherent integration interval of 20 milliseconds. Using AutoPET, 100 tracking iterations were performed at every power level and the average number of satellites tracked was recorded. Using dCAP, this test was performed by selecting one satellite and observing the receiver’s C/N0 estimate for this satellite under high and low signal-power conditions. Twenty tracking iterations of 90 seconds each were performed for a particular satellite. In each iteration, the satellite power in the simulator was maintained at the nominal level of -130 dBm (equivalent to a C/N0 of 38 dB-Hz in the receiver) for the first 30 seconds. Subsequently, the power of the satellite was dropped to -151 dBm (equivalent to a C/N0 of 17 dB-Hz in the receiver).

As visible in Figure 6, the receiver was able to continue tracking the satellite at -151 dBm in 19 out of the 20 iterations. In the case where tracking was lost, the C/N0 can be seen to diverge rapidly to 0. To make sure that in the remaining 19 cases the receiver was really tracking the satellite at low power, the power of the satellite was increased again after an additional 30 seconds. In each of the 19 cases, the receiver successfully continued to track the satellite.

FIGURE 6. Tracking C/N0 in one tracking channel using dCAP.
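
The loss-of-lock check applied to the dCAP C/N0 traces can be sketched as follows. The 1-Hz sampling, the time windows, and the 5 dB-Hz threshold are illustrative assumptions rather than published test criteria; the idea is simply that tracking is declared lost when the estimated C/N0 collapses toward zero during the low-power interval.

// Minimal sketch of flagging loss of lock from a dCAP C/N0 trace. The
// sampling rate, time windows, and threshold are illustrative assumptions.
#include <cstdio>
#include <vector>

// Returns true if the receiver kept tracking through the low-power interval,
// i.e. the C/N0 never collapsed toward zero between t = 30 s and t = 60 s.
bool keptTracking(const std::vector<double> &cn0DbHz,      // one sample per second
                  double lossThresholdDbHz = 5.0)
{
    for (size_t t = 30; t < 60 && t < cn0DbHz.size(); ++t)
        if (cn0DbHz[t] < lossThresholdDbHz)
            return false;   // C/N0 diverged toward 0: tracking lost
    return true;
}

int main()
{
    // Toy trace: 38 dB-Hz at nominal power, about 17 dB-Hz after the power drop.
    std::vector<double> trace(90, 38.0);
    for (size_t t = 30; t < 90; ++t)
        trace[t] = 17.0;
    std::printf("Kept tracking: %s\n", keptTracking(trace) ? "yes" : "no");
    return 0;
}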

3D Position Accuracy and TTFF. Computation of the position fix was performed using a least-squares algorithm without any filtering. Using only AutoPET, 100 position-fix iterations were performed and the average 3D error in meters was computed. Within the same test case, the time for achieving a position fix was also recorded. The initial (0–30 seconds) position-fix estimates are not very accurate, because only the first four acquired satellites are used for the position computation. As more satellites are acquired and tracked, their inclusion in the computation gradually improves the position accuracy to within 1 meter. The average TTFF was computed to be 60.59 seconds.
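
The post-processing for this test case can be pictured with the short sketch below: the 3D error of each first fix is computed against the simulated reference position, and the TTFF values are averaged over all iterations. The data structure, reference coordinates, and sample values are toy illustrations, not the published results.

// Minimal sketch of the 3D-error and TTFF statistics for the position test
// case. All numeric values below are toy illustrations only.
#include <cmath>
#include <cstdio>
#include <vector>

struct Fix {
    double x, y, z;        // ECEF position of the fix, in meters
    double ttffSeconds;    // time from cold start to this first fix
};

int main()
{
    // Hypothetical reference position set in the simulator (ECEF, meters).
    const double refX = 2884000.0, refY = 1342000.0, refZ = 5510000.0;

    // One first fix per iteration, as logged by AutoPET (toy values).
    std::vector<Fix> fixes = {
        {2884000.4, 1342000.2, 5510000.3, 58.0},
        {2883999.7, 1341999.9, 5509999.6, 63.0}
    };

    double sumErr = 0.0, sumTtff = 0.0;
    for (const Fix &f : fixes) {
        sumErr += std::sqrt((f.x - refX) * (f.x - refX) +
                            (f.y - refY) * (f.y - refY) +
                            (f.z - refZ) * (f.z - refZ));
        sumTtff += f.ttffSeconds;
    }
    if (!fixes.empty())
        std::printf("Mean 3D error: %.2f m, mean TTFF: %.2f s\n",
                    sumErr / fixes.size(), sumTtff / fixes.size());
    return 0;
}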

Validity of C/N0 Estimator. FIGURE 7 presents a comparison of C/N0 measurements between the TUTGNSS receiver (extracted using dCAP) and a commercial receiver. The input power from the simulator was varied between -130 dBm and -151 dBm in steps of around 2 dB for 10 seconds each. The C/N0 readings from the two receivers were measured at each power level and plotted on the same scale. The reference power level represents the C/N0 readings of a hypothetical (ideal) receiver with zero radio front-end losses. As the figure shows, on average there is close conformance between the estimated values of C/N0 in the two receivers. The difference between the two receivers and the reference is approximately 5 dB, which includes radio front-end noise and other losses. The TUTGNSS receiver displays less peak-to-peak variation in its C/N0 estimates than the commercial receiver.

FIGURE 7. C/N0 measurement using dCAP: Comparison between TUTGNSS, a commercial, and a hypothetical receiver.

Other Uses of dCAP. During initial prototype validation, we noticed that satellite tracking was inconsistent even under high C/N0 conditions. dCAP was used to extract detailed baseband tracking information that helped to identify the source of the problem: signal anomalies due to insufficient clock buffering on an experimental RF front end, as shown in FIGURE 8. Such anomalies would have been impossible to detect with traditional black-box testing practices. Once the problem was rectified, dCAP was used once again to monitor the RF front-end signals and the performance of the baseband tracking loops; FIGURES 9 and 10 show the resulting marked improvement in the receiver signal processing and satellite tracking performance.

FIGURE 8. Signal anomaly in the Q-branch signal due to insufficient clock buffering in the experimental RF front end: detected using dCAP.

FIGURE 9. Code Doppler extracted from one tracking loop.

FIGURE 10. Carrier Doppler extracted from one tracking loop using dCAP.

Conclusion

In this article, we have demonstrated the testing of the TUTGNSS prototype receiver using AutoPET and dCAP. Results were presented and analyzed, and conclusions drawn, for the GPS L1 performance of the receiver. Furthermore, the procedures can be easily replicated through software modifications for testing the more advanced multi-frequency, multi-constellation modes of the receiver.

In addition to the benefits of automation in terms of improved accuracy and personnel efficiency, the proposed AutoPET is a cost-effective solution for anyone working on GNSS receiver technology who needs to understand its most important performance parameters. The tool is portable (software platform-independent), easy to install, and easy to run on any computer with basic scientific software. From an academic point of view, dCAP is useful in laboratory exercises for teaching researchers and university students the spectral characteristics of GNSS signals at every stage deep inside the receiver. Together, these tools have assisted in the complete characterization of the TUTGNSS receiver at TUT, and they can be easily adapted, enhanced, and applied to other research-based receivers as well. In other words, the proposed research has academic as well as practical appeal.

Acknowledgments

This research work received support from the Tampere Doctoral Programme in Information Science and Engineering (TISE), Nokia Foundation, and the Ulla Tuominen Foundation. It has also been partially supported by the Academy of Finland (under the projects: 251138 “Digitally-Enhanced RF for Cognitive Radio Devices”, and 256175 “Cognitive Approaches for Location in Mobile Environments”). We wish to gratefully acknowledge each of these institutions. This article is based on the paper “Automated Test-bench Infrastructure for GNSS Receivers – Case Study of the TUTGNSS Receiver” presented at the 26th International Technical Meeting of the Satellite Division of The Institute of Navigation held in Nashville, Tennessee, September 16–20, 2013.

Manufacturers

The tests described in this article used a Spirent Federal Systems STR4500 multi-channel GPS/SBAS simulator and a u-blox AG EVK-5P GNSS receiver evaluation kit with a LEA-5P receiver module.


SARANG THOMBRE is a GNSS research scientist in the Department of Navigation and Positioning at the Finnish Geodetic Institute (FGI), Helsinki.

JUSSI RAASAKKA is a GNSS R&D scientist at Honeywell International s.r.o. in the Czech Republic.

TOMMI PAAKKI is a teaching assistant and a doctoral student at the Department of Electronics and Communications Engineering, Tampere University of Technology (TUT).

FRANCESCANTONIO DELLA ROSA is the project manager of the Multitechnology Positioning Professionals (MULTI-POS) Marie Curie Initial Training Network and a research scientist at TUT.

MIKKO VALKAMA is a full professor and the head of the Department of Communications Engineering at TUT.

LAURA RUOTSALAINEN is the deputy head of the Department of Navigation and Positioning and a specialist research scientist at FGI.

HEIDI KUUSNIEMI is a professor and the acting head of the Department of Navigation and Positioning at FGI.

JARI NURMI is a professor in the Department of Electronics and Communications Engineering at TUT.


FURTHER READING

• Authors’ Conference Paper

“Automated Test-bench Infrastructure for GNSS Receivers – Case Study of the TUTGNSS Receiver” by S. Thombre, J. Raasakka, T. Paakki, F. Della Rosa, M. Valkama, and J. Nurmi in Proceedings of ION GNSS+ 2013, the 26th International Technical Meeting of the Satellite Division of The Institute of Navigation, Nashville, Tennessee, September 16–20, 2013, pp. 1919–1930.

• TUTGNSS

“TUTGNSS – University Based Hardware/Software GNSS Receiver for Research Purposes” by T. Paakki, J. Raasakka, F. Della Rosa, H. Hurskainen, and J. Nurmi in Proceedings of Ubiquitous Positioning Indoor Navigation and Location Based Service (UPINLBS) 2010, Helsinki, Finland, October 14–15, 2010, doi: 10.1109/UPINLBS.2010.5654337.

• Automated GNSS Receiver Testing

“GPS Interference Testing: Lab, Live, and LightSquared” by P. Boulton, R. Borsato, B. Butler, and K. Judge in InsideGNSS, Vol. 6, No. 4, July/August 2011, pp. 32–45.

“Software-based GNSS Signal Simulators: Past, Present and Possible Future” by S. Thombre, E.S. Lohan, J. Raquet, H. Hurskainen, and J. Nurmi, in Proceedings of ENC GNSS 2010, the European Navigation Conference 2010, Braunschweig, Germany, October 19–21, 2010.

• GNSS Receiver Testing in General

“Simulating GPS Signals: It Doesn’t Have to Be Expensive” by A. Brown, J. Redd, and M.-A. Hutton in GPS World, Vol. 23, No. 5, May 2012, pp. 44–50.

“Realistic Randomization: A New Way to Test GNSS Receivers” by A. Mitelman in GPS World, Vol. 22, No. 3, March 2011, pp. 43–48.

“Record, Replay, Rewind: Testing GNSS Receivers with Record and Playback Techniques” by D.A. Hall in GPS World, Vol. 21, No. 10, October 2010, pp. 28–34.

• NMEA 0183

NMEA 0183, The Standard for Interfacing Marine Electronic Devices, Ver. 4.10, published by the National Marine Electronics Association, Severna Park, Maryland, June 2012.

“NMEA 0183: A GPS Receiver Interface Standard” by R.B. Langley in GPS World, Vol. 6, No. 7, July 1995, pp. 54–57.

Unofficial online NMEA 0183 descriptions: “NMEA data”; “NMEA Revealed” by E.S. Raymond, Ver. 2.13, November 2013.


About the Author: Richard B. Langley

Richard B. Langley is a professor in the Department of Geodesy and Geomatics Engineering at the University of New Brunswick (UNB) in Fredericton, Canada, where he has been teaching and conducting research since 1981. He has a B.Sc. in applied physics from the University of Waterloo and a Ph.D. in experimental space science from York University, Toronto. He spent two years at MIT as a postdoctoral fellow, researching geodetic applications of lunar laser ranging and VLBI. For work in VLBI, he shared two NASA Group Achievement Awards. Professor Langley has worked extensively with the Global Positioning System. He has been active in the development of GPS error models since the early 1980s and is a co-author of the venerable “Guide to GPS Positioning” and a columnist and contributing editor of GPS World magazine. His research team is currently working on a number of GPS-related projects, including the study of atmospheric effects on wide-area augmentation systems, the adaptation of techniques for spaceborne GPS, and the development of GPS-based systems for machine control and deformation monitoring. Professor Langley is a collaborator in UNB’s Canadian High Arctic Ionospheric Network project and is the principal investigator for the GPS instrument on the Canadian CASSIOPE research satellite now in orbit. Professor Langley is a fellow of The Institute of Navigation (ION), the Royal Institute of Navigation, and the International Association of Geodesy. He shared the ION 2003 Burka Award with Don Kim and received the ION’s Johannes Kepler Award in 2007.