Expert Advice: Product Testing: Simulation and Beyond

May 1, 2013

By Pierre Nemry and Jean-Marie Sleewaegen, Septentrio Satellite Navigation

Today’s customers ask for high-accuracy positioning everywhere, even in the most demanding environments. The days when a receiver’s only requirement was to track GPS L1 and L2 signals in open-sky conditions are long gone. State-of-the-art receivers operate in increasingly difficult conditions, cope with local radio-frequency interference, survive non-nominal signal transmissions, decode differential corrections from potentially untrusted networks — and more!

Difficult real-life operating conditions are typically not addressed in textbooks or in the specialized literature, and yet they constitute the real challenge faced by receiver manufacturers. Most modern GNSS receivers will perform equally well in nominal conditions, or when subjected to nominally degraded conditions such as those corresponding to standard multipath models. However, the true quality of a GNSS receiver reveals itself in the environment in which it is intended to be used.

In view of this, a GNSS manufacturer’s testing revolves around three main pillars:
◾ identifying the conditions and difficulties encountered in the environment of the intended use,
◾ defining the relevant test cases, and
◾ maintaining the test-case database for regression testing.

In developing new receiver functionality, it is important to involve key stakeholders to comprehend the applications in which the feature will be used and the distinctive environment in which the receiver will function. For example, before releasing the precise-point-positioning (PPP) engine for the AsteRx2eL, we conducted a field-test campaign lasting a full month on a ship used for dredging work on the River Thames and in the English Channel. This enabled engineers to capture different types of sea-wave frequency and amplitude, assess multipath and signal artifacts, and characterize PPP correction data-link quality.

Most importantly, we immersed the team in the end-user environment, aboard a working boat rather than in a test setup contrived for the purpose. As another example, in testing our integrated INS/GNSS AsteRxi receiver for locating straddle carriers in a container terminal, we spent months collecting data with the terminal operator. This was necessary to understand the specifics of a port environment, where large metal structures (shore cranes, container reach-stackers, docked ships) significantly impair signal reception.

Furthermore, close collaboration between the GNSS specialist, the system integrator, and the terminal owner was essential to confirm that everything worked properly as a system. In both examples, in situ testing provided invaluable insight into the operating conditions the receivers have to deal with, far surpassing what a standard simulator test or an occasional field trip can offer.

Once an anomaly or an unusual condition has been identified in the field, the next step is to reproduce it in the lab. This requires a thorough understanding of the issue’s root cause and the ability to leverage the lab environment to reproduce it in the most efficient way. Abnormalities may be purely data-centric or algorithmic, in which case the best approach to investigate and test them is software-based. For example, issues with non-compliance to the satellite interface control document, or irregularities in the differential correction stream, are typically addressed at the software level, the input being a log file containing GNSS observables, navigation bits, and differential corrections.
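As a rough illustration of what such a software-level replay can look like, the sketch below feeds logged navigation frames through a compliance checker and flags non-nominal content. The function and field names are illustrative assumptions, not Septentrio’s actual tooling; the two field checks shown (GPS PRN range 1–32, 4-bit URA index) follow the GPS interface specification.

```python
# Hypothetical sketch of a software-level replay test: feed logged
# navigation data back through a checker and flag non-compliant frames.
# Names (check_ephemeris_frame, replay_log) are illustrative only.

def check_ephemeris_frame(frame: dict) -> list:
    """Return a list of ICD-compliance violations for one logged frame."""
    issues = []
    # GPS PRN numbers are defined in the range 1..32.
    if not 1 <= frame.get("prn", 0) <= 32:
        issues.append("PRN out of range")
    # The URA index is a 4-bit field: 0..15.
    if not 0 <= frame.get("ura_index", 0) <= 15:
        issues.append("URA index out of range")
    return issues

def replay_log(frames):
    """Replay logged frames and collect all violations with their index."""
    return [(i, issue)
            for i, frame in enumerate(frames)
            for issue in check_ephemeris_frame(frame)]

# Example log: one compliant frame, one with an out-of-range PRN.
log = [{"prn": 12, "ura_index": 2}, {"prn": 40, "ura_index": 2}]
violations = replay_log(log)
```

Because the test operates purely on logged data, the same file can be replayed deterministically after every software change, which is what makes it suitable for automation.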

Other issues are better reproduced on simulators, for example those linked to receiver motion, those associated with a specific constellation status, or location-dependent problems. Finally, certain complicated conditions do not lend themselves to simulation at all. For example, the diffraction pattern that appears at the entrance of a tunnel is hard to represent using standard simulator scenarios. For these circumstances, being able to record and play back the complete RF environment is fundamental.

Over the years, GNSS receiver manufacturers have inventoried numerous cases encountered in the field with customers or during their own testing. For each case, once it has been modeled and can be reproduced in the lab, it is essential to keep it current. As software evolves and the development team changes, the danger exists that, over time, the modifications addressing a dysfunctional situation get lost and the same problem is reintroduced. This is especially true for conditions that occur infrequently or unsystematically. A good example is the GLONASS frequency changes, which arise in an unpredictable way and are therefore very difficult for the receiver designer to anticipate. This stresses the importance of regression testing. It is not enough to model all intricate circumstances for simulation, or to store field-recorded RF samples for later replay. The conditions of all previously encountered incidents must be recreated and regularly retested in an automated way, to maintain and guarantee product integrity.

The coverage of an automated regression test system must range from the simplest sanity check of replies to user commands to the complete characterization of positioning performance, tracking noise, acquisition sensitivity, and interference rejection. Every night in our test system, positioning algorithms including all recent changes are fed with thousands of hours of GNSS data, and their output is compared to expected results to flag any degradation. Alongside the algorithmic tests, hardware-in-the-loop tests are executed on a continuous basis using live signals, constellation simulators, and RF replay systems, with the signals split and injected in parallel into all our receiver models. Such a fully automated test system ensures that any regression is found in a timely manner, while developers concentrate on new designs, and that a recurring problem can be spotted immediately. The test-case database is a valuable asset and an essential piece of a GNSS company’s intellectual property. It evolves continuously as new challenges are detected or come to the attention of an attentive customer-support team. Developing and maintaining this database and all the associated automated tests is a cornerstone of GNSS testing.
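A minimal sketch of the comparison step at the heart of such a nightly run, under assumed data shapes: each run’s per-epoch position errors against ground truth are compared with stored reference results, and any epoch whose 3D error grows beyond a tolerance is flagged. The function name, field layout, and tolerance are hypothetical.

```python
import math

def find_regressions(reference, candidate, tol_m=0.05):
    """Flag epochs where the candidate run degrades beyond tol_m.

    Both inputs map epoch -> (east, north, up) position error in metres
    against ground truth; a regression is a worsening of the 3D error norm.
    """
    flagged = []
    for epoch, ref_err in reference.items():
        cand_err = candidate.get(epoch)
        if cand_err is None:
            flagged.append((epoch, "missing epoch"))
            continue
        ref_norm = math.hypot(*ref_err)   # n-argument hypot, Python 3.8+
        cand_norm = math.hypot(*cand_err)
        if cand_norm - ref_norm > tol_m:
            flagged.append((epoch, f"error grew {cand_norm - ref_norm:.3f} m"))
    return flagged

# Example: epoch 1 of the candidate run has visibly degraded.
reference = {0: (0.01, 0.02, 0.03), 1: (0.02, 0.01, 0.02)}
candidate = {0: (0.01, 0.02, 0.03), 1: (0.20, 0.15, 0.10)}
regressions = find_regressions(reference, candidate)
```

Running this over every archived dataset each night is what turns the test-case database into an active guard against reintroduced problems, rather than a static archive.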
