Innovation: The continued evolution of the GNSS software-defined radio

Getting better all the time

In this month’s column, we review the history and future of software-defined radios (SDRs), looking in particular at GNSS SDRs.

This online version of the print article includes two bonus sections for which there wasn’t room in the magazine: New Frontiers: GNSS SDRs in Space and The Economics of SDRs.

By James T. Curran, Carles Fernández-Prades, Aiden Morrison and Michele Bavaro

Innovation Insights with Richard Langley

I had a fairly normal childhood—as a nerd. I was interested in radio and so was my sister. For her, it was the local AM radio stations where she could hear the latest Beatles’ hits on her six-transistor handheld portable. But for me, it was shortwave radio. I received a Knight-Kit two-tube regenerative shortwave receiver for Christmas 1963 when I was 14. It used one tube for the RF section and one tube for the audio amplifier. Using a random-length antenna above my mother’s clothesline, I was able to log radio stations from more than 100 countries during my high-school days.

With the pressures of university studies and starting to work for a living, I put my radio hobby on hold. But on an Air Canada flight to a conference early in 1985, I spotted an advertisement in the inflight magazine for the diminutive Sony ICF-7600D portable shortwave receiver — the height of miniaturization of microprocessor-controlled receivers at the time — and I acquired one in Hong Kong in May of that year before starting a lecture tour in the People’s Republic of China. I used the Sony receiver extensively at home and on trips overseas and heard many interesting broadcasts over the years including President Gorbachev’s resignation speech live from Radio Moscow.

Fast forward to 2013, when I purchased my first software-defined radio (SDR) receiver, a FUNcube Dongle Pro+, with frequency coverage from longwave up to the L-band. Interfaced via USB to a computer and bespoke software, an SDR receiver allows one to monitor a wide swath of the radio spectrum or record it for future analysis as in-phase and quadrature components. I have since acquired several other SDR receivers, and the capability of these units keeps getting better and better, delighting me and my fellow radio hobbyists. But these improvements in SDR technology extend to other uses of the radio spectrum including GNSS. In this month’s column, we review the history and future of SDRs looking in particular at GNSS SDRs. And what the Beatles said about improving one’s nature as a human being also aptly describes the performance of SDRs: it’s getting better all the time.


The software-defined radio (SDR) admits many interpretations, depending on the context in which it is designed and used. As a working definition, we take it to be a reconfigurable radio system whose characteristics are partially or fully defined via software or firmware. In various forms, the SDR has permeated a wide range of user groups, from military and business to academia and the hobby radio community.

SDR technology has evolved steadily over the decades following its birth in the mid-1980s, with various surges of activity generally aligned with new developments in related technologies (processor power, serial buses, signal-processing techniques and SDR chipsets). At present, it appears that we are experiencing one such surge, and the GNSS SDR is expanding in many directions. The proliferation of collaboration and code-sharing sites such as GitHub has enabled communities to share and co-develop receiver technology; the rise of maker culture and crowdsourcing has led to the availability of high-performance radio-frequency (RF) front ends; and the adoption of SDRs by some major telecommunications companies has led to the availability of suitable integrated circuits.

These contributing factors have played a part in an increased uptake of GNSS SDRs in military, scientific and commercial applications. In this article, we explore the recent trends and the technology behind them.

SDR TOPOLOGIES

The software-defined radio for GNSS has evolved over the past decade, both in its adoption of new frequencies, signals and systems as they have become available, and in its adoption of new processing platforms and their associated processing techniques. Shown in FIGURE 1 is a simplified depiction of how the topology of the software-defined GNSS receiver has evolved over the years (a–d), with a hint at where it might go next (e, f).

FIGURE 1. A simplified depiction of different SDR topologies (GPP = general-purpose processor, GPU = graphics processing unit, FPGA = field-programmable gate array, SoC = system on chip, RFSoM = radio-frequency system on module, RFSoC = radio-frequency system on chip).

In a traditional GNSS SDR, as depicted in Figure 1(a), the RF front end typically interfaces with the general-purpose processor (GPP) through a standard bus, and intermediate-frequency (IF) samples are streamed to a buffer. Once on the GPP, basic operations such as correlation, acquisition/tracking, measurement generation and positioning are performed.

Of all the operations performed by a GNSS receiver, correlation is (by some orders of magnitude) the most computationally intensive. However, the correlation operations are relatively simple, often requiring only integer arithmetic, and can be easily parallelized. When running on modern processors, optimized software receivers can avail themselves of multi-threading (task parallelism), or the operations can be vectorized to exploit data parallelism (single instruction, multiple data).

Beyond a certain number of GNSS signals and a certain bandwidth, a GPP simply cannot cope, and many SDR receivers have looked to hardware acceleration for the correlation process. This has taken the form of either a graphics processing unit (GPU) or a field-programmable gate array (FPGA), as depicted in Figure 1(b), both of which are well suited to highly parallel tasks. These processing platforms can be powerful and efficient, and so can alleviate almost all of the challenges associated with correlation. Hardware acceleration is not the only way to lighten the processing burden, as it is also possible to delegate the correlation task to a network of computers. This “cloud” receiver architecture, depicted in Figure 1(e), has received particular attention of late, showing promise for certain niche applications. The computation-in-the-cloud trend has partially reversed with the proliferation of many-core desktop and mobile processors, but at a certain level of signal or processing complexity, the approach remains applicable.

Nowadays, data throughput is an important consideration. When considering multi-constellation, multi-frequency receivers, the objective is often to preserve signal quality, which implies high bandwidth and high digitizer resolution. A triple-frequency front end might easily produce data in excess of 100, or even 500, megabytes per second. When this data is delivered to the GPP or elsewhere in the host computer, and then offloaded to the GPU (or any other hardware accelerator), it might be handled twice, exacerbating the bottleneck. To overcome this problem (and for other practical architectural reasons), it can be preferable to interface the front end directly with the accelerator, where correlation is performed, and leave the brains of the receiver (including loop closure; data processing; and position, velocity and time computation) on the GPP. This is a particularly convenient approach when using an FPGA accelerator, as shown in Figure 1(c).
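
To make the throughput figures concrete, the short sketch below works through the arithmetic for a hypothetical triple-frequency front end streaming complex samples. The sample rates and bit depths are purely illustrative assumptions, not the specification of any particular device.

```cpp
// Back-of-the-envelope I/Q data-rate estimate for a multi-frequency front end.
// The sample rates and bit depths are illustrative assumptions only.
#include <cstdio>

int main() {
    struct Band { const char* name; double fs_msps; int bits; int iq_components; };
    const Band bands[] = {
        {"L1", 25.0, 8, 2},   // ~25 Msps complex, 8-bit I and 8-bit Q (assumed)
        {"L2", 25.0, 8, 2},
        {"L5", 50.0, 8, 2},   // wider signal, higher assumed rate
    };
    double total = 0.0;
    for (const Band& b : bands) {
        // bytes per second = samples/s * components per sample * bytes per component
        double mbps = b.fs_msps * 1e6 * b.iq_components * (b.bits / 8.0) / 1e6;
        std::printf("%s: %6.1f MB/s\n", b.name, mbps);
        total += mbps;
    }
    std::printf("Total: %6.1f MB/s\n", total);  // comfortably above 100 MB/s
    return 0;
}
```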

A similar architecture can be achieved using modern system-on-chip (SoC) integrated circuits (ICs), which can offer a large FPGA and a powerful GPP on the same piece of silicon, as depicted in Figure 1(d). Indeed, a number of receivers using this architecture have seen commercial and scientific success, having many of the benefits of dedicated silicon while retaining the benefits of the software-defined radio (for example, the Swift Navigation Piksi Multi GNSS Module). Recent developments in the field have seen the world’s first radio-frequency system-on-module (RFSoM) and system-on-chip (RFSoC) devices, targeting 5G mobile communications applications. With an architecture similar to that of Figure 1(f), the IC touts up to eight-input, eight-output (8×8) multiple-input, multiple-output (MIMO) operation with 12-bit analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) running at rates of 2 or 4 gigasamples per second. Depending on how this trend evolves (and assuming lighter versions become available), this might offer an exciting new platform for GNSS SDRs, simultaneously capable of multi-frequency and multi-antenna operation.

RF HARDWARE: THE ENABLER

GNSS SDRs see the world through a hardware peripheral, and the capability of this hardware defines the boundary between what the receiver can and cannot do. In essence, the front-end peripheral converts one or more analog RF signals at the antenna into a stream, or a sequence of packets, of digital baseband/IF samples delivered to the GPP.

A software-defined radio for GNSS benefits greatly from being flanked on both sides in the RF spectrum by signals of wide civilian interest. Applications such as Digital Video Broadcasting — Terrestrial (DVB-T) and Digital Video Broadcasting — Satellite Second Generation (DVB-S2) receivers have resulted in the availability of a wide range of low-cost RF ICs that are tunable to GNSS frequencies (typically spanning 900 MHz to 2.1 GHz), which, along with dedicated GPS ICs, were at the heart of early GNSS SDR front ends. Later ICs designed around the 2G/3G/4G mobile communications standards brought higher instantaneous bandwidth, higher ADC resolution, MIMO and re-transmit capability. With the increase in popularity of the software-defined radio for cognitive radio, Wi-Fi, 3G and Long-Term Evolution (LTE), and buoyed by a crowdfunding movement, a wide range of front-end peripherals quickly appeared. Many of these front ends are compatible with GNSS, offering significantly increased performance relative to their predecessors. A selection of GNSS-compatible SDR peripherals (both new and old) is shown in TABLE 1.

TABLE 1. A selection of GNSS-compatible SDR front ends (Half duplex = transmit and receive but not simultaneously; Full duplex = transmit and receive simultaneously).

Reference Oscillators. Although many of the capabilities of modern telecommunications ICs exceed what is needed for GNSS (ADC resolution, frequency range, bandwidth and linearity, for example), clock stability is often inadequate. Communications signals are generally received at a high signal-to-noise ratio, so the carrier can be easily recovered even with very poor clock stability.

In contrast, clock stability can be critical for GNSS applications because of the comparatively long coherent integration periods required (greater than 1 millisecond), for a couple of reasons. First, clock accuracy is important because the search-space granularity is related to the integration period and the size of the search space to the frequency uncertainty; an uncertainty of some tens of kilohertz might increase acquisition time. Second, short-term stability is important because a large degree of phase wander can be challenging when attempting to track the carrier phase with a loop-update rate below 1 kHz. In fact, this issue was so pronounced on early RTL-SDR DVB-T front ends that later revisions upgraded the quartz reference oscillator to a more respectable 0.5-parts-per-million temperature-compensated crystal oscillator (TCXO). Typically, a TCXO with an accuracy of better than 1 part per million is preferable, but this metric alone is far from sufficient.
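
As a rough illustration of the first point, the sketch below converts oscillator accuracy in parts per million into carrier-frequency uncertainty at L1 and into a count of Doppler search bins. The ±5-kHz satellite Doppler bound and the bin spacing of roughly half the inverse coherent integration time are common rules of thumb assumed here for illustration, not figures taken from this article.

```cpp
// Illustrative sketch: how reference-oscillator accuracy inflates the
// frequency search space during acquisition. All numbers are assumptions
// (L1 carrier, +/-5 kHz satellite Doppler, bin spacing ~ 1/(2*Tcoh)).
#include <cmath>
#include <cstdio>

int main() {
    const double f_l1 = 1575.42e6;          // Hz
    const double sat_doppler = 5e3;         // +/- 5 kHz worst-case satellite Doppler
    const double t_coh = 1e-3;              // 1 ms coherent integration
    const double bin_width = 1.0 / (2.0 * t_coh);   // ~500 Hz bins (rule of thumb)

    const double ppm_values[] = {10.0, 1.0, 0.5};
    for (double ppm : ppm_values) {
        double clock_offset = f_l1 * ppm * 1e-6;          // carrier uncertainty in Hz
        double span = 2.0 * (sat_doppler + clock_offset); // total search span
        int bins = static_cast<int>(std::ceil(span / bin_width));
        std::printf("%4.1f ppm TCXO: +/- %.1f kHz clock offset, %d Doppler bins\n",
                    ppm, clock_offset / 1e3, bins);
    }
    return 0;
}
```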

Depending on the class of signals for which the SDR front end will be used, the characteristics of the oscillator, the configuration of its support electronics, and even whether the mixers and the analog-to-digital conversion process use the same reference can vary. For example, not all TCXOs are suitable for GNSS applications because of the way in which they internally apply their temperature compensation. If a given TCXO uses a stepwise compensation scheme based on any form of digital feedback, the size of the resulting steps can severely impact the GNSS tracking loops. Even if a given TCXO has a suitable compensation curve and implementation, as well as acceptably low intrinsic phase noise, every other link in the clock chain must preserve this performance. In some front-end implementations, swapping out a low-quality clock for a higher-quality one is sufficient, but in others there can be design limitations in the oscillator power supply, the oscillator signal conditioning, subsequent clock-generation steps or the distribution routing that prevent the design from ever being suitable for GNSS use. This can be critical in cases where the carrier phase is of interest, for example, where phase coherence between channels is important for multi-frequency linear combinations or for multi-antenna systems.

Fortunately, many modern SDR front ends support the use of an external clock. This feature can also be important when attempting to combine two front-end peripherals to effect a dual-frequency or dual-antenna software receiver.

The Bus. An intrinsic bottleneck for any SDR system is the fact that some form of connection or bus is needed to carry data from the collection point to the processing element. In a fully integrated system, this connection still exists, but it is typically a trace on a circuit board or even a pathway within an integrated device. In contrast, in an SDR this often takes the form of a cable or connector between the physically discrete system modules. In cases where the devices are discrete, it is often necessary to implement some data buffering on both ends of the bus.

The suitability of a particular bus is often determined by the sustained data throughput required by the application and, in some cases, by the latency of the bus. A number of interfaces popular in modern SDR front ends are shown in FIGURE 2, illustrating the nominal throughput and the minimum latency of each. In the case of a GNSS SDR, the minimum conceivable throughput would be just a few megabytes per second, but a system could easily require in excess of 200 megabytes per second for multi-frequency, high-bit-depth data.

Of course, in post-processing applications, bus latency is not a factor. However, certain applications may require that this latency be small, bounded or otherwise deterministic. Applications such as closed-loop vehicle control or certain safety systems might impose tight requirements on latency: high or unpredictable latency in GNSS measurements might lead to loop instability in the case of a control system, or might erode safety margins. Although the trend in modern interfaces is toward higher throughput, only certain interfaces offer low latency.

FIGURE 2. Bandwidth vs. latency scatter plot for popular buses.

The Silicon. In comparison with less flexible, fixed-function GNSS receiver chips, GNSS SDR hardware platforms trade one to three orders of magnitude in power consumption and system size for substantial control over the characteristics of the design. Moreover, one of the other main differences between GNSS front ends and general-purpose SDR front ends is the number of bits of ADC resolution and the conversion linearity; both contribute to power consumption. However, it is worth considering that GNSS-specific front ends have not received as much attention as telecommunications front ends and, consequently, there is at least a generational gap in silicon mask technology (most GNSS products are at the 350-nanometer level).

In terms of GNSS-specific devices, products such as the SiGe SE4110L, the Maxim MAX2769 and Saphyrion’s SM1027U provide a solution for slightly flexible L1 GPS, Galileo or, in some chip revisions, GLONASS operation. These kinds of chips support a few sampling rates and filtering configurations.

In the middle ground are the much more flexible chips from Maxim including the MAX2120 and MAX2112, which provide total L-band coverage, a myriad of filtering options, and adjustable gain control, all within a 0.3-watt power budget per channel (RF portion only). These chips allow for single-band coverage of adjacent GNSS signals such as GPS and GLONASS L1 or L2 in a single non-aliased RF band.

In terms of multi-channel options, devices such as the Maxim MAX19994A and the NTLab NT1065 offer dual- and quad-channel functionality, respectively. Similar functionality can be achieved by pairing downconversion and IF-receiver ICs such as the Linear Technology LTC5569 dual active downconverting mixer and the Analog Devices AD6655 IF receiver, which might offer sufficient performance for high-accuracy dual-frequency positioning.

Higher up the cost, power and complexity scale are radios designed explicitly to support SDR applications that happen to cover the GNSS bands, such as the Lime Microsystems LMS6002D/LMS7002M and the Analog Devices AD9364. Notably, these provide both receive and transmit channels and frequency coverage up to 6 GHz.

Another interesting and relevant trend is in the use of direct RF sampling ICs, which offer the possibility of full L-band coverage and multi-antenna support. Examples include the Texas Instruments ADS54J40, which offers a dual-channel, 14-bit, 1.0-gigasamples-per-second ADC, or the LM97600 offering a 7.6 bit, quad-channel, 1.25-gigasamples-per-second ADC.

Future Trends, Limitations and Opportunities. Most of the innovation in SDR peripherals has taken place in the telecommunications domain. The GNSS SDR community, being comparatively small, has benefited from these innovations, insofar as they were applicable, but has had little influence over their design.

Looking at the bigger picture, it is clear that GNSS SDRs will simply have to follow the road paved by telecommunications SDRs. We will have to use what is made available, and so future trends in GNSS SDRs will likely be driven by the needs of the telecommunications SDR community.

So what are these trends, and will they be aligned with GNSS trends? The answer seems to be yes and no. One of the bigger trends in modern GNSS receivers is the move to dual- or multi-frequency operation, and a second trend is towards multi-antenna receivers for attitude determination or multi-element antennas for interference management. Meanwhile, telecommunications applications are almost universally using MIMO transceivers; however, they don’t seem to be moving to multiple (simultaneous) carriers.

What is particularly interesting is that the requirements of a MIMO transceiver are well aligned with those of a null-steering GNSS antenna: namely, high linearity, high ADC resolution and phase coherence between channels (provided by, for example, the Lime Microsystems LMS7002M or the Analog Devices AD9361). As a result, it is possible (or even likely) that in the near future we will see more innovation in GNSS SDRs in the area of multi-antenna processing than in multi-frequency processing.

Signal Processing Techniques for SDRs. As mentioned above, signal correlation for acquisition and tracking is the most computationally intensive operation conducted by a GNSS receiver. In software receivers, many signal-acquisition strategies are built around the fast Fourier transform (FFT) algorithm, with a signal-tracking rake of three or more correlators per signal. When targeting real-time processing, these operations need to be applied to a stream of signal samples arriving at a rate of many megasamples per second. This is a challenge for GPPs when implementing a multi-constellation, multi-frequency GNSS receiver.
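
The sketch below illustrates the FFT-based approach for a single Doppler hypothesis, assuming the FFTW3 single-precision library (an assumption of ours; compile with -lfftw3f): the circular correlation over all code phases is obtained as the inverse FFT of the product of the signal spectrum and the conjugate code spectrum. The data generated in main() is a toy stand-in for carrier-wiped front-end samples, not real signal processing.

```cpp
// Sketch of FFT-based parallel code-phase search for one Doppler hypothesis,
// assuming the FFTW3 single-precision library (link with -lfftw3f).
// circular correlation = IFFT( FFT(signal) * conj(FFT(code)) )
#include <complex>
#include <cstdio>
#include <random>
#include <vector>
#include <fftw3.h>

using cplx = std::complex<float>;

// Correlation magnitude over every code phase, for one (already carrier-wiped)
// Doppler hypothesis. All code phases are evaluated with two FFTs and one IFFT.
std::vector<float> correlate_all_phases(std::vector<cplx> signal, std::vector<cplx> code) {
    const int n = static_cast<int>(signal.size());
    std::vector<cplx> sig_f(n), code_f(n), prod(n), corr(n);
    auto as_fftw = [](cplx* p) { return reinterpret_cast<fftwf_complex*>(p); };

    fftwf_plan fwd_sig  = fftwf_plan_dft_1d(n, as_fftw(signal.data()), as_fftw(sig_f.data()),
                                            FFTW_FORWARD, FFTW_ESTIMATE);
    fftwf_plan fwd_code = fftwf_plan_dft_1d(n, as_fftw(code.data()), as_fftw(code_f.data()),
                                            FFTW_FORWARD, FFTW_ESTIMATE);
    fftwf_execute(fwd_sig);
    fftwf_execute(fwd_code);

    for (int k = 0; k < n; ++k)
        prod[k] = sig_f[k] * std::conj(code_f[k]);       // frequency-domain correlation

    fftwf_plan inv = fftwf_plan_dft_1d(n, as_fftw(prod.data()), as_fftw(corr.data()),
                                       FFTW_BACKWARD, FFTW_ESTIMATE);
    fftwf_execute(inv);

    std::vector<float> mag(n);
    for (int k = 0; k < n; ++k)
        mag[k] = std::abs(corr[k]) / n;                  // FFTW's inverse FFT is unnormalized

    fftwf_destroy_plan(fwd_sig);
    fftwf_destroy_plan(fwd_code);
    fftwf_destroy_plan(inv);
    return mag;
}

int main() {
    const int n = 4096;
    const int true_delay = 1234;                          // code phase hidden in the "signal"
    std::mt19937 rng(42);
    std::vector<cplx> code(n), signal(n);
    for (int i = 0; i < n; ++i)
        code[i] = cplx((rng() & 1) ? 1.0f : -1.0f, 0.0f); // toy +/-1 spreading code
    for (int i = 0; i < n; ++i)
        signal[i] = code[(i + n - true_delay) % n];       // code delayed by true_delay samples

    std::vector<float> mag = correlate_all_phases(signal, code);
    int peak = 0;
    for (int k = 1; k < n; ++k)
        if (mag[k] > mag[peak]) peak = k;
    std::printf("correlation peak at code phase %d (expected %d)\n", peak, true_delay);
    return 0;
}
```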

The processing task can be either alleviated or accelerated. Assistance data can allow the receiver to reduce the size of the acquisition search space, thereby dramatically reducing the overall computational load; in many cases, the software receiver is running on a host computer with many connectivity options, making such assistance readily available. Alternatively, a variety of options are available for accelerating the tasks.

Parallelization. The main approach to accelerating GNSS signal processing is parallelization. Shared-memory parallel computers can execute different instruction streams (or threads) on different processors, interleave multiple instruction streams on a single processor (simultaneous multithreading or SMT), or both. This approach is referred to as task parallelism, and it is well supported by mainstream programming languages, compilers and operating systems. It fits naturally with the architecture of a GNSS receiver, which has many channels (one per satellite and frequency band) operating in parallel on the same input data. With an appropriate design, execution can be accelerated almost linearly with the number of processing cores. However, the distribution of processing tasks across threads must be carefully designed to avoid bottlenecks (either in the processing or in memory access).
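
A minimal sketch of this channel-per-thread structure is shown below, using C++ standard threads. The channel structure and the per-channel workload are simplified placeholders of our own devising, not a real tracking engine.

```cpp
// Sketch of task parallelism: one thread per tracking channel, all reading the
// same block of input samples. Channel and workload are simplified placeholders.
#include <complex>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

using cplx = std::complex<float>;

struct Channel {
    int prn;       // satellite tracked by this channel
    cplx prompt;   // accumulated prompt correlator output
};

// Placeholder per-channel work: "correlate" the shared samples against a
// trivial stand-in for a locally generated replica.
void process_block(Channel& ch, const std::vector<cplx>& samples) {
    cplx acc(0.0f, 0.0f);
    for (std::size_t i = 0; i < samples.size(); ++i) {
        float chip = ((i + ch.prn) & 1) ? 1.0f : -1.0f;
        acc += samples[i] * chip;
    }
    ch.prompt = acc;   // each thread writes only its own channel: no locking needed
}

int main() {
    const std::vector<cplx> samples(16384, cplx(1.0f, 0.0f));   // shared input block
    std::vector<Channel> channels;
    for (int prn = 1; prn <= 8; ++prn)
        channels.push_back({prn});                              // one channel per satellite

    std::vector<std::thread> workers;
    for (Channel& ch : channels)
        workers.emplace_back(process_block, std::ref(ch), std::cref(samples));
    for (std::thread& t : workers)
        t.join();

    for (const Channel& ch : channels)
        std::printf("PRN %d prompt I = %.0f\n", ch.prn, ch.prompt.real());
    return 0;
}
```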

In combination with task parallelism, software-defined receivers can also resort to another form of parallelization: instructions that are applied to multiple data elements at the same time, thus exploiting data parallelism. This computer architecture is known as single instruction, multiple data (SIMD), in which a single operation is executed in one step on a vector of data, as illustrated in FIGURE 3.

FIGURE 3. Illustration of the operation of single-instruction multiple-data (SIMD) processors, which take a multiple-data input (arguments) and produce multiple results, given a single instruction operated in parallel in a set of processing units (PUs).

In GNSS receivers, this type of instruction can implement operations like multiply-and-accumulate across multiple (16, 32, 64 and so on) samples in a single clock cycle. Intel introduced the first instance of 64-bit SIMD extensions, called MMX, in 1997. Later SIMD extensions, SSE 1 to 4, added multiple 128-bit registers. AMD quickly followed and SIMD is now present in almost all modern processors.

Later, Intel introduced further instruction sets called Advanced Vector Extensions (AVX), featuring 256-bit registers, new instructions and a new coding scheme. In 2013, AVX2 expanded most integer commands to 256 bits, and by 2016 the introduction of AVX-512 provided 512-bit extensions. SIMD technology is also present in embedded systems: NEON is a 128-bit SIMD architecture extension for the ARMv7 Cortex-A series of processors, providing 32 registers that are 64 bits wide (with a dual view as 16 registers, 128 bits wide), while AArch64 NEON for ARMv8 processors provides 32 128-bit registers. In many cases, well-written code will be automatically vectorized by the compiler into some combination of these SIMD instructions; in other cases, they can be invoked explicitly through intrinsics.
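
The following sketch contrasts a scalar multiply-and-accumulate with an explicitly vectorized version using SSE2 intrinsics, assuming an x86 processor and illustrative 16-bit data of our own invention. In a real receiver one would more likely rely on an optimized kernel library or on compiler auto-vectorization; this is purely to show the idea.

```cpp
// Sketch of data parallelism in the correlator: a multiply-and-accumulate over
// 16-bit samples using SSE2 intrinsics (_mm_madd_epi16 multiplies eight int16
// pairs and sums adjacent products in a single instruction). In practice one
// would use an optimized library (such as VOLK) or rely on auto-vectorization.
#include <cstdint>
#include <cstdio>
#include <emmintrin.h>  // SSE2

// Scalar reference: sum of samples[i] * code[i].
int32_t correlate_scalar(const int16_t* samples, const int16_t* code, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; ++i) acc += samples[i] * code[i];
    return acc;
}

// SSE2 version: processes 8 samples per iteration (n assumed a multiple of 8).
int32_t correlate_sse2(const int16_t* samples, const int16_t* code, int n) {
    __m128i acc = _mm_setzero_si128();
    for (int i = 0; i < n; i += 8) {
        __m128i s = _mm_loadu_si128(reinterpret_cast<const __m128i*>(samples + i));
        __m128i c = _mm_loadu_si128(reinterpret_cast<const __m128i*>(code + i));
        acc = _mm_add_epi32(acc, _mm_madd_epi16(s, c));  // 8 multiplies + adds per step
    }
    int32_t lanes[4];
    _mm_storeu_si128(reinterpret_cast<__m128i*>(lanes), acc);
    return lanes[0] + lanes[1] + lanes[2] + lanes[3];
}

int main() {
    const int n = 4096;
    static int16_t samples[n], code[n];
    for (int i = 0; i < n; ++i) {
        code[i] = (i % 3 == 0) ? 1 : -1;        // dummy +/-1 spreading code
        samples[i] = code[i] * 10 + (i % 5);    // scaled, slightly "noisy" copy
    }
    std::printf("scalar: %d\n", correlate_scalar(samples, code, n));
    std::printf("sse2:   %d\n", correlate_sse2(samples, code, n));
    return 0;
}
```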

Hardware Acceleration. Another possibility for accelerating signal processing is to offload computation-intensive portions of the workload to a device external to the main GPP executing the software, as is the case with graphics processing units (GPUs). Such a processor architecture follows another parallel programming model called single instruction, multiple threads (SIMT). While in SIMD the elements of short vectors are processed in parallel, and in SMT the instructions of several threads are run in parallel, SIMT is a hybrid between vector processing and hardware threading. Currently, the Open Computing Language (OpenCL) is the most popular open GPU computing framework supporting devices from several manufacturers, while CUDA (originally, Compute Unified Device Architecture) is the dominant proprietary framework specific to Nvidia GPUs. The key idea is to exploit the computational power of both GPP cores and GPU execution units in tandem for better utilization of the available computing power. The main constraint in using GPUs is memory bandwidth: if not programmed carefully, most of the time will be spent transferring data back and forth between the GPP and the GPU instead of on the actual processing. A possible solution is an approach known as zero-copy operation, which consists of a unified address space for the GPP and the GPU that facilitates the passing of pointers between them, thus reducing the memory-bandwidth requirements.

Similar benefits can be had by offloading correlation to reconfigurable hardware such as FPGAs: the correlation duties are offloaded to the FPGA while the loop closure and the navigation engine remain on the GPP. The FPGA is particularly well suited to GNSS correlation tasks and can implement dedicated low-resolution (such as 1- to 4-bit) multiply-and-accumulate blocks, where the equivalent 8-, 16- or 32-bit operations on a GPP would be excessive or inefficient. Early approaches involved an FPGA connected as a peripheral device via Ethernet, Peripheral Component Interconnect Express (PCIe) or a similar bus. However, as with the GPU, the data transfer quickly becomes a bottleneck. This challenge is addressed by integrated GPP-FPGA packages. An early example of this approach was the Intel Atom E6x5C package hosting an Altera FPGA; more recent examples are Xilinx’s Zynq-7000 family, integrating ARM processor cores and FPGA fabric in a single encapsulation. These SoCs allow the direct injection of signal samples from the RF front end into the FPGA, greatly reducing the amount of information to be exchanged with the GPP. This approach provides flexibility in how tracking and correlation resources are allocated, allowing configurable architectures according to the targeted signals and the application at hand, and enabling the execution of full-featured software-defined receivers in small-form-factor devices.

THE CLOUD

The ability to manage resources as logical entities instead of as physical, hardwired units dedicated to a given application has materialized in business models such as Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). A network of software-defined GNSS receivers executed in the cloud appears to be the next natural step in this technology trend, in which the GNSS receiver is no longer a physical device but a virtualized function provided as a service (see FIGURE 4).

FIGURE 4. Illustration of the cloud-based GNSS signal-processing paradigm. (Courtesy of SPCOMNAV, Universitat Autònoma de Barcelona)

A virtualized software application is a program that can be executed regardless of the underlying computer platform. This can be achieved by packaging the application and all its software requirements (the operating system, supporting libraries and programs) in a single, self-contained software entity, which can then be run on any platform. An instance of a software-defined GNSS receiver executed in a virtual environment can then be called a virtualized GNSS receiver.

Early virtualization was in the form of full or machine virtualization (virtual machine or VM), which is a software application that emulates the hardware environment and functionality of a physical computer. With VMs, a software component called a hypervisor interfaces between the VM environment and the underlying hardware (CPU), providing the necessary layer of abstraction. A VM can run a full operating system, so conventional software applications (such as a software-defined GNSS receiver) can run within a VM without any required change.

Recently, the use of operating-system-level virtualization, or software containers, has become more popular, as containers are often faster and more lightweight than VMs. Instead of a hypervisor, software containers share the host operating system’s kernel, managed by a container engine or daemon, and can therefore make more efficient use of the underlying hardware. Examples of such container technologies are Docker and Ubuntu snaps. An open-source software-defined GNSS receiver packaged as a Docker container is already available.

Virtualized GNSS receivers bring important benefits in two areas: business-wise, as a technology enabler for new GNSS-based services; and scientifically, where GNSS SDRs serve as research tools whose results are reproducible.

As a service enabler, virtualized GNSS receivers allow for the automatic and elastic creation, execution and destruction of application instances as required, and the intelligent spreading of the running instances across computing resources, regardless of processor architecture, host operating system or physical location. Several solutions are reported in the technical literature, many based on the GNSS snapshot receiver, in which a short batch of data is sent to the software for position, velocity and time computation. Notable examples of such an approach are Microsoft’s energy-efficient GPS sensing with cloud offloading and a system running on Amazon Web Services. These approaches allow extremely low power consumption in the user equipment, at the expense of limited accuracy (errors ranging from 10 to 100 meters) and high latency. Commercially, Trimble offers Catalyst, a subscription-based, cloud-based GNSS receiver service for which the user is charged according to the accuracy level provided, although the exact details are not yet public.

Virtualization technologies also offer a convenient solution for security-related applications (such as GPS M-code and Galileo PRS), since the encryption module remains on the service provider’s premises, and there is no need for a security module in the receiver equipment. This approach may enable the widespread use of restricted/authorized signals by the civilian population.

Finally, virtualization also offers important benefits for science. The flexibility of SDR receivers makes them an ideal tool for scientific experiments, since an implementation released under an open source license would allow a scientist to share a complete description of the processing from raw signal samples to the final research results.

STANDARDIZATION EFFORTS

GNSS signals are generally introduced to the front end through a standard interface, perhaps an SMA, MCX or U.FL RF connector, and the digitized signals depart through another standard interface, perhaps USB, PCIe or RJ45. However, for a GNSS SDR, this is where the standardization ends. As discussed above, there is a wide range of possibilities when capturing and digitizing GNSS spectrum. Before processing this stream of digitized samples, details such as the sample rate, center frequency, sample resolution and format/packing, and a variety of other parameters must be established. This is particularly important in scenarios such as sharing or post-processing archived datasets in scientific applications, offloading computational burden to a cloud computer, or interfacing different data-capture devices with different receivers. Ad hoc digitized-data formats do not encourage interoperability and instead cultivate the potential for technology segmentation.
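
Purely as an illustration of the kind of parameters involved, and emphatically not the schema of the ION metadata standard discussed below, a minimal description of a sample stream might look like the following; every field name here is our own invention.

```cpp
// Purely illustrative sketch of the parameters a raw-sample file must carry;
// this is NOT the schema of the ION GNSS SDR Metadata Standard.
#include <cstdio>
#include <string>

enum class SampleFormat { RealInt8, RealInt16, ComplexInt8, ComplexInt16 };

struct CaptureMetadata {
    std::string  source;             // front-end / antenna identifier
    double       sample_rate_sps;    // samples per second
    double       center_freq_hz;     // RF center frequency of the digitized band
    double       if_freq_hz;         // intermediate frequency (0 for baseband I/Q)
    SampleFormat format;             // numeric representation of each sample
    int          bits_per_sample;    // quantization depth
    bool         spectrum_inverted;  // whether mixing flipped the spectrum
    std::string  first_sample_utc;   // timestamp of the first sample, if known
};

int main() {
    const CaptureMetadata meta{"frontend-0", 25.0e6, 1575.42e6, 0.0,
                               SampleFormat::ComplexInt8, 8, false,
                               "2018-01-01T00:00:00Z"};
    std::printf("%s: %.2f Msps at %.2f MHz, %d bits per sample\n",
                meta.source.c_str(), meta.sample_rate_sps / 1e6,
                meta.center_freq_hz / 1e6, meta.bits_per_sample);
    return 0;
}
```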

To address this challenge, The Institute of Navigation has led an effort to develop a specification for standardized metadata that accurately and unambiguously describes the digitized data. Adoption of this metadata standard by both the data-collection hardware and the software-defined radio receiver can promote interoperability and reduce the potential for error. Similarly, an SDR processor’s utility is extended when it can seamlessly support many file formats from multiple sources. For more detail on the initiative, readers are encouraged to visit sdr.ion.org.

NEW FRONTIERS: GNSS SDRS IN SPACE

In space, GNSS receivers need to operate in scenarios that are quite different from those of ground-based receivers: higher (albeit predictable) dynamics, low signal-to-noise-density ratios and poor positioning geometry. Space is therefore an excellent scenario for SDRs, since it requires non-standard features from the receiver.

However, space is a harsh environment for semiconductor devices. Charged particles and gamma rays create ionization, which can alter device parameters. In addition to permanently damaging complementary metal-oxide-semiconductor (CMOS) ICs, radiation may cause single-event effects, in which an ionizing-radiation strike corrupts the charge held in storage elements such as configuration memory cells, user memory and registers. When these effects occur, the system is usually recoverable with a power reset or a memory rewrite, but they may also destroy the device.

Until recently, radiation-hardened solutions were limited to application-specific integrated circuits (ASICs) and one-time-programmable devices. In the past few years, however, there has been an increase in the availability of space-grade FPGAs and memory devices. Examples include Xilinx’s Virtex-5QV, Microsemi’s RTG4 and Atmel’s ATF80 FPGAs, and commercial SDR platforms such as GOMspace’s GOMX-3. These devices allow the implementation of space-qualified GNSS receivers fully defined by software.

SDR receivers offer both reprogrammability (or upgradeability) and self-healing (or auto-remediation) capabilities. Examples include the ability to upload algorithms that had not yet been invented when the receiver was launched, or the ability to recover from a single-event effect by remotely rewriting damaged functionality, reducing the need for onboard redundancy.

THE ECONOMICS OF SDRS

Flexibility has a cost, and more flexibility costs more. This is why an FPGA implementation of a complex system can never compete with the unit cost of a fixed-function ASIC. An example of a virtuous overlap might be seen in the Maxim MAX2120 and MAX2112 line of DVB-S2 TV-receiver ICs, which have been successfully co-opted for GNSS SDR front ends because their features (configurable mixers, gains, filters, operating power range and so on) happen to be a good-enough match for the GNSS domain. On initial inspection, this allows for flexibility between the two application spaces and provides an ideal platform for SDRs supporting both TV decoding and GNSS on the same hardware radio module, but problems soon appear. The MAX21xx series is designed for TV applications, and TV applications tend to use 75-ohm input impedances, while GNSS has standardized on 50 ohms. Certainly, one could add a software-defined impedance-selector block to the design, but we are now spending real hardware resources to accommodate SDR options. Adding an application that requires both reception and transmission, such as Wi-Fi, adds an entire signal chain to the design, as well as a large increase in the required dynamic range of the system. Adding an application that exploits MIMO multiplies the hardware resources needed.

The flexibility of SDR makes it an indispensable research, development, validation and hobbyist tool, but system design is about target selection and trade-offs. To quote one of the most successful engineers of the current era, Eckert-Mauchly Award winner Dr. Robert P. Colwell: “Pick your [technical] targets judiciously. … Pick your vision and then chase it. You can’t pick everything as your vision, that’s a recipe for mediocrity. If you can’t pick your target you’re not going to hit any of them.” For SDR-based systems, this would seem to mean that we should focus on applications where the flexibility afforded offsets the inevitable increase in platform cost, or where it enables targets of opportunity that require only a subset of the capabilities of a platform already in use.

At the same time, our earlier definition of an SDR as “a reconfigurable radio system whose characteristics are partially or fully defined via software or firmware” means that SDRs are already everywhere around us on some level. Cellular phones provide an example of devices that connect a large number of hardware radios to a dizzying array of applications that process, consume, modify and sometimes retransmit the received data, while consumer devices such as wireless routers can often add support for protocol changes or tweaks via firmware. While the economics might prevent radio systems from being universal on all dimensions, there are very few radio devices now sold that don’t expose at least a few parameters via software.

CONCLUSION

It seems that we are at an interesting epoch in the evolution of the software-defined GNSS receiver. The GNSS community has begun to springboard off developments and advances in RF equipment and is enjoying both an increase in functionality and a reduction in cost.

Simultaneously, the software-defined GNSS receiver architecture has morphed in multiple directions, enjoying the virtually unlimited processing power of cloud computing or availing itself of fully integrated RF and host-processor modules. As the use cases and host environments for GNSS receivers continue to diversify, and the need for flexibility in the receiver continues to increase, it may be that the software-defined GNSS receiver emerges as a contender against the ASIC receiver for certain specialized use cases. Furthermore, as navigation is increasingly provided by internet-connected devices, the software-defined radio may even carve out its own niche and become the go-to solution.

ACKNOWLEDGMENTS

The authors thank Sanjeev Gunawardena at the Air Force Institute of Technology and José López-Salcedo of Universitat Autònoma de Barcelona for their discussions and correspondence and for providing valuable insight and suggestions.


JAMES T. CURRAN received a Ph.D. in electrical engineering in 2010 from the Department of Electrical Engineering, University College Cork, Ireland. He is a radio-navigation engineer at the European Space Agency in the Netherlands.

CARLES FERNÁNDEZ-PRADES received an M.Sc. and a Ph.D. in electrical engineering from the Universitat Politecnica de Catalunya, Barcelona, Spain, in 2001 and 2006, respectively. In 2006, he joined Centre Tecnològic Telecomunicacions Catalunya, Barcelona, where he holds a position as senior researcher and serves as head of the Communications Systems Division.

AIDEN MORRISON received his Ph.D. in 2010 from the University of Calgary, where he worked on ionospheric phase scintillation characterization using multi-frequency civil GNSS signals. He works as a research scientist at SINTEF Digital in Trondheim, Norway.

MICHELE BAVARO received his master’s degree in computer science from the University of Pisa, Italy, in 2003. After working for several organizations including his own consulting firm, he was appointed as a technical officer at the Joint Research Centre of the European Commission in Brussels. He now works at Swift Navigation in San Francisco, California.

FURTHER READING

• Software-Defined GNSS Receivers

“Python GNSS Receiver: An Object-Oriented Software Platform Suitable for Multiple Receivers” by E. Wycoff, Y. Ng and G.X. Gao in GPS World, Vol. 26, No. 2, February 2015, pp. 52–57.

Digital Satellite Navigation and Geophysics: A Practical Guide with GNSS Signal Simulator and Receiver Laboratory by I.G. Petrovski and T. Tsujii with foreword by R.B. Langley, published by Cambridge University Press, Cambridge, U.K., 2012.

“Software GNSS Receiver: An Answer for Precise Positioning Research” by T. Pany, N. Falk, B. Riedl, T. Hartmann, G. Stangl and C. Stöber in GPS World, Vol. 23, No. 9, September 2012, pp. 60–66.

“Simulating GPS Signals: It Doesn’t Have to Be Expensive” by A. Brown, J. Redd and M.-A. Hutton in GPS World, Vol. 23, No. 5, May 2012, pp. 44–50.

A Software-Defined GPS and Galileo Receiver: A Single-Frequency Approach by K. Borre, D.M. Akos, N. Bertelsen, P. Rinder, and S.H. Jensen, published by Birkhäuser Engineering, Springer-Verlag GmbH, Heidelberg, 2007.

“GNSS Software Defined Radio: Real Receiver or Just a Tool for Experts?” by J.-H. Won, T. Pany and G. Hein in Inside GNSS, Vol. 1, No. 5, July–August 2006, pp. 48–56.

“Satellite Navigation Evolution: The Software GNSS Receiver” by G. MacCougan, P.L. Normark and C. Ståhlberg in GPS World, Vol. 16, No. 1, January 2005, pp. 48–55.

• GNSS Software Defined Receiver Metadata Standard

“The Institute of Navigation’s GNSS SDR Metadata Standard” by J. Curran, M. Arizabaleta, T. Pany and S. Gunawardena in Inside GNSS, Vol. 12, No. 6, November/December 2017, pp. 50–55.

The Institute of Navigation SDR Metadata Standard Website

• Snapshot Positioning

“Snapshot Positioning for Unaided GPS Software Receivers” by Y. Qian, X. Cui, M. Lu and Z. Feng in Proceedings of ION GNSS 2008, the 21st International Technical Meeting of the Satellite Division of The Institute of Navigation, Savannah, Georgia, September 16–19, 2008, pp. 2343-2350.

• Cloud GNSS Signal Processing

“A Cloud Optical Access Network for Virtualized GNSS Receivers” by C. Fernández-Prades, C. Pomar, J. Arribas, J.M. Fàbrega, J. Vilà-Valls, M. Svaluto Moreolo, R. Casellas, R. Martínez, M. Navarro, F.J. Vílchez, R. Muñoz, R. Vilalta, L. Nadal and A. Mayoral in Proceedings of ION GNSS+ 2017, the 30th International Technical Meeting of the Satellite Division of The Institute of Navigation, Portland, Oregon, Sept. 25–29, 2017, pp. 3796–3815.

“Computational Performance of a Cloud GNSS Receiver Using Multi-thread Parallelization” by V. Lucas-Sabola, G. Seco-Granados, J.A. López-Salcedo, J.A. García-Molina, and M. Crisci in Proceedings of Navitec 2016, the 8th Satellite Navigation Technologies and European Workshop on GNSS Signals and Signal Processing, Noordwijk, The Netherlands, Dec. 14–16, 2016, doi: 10.1109/NAVITEC.2016.7849357.

“CO-GPS: Energy Efficient GPS Sensing with Cloud Offloading” by J. Liu, B. Priyantha, T. Hart, Y. Jin, W. Lee, V. Raghunathan, H.S. Ramos and Q. Wang in IEEE Transactions on Mobile Computing, Vol. 15, No. 6, June 2016, pp. 1348–1361, doi: 10.1109/TMC.2015.2446461.

• High-Performance RF Sampling

“A 13b 4GS/s Digitally Assisted Dynamic 3-stage Asynchronous Pipelined-SAR ADC” by B. Vaz, A. Lynam and B. Verbruggen in Proceedings of 2017 ISSCC, the IEEE International Solid-State Circuits Conference, San Francisco, California, Feb. 5–9, 2017, pp. 276-277, doi: 10.1109/ISSCC.2017.7870368.
