The State of the Consumer Industry
By Frank van Diggelen
At the start of a new decade, let’s examine the state of the GNSS consumer market and technology. In the December 2009 issue of GPS World, I described the developments that put GPS in cell phones over the last decade. That technology revolution has brought GPS a very long way. Having come this far, we can ask that most famous of all navigation questions:
Are we there yet?
In this column, I focus on the question for the consumer segment of GNSS. Has the consumer market reached the point we expected it to be by now? Has the technology reached levels we anticipated?
The cell-phone GPS revolution began with the catalyst of U.S. E911 legislation, which mandated that when an emergency (911) call is made from a cell phone, the location of the cell phone must be provided. Among several competing location technologies, GPS proved to be the big winner, thanks to seven technology enablers: assisted GPS, massive parallel correlation, high sensitivity, coarse-time navigation, low TOW, host-based GPS, and RF-CMOS.
All of these together enable very low-cost implementation of GPS in cell phones, even phones on networks such as GSM and W-CDMA that do not have fine-time synchronization (that is, they are not precisely synchronized with the GPS system). GPS is now found in roughly 500 million phones in use today.
Four Milestones. From a consumer market perspective, we have exceeded forecasts. From a technology perspective, we have kept pace with Moore’s law. Chips and receivers are cheaper than expected because, in addition to Moore’s law, we have seen greatly increased volumes and competition. Low-cost chips have not come at the expense of performance; in fact, the opposite: as chips have evolved, they have become less costly and better performing.
Small, cheap antennas do degrade performance, but given the same antenna, I will demonstrate that a receiver with a single-die GPS chip costing less than $4 can outperform a $19,000 receiver.
This sounds paradoxical, even impossible — indeed many of you may be penning letters to the editor right now! But the time-to-first-fix, sensitivity, and urban-accuracy data will prove my point.
As a consequence of chip evolution, we are reaching plateaus of development for GPS-only systems. However, there remain many problems to solve, especially in urban canyons and indoors. These problems may never be solved with GPS alone, or with any single system alone. This decade will be characterized by GPS-plus; the days of GPS-only will soon recede into the past.
Don’t interpret this as a failing of GPS — quite the opposite. Because GPS-only systems have worked so well, they have found their way into half a billion cell phones, and we are boldly taking GPS to places no navigation has gone before. As we do, we start to encounter the limitations of GPS-only performance.
We will see the proliferation of GPS-plus: GPS+MEMS, GPS+Wi-Fi, GPS+NMR, and GPS+GLONASS, Compass, QZSS, and Galileo. The winners will be those with the greatest levels of integration. To paraphrase Winston Churchill, this is not the end of GPS, it is not even the beginning of the end. But it is, perhaps, the end of the beginning.
GNSS Consumer Market
For market forecasts made a few years ago, we can look at summaries provided in GNSS Markets and Applications, by Len Jacobson: a 2006 Frost & Sullivan report estimated the market for PNDs and handheld devices (not including cell phones) in 2010 would be $2.7 billion, with 8.3 million units, at an average selling price (ASP) of $325. In fact, this market today is approximately $6 billion, with 40 million units, at an ASP of $150.
Twice the Size. The consumer market, not including cell phones, is twice as big (in dollars) as forecast just a few years ago, even though prices are less than half forecast. Unit sales are more than four times forecast.
For the cell-phone market segment, in 1999 when the E911 rules were enacted in the United States, it was anticipated that A-GPS would be adopted only in fine-time (synchronized) networks, such as Verizon and Sprint CDMA. In coarse-time (non-synchronized) networks such as GSM, the expectation was that terrestrial wireless location techniques, such as time-difference-of-arrival (TDOA) and enhanced-offset-time-difference (E-OTD), would dominate. Today, only a few niches use TDOA, E-OTD is extinct, and GPS rules in coarse-time networks worldwide, including GSM in Europe and North America, and W-CDMA in Japan.
The consumer market, in particular the cell-phone market, has grown so rapidly that more receivers have been built in cell phones in the last three years than all other GPS built, ever. Today, L1 C/A-code GPS accounts for more than 99 percent of all GNSS receivers manufactured each year.
From a consumer market perspective, have we reached the point we expected to be by now?
Not only have we arrived, we have far surpassed expectations.
GPS and Moore’s Law
Moore’s law says that for a given number of transistors, the chip size will halve every two years. Table 1 shows what this looks like in practice. For a particular class of GPS chip, the A-GPS receiver with massive parallel correlation, it shows the release dates of different generations of these chips, and the technology process, which is the linear dimension of a single gate on the silicon die. As this dimension shrinks to 70 percent of the previous value, the two-dimensional chip area halves. You can see Moore’s law in action here: approximately every two years, the technology process moves to the next level, and the chip size reduces by 2X. People are now talking about GPS chips in 45 nanometers, the next step.
For a comparison, consider the Broadcom BCM 4751 chip, designed for cell phones. This chip is 2.9 X 3.1 millimeters, the size of the letter B on this page. This is a single-die host-based GPS/SBAS receiver, including RF front end, low-noise amplifier, baseband, and power management unit. Ten iterations of Moore’s law have passed in the last 20 years. The same chip, had it been built 20 years ago, would have been 2¹⁰ times (a thousand times) bigger.
There were never chips that big. GPS chips aren’t just getting smaller with Moore’s law, they are getting vastly more complex and more capable.
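The scaling arithmetic behind these numbers can be sketched in a few lines. This is an illustrative calculation of the geometry alone, not tied to any particular foundry roadmap:

```python
def area_shrink(node_steps):
    """Relative die area after a number of process-node steps.

    Each step scales the linear gate dimension to ~70 percent of the
    previous value, so area scales by 0.7**2 = 0.49 per step, i.e.
    it roughly halves, which is Moore's law in action.
    """
    return 0.7 ** (2 * node_steps)

# One node step (~2 years): area roughly halves.
assert abs(area_shrink(1) - 0.49) < 1e-9

# Ten steps (~20 years): about a 1000X reduction, matching the
# "2^10 times bigger" comparison in the text.
print(1 / area_shrink(10))  # ~1253, on the order of 2**10 = 1024
```

The slight excess over 1024X comes from 0.7 being a little smaller than the exact 1/sqrt(2) that would give a perfect halving per step.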
At an elemental level, a GPS receiver does just three things: it starts, it tracks weak signals, and it computes position, velocity, and time. Strip away the obfuscating details, and performance may be summed up by: how fast, how sensitive, how accurate.
Since the 1990s, time to first fix (TTFF) and sensitivity have improved dramatically, thanks to the seven technology enablers discussed earlier. TTFF for assisted cold starts, or unassisted warm starts, is now as good as one second, even without fine-time. This is a 45X improvement on typical GPS performance of the 1990s. Sensitivity increased roughly 30X (to -150 dBm) in 1998, then another 10X (to -160 dBm) in 2006, and perhaps another three times to date, for a total of almost 1,000X extra sensitivity.
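Because sensitivity figures mix linear factors (30X, 10X) with dBm levels, a small conversion sketch may help keep them straight (illustrative arithmetic only):

```python
import math

def db_to_ratio(db):
    """Convert a power difference in dB to a linear factor."""
    return 10 ** (db / 10)

def ratio_to_db(ratio):
    """Convert a linear power factor to dB."""
    return 10 * math.log10(ratio)

# The 10X step from -150 dBm to -160 dBm is exactly 10 dB.
assert db_to_ratio(10) == 10.0

# 30X, then 10X, then ~3X multiply to ~900X -- "almost 1,000X" --
# which is close to 30 dB of total sensitivity improvement.
total = 30 * 10 * 3
print(round(ratio_to_db(total), 1))  # 29.5 dB
```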
What about accuracy?
Some perceive low-cost chips as synonymous with low accuracy. This is not true. It is true that small, cheap antennas reduce accuracy; but given the same antennas, the lowest cost receivers on the market today will outperform the most expensive in typical environments where cell phones are used. The following figures show data to prove this point.
First we connect one of the smallest, lowest cost GPS receivers to one of the best antennas, a choke ring, on a rooftop with a clear view of the sky. Figure 1 shows the scatter of positions. The blue circle shows the median distribution, which is 0.9 meters for this dataset of 2000 fixes.
The adjacent plot shows the positions obtained from a $19,000 survey-grade GPS receiver, connected to the same antenna. The survey-grade GPS, with a median distribution of 0.3 meters, shows a 60-centimeter advantage over the cell-phone GPS, or maybe a 3X advantage depending on how you look at it. But don’t get too hung up on this result, because this is neither the typical consumer scenario (on a rooftop with choke-ring antenna), nor the main challenge facing us today.
Next we look at the accuracy achieved with a more typical consumer antenna, in a more typical environment. Figure 2 shows the positions obtained in downtown San Jose with an active patch antenna, such as found in PNDs. San Jose is a fairly typical U.S. city, not the hardest place to use GPS, but not the easiest either. Lightstone Alley, adjacent to tall buildings, is only five meters wide.
To evaluate accuracy we used a truth-reference system combining GPS and a tactical-grade IMU with ring laser gyro to produce the blue dots on the figure. The white dots are the low-cost GPS positions. Most of the time, the white dots appear to be on top of the blue, but occasionally you see some separation, and there the red lines show the horizontal error. The median horizontal error is 4.4 meters.
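The median-error figure comes from comparing each low-cost fix against the truth-reference position at the same epoch. A minimal sketch of that computation, using made-up fix coordinates and a flat-Earth distance approximation (both are illustrative assumptions, not the actual test data):

```python
import math

def horizontal_error_m(lat1, lon1, lat2, lon2):
    """Horizontal distance in meters between two lat/lon fixes,
    using a flat-Earth (equirectangular) approximation that is
    accurate for the few-meter separations involved here."""
    r = 6371000.0  # mean Earth radius, meters
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians(lat1))
    return r * math.hypot(dlat, dlon)

def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

# Hypothetical truth/test fix pairs (lat, lon), illustration only.
truth = [(37.3337, -121.8907), (37.3338, -121.8907), (37.3339, -121.8908)]
test = [(37.33372, -121.8907), (37.3338, -121.89075), (37.33393, -121.8908)]
errors = [horizontal_error_m(a[0], a[1], b[0], b[1])
          for a, b in zip(truth, test)]
print(round(median(errors), 1))  # 3.3 meters for these made-up fixes
```

The real evaluation does the same thing per epoch over the whole drive, with the tactical-grade IMU reference supplying the truth positions.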
Figure 3 shows the comparison of low- and high-cost receivers, with the survey-grade receiver connected to the same patch antenna as the cell-phone GPS. There are many position gaps from the survey-grade receiver, and the position walks around when the vehicle is stationary (at the intersections, bottom left and top of the figure). This is because of the weak signals available in the urban environment.

But don’t get too hung up on this result either, since we are still not at the real challenge of consumer GPS: location in severe urban canyons, such as San Francisco, New York, Chicago, Shanghai, Taipei, Shinjuku, and similar. In these, typically, only one or two GPS satellites can be seen directly. Other satellites may be tracked, but only by observing purely reflected signals. This is not classic GPS multipath, the combination of a direct and reflected signal; instead this is the combination of nothing but reflected signals. The direct signals are usually completely blocked by many buildings, and are not observable at all. So the whole premise of GPS — observing range from time of flight — breaks down, and it is very difficult to get good accuracy.
Figure 4 compares the cell-phone GPS with the survey-grade GPS, connected to the same small antenna, under such circumstances in San Francisco’s Financial District. There are no fixes at all from the survey-grade receiver. Why?
In Montgomery Street, there was only one directly visible satellite, with a signal strength of -132 dBm. All the other satellites were at -140 dBm or weaker, and traditional GPS receivers cannot acquire signals at this level. Hence the only receivers that work in this environment are modern high-sensitivity receivers most commonly found in cell phones.
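To see why -140 dBm is such a hard threshold, it helps to convert received power into carrier-to-noise density (C/N0) against the thermal noise floor. A sketch, assuming an illustrative 2 dB receiver noise figure and the commonly cited ~35 dB-Hz acquisition floor for conventional receivers (both figures are assumptions for illustration):

```python
THERMAL_NOISE_DBM_HZ = -174.0  # kT noise density at ~290 K

def cn0_db_hz(received_dbm, noise_figure_db=2.0):
    """Carrier-to-noise-density ratio implied by a received signal
    power, given an assumed receiver noise figure (2 dB here)."""
    return received_dbm - THERMAL_NOISE_DBM_HZ - noise_figure_db

# The one directly visible satellite on Montgomery Street:
print(cn0_db_hz(-132.0))  # 40.0 dB-Hz: comfortably trackable

# Everything else was at -140 dBm or weaker:
print(cn0_db_hz(-140.0))  # 32.0 dB-Hz: below a conventional
                          # receiver's ~35 dB-Hz acquisition floor
```

High-sensitivity receivers push below that floor with long coherent and non-coherent integration, which is why they keep working where conventional receivers fall silent.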
You can see that the move to lower-cost receivers has not come at the expense of performance. In fact, the opposite: TTFF and sensitivity have improved dramatically, while accuracy has not been compromised; in urban environments it is in fact much better than that of legacy receivers, and even of modern survey-grade receivers.
But are we there yet?
Although the consumer GPS market has irrefutably arrived, from a technical perspective the answer is more nuanced. Consumer GPS technology has made tremendous leaps forward. But precisely because of these improvements, we are taking GPS where it was never expected to go. It is no longer enough for GPS to work indoors (which it can). The demand is now for it to work as well as if it were outdoors (which, presently, it cannot).
Performance improvements seen with GPS-only will almost certainly not continue at the recent rate. We do not anticipate yet another 45X improvement in TTFF, or another 30 dB of sensitivity, for GPS alone. However, we do expect order-of-magnitude performance increases with the addition of other technologies. Figure 5 shows data from a TomTom 950, a GPS+MEMS containing the same GPS chip used in the earlier tests, MEMS accelerometers, and MEMS rate gyros. When tightly integrated and tested in the same deep urban canyons of San Francisco, the effect on position is good: median accuracy improved by 30 percent, worst-case errors are more than halved. But the result on heading accuracy is especially dramatic.
The bar graph shows the worst-case heading accuracy in each street. With GPS-only (red), the worst-case error is around 45 degrees, a familiar result to anyone who has used any GPS-only device in a similar environment: sooner or later the map will veer erroneously. However, with the integration of the MEMS rate gyros (blue), the worst-case heading errors drop to around 3 degrees, a 15X improvement in a key metric, similar to the improvements of the last decade, but now thanks to the effect of GPS-plus.
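The flavor of such an integration can be sketched with a simple complementary filter. This is an illustrative toy only, not Broadcom’s or TomTom’s actual algorithm, and the noise and bias values are made up:

```python
import random

def fuse_heading(gps_headings, gyro_rates, dt, k=0.02):
    """Complementary-filter sketch of GPS+gyro heading fusion.

    The gyro rate is integrated for smooth short-term heading; a
    small gain k slowly pulls the estimate toward the noisy GPS
    heading, bounding gyro drift. (Heading wrap-around is ignored
    for brevity.)
    """
    heading = gps_headings[0]
    out = []
    for gps_h, rate in zip(gps_headings, gyro_rates):
        heading += rate * dt              # propagate with the gyro
        heading += k * (gps_h - heading)  # small correction from GPS
        out.append(heading)
    return out

# Vehicle driving straight (true heading 0 degrees); GPS heading is
# corrupted by up to +/-45 degrees of multipath-like noise, and the
# gyro carries a small constant bias of 0.5 deg/s.
random.seed(1)
gps = [random.uniform(-45, 45) for _ in range(600)]
gyro = [0.5] * 600
fused = fuse_heading(gps, gyro, dt=0.1)

worst = max(abs(h) for h in fused[100:])  # skip the settling transient
print(worst)  # a few degrees: far below the ~45-degree GPS-only worst case
```

The gyro supplies the short-term stability that GPS alone lacks in the canyon, while GPS supplies the long-term reference that keeps the gyro bias from accumulating, which is exactly the division of labor behind the 15X heading improvement.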
We will soon see GPS-plus many other technologies: Wi-Fi, NMR/MRL (power measurements from GSM and 3G phones), and of course GPS+GLONASS, Compass, QZSS, and Galileo. Because many mobile devices now include GPS, Wi-Fi, and 3G, there is a natural path for the evolution of GPS technology to include Wi-Fi and MRL measurements.
There is also a natural trend to source different radios from the same chip supplier. After all, why undertake a do-it-yourself effort at removing co-existence issues between different radios, when a chip supplier has already done it for you?
Looking forward, it is very likely that this new decade will be characterized by GPS-plus other technologies, and the winners will be those with the greatest levels of integration.
Frank van Diggelen is senior technical director of GPS systems and chief navigation officer for Broadcom Corporation. He holds more than 45 U.S. patents, has a Ph.D. in electrical engineering from Cambridge University, and is the author of A-GPS: Assisted GPS, GNSS & SBAS.