Survey Perspectives: RTK Networks Webinar Q&A Follow-Up

May 6, 2009

I really enjoy doing webinars, and the RTK Network webinar on April 21 was no exception. One of the reasons I enjoy them is the questions and comments I receive, because they give me feedback on what the user community is thinking and wondering about. Clearly, RTK networks are a hot topic these days: registration for the RTK Networks webinar was one of the highest in GPS World history.

If you missed the webinar, you can still download the file and listen to it.

Now without further ado, following are questions that listeners sent in and my comments from the RTK Networks webinar.

Question #1: Can you say anything about the proposed National Geodetic Survey Real-Time Networking (NGS RTN) guidelines?

Gakstatter: The NGS is still in the early stages of developing the RTN guidelines, so the agency would prefer that public comment be withheld for the moment. It is working on guidelines covering four areas: site considerations; planning and design; administration; and users. The agency has assembled quite a team of government and industry people to develop these guidelines, and the team hopes to have draft versions ready by September 30, 2009.

However, the NGS Real-Time User Guidelines (Ver. 2.0.4) are available to the public. Though these guidelines are targeted at classical (non-network) RTK users, they contain some solid procedures.

Also, an interesting study specific to network RTK was published recently by Newcastle University Civil Engineering and Geosciences. Stakeholders in the report include The Survey Association (UK), Ordnance Survey (UK), Leica Geosystems, Trimble, and the Royal Institution of Chartered Surveyors. The researchers did extensive testing and generated basic guidelines:

  1. Configure the rover according to manufacturer guidelines. According to the report, significant deviations from recommended settings can introduce unacceptable errors.
  2. Consider lowering the GDOP (PDOP) mask to 3 instead of 5. Generally, in a clear-sky environment, you’re going to get this anyway and it will increase the robustness of solutions in challenging areas.
  3. Pay close attention to quality indicators on the rover (for example, RMS values). They generally reflect the actual performance of the rover. An RMS value of more than 10 centimeters generally indicates a problem such as a failure of ambiguity resolution or a loss of satellite lock; those positions should not be used. However, in challenging environments (such as obstructed satellite visibility and multipath), quality indicators (especially vertical) may be “overly optimistic” by a factor of 3 to 5.
  4. The report commented on occupation times, which I’ve written about in a previous article. Using a 5-second average on topographic shots will reduce the effect of individual epoch variations. When vertical accuracy is important (as in establishing secondary control), two different sessions of at least 180 seconds should be recorded. The report indicated that a time separation of 20 minutes between sessions will yield an accuracy improvement of 10 to 20 percent, and a separation of 45 minutes will yield an improvement of 15 to 30 percent; a separation of greater than 45 minutes did not provide “appreciable further improvement.” This was very interesting to me, as most guidelines I’ve read (including the NGS guidelines) dictate a four-hour separation between sessions.
  5. GLONASS improves satellite visibility (thus increasing productivity), but doesn’t necessarily improve accuracy. *
    This conclusion doesn’t surprise me, but I think there needs to be an asterisk here since there are significantly more GLONASS satellites available now than there were a year ago. In a scenario where there are only five GPS satellites and four GLONASS satellites, my guess is that at least the robustness of the solution will be better, and generally the accuracy as well, due to the improved geometry (PDOP).
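
The RMS guidance in item 3 can be turned into a simple field QC check. The sketch below is illustrative only — the function names and the fixed "optimism factor" are my own assumptions, not from the report. The idea is to de-rate the rover-reported RMS in obstructed environments before comparing it against the 10-centimeter rejection threshold:

```python
def effective_rms_m(reported_rms_m, obstructed, optimism_factor=3.0):
    """De-rate the rover-reported RMS. Per the Newcastle report, quality
    indicators can be optimistic by a factor of 3 to 5 in obstructed or
    multipath-heavy environments; 3.0 is used here as a conservative floor."""
    return reported_rms_m * optimism_factor if obstructed else reported_rms_m

def accept_position(reported_rms_m, obstructed=False, threshold_m=0.10):
    """Reject positions whose (de-rated) RMS exceeds 10 cm, which generally
    indicates a failed ambiguity resolution or loss of satellite lock."""
    return effective_rms_m(reported_rms_m, obstructed) <= threshold_m
```

Note how a 5-centimeter reported RMS passes in the clear but fails once the 3x de-rating for an obstructed site is applied — which is exactly why the report warns against trusting the raw indicator near tree lines and buildings.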

Their recommendations make a lot of sense to me. Probably the most controversial is the separation time (45 minutes versus four hours) between sessions. This runs against most standard practice I’ve read, but then again I don’t have empirical data to support it either way, whereas the report does. It is clearly an area that needs a closer look. If this practice were adopted, the time savings in the field when setting secondary control could be considerable.

Question #2: Which manufacturers would you recommend for an RTK-network implementation?

Gakstatter: Well, there aren’t many choices. The market is dominated by Trimble and Leica Geosystems, with Topcon on the fringe.

I don’t know if anyone can say with confidence which one is better from a technology standpoint. I’ve used rovers on all three networks and all seemed to behave as expected.

Both Trimble and Leica networks have been implemented in large geographic areas (state-wide, country-wide) so they’ve experienced the growing pains and presumably have worked out any major issues.

There are many issues other than which network software vendor you select. A big one is the information technology (IT) component. Without support from your IT department (or control over IT with a competent IT project manager), getting a network to run smoothly will be a really rough road. I don’t pretend to have gone through the process of setting one up, but I’ve talked to enough people to know this is a common theme among them.

Trimble VRS

Leica Spider

Topcon TopNet

Question #3: How different is the RTK processing for network versus cluster?

Gakstatter: A cluster is essentially a group of reference stations set up in a geographic area. The user selects which reference station to use (usually the closest one) and receives corrections just as he would from a reference station he set up himself. Communications from reference station to user are generally accomplished via UHF/VHF/spread-spectrum radio or a wireless network (GSM, CDMA).

With a network, data is collected by all reference stations and sent to a central server where the data is processed; corrections are generated and sent to the user. Sophisticated atmospheric modeling is done and incorporated into the corrections. In theory, this eliminates distance-dependent errors within the network.

Question #5: Does anyone know of any other published RTN user guidelines?

Gakstatter: See answer to #1. The Newcastle University report is available here.

Question #6: Could you talk a little about post-processing?

Gakstatter: Well, it’s a subject worthy of more space than can be accommodated here, but it certainly has its place in setting primary survey/geodetic control and is the preferred method.

Also, single-frequency GPS units are still the price leaders for entry-level GPS surveying. Even today, many people use GPS L1 units with post-processing for collecting topo survey data.

Question #7: We are in Philadelphia and we use the Trimble VRS Network. We download and import a .dc file into Trimble Office. I don’t feel as confident using this network as I did when we got an OPUS solution and adjusted the base station. Procedure-wise, do you have any advice on how to capture the data? We are doing a morning session and an afternoon session and averaging the results.

Gakstatter: I deferred to Bill Henning who is the RTK network specialist with the National Geodetic Survey. NGS has developed RTK user guidelines. Here is Bill’s opinion:

“RTK will give you coordinate information and not much else. You can set the data collector to keep covariance records, which will allow you to dump the data in the office program and actually perform a tweaking of the coordinate positions if you have redundancy in some form (another location on the point of interest). I would never use just one RTK location for any significant point — there are too many variables. Any point that you will reuse or that is important in itself to the job should be located redundantly (see the summary table in Section V. of the single base guidelines).  Also, any point whose elevation is important to less than 3 centimeters should be leveled (or produced from a total station shot from a known point, and so on). In another vein, typical RTK accuracies (say 0.03′ horizontal, 0.05′ vertical) can be achieved through a localization to known and trusted passive monuments surrounding the project.

My recommendation for a project site without existing trusted control would be:

  • Perform two OPUS-RS set-ups on the site control points. These would be 15-minute sessions staggered by 4 hours. Even better (but not usually in the cards), perform the second session on a different day and/or with different weather (still staggered by 4 hours, though). Site control should form a rectangle around the project with additional internal control for large sites.
  • Use the RTN to check values on the OPUS-derived coordinates. This is where the datums and epochs of the RTN come into play! If the RTN is using coordinates aligned to the NSRS within a couple of centimeters, all should be well (to that accuracy). Search for outliers. Evaluate these for the error source (user, OPUS, RTN) and correct or discard.
  • Perform a site “localization” to the site control from the RTN. This will let the user now use the RTN for internal work based on the site control as the “truth.” This is most important for the verticals. All features that require an elevation accuracy RMS less than 0.05′ (say 1.5 cm), should be done redundantly or better, by more precise means such as leveling or total stations.
  • Make sure of the integrity of the site control for future work. Points should be outside of the disturbance area, with good stability.”
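
Bill’s second step — checking RTN-observed values against the OPUS-derived site control and searching for outliers — can be sketched as a residual check. This is a hypothetical illustration of my own (the function name, coordinate convention, and exact tolerances are assumptions), using local east/north/up coordinates in meters and tolerances in the spirit of the “couple of centimeters” alignment he mentions:

```python
import math

def flag_outliers(opus, rtn, horiz_tol_m=0.02, vert_tol_m=0.02):
    """Compare RTN check shots against OPUS-derived site control.
    opus, rtn: dicts mapping point name -> (east, north, up) in meters.
    Returns the points whose horizontal or vertical residual exceeds
    tolerance, for evaluation of the error source (user, OPUS, or RTN)."""
    flagged = {}
    for name, (e0, n0, u0) in opus.items():
        e1, n1, u1 = rtn[name]
        dh = math.hypot(e1 - e0, n1 - n0)  # horizontal residual
        dv = abs(u1 - u0)                  # vertical residual
        if dh > horiz_tol_m or dv > vert_tol_m:
            flagged[name] = (round(dh, 3), round(dv, 3))
    return flagged
```

A flagged point is not automatically wrong — as Bill says, the next step is deciding whether the discrepancy came from the user, the OPUS solution, or the RTN, and then correcting or discarding it.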

Question #8: How do you feel about the appropriateness of RTK for “boundary” locations? What QA/QC can be done in the field?

Gakstatter: Many surveyors I know use RTK for setting boundaries. Some even use single-baseline RTK for this task, which is essentially just a radial survey (no redundancy). I’d say that almost all of those I know who are doing this have used their RTK systems enough to understand the limitations. In fact, I think most have run RTK and total stations side-by-side on jobs to gain confidence and understand RTK in the field.

I’m sure I’ll get blasted by some folks for not downplaying RTK for determining boundary locations, but I don’t think it serves any purpose to ignore what’s actually happening in the field. There is so much pressure, especially in these economic times, to reduce field time and increase efficiency that RTK ends up filling that need.

At a minimum, I would occupy each point at least twice with the base station set up on two different monuments. If you’re using corrections from an RTK network, I’d occupy twice with a 4-hour separation between occupations (for example, once in the morning and once in the afternoon). I’d even dump the antenna a couple of times with each occupation to get two or three “fresh” measurements.

The above assumes that you have a clear view of the sky (no blockage by trees or buildings), are tracking at least six GPS satellites, and have a PDOP of 3 or less. If you’re up against a tree line, tracking five satellites, and the PDOP is 5, I wouldn’t accept it even if the RMS indicators looked good.
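
As a minimal sketch of those acceptance rules — these helpers and the agreement tolerances are my own hypothetical illustration, not a standard routine — a session is accepted only with clear sky, at least six GPS satellites, and a PDOP of 3 or less, and the two redundant occupations are compared before they are averaged:

```python
def session_ok(clear_sky, num_sats, pdop):
    """Acceptance criteria described above: clear view of the sky,
    at least six GPS satellites tracked, and PDOP of 3 or less."""
    return clear_sky and num_sats >= 6 and pdop <= 3.0

def average_occupations(occ1, occ2, horiz_tol_m=0.02, vert_tol_m=0.03):
    """Average two independent (east, north, up) occupations of the same
    point, in meters, but only if they agree within tolerance; otherwise
    return None so the point gets re-observed."""
    de, dn, du = (b - a for a, b in zip(occ1, occ2))
    if (de**2 + dn**2) ** 0.5 > horiz_tol_m or abs(du) > vert_tol_m:
        return None  # disagreement between occupations: occupy again
    return tuple((a + b) / 2 for a, b in zip(occ1, occ2))
```

The point of returning None on disagreement rather than averaging anyway is the same one made above: redundancy only protects you if you actually compare the occupations before accepting them.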

I’ll leave it at that for now, as I could write a column just on this subject. I certainly would not encourage someone new to RTK to cut their teeth on boundary locations. I’d suggest building confidence and experience with RTK on applications where there is more wiggle room.

Question #9: Could you address the ability of the RTK network or cluster to adequately service dynamic surveys versus static?

Gakstatter: Dynamic is really the issue here. In my experience, there are at least a couple of issues to be aware of.

  1. There’s generally a “lag time” between when you press the button on the data collector and when the measurement is taken. I don’t have any empirical data on this, but it’s something I’ve experienced and I’ve seen that some make and models of equipment do better than others. If you’re moving at 8 mph on a 4-wheeler and the lag time between pressing the data collector button and the actual measurement is 1 second, you will travel approximately 12 feet before the measurement is recorded.
  2. A few years ago, a client of mine wanted to measure the acceleration of a vehicle after it was impacted by another vehicle. We determined that recording data at 1 Hz (one measurement per second) wouldn’t provide sufficient resolution. Nearly all RTK systems come preset to record at 1 Hz. However, most RTK equipment is able to record faster than 1 Hz. We ended up recording at 10 Hz (10 measurements per second).
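
The arithmetic behind both points can be sketched quickly. These helpers are hypothetical (the names are mine); the 8 mph / 1-second figures come from the example above:

```python
FT_PER_S_PER_MPH = 5280 / 3600  # 1 mph is about 1.467 ft/s

def lag_distance_ft(speed_mph, lag_s):
    """Distance traveled between pressing the data-collector button
    and the moment the measurement is actually taken."""
    return speed_mph * FT_PER_S_PER_MPH * lag_s

def shot_spacing_ft(speed_mph, rate_hz):
    """Along-track spacing between recorded epochs at a given logging rate."""
    return speed_mph * FT_PER_S_PER_MPH / rate_hz
```

At 8 mph with a 1-second lag, lag_distance_ft(8, 1) is about 11.7 feet — the "approximately 12 feet" cited above. The same vehicle logging at 1 Hz records a point roughly every 11.7 feet, while at 10 Hz the spacing drops to about 1.2 feet, which is why the vehicle-impact project needed the faster rate.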

Question #10: Is it possible to use a single-frequency receiver as a rover in the RTK technique, or is that a limitation?

Gakstatter: I’ve got just a little experience in attempting to use L1 RTK on an RTK network. It didn’t work very well for me for centimeter-level accuracy, but worked OK for sub-foot accuracy.

L1 RTK systems generally have some specific needs in order for them to work optimally. For example, some are able to utilize SBAS satellites as observables. RTK networks don’t support this type of observable (at least the ones I know of), so optimal performance from L1 RTK is achieved when the user operates his or her own reference station instead of using an RTK network.

Question #11: You should discuss the advantages of using PPP if a reference survey monument is not available when setting up/initializing RTK.

Gakstatter: PPP (precise point positioning) is a very interesting subject, and I intend to dedicate a column to it in a few months. In the meantime, GPS World Contributing Editor Dr. Richard Langley provided a column on PPP in the April 2009 issue of GPS World.

Question #12: For the states out west, any challenges you are aware of in collaborating with the PBO on upgrading stations to real time and receiving the raw data?

Gakstatter: The Plate Boundary Observatory (PBO) has a tremendous number of reference stations in the Western United States, I think more than 800. I’ve spoken to a few different RTK network administrators in the Western U.S. who have incorporated PBO reference stations into their RTK networks. The general consensus is that PBO site communications are the major challenge. RTK networks require that the data stream travel from each reference station to the network server and then to the user within two seconds, so reliable communications are very important. PBO sites weren’t designed with this sort of communications in mind, so that portion has to be upgraded in order for a site to serve in an RTK network.

For new PBO sites, I’ve talked to an RTK network operator who has collaborated with PBO successfully in building the site and including “RTK-network compatible” communications facilities during site construction.

Question #13: Do you foresee penetration of GNSS RTK network technology in mass-market applications such as location-based services (LBS)?

Gakstatter: Not in the near future. LBS are not yet as much about accuracy as they are about applications — mostly navigation, family tracking, and social networking applications but many more are to come. None of these applications require the high degree of accuracy that RTK networks are built for.

Question #14: What is the estimated number of users in America? Say this year and three years later.

Gakstatter: I don’t have specific numbers, but I would say that this is one of the fastest-growing areas in GNSS. It crosses many different industries, such as surveying, engineering, construction, mining, and agriculture. Also, machine control is expected to grow worldwide at a CAGR of 23 to 28 percent over the next five years, and real-time positioning is a critical component of it.

Question #15: Does latency in cell signals affect accuracy in clusters or networks?

Gakstatter: Yes, very much so. The industry standard latency ceiling seems to be two seconds from the time the data leaves the reference station, travels back to the server, is processed, then is received by the user. Any hiccup in the communications process will affect accuracy.

Question #16: Our network recently performed a readjustment. This shifted the H by .08′ and the V by .10′. If you are using the network for real property boundaries, do you want to stay on a current epoch? Or have your property move with the crust, thus forcing recalibration on every readjustment?

Gakstatter: Again, I deferred to Bill Henning who is the RTK network specialist with the National Geodetic Survey. NGS is developing user and administrator guidelines for RTK networks. Here is Bill’s opinion:

“What has happened is either the RTN needed to be readjusted to be more accurate — due to new data, perhaps — or the RTN adopted a new realization [say NAD 83(NSRS2007) from NAD 83 (HARN)], or due to significant movement of the stations it was felt the coordinates should be maintained as current rather than at a prior epoch. For whatever reason, you can see that the metadata on the RTN stations would be critical to consistent positioning. Because the NGS CORS network is referenced to a particular epoch of time (ITRF 2000 realization of the ITRS at epoch 1997.0 transformed to NAD 83 realized at CORS adjustment 1996 at epoch 2002.0), with velocities supplied in both datums, the user can position from these stations to his epoch of survey by applying the shifts in coordinates produced by applying the velocities. All RTN should do the same.

“We have been spoiled in most of the U.S.A. by having a datum that moves with us and therefore has little residual movement relative to our position. NGS is now moving towards adopting a true geocentric datum aligned either to a certain epoch of a certain ITRF realization and fixed on a stable North American tectonic plate, or one that will adopt the worldwide velocities referenced in the ITRS datum. To be consistent, surveyors (and all geospatial professionals) should be sure to provide the proper metadata on their work, which will state the coordinate datum basis, source of coordinates, epoch date of the coordinates, estimated velocities as published, and whether the distances reference grid or ground coordinates. They can opt to provide coordinates based on the epoch date of the RTN or they can provide them for the date of survey, but they must provide the metadata for those following afterwards — including planners, designers, engineers, GIS, and future boundary retracers.”

Question #16: Will network RTK win (render obsolete) or improve SBAS?

Gakstatter:  I don’t think so. SBAS (WAAS, EGNOS, MSAS) was designed and built to serve the aviation community. That is a separate and distinct system that will be stand-alone. Aviation navigation system infrastructure won’t (and shouldn’t) share resources like we do in the commercial sector. Aviation navigation infrastructure needs to be a stand-alone system under full control of the governing aviation authority (for example, in the United States, it’s the Federal Aviation Administration).

Question #17: Are RTK clusters/networks providing services for users that were once only available through the National Differential GPS stations?

Gakstatter: Not really. NDGPS is one source of DGPS corrections. WAAS is another source, and there are also commercial DGPS correction providers such as OmniSTAR. RTK networks are one more that can be added to the list.

Although RTK networks were created to provide centimeter-level accuracy, they are also able to provide DGPS corrections (sub-meter accuracy) like NDGPS, WAAS, and OmniSTAR. But unlike NDGPS and WAAS (which are free), it costs money to utilize an RTK network. Even if a subscription to an RTK network is free, the user still must pay for access to the GSM/CDMA network.
