
Innovation: Getting Along

November 1, 2012

Collaborative Navigation in Transitional Environments

By Dorota A. Grejner-Brzezinska, J.N. (Nikki) Markiel, Charles K. Toth and Andrew Zaydak


INNOVATION INSIGHTS by Richard Langley

COLLABORATION, n. /kəˌlæbəˈreɪʃən/. A noun of action: united labour, co-operation; especially in literary, artistic, or scientific work, according to the Oxford English Dictionary. Collaboration is something we all practice, knowingly or unknowingly, even in our everyday lives. It generally results in a more productive outcome than acting individually. In scientific and engineering circles, collaboration in research is extremely common; most published papers, for example, have multiple authors.

The term collaboration can be applied not only to the endeavors of human beings or other living creatures but also to inanimate objects. Researchers have developed systems of miniaturized robots and unmanned vehicles that operate collaboratively to complete a task. These platforms must navigate as part of their functions, and this navigation can often be made more continuous and accurate if each individual platform navigates collaboratively within the group rather than autonomously. This is typically achieved by exchanging sensor measurements over some kind of short-range wireless technology such as Wi-Fi, ultra-wideband, or ZigBee, a suite of communication protocols for small, low-power digital radios based on an Institute of Electrical and Electronics Engineers standard for personal area networks.

A wide variety of navigation sensors can be implemented for collaborative navigation, depending on whether the system is designed for outdoor use, for use inside buildings, or for operation in a wide variety of environments. In addition to GPS and other global navigation satellite systems, inertial measurement units, terrestrial radio-based navigation systems, laser and acoustic ranging, and image-based systems can be used.

In this month’s article, a team of researchers at The Ohio State University discusses a system under development for collaborative navigation in transitional environments — environments in which GPS alone is insufficient for continuous and accurate navigation. Their prototype system involves a land-based deployment vehicle and a human operator carrying a personal navigator sensor assembly, which initially navigate together before the personal navigator transitions to an indoor environment. This system will have multiple applications including helping first responders to emergencies. Read on.

“Innovation” is a regular feature that discusses advances in GPS technology and its applications as well as the fundamentals of GPS positioning. The column is coordinated by Richard Langley of the Department of Geodesy and Geomatics Engineering, University of New Brunswick. He welcomes comments and topic ideas. To contact him, see the “Contributing Editors” section on page 6.


Collaborative navigation is an emerging field in which a group of users navigates together by exchanging navigation and inter-user ranging information. This concept has been considered a viable alternative for GPS-challenged environments. However, most of the systems and approaches developed to date are based on fixed types and numbers of sensors per user or platform (a restricted sensor configuration), which ultimately limits navigation capability, particularly in mixed or transitional environments.

As an example of an applicable scenario, consider an emergency crew navigating initially in a deployment vehicle and, when subsequently dispatched, continuing in collaborative mode, referring to the navigation solutions of the other users and vehicles. This approach is designed to assure a continuous navigation solution for distributed agents in transitional environments, such as when moving between open areas, partially obstructed areas, and indoor spaces, where different types of users need to maintain high-accuracy navigation capability in both relative and absolute terms.

At The Ohio State University (OSU), we have developed systems that use multiple sensors and communications technologies to investigate, experimentally, the viability and performance attributes of such collaborative navigation. For our experiments, two platforms, a land-based deployment vehicle and a human operator carrying a personal navigator (PN) sensor assembly, initially navigate together before the PN transitions to the indoor environment.

In this article, we describe the concept of collaborative navigation, briefly outline the systems we have developed and the algorithms used, and report on the results of some of our tests. The focus of the study reported here is on the environment-to-environment transition and on indoor navigation based on 3D sensor imagery, initially in post-processing mode with a plan to transition to real time.

The Concept

Collaborative navigation, also referred to as cooperative navigation or positioning, is a localization technique emerging from the field of wireless sensor networks (WSNs). Typically, the nodes in a WSN can communicate with each other using wireless communications technology based on standards such as ZigBee/IEEE 802.15.4. The communication signals in a WSN are used to derive the inter-nodal distances across the network. The collaborative navigation solution is then formed by integrating the inter-nodal range measurements among the nodes (users) in the network using a centralized or decentralized Kalman filter or a least-squares-based approach.
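To make the ranging element of this concept concrete, the short Python sketch below estimates one node's 2D position from inter-nodal ranges to collaborating nodes at known positions, using iterated least squares. It is a minimal, hypothetical illustration (the coordinates and noise level are invented), not the formulation used in the systems described here; a centralized or decentralized Kalman filter would additionally carry the dynamic and inertial information.

    # Minimal sketch: estimate one node's 2D position from inter-nodal ranges to
    # collaborating nodes at known positions, using Gauss-Newton least squares.
    # Hypothetical example; coordinates and noise level are invented.
    import numpy as np

    def estimate_position(anchors, ranges, x0, iterations=10):
        """anchors: (n,2) known node positions; ranges: (n,) measured distances;
        x0: (2,) initial guess for the unknown node position."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iterations):
            diffs = x - anchors                       # vectors from anchors to estimate
            dists = np.linalg.norm(diffs, axis=1)     # predicted ranges
            residuals = ranges - dists                # observed minus computed
            H = diffs / dists[:, None]                # design matrix (line-of-sight unit vectors)
            dx, *_ = np.linalg.lstsq(H, residuals, rcond=None)
            x = x + dx
            if np.linalg.norm(dx) < 1e-6:
                break
        return x

    anchors = np.array([[0.0, 0.0], [30.0, 0.0], [15.0, 25.0]])   # collaborating nodes
    truth = np.array([12.0, 9.0])                                 # simulated true position
    ranges = np.linalg.norm(anchors - truth, axis=1) + np.random.normal(0.0, 0.1, 3)
    print(estimate_position(anchors, ranges, x0=[10.0, 10.0]))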

A paradigm shift from single to multi-sensor to multi-platform navigation is illustrated conceptually in Figure 1. While conventional sensor integration and integrated sensor systems are commonplace in navigation, sensor networks of integrated sensor systems are a relatively new development. Figure 2 illustrates the concept of collaborative navigation with emphasis on transitions between varying environments. In actual applications, example networks include those formed by soldiers, emergency crews, and formations of robots or unmanned vehicles, with the primary objective of achieving a sustained level of sufficient navigation accuracy in GPS-denied environments and assuring seamless transitions among sensors, platforms, and environments.

Source: Dorota A. Grejner-Brzezinska, J.N. (Nikki) Markiel, Charles K. Toth and Andrew Zaydak

Figure 1. Paradigm shift in sensor integration concept for navigation.

Source: Dorota A. Grejner-Brzezinska, J.N. (Nikki) Markiel, Charles K. Toth and Andrew Zaydak

Figure 2. Collaborative navigation and transition between varying environments.

Field Experiments and Methodology

A series of field experiments was carried out in the fall of 2011 at OSU, and in the spring of 2012 at the Nottingham Geospatial Institute of the University of Nottingham, using the updated prototype of the personal navigator developed earlier at the OSU Satellite Positioning and Inertial Navigation Laboratory, together with land-based multisensor vehicles. Note that the PN prototype is not a miniaturized system, but rather a sensor assembly put together using commercial off-the-shelf components for demonstration purposes only.

The GPSVan (see Figure 3), the OSU mobile research navigation and mapping platform, and the recently upgraded OSU PN prototype (see Figure 4) jointly performed a variety of maneuvers, collecting data from multiple GPS receivers, inertial measurement units (IMUs), imaging sensors, and other devices. Parts of the collected data sets have been used for demonstrating the performance of navigation indoors and in the transition between environments, and it is this aspect of our experiments that will be discussed in the present article.

Source: Dorota A. Grejner-Brzezinska, J.N. (Nikki) Markiel, Charles K. Toth and Andrew Zaydak

Figure 3. Land vehicle, OSU GPSVan.

Source: Dorota A. Grejner-Brzezinska, J.N. (Nikki) Markiel, Charles K. Toth and Andrew Zaydak

Figure 4. Personal navigator sensor assembly.

The GPSVan was equipped with navigation, tactical, and microelectromechanical systems (MEMS)-grade IMUs, installed in a two-level rigid metal cage, and the signals from two GPS antennas, mounted on the roof, were shared among multiple geodetic-grade dual-frequency GPS receivers. In addition, odometer data were logged, and optical imagery was acquired in some of the tests.

The first PN prototype system, developed in 2006–2007, used GPS, an IMU, a digital barometer, a magnetometer compass, a human locomotion model, and a 3D active imaging sensor, a Flash LIDAR (an imaging light detection and ranging system using rapid laser pulses for subject illumination). Recently, the design was upgraded to include 2D/3D imaging sensors to provide better position and attitude estimates indoors and to facilitate the transition between outdoor and indoor environments. Consequently, the current configuration allows for better distance estimation among platforms, both indoors and outdoors, as well as improved navigation and tracking performance in general.

The test area where data were acquired to support this study, shown in Figure 5, includes an open parking lot, moderately vegetated passages, a narrow alley between buildings, and a one-story building for indoor navigation testing. The three typical scenarios used were:
1)    Sensor/platform calibration: the GPSVan and the PN are connected and navigate together.
2)    Both platforms move closely together; that is, the GPSVan follows the PN's trajectory.
3)    Both platforms move independently.

Source: Dorota A. Grejner-Brzezinska, J.N. (Nikki) Markiel, Charles K. Toth and Andrew Zaydak

Figure 5. Test area used for the field experiments.

Image-Based Navigation

The sensor of interest for the study reported here is an image sensor that actually includes two distinct data streams: a standard intensity image and a 3D ranging image (see Figure 6). The unit consists primarily of a 640 × 480 pixel array of infrared detectors. The operational range of the sensor is 0.8–10 meters, with a range resolution of 1 centimeter at a 2-meter distance.

Source: Dorota A. Grejner-Brzezinska, J.N. (Nikki) Markiel, Charles K. Toth and Andrew Zaydak

Figure 6. PN captured 3D image sequence from inside the building.

In this study, image-based navigation without IMU aiding was considered. To overcome the absence of inertial data, the intensity images acquired by the unit simultaneously with the range data were leveraged to provide the missing coarse motion information. Consecutive intensity images were processed using the Scale Invariant Feature Transform (SIFT) algorithm to identify matching features between each pair of 2D frames.
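As a generic illustration of this step, the following sketch matches SIFT features between two consecutive intensity frames using OpenCV. It assumes an OpenCV build that includes SIFT and uses hypothetical file names; it is not the code used in our prototype.

    # Sketch: match SIFT features between two consecutive intensity frames.
    # Generic OpenCV example (requires a build with SIFT), using hypothetical
    # file names; not the code used in our prototype.
    import cv2

    img1 = cv2.imread("intensity_frame_k.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("intensity_frame_k1.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Nearest-neighbour matching with Lowe's ratio test to reject ambiguous
    # candidates, which mitigates the non-unique matching issue discussed below.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # Sub-pixel keypoint locations; these are later associated with range pixels.
    pts1 = [kp1[m.queryIdx].pt for m in good]
    pts2 = [kp2[m.trainIdx].pt for m in good]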

To date, the SIFT algorithm has been applied primarily to 1D and 2D imagery; the authors are not aware of any research efforts to apply SIFT to 3D datasets for the express purpose of positioning. Analysis at our laboratory supported well-documented results regarding the exceptional performance of SIFT with respect to both repeatability and extraction of feature content. The algorithm is remarkably robust to most image-corruption schemes, although white noise above 5 percent does appear to be its primary weakness. The algorithm suffers in three critical areas with respect to providing a 3D positioning solution. First, it is difficult to scale in terms of the number of descriptive points; that is, it quickly becomes computationally intractable for a large number (greater than 5,000) of pixels. Second, the matching process is not unique; it is quite possible for the algorithm to match a single point in one image to multiple points in another image. Finally, since the algorithm sacrifices spatial positioning capability to achieve repeatability, the ability to utilize matching features for triangulation or trilateration is impaired. Owing to these issues, SIFT was not found to be a suitable methodology for real-time positioning based on 3D Flash LIDAR datasets.

Despite these drawbacks, the intensity images offer the only available sensor input beyond the 3D ranging image. As such, the SIFT methodology provides what we believe to be a “best in class” algorithmic approach for matching 2D intensity images. The necessity of leveraging the intensity images will become apparent shortly, as the scheme for deriving platform position is explained.

The 3D-range-matching algorithm was developed and implemented by the second author (see Further Reading for details). It utilizes eigenvector “signatures” for point features as a means of facilitating matching and comprises four steps:
1)    Segmentation
2)    Coordinate frame transformation
3)    Feature matching
4)    Position and orientation determination.

The algorithm uses the eigenvector descriptors to merge points likely to belong to a surface and to identify the pixels corresponding to transitions between surfaces. Using an initial coarse estimate from the IMU, the results from the previous frame are transformed into the current coordinate reference frame by means of a random sample consensus (RANSAC) methodology. Matching of static transitional pixels is accomplished by comparing eigenvector “signatures” within a constrained search window. Once matching features are identified and determined to be static, the closed-form quaternion solution is used to derive the position and orientation of the acquisition device, and the result updates the inertial system in the same manner as a GPS receiver does in a common GPS/IMU integration. The algorithm is unique in that the threshold mechanisms at each step are derived from the data itself rather than relying upon a priori limits. Since the algorithm uses only transitional pixels for matching, a significant reduction in dimensionality is generally achieved, which facilitates implementation on larger data frames.
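The RANSAC step can be sketched generically as follows: minimal sets of candidate correspondences are sampled repeatedly, a rigid transform is fit to each sample, and the hypothesis with the most inliers is refit on all of its inliers. The helper rigid_transform is an assumed function returning a rotation and translation; the closed-form quaternion (Horn) solution sketched later in this article could fill that role. This is a textbook formulation offered for orientation, not our implementation.

    # Sketch of the RANSAC step: repeatedly fit a rigid transform to minimal
    # samples of candidate 3D correspondences and keep the hypothesis with the
    # most inliers. The helper rigid_transform(P, Q) -> (R, t) is assumed.
    import numpy as np

    def ransac_rigid_transform(P, Q, rigid_transform, iters=200, tol=0.05):
        """P, Q: (n,3) candidate correspondences (previous/current frame);
        tol: inlier distance threshold in meters."""
        n = len(P)
        rng = np.random.default_rng(0)
        best_R, best_t = None, None
        best_inliers = np.zeros(n, dtype=bool)
        for _ in range(iters):
            idx = rng.choice(n, size=3, replace=False)      # minimal sample
            R, t = rigid_transform(P[idx], Q[idx])
            err = np.linalg.norm(P @ R.T + t - Q, axis=1)   # per-point misfit
            inliers = err < tol
            if inliers.sum() > best_inliers.sum():
                best_R, best_t, best_inliers = R, t, inliers
        if best_inliers.sum() >= 3:                         # refit on all inliers
            best_R, best_t = rigid_transform(P[best_inliers], Q[best_inliers])
        return best_R, best_t, best_inliers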

The key point in this overview is the need to provide coarse positioning information to the 3D matching algorithm to constrain the search space for matching eigenvector signatures. Since the IMU data were not available, the matching SIFT features from the intensity images were correlated with the associated range pixel measurements, and these range measurements were utilized in Horn’s Method (see Further Reading) to provide the coarse adjustment between consecutive range image frames. The 3D-range-matching algorithm described above then proceeds normally.
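For readers unfamiliar with Horn's method, a compact rendering of the closed-form quaternion solution is sketched below: given matched 3D points from two frames, it returns the rotation and translation relating them. It follows the published formulation (see Further Reading) but is an illustrative sketch only, not our production code.

    # Sketch of Horn's closed-form absolute-orientation solution (quaternion
    # form): given matched 3D point sets P and Q, find R and t such that
    # Q is approximately R @ P + t.
    import numpy as np

    def horn_absolute_orientation(P, Q):
        """P, Q: (n,3) arrays of matched 3D points (previous/current frame)."""
        Pc = P - P.mean(axis=0)               # remove centroids
        Qc = Q - Q.mean(axis=0)
        S = Pc.T @ Qc                         # cross-covariance matrix
        Sxx, Sxy, Sxz = S[0]
        Syx, Syy, Syz = S[1]
        Szx, Szy, Szz = S[2]
        # Symmetric 4x4 matrix whose dominant eigenvector is the quaternion.
        N = np.array([
            [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
            [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
            [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
            [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz],
        ])
        eigvals, eigvecs = np.linalg.eigh(N)
        w, x, y, z = eigvecs[:, np.argmax(eigvals)]   # unit quaternion (w, x, y, z)
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
        t = Q.mean(axis=0) - R @ P.mean(axis=0)
        return R, t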

The use of SIFT to provide the initial matching between the images entails accepting several critical issues beyond the limitations previously discussed. First, since the SIFT algorithm matches 2D features on the intensity image, there is no guarantee that the matched features represent static elements in the field of view. As an example, SIFT can easily “match” the logo on a shirt worn by a moving person; since the input data will then include the positions of non-static elements, the resulting coarse adjustment may possess very large position biases. If these biases are significant, constraining the search space may be infeasible, resulting in either the inability to generate eigenvector matches (worst case) or a longer search time (best case). Since the 3D-range-matching algorithm checks the two range images for consistency before the matching process begins, this can be largely mitigated in implementation. Second, the SIFT features are located with sub-pixel precision, so the correlation to the range-pixel image will inherently possess an error of ± 1 pixel (in row and column); the range pixel actually selected to support the coarse adjustment may therefore not be the correct one, resulting in larger errors during the initial (coarse) adjustment. Third, the uncertainty of the coarse adjustment is not known, so a priori estimates of the error ellipse must be made to establish the eigenvector search space. The size and extent of these error ellipses are not defined on the fly by the data, which weakens one of the key elements of the 3D matching algorithm. Fourth, the limited range of the image sensor results in a condition where intensity features have no associated range measurement (the feature is out of range for the range device). This reduces the effective use of SIFT features for coarse alignment. Nevertheless, using the intensity images does demonstrate the ability of the 3D-range-matching algorithm to generically utilize coarse adjustment information and refine the result to provide a navigation solution.
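The second and fourth issues arise in the association step itself. The sketch below, a simplified stand-in for our implementation, rounds each sub-pixel SIFT location to the nearest range pixel (introducing the ± 1 pixel association error) and discards features whose range pixel contains no valid return because the feature lies beyond the sensor's operating range; the invalid-value convention is an assumption for illustration.

    # Sketch: associate sub-pixel SIFT keypoint locations with range-image pixels.
    # Rounding to the nearest pixel introduces the +/-1 pixel association error
    # discussed above; keypoints whose range pixel holds no valid return (the
    # feature lies beyond the sensor's operating range) are discarded.
    def associate_sift_with_range(keypoints_xy, range_image, invalid_value=0.0):
        """keypoints_xy: iterable of (col, row) sub-pixel locations;
        range_image: (H, W) array of ranges in meters."""
        kept_keypoints = []
        range_samples = []            # (row, col, range) for later back-projection
        rows, cols = range_image.shape
        for col, row in keypoints_xy:
            r, c = int(round(row)), int(round(col))
            if not (0 <= r < rows and 0 <= c < cols):
                continue              # keypoint falls outside the range image
            rng = range_image[r, c]
            if rng == invalid_value:
                continue              # no return: feature out of sensor range
            kept_keypoints.append((col, row))
            range_samples.append((r, c, rng))
        return kept_keypoints, range_samples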

Data Analysis

In the experiment selected for discussion in this article, the PN was initially riding in the GPSVan. After completing several loops in the parking lot (the upper portion of Figure 5), the PN departed the vehicle and entered the building (see Figure 7), exited the facility, completed a trajectory around the second building (denoted as “mixed area” in Figure 5), and then returned to the parking lot.

Source: Dorota A. Grejner-Brzezinska, J.N. (Nikki) Markiel, Charles K. Toth and Andrew Zaydak

Figure 7. Building used as part of the test trajectory for indoor and transition environment testing; yellow line: nominal personal navigator indoor trajectories; arrows: direction of personal navigator motion inside the building; inset: reconstructed trajectory section, based on 3D image-based navigation.

While minor GPS outages can occur under the tree canopy, the critical part of the trajectory is the segment inside the building, where the PN platform is unable to access the GPS signals. Our efforts are therefore focused on providing alternative methods of positioning to bridge this critical gap.

Utilizing the combined intensity images (for coarse adjustment via SIFT) and the 3D ranging data, a trajectory was derived for travel inside the building at the OSU Supercomputing Facility. There is a finite interval between exiting the building and recovery of GPS signal lock during which the range acquisition was not available; thus the total travel distance during the GPS signal outage is not precisely identical to the travel distance over which 3D range solutions were utilized for positioning. We estimate the distance from recovery of GPS signals to the last known 3D-ranging-derived position to be approximately 3 meters. On this basis, the travel distance inside the building should be approximately 53.5 meters (forward), 9.5 meters (right), and 0.75 meters (vertical). The total misclosure of the 3D-range-derived positions with respect to these estimates is provided in Table 1. The asterisk in the third row indicates the estimated nature of these values.

Source: Dorota A. Grejner-Brzezinska, J.N. (Nikki) Markiel, Charles K. Toth and Andrew Zaydak

Table 1. Approximate positional results for the OSU Supercomputing Facility trajectory.

The average positional uncertainty reflects the relative, frame-to-frame error reported by the algorithm during the indoor trajectory and includes both IMU and 3D ranging solutions. The rather large misclosure in the forward and vertical directions is the result of three distinct issues. First, the image ranging sensor has a limited range; during certain portions of the trajectory the sensor is nearly “blind” due to the lack of measurable features within range. During these periods, the algorithm must default to the IMU data, which are known to be suspect, as previously discussed. Second, the correlation between SIFT features and range-measurement pixels can induce errors, as discussed above. Third, the 3D range positions and the IMU data were not integrated in this demonstration; the range positions were used to substitute for the lost GPS signals while the IMU was drifting. Resolving this final issue would, at a minimum, reduce the IMU drift error and improve the overall solution.
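To indicate what such an integration would look like, the sketch below applies a range-derived position as a loosely coupled Kalman filter update to a simple position-velocity state, in the same way a GPS position fix is commonly applied in GPS/IMU integration. It is a generic textbook update over an assumed six-element state, not the filter design of our system.

    # Sketch: applying a frame-to-frame, range-derived position as an aiding
    # update to a navigation filter, analogous to a GPS position fix in loosely
    # coupled GPS/IMU integration. Generic textbook update; assumed state layout.
    import numpy as np

    def position_update(x, P, z_pos, R_meas):
        """x: (6,) state [px, py, pz, vx, vy, vz]; P: (6,6) state covariance;
        z_pos: (3,) range-derived position; R_meas: (3,3) measurement covariance."""
        H = np.hstack([np.eye(3), np.zeros((3, 3))])   # position observed directly
        y = z_pos - H @ x                              # innovation
        S = H @ P @ H.T + R_meas                       # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        x_new = x + K @ y
        P_new = (np.eye(6) - K @ H) @ P
        return x_new, P_new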

A follow-up study at a different facility was completed using the same platform and methodology. In this study, a closed traverse was carried out indoors, forming a “box” or square trajectory that returned to the original entrance point. A plot of the trajectory results is provided in Figure 8. The misclosure is less than four meters with respect to both the forward (z) and right (x) directions. While similar issues exist with IMU drift (owing to the lack of tight integration with the ranging data), a number of problems with the SIFT-feature/range-pixel correlation portion of the algorithm are evident; note the large “clumps” of data points, where the algorithm struggles to reconcile the motions reported by the coarse (SIFT-derived) positions and the range-derived positions.

Source: Dorota A. Grejner-Brzezinska, J.N. (Nikki) Markiel, Charles K. Toth and Andrew Zaydak

Figure 8. Indoor scenario: square (box) trajectory.

Conclusions

As demonstrated in this article, position determination based upon 3D range measurements has particular potential benefit for navigation during periods of operation in GPS-denied environments. The experiments highlight several salient points for our ongoing research activities. First, the effective measurement range of the sensor is paramount; the trivial (but essential) need to acquire data is critical to success. A major problem was the presence of matching SIFT features with no corresponding range measurements. Second, orientation information is just as critical as position; the lack of this information significantly extended the time required to match features (via eigenvector signatures). Third, there is a critical need for the sensor to scan not only forward (along the trajectory) but also right/left and up/down. Obtaining features in all axes would support efforts to minimize IMU drift, particularly in the vertical. Alternatively, a wider field of view could conceivably accomplish the same objective. Finally, the algorithm was not fully integrated as a substitute for GPS positioning, and the IMU was free to drift. Since the 3D ranging algorithm cannot guarantee a solution for all epochs, accurate IMU positioning is critical to bridge these outages. Fully integrating the 3D ranging solution in a GPS/IMU/3D-imaging scheme would significantly reduce positional errors and misclosure.

Our study indicates that leveraging 3D ranging images to achieve indoor relative (frame-to-frame) positioning shows great promise. The use of SIFT to match intensity images was an unfortunate necessity dictated by data availability; the method is technically feasible, but our results suggest there are significant drawbacks to this application, in terms of both efficiency and positional accuracy. It would be better to use IMU data with orientation solutions to derive the best possible solution. Our next step is full integration with the IMU to enable 3D ranging solutions to update the ongoing trajectory, which we believe will reduce the misclosure and provide enhanced solutions supporting autonomous (or semi-autonomous) navigation.

Acknowledgments

This article is based on the paper “Cooperative Navigation in Transitional Environments,” presented at PLANS 2012, the Institute of Electrical and Electronics Engineers / Institute of Navigation Position, Location and Navigation Symposium held in Myrtle Beach, South Carolina, April 23–26, 2012.

Manufacturers

The equipment used for the experiments discussed in this article included a NovAtel Inc. SPAN system consisting of a NovAtel OEMV GPScard, a Honeywell International Inc. HG1700 Ring Laser Gyro IMU, a Microsoft Xbox Kinect 3D imaging sensor, and a Casio Computer Co., Ltd. Exilim EX-H20G Hybrid-GPS digital camera.


DOROTA GREJNER-BRZEZINSKA is a professor and leads the Satellite Positioning and Inertial Navigation (SPIN) Laboratory at OSU, where she received her M.S. and Ph.D. degrees in geodetic science.

J.N. (NIKKI) MARKIEL is a lead geophysical scientist at the National Geospatial-Intelligence Agency. She obtained her Ph.D. in geodetic engineering at OSU.

CHARLES TOTH is a senior research scientist at OSU’s Center for Mapping. He received a Ph.D. in electrical engineering and geoinformation sciences from the Technical University of Budapest, Hungary.

ANDREW ZAYDAK is a Ph.D. candidate in geodetic engineering at OSU.

FURTHER READING

◾ The Concept of Collaborative Navigation

“The Network-based Collaborative Navigation for Land Vehicle Applications in GPS-denied Environment” by J-K. Lee, D.A. Grejner-Brzezinska and C. Toth in the Royal Institute of Navigation Journal of Navigation; in press.

“Positioning and Navigation in GPS-challenged Environments: Cooperative Navigation Concept” by D.A. Grejner-Brzezinska, J-K. Lee and C. K. Toth, presented at FIG Working Week 2011, Marrakech, Morocco,  May 18-22, 2011.

“Network-Based Collaborative Navigation for Ground-Based Users in GPS-Challenged Environments” by J-K. Lee, D. Grejner-Brzezinska, and C.K. Toth in Proceedings of ION GNSS 2010, the 23rd International Technical Meeting of the Satellite Division of The Institute of Navigation, Portland, Oregon, September 21-24, 2010, pp. 3380-3387.

◾ Sensors Supporting Collaborative Navigation

“Challenged Positions: Dynamic Sensor Network, Distributed GPS Aperture, and Inter-nodal Ranging Signals” by D.A. Grejner-Brzezinska, C.K. Toth, J. Gupta, L. Lei, and X. Wang in GPS World, Vol. 21, No. 9, September 2010, pp. 35-42.

“Positioning in GPS-challenged Environments: Dynamic Sensor Network with Distributed GPS Aperture and Inter-nodal Ranging Signals” by D.A. Grejner-Brzezinska, C.K. Toth, L. Li, J. Park, X. Wang, H. Sun, I.J. Gupta, K. Huggins, and Y.F. Zheng in Proceedings of ION GNSS 2009, the 22nd International Technical Meeting of the Satellite Division of The Institute of Navigation, Savannah, Georgia, September 22-25, 2009, pp. 111–123.

“Separation of Static and Non-Static Features from Three Dimensional Datasets: Supporting Positional Location in GPS Challenged Environments – An Update” by J.N. Markiel, D. Grejner-Brzezinska, and C. Toth in Proceedings of ION GNSS 2007, the 20th International Technical Meeting of the Satellite Division of The Institute of Navigation, Fort Worth, Texas, September 25-28, 2007, pp. 60-69.

◾ Personal Navigation

“Personal Navigation: Extending Mobile Mapping Technologies Into Indoor Environments” by D. Grejner-Brzezinska, C. Toth, J. Markiel, and S. Moafipoor in Boletim De Ciencias Geodesicas, Vol. 15, No. 5, 2010, pp. 790-806.

“A Fuzzy Dead Reckoning Algorithm for a Personal Navigator” by S. Moafipoor, D.A. Grejner-Brzezinska, and C.K. Toth, in Navigation, Vol. 55, No. 4, Winter 2008, pp. 241-254.

“Quality Assurance/Quality Control Analysis of Dead Reckoning Parameters in a Personal Navigator” by S. Moafipoor, D. Grejner-Brzezinska, C.K. Toth, and C. Rizos in Location Based Services & TeleCartography II: From Sensor Fusion to Context Models, G. Gartner and K. Rehrl (Eds.), Lecture Notes in Geoinformation & Cartography, Springer-Verlag, Berlin and Heidelberg, 2008, pp. 333-351.

“Pedestrian Tracking and Navigation Using Adaptive Knowledge System Based on Neural Networks and Fuzzy Logic” by S. Moafipoor, D. Grejner-Brzezinska, C.K. Toth, and C. Rizos in Journal of Applied Geodesy, Vol. 1, No. 3, 2008, pp. 111-123.

◾ Horn’s Method

“Closed-form Solution of Absolute Orientation Using Unit Quaternions” by B.K.P. Horn in Journal of the Optical Society of America, Vol. 4, No. 4, April 1987, pp. 629-642.
