

Innovation: Seeing the Light

July 1, 2015

A Vision-Aided Integrity Monitor for Precision Relative Navigation Systems

By Sean M. Calhoun, John Raquet and Gilbert L. Peterson

INNOVATION INSIGHTS by Richard Langley


TO MEET THE ACCURACY, availability, continuity and integrity requirements for many navigation applications, multiple-sensor systems are commonly used. For example, a GPS receiver might be combined with an inertial measurement unit, electronic compass and an altimeter to permit enhanced navigation accuracy, availability and continuity in obstructed or otherwise difficult environments. The use of arrays of sensors can also help to ensure that systems used in safety-critical navigation applications provide safe information by maintaining a high level of integrity.

An important group of devices that can be used in multi-sensor systems is one whose processes are based on light. These optical or vision-based devices include laser rangefinders and digital cameras. We could even consider our eyes to be in this group. In common with many other animals, we have built-in visual sensors to get around in our daily lives. Together with our memories, we use our eyes to get safely from one place to another. Ancient mariners tended to sail close to shore so that they could use visual cues for navigation. Later on, they learned how to use the light from celestial objects to navigate in the open ocean. And these days, while we could use the so-called “Mark 1 Eyeball” to continuously monitor the performance of a navigation system, this is often impractical, impossible or unwise.

In this month’s column, we’ll take a look at the development of a generalized vision-aided integrity monitor for precision relative navigation applications. The work is based on the concept of using a single-camera vision system, such as a visible-light or infrared electro-optical sensor, to monitor the occurrence of unacceptably large and potentially unsafe relative navigation errors. A vision-aided integrity monitor of this type could be extremely valuable in augmenting existing precision relative navigation systems, such as GPS, for many different safety-critical aerospace applications such as formation flying, aerial refueling, rendezvous/docking systems, and even precision landing.

It is particularly appropriate that such vision-aided systems be discussed at the present time since 2015 is the International Year of Light and Light-based Technologies, or IYL 2015. This United Nations initiative aims to raise awareness of the achievements of light science and its applications, and its importance to humankind. As mentioned on the IYL 2015 website, “[l]ight plays a vital role in our daily lives and is an imperative cross-cutting discipline of science in the 21st century. It has revolutionized medicine, opened up international communication via the Internet, and continues to be central to linking cultural, economic and political aspects of the global society.”

2015 is also an important anniversary year for several notable developments in our understanding of light. It is the 1,000th anniversary of the work of the Arab scholar Ibn al-Haytham, which culminated in his Book of Optics. A Latin translation significantly influenced a number of scholars in medieval and Renaissance Europe including Leonardo da Vinci, Galileo Galilei and Johannes Kepler. 2015 is also the 200th anniversary of Augustin-Jean Fresnel’s proposal that light behaves as a wave and the 150th anniversary of the publication of James Clerk Maxwell’s paper describing electromagnetic wave propagation, as we discussed in “Insights” this past March. And we should also mention that 2015 is the 100th anniversary of the publication of Albert Einstein’s general theory of relativity, which includes a description of the propagation of light and other electromagnetic waves in the presence of a gravitational field. And where would GPS and the other global navigation satellite systems and their augmentations be without the understanding that general relativity provides? Nowhere.


“Innovation” is a regular feature that discusses advances in GPS technology and its applications as well as the fundamentals of GPS positioning. The column is coordinated by Richard Langley of the Department of Geodesy and Geomatics Engineering, University of New Brunswick. He welcomes comments and topic ideas. Email him at lang @ unb.ca.


Recently, there has been increased recognition of GNSS limitations in terms of robustness, availability and susceptibility to interference. As a result, there has been renewed interest in developing non-GNSS-based navigation systems to augment system capability. This has become particularly important with the trend toward autonomous systems, where required navigation performance (RNP) metrics, such as accuracy, integrity, continuity and availability, become operational drivers. Because of this trend, there is renewed interest in gaining navigational diversity using imaging or vision-aided navigation approaches. Early research with vision systems used 3-D terrain databases and imaging systems to provide periodic position updates in collaboration with onboard inertial navigation systems (INS), much as radar systems did prior to the wide proliferation of GNSS.

For precision relative navigation applications such as formation flying, aerial refueling, rendezvous and docking systems and even precision landing, there is a significant body of research on the use of vision navigation systems. For example, a vision-based relative navigation solution for aerial refueling has been developed using an a priori 3-D tanker model. Results from flight tests showed that image-rendering relative navigation is a viable precision navigation technique for close formation flight, specifically aerial refueling, and demonstrated 95% relative navigation accuracies on the order of 35 centimeters within the operational envelope.

As the body of vision-aided navigation research continues to grow, consideration of other RNP metrics is required. Ensuring that systems are providing safe information and maintaining a high level of integrity is paramount when considering safety-critical navigation applications, but is largely neglected in current vision-navigation research.

The concept of integrity, particularly for navigation systems, refers to the level of trust that can be placed in a navigation system in terms of detecting gross errors and divergences. Many navigation applications have adopted the use of protection levels, which are real-time navigation system outputs that bound the navigation errors to the required probability of integrity risk. For the case of vertical navigation, the vertical navigation system error (NSE) is bounded by the real-time vertical protection level (VPL), and as long as the VPL is below the vertical alert limit (VAL), the system can continue its operation. Loss of integrity is defined as the case when NSE > VAL without an alert or, in other words, when NSE > VAL and VPL ≤ VAL.
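As a concrete illustration of this logic, the following minimal Python sketch classifies a single epoch using the NSE/VPL/VAL definitions above. The function name and interface are hypothetical, and the true NSE is of course unknown to a real-time system; it is available here only for simulation-style illustration.

```python
def integrity_state(nse: float, vpl: float, val: float) -> str:
    """Classify one epoch of vertical navigation integrity.

    nse: true vertical navigation system error (unknown in real time;
         used here only for illustration/simulation)
    vpl: real-time vertical protection level reported by the system
    val: vertical alert limit for the operation
    """
    if vpl > val:
        # The system itself declares the operation unavailable (an alert).
        return "alert: VPL exceeds VAL, operation must not continue"
    if nse > val:
        # NSE > VAL while VPL <= VAL: a divergence with no alert raised.
        return "loss of integrity: NSE > VAL without an alert"
    return "nominal: error bounded, operation may continue"
```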

One of the richest sources of information on how integrity can be handled for precision relative navigation systems is the Local Area Augmentation System (LAAS), which focused on providing integrity under fault-free and single ground-reference-receiver failure conditions. LAAS employs several quality monitors, complemented by receiver-based techniques such as receiver autonomous integrity monitoring (RAIM).

Much of the vision-aided navigation research to date has focused on system and algorithmic robustness rather than on quantitative and verifiable integrity, particularly for feature-based processing. One approach has introduced the concept of regional bounding for feature correspondence between time-sequenced image frames, including some feature-unique criteria that can provide some protection from feature correspondence errors. Although this approach does yield some robustness for the algorithms, no quantitative integrity characterization was developed. Another approach introduced a truly quantitative integrity monitor for failures in the mapping of features to pixels, particularly in the presence of a bias. This approach predicts the largest possible position error in the presence of one such bias due to feature mismatch using a GPS RAIM-type approach. The current state of research addressing integrity for vision navigation, using an image-rendering or template-matching approach, is even less mature. In fact, we have not identified any previous integrity-specific work for image-rendering vision navigation.

The research presented in this article generalizes the concept of integrity in terms of operating and alerting regions. Applications that use navigation systems generally have objective operating regions that require a certain navigation performance, whether this be around a glide-slope, a formation flight position or even a flight-path clearance. Navigation integrity becomes critical because large divergences from these operating regions, without an alert, can become safety risks. The alert limit is simply the instantiation of this concept. It is the threshold or measure of how much undetected divergence from the operating region can be tolerated without inducing unacceptably large safety risks.

The remaining sections of this article will describe the development of a rigorous and quantitative vision-aided integrity monitor for precision relative navigation systems. First, an introduction to relative navigation using image rendering will be given to describe the fundamental vision navigation approach. This will be followed by a detailed derivation of the proposed vision-aided integrity monitor and simulation-based performance results.

Using Image Rendering

The basis of our research is that vision-aided techniques, specifically image rendering, can be used to construct a high-performance integrity monitor for precision relative navigation systems. Image rendering and/or template matching have been used extensively in vision applications such as machine vision, medical image registration, object detection and pose estimation, and recently as a precision navigation system for applications such as aerial refueling and formation flight. The general concept of image-rendering precision relative navigation was evaluated for an automated aerial refueling application, using the approach illustrated in Figure 1. The image-rendering approach is based on comparing sensor images with imagery rendered from high-fidelity models to estimate a relative location based on the best image correspondence.

FIGURE 1. Image rendering relative navigation approach.


The image correspondence process is the most critical aspect of the image-rendering or template-matching navigation approach. The focus of our research, however, is not to make claims of optimality or to judge performance differences between image correspondence techniques, but rather to show the feasibility of the overall vision-aided integrity approach using some of these techniques. Most image correspondence approaches transform the images into feature space, such as scale-invariant feature transform, silhouette, edges and corners, to name a few, and then compute a distance metric between the feature sets, such as Minkowski or Mahalanobis distance, to determine the degree of matching.

Once the actual sensor image is converted to feature space, rendered images are generated based on the relative navigation state estimate using the model, converted to feature space, and compared to the sensor features. This process is repeated across the navigation state space, computing an image correspondence value for each state estimate. The selected navigation state estimate is based on the “best” image correspondence value across the state space.
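This search can be summarized in a few lines of Python. The following is a minimal sketch, not the authors' implementation: render_image, to_feature_space and distance are hypothetical stand-ins for the high-fidelity model renderer, the chosen feature transform and the chosen distance metric, and it assumes a minimum-is-best metric (for a maximum-is-best correspondence such as silhouette overlap, the comparison would be reversed).

```python
import numpy as np

def estimate_state(sensor_image, candidate_states, render_image,
                   to_feature_space, distance):
    """Return the candidate relative-navigation state whose rendered image
    best corresponds to the sensor image (minimum feature-space distance)."""
    sensor_features = to_feature_space(sensor_image)
    best_state, best_value = None, np.inf
    for state in candidate_states:
        rendered = render_image(state)        # I_R for this state estimate
        value = distance(to_feature_space(rendered), sensor_features)
        if value < best_value:                # "best" = minimum distance here
            best_state, best_value = state, value
    return best_state, best_value
```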

An example result of this process is presented in FIGURE 2, which shows correspondence values for an edge-based image-correspondence process. In this case, the minimum correspondence value represents the best estimate of the relative navigation state. These image correspondence values between the sensor image (IS) and the rendered reference images (IR) will form the basis for the integrity monitor detection rule.

FIGURE 2. GRD-based image correspondence illustration as a function of 2-D relative navigation state.


Vision-Aided Integrity Monitor Development

As indicated in the preceding sections, our research is based on defining a vision-aided integrity monitor in terms of detecting whether the system navigation state (x) is within a specified operating region (XOR) or within the alert region state space (XAR). The integrity monitor can yield four distinct conditions: correct rejection (PR), missed detection (PMD), detection (PD) and false alarm (PFA). The performance of this type of binary (H0/H1) detection scheme can be characterized using just two of these metrics, the detection and false-alarm rates, which will be the two primary performance metrics for this research. PD is the primary metric measuring navigation integrity, describing the probability that the monitor successfully detects the condition when x ∈ XAR.

Bayesian, Minimax and Neyman-Pearson are a few of the detection schemes available to solve this type of binary detection problem. These detection schemes rely on the knowledge of the underlying statistics of the H0 and H1 condition, often characterized in terms of the probability density functions (PDFs). The main difference between these approaches is the resulting detection rule value (δ). Once δ has been established, the resulting theoretical performances of the detectors are computed by integrating the underlying PDFs of the H0 and H1 conditions, pH0 and pH1 respectively. The probability of detection (PD) is computed as

PD = ∫Z1 pH1(z) dz    (1)

where z is the value of the detection metric, Z1 is the region of z over which the rule δ declares H1, and Z0 (its complement) is the region over which H0 is declared.

The integrity performance of the monitor can also be described in terms of integrity risk or probability of missed detection (PMD), which is computed as

PMD = ∫Z0 pH1(z) dz = 1 − PD    (2)

Similarly, the probability of false-alarm (PFA) is computed as

PFA = ∫Z1 pH0(z) dz    (3)

This is represented graphically in FIGURE 3.

FIGURE 3. Graphical illustration of detection performance.
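For readers who prefer code to integrals, Equations (1) through (3) can be estimated by Monte Carlo from sampled correspondence values. The sketch below is illustrative only: it assumes a simple scalar threshold rule in which values above the threshold are declared H1, as would suit a metric where larger values indicate poorer matches with the operating-region reference set.

```python
import numpy as np

def detector_performance(z_h0, z_h1, threshold):
    """Monte-Carlo estimates of Eqs. (1)-(3) for a scalar threshold rule
    that declares H1 whenever the detection metric z exceeds the threshold.

    z_h0, z_h1: arrays of detection-metric samples drawn under H0 and H1.
    """
    p_d = np.mean(z_h1 > threshold)    # Eq. (1): p_H1 mass in the H1 region
    p_md = 1.0 - p_d                   # Eq. (2): p_H1 mass in the H0 region
    p_fa = np.mean(z_h0 > threshold)   # Eq. (3): p_H0 mass in the H1 region
    return p_d, p_md, p_fa
```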


The PDFs represent the statistical distributions of image correspondence values for the respective H0/H1 conditions. The premise of the detection rule is that, for a given sensor image, the underlying PDF of the “best” image correspondence with the rendered reference set is sufficiently distinct when the sensor image is in an H0 condition versus H1. The characteristics of the H0/H1 PDFs that dictate the monitor performance depend on many factors, including the fidelity and accuracy of the world model, the general observability of the image-rendering process and the image correspondence approach for the specific application. For our research, we used two image correspondence techniques to evaluate the overall integrity monitor approach.

The first image correspondence technique evaluated is a simple binary silhouette (SIL). In this approach, both the sensor image IS(x) and the reference image set IR(x̂) are converted to silhouettes using predefined thresholds to first convert the red-green-blue (RGB) images to grayscale and then to binary images. An image correspondence function computes the percentage of overlap between the silhouettes.

The resulting image correspondence is based on the ratio of the cardinality of these sets. The navigation state estimate (x̂) that yields the maximum image correspondence value from the set of rendered reference images, or template database, is considered the most likely for that particular sensor image (IS).
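A minimal sketch of a SIL-style correspondence is given below, assuming 8-bit RGB inputs. The grayscale conversion, the threshold value and the intersection-over-union overlap measure are assumptions for illustration; the article specifies only that the correspondence is a ratio of set cardinalities.

```python
import numpy as np

def silhouette(rgb, gray_threshold=128):
    """Threshold an RGB image (H x W x 3, uint8) to a binary silhouette."""
    gray = rgb.mean(axis=2)            # simple grayscale conversion
    return gray > gray_threshold       # boolean silhouette mask

def sil_correspondence(sensor_rgb, reference_rgb):
    """Overlap ratio between the two silhouettes (intersection over union)."""
    s = silhouette(sensor_rgb)
    r = silhouette(reference_rgb)
    union = np.logical_or(s, r).sum()
    if union == 0:
        return 1.0                     # both silhouettes empty: trivial match
    return np.logical_and(s, r).sum() / union
```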

The second image correspondence technique utilizes edge features. Under this approach, magnitude-of-gradient (GRD) processing is used, in which the sensor image and the rendered reference images are preprocessed with a Prewitt filter to determine changes in image intensity between adjacent pixels. This process computes the components of the gradient. The gradient magnitude is computed by root-sum-squaring the x and y components and is then normalized, resulting in an edge detection. A Gaussian blur filter is then applied to the output of the edge detection.

The application of the Gaussian blurring compensates for the spatial discrepancies between the discrete reference set, or template database, and the sensor image. Finally, the resulting feature images, including both the reference image (IR_GRD) and the sensor image (IS_GRD), are processed through a sum-squared-difference (SSD) image correspondence.

The resulting PDFs are based on the best image correspondence with the rendered reference set, which for GRD processing is the minimum SSD value.
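The GRD pipeline maps naturally onto standard image-processing primitives. In the sketch below, the Prewitt filtering, gradient-magnitude computation, normalization, Gaussian blur and SSD follow the steps described above, but the blur sigma is an assumed value and this is not the authors' code.

```python
import numpy as np
from scipy.ndimage import prewitt, gaussian_filter

def grd_features(gray, blur_sigma=2.0):
    """Edge-feature image: blurred, normalized Prewitt gradient magnitude."""
    gx = prewitt(gray, axis=1)             # horizontal intensity changes
    gy = prewitt(gray, axis=0)             # vertical intensity changes
    mag = np.hypot(gx, gy)                 # root-sum-square of the components
    peak = mag.max()
    if peak > 0:
        mag = mag / peak                   # normalize the edge image
    return gaussian_filter(mag, sigma=blur_sigma)  # absorb spatial discrepancies

def grd_correspondence(sensor_gray, reference_gray):
    """Sum-squared-difference between feature images (smaller is better)."""
    diff = grd_features(sensor_gray) - grd_features(reference_gray)
    return float(np.sum(diff ** 2))
```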

These image correspondences form the basis of the detection metric, utilizing both the sensor image (IS) and the rendered reference set (IR), which is spatially distributed across the operating region, as illustrated by FIGURE 4. The illustrated example shows instances of both an H0 and an H1 sensor image (blue and red, respectively). The underlying H0/H1 PDFs for establishing the detection threshold are determined by sampling sensor images from XOR and XAR and computing the image correspondence against IR. This can be done through a combination of high-fidelity simulation and/or test data. The overall performance of the integrity monitor will be dictated by these underlying distributions. The following sections show the results of this integrity monitor approach for an aerial refueling application.

FIGURE 4. Simplified example of rendered reference set (IR) illustrating image correspondence process for integrity monitoring.


Simulation Evaluation

To explore the performance of the proposed integrity monitor approach, an aerial refueling (AR) application was modeled within a simulation environment. The AR operation lends itself well to the construct of the proposed integrity monitor, which is developed to show that the system (the refueling aircraft) is in the refueling envelope (RE) and has not violated the alert limit, which in the AR case is the safety boundary (SB). In this operational case, H0 is defined as the condition when the integrity monitor determines the refueling aircraft is in the RE, and H1 as the case when the integrity monitor determines the refueling aircraft to be within the SB. A validity region, within which the refueling aircraft is assumed to always remain under both H0 and H1 conditions, is also defined to bound the problem, as shown in FIGURE 5.

FIGURE 5. Integrity regions of interest for an aerial refueling application and illustrated example of a rendered H0 image set for the refueling envelope used as the correspondence basis for the integrity detection metric.


To determine the underlying H0/H1 distributions, a set of reference images uniformly sampled from the RE was rendered using the associated tanker and camera models. This rendered image set was used as the common basis for performing the image correspondence with the actual sensor image.

The baseline RE reference set used for this research comprised 504 rendered images distributed in a spherically uniform manner across the entire RE volume. Two random sets of simulated sensor images were then generated, drawn from the RE and SB regions. To bound the simulation, it is assumed that the refueling aircraft and the corresponding sensor images are within the validity region. This bounding assumption is an acceptable constraint, given that the system most likely had to pass several operational checks to ensure the refueling aircraft is in the general region of the RE as defined by the validity region. To obtain a detailed statistical representation of the PDFs, particularly at the tails of the distributions, both the RE and SB image sets included more than 100,000 simulated sensor images, representing true states of the refueling aircraft. The simulation environment for this analysis uses the same refueling tanker model for the sensor images and the RE reference set, which eliminates the effects of modeling errors. Additionally, variations in attitude are not currently considered. The resulting PDFs for the H0 (blue) and H1 (red) conditions are shown in FIGURE 6.

FIGURE 6. Underlying image correspondence distribution for H0 (blue) and H1 (red) conditions.


Figure 6 shows generally good distinction between the H0 and H1 hypotheses, a necessary condition for good detection performance. Several techniques were evaluated for estimating the PDFs, including histogram, nearest-neighbor and kernel methods with a Gaussian weighting function. These underlying H0 and H1 distributions are used as the basis for designing the detection thresholds, based on the image correspondence of the sensor image with the RE reference set. These results assume uniform prior distributions across the RE and SB regions; however, it would be relatively straightforward to incorporate non-uniform prior information for a particular application, as available.
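As one example of the kernel option mentioned above, scipy's gaussian_kde produces smooth H0/H1 density estimates from sampled correspondence values. The synthetic draws below are placeholders standing in for the simulation data, and the distribution parameters are invented for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Placeholder draws standing in for the H0/H1 correspondence samples.
rng = np.random.default_rng(0)
corr_h0 = rng.normal(0.20, 0.05, 100_000)   # e.g., RE (H0) correspondence values
corr_h1 = rng.normal(0.50, 0.08, 100_000)   # e.g., SB (H1) correspondence values

pdf_h0 = gaussian_kde(corr_h0)              # smooth estimate of p_H0
pdf_h1 = gaussian_kde(corr_h1)              # smooth estimate of p_H1

z = np.linspace(0.0, 1.0, 512)              # grid over the detection metric
print(float(np.trapz(pdf_h0(z), z)))        # sanity check: integrates to ~1
```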

Detection schemes are often characterized using receiver operating characteristic (ROC) curves, which illustrate the trade-off between the probability of detection and the probability of false alarm. The predicted detection performance for this AR application is a function of the underlying H0/H1 PDFs, and is captured in the ROC curves shown in FIGURE 7. The ROC curves show that 10-3-level integrity-monitor detection performance (PD) is realizable for both the SIL and GRD image correspondence approaches, while still maintaining a reasonable probability of false alarm (PFA) of less than 0.05 (5%). The SIL approach demonstrates slightly better performance than GRD at the chosen image resolution and RE reference set density. Normally, theoretical ROC curves would extend through the whole range of values [0,1] for both PD and PFA; however, this assumes unbounded PDFs, and gathering the statistics needed to extend the empirical PDFs over nearly the entire theoretical range would require an impractically large number of simulation cases. Overbounding of the PDF tails could be performed to extrapolate the H0/H1 PDFs and determine the integrity detection performance beyond the current ranges, but this was not done as part of this research.

FIGURE 7. Predicted integrity detection performance for both SIL and GRD image correspondence techniques.
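An ROC curve of this kind can be traced by sweeping the detection threshold over the sampled H0/H1 values, reusing the Monte-Carlo estimates sketched earlier. This is again an illustrative sketch under the same larger-is-H1 threshold assumption, not the authors' procedure.

```python
import numpy as np

def roc_curve(z_h0, z_h1, n_points=200):
    """(P_FA, P_D) pairs from sweeping a threshold over the sampled metrics."""
    lo = min(z_h0.min(), z_h1.min())
    hi = max(z_h0.max(), z_h1.max())
    thresholds = np.linspace(lo, hi, n_points)
    p_fa = np.array([(z_h0 > t).mean() for t in thresholds])
    p_d = np.array([(z_h1 > t).mean() for t in thresholds])
    return p_fa, p_d
```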


In most applications, conditions exist that are outside of the nominally defined operational envelope, yet are not deviations significant enough to be considered safety risks requiring alerts and action. Such a case exists for the refueling operation under consideration in this research, where there exists a region outside the RE, but not in the SB, which we will refer to as the operational limit volume (OLV). The definitions of H0 and H1 for the vision-aided integrity-monitor approaches developed above only consider conditions within the RE or the SB volume, and not within the OLV. OLV conditions were omitted since they are not technically considered a safety or integrity risk. However, it is possible under certain implementations and operational considerations that integrity monitoring coverage is desired under these OLV conditions.

Using the same analysis process as the original evaluation, an updated simulation was performed, this time considering all points within the validity region, including the OLV points. To construct a detection scheme under this new paradigm, the OLV conditions must either be mapped to the existing H0 or H1 hypotheses, or a new hypothesis must be defined, possibly creating an M-ary hypothesis scenario. The approach taken for this research was to conservatively treat OLV conditions as a safety risk, rather than defining any new hypotheses. The resulting image correspondence distributions are shown in FIGURE 8. Subplots (a) and (b) show the effect the OLV points have on the underlying PDF distributions. As expected, when the OLV points are excluded, the PDFs track the original distributions quite well. The impact of including sensor locations from the OLV is clear from these figures, yielding a much larger overlap between the H0/H1 conditions.

FIGURE 8. Simulation testing results assuming OLV states are a safety risk. The prediction represents expected performance without consideration of the OLV states. (a) SIL image correspondence PDFs,(b) GRD image correspondence PDFs, (c) SIL ROC curve, (d) GRD ROC curve.


Much like the PDFs, the ROC curves align with the previous results quite well when the OLV conditions are omitted, but suffer an order-of-magnitude integrity performance penalty when the OLV is captured under the existing H0/H1 definitions and detection thresholds. Even under this conservative assumption, the overall monitor still yields a 0.96 (96%) detection rate at a 0.05 (5%) false-alarm rate, as illustrated by the ROC curves shown in subplots (c) and (d) of Figure 8. It is likely that these results could be significantly improved by redefining the H0 and H1 conditions or by defining an H2 condition specifically for the OLV region.

Sensitivity Analysis

In addition to the baseline integrity monitor results, various sensitivity studies were performed to evaluate how environmental and hardware considerations affect integrity monitor performance. These evaluations focused on common vision-system considerations, such as sensor distortions and lighting conditions, and on monitor design choices, such as pixel resolution and reference image density.

Reference Set Density. In addition to our standard reference set of 504 RE images, we conducted tests using 288 and 729 images. A larger number of images improves integrity detection performance but decreases processing speed; processing power can be traded for performance as required by a particular application and its integrity monitor performance requirements.

Image Distortion. We applied radial and tangential distortions to the simulated sensor images (IS) at the 95th percentile of the residual error, representing an outer-envelope case for this type of sensor. The impact on the H0/H1 PDFs was minimal, and the results demonstrate a potential robustness to this common type of sensor effect.

Pixel Resolution. We evaluated eight different pixel resolutions from 12 × 9 to 1280 × 1024 pixels per image. Our results showed a surprising robustness to pixel resolution, indicating only marginal performance impacts down to extremely limited pixel densities.

Lighting Conditions. To explore the impact of lighting conditions, the simulated sensor images (IS) used as the basis for the sensitivity analysis were regenerated under a secondary lighting condition, intended to emulate a much brighter background environment, and processed against the original RE reference set. The results demonstrate that under these varying lighting conditions the system again exhibits a high level of robustness, particularly using the SIL image correspondence approach.

Ratio-Test Integrity Monitor

The integrity monitor results discussed thus far used reference images only from the operational region, the RE. However, it is also possible to use a reference image set created from rendered images of the alert region, the SB, by including an additional image correspondence process between the sensor image and the rendered SB reference set. This creates a ratio test statistic as the detection metric: we compute the ratio of the best image correspondence over the RE reference set to the best image correspondence over the SB reference set. This approach is closely analogous to the use of ratio tests for GNSS carrier-phase integer fixing.
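A sketch of the ratio statistic follows. It assumes a correspondence in which larger values indicate better matches (as with SIL); for a minimum-is-best metric such as GRD's SSD, the minimum would be taken instead and the ratio interpreted accordingly. The function names are illustrative, not from the article.

```python
def ratio_statistic(sensor_features, re_set, sb_set, correspondence):
    """Ratio of the best correspondence over the RE reference set to the
    best correspondence over the SB reference set (larger favors H0)."""
    best_re = max(correspondence(sensor_features, ref) for ref in re_set)
    best_sb = max(correspondence(sensor_features, ref) for ref in sb_set)
    return best_re / best_sb
```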

The resulting ROC detection performance of the ratio threshold approach showed that, as with the single RE reference set, the SIL image correspondence approach yields the best H1 detection performance, resulting in the best integrity protection.

The GRD ratio detection performance also improves, becoming comparable to that of the SIL image correspondence approach with the RE reference set alone.

Conclusions and Future Work

In this article, we have discussed the feasibility of a vision-aided integrity monitor for precision relative navigation systems. The research posed the relative navigation integrity problem within the context of an aerial refueling application. Using image rendering, in which an imaging sensor and a high-fidelity 3-D model are used, we have shown that 10-3 to 10-5 levels of integrity monitoring are attainable for aerial refueling and formation flight applications. Having this level of independent monitoring could provide significant relief to a GPS-based precision relative-navigation system from a system-safety and certification perspective. The research demonstrated that the proposed integrity monitor was robust against several degrading imaging effects, including lens distortions, lighting conditions and reductions in pixel resolution. Although more work is required to validate the results of this research, which was based on simulated images, the results show high promise for this type of integrity monitor approach.

Disclaimer

The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the U.S. Government.

Acknowledgment

This article is based on the paper “Vision-Aided Integrity Monitor for Precision Relative Navigation Systems” presented at ITM 2015, the 2015 International Technical Meeting of The Institute of Navigation held in Dana Point, Calif., Jan. 26–28, 2015.


SEAN CALHOUN is the managing director at CAL Analytics, Columbus, Ohio, and is pursuing his Ph.D. degree at the Air Force Institute of Technology (AFIT), Wright-Patterson Air Force Base, Ohio.

JOHN RAQUET is the director of the Autonomy and Navigation Technology Center at AFIT, where he is also a professor of electrical engineering.

GILBERT L. PETERSON is a professor of computer science at AFIT and vice chair of the International Federation for Information Processing Working Group 11.9, Digital Forensics.

FURTHER READING

  • Authors’ Conference Paper

“Vision-Aided Integrity Monitor for Precision Relative Navigation Systems” by S.M. Calhoun, J. Raquet and G. Peterson in Proceedings of ITM 2015, the 2015 International Technical Meeting of The Institute of Navigation, Dana Point, Calif., Jan. 26–28, 2015.

  • Image-Sensor Navigation

“Flight Test Evaluation of Image Rendering Navigation for Close-Formation Flight” by S.M. Calhoun, J. Raquet and J. Curro in Proceedings of ION GNSS 2012, the 25th International Technical Meeting of the Satellite Division of The Institute of Navigation, Nashville, Tenn., Sept. 17–21, 2012, pp. 826–832.

Using Predictive Rendering as a Vision-Aided Technique for Autonomous Aerial Refueling by A.D. Weaver, M.S. thesis, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio, March 2009.

“Fusing Low-Cost Image and Inertial Sensors for Passive Navigation” by M. Veth and J. Raquet in Navigation: Journal of The Institute of Navigation, Vol. 54, No. 1, Spring 2007, pp. 11–20. doi: 10.1002/j.2161-4296.2007.tb00391.x.

“Automated Rendezvous and Docking Sensor Testing at the Flight Robotics Laboratory” by J.D. Mitchell, S.P. Cryan, D. Strack, L.L. Brewster, M.J. Williamson, R.T. Howard and A.S. Johnston in Proceedings of 2007 IEEE Aerospace Conference, Big Sky, Mont., March 3–10, 2007, doi: 10.1109/AERO.2007.352723.

“Performance of Integrated Electro-Optical Navigation Systems” by T. Hoshizaki, D. Andrisani II, A.W. Braun, A.K. Mulyana and J.S. Bethel in Navigation: Journal of The Institute of Navigation, Vol. 51, No. 2, Summer 2004, pp. 101–121, doi: 10.1002/j.2161-4296.2004.tb00344.x.

  • Simultaneous Localization and Mapping

“A Review of Recent Developments in Simultaneous Localization and Mapping” by G. Dissanayake, S. Huang, Z. Wang and R. Ranasinghe in Proceedings of 6th IEEE International Conference on Industrial and Information Systems, Kandy, Sri Lanka, Aug. 16–19, 2011, pp. 477–482, doi: 10.1109/ICIINFS.2011.6038117.

  • Navigation Integrity

“Developing a Framework for Image-based Integrity” by C. Larson, J.F. Raquet and M.J. Veth in Proceedings of ION GNSS 2009, the 22nd International Technical Meeting of the Satellite Division of The Institute of Navigation, Savannah, Ga., Sept. 22–25, 2009, pp. 778–789.

“From RAIM to NIOAIM: A New Integrity Approach to Integrated Multi-GNSS Systems” by P.Y. Hwang and R.G. Brown in Inside GNSS, Vol. 3, No. 4, May-June 2008, pp. 24–33.

Minimum Aviation System Performance Standards for Local Area Augmentation System (LAAS), DO-245A, by RTCA SC-159 WG-4, RTCA Inc., Washington, D.C., December 2004.

  • Camera Calibration

“Flexible Camera Calibration by Viewing a Plane from Unknown Orientations” by Z. Zhang in Proceedings of ICCV99, the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, Sept. 20–27, 1999, Vol. 1, pp. 666–673, doi: 10.1109/ICCV.1999.791289.

  • Digital Image Processing

Digital Image Processing, 4th Ed., by W.K. Pratt, published by John Wiley & Sons, New York, 2007.

Digital Image Processing, 3rd Ed., by R.C. Gonzalez and R.E. Woods, published by Prentice Hall, Upper Saddle River, N.J., 2007.

  • Signals and Noise

Detection of Signals in Noise, 2nd Ed., by R. N. McDonough and A.D. Whalen, published by Academic Press, Inc., Waltham, Mass., 1995.

An Introduction to Signal Detection and Estimation, 2nd Ed., by H.V. Poor, published by Dowden & Culver, an imprint of Springer, New York, 1994.