
Integrity Monitoring Techniques for Vision Navigation Systems

2012

In aviation applications, navigation integrity is paramount. The integrity of GPS systems is well established, with set standards. Vision-based navigation systems have been found to be an adequate substitute for GPS when it is unavailable, but they are unlikely to be utilized until there is a measure of system integrity. Work has been done to detect the effect of a single measurement pair being corrupted with a bias; however, the measurement geometry varies greatly with the environment, which can be sparse or rich in visual features to track. With more features, there is a greater probability of having multiple corrupted measurements. It is essential that multiple corrupt measurements be detected and excluded to assure the integrity and reliability of the system. This thesis focuses on understanding the existing integrity monitoring methods and using them for the detection of multiple errors in vision-based navigation systems. The current techniques are found to have the ability to detect single and multiple errors, but they cannot isolate them. This thesis develops an algorithm with the ability to isolate multiple erroneous measurements, adding to the capabilities of existing integrity monitoring fault detection and isolation techniques.

Wright State University CORE Scholar, Browse all Theses and Dissertations, 2012.

Repository Citation: Baine, Nicholas Allen, "Integrity Monitoring Techniques for Vision Navigation Systems" (2012). Browse all Theses and Dissertations. 609. https://corescholar.libraries.wright.edu/etd_all/609

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Engineering by Nicholas A. Baine, B.S.E.E., Wright State University, 2007. Wright State University, 2012.

WRIGHT STATE UNIVERSITY SCHOOL OF GRADUATE STUDIES, August 4, 2012. I HEREBY RECOMMEND THAT THE THESIS PREPARED UNDER MY SUPERVISION BY Nicholas A. Baine ENTITLED Integrity Monitoring Techniques for Vision Navigation Systems BE ACCEPTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in Engineering.

Kuldip Rattan, Ph.D., Thesis Director
Kefu Xue, Ph.D., Chair, Department of Electrical Engineering

Committee on Final Examination: Kuldip Rattan, Ph.D.; John Gallagher, Ph.D.; Xiaodong Zhang, Ph.D.
Andrew Hsu, Ph.D., Dean, School of Graduate Studies

ABSTRACT
Baine, Nicholas. M.S.Egr., Department of Electrical Engineering, Wright State University, 2012. Integrity Monitoring Techniques for Vision Navigation Systems.

Contents

1 Introduction
  1.1 Mathematical Notation
  1.2 Thesis Layout
2 Background
  2.1 Kalman Filtering
  2.2 Extended Kalman Filter
  2.3 Attitude/Orientation Representation
    2.3.1 Euler Angles
    2.3.2 Direction Cosine Matrix C_b^n
    2.3.3 C_b^n matrix with small angles
  2.4 Global Positioning System
  2.5 GPS Integrity Monitoring
    2.5.1 Receiver Autonomous Integrity Monitoring
    2.5.2 Parity Vector
    2.5.3 Least Square Residuals
    2.5.4 Slope Method
  2.6 Vision Based Navigation
    2.6.1 Projection Model
    2.6.2 Measurement Model
  2.7 Current Vision Integrity Monitoring
    2.7.1 Parity Vector and Slope
3 Multiple Errors in Vision Systems
  3.1 Test Statistic
  3.2 Bayes Algorithm for Isolating Corrupt Measurements
4 Results
5 Conclusion
Bibliography

List of Figures

2.1 Nomenclature and the Steps of the Kalman Filter.
2.2 Yaw/Heading rotation following right hand rule.
2.3 Pitch rotation following right hand rule.
2.4 Roll rotation following right hand rule.
2.5 Thin lens camera model.
2.6 Pinhole Camera Model.
2.7 Camera projection model.
2.8 Target to image transformation geometry.
2.9 Chart showing the areas of Vision Integrity Monitoring.
4.1 Probability of getting a good subset of measurements, assuming a total of 100 measurements and 5 measurements in a given subset.
4.2 Average sum of the error probability vector P vs. test iteration, given a 50-run Monte Carlo experiment with 5 of 100 measurements bad.
4.3 Average sum of the error probability vector P vs. test iteration, given a 50-run Monte Carlo experiment with 10 of 100 measurements bad.
4.4 Average sum of the error probability vector P vs. test iteration, given a 50-run Monte Carlo experiment with 15 of 100 measurements bad.
4.5 Steady-state average sum of the error probability vector P vs. the number of bad measurements out of 100 total measurements.
4.6 Steady-state average sum of the error probability vector P vs. the probability of getting a good subset of data with 100 total measurements and a varying number of bad measurements.

Acknowledgment

I would like to take this opportunity to extend my thanks to Jacob Campbell and the rest of the Sensors Directorate of the Air Force Research Laboratory who have helped me in my research. I would also like to thank DAGSI and AFRL for funding my research, and my advisor Kuldip Rattan and my committee members, John Gallagher and Xiaodong Zhang, for their guidance and support.

Introduction

Modern aerospace operations require accurate navigation systems. These operations include the basic navigation needs of civilian aircraft for approach, departure, and en route navigation. Additional military operations requiring navigation include precision bombing, aerial refueling, formation flying, and unmanned operations. The creation and development of GPS has brought remarkable levels of precision and accuracy to both civilian and military navigation users. With the installation of additional Global Navigation Satellite Systems (GNSS), the performance level will likely continue to improve.
The accuracy that can be achieved by this or any other system is based on the ability of the system to estimate the true position given the current operating conditions and the quality of the sensor measurements. Regardless of the system, modeling errors, environmental factors, and equipment limitations prevent a perfect estimate of the true position.

An Inertial Navigation System (INS) uses specific-force and angular-rate measurements to estimate the position, velocity, and orientation of a vehicle. The measurements for an INS are produced by an Inertial Measurement Unit (IMU), which consists of accelerometers and gyroscopes. This type of navigation has the benefit of being independent of external signals, making it impossible to jam or spoof. Consequently, it is a very useful system for military applications and for critical civilian systems. Unfortunately, this system is also very dependent on the quality of the sensors and is influenced by sensor misalignments, drifts, and biases resulting from limitations in the materials and design of the sensors. In addition, the specific forces measured by the accelerometers are dependent on gravity; errors in the gravity model used to convert the specific forces into accelerations also result in system errors.

An INS works by integrating measurements of acceleration over time to obtain position. This integration process causes errors in the measurements to accumulate and grow quadratically over time, resulting in a system accuracy that degrades with time. Because of this limitation, a stand-alone INS is used primarily for short-term navigation. For long-term navigation, an INS is coupled with another navigation system such as GPS. By fusing the two systems together, the INS is able to operate with its error bounded, and the combined result is better than either system can produce operating on its own. This still leaves the navigation system vulnerable because of its dependence upon external signals to operate the GPS.
These external RF signals can be either jammed or spoofed, rendering the system inoperative or resulting in false navigation solutions. It is this vulnerability to external signals that has led to research into alternatives to an otherwise excellent navigation system.

One such alternative being explored to remove the dependence upon GNSS is the fusion of an INS and a vision system [5, 6, 8, 12, 13, 24, 25, 31, 33, 38, 44, 42, 43, 41, 46, 45]. Such a system is not reliant upon external signals and is completely passive, producing no RF emissions. Not emitting signals is a benefit for stealth applications and ensures that there will be no problems regarding spectrum allocation if such a system is used on a large scale. Such a system can also be very small, inexpensive, and require little power to operate relative to its GPS counterparts [31].

The cited research shows the feasibility of a vision-aided navigation system in many different configurations. These proposed systems are shown to provide reasonable accuracy on the level needed for aviation applications. The accuracy available is dependent on the camera system used, the INS measurement errors mentioned above, the feature-tracking algorithm, the vehicle trajectory, and the image scene [5].

With vision-aided inertial navigation systems quickly developing into viable alternatives to GPS, some research has been done in developing integrity monitoring for vision-based navigation systems [16, 18, 17]. In the cited research, an important first step was taken: the methods used in GPS integrity monitoring were redefined and used for vision systems. However, these methods assume that there is only one bad measurement. This was a good assumption for GPS, but in vision systems the probability of multiple errors goes up as the number of features tracked goes up.
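To make that scaling concrete: if each tracked feature were corrupted independently with some small probability, the chance of two or more simultaneous faults grows quickly with the number of features. The short sketch below illustrates this with an assumed, hypothetical per-feature fault probability; it is an illustration, not a model or result from this thesis.

```python
def prob_multiple_faults(n_features, p_fault):
    """Probability that two or more of n_features measurements are
    corrupted, assuming each is corrupted independently with p_fault."""
    p_none = (1.0 - p_fault) ** n_features                              # no faults
    p_one = n_features * p_fault * (1.0 - p_fault) ** (n_features - 1)  # exactly one fault
    return 1.0 - p_none - p_one
```

With an assumed 1% per-feature fault rate, the probability of multiple simultaneous faults is negligible for a handful of features but becomes substantial once a hundred features are tracked, which is why a single-fault assumption that is valid for GPS breaks down for feature-rich vision systems.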
This thesis focuses on understanding the existing integrity monitoring methods and using them for the detection of multiple errors in vision-based navigation systems.

1.1 Mathematical Notation

For clarity, the following is a description of the mathematical notation used in this thesis:

Scalars: Scalars are represented by italic characters, such as a or b.

Vectors: Vectors are represented by bold lower-case letters, such as a or b, and are usually in column form.

Matrices: Matrices are represented by bold upper-case letters, such as A or B, and the scalar values of a matrix can be referenced as A_ij, the element in the i-th row and j-th column.

Transpose: A vector or matrix transpose is denoted by a superscript T, as in aᵀ.

Estimated Variables: Estimates of random variables are denoted by adding a "hat" symbol, such as â.

Calculated and Measured Variables: Variables that contain errors due to being measured are distinguished by a tilde symbol, as in ã.

Reference Frame: Navigation vectors are defined with respect to reference frames; a superscript letter is used to denote the current frame of reference, as in x^a.

Direction Cosine Matrix: Direction cosine matrices are matrices that rotate vectors from one frame of reference to another, as in C_a^b which, when premultiplied to a vector, converts the vector from the a-frame to the b-frame.

Identity Matrix: Identity matrices are denoted by a bold capital letter, as in I.

Relative Position or Motion: When a vector represents relative position or motion, subscripts are combined, as in p_ab^c, the position of the a-frame with respect to the b-frame expressed in c-frame coordinates.

1.2 Thesis Layout

In section 2, the relevant background needed to understand this work is introduced. In section 2.5, an overview of the current methods used in GPS integrity monitoring is given. Section 2.7 reviews the current literature on integrity monitoring of vision navigation systems.
In section 3, a method is introduced to detect and isolate multiple errors, and the results of this method are shown in section 4. This thesis is concluded with a discussion in section 5.

Background

2.1 Kalman Filtering

A Kalman filter is an effective means of combining various types of sensor information and system knowledge, in the form of a model, to generate an optimal estimate of the states of the system. The name "filter" is often used when something is being purified (rid of unwanted contaminants). In essence, the Kalman filter is a filter for measurements, filtering out unwanted uncertainty (measurement noise and model noise) [9].

The Kalman filter has two distinct steps that are repeated at each discrete time instance. The first step is the prediction/extrapolation, which utilizes a model of the system to predict the states of the system after one time interval. The second step is the observation/update, during which measurements are used in combination with the prediction to estimate the states of the system at that time interval.

A linear system is often modeled using state-space representation, as seen in equations (2.1) and (2.2) for a continuous-time system, and equations (2.3) and (2.4) for a discrete-time system.

Continuous Time System:

ẋ(t) = Fx(t) + w(t)   (2.1)
y(t) = Cx(t) + v(t)   (2.2)

Discrete Time System:

x(k + 1) = Fx(k) + w(k)   (2.3)
z(k) = Hx(k) + v(k)   (2.4)

System Model and Measurement Noise / Uncertainty:

E{w(k)} = 0   (2.5)
E{w(k)wᵀ(k)} = Q(k)   (2.6)
E{v(k)} = 0   (2.7)
E{v(k)vᵀ(k)} = R(k)   (2.8)
E{w(k)vᵀ(k)} = 0   (2.9)

• Prediction / Extrapolation: This step of the Kalman filter extrapolates the state estimate and error covariance matrix using equations (2.10) and (2.11). Equation (2.10) predicts the states of the system by using a rough state-space model F. The error covariance matrix is then updated with equation (2.11) using the model F and Q, which describes the uncertainty of the model in terms of variance.
Figure 2.1: Nomenclature and the Steps of the Kalman Filter.

State Estimate Extrapolation:

x̂_{k+1}(−) = F x̂_k(+)   (2.10)

Error Covariance Extrapolation:

P_{k+1}(−) = F P_k(+) Fᵀ + Q_k   (2.11)

• Observation / Update: This step of the Kalman filter uses equations (2.12), (2.13), and (2.14) to update the state estimate x̂_{k+1} with a measurement/observation z_{k+1}, along with the error covariance matrix P_{k+1} and the Kalman gain matrix K. The combining of the observation and the prediction is done using a special gain, known as the Kalman gain, K. The Kalman gain is based on the knowledge of the uncertainties in the model and in the measurements.

State Estimate Update:

x̂_{k+1}(+) = x̂_{k+1}(−) + K_{k+1}[z_{k+1} − H_{k+1} x̂_{k+1}(−)]   (2.12)

Error Covariance Update:

P_{k+1}(+) = [I − K_{k+1} H_{k+1}] P_{k+1}(−)   (2.13)

Kalman Gain:

K_k = P_k(−) H_kᵀ [H_k P_k(−) H_kᵀ + R_k]⁻¹   (2.14)

2.2 Extended Kalman Filter

The original Kalman filter is an excellent method for state estimation of linear systems. Unfortunately, not all systems are linear. For those cases, there is the extended Kalman filter. The extended Kalman filter (EKF) works by using the non-linear model to predict the states and measurements, and linearizes the model about the state estimate for computing the covariance P.
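Before moving to the nonlinear case, the linear predict/update recursion of equations (2.10)-(2.14) can be sketched in a few lines. This is a minimal illustration for a generic linear system, not an implementation from this thesis; the function names and the tiny one-state example are hypothetical.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Extrapolate state and covariance, eqs. (2.10)-(2.11)."""
    x_pred = F @ x                      # x(-) = F x(+)
    P_pred = F @ P @ F.T + Q            # P(-) = F P(+) F^T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Condition the prediction on measurement z, eqs. (2.12)-(2.14)."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain, eq. (2.14)
    x = x_pred + K @ (z - H @ x_pred)        # state update, eq. (2.12)
    P = (np.eye(len(x)) - K @ H) @ P_pred    # covariance update, eq. (2.13)
    return x, P
```

Running one predict/update cycle on a one-state system shows the expected behavior: the update pulls the estimate toward the measurement and shrinks the error covariance.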
The discrete extended Kalman filter is implemented using the following equations [9]:

• Computing the predicted state estimate:

x̂_k(−) = φ_{k−1}(x̂_{k−1}(+))   (2.15)

• Computing the predicted measurement:

ẑ_k = h_k(x̂_k(−))   (2.16)

• Linear approximation:

Φ_{k−1}^[1] ≈ ∂φ_{k−1}/∂x evaluated at x = x̂_{k−1}(+)   (2.17)

• Conditioning the predicted estimate on the measurement:

x̂_k(+) = x̂_k(−) + K̄_k(z_k − ẑ_k)   (2.18)

• Linear approximation:

H_k^[1] ≈ ∂h_k/∂x evaluated at x = x̂_k(−)   (2.19)

• Computing the a priori covariance matrix:

P_k(−) = Φ_{k−1}^[1] P_{k−1}(+) Φ_{k−1}^[1]ᵀ + Q_{k−1}   (2.20)

• Computing the Kalman gain:

K̄_k = P_k(−) H_k^[1]ᵀ [H_k^[1] P_k(−) H_k^[1]ᵀ + R_k]⁻¹   (2.21)

• Computing the a posteriori covariance matrix:

P_k(+) = [I − K̄_k H_k^[1]] P_k(−)   (2.22)

2.3 Attitude/Orientation Representation

There are two different ways that a vehicle's attitude is represented in this thesis: Euler angles and the rotation matrix (C_b^n). On the frame of the vehicle, a strapdown IMU is used. Its data are in terms of the body frame and need to be transformed into the navigation frame, which is exactly what the matrix C_b^n does: it rotates vectors from the body frame (b) to the navigation frame (n). This rotation is done with three angles called Euler angles (roll φ, pitch θ, and yaw ψ), and the rotations are performed relative to the body frame.

Figure 2.2: Yaw/Heading rotation following right hand rule.

2.3.1 Euler Angles

• Yaw/Heading ψ: The rotation about the z-axis is known as the heading or yaw of a vehicle. The z-axis points down through the bottom of the vehicle, and rotating around it changes the direction of travel in the navigation frame. Heading and yaw are usually the same unless the vehicle is in a sideslip, which can be caused by wind or by the dynamics of the airframe [32].
        ⎡  cos ψ   sin ψ   0 ⎤
R(ψ) =  ⎢ −sin ψ   cos ψ   0 ⎥ = C_n^1   (2.23)
        ⎣    0       0     1 ⎦

• Pitch θ: The rotation about the y-axis of the body-attached frame (frame 1 in this case) is referred to as the pitch angle in avionics. This is because the y-axis typically points down the wings of an aircraft. A rotation about the y-axis "pitches" the nose of the vehicle up or down relative to the horizon [32].

Figure 2.3: Pitch rotation following right hand rule.

        ⎡ cos θ   0   −sin θ ⎤
R(θ) =  ⎢   0     1      0   ⎥ = C_1^2   (2.24)
        ⎣ sin θ   0    cos θ ⎦

• Roll φ: The rotation about the x-axis of the body-attached frame (frame 2 in this case) is known as the roll when referring to avionics. This is because the x-axis goes from an origin at the center of mass of the vehicle through the front of the vehicle in the direction the vehicle usually travels. For aircraft, this rotation is about the center-line of the airframe. Since this is a right-hand coordinate system, a clockwise rotation is a positive roll angle [32].

Figure 2.4: Roll rotation following right hand rule.

        ⎡ 1     0       0    ⎤
R(φ) =  ⎢ 0   cos φ   sin φ  ⎥ = C_2^b   (2.25)
        ⎣ 0  −sin φ   cos φ  ⎦

2.3.2 Direction Cosine Matrix C_b^n

In this representation, the rotations are maintained in matrix form. The matrix can be calculated from the Euler angles and vice versa. To get a direction cosine matrix, the Euler angle rotations need to be put in a sequence, which creates a rotation matrix relating the navigation frame and the body frame. One such sequence, and the one used in this thesis, can be seen in equation (2.26) [32].
                              ⎡ c11  c12  c13 ⎤
C_b^n = [R(φ) R(θ) R(ψ)]ᵀ  =  ⎢ c21  c22  c23 ⎥   (2.26)
                              ⎣ c31  c32  c33 ⎦

The direction cosine matrix C_b^n is a 3×3 orthonormal matrix; therefore the matrix has the following properties:

C_b^n (C_b^n)ᵀ = I_3×3   (2.27)

det(C_b^n) = 1   (2.28)

Equation (2.27) represents a very useful property of orthogonal matrices: the transpose of the direction cosine matrix is equivalent to its inverse. Equation (2.28) represents the normality of the direction cosine matrix, which ensures that when it is multiplied by a vector, the result is only rotated and not scaled [37].

        ⎡ cos θ cos ψ    sin φ sin θ cos ψ − cos φ sin ψ    cos φ sin θ cos ψ + sin φ sin ψ ⎤
C_b^n = ⎢ cos θ sin ψ    sin φ sin θ sin ψ + cos φ cos ψ    cos φ sin θ sin ψ − sin φ cos ψ ⎥   (2.29)
        ⎣   −sin θ                sin φ cos θ                         cos φ cos θ           ⎦

2.3.3 C_b^n matrix with small angles

For small angles, the first-order Taylor series expansion can be used for the trigonometric functions in equation (2.29) (i.e., cos φ ≈ 1 and sin φ ≈ φ). Applying this to equation (2.29) yields the identity plus a skew-symmetric matrix, as seen in equation (2.30) [37]:

        ⎡  1   −ψ    θ ⎤
C_b^n ≈ ⎢  ψ    1   −φ ⎥ = I_3×3 + [α×],   α = [φ θ ψ]ᵀ   (2.30)
        ⎣ −θ    φ    1 ⎦

where [α×] denotes the skew-symmetric cross-product matrix of the small-angle vector α.

2.4 Global Positioning System

The Global Positioning System is a space-based navigation system comprised of satellites in medium-earth orbit. It provides accurate three-dimensional position and timing information globally. The GPS system has excellent long-term accuracy, but low short-term precision due to high-frequency noise errors, which affect short-term performance. This is one reason why it is often coupled with an INS, which has excellent short-term accuracy but suffers from drift caused by sensor errors. Coupling the systems provides a better solution than either could produce separately and can result in a reduction in performance requirements for the independent systems. GPS utilizes time-of-arrival measurements made from signals sent by satellites.
Since users only receive signals, they operate passively, allowing for an unlimited number of simultaneous users [23]. The signals are received by the user from the satellites, which are at known locations. The time difference between when the signal was sent and when it was received is multiplied by the speed of light to determine the range to a given satellite. The time is known precisely on the satellite by use of redundant atomic clocks. However, the receiver is not equipped with an accurate clock, so the receive time is not known precisely. Therefore, time is one of the variables solved for in addition to position. Since the time is not precisely known, the measurement made by the receiver is called a pseudorange; it is not the true range, because the receiver time is unknown. Since this research is not focused on GPS, the actual signal is not described here; for details on the GPS signal, see [11, 14, 23, 34].

There are four different measurements that can be made from the satellite signal: pseudorange, Doppler, carrier phase, and carrier-to-noise density. These measurements are raw and should not be confused with the computed outputs of position and velocity generated by the receiver. Access to the raw measurements from a receiver is required for most GPS-aided INS methods. The most commonly used measurement is the pseudorange, and it is often the only measurement used.

The pseudorange is the true range between a user and a satellite plus a bias caused by the uncertainty in time, along with other error sources. The main source of the bias is the receiver clock, but the other contributors are the satellite clock, atmospheric effects, and multipath interference.
The pseudorange equation is given by

ρ = r + c(δt_r − δt_s) + c δt_tropo + c δt_iono + c δt_mp + v   (2.31)

where ρ is the GPS pseudorange (meters), r is the true range from the user to the satellite (meters), c is the speed of light (meters/second), δt_r is the receiver clock error (seconds), δt_s is the satellite clock error (seconds), δt_tropo is the error due to tropospheric delay (seconds), δt_iono is the error due to ionospheric delay (seconds), δt_mp is the error due to multipath interference (seconds), and v is the error due to receiver noise (meters).

The range r is the true line-of-sight (LOS) range between the satellite and the receiver. As the signal travels through the atmosphere, its path is often distorted, resulting in the errors from the ionosphere, δt_iono, and the troposphere, δt_tropo. Atmospheric modeling and forecasting can be used to mitigate the impact of δt_iono and δt_tropo. When the signal is reflected off objects and the ground, multiple copies of the same signal are received. Receiver and antenna design are used to reduce the impact of multipath and to block all signals that are not the true LOS signal. With these errors reduced, the dominant remaining term is from the receiver clock. This error can be modeled as a clock bias term and solved for when computing the position solution. The other remaining errors are assumed to be noise-like.

Since the range r is a non-linear measurement of position, receivers calculate the position by linearizing about an initial approximate guess of the position and then solving iteratively. A full description of this method of solving for position is given in [23]. The pseudorange can be expressed in simplified form as seen in equation (2.32).
ρ = √((x_mi − x_t)² + (y_mi − y_t)² + (z_mi − z_t)²) + b + ε   (2.32)

where (x_mi, y_mi, z_mi) is the location of the i-th satellite (meters), (x_t, y_t, z_t) is the true location of the receiver (meters), b is the receiver clock bias (meters), and ε is the error in the measurement (meters).

If the true position x_t and the bias term are expressed as x_t = x_0 + δx and b = b_0 + δb, the error terms δx and δb represent the corrections to be applied to the initial estimates x_0 and b_0. If ρ_c is a pseudorange with the corrections δx and δb applied, then the linearized equation is created using a first-order Taylor series approximation [4, 23, 27, 28] as

δρ = ρ_c − ρ_0   (2.33)
   = ‖x_mi − x_t‖ − ‖x_mi − x_0‖ + (b − b_0) + ε   (2.34)
   ≈ −((x_mi − x_0)ᵀ / ‖x_mi − x_0‖) δx + δb + ε   (2.35)

Equation (2.35) is written in the matrix form z = Hx + v, with r_i = ‖x_mi − x_0‖, as

     ⎡ δρ_1 ⎤   ⎡ −(x_m1 − x_0)/r_1   −(y_m1 − y_0)/r_1   −(z_m1 − z_0)/r_1   1 ⎤
δρ = ⎢  ⋮   ⎥ = ⎢         ⋮                   ⋮                   ⋮           ⋮ ⎥ ⎡ δx ⎤ + ε   (2.36)
     ⎣ δρ_n ⎦   ⎣ −(x_mn − x_0)/r_n   −(y_mn − y_0)/r_n   −(z_mn − z_0)/r_n   1 ⎦ ⎣ δb ⎦

The solution of equation (2.36) can now be found using linear numerical methods such as least squares. The least-squares solution for the overdetermined system is given by [23]

⎡ δx ⎤ = (HᵀH)⁻¹ Hᵀ δρ   (2.37)
⎣ δb ⎦

where

    ⎡ −(x_m1 − x_0)/r_1   −(y_m1 − y_0)/r_1   −(z_m1 − z_0)/r_1   1 ⎤
H = ⎢         ⋮                   ⋮                   ⋮           ⋮ ⎥   (2.38)
    ⎣ −(x_mn − x_0)/r_n   −(y_mn − y_0)/r_n   −(z_mn − z_0)/r_n   1 ⎦

These equations are solved in a method called iterative least squares. The process is repeated until the correction is below a desired threshold.

2.5 GPS Integrity Monitoring

This section provides an overview of integrity monitoring methods used in GNSS systems. It begins with a discussion of Receiver Autonomous Integrity Monitoring (RAIM).
That is followed by the parity vector and least-squares residual methods for obtaining a test statistic. Lastly, slope is discussed, which is the relationship between the test statistic used and the error limits for the navigation solution.

2.5.1 Receiver Autonomous Integrity Monitoring

The GPS system has become the system of choice for navigation due to its performance and reliability. Even though it is a fairly reliable system, its use in safety-critical systems requires that the reliability be guaranteed. This resulted in a considerable amount of research and development of integrity monitoring algorithms, the foremost of which is Receiver Autonomous Integrity Monitoring (RAIM) [36]. RAIM is the most useful method developed to date in that it is passive and localized to the GPS receiver, without a large and complicated infrastructure of additional sensors. RAIM algorithms are not standardized among receivers, but they primarily rely on least-squares residuals from a particular instant of data or a similar method using a parity vector [28]. These methods have their limitations in the availability of being able to detect and exclude bad measurements, and they make the assumption that there is a single measurement error, which is a valid assumption for GPS [2, 4, 27].

2.5.2 Parity Vector

The parity vector method for integrity monitoring was first presented by Potter in [30] for monitoring inertial navigation systems. It was then reintroduced as a method of integrity monitoring for GPS by Sturza [36]. The following is a derivation that mirrors the one presented in mathematical detail by Sturza. The parity vector method is dependent upon the presence of redundant measurements. In other words, the number of measurements m must exceed the number of states n being estimated, such that m ≥ n + 1.
A linearized measurement model is given by:

z = Hx + w + b   (2.39)

where z is the (m × 1) measurement vector that results from the product of the (m × n) measurement matrix H and the (n × 1) state vector x, plus the (m × 1) vector of measurement noise w with diagonal covariance σ²I and the (m × 1) bias vector b that represents faults in the measurements. Assuming the existence of redundant measurements, H is not square. Therefore the Moore-Penrose pseudo-inverse of H is given by:

H̄ = (HᵀH)⁻¹ Hᵀ   (2.40)

Using the pseudo-inverse to solve for the least-squares estimate of x yields:

x̂ = H̄z = (HᵀH)⁻¹ Hᵀ (Hx + w + b) = x + H̄(w + b)   (2.41)

The measurement matrix H is assumed to consist of independent column vectors, so that it can be successfully decomposed using QR decomposition, where Q⁻¹ = Qᵀ and R is an upper-triangular matrix:

Qᵀz = Rx   (2.42)

The resulting Q has dimensions m × m, and R has dimensions m × n with the last m − n rows containing only zeros. The Qᵀ and R matrices can be subdivided into Q_xᵀ and U, representing the first n rows of Qᵀ and R respectively, and Q_pᵀ, representing the last m − n rows of Qᵀ:

⎡ Q_xᵀ ⎤     ⎡ U ⎤
⎢      ⎥ z = ⎢   ⎥ x   (2.43)
⎣ Q_pᵀ ⎦     ⎣ 0 ⎦

The matrix Q_pᵀ is defined as the parity matrix P, with rows that are orthogonal to the columns of H and that span the parity space of H [2, 39]. This allows measurements with unobservable biases to be transformed into the parity space, where they can be observed in the form of the parity vector:

p = Pz = P(w + b)   (2.44)

The resulting elements of the parity vector are normally distributed with mean µ = Pb and covariance σ²I. The parity vector does make the assumption that p and x are independent and that the noise w is zero-mean, allowing p to be zero-mean when no faults are present.

In addition to the parity vector p, it is possible to map the parity vector back to the measurement space and represent the information in the form of a fault vector f. A transformation matrix T can be obtained by augmenting the pseudo-inverse H̄ with the parity
A transformation matrix T can be obtained by augmenting the pseudo-inverse \bar{H} with the parity matrix P. The matrix T maps the measurement space into the n-dimensional state space and the (m − n)-dimensional parity space. Therefore, T^{-1} is the transformation from the state space and parity space back to the measurement space, allowing for the formation of the fault vector f:

    f = T^{-1} \begin{bmatrix} 0 \\ p \end{bmatrix} = \begin{bmatrix} H & P^T \end{bmatrix} \begin{bmatrix} 0 \\ p \end{bmatrix} = P^T P z    (2.45)

where the resulting elements of f are normally distributed with mean μ = P^T P b and covariance Σ = σ² P^T P. The parity vector p and the fault vector f exist in different spaces, but their inner products yield the same result:

    p^T p = (Pz)^T (Pz) = z^T P^T P z    (2.46)

and

    f^T f = (P^T P z)^T (P^T P z) = z^T P^T P P^T P z = z^T P^T I P z = z^T P^T P z    (2.47)

This inner product can be used as a test statistic D for fault detection. The decision variable D has a chi-square distribution based upon the distribution of the elements of the parity and fault vectors. In the event that there is a fault, the distribution of D becomes a non-central chi-square distribution, allowing a threshold test to be used to indicate whether or not a fault has occurred:

    H_0 : D < \lambda
    H_1 : D > \lambda

This decision variable is subjected to a dual hypothesis test where H_0 represents no fault and H_1 indicates a fault. This is done by comparing D with a threshold λ, which is based upon a desired probability of false alarm p_{fa}, the number of redundant measurements m − n, and the measurement noise covariance σ²:

    \lambda = \sqrt{2}\,\sigma\,\mathrm{erfc}^{-1}\!\left(\frac{p_{fa}}{m-n}\right)    (2.48)

    p_{fa} = (m-n)\,\mathrm{erfc}\!\left(\frac{\lambda}{\sqrt{2}\,\sigma}\right)    (2.49)

2.5.3 Least Square Residuals

Work done by [1, 7, 19, 22, 29] laid the foundation for GPS integrity monitoring using least-squares residuals. The least-squares residual method makes the same assumption as the parity space method in that redundant measurements are available, making the system overdetermined, with the number of measurements m exceeding the number of states n such that m − n ≥ 1.
For GPS, n = 4, since the states being solved for include the three-dimensional position and a clock bias term. In the original work that developed the least-squares residual method, the measurement equation was given in terms of the pseudorange:

    \rho_i = d_i - \begin{bmatrix} e_i^T & 1 \end{bmatrix} x - \epsilon_i    (2.50)

where ρ_i is the pseudorange of the ith satellite, d_i is the distance between the user and the ith satellite, e_i is the unit vector from the user to the ith satellite, x is the state vector including position and the clock bias term, and ε_i is normally distributed measurement noise with mean μ_i and variance σ_i². The vector form of the measurement equation is [29]:

    z = d - \rho = \begin{bmatrix} e_1^T & 1 \\ \vdots & \vdots \\ e_m^T & 1 \end{bmatrix} x + \epsilon = Hx + \epsilon    (2.51)

The least-squares estimate is expressed as:

    \hat{x} = (H^T H)^{-1} H^T z = \bar{H} z    (2.52)

with the estimated measurement given as:

    \hat{z} = H\hat{x} = H\bar{H}z    (2.53)

The difference between the actual measurements and the predicted measurements yields the vector of residual errors:

    \hat{\epsilon} = z - \hat{z} = (I - H\bar{H})z    (2.54)

Substituting equation (2.51) for z yields the following:

    \hat{\epsilon} = (I - H\bar{H})(Hx + \epsilon)    (2.55)
                   = Hx - H\bar{H}Hx + \epsilon - H\bar{H}\epsilon    (2.56)
                   = H(I - \bar{H}H)x + (I - H\bar{H})\epsilon    (2.57)
                   = H\left(I - (H^T H)^{-1} H^T H\right)x + (I - H\bar{H})\epsilon    (2.58)
                   = H(I - I)x + (I - H\bar{H})\epsilon    (2.59)
                   = (I - H\bar{H})\epsilon    (2.60)

The sum of squared errors (SSE) is defined as the inner product \hat{\epsilon}^T \hat{\epsilon} and makes a useful test statistic. In the case where the noise has zero mean (no fault bias), the SSE has a chi-square distribution, just like the decision variable from the parity vector method. With m − 4 degrees of freedom, the test statistic used is:

    r = \sqrt{\frac{\hat{\epsilon}^T \hat{\epsilon}}{m-4}} = \sqrt{\frac{SSE}{m-4}}    (2.61)

In the event that the measurements are affected by a non-zero mean in the noise, the test statistic will come from a non-central chi-square distribution with a non-centrality parameter given as [2]:

    \lambda = \left(\frac{\rho_{bias}}{\sigma}\right)^2    (2.62)

The non-central chi-square distribution cannot be expressed in closed form, but it can be approximated using numerical integration [2, 29].
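The residual test in equations (2.52)–(2.61) can be sketched numerically. The geometry H, noise level, and fault size below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# Numerical sketch of the least-squares residual test in equations
# (2.52)-(2.61). The geometry H, noise level, and fault size are
# illustrative assumptions, not values from the thesis.
rng = np.random.default_rng(1)
m, n, sigma = 8, 4, 0.5
H = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
z_clean = H @ x_true + sigma * rng.standard_normal(m)
z_fault = z_clean.copy()
z_fault[5] += 30.0                     # bias on one pseudorange

def residual_stat(z):
    """r = sqrt(SSE / (m - n)) from the residual eps = (I - H H^+) z."""
    eps = z - H @ (np.linalg.pinv(H) @ z)
    return np.sqrt((eps @ eps) / (m - n))

# The statistic is compared to a threshold chosen offline for the
# desired false-alarm probability; the fault inflates it sharply.
print(residual_stat(z_clean), residual_stat(z_fault))
```

The residual vector lies in the left null space of H, so the state itself never contaminates the statistic; only noise and any fault bias remain.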
The integrity checking process is performed by comparing the test statistic r to a threshold λ. The threshold is generated through Monte Carlo simulation and is selected based on the desired false alarm and missed detection requirements.

2.5.4 Slope Method

The slope method is used in conjunction with either the parity vector or least-squares method and is useful in relating the test statistic to the error in position caused by a biased measurement. The "slope" is the ratio between the horizontal position error and the test statistic. For GPS, the slope is a linear relationship that approximates the effect of a growing pseudorange bias on the horizontal position error. For any given satellite, the slope is a function between the horizontal position space and the parity space [47] and is given by

    Slope_i = \frac{\sqrt{\bar{H}_{1,i}^2 + \bar{H}_{2,i}^2}}{\sqrt{S_{i,i}}}    (2.63)

where S = P^T P. This method assumes that there is no noise and considers the effect of the bias only; therefore, the parity vector is p = P(b + 0). The resulting horizontal position error for the ith measurement is [47]:

    H_{\epsilon_i} = Slope_i \, \|p\|    (2.64)

The slope of each measurement will be different based upon the measurement geometry. A worst-case upper bound can be estimated using the measurement with the largest slope and is given as [47]:

    |H_{bias}| = \max_{i=1:m} \left[ Slope_i \right] \|p\|    (2.65)

Since the process is not truly deterministic and there is measurement noise in the system, this estimate does not reflect the true horizontal position error. However, the measurement noise is assumed to be zero-mean additive white Gaussian noise (AWGN), making the deterministic method a reasonable way to approximate the expected value of the parity vector [36, 39]. The slope method allows for the projection of position error onto the parity space, allowing for easy visualization of thresholds relative to protection levels.
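The slope computation in equations (2.63)–(2.65) can be sketched as follows, using an illustrative geometry H and taking the first two state elements as the horizontal position states (an assumption for this sketch, not a value from the thesis):

```python
import numpy as np

# Sketch of the slope computation in equation (2.63). The geometry
# H is illustrative; rows 0 and 1 of the pseudo-inverse are taken
# as the horizontal position states.
rng = np.random.default_rng(2)
m, n = 7, 4
H = rng.standard_normal((m, n))

H_pinv = np.linalg.pinv(H)                 # H-bar = (H^T H)^{-1} H^T
Q, _ = np.linalg.qr(H, mode="complete")
P = Q.T[n:, :]                             # parity matrix
S = P.T @ P

# Slope for each measurement i, and the worst-case (largest) slope
# used for the upper bound in equation (2.65).
slopes = np.sqrt(H_pinv[0] ** 2 + H_pinv[1] ** 2) / np.sqrt(np.diag(S))
worst = slopes.max()
print(slopes, worst)
```

Note that S = P^T P equals the residual projector I − H\bar{H}, which is why the same S appears in both the parity and residual formulations.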
2.6 Vision Based Navigation

The objective of the proposed research is to further investigate integrity monitoring techniques in vision-aided/based navigation systems. The emphasis is therefore on the measurements made by vision systems in the form of pixel coordinates, in an image, of a known feature. The following includes essential background relating a real-world target/feature and the measurement from the vision system. The focus of this research is on the measurement provided by the imaging system and not the process that created it. Therefore, the following assumptions are made:

• A feature tracker generates measurements in the form of pixel coordinates of known features at a suitable rate.

• The vision system is calibrated in a manner that allows the relationship between pixel coordinates and position in the camera frame to be known, with any lens distortion already corrected for.

• The relationship between the camera frame and the vehicle body frame is known.

The use of GPS integrated with an INS is well established. The two systems complement each other well, but there are environments and conditions that can result in GPS signals being unavailable. This led to research into the use of optical systems to aid in navigation [40, 44, 43, 25, 5, 13, 38, 46]. Vision-based navigation can be done without the use of an INS, but the vision system's performance depends on the quality of measurements that can be made given the environment and the availability of features for tracking. The vision system can be used in a manner similar to GPS when tightly coupled with an INS to bound the errors that grow with time [44]. Together, an INS and a vision system have the potential to reliably provide accuracy on the level of GPS. Optical navigation can be used in many environments, including those that are unknown [10].
When the environment is unknown, without features at known locations, a process called Simultaneous Localization and Mapping (SLAM) is used to estimate the locations of trackable features while simultaneously solving for the navigation states of the vehicle. The research proposed here focuses on navigation in environments with known features at known locations. These will be tracked by a vision system that is passive, taking in a three-dimensional (3-D) scene and projecting it onto a two-dimensional (2-D) image plane.

2.6.1 Projection Model

Figure 2.5: Thin lens camera model.

Figure 2.6: Pinhole camera model.

The optical properties of a camera govern the relationship between a scene and its projection onto an image. Optics seldom exhibit the ideal properties that allow for a simple model. However, many calibration and correction techniques exist to reduce and correct for non-linear optical effects [8, 15, 20, 26, 44, 48]. These corrections allow the projection to be modeled in terms of an ideal thin lens. For an ideal thin lens, the projection onto the image plane is a function of the focal length and the distance to the lens, as shown in Figure 2.5. The thin lens directs the parallel light rays toward the focus, resulting in an inverted image beyond the focus. This is expressed as the fundamental thin lens equation:

    \frac{1}{Z} + \frac{1}{z} = \frac{1}{f}    (2.66)

where Z is the distance from an object in the scene to the lens, z is the distance from the lens to the image plane, and f is the focal length of the lens [21]. If the aperture of the lens is decreased, it can be modeled as a pinhole camera. Given the pinhole camera model depicted in Figure 2.6, all light must pass through the aperture, which projects an image on a plane located at the focal length f from the aperture [21].

Figure 2.7: Camera projection model.
If the image plane is placed in front of the optical center, the model is further simplified, as seen in Figure 2.7. This results in a non-inverted image. Given a point source location s^c relative to the optical center, the resulting location on the image plane is given by

    s^{proj} = \left(\frac{f}{s_z^c}\right) s^c    (2.67)

with s_z^c as the distance from the optical center of the camera in the z_c direction [44]. This camera projection is then converted into a digital image. The image plane coordinates need to be mapped to a coordinate system based on pixels. Assuming a rectangular (M × N) pixel grid with a height H and a width W, the transformation from projection coordinates to pixel coordinates is given by

    s^{pix} = \begin{bmatrix} \frac{M}{H} & 0 \\ 0 & \frac{N}{W} \end{bmatrix} s^{proj} + \begin{bmatrix} \frac{M+1}{2} \\ \frac{N+1}{2} \end{bmatrix}    (2.68)

Combining equations (2.67) and (2.68) yields the transformation from the camera frame to the pixel frame:

    \begin{bmatrix} s^{pix} \\ 1 \end{bmatrix} = \frac{1}{s_z^c}\, T_c^{pix}\, s^c    (2.69)

    T_c^{pix} = \begin{bmatrix} \frac{fM}{H} & 0 & \frac{M+1}{2} \\ 0 & \frac{fN}{W} & \frac{N+1}{2} \\ 0 & 0 & 1 \end{bmatrix}    (2.70)

where T_c^{pix} is the homogeneous transformation matrix from the camera frame to the pixel frame [44]. To convert pixel coordinates back to camera coordinates, the inverse transformation can be used:

    T_{pix}^{c} = (T_c^{pix})^{-1}    (2.71)

    T_{pix}^{c} = \begin{bmatrix} \frac{H}{fM} & 0 & -\frac{H(M+1)}{2fM} \\ 0 & \frac{W}{fN} & -\frac{W(N+1)}{2fN} \\ 0 & 0 & 1 \end{bmatrix}    (2.72)

Figure 2.8: Target to image transformation geometry.

The coordinates are still in terms of a camera model and need to be related to the navigation frame, as seen in Figure 2.8. The relationship between the navigation frame and the camera frame is given by

    s^n = t^n - p^n    (2.73)

    s^c = C_b^c C_n^b s^n    (2.74)

where s^n and s^c are the line-of-sight vectors from the camera to the target in the navigation and camera frames respectively, p^n is the position of the camera in the navigation frame, and t^n is the location of the target in the navigation frame [44].
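The camera-to-pixel projection in equations (2.67)–(2.70) can be sketched numerically. The focal length, grid size, sensor dimensions, and target point below are illustrative values, not parameters from the thesis:

```python
import numpy as np

# Sketch of the camera-to-pixel projection, equations (2.67)-(2.70).
# Focal length, pixel grid, sensor size, and target are illustrative.
f = 0.008                 # focal length [m]
M, N = 480, 640           # pixel grid (rows x columns)
Hs, Ws = 0.0048, 0.0064   # sensor height and width [m]

# Homogeneous camera-to-pixel transformation T_c^pix.
T_pix_c = np.array([
    [f * M / Hs, 0.0,        (M + 1) / 2],
    [0.0,        f * N / Ws, (N + 1) / 2],
    [0.0,        0.0,        1.0],
])

# Line-of-sight vector s^c from camera to target (camera frame),
# with the third component the range along the boresight.
s_c = np.array([0.5, -0.2, 10.0])

# Pixel coordinates: (1 / s_z^c) * T_c^pix * s^c; the third element
# of the result is the homogeneous 1.
s_pix = (T_pix_c @ s_c) / s_c[2]
print(s_pix[:2])
```

A point on the boresight (s^c = [0, 0, Z]) maps to the image center ((M+1)/2, (N+1)/2), which is a quick sanity check on the transformation.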
2.6.2 Measurement Model

Before measurements can be used or analyzed, it is necessary to have a linearized measurement model. This research makes use of the model created by Veth [43]. For this model, a minimal error state vector for a vision-aided inertial system is used, given by

    x = \begin{bmatrix} p^n \\ \psi \end{bmatrix}    (2.75)

where x is the error state vector, p^n is the 3-dimensional error vector in the position of the platform, and ψ contains the tilt error states [44, 43]. The measurement model for the ith image feature is given by

    z_i = s_i^{pix} = T_c^{pix} s_i^c    (2.76)

where z_i is the measurement vector from the ith feature, T_c^{pix} is the homogeneous transformation matrix from the camera frame to the pixel frame, and s_i^c is the line-of-sight vector from the camera to the ith feature target. This is a non-linear relationship and is expressed as a non-linear measurement equation h(x):

    h(x) = T_c^{pix} s_i^c    (2.77)

The measurement matrix is found by taking the first-order Taylor series expansion of h(x) [16]:

    H = \left.\frac{\partial h(x)}{\partial x}\right|_{x=\hat{x}} = \begin{bmatrix} \frac{\partial h}{\partial p^n} & \frac{\partial h}{\partial \psi} \end{bmatrix}    (2.78)

    \frac{\partial h}{\partial p^n} = \mu T_c^{pix} \frac{\partial s_i^c}{\partial p^n} = -\mu T_c^{pix} C_b^c C_n^b    (2.79)

    \frac{\partial h}{\partial \psi} = \mu T_c^{pix} \frac{\partial s_i^c}{\partial \psi}    (2.80)

    \frac{\partial s_i^c}{\partial \psi} = C_b^c C_n^b \left[ (t^n - p^n) \times \right]    (2.81)

with μ = 1/s_z^c, where s_z^c = [0\ 0\ 1]\, s_i^c and [(t^n − p^n)×] denotes the skew-symmetric cross-product matrix.

2.7 Current Vision Integrity Monitoring

The area of vision navigation can be subdivided into two categories: methods based on tracking known features and methods based on tracking unknown features. Consequently, the measurements need to be treated differently when performing integrity monitoring. Figure 2.9 shows a breakdown of the areas for integrity monitoring of vision systems. There has been little research done in the area of integrity monitoring for vision navigation systems. The beginning framework was laid out by Larson in [18], [16], and [17]. In these works, GPS integrity monitoring methods using parity vectors and slope were introduced to vision navigation systems.
This work focuses on detecting a single pixel-pair error relating to a known feature, as seen in Figure 2.9. The following is a summary of that work.

Figure 2.9: Chart showing the areas of vision integrity monitoring: known vs. unknown features, single vs. multiple pixel-pair errors, and systemic vs. random errors.

2.7.1 Parity Vector and Slope

This section includes a derivation of the work done in [18], [16], and [17]. The derivation is provided in the same manner as in [16]. In that work, four assumptions were made:

• Tracked features are known and do not need to be estimated.

• An image-based measurement is considered a two-element set, consisting of an (x, y) pair.

• The bias is multidimensional in that it is a magnitude times sinusoidal components of the angle of the error in the x and y directions.

• Noise is assumed to be zero-mean additive white Gaussian noise.

This derivation makes use of the fact that the x and y elements of a pixel pair are measurements linked by a single observation and hold adjacent positions i and j in the measurement vector. The components of the bias vector b are b_i = ‖b‖ sin θ and b_j = ‖b‖ cos θ, where ‖b‖ is the magnitude of the pixel error and θ is the angle of the error in the x–y pixel frame. The slope method described in [2] is a ratio of the squared vector norm of the horizontal position error ‖Δx‖² and the squared vector norm of the parity vector ‖p‖² (or residual vector, if the residual method is used). If using the parity vector method, the resulting relationship is given by

    \frac{\|\Delta x_h\|^2}{\|p\|^2} = \frac{\Delta x^T \Delta x}{p^T p} = \frac{b^T \bar{H}_h^T \bar{H}_h b}{b^T P^T P b}    (2.82)

with \bar{H} = (H^T H)^{-1} H^T, the Moore-Penrose pseudo-inverse of H, and P the parity matrix described in the previous section. The subscript h on Δx_h and \bar{H}_h indicates that only the horizontal position elements of Δx and the corresponding rows of \bar{H} are included.
Equation (2.82) can be simplified using G = \bar{H}_h^T \bar{H}_h and S = P^T P as

    \frac{\|\Delta x\|^2}{\|p\|^2} = \frac{b^T G b}{b^T S b}    (2.83)

Following the assumption that there is only one error and making the bias vector zero for all elements except b_i and b_j, the numerator of the ratio is given by

    b^T G b = b_i^2 G_{ii} + b_i b_j (G_{ij} + G_{ji}) + b_j^2 G_{jj}    (2.84)

Since G is symmetric, i.e. G_{ij} = G_{ji}, substituting the sinusoidal definitions of b_i and b_j yields

    b^T G b = \|b\|^2 \sin^2(\theta) G_{ii} + 2\|b\|^2 \sin(\theta)\cos(\theta) G_{ij} + \|b\|^2 \cos^2(\theta) G_{jj}    (2.85)

Now, using the double-angle identity for sine, sin(2θ) = 2 sin(θ) cos(θ), yields the final form of the numerator:

    \|\Delta x_h\|^2 = \|b\|^2 \left[ \sin^2(\theta) G_{ii} + \sin(2\theta) G_{ij} + \cos^2(\theta) G_{jj} \right]    (2.86)

The denominator is found in a similar manner, taking advantage of the symmetry of S:

    \|p\|^2 = \|b\|^2 \left[ \sin^2(\theta) S_{ii} + \sin(2\theta) S_{ij} + \cos^2(\theta) S_{jj} \right]    (2.87)

Canceling the bias term in the numerator and denominator results in the following expression:

    \frac{\|\Delta x\|}{\|p\|} = \left[ \frac{\sin^2(\theta) G_{ii} + \sin(2\theta) G_{ij} + \cos^2(\theta) G_{jj}}{\sin^2(\theta) S_{ii} + \sin(2\theta) S_{ij} + \cos^2(\theta) S_{jj}} \right]^{1/2}    (2.88)

Using the Pythagorean identity, cos²(θ) = 1 − sin²(θ), the expression can be rewritten in terms of sine only:

    \frac{\|\Delta x\|}{\|p\|} = \left[ \frac{\sin^2(\theta)(G_{ii} - G_{jj}) + \sin(2\theta) G_{ij} + G_{jj}}{\sin^2(\theta)(S_{ii} - S_{jj}) + \sin(2\theta) S_{ij} + S_{jj}} \right]^{1/2}    (2.89)

Larson showed the use of a slope method whereby the decision variable D = p^T p is related to the error in horizontal position as a ratio. This method is useful for estimating the effect of a measurement bias on the horizontal position when setting up a detection threshold. The method is also useful in the event that there are multiple measurement errors, but it becomes less accurate as the number of errors increases. Further analysis can be seen in [16].

Multiple Errors in Vision Systems

The research summarized in section 2.7 ([16], [17], and [18]) converted the GPS integrity monitoring techniques using parity space and slope to vision measurements and provided a framework common to both navigation systems.
However, the assumptions made for GPS are not applicable to vision systems. In GPS systems, it is highly unlikely that there will be more than one bad measurement at a time, because the GPS constellation is closely monitored and robust. In the case of vision measurements, there is a much higher likelihood of having more than one bad measurement. The previous research does not address this possibility. Section 3.1 summarizes the test statistic discussed in the previous section, which is designed to detect a bad measurement. This method is then used in section 3.2 as a test to determine whether a subset of data is good, and an iterative method is used to isolate the bad measurements in the data.

3.1 Test Statistic

In [18], [16], and [17], Larson used the slope method from GPS integrity monitoring, whereby the decision variable D = p^T p is related to the error in horizontal position as a ratio of the squared vector norms of the horizontal position error and the parity vector:

    \frac{\|\Delta x\|^2}{\|p\|^2} = \frac{\Delta x^T \Delta x}{p^T p} = \frac{b^T G b}{b^T S b}    (3.1)

with G = \bar{H}_h^T \bar{H}_h, S = P^T P, and \bar{H} = (H^T H)^{-1} H^T, the Moore-Penrose pseudo-inverse of H. Assuming that the bias vector b is zero except for the b_i and b_j components (corresponding to a bias in a single set of pixel coordinates with error magnitude ‖b‖ and direction θ), equation (3.1) can be written as (for full details of the derivation, see [16]):

    \frac{\|\Delta x\|^2}{\|p\|^2} = \frac{\sin^2(\theta)(G_{ii} - G_{jj}) + \sin(2\theta) G_{ij} + G_{jj}}{\sin^2(\theta)(S_{ii} - S_{jj}) + \sin(2\theta) S_{ij} + S_{jj}}    (3.2)

3.2 Bayes Algorithm for Isolating Corrupt Measurements

The algorithm for isolating faulty measurements is based on Bayes' rule, given by equation (3.3) and discussed in many books on probability and statistics [35, 3]:

    P(A_i \mid B) = \frac{P(B \mid A_i)\,P(A_i)}{\sum_j P(B \mid A_j)\,P(A_j)}    (3.3)

When the complete set of measurements fails the test described in section 3.1, it is assumed that there is at least one faulty measurement in the set and that each measurement is equally likely to have the error.
Therefore, the elements of the vector P⃗ represent the probability of error in each of the measurements and are initialized as 1/m, where m is the number of measurements. Multiple random subsets of data are created from the original set and tested. If a subset passes, equation (3.5) is used to update the elements of P⃗ corresponding to the measurements in the subset. If it fails, equation (3.4) is used to update those elements of P⃗. After several tests on different subset combinations of the measurements, P⃗ converges, given a high enough probability of having a passing subset of data.

    \vec{P}\{Error = 1 \mid Alarm\}(k+1) = \frac{\bar{P}_{md}\,\vec{P}(k)}{\bar{P}_{md}\,\vec{P}(k) + P_{fa}\,\bar{P}_e}    (3.4)

    \vec{P}\{Error = 1 \mid Pass\}(k+1) = \frac{P_{md}\,\vec{P}(k)}{P_{md}\,\vec{P}(k) + \bar{P}_{md}\,\bar{P}_e}    (3.5)

where P_{md} is the probability of a missed detection, P_{fa} is the probability of a false alarm for the test, and P_e is the probability of an error existing in the subset of measurements given the corresponding elements of P⃗; a bar over a probability denotes its complement, e.g. \bar{P}_e = 1 − P_e. The probability of obtaining a random subset of data that passes is based on a hypergeometric distribution, given by

    P(X = x \mid N, M, n) = \frac{\binom{M}{x}\binom{N-M}{n-x}}{\binom{N}{n}}    (3.6)

with N as the total number of measurements, n as the number of samples in a subset for testing, M as the number of bad measurements, and x as the number of bad measurements in a subset. Assuming that it takes only one bad measurement to result in a failed test, the probability of a passing subset is given by

    P(GoodSet) = P(X = 0 \mid N, M, n)    (3.7)

    = \frac{\binom{N-M}{n}}{\binom{N}{n}}    (3.8)

Figure 4.1 shows a plot of the probability of passing the test relative to the number of bad measurements, given that five measurements are taken at a time for testing.

Results

This algorithm is demonstrated using a 50-run Monte Carlo experiment. Each experiment is performed with a total of 100 measurements and a varying number of bad measurements.
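A minimal sketch of the subset-testing algorithm is given below, under simplifying assumptions: the detector is idealized (a subset alarms exactly when it contains a faulty measurement), P_md and P_fa are illustrative constants rather than measured rates, and the subset-error complement \bar{P}_e in equations (3.4)–(3.5) is taken per element as 1 − P⃗(k). The sizes mirror the 100-measurement experiments, but the fault indices are arbitrary:

```python
import numpy as np
from math import comb

# Sketch of the Bayes isolation algorithm under the assumptions
# stated above. Fault indices and detector rates are illustrative.
rng = np.random.default_rng(3)
m, n_sub, n_tests = 100, 5, 2000
bad = {3, 17, 42, 60, 88}             # indices of faulty measurements

# Probability of drawing an all-good subset, equations (3.7)-(3.8).
p_good = comb(m - len(bad), n_sub) / comb(m, n_sub)

p = np.full(m, 1.0 / m)               # prior fault probability per measurement
pmd, pfa = 0.05, 0.05                 # assumed detector error rates

for _ in range(n_tests):
    idx = rng.choice(m, size=n_sub, replace=False)
    if any(int(i) in bad for i in idx):
        # Alarm: raise members' fault probability, cf. equation (3.4).
        p[idx] = (1 - pmd) * p[idx] / ((1 - pmd) * p[idx] + pfa * (1 - p[idx]))
    else:
        # Pass: lower members' fault probability, cf. equation (3.5).
        p[idx] = pmd * p[idx] / (pmd * p[idx] + (1 - pmd) * (1 - p[idx]))

isolated = {int(i) for i in np.flatnonzero(p > 0.5)}
print(p_good, sorted(isolated))
```

With an idealized detector, every faulty measurement appears only in alarming subsets, so its probability is driven toward one, while good measurements appear mostly in passing subsets and are driven toward zero; thresholding P⃗ then isolates the faults.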
Figure 4.2 shows that the sum of P⃗ converges to 4.5 after 100 iterative tests. After convergence, all five erroneous measurements can be isolated without any false positives relating to other measurements. It should be noted that the algorithm does not need to run to full convergence to isolate the bad measurements; it can be run with fewer iterations, assuming that every measurement has been included in a test. However, if it has not converged, the likelihood of isolating good measurements is decreased, which may be acceptable as a trade-off for the computation time required for additional tests. As the number of bad measurements is increased, the sum of P⃗ increases: Figures 4.3 and 4.4 show ΣP⃗ converging to 8 and 10.8 with 10 and 15 bad measurements, respectively. If many more experiments are performed and the steady-state value of ΣP⃗ is plotted relative to the number of bad measurements, as in Figure 4.5, a non-linear relationship is seen. This relationship is similar to the probability of passing a test vs. the number of bad measurements, as seen in Figure 4.1.

Figure 4.1: Probability of getting a good subset of measurements, assuming a total of 100 measurements and 5 measurements in a given subset.

A linear relationship is found when the steady-state value of ΣP⃗ is plotted relative to the probability of selecting a good subset, as seen in Figure 4.6. This relationship varies depending on the total number of measurements N and the number of measurements in a testing subset n, but it remains linear. This provides a simple tool to uncover the number of bad measurements.
Figure 4.2: Average sum of the error probability vector P⃗ vs. test iteration, given a 50-run Monte Carlo experiment with 5 of 100 measurements bad.

Figure 4.3: Average sum of the error probability vector P⃗ vs. test iteration, given a 50-run Monte Carlo experiment with 10 of 100 measurements bad.

Figure 4.4: Average sum of the error probability vector P⃗ vs. test iteration, given a 50-run Monte Carlo experiment with 15 of 100 measurements bad.

Figure 4.5: Steady-state average sum of the error probability vector P⃗ vs. the number of bad measurements out of 100 total measurements.

Figure 4.6: Steady-state average sum of the error probability vector P⃗ vs. the probability of getting a good subset of data, with 100 total measurements and a varying number of bad measurements.

Conclusion

The work in this thesis uncovered a linear relationship between the sum of the error probability vector ΣP⃗ and the probability of selecting a good subset of measurements.
Finding a convenient equation for that linear relationship, given the number of measurements, the size of the subset, and other variables, will be investigated in future work. In summary, vision systems have been shown to be good substitutes for GPS for navigation. However, the number and quality of available measurements vary due to changes in the navigation environment. This leads to a high probability of corrupt measurements and the possibility of multiple corrupt measurements at any instant. The ability to detect and exclude these errors is essential if these systems are to be certified for aviation applications. This thesis laid out a method that provides more information in the event of multiple corrupt measurements, allowing the number and identity of the corrupt measurements to be determined.

Bibliography

[1] R. G. Brown. Receiver autonomous integrity monitoring. Global Positioning System: Theory and Applications, II:143–165, 1993.

[2] R. G. Brown and G. Chin. GPS RAIM: Calculation of threshold and protection radius using chi-square methods - a geometric approach. Institute of Navigation Special Monograph Series (Red Book), V:155–178, 1998.

[3] G. Casella and R. L. Berger. Statistical Inference. Duxbury Press, New York, 1991.

[4] A. Clot, C. Macabiau, I. Nikiforov, and B. Roturier. Sequential RAIM designed to detect combined step ramp pseudo-range error. In Proceedings of the 19th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS 2006), 2006.

[5] D. D. Diel, P. DeBitetto, and S. Teller. Epipolar constraints for vision aided inertial navigation. In Proceedings of the IEEE Workshop on Motion and Video Computing, 2005.

[6] S. Ebcin and M. J. Veth. Tightly-coupled image-aided inertial navigation using the unscented Kalman filter. In Proceedings of The Institute of Navigation GNSS, 20th International Technical Meeting of the Satellite Division, 2007.

[7] J. L. Farrell and F. van Graas.
Statistical validation for GPS integrity test. Journal of the Institute of Navigation, 39(2), 1992.

[8] M. G. Giebner. Tightly-coupled image-aided inertial navigation system via a Kalman filter. Master's thesis, Air Force Institute of Technology, WPAFB, OH, March 2003.

[9] Mohinder S. Grewal and Angus P. Andrews. Kalman Filtering: Theory and Practice Using MATLAB. John Wiley and Sons, 3rd edition, 2008.

[10] B. Hofmann-Wellenhof, K. Legat, and M. Wieser. Navigation: Principles of Positioning and Guidance. Springer-Verlag Wien New York, Austria, 2003.

[11] J. K. Holmes. Coherent Spread Spectrum Systems. Wiley-Interscience, New York, NY, 1982.

[12] T. Hoshizaki, A. D. Andrisani, A. Braun, A. Mulyana, and J. Bethel. Performance of integrated electro-optical navigation systems. In Proceedings of the 16th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GPS/GNSS 2003), 2003.

[13] I. Kaminer, A. Pascoal, and W. Kang. Integrated vision/inertial navigation system design using nonlinear filtering. In Proceedings of the American Control Conference, pages 1910–1914, 1999.

[14] Elliott D. Kaplan, editor. Understanding GPS: Principles and Applications. Artech House, Norwood, MA, 1996.

[15] R. Klette, K. Schlüns, and A. Koschan. Computer Vision: Three-Dimensional Data from Images. Springer-Verlag, Singapore, 1998.

[16] Craig D. Larson. An Integrity Framework for Image-Based Navigation Systems. PhD thesis, Air Force Institute of Technology, 2010.

[17] Craig D. Larson and John Raquet. The impact of attitude on image-based integrity. In Proceedings of The Institute of Navigation International Technical Meeting, January 2010.

[18] Craig D. Larson, John Raquet, and Michael J. Veth. Developing a framework for image-based integrity. In Proceedings of ION GNSS 2009, pages 778–789, September 2009.

[19] Y. C. Lee. Navigation system integrity using redundant measurements.
In Proceedings of the 17th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS 2004), 2004.

[20] Y. Ma, S. Soatto, J. Košecká, and S. S. Sastry. An Invitation to 3-D Vision: From Images to Geometric Models. Springer-Verlag New York Inc., New York, NY, 2004.

[21] Yi Ma, Stefano Soatto, Jana Košecká, and Shankar S. Sastry. An Invitation to 3-D Vision. Springer-Verlag, New York, NY, 2004.

[22] I. Martini and G. W. Hein. An integrity monitoring technique for multiple failures detection. Proceedings of the IEEE, pages 450–467, 2006.

[23] P. Misra and Per K. Enge. Global Positioning System: Signals, Measurements, and Performance. Ganga-Jamuna Press, Lincoln, MA, 2001.

[24] S. Moafipoor. Updating the navigation parameters by direct feedback from the image sensor in a multi-sensor system. In Proceedings of the 19th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS 2006), 2006.

[25] A. I. Mourikis, N. Trawny, S. I. Roumeliotis, A. Johnson, and L. Matthies. Vision-aided inertial navigation for precise planetary landing: Analysis and experiments. In Proceedings of Robotics: Science and Systems, 2007.

[26] M. B. Nielson. Development and flight test of a robust optical-inertial navigation system using low-cost sensors. Master's thesis, Graduate School of Engineering, Air Force Institute of Technology (AETC), WPAFB, OH, 2008.

[27] Igor Nikiforov and Benoît Roturier. Statistical analysis of different RAIM schemes. In Proceedings of the 15th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GPS 2002), 2002.

[28] Igor Nikiforov and Benoît Roturier. Advanced RAIM algorithms: First results. In Proceedings of the 18th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS 2005), 2005.

[29] B. W. Parkinson and P. Axelrad. Autonomous GPS integrity monitoring using the pseudorange residual.
Journal of the Institute of Navigation, 35(2), 1988.

[30] J. E. Potter and M. C. Suman. Thresholdless redundancy management with arrays of skewed instruments. Integrity in Electronic Flight Control Systems, AGARDOGRAPH-224:15-1 to 15-25, 1977.

[31] J. F. Raquet and M. G. Giebner. Navigation using optical measurements of objects at unknown locations. In Proceedings of the 59th Annual Meeting of The Institute of Navigation and CIGTF 22nd Guidance Test Symposium, 2003.

[32] Robert Rogers. Applied Mathematics in Integrated Navigation Systems. American Institute of Aeronautics and Astronautics, 2nd edition, 2003.

[33] Ben K. H. Soon and Steve Scheding. Sensitivity analysis of an integrated vision and inertial system. In Proceedings of IEEE/ION PLANS 2006, 2006.

[34] James Spilker Jr. Digital Communications by Satellite. Prentice Hall, Englewood Cliffs, NJ, 1977.

[35] Henry Stark and John W. Woods. Probability, Random Processes, and Estimation Theory for Engineers. Prentice Hall, 2nd edition, 1994.

[36] M. A. Sturza. Navigation system integrity using redundant measurements. Journal of the Institute of Navigation, 35(4):483–501, Winter 1988–1989.

[37] D. H. Titterton and J. L. Weston. Strapdown Inertial Navigation Technology, volume 207 of Progress in Astronautics and Aeronautics. American Institute of Aeronautics and Astronautics, New York, NY, 2nd edition, 2004.

[38] N. Trawny, A. I. Mourikis, S. I. Roumeliotis, A. E. Johnson, and J. F. Montgomery. Vision-aided inertial navigation for pin-point landing using observations of mapped landmarks. Journal of Field Robotics, 24(5):357–378, May 2007.

[39] F. van Graas and J. L. Farrell. Baseline fault detection and exclusion algorithm. In Proceedings of the 49th Annual Meeting of The Institute of Navigation, 1993.

[40] M. Veth, R. Anderson, F. Webber, and M. Nielsen. Tightly-coupled INS, GPS, and imaging sensors for precision geolocation.
In Proceedings of the 2008 National Technical Meeting of The Institute of Navigation, 2008.

[41] M. Veth, J. Raquet, and M. Pachter. Stochastic constraints for efficient image correspondence search. IEEE Transactions on Aerospace and Electronic Systems, 42(3):973–982, July 2006.

[42] M. J. Veth and J. F. Raquet. Fusion of low-cost imaging and inertial sensors for navigation. In Proceedings of the 19th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS 2006), 2006.

[43] M. J. Veth and J. F. Raquet. Two-dimensional stochastic processes for tight integration of optical and inertial sensors. In Proceedings of The Institute of Navigation National Technical Meeting, January 2006.

[44] Michael J. Veth. Fusion of Imaging and Inertial Sensors for Navigation. PhD thesis, Air Force Institute of Technology, 2006.

[45] Allen D. Wu and Eric N. Johnson. Autonomous flight in GPS-denied environments using monocular vision and inertial sensors. In Proceedings of AIAA Infotech@Aerospace 2010, 2010.

[46] Allen D. Wu, Eric N. Johnson, and Alison A. Proctor. Vision-aided inertial navigation for flight control. In Proceedings of the AIAA Guidance, Navigation, and Control Conference and Exhibit, 2005.

[47] R. S. Young, G. A. McGraw, and B. T. Driscoll. Investigation and comparison of horizontal protection level and horizontal uncertainty level in FDE algorithms. In Proceedings of ION GPS-96, Satellite Division Technical Meeting, 1996.

[48] Z. Zhang. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330–1334, 2000.