
Integrity

From Navipedia
Revision as of 00:04, 12 April 2011 by Duarte.Sampaio (talk | contribs)


Category: Fundamentals
Title: Integrity
Author(s): Rui Barradas Pereira
Level: Basic
Year of Publication: 2011


As it has often been a cause of confusion, it is worth trying to clarify the distinction between accuracy and integrity:

  • From a mathematical point of view, the main difference between them is the point of the tail of the statistical distribution of errors at which the cut-off is placed. For instance, civil aviation requirements tend to measure accuracy at the 95th percentile (e.g. "95% of the errors shall be below such and such..."), whereas integrity requirements refer to percentiles that range between 99.999% and 99.9999999% (depending on the particular topic under consideration). The intention behind this is to keep the probability of hazardous situations (those that could put human lives at risk) extremely low.
  • Another key difference concerns alarms: integrity requirements involve raising alarms when the system's performance becomes bad enough to be risky, whereas accuracy requirements do not.
  • From a system performance perspective, accuracy is understood as a global system characteristic, whereas integrity is rather intended as a real-time decision criterion for using or not using the system. For this reason, it has been common practice in the civil aviation world to associate integrity with a mechanism, or set of mechanisms (barriers), that is part of the integrity assurance chain but at the same time is completely independent of the other parts of the system for which integrity is to be assured. Examples of this (although not at user level) are the EGNOS CPF Check Set (independent from the CPF Processing Set) or the Galileo GCC IPF (independent from the GCC OSPF).
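The gap between the two kinds of requirements can be illustrated numerically. Assuming zero-mean Gaussian errors (a simplifying model used here only for illustration, not mandated by any standard), each two-sided percentile corresponds to a sigma multiplier:

```python
from statistics import NormalDist

def two_sided_multiplier(p: float) -> float:
    """Sigma multiplier k such that P(|e| <= k*sigma) = p
    for a zero-mean Gaussian error e."""
    return NormalDist().inv_cdf(1.0 - (1.0 - p) / 2.0)

# Accuracy is typically stated at the 95th percentile;
# integrity requirements sit much farther out in the tail.
print(f"95%      -> {two_sided_multiplier(0.95):.2f} sigma")      # ~1.96
print(f"99.999%  -> {two_sided_multiplier(0.99999):.2f} sigma")   # ~4.42
print(f"1 - 1e-9 -> {two_sided_multiplier(1 - 1e-9):.2f} sigma")  # ~6.11
```

The point is that moving from a 95% accuracy requirement to a 1 - 10⁻⁹ integrity requirement roughly triples the error bound, even under this idealised Gaussian assumption.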

The definitions of the four RNP parameters above are somewhat vague and need further specification. In the case of integrity, this is achieved by means of the concepts of alert limit, integrity risk, protection level and time to alert. Different definitions of alert limit can be found in the civil aviation standards and regulations, depending on the specificity of the subject to which they are applied. The following generic definition has been taken from [1]:

Alert Limit

The alert limit for a given parameter measurement is the error tolerance not to be exceeded without issuing an alert.

The following more specific definition of alert limit will be used for a positioning system:

The horizontal (respectively vertical) alert limit is the maximum allowable horizontal (respectively vertical) position error beyond which the system should be declared unavailable for the intended application.

Integrity Risk

The integrity risk, although not explicitly defined in the civil aviation standards, is a bound on the probability that integrity is not achieved, which must be prescribed as a requirement for each system, or for each of the system's applications or operating modes. The following simplified definition will be adopted:

Integrity risk is the probability that, at any moment, the position error exceeds the alert limit.

The integrity risk is often assumed to include the time to alert (precisely defined later in this section), a sort of latency time granted to the system to detect a failure after it has occurred. In that approach, the position error should keep exceeding the alert limit for a time longer than the time to alert in order to be accounted for within the integrity risk. However, since there seems to be no standardised definition of the integrity risk, it was deliberately decided to leave the time to alert aside in order to keep the basic concepts as simple and independent as possible, so that they do not become confusing when combined into more complex concepts. Integrity risk and time to alert will be combined later in this section within the concept of integrity failure.

Protection Level

Since during normal operations it is not possible to know the position error of an aircraft, a statistical bound on the position error, called the protection level, needs to be computed in order to measure the risk that the alert limit is surpassed. Hence, the system is declared unavailable not when the actual position error exceeds the alert limit, but when the protection level does (as then the risk of the position error exceeding the alert limit would be above the target integrity risk). The protection level is defined in [1] as follows:

The horizontal protection level provides a bound on the horizontal position error with a probability derived from the integrity requirement. Similarly, the vertical protection level provides a bound on the vertical position error.

The phrase "a probability derived from the integrity requirement" in the preceding definition refers, although not explicitly, to the integrity risk. A somewhat equivalent definition of the horizontal protection level (as well as its vertical analogue, omitted here as it is identical) is provided in [2] for GNSS airborne equipment operating with an SBAS service:

The horizontal protection level is the radius of a circle in the horizontal plane (the plane tangent to WGS-84 ellipsoid), with its center being at the true position, that describes the region assured to contain the indicated horizontal position. It is the horizontal region where the missed alert requirement can be met. It is based upon the error estimates provided by SBAS.

The same standard [2] defines a specific horizontal protection level (as well as its vertical analogue, omitted here) for GNSS airborne equipment operating autonomously:

The horizontal protection level is the radius of a circle in the horizontal plane (the plane tangent to WGS-84 ellipsoid), with its center being at the true position, that describes the region assured to contain the indicated horizontal position. It is a horizontal region where the missed alert and false alarm requirements are met for the chosen set of satellites when autonomous fault detection is used. It is a function of the satellite and user geometry and the expected error characteristics: it is not affected by actual measurements. Its value is predictable given reasonable assumptions regarding the expected error characteristics.

Both definitions above are unspecific about how the protection level is linked to the integrity risk. For that reason the following (conceptually equivalent) definition was adopted:

A horizontal (respectively vertical) protection level is a statistical bound of the horizontal (respectively vertical) position error computed so as to guarantee that the probability of the absolute horizontal (respectively vertical) position error exceeding said number is smaller than or equal to the target integrity risk.

Note that any reference to false alarm probability has been intentionally omitted. False alarm probability requirements (if applicable) for any particular positioning system could then be specified, thus setting restrictions to how loose a bound the protection level can be, but this is kept independent of the protection level concept itself. The concepts of horizontal protection level (HPL) and vertical protection level (VPL) are graphically represented in the Horizontal and vertical protection levels figure.
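As a sketch of how these pieces fit together, the following assumes the position error is overbounded by a zero-mean Gaussian with a known standard deviation (a common simplifying model, not prescribed by the definitions above); the function names and the numerical values are illustrative only:

```python
from statistics import NormalDist

def protection_level(sigma: float, integrity_risk: float) -> float:
    """Statistical bound PL such that P(|error| > PL) <= integrity_risk,
    assuming the error is overbounded by a zero-mean Gaussian N(0, sigma^2)."""
    k = NormalDist().inv_cdf(1.0 - integrity_risk / 2.0)
    return k * sigma

def system_available(pl: float, alert_limit: float) -> bool:
    # The system is declared unavailable when the protection level
    # (not the unknown true error) exceeds the alert limit.
    return pl <= alert_limit

# Hypothetical numbers: 2 m error sigma, 1e-7 target integrity risk, 40 m alert limit.
hpl = protection_level(sigma=2.0, integrity_risk=1e-7)
print(f"HPL = {hpl:.1f} m, available = {system_available(hpl, alert_limit=40.0)}")
```

Note that the protection level grows with both the error sigma and the stringency of the integrity risk, which is why a system can be accurate on average yet still declared unavailable for a demanding operation.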

Figure: Horizontal and vertical protection levels


A quite handy tool to explain and illustrate most of these concepts and their relations (as well as to assess positioning systems' performance) is the so-called Stanford diagram (or Stanford plot), introduced in [3] and improved in [4]. The layout of a generic Stanford diagram is shown below, in the Stanford diagram figure. For each sample position and protection level, a point is plotted in the Stanford diagram whose abscissa represents the absolute position error and whose ordinate represents its associated protection level. Usually, separate Stanford plots are drawn for the horizontal and vertical components of the position error (with the corresponding horizontal and vertical protection levels, respectively). The diagonal axis separates those samples in which the position error is covered by the protection level, above the diagonal, from those, below the diagonal, in which the protection level fails to cover the position error. Stanford plots allow an easy and quick check that integrity holds, simply by making sure that all sample points lie on the upper side of the diagonal axis. Also, the proximity of the cloud of sample points to the diagonal gives an idea of the achieved level of safety, as any point above the diagonal but very close to it indicates that an integrity event was close to occurring:

A horizontal (respectively vertical) integrity event occurs whenever the horizontal (respectively vertical) position error exceeds the horizontal (respectively vertical) protection level.

The preceding definition is not generally agreed upon, but will be adopted here for convenience. When an integrity event occurs, an alarm should be raised within a prescribed (rather short) time lapse after the event (this is the "ability to provide timely warnings" mentioned in the formal definition of integrity). Said prescribed time lapse is known as the time to alert (TTA), defined in [1] as:

Time to Alert

The maximum allowable time elapsed from the onset of the navigation system being out of tolerance until the equipment enunciates the alert.

Strictly speaking, an integrity event should not be considered as such unless it lasts for longer than the TTA without an alarm being raised, since integrity events which either are detected (with the corresponding alarm raised) within the TTA or last for shorter than the TTA should not be considered in the integrity statistics (e.g. when qualifying an airborne navigation system). However, for the sake of clarity, the above definition of an integrity event will be kept and an additional definition introduced: the integrity failure.

Integrity Failure

An integrity failure is an integrity event that lasts for longer than the TTA and with no alarm raised within the TTA.
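The distinction between an integrity event and an integrity failure can be sketched as a simple predicate (function name and parameters are illustrative, with times measured in seconds from the event onset):

```python
from typing import Optional

def is_integrity_failure(event_duration: float, tta: float,
                         alarm_time: Optional[float]) -> bool:
    """An integrity event becomes an integrity failure only if it outlasts
    the time to alert (TTA) and no alarm is raised within the TTA.
    alarm_time is the alarm instant (seconds after event onset),
    or None if no alarm was raised at all."""
    if event_duration <= tta:
        return False                          # event cleared within the TTA
    return alarm_time is None or alarm_time > tta

# A 10 s event with a 6 s TTA and an alarm at 4 s: detected in time.
print(is_integrity_failure(10.0, 6.0, 4.0))   # False
# The same event with no alarm at all: an integrity failure.
print(is_integrity_failure(10.0, 6.0, None))  # True
```

This mirrors the text above: short-lived events and events alarmed within the TTA count as integrity events but not as integrity failures.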

Figure: The Stanford diagram


The Stanford diagram actually accounts for integrity events and not for integrity failures (according to the above definitions), but it allows one to distinguish between two types of integrity events: misleading information (or MI) events and hazardously misleading information (or HMI) events:

  • A misleading information event occurs when, the system being declared available, the position error exceeds the protection level but not the alert limit.
  • A hazardously misleading information event occurs when, the system being declared available, the position error exceeds the alert limit.

Both MI and HMI events can either be horizontal or vertical. In the Stanford diagram, as depicted in the figure above, MI events would be represented by those sample points lying below the diagonal axis and on the left-hand side of the vertical dashed line (which is placed at the abscissa equal to the alert limit). The HMI events would be represented by points lying on the right-hand side of the vertical dashed line and below the horizontal dashed line (which is placed at the ordinate equal to the alert limit).

Finally, Stanford diagrams also allow assessing compliance with system availability requirements: unavailability appears in the plot as points whose ordinate is above the alert limit. Hence, the system performs correctly (in terms of availability and integrity) when all points in the Stanford plot lie inside the triangle defined by the ordinate axis, the diagonal axis, and the horizontal dashed line. Accuracy could also be assessed by including a vertical line in the diagram at the abscissa equal to the target error upper bound; points lying on the left-hand side of that line correspond to correct system accuracy performance.
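Putting the regions of the diagram together, the classification of a single sample can be sketched as follows (function name and the example thresholds are illustrative; a full Stanford diagram further subdivides the unavailable region, which is omitted here for simplicity):

```python
def stanford_class(error: float, pl: float, al: float) -> str:
    """Classify one Stanford-diagram sample from its absolute position
    error, protection level (PL) and alert limit (AL), following the
    MI/HMI definitions above."""
    if pl > al:
        return "system unavailable"   # ordinate above the alert limit
    if error <= pl:
        return "nominal"              # point above the diagonal: PL covers the error
    if error <= al:
        return "misleading information (MI)"
    return "hazardously misleading information (HMI)"

# Hypothetical samples with an 8 m protection level and a 40 m alert limit:
print(stanford_class(error=3.0,  pl=8.0, al=40.0))   # nominal
print(stanford_class(error=12.0, pl=8.0, al=40.0))   # MI
print(stanford_class(error=45.0, pl=8.0, al=40.0))   # HMI
```

Only the "nominal" and "system unavailable" outcomes are acceptable in operation; any MI or HMI sample is an integrity event.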

Integrity in Galileo

All the concepts defined so far originated in the civil aviation world, and some of them reached their current degree of maturity thanks to the advent of SBAS systems (in particular, the American WAAS). The new integrity concept defined for Galileo, conceived for safety-of-life applications, has not yet been adopted by civil aviation, but it also offers a way to achieve GNSS integrity that must be taken into account not only in the context of safety-of-life applications but also in that of liability-critical applications. However, the Galileo integrity concept can be considered equivalent to the integrity concepts adopted so far by civil aviation, the main difference being that, instead of fixing the integrity risk and then computing an associated protection level, Galileo users would fix a target protection level (or a similar concept) and then compute the associated integrity risk by means of the integrity information broadcast by the Galileo satellites. Whether the Galileo integrity concept actually requires a particular treatment in the context of liability-critical applications is a question beyond the scope of the present version of this document and will have to be discussed in future versions.

References

  1. Annex 10 (Aeronautical Telecommunications) to the Convention on International Civil Aviation, Volume I - Radio Navigation Aids, International Standards and Recommended Practices (SARPs). ICAO Doc. AN10-1, 6th Edition, July 2006
  2. Minimum Operational Performance Standards for Global Positioning System/Wide Area Augmentation System Airborne Equipment. RTCA DO-229, December 2006
  3. WAAS Precision Approach Metrics: Accuracy, Integrity, Continuity and Availability, October 1999
  4. The Stanford - ESA Integrity Diagram: Focusing on SBAS Integrity, September 2006