
Latest revision as of 13:06, 26 July 2018


Fundamentals
Title: Integrity
Edited by: GMV
Level: Basic
Year of Publication: 2011

Integrity is the measure of the trust that can be placed in the correctness of the information supplied by a navigation system. Integrity includes the ability of the system to provide timely warnings to users when the system should not be used for navigation[nb 1].


Integrity vs. Accuracy

As it has often been a cause of confusion, it is worth clarifying the distinction between accuracy and integrity:

  • From a mathematical point of view, the main difference between them is the point of the tail of the statistical distribution of errors at which to place the cut-off. For instance, civil aviation requirements tend to measure accuracy at the 95th percentile (e.g. "95% of the errors shall be below such and such..."), whereas integrity requirements refer to percentiles that range between 99.999% and 99.9999999% (depending on the particular topic under consideration). The intention behind this is to keep the probability of hazardous situations (which could put human lives at risk) extremely low.
  • Another key difference is in the alarms: integrity requirements involve alarms being raised when the system's performance degrades enough to become risky, while accuracy requirements do not.
  • From a system performance perspective, accuracy is understood as a global system characteristic, whereas integrity is rather intended as a real-time decision criterion for using or not using the system. For this reason, it has been common practice to associate integrity with a mechanism, or set of mechanisms (barriers), that is part of the integrity assurance chain but at the same time is completely independent of the other parts of the system whose integrity is to be assured.
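The gap between accuracy and integrity percentiles can be made concrete with a short calculation. Assuming, purely for illustration, that position errors follow a zero-mean Gaussian distribution with standard deviation sigma, the two-sided error bound grows from about 2 sigma at the 95th percentile to more than 6 sigma at the 99.9999999% percentile:

```python
from statistics import NormalDist

# Zero-mean Gaussian error model (an illustrative assumption, not a
# statement about real GNSS error distributions).
nd = NormalDist(mu=0.0, sigma=1.0)

for p in (0.95, 0.99999, 0.999999999):
    # Two-sided bound b on |error|: P(|error| <= b) = p
    # => b = inv_cdf((1 + p) / 2) for a zero-mean Gaussian.
    bound = nd.inv_cdf((1.0 + p) / 2.0)
    print(f"percentile {p}: |error| bound = {bound:.2f} * sigma")
```

For sigma = 1 m, the accuracy bound is about 1.96 m, while the most demanding integrity percentile requires bounding errors beyond roughly 6.1 m; this is why integrity requirements concentrate on the extreme tail of the error distribution.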

Integrity Parameters

This definition of integrity is somewhat vague and needs further specification. This is achieved by means of the concepts of Alert Limit, Integrity Risk, Protection Level and Time to Alert[2]:

Alert Limit: The alert limit for a given parameter measurement is the error tolerance not to be exceeded without issuing an alert.
Time to Alert: The maximum allowable time elapsed from the onset of the navigation system being out of tolerance until the equipment enunciates the alert.
Integrity Risk: Probability that, at any moment, the position error exceeds the Alert Limit.
Protection Level: Statistical error bound computed so as to guarantee that the probability of the absolute position error exceeding said number is smaller than or equal to the target integrity risk.


Alert Limit

The alert limit for a given parameter measurement is the error tolerance not to be exceeded without issuing an alert.

The following more specific definition of alert limit will be used for a positioning system:

The horizontal (respectively vertical) alert limit is the maximum allowable horizontal (respectively vertical) position error beyond which the system should be declared unavailable for the intended application.


Time to Alert

When an integrity event occurs, an alarm should be raised within a prescribed (rather short) time lapse after the event (this is the “ability to provide timely warnings” mentioned in the formal definition of integrity). Said prescribed time lapse is known as the time to alert (TTA), defined in [2] as:

The maximum allowable time elapsed from the onset of the navigation system being out of tolerance until the equipment enunciates the alert.

Strictly speaking, an integrity event should not be considered as such unless it lasts longer than the TTA without an alarm being raised, since integrity events that are either detected (with the corresponding alarm raised) within the TTA or last shorter than the TTA should not be counted in the integrity statistics (e.g. when qualifying an airborne navigation system). However, for the sake of clarity, the above definition of an integrity event will be kept and an additional definition introduced: the integrity failure.
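The rule just described can be sketched as a small decision function (a hypothetical helper, not part of any standard), anticipating the definition of integrity failure given later in this article:

```python
def is_integrity_failure(event_duration_s, alarm_delay_s, tta_s):
    """Decide whether an integrity event counts as an integrity failure.

    An integrity event becomes an integrity failure only if it lasts
    longer than the time to alert (TTA) and no alarm was raised within
    the TTA.  alarm_delay_s is the time from event onset to the alarm,
    or None if no alarm was ever raised.  (Illustrative sketch only.)
    """
    alarmed_in_time = alarm_delay_s is not None and alarm_delay_s <= tta_s
    return event_duration_s > tta_s and not alarmed_in_time

# A 10 s event with an alarm raised after 4 s (TTA = 6 s): not a failure.
print(is_integrity_failure(10.0, 4.0, 6.0))   # False
# A 10 s event with no alarm at all: an integrity failure.
print(is_integrity_failure(10.0, None, 6.0))  # True
```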


Integrity Risk

The integrity risk, although not explicitly defined in the civil aviation standards, is a bound on the probability that integrity is not achieved, which must be prescribed as a requirement for each system, application or operating mode. Hence:

Integrity risk is the probability that, at any moment, the position error exceeds the alert limit.

The integrity risk is often assumed to include the time to alert, a sort of latency granted to the system to detect a failure after it has occurred. In that approach, the position error must keep exceeding the alert limit for longer than the time to alert in order to be accounted for within the integrity risk. However, since there seems to be no standardized definition of the integrity risk, the time to alert was set aside here in order to keep the basic concepts as simple and independent as possible, so that they do not become confusing when combined into more complex concepts. Integrity risk and time to alert will be combined later within the concept of Integrity Failure.
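Under this simplified definition (with the time to alert set aside), the achieved integrity risk can be estimated from logged data as the fraction of samples whose absolute position error exceeds the alert limit. The function below is an illustrative sketch, not a qualification procedure:

```python
def empirical_integrity_risk(position_errors, alert_limit):
    """Fraction of epochs whose absolute position error exceeds the
    alert limit: a crude sample estimate of the integrity risk that
    ignores the time to alert (hypothetical helper)."""
    exceedances = sum(1 for e in position_errors if abs(e) > alert_limit)
    return exceedances / len(position_errors)

# Toy data: 1 epoch out of 5 exceeds a 10 m alert limit.
errors_m = [1.2, 3.4, 11.0, 0.8, 2.1]
print(empirical_integrity_risk(errors_m, 10.0))  # 0.2
```

In practice the target risks are so small (e.g. 1e-7) that meaningful estimation requires enormous data sets or analytical methods rather than simple counting.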


Protection Level

Since during normal operations it is not possible to know the position error of an aircraft, a statistical bound on the position error, called the protection level, needs to be computed in order to measure the risk that the alert limit is surpassed. Hence, the system is declared unavailable not when the alert limit is exceeded by the actual position error, but when it is exceeded by the protection level (as then the risk of the position error exceeding the alert limit would be above the target integrity risk). The protection level is defined by ICAO[2] as follows:

The horizontal protection level provides a bound on the horizontal position error with a probability derived from the integrity requirement. Similarly, the vertical protection level provides a bound on the vertical position error.

The phrase "a probability derived from the integrity requirement" in the preceding definition refers, although not explicitly, to the integrity risk. A somewhat equivalent definition of the horizontal protection level (as well as its vertical analogue, herein omitted as it is identical) is provided by RTCA[3] for GNSS airborne equipment operating with an SBAS service:

The horizontal protection level is the radius of a circle in the horizontal plane (the plane tangent to WGS-84 ellipsoid), with its center being at the true position, that describes the region assured to contain the indicated horizontal position. It is the horizontal region where the missed alert requirement can be met. It is based upon the error estimates provided by SBAS.
Figure: Horizontal and vertical protection levels

The same standard[3] defines a specific horizontal protection level (as well as its vertical analogue, likewise omitted) for GNSS airborne equipment operating autonomously:

The horizontal protection level is the radius of a circle in the horizontal plane (the plane tangent to WGS-84 ellipsoid), with its center being at the true position, that describes the region assured to contain the indicated horizontal position. It is a horizontal region where the missed alert and false alarm requirements are met for the chosen set of satellites when autonomous fault detection is used. It is a function of the satellite and user geometry and the expected error characteristics: it is not affected by actual measurements. Its value is predictable given reasonable assumptions regarding the expected error characteristics.

Both definitions above are unspecific about how the protection level is linked to the integrity risk. For that reason the following (conceptually equivalent) definition was adopted:

A horizontal (respectively vertical) protection level is a statistical bound of the horizontal (respectively vertical) position error computed so as to guarantee that the probability of the absolute horizontal (respectively vertical) position error exceeding said number is smaller than or equal to the target integrity risk.

Note that any reference to a false alarm probability has been intentionally omitted. False alarm probability requirements (if applicable) for any particular positioning system could then be specified, thus setting restrictions on how loose a bound the protection level can be, but this is kept independent of the protection level concept itself. The concepts of horizontal protection level (HPL) and vertical protection level (VPL) are graphically represented in the Horizontal and vertical protection levels figure.
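Under the definition above, and assuming a zero-mean Gaussian overbound of the position error (a strong simplification; the actual SBAS equations in RTCA DO-229 apply tabulated K factors to projected error covariances), the protection level is simply a quantile of that distribution scaled by the error standard deviation:

```python
from statistics import NormalDist

def protection_level(sigma, integrity_risk):
    """Protection level under a zero-mean Gaussian overbound of the
    position error: the bound b such that P(|error| > b) equals the
    target integrity risk.  Simplified sketch for illustration only."""
    # Two-sided tail: split the integrity risk between both tails.
    k = NormalDist().inv_cdf(1.0 - integrity_risk / 2.0)
    return k * sigma

# With sigma = 2 m and an integrity risk of 1e-7, the multiplier is
# roughly 5.33, giving a protection level of about 10.7 m.
print(round(protection_level(2.0, 1e-7), 1))
```

The familiar SBAS vertical K factor of 5.33 corresponds to exactly this kind of Gaussian tail probability, which is why the sketch reproduces it.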

Related to the protection level are integrity events, which are defined as:

A horizontal (respectively vertical) integrity event occurs whenever the horizontal (respectively vertical) position error exceeds the horizontal (respectively vertical) protection level.


Integrity Failure

Figure: The Stanford diagram

An integrity failure is an integrity event that lasts for longer than the TTA and with no alarm raised within the TTA.

The Stanford diagram actually accounts for integrity events and not for integrity failures (according to the above definitions), but it allows distinguishing between two types of integrity events: misleading information (MI) events and hazardously misleading information (HMI) events:

  • A misleading information event occurs when, with the system declared available, the position error exceeds the protection level but not the alert limit.
  • A hazardously misleading information event occurs when, with the system declared available, the position error exceeds the alert limit.

The Stanford diagram (or Stanford plot), introduced in [4] and improved in [5], is a quite handy tool to explain and illustrate most of these concepts and their relations (as well as to assess positioning systems' performance). The layout of a generic Stanford diagram is shown in the Stanford diagram figure. For each sample position and protection level, a point is plotted whose abscissa represents the absolute position error and whose ordinate represents its associated protection level. Usually, separate Stanford plots are produced for the horizontal and vertical components of the position errors (with the corresponding horizontal and vertical protection levels, respectively). The diagonal axis separates those samples in which the position error is covered by the protection level, above the diagonal, from those, below the diagonal, in which the protection level fails to cover the position error. Stanford plots allow an easy and quick check that integrity holds, just by making sure that all sample points lie on the upper side of the diagonal axis. Also, the proximity of the cloud of sample points to the diagonal gives an idea of the achieved level of safety, as any point above the diagonal but very close to it indicates that an integrity event was close to occurring.

Both MI and HMI events can either be horizontal or vertical. In the Stanford diagram, as depicted in the figure above, MI events would be represented by those sample points lying below the diagonal axis and on the left-hand side of the vertical dashed line (which is placed at the abscissa equal to the alert limit). The HMI events would be represented by points lying on the right-hand side of the vertical dashed line and below the horizontal dashed line (which is placed at the ordinate equal to the alert limit).

Finally, Stanford diagrams also allow assessing compliance with system availability requirements: epochs in which the system is unavailable appear in the plot as points whose ordinate is above the alert limit. Hence, the system performs correctly (in terms of availability and integrity) when all points in the Stanford plot lie inside the triangle defined by the ordinate axis, the diagonal axis, and the horizontal dashed line. Accuracy could also be assessed by including a vertical line in the diagram at the abscissa equal to the target error upper bound, with the points lying on the left-hand side of that line corresponding to correct system accuracy performance.
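The regions described in the last few paragraphs can be summarized as a small classification rule. The function below is illustrative; in particular, it does not subdivide the unavailable region (ordinate above the alert limit) into the finer sub-regions that full Stanford diagrams often show:

```python
def stanford_region(position_error, protection_level, alert_limit):
    """Classify one (position error, protection level) sample into the
    Stanford diagram regions described above (illustrative helper)."""
    pe, pl, al = abs(position_error), protection_level, alert_limit
    if pl > al:
        return "system unavailable"  # above the horizontal dashed line
    if pe <= pl:
        return "nominal"             # above the diagonal: PL covers PE
    # Below the diagonal: an integrity event, MI or HMI depending on
    # which side of the vertical dashed line (abscissa = alert limit).
    return "HMI" if pe > al else "MI"

print(stanford_region(2.0, 5.0, 10.0))   # nominal
print(stanford_region(7.0, 5.0, 10.0))   # MI
print(stanford_region(12.0, 5.0, 10.0))  # HMI
print(stanford_region(3.0, 12.0, 10.0))  # system unavailable
```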


Notes

  1. This definition was adapted from the 2008 US Federal Radionavigation Plan[1].

References

  1. US Federal Radionavigation Plan, DOT-VNTSC-RITA-08-02/DoD-4650.5, 2008
  2. Annex 10 (Aeronautical Telecommunications) to the Convention on International Civil Aviation, Volume I - Radio Navigation Aids, International Standards and Recommended Practices (SARPs). ICAO Doc. AN10-1, 6th Edition, Jul 2006
  3. Minimum Operational Performance Standards for Global Positioning System/Wide Area Augmentation System Airborne Equipment. RTCA DO-229, Dec 2006
  4. WAAS Precision Approach Metrics: Accuracy, Integrity, Continuity and Availability, Oct 1999
  5. The Stanford – ESA Integrity Diagram: Focusing on SBAS Integrity