
# Best Linear Unbiased Minimum-Variance Estimator (BLUE)

**Fundamentals**

**Title:** Best Linear Unbiased Minimum-Variance Estimator (BLUE)
**Author(s):** J. Sanz Subirana, J.M. Juan Zornoza and M. Hernández-Pajares, Technical University of Catalonia, Spain.
**Year of Publication:** 2011

The weighting matrix ${\mathbf W}$ of the Weighted Least Square solution (WLS) is a way to account for the different quality of the data in the adjustment problem. This article introduces the criteria used to define this matrix.

Equations (1) and (2) (see Weighted Least Square solution (WLS))

$\hat{\mathbf X}_{_{\mathbf W}}=({\mathbf G}^T\,{\mathbf W}\,{\mathbf G})^{-1}{\mathbf G}^T\,{\mathbf W}\,{\mathbf Y} \qquad \mbox{(1)}$
${\mathbf P}_{_{\mathbf \Delta X_W}}=({\mathbf G}^T\,{\mathbf W}\,{\mathbf G})^{-1}{\mathbf G}^T \,{\mathbf W}\,\,{\mathbf R}\,\,{\mathbf W}\,{\mathbf G}({\mathbf G}^T\,{\mathbf W}\,{\mathbf G})^{-1} \qquad \mbox{(2)}$

are simplified when the weighting matrix ${\mathbf W}$ is taken as the inverse of the covariance matrix ${\mathbf R}$. That is, when:

${\mathbf W}= {\mathbf R}^{-1} \qquad \mbox{(3)}$

Then, equations (1) and (2) become:

$\hat{\mathbf X}=({\mathbf G}^T\,{\mathbf R}^{-1}\,{\mathbf G})^{-1}{\mathbf G}^T\,{\mathbf R^{-1}}\,{\mathbf Y} \qquad \mbox{(4)}$
${\mathbf P}=({\mathbf G}^T\,{\mathbf R}^{-1}\,{\mathbf G})^{-1} \qquad \mbox{(5)}$
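As a sketch of equations (3)–(5), the BLUE solution can be computed with NumPy as follows. The function name and interface are illustrative choices, not from the article:

```python
import numpy as np

def blue_estimate(G, Y, R):
    """Best Linear Unbiased Minimum-Variance (BLUE) estimate.

    Computes x_hat = (G^T R^-1 G)^-1 G^T R^-1 Y   (equation 4)
    and the covariance P = (G^T R^-1 G)^-1        (equation 5),
    with the weighting matrix W = R^-1            (equation 3).
    """
    W = np.linalg.inv(R)        # weighting matrix (equation 3)
    N = G.T @ W @ G             # normal matrix G^T W G
    P = np.linalg.inv(N)        # covariance of the estimate (equation 5)
    x_hat = P @ G.T @ W @ Y     # BLUE estimate (equation 4)
    return x_hat, P
```

In practice one would solve the normal equations (e.g. via a Cholesky factorization) rather than form explicit inverses, but the direct form above mirrors the equations in the text.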

From a different approach, it can be shown (see for instance [Tapley et al., 2004] [1] or [Bierman, 1976] [2] ) that this solution corresponds to the Best Linear Unbiased Minimum-Variance Estimator.

The minimum variance criterion is widely used because of its simplicity. It has the advantage that a complete statistical description of the random errors is not required. Rather, only the first- and second-order statistics of the measurement errors are needed [footnotes 1] (i.e., $E[{\boldsymbol \varepsilon}]=0\; ;\; {\mathbf R}=E[{\boldsymbol \varepsilon} \, {\boldsymbol \varepsilon}^T]$).

Unfortunately, characterising the measurement error is very difficult, and even the covariance matrix is usually not known. A common simplification is to assume that the measurements (prefit residuals ${\mathbf Y}=\left( y_1, ... , y_n \right)^T$) from the different satellites are uncorrelated. Then, the weighting matrix ${\mathbf W}$ becomes:

${\mathbf W}={\mathbf R}^{-1}=\left( {\begin{array}{ccc} 1/\sigma^2_{y_1} & & \\ & \ddots & \\ & & 1/\sigma^2_{y_n} \end{array} } \right) \qquad \mbox{(6)}$

where $\sigma^2_{y_{i}}$ comes from the uncertainty of the different error sources (satellite clocks, ephemeris, ionosphere, troposphere, multipath and receiver noise):

$\sigma^2_{y_i}\equiv\sigma^2_{_{UERE_i}}=\sigma^2_{clk_i}+\sigma^2_{eph_i}+\sigma^2_{iono_i}+\sigma^2_{tropo_i}+\sigma^2_{mp_i}+\sigma^2_{noise_i} \qquad \mbox{(7)}$
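Equation (7) is simply the sum of the individual error-component variances, assuming the error sources are uncorrelated. A minimal helper (the function name is illustrative):

```python
import numpy as np

def uere_variance(sigmas):
    """Total UERE variance (equation 7): the sum of the squared sigmas
    of the individual error components (satellite clock, ephemeris,
    ionosphere, troposphere, multipath, receiver noise), assumed
    mutually uncorrelated."""
    return float(np.sum(np.square(sigmas)))
```

The measurement sigma is then $\sigma_{y_i}=\sqrt{\sigma^2_{UERE_i}}$, e.g. `np.sqrt(uere_variance([...]))`.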

(see for instance [RTCA-MOPS, 2006] [3]), where UERE stands for User Equivalent Range Error. A simpler alternative to (7) is an elevation-dependent model of the form:

$\sigma_{y_i}=a+b\,e^{-elev/c} \qquad \mbox{(8)}$
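The elevation-dependent model (8) and the resulting weighting matrix (6) can be sketched as follows. The coefficient values `a`, `b`, `c` below are illustrative placeholders, not values given in the article:

```python
import numpy as np

def sigma_elevation(elev_deg, a=0.13, b=0.53, c=10.0):
    """Elevation-dependent measurement sigma (equation 8):
    sigma = a + b * exp(-elev / c).
    The default coefficients are illustrative only; they must be
    tuned for the actual receiver and environment."""
    return a + b * np.exp(-np.asarray(elev_deg, dtype=float) / c)

def weighting_matrix(elev_deg):
    """Diagonal weighting matrix W = R^-1 (equation 6), assuming
    uncorrelated measurements with variances sigma_i^2."""
    sigma = sigma_elevation(elev_deg)
    return np.diag(1.0 / sigma**2)
```

Low-elevation satellites get a larger sigma (more troposphere, multipath) and hence a smaller weight in the adjustment.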

## Notes

1. ^ The minimum variance estimate (4) coincides with the maximum likelihood estimate when the observation errors are assumed to be normally distributed with zero mean and covariance ${\mathbf R}$.

## References

1. ^ [Tapley et al., 2004] Tapley, B., Schutz, B. and Born, G., 2004. Statistical Orbit Determination. Academic Press, Inc., Amsterdam, The Netherlands.
2. ^ [Bierman, 1976] Bierman, G., 1976. Factorization Methods for Discrete Sequential Estimation. Academic Press, New York, New York, USA.
3. ^ [RTCA-MOPS, 2006] RTCA-MOPS, 2006. Minimum Operational Performance Standards for Global Positioning System / Wide Area Augmentation System Airborne Equipment. RTCA Document 229-C.