Block-Wise Weighted Least Square

Category: Fundamentals
Title: Block-Wise Weighted Least Square
Author(s): J. Sanz Subirana, J.M. Juan Zornoza and M. Hernandez-Pajares, University of Catalonia, Spain.
Level: Medium
Year of Publication: 2011
Logo: gAGE.png


Let us consider two linear equation systems, of dimensions [math]\displaystyle{ [m_1\times n] }[/math] and [math]\displaystyle{ [m_2\times n] }[/math], sharing the same vector of unknown parameters [math]\displaystyle{ {\mathbf X} }[/math]:

[math]\displaystyle{ \begin{array}{l} {\mathbf Y_1}={\mathbf G_1}\,{\mathbf X};{\mathbf R_1}\\[0.3cm] {\mathbf Y_2}={\mathbf G_2}\,{\mathbf X};{\mathbf R_2}\\ \end{array} \qquad \mbox{(1)} }[/math]


where [math]\displaystyle{ {\mathbf R_1} }[/math] and [math]\displaystyle{ {\mathbf R_2} }[/math] are the covariance matrices of the measurement vectors [math]\displaystyle{ {\mathbf Y_1} }[/math] and [math]\displaystyle{ {\mathbf Y_2} }[/math].


Hence, the two systems can be combined into a common [math]\displaystyle{ [(m_1+m_2)\times n] }[/math] system:

[math]\displaystyle{ \left[ \begin{array}{c} {\mathbf Y_1} \\ {\mathbf Y_2} \end{array} \right] = \left[ \begin{array}{c} {\mathbf G_1}\\[0.2cm] {\mathbf G_2} \end{array} \right] {\mathbf X} ;\qquad {\mathbf R}=\left[ \begin{array}{cc} {\mathbf R_1} & {\mathbf 0} \\[0.2cm] {\mathbf 0} & {\mathbf R_2} \end{array} \right] \qquad \mbox{(2)} }[/math]


where no correlation between the two measurement vectors [math]\displaystyle{ {\mathbf Y_1} }[/math] and [math]\displaystyle{ {\mathbf Y_2} }[/math] is assumed in matrix [math]\displaystyle{ {\mathbf R} }[/math].
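
As an illustration, the following Python sketch builds the augmented matrices of (2) from two blocks; all dimensions and matrices below are invented stand-ins for this example, not values from the article:

<syntaxhighlight lang="python">
import numpy as np

# Minimal sketch of combining the two systems of (1) into the augmented
# system (2), using invented stand-in data.
rng = np.random.default_rng(42)
n, m1, m2 = 3, 5, 4                       # unknowns, measurements per block

G1 = rng.normal(size=(m1, n))             # geometry (design) matrices
G2 = rng.normal(size=(m2, n))
X_true = rng.normal(size=n)
Y1 = G1 @ X_true                          # noise-free measurements, for clarity
Y2 = G2 @ X_true
R1 = np.diag(rng.uniform(0.5, 2.0, m1))   # measurement covariance matrices
R2 = np.diag(rng.uniform(0.5, 2.0, m2))

# Augmented matrices of (2): stack Y and G; R is block-diagonal because
# the two measurement vectors are assumed uncorrelated.
Y = np.concatenate([Y1, Y2])
G = np.vstack([G1, G2])
R = np.block([[R1, np.zeros((m1, m2))],
              [np.zeros((m2, m1)), R2]])
print(G.shape, R.shape)                   # (9, 3) and (9, 9)
</syntaxhighlight>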


From equations (3) and (4) (see Best Linear Unbiased Minimum-Variance Estimator (BLUE))

[math]\displaystyle{ \hat{\mathbf X}=({\mathbf G}^T\,{\mathbf R}^{-1}\,{\mathbf G})^{-1}{\mathbf G}^T\,{\mathbf R^{-1}}\,{\mathbf Y} \qquad \mbox{(3)} }[/math]
[math]\displaystyle{ {\mathbf P}=({\mathbf G}^T\,{\mathbf R}^{-1}\,{\mathbf G})^{-1} \qquad \mbox{(4)} }[/math]


it is easy to show that, taking the corresponding augmented matrices [math]\displaystyle{ {\mathbf Y} }[/math] and [math]\displaystyle{ {\mathbf G} }[/math], the WLS solution of the previous system (2) yields:

[math]\displaystyle{ \hat{\mathbf X}=\left [{\mathbf G_1}^T\,{\mathbf R_1}^{-1}\,{\mathbf G_1} + {\mathbf G_2}^T\,{\mathbf R_2}^{-1}\,{\mathbf G_2} \right ]^{-1} \left [{\mathbf G_1}^T\,{\mathbf R_1^{-1}}\,{\mathbf Y_1} + {\mathbf G_2}^T\,{\mathbf R_2^{-1}}\,{\mathbf Y_2} \right ] \qquad \mbox{(5)} }[/math]
[math]\displaystyle{ {\mathbf P}=\left [{\mathbf G_1}^T\,{\mathbf R_1}^{-1}\,{\mathbf G_1} + {\mathbf G_2}^T\,{\mathbf R_2}^{-1}\,{\mathbf G_2} \right ]^{-1} \qquad \mbox{(6)} }[/math]
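
As a numerical cross-check, the following Python sketch (again with invented stand-in matrices) verifies that the block-wise form (5)-(6) reproduces the augmented BLUE solution (3)-(4):

<syntaxhighlight lang="python">
import numpy as np

# Sketch checking that the block-wise solution (5)-(6) matches the
# augmented solution (3)-(4). Stand-in data only.
rng = np.random.default_rng(1)
n, m1, m2 = 3, 5, 4
G1, G2 = rng.normal(size=(m1, n)), rng.normal(size=(m2, n))
Y1, Y2 = rng.normal(size=m1), rng.normal(size=m2)
R1 = np.diag(rng.uniform(0.5, 2.0, m1))
R2 = np.diag(rng.uniform(0.5, 2.0, m2))
Ri1, Ri2 = np.linalg.inv(R1), np.linalg.inv(R2)

# Block-wise form, equations (5) and (6)
P = np.linalg.inv(G1.T @ Ri1 @ G1 + G2.T @ Ri2 @ G2)
X_hat = P @ (G1.T @ Ri1 @ Y1 + G2.T @ Ri2 @ Y2)

# Augmented form, equations (3) and (4)
G = np.vstack([G1, G2])
Y = np.concatenate([Y1, Y2])
R = np.block([[R1, np.zeros((m1, m2))], [np.zeros((m2, m1)), R2]])
Ri = np.linalg.inv(R)
P_aug = np.linalg.inv(G.T @ Ri @ G)
X_aug = P_aug @ (G.T @ Ri @ Y)

assert np.allclose(X_hat, X_aug) and np.allclose(P, P_aug)
</syntaxhighlight>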


Comments:

  • Recursive computation: From the previous approach, the following recursive computation of the estimate [math]\displaystyle{ \hat{\mathbf X} }[/math] can be written (a numerical sketch in Python is given after this list):
[math]\displaystyle{ \begin{array}{rl} {\mathbf P_1}=&\left [ {\mathbf G_1}^T\,{\mathbf R_1}^{-1}\,{\mathbf G_1} \right ]^{-1}\\[0.2cm] \hat{\mathbf X}_{(1)}=&{\mathbf P_1} \cdot \left [{\mathbf G_1}^T\,{\mathbf R_1^{-1}}\,{\mathbf Y_1} \right ]\\[0.4cm] {\mathbf P_2}=&\left [{\mathbf P_1}^{-1}+ {\mathbf G_2}^T\,{\mathbf R_2}^{-1}\,{\mathbf G_2} \right ]^{-1}\\[0.2cm] \hat{\mathbf X}_{(2)}=& {\mathbf P_2} \cdot \left [{\mathbf P_1^{-1}}\,\hat{\mathbf X}_{(1)} + {\mathbf G_2}^T\,{\mathbf R_2^{-1}}\,{\mathbf Y_2} \right ]\\ \end{array} \qquad \mbox{(7)} }[/math]


Note: If only the final estimate is desired, it is best not to process data sequentially using (7), but instead to apply (see Best Linear Unbiased Minimum-Variance Estimator (BLUE))
[math]\displaystyle{ \hat{\mathbf X}=({\mathbf G}^T\,{\mathbf R}^{-1}\,{\mathbf G})^{-1}{\mathbf G}^T\,{\mathbf R^{-1}}\,{\mathbf Y} \qquad \mbox{(8)} }[/math]


and (6), which accumulate the equations without solving them until the end [Bierman, 1976][1]. This can be especially useful in the presence of numerical instabilities, since it avoids propagating numerical inaccuracies along the recursive steps.


  • Constraints: A priori information can be added to the linear system (1) as constraint equations [math]\displaystyle{ {\mathbf \Lambda}={\mathbf A} {\mathbf X} }[/math] with a given weight [math]\displaystyle{ {\mathbf W}={\mathbf R_\Lambda}^{-1} }[/math] (a sketch also follows this list). Indeed:
[math]\displaystyle{ \begin{array}{l} {\mathbf Y}={\mathbf G}\,\,{\mathbf X};{\mathbf R}\\[0.1cm] {\mathbf \Lambda}={\mathbf A}\,\,{\mathbf X};{\mathbf R_\Lambda} \end{array} \qquad \mbox{(9)} }[/math]
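
Picking up the Recursive computation comment, the following Python sketch (invented stand-in matrices again) carries out the two steps of (7) and verifies that the second-step estimate coincides with the batch solution (5):

<syntaxhighlight lang="python">
import numpy as np

# Sketch of the recursive update (7): solve with block 1, then fold in
# block 2 through the previous P and estimate. Stand-in data only.
rng = np.random.default_rng(7)
n, m1, m2 = 3, 5, 4
G1, G2 = rng.normal(size=(m1, n)), rng.normal(size=(m2, n))
Y1, Y2 = rng.normal(size=m1), rng.normal(size=m2)
R1i = np.linalg.inv(np.diag(rng.uniform(0.5, 2.0, m1)))
R2i = np.linalg.inv(np.diag(rng.uniform(0.5, 2.0, m2)))

# Step 1: solution from the first block alone
P1 = np.linalg.inv(G1.T @ R1i @ G1)
X1 = P1 @ (G1.T @ R1i @ Y1)

# Step 2: update with the second block; the term P1^{-1} X1 re-injects
# the first block's contribution to the normal equations
P2 = np.linalg.inv(np.linalg.inv(P1) + G2.T @ R2i @ G2)
X2 = P2 @ (np.linalg.inv(P1) @ X1 + G2.T @ R2i @ Y2)

# X2 equals the batch solution (5) computed over both blocks at once
P_batch = np.linalg.inv(G1.T @ R1i @ G1 + G2.T @ R2i @ G2)
X_batch = P_batch @ (G1.T @ R1i @ Y1 + G2.T @ R2i @ Y2)
assert np.allclose(X2, X_batch)
</syntaxhighlight>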
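
For the Constraints comment, the following Python sketch treats the a priori information of (9) as an additional measurement block in (5)-(6); the single sum-to-one constraint and its weight are hypothetical choices made only for this illustration:

<syntaxhighlight lang="python">
import numpy as np

# Sketch of adding constraint equations (9) as an extra measurement
# block: Lambda = A X with weight W = R_Lambda^{-1}. Stand-in data only.
rng = np.random.default_rng(9)
n, m = 3, 6
G = rng.normal(size=(m, n))
Y = rng.normal(size=m)
Ri = np.linalg.inv(np.diag(rng.uniform(0.5, 2.0, m)))

# One hypothetical a priori constraint: pull the sum of the unknowns
# towards 1.0, with a small covariance (i.e. a strong weight).
A = np.ones((1, n))
Lam = np.array([1.0])
W = np.linalg.inv(np.diag([1e-6]))     # W = R_Lambda^{-1}

# The constraint enters (5)-(6) exactly like a second measurement block.
P = np.linalg.inv(G.T @ Ri @ G + A.T @ W @ A)
X_hat = P @ (G.T @ Ri @ Y + A.T @ W @ Lam)
print(X_hat, X_hat.sum())              # the sum is pulled towards 1.0
</syntaxhighlight>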


References

  1. [Bierman, 1976] Bierman, G., 1976. Factorization Methods for Discrete Sequential Estimation. Academic Press, New York, USA.