If you consider a Gaussian vector $V=(X,Y)\in\mathbb{R}^d$, $d=m+n$, you know how to find the conditional distribution of $X$ knowing the value of $Y=y$, right? This is exactly the same thing here.
For example, let us suppose that $a=0$, $b=N+1$:
- you have a noisy observation $Y=(y_1,y_2)=(O_a,O_b)$ with known covariance matrix $\Sigma_Y$,
- the data you are looking for, $X=(z_1,\ldots,z_N)\in\mathbb{R}^N$, has a known covariance matrix $\Sigma_X$,
- the cross-covariance matrix $E[XY^t]=\Sigma_{X,Y}$ is also known.
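As a concrete setup, the three covariance blocks can be carved out of one joint covariance matrix, which makes them automatically consistent with each other. A minimal sketch with NumPy; the size $N=3$ and the random values are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3                                  # hypothetical number of hidden points z_1..z_N
G = rng.standard_normal((N + 2, N + 2))
Sigma = G @ G.T                        # joint covariance of (X, Y), PSD by construction

Sigma_X  = Sigma[:N, :N]               # covariance of X = (z_1, ..., z_N)
Sigma_Y  = Sigma[N:, N:]               # covariance of Y = (O_a, O_b)
Sigma_XY = Sigma[:N, N:]               # cross-covariance E[X Y^t]
```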
A quick way to find the conditional distribution of $X$ knowing $Y$ is to write
$$X=AU+BV,$$
$$Y=CU,$$
where $U,V$ are independent standard Gaussian random vectors of size $2$ and $N$ respectively, while $A\in M_{N,2}(\mathbb{R})$, $B\in M_{N,N}(\mathbb{R})$ and $C\in M_{2,2}(\mathbb{R})$. Because
- $CC^t=\Sigma_Y$ gives you $C=\Sigma_Y^{1/2}$,
- $AC^t=\Sigma_{X,Y}$ then gives you $A=\Sigma_{X,Y}\Sigma_Y^{-1/2}$,
- $AA^t+BB^t=\Sigma_X$ then gives you $B=\big(\Sigma_X-\Sigma_{X,Y}\Sigma_Y^{-1}\Sigma_{X,Y}^t\big)^{1/2}$,
these three matrices are easily computable, and $C$ is invertible in the case you are considering. This shows that, if you know that $Y=y$, the conditional law of $X\,|\,Y=y$ is given by
$$X=AC^{-1}y+BV,$$
which is a Gaussian vector with mean $AC^{-1}y=\Sigma_{X,Y}\Sigma_Y^{-1}y$ and covariance $BB^t=\Sigma_X-\Sigma_{X,Y}\Sigma_Y^{-1}\Sigma_{X,Y}^t$.
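The whole construction is easy to check numerically. A sketch with NumPy, assuming hypothetical covariances built from a random joint matrix (the helper `psd_sqrt` and all sizes are illustrative, not part of the answer): it computes $A$, $B$, $C$ as above, verifies the three defining identities, and confirms that the conditional mean $AC^{-1}y$ equals $\Sigma_{X,Y}\Sigma_Y^{-1}y$.

```python
import numpy as np

def psd_sqrt(M):
    """Symmetric square root of a symmetric PSD matrix via eigendecomposition."""
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Q.T

rng = np.random.default_rng(0)
N = 3                                             # hypothetical size
G = rng.standard_normal((N + 2, N + 2))
Sigma = G @ G.T                                   # joint covariance of (X, Y)
Sigma_X, Sigma_Y = Sigma[:N, :N], Sigma[N:, N:]
Sigma_XY = Sigma[:N, N:]                          # E[X Y^t]

C = psd_sqrt(Sigma_Y)                             # C C^t = Sigma_Y
A = Sigma_XY @ np.linalg.inv(C)                   # A C^t = Sigma_XY (C is symmetric)
B = psd_sqrt(Sigma_X - Sigma_XY @ np.linalg.solve(Sigma_Y, Sigma_XY.T))

# The three defining identities hold
assert np.allclose(C @ C.T, Sigma_Y)
assert np.allclose(A @ C.T, Sigma_XY)
assert np.allclose(A @ A.T + B @ B.T, Sigma_X)

# Conditional law of X given Y = y: mean A C^{-1} y, covariance B B^t
y = rng.standard_normal(2)
cond_mean = A @ np.linalg.solve(C, y)
assert np.allclose(cond_mean, Sigma_XY @ np.linalg.solve(Sigma_Y, y))
```

Sampling from the conditional law is then just `cond_mean + B @ rng.standard_normal(N)`, i.e. one fresh standard Gaussian $V$ per draw.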