In simple linear regression, where does the formula for the variance of the residuals come from?


21

According to a text I am using, the formula for the variance of the $i$th residual is given by:

$$\sigma^2\left(1 - \frac{1}{n} - \frac{(x_i-\bar{x})^2}{S_{xx}}\right)$$

I find this hard to believe, since the $i$th residual is the difference between the $i$th observed value and the $i$th fitted value; if one were to compute the variance of that difference, at the very least I would expect some "pluses" in the resulting expression. Any help in understanding the derivation would be appreciated.


Is it possible that some "+" signs in the text are being misinterpreted (or misread) as "−" signs?
whuber

I had thought about that, but it occurs twice in the text (in 2 different chapters), so I figured it was unlikely. Of course, a derivation of the formula would help! :)
Eric

The negative signs result from the positive correlation between an observation and its fitted value, which reduces the variance of the difference.
Glen_b -Reinstate Monica

@Glen Thank you for explaining why the formula does in fact make sense, as well as for your matrix derivation below.
Eric

Answers:


27

The intuition about the "plus" signs related to the variance (from the fact that even when we compute the variance of a difference of independent random variables, we add their variances) is correct but fatally incomplete: if the random variables involved are not independent, then covariances also come into play, and covariances may be negative. There exists an expression that is almost what the OP (and I) thought the expression in the question "should" be, and it is the variance of the prediction error, denote it $e_0 = y_0 - \hat{y}_0$, where $y_0 = \beta_0 + \beta_1 x_0 + u_0$:

$$\text{Var}(e_0) = \sigma^2\left(1 + \frac{1}{n} + \frac{(x_0-\bar{x})^2}{S_{xx}}\right)$$

The critical difference between the variance of the prediction error and the variance of the estimation error (i.e. of the residual) is that the error term of the predicted observation is not correlated with the estimator, since the value $y_0$ was not used in constructing the estimator and computing the estimates, being an out-of-sample value.
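As a quick numerical sanity check (not part of the original answer), the following minimal sketch simulates repeated samples and compares the empirical variance of the prediction error with the formula above; the regressor values, coefficients, $\sigma$, and $x_0$ are arbitrary illustrative assumptions:

```python
# Monte Carlo check of Var(e_0) = sigma^2 * (1 + 1/n + (x_0 - xbar)^2 / S_xx)
# All numerical values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1, sigma = 1.0, 2.0, 0.5
x = np.array([0.3, 1.1, 1.9, 2.5, 3.2, 4.0])   # fixed in-sample regressor values
x0 = 5.0                                        # out-of-sample point to predict
n, xbar = len(x), x.mean()
Sxx = np.sum((x - xbar) ** 2)

errors = []
for _ in range(100_000):
    y = beta0 + beta1 * x + rng.normal(0.0, sigma, size=n)
    b1 = np.sum((x - xbar) * (y - y.mean())) / Sxx      # OLS slope
    b0 = y.mean() - b1 * xbar                           # OLS intercept
    y0 = beta0 + beta1 * x0 + rng.normal(0.0, sigma)    # new draw, independent of the sample
    errors.append(y0 - (b0 + b1 * x0))                  # prediction error e_0

theory = sigma**2 * (1 + 1/n + (x0 - xbar)**2 / Sxx)
print("simulated Var(e_0):", np.var(errors))
print("formula   Var(e_0):", theory)
```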

The algebra for the two proceeds in exactly the same way up to a point (using $0$ instead of $i$), but then diverges. Specifically:

In the simple linear regression $y_i = \beta_0 + \beta_1 x_i + u_i$, with $\text{Var}(u_i) = \sigma^2$, the variance of the estimator $\hat{\beta} = (\hat{\beta}_0, \hat{\beta}_1)'$ is still

$$\text{Var}(\hat{\beta}) = \sigma^2\left(X'X\right)^{-1}$$

We have

$$X'X = \begin{bmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix}$$

and so

$$\left(X'X\right)^{-1} = \begin{bmatrix} \sum x_i^2 & -\sum x_i \\ -\sum x_i & n \end{bmatrix}\cdot\left[n\sum x_i^2 - \left(\sum x_i\right)^2\right]^{-1}$$

We have

$$\left[n\sum x_i^2 - \left(\sum x_i\right)^2\right] = \left[n\sum x_i^2 - n^2\bar{x}^2\right] = n\left[\sum x_i^2 - n\bar{x}^2\right] = n\sum (x_i - \bar{x})^2 \equiv nS_{xx}$$

So

$$\left(X'X\right)^{-1} = \begin{bmatrix} (1/n)\sum x_i^2 & -\bar{x} \\ -\bar{x} & 1 \end{bmatrix}\cdot (1/S_{xx})$$

which means that

$$\text{Var}(\hat{\beta}_0) = \sigma^2\left(\frac{1}{n}\sum x_i^2\right)(1/S_{xx}) = \frac{\sigma^2}{n}\,\frac{S_{xx} + n\bar{x}^2}{S_{xx}} = \sigma^2\left(\frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}\right)$$

$$\text{Var}(\hat{\beta}_1) = \sigma^2(1/S_{xx})$$

$$\text{Cov}(\hat{\beta}_0, \hat{\beta}_1) = -\sigma^2(\bar{x}/S_{xx})$$
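A small numerical side check (again a sketch with assumed illustrative values) confirms that $\sigma^2(X'X)^{-1}$ reproduces these three closed-form entries:

```python
# Compare sigma^2 (X'X)^{-1} with the closed-form Var/Cov expressions above.
# The x values and sigma are illustrative assumptions.
import numpy as np

sigma = 0.5
x = np.array([0.3, 1.1, 1.9, 2.5, 3.2, 4.0])
n, xbar = len(x), x.mean()
Sxx = np.sum((x - xbar) ** 2)

X = np.column_stack([np.ones(n), x])        # design matrix with columns [1, x_i]
V = sigma**2 * np.linalg.inv(X.T @ X)       # sigma^2 (X'X)^{-1}

print(V[0, 0], sigma**2 * (1/n + xbar**2 / Sxx))   # Var(beta0_hat)
print(V[1, 1], sigma**2 / Sxx)                     # Var(beta1_hat)
print(V[0, 1], -sigma**2 * xbar / Sxx)             # Cov(beta0_hat, beta1_hat)
```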

The $i$-th residual is defined as

$$\hat{u}_i = y_i - \hat{y}_i = (\beta_0 - \hat{\beta}_0) + (\beta_1 - \hat{\beta}_1)x_i + u_i$$

The actual coefficients are treated as constants, and the regressor is fixed (or we condition on it), so it has zero covariance with the error term; the estimators, however, are correlated with the error term, because the estimators contain the dependent variable, and the dependent variable contains the error term. So we have

$$\text{Var}(\hat{u}_i) = \left[\text{Var}(u_i) + \text{Var}(\hat{\beta}_0) + x_i^2\text{Var}(\hat{\beta}_1) + 2x_i\text{Cov}(\hat{\beta}_0, \hat{\beta}_1)\right] + 2\text{Cov}\!\left(\left[(\beta_0 - \hat{\beta}_0) + (\beta_1 - \hat{\beta}_1)x_i\right], u_i\right)$$

$$= \left[\sigma^2 + \sigma^2\left(\frac{1}{n} + \frac{\bar{x}^2}{S_{xx}}\right) + x_i^2\sigma^2(1/S_{xx}) - 2x_i\sigma^2(\bar{x}/S_{xx})\right] + 2\text{Cov}\!\left(\left[(\beta_0 - \hat{\beta}_0) + (\beta_1 - \hat{\beta}_1)x_i\right], u_i\right)$$

Pack it up a bit to obtain

$$\text{Var}(\hat{u}_i) = \left[\sigma^2\left(1 + \frac{1}{n} + \frac{(x_i-\bar{x})^2}{S_{xx}}\right)\right] + 2\text{Cov}\!\left(\left[(\beta_0 - \hat{\beta}_0) + (\beta_1 - \hat{\beta}_1)x_i\right], u_i\right)$$

The term in the big parenthesis has exactly the same structure as the variance of the prediction error, the only change being that instead of $x_i$ we have $x_0$ (and the variance is that of $e_0$ and not of $\hat{u}_i$). The last covariance term is zero for the prediction error, because $y_0$ and hence $u_0$ are not included in the estimators, but it is not zero for the estimation error, because $y_i$ and hence $u_i$ are part of the sample and so are included in the estimator. We have

$$2\text{Cov}\!\left(\left[(\beta_0 - \hat{\beta}_0) + (\beta_1 - \hat{\beta}_1)x_i\right], u_i\right) = 2E\!\left(\left[(\beta_0 - \hat{\beta}_0) + (\beta_1 - \hat{\beta}_1)x_i\right]u_i\right)$$

$$= -2E(\hat{\beta}_0 u_i) - 2x_iE(\hat{\beta}_1 u_i) = -2E\!\left(\left[\bar{y} - \hat{\beta}_1\bar{x}\right]u_i\right) - 2x_iE(\hat{\beta}_1 u_i)$$

the last substitution coming from how $\hat{\beta}_0$ is calculated ($\hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x}$). Continuing,

$$\ldots = -2E(\bar{y}u_i) - 2(x_i - \bar{x})E(\hat{\beta}_1 u_i) = -\frac{2\sigma^2}{n} - 2(x_i - \bar{x})E\!\left[\frac{\sum_j (x_j - \bar{x})(y_j - \bar{y})}{S_{xx}}\,u_i\right]$$

$$= -\frac{2\sigma^2}{n} - \frac{2(x_i - \bar{x})}{S_{xx}}\left[\sum_j (x_j - \bar{x})E\!\left(y_j u_i - \bar{y}u_i\right)\right]$$

$$= -\frac{2\sigma^2}{n} - \frac{2(x_i - \bar{x})}{S_{xx}}\left[-\frac{\sigma^2}{n}\sum_{j\neq i}(x_j - \bar{x}) + (x_i - \bar{x})\,\sigma^2\left(1 - \frac{1}{n}\right)\right]$$

$$= -\frac{2\sigma^2}{n} - \frac{2(x_i - \bar{x})}{S_{xx}}\left[\frac{\sigma^2}{n}(x_i - \bar{x}) - \frac{\sigma^2}{n}(x_i - \bar{x}) + (x_i - \bar{x})\,\sigma^2\right]$$

$$= -\frac{2\sigma^2}{n} - \frac{2(x_i - \bar{x})}{S_{xx}}\left[0 + (x_i - \bar{x})\,\sigma^2\right] = -\frac{2\sigma^2}{n} - \frac{2\sigma^2(x_i - \bar{x})^2}{S_{xx}}$$

Inserting this into the expression for the variance of the residual, we obtain

$$\text{Var}(\hat{u}_i) = \sigma^2\left(1 - \frac{1}{n} - \frac{(x_i - \bar{x})^2}{S_{xx}}\right)$$
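A minimal Monte Carlo sketch (with assumed illustrative values, not part of the derivation) that tracks a single residual across repeated samples agrees with this formula:

```python
# Monte Carlo check of Var(u_hat_i) = sigma^2 * (1 - 1/n - (x_i - xbar)^2 / S_xx)
# for one fixed index i. All numerical values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
beta0, beta1, sigma = 1.0, 2.0, 0.5
x = np.array([0.3, 1.1, 1.9, 2.5, 3.2, 4.0])
n, xbar = len(x), x.mean()
Sxx = np.sum((x - xbar) ** 2)
i = 0                                          # index of the residual we track

resid_i = []
for _ in range(100_000):
    y = beta0 + beta1 * x + rng.normal(0.0, sigma, size=n)
    b1 = np.sum((x - xbar) * (y - y.mean())) / Sxx
    b0 = y.mean() - b1 * xbar
    resid_i.append(y[i] - (b0 + b1 * x[i]))    # i-th residual u_hat_i

theory = sigma**2 * (1 - 1/n - (x[i] - xbar)**2 / Sxx)
print("simulated Var(u_hat_i):", np.var(resid_i))
print("formula   Var(u_hat_i):", theory)
```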

So hats off to the text the OP is using.

(I have skipped some algebraic manipulations, no wonder OLS algebra is taught less and less these days...)

SOME INTUITION

So it appears that what works "against" us (larger variance) when predicting, works "for us" (lower variance) when estimating. This is a good starting point for one to ponder why an excellent fit may be a bad sign for the prediction abilities of the model (however counter-intuitive this may sound...).
The fact that we are estimating the expected value of the regressor decreases the variance by $1/n$. Why? Because by estimating, we "close our eyes" to some error-variability existing in the sample, since we are essentially estimating an expected value. Moreover, the larger the deviation of an observation of a regressor from the regressor's sample mean, the smaller the variance of the residual associated with this observation will be... the more deviant the observation, the less deviant its residual... It is the variability of the regressors that works for us, by "taking the place" of the unknown error-variability.

But that's good for estimation. For prediction, the same things turn against us: now, by not taking into account, however imperfectly, the variability in $y_0$ (since we want to predict it), our imperfect estimators obtained from the sample show their weaknesses: we estimated the sample mean, but we don't know the true expected value, so the variance increases. If we have an $x_0$ that is far away from the sample mean as calculated from the other observations, too bad: our prediction-error variance gets another boost, because the predicted $\hat{y}_0$ will tend to go astray... in more scientific language, "optimal predictors in the sense of reduced prediction-error variance represent a shrinkage towards the mean of the variable under prediction". We do not try to replicate the dependent variable's variability; we just try to stay "close to the average".


Thank you for a very clear answer! I'm glad that my "intuition" was correct.
Eric

Alecos, I really don't think this is right.
Glen_b -Reinstate Monica

@Alecos the mistake is in taking the parameter estimates to be uncorrelated with the error term. This part: $\text{Var}(\hat{u}_i) = \text{Var}(u_i) + \text{Var}(\hat{\beta}_0) + x_i^2\text{Var}(\hat{\beta}_1) + 2x_i\text{Cov}(\hat{\beta}_0, \hat{\beta}_1)$ isn't right.
Glen_b -Reinstate Monica

@Eric I apologize for misleading you earlier. I have tried to provide some intuition for both formulas.
Alecos Papadopoulos

+1 You can see why I did the multiple regression case for this... thanks for going to the extra effort of doing the simple-regression case.
Glen_b -Reinstate Monica

19

Sorry for the somewhat terse answer, perhaps overly-abstract and lacking a desirable amount of intuitive exposition, but I'll try to come back and add a few more details later. At least it's short.

Given $H = X(X^TX)^{-1}X^T$,

$$\text{Var}(y - \hat{y}) = \text{Var}\big((I - H)y\big) = (I - H)\,\text{Var}(y)\,(I - H)^T = \sigma^2(I - H)^2 = \sigma^2(I - H)$$

Hence

$$\text{Var}(y_i - \hat{y}_i) = \sigma^2(1 - h_{ii})$$

In the case of simple linear regression ... this gives the answer in your question.

This answer also makes sense: since $\hat{y}_i$ is positively correlated with $y_i$, the variance of the difference should be smaller than the sum of the variances.
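A brief numerical sketch (with assumed illustrative $x$ values) confirms both that $H$ is idempotent and that $\sigma^2(1 - h_{ii})$ matches the simple-regression formula from the question:

```python
# Check that diag(sigma^2 (I - H)) equals sigma^2 * (1 - 1/n - (x_i - xbar)^2 / S_xx)
# in simple linear regression. The x values and sigma are illustrative assumptions.
import numpy as np

sigma = 0.5
x = np.array([0.3, 1.1, 1.9, 2.5, 3.2, 4.0])
n, xbar = len(x), x.mean()
Sxx = np.sum((x - xbar) ** 2)

X = np.column_stack([np.ones(n), x])
H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat matrix

print(np.allclose(H @ H, H))                   # True: H is idempotent

via_hat     = sigma**2 * (1 - np.diag(H))
via_formula = sigma**2 * (1 - 1/n - (x - xbar)**2 / Sxx)
print(np.allclose(via_hat, via_formula))       # True
```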

--

Edit: Explanation of why $(I - H)$ is idempotent.

(i) H is idempotent:

$$H^2 = X(X^TX)^{-1}X^T X(X^TX)^{-1}X^T = X\left[(X^TX)^{-1}X^TX\right](X^TX)^{-1}X^T = X(X^TX)^{-1}X^T = H$$

(ii) $(I - H)^2 = I^2 - IH - HI + H^2 = I - 2H + H = I - H$


1
This is a very nice derivation for its simplicity, although one step that is not clear to me is why $(I - H)^2 = (I - H)$. Maybe when you expand on your answer a little, as you're planning to do anyway, you could say a little something about that?
Jake Westfall

@Jake Added a couple of lines at the end
Glen_b -Reinstate Monica
Licensed under cc by-sa 3.0 with attribution required.