How can one explain $(X^T X)^{-1}$ intuitively?

If $X$ is the design matrix of a linear regression, how can the term $(X^T X)^{-1}$ in $\operatorname{Var}(\hat\beta) = \sigma^2 (X^T X)^{-1}$ be explained intuitively?
Answers:
Consider a simple regression without a constant term, where the single regressor is centered on its sample mean. Then $X'X$ is ($n$ times) its sample variance, and $(X'X)^{-1}$ its reciprocal. So the higher the variance, i.e. the variability, of the regressor, the lower the variance of the coefficient estimator: the more variability we have in the explanatory variable, the more accurately we can estimate the unknown coefficient.
Why? Because the more a regressor varies, the more information it contains. With several regressors this generalizes to the inverse of their variance-covariance matrix, which also takes into account the co-variability of the regressors. In the extreme case where $X'X$ is diagonal, the precision of each estimated coefficient depends only on the variability of the associated regressor (given the variance of the error term).
A simple way to see $\sigma^2 (X^T X)^{-1}$ is as the matrix (multivariate) analogue of $\dfrac{\sigma^2}{\sum_{i=1}^n (x_i - \bar{x})^2}$, which is the variance of the slope coefficient in simple OLS regression.
From either of these formulas it may be seen that greater variability in the predictor variable will in general lead to more precise estimation of its coefficient. This is the idea often exploited in the design of experiments, where by choosing values for the (non-random) predictors one tries to make the determinant of $X^T X$ as large as possible, the determinant being a measure of variability.
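As a small illustration of that design-of-experiments idea, here is a minimal NumPy sketch (the two candidate designs and the noise level $\sigma^2$ are invented values, not taken from the answer above): the design whose predictor values are more spread out has the larger $\det(X^T X)$ and the smaller slope variance in $\sigma^2 (X^T X)^{-1}$.

```python
import numpy as np

sigma2 = 1.0  # assumed noise variance (hypothetical value)

# Two candidate designs with an intercept column and one predictor.
# Design "spread": predictor values spread out; "bunched": tightly grouped.
x_spread = np.array([-3.0, -1.0, 1.0, 3.0])
x_bunched = np.array([-0.3, -0.1, 0.1, 0.3])

for name, x in [("spread", x_spread), ("bunched", x_bunched)]:
    X = np.column_stack([np.ones_like(x), x])   # design matrix
    XtX = X.T @ X
    var_beta = sigma2 * np.linalg.inv(XtX)      # sigma^2 (X'X)^{-1}
    print(name,
          "det(X'X) =", np.linalg.det(XtX),
          "Var(slope) =", var_beta[1, 1])

# The spread-out design gives the larger determinant and the smaller slope variance.
```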
Is the linear transformation of a Gaussian random variable helpful here? Use the rule that if $x \sim \mathcal{N}(\mu, \Sigma)$, then $Ax + b \sim \mathcal{N}(A\mu + b, \, A \Sigma A^T)$.
Assume that $Y = X\beta + \epsilon$ is the underlying model and $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$.
$$\therefore \quad Y \sim \mathcal{N}(X\beta, \, \sigma^2 I)$$
$$X^T Y \sim \mathcal{N}(X^T X \beta, \, \sigma^2 X^T X)$$
$$(X^T X)^{-1} X^T Y \sim \mathcal{N}\!\left[\beta, \, \sigma^2 (X^T X)^{-1}\right]$$
So $(X^T X)^{-1} X^T$ is simply the matrix that transforms $Y$ into the estimator $\hat\beta$, and applying the linear-transformation rule above to it yields the covariance $\sigma^2 (X^T X)^{-1}$.
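A quick simulation check of that argument (a sketch only; the particular $X$, $\beta$, and $\sigma$ below are made up for illustration): repeatedly drawing $Y = X\beta + \epsilon$ and applying the matrix $(X^T X)^{-1} X^T$ gives estimates whose empirical covariance is close to $\sigma^2 (X^T X)^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n = 50 observations, 2 coefficients.
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # fixed design matrix
beta = np.array([2.0, -1.0])                             # assumed true coefficients
sigma = 0.5                                              # assumed noise standard deviation

B = np.empty((10_000, 2))
for r in range(B.shape[0]):
    y = X @ beta + sigma * rng.normal(size=n)            # Y = X beta + eps
    B[r] = np.linalg.solve(X.T @ X, X.T @ y)             # (X'X)^{-1} X'Y

emp_cov = np.cov(B, rowvar=False)                        # empirical Var(beta_hat)
theory = sigma**2 * np.linalg.inv(X.T @ X)               # sigma^2 (X'X)^{-1}
print(np.round(emp_cov, 4))
print(np.round(theory, 4))
# The two matrices agree up to Monte Carlo error.
```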
Hope that was helpful.
I'll take a different approach towards developing the intuition that underlies the formula $\operatorname{Var}(\hat\beta) = \sigma^2 (X'X)^{-1}$.
To help develop the intuition, we will assume that the simplest Gauss-Markov assumptions are satisfied: $x_i$ nonstochastic, $\sum_{i=1}^n (x_i - \bar{x})^2 > 0$ for all $n$, and $\epsilon_i \sim \text{iid}(0, \sigma^2)$. In the simple regression model these conditions give $\operatorname{Var}(\hat\beta) = \dfrac{\sigma^2}{\sum_{i=1}^n (x_i - \bar{x})^2}$, so the precision of the slope estimator is governed by the error variance, the sample size, and the spread of the regressor.
Why should doubling the sample size, ceteris paribus, cause the variance of $\hat\beta$ to be cut in half? With twice as many observations drawn from the same $x$ values, the sum $\sum_i (x_i - \bar{x})^2$ in the denominator roughly doubles: each extra observation carries additional information that tightens the estimate.
Let's turn, then, to your main question, which is about developing intuition for the claim that the variance of $\hat\beta$ is smaller the more variable the regressor is. Imagine two samples of the same size, one in which the regressor $x^{(1)}$ is widely spread out and one in which the regressor $x^{(2)}$ is tightly clustered.
Because by assumption $\operatorname{Var}(x^{(1)}) > \operatorname{Var}(x^{(2)})$, the denominator $\sum_i (x_i - \bar{x})^2$ is larger for the first sample, so $\operatorname{Var}(\hat\beta^{(1)}) < \operatorname{Var}(\hat\beta^{(2)})$: the sample with the more variable regressor yields the more precise slope estimate.
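To make those two comparative statics concrete, here is a minimal sketch (the numbers are invented, not from the answer): evaluating $\operatorname{Var}(\hat\beta) = \sigma^2 / \sum_i (x_i - \bar{x})^2$ directly shows that a more spread-out regressor lowers the variance and that duplicating the sample halves it.

```python
import numpy as np

sigma2 = 1.0  # assumed error variance (hypothetical)

def slope_var(x, sigma2=sigma2):
    """Var(beta_hat) = sigma^2 / sum_i (x_i - xbar)^2 in simple regression."""
    x = np.asarray(x, dtype=float)
    return sigma2 / np.sum((x - x.mean()) ** 2)

x1 = np.array([0.0, 2.0, 4.0, 6.0, 8.0])   # widely spread regressor
x2 = np.array([3.0, 3.5, 4.0, 4.5, 5.0])   # tightly clustered regressor

print(slope_var(x1), slope_var(x2))              # larger spread => smaller variance
print(slope_var(x1), slope_var(np.tile(x1, 2)))  # doubling the sample halves the variance
```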
It is reasonably straightforward to generalize the intuition obtained from studying the simple regression model to the general multiple linear regression model. The main complication is that instead of comparing scalar variances, it is necessary to compare the "size" of variance-covariance matrices. Having a good working knowledge of determinants, traces and eigenvalues of real symmetric matrices comes in very handy at this point :-)
Say we have $n$ observations (the sample size) and $p$ parameters.
The covariance matrix $\operatorname{Var}(\hat\beta)$ of the estimated parameters $\hat\beta_1, \hat\beta_2$, etc. is a representation of the accuracy of the estimated parameters.
If in an ideal world the data could be perfectly described by the model, then the noise would be $\sigma^2 = 0$. Now, the diagonal entries of $\operatorname{Var}(\hat\beta)$ correspond to $\operatorname{Var}(\hat\beta_1), \operatorname{Var}(\hat\beta_2)$, etc. The derived formula $\operatorname{Var}(\hat\beta) = \sigma^2 (X^T X)^{-1}$ agrees with the intuition that the lower the noise, the more accurate the estimates.
In addition, as the number of measurements $n$ gets larger, the variance of the estimated parameters will decrease. This is because each entry of $X^T X$ is a sum of $n$ product pairs ($X^T$ has $n$ columns and $X$ has $n$ rows), so the absolute values of the entries of $X^T X$ grow with $n$, and the absolute values of the entries of the inverse $(X^T X)^{-1}$ become correspondingly smaller.
Hence, even if there is a lot of noise, we can still reach good estimates $\hat\beta_i$ of the parameters if we increase the sample size $n$.
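A brief numerical check of that sample-size argument (a sketch with made-up numbers; the uniform design and the noise level are purely illustrative): even with a deliberately large noise variance, growing $n$ makes each entry of $X^T X$ a sum over more terms, so the diagonal of $\sigma^2 (X^T X)^{-1}$ shrinks roughly like $1/n$.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 25.0  # deliberately large noise variance (hypothetical)

for n in (10, 100, 1000):
    X = np.column_stack([np.ones(n), rng.uniform(-1, 1, size=n)])  # n x 2 design
    XtX = X.T @ X                                                  # entries grow with n
    var_beta = sigma2 * np.linalg.inv(XtX)                         # entries shrink with n
    print(n, np.round(np.diag(var_beta), 4))
```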
I hope this helps.
Reference: Section 7.3 on least squares in Cosentino, Carlo, and Declan Bates. *Feedback Control in Systems Biology*. CRC Press, 2011.
This builds on @Alecos Papadopoulos' answer.
Recall that the result of a least-squares regression doesn't depend on the units of measurement of your variables. Suppose your X-variable is a length measurement, given in inches. Then rescaling X, say by multiplying by 2.54 to change the unit to centimeters, doesn't materially affect things. If you refit the model, the new regression estimate will be the old estimate divided by 2.54.
The $X'X$ matrix is essentially ($n$ times) the sample variance of $X$, and hence reflects the scale of measurement of $X$. If you change the scale, this has to be reflected in your estimate of $\beta$, and that is exactly what multiplying by the inverse of $X'X$ accomplishes.
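A small sketch of the rescaling argument (the data are simulated and purely illustrative, not from the answer): multiplying the length variable by 2.54 divides its coefficient estimate by 2.54, and the inverse of $X'X$ is where that factor is absorbed, since $X'X$ picks up a factor of $2.54^2$.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 100
x_inches = rng.uniform(10, 40, size=n)                  # length measured in inches
y = 1.5 * x_inches + rng.normal(scale=2.0, size=n)      # hypothetical response

def ols(x, y):
    """Intercept-and-slope OLS fit: returns (X'X)^{-1} X'y."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.solve(X.T @ X, X.T @ y)

b_inches = ols(x_inches, y)
b_cm = ols(2.54 * x_inches, y)                          # same data, X in centimeters

print(b_inches[1], b_cm[1] * 2.54)   # identical: the slope simply rescales by 1/2.54
```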