The answer from @joceratops focuses on the optimization problem of maximum likelihood estimation. This is indeed a flexible approach that lends itself to many types of problems. For estimating most models, including linear and logistic regression models, there is another general approach based on the method of moments estimation.
L'estimateur de régression linéaire peut également être formulé comme la racine de l'équation d'estimation:
$$0 = X^T(Y - X\beta)$$
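As a minimal sketch (on hypothetical simulated data), the root of this estimating equation can be computed directly from the normal equations and compared against ordinary least squares:

```python
import numpy as np

# Hypothetical simulated data, purely to illustrate the estimating equation.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
beta_true = np.array([1.0, 2.0])
Y = X @ beta_true + rng.normal(size=n)

# Solving 0 = X^T (Y - X beta) means solving X^T X beta = X^T Y.
beta_root = np.linalg.solve(X.T @ X, X.T @ Y)

# The same beta is returned by ordinary least squares.
beta_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.allclose(beta_root, beta_ols))  # True
```

Nothing here requires a probability model for $Y$: the root is defined purely by the requirement that the residuals be orthogonal to the columns of $X$.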
In this regard, $\beta$ is seen as the value which retrieves an average residual of 0. It needn't rely on any underlying probability model to have this interpretation. It is, however, interesting to go about deriving the score equations for a normal likelihood: you will see that they take exactly the form displayed above. Maximizing the likelihood of a regular exponential family for a linear model (e.g. linear or logistic regression) is equivalent to obtaining solutions to their score equations.
$$0 = \sum_{i=1}^n S_i(\alpha, \beta) = \frac{\partial}{\partial \beta} \log \mathcal{L}(\beta, \alpha, X, Y) = X^T(Y - g(X\beta))$$
Where $Y_i$ has expected value $g(X_i \beta)$. In GLM estimation, $g$ is said to be the inverse of a link function. In the normal likelihood equations, $g^{-1}$ is the identity function, and in logistic regression $g^{-1}$ is the logit function. A more general approach would be to require $0 = \sum_{i=1}^n (Y_i - g(X_i\beta))$, which allows for model misspecification.
Additionally, it is interesting to note that for regular exponential families, $\frac{\partial g(X\beta)}{\partial \beta} = V(g(X\beta))$, which is called a mean-variance relationship. Indeed, for logistic regression, the mean-variance relationship is such that the mean $p = g(X\beta)$ is related to the variance by $\mathrm{var}(Y_i) = p_i(1 - p_i)$. This suggests an interpretation of a model-misspecified GLM as being one which gives a 0 average Pearson residual. This further suggests a generalization to allow non-proportional functional mean derivatives and mean-variance relationships.
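The logistic-regression case of this mean-variance relationship can be checked numerically: the derivative of the inverse logit at a linear predictor $\eta$ equals $p(1-p)$ at $p = g(\eta)$. A small sketch:

```python
import numpy as np

# Inverse logit: the g of the text for logistic regression.
def g(eta):
    return 1.0 / (1.0 + np.exp(-eta))

eta = np.linspace(-3.0, 3.0, 7)
p = g(eta)
analytic = p * (1.0 - p)                       # mean-variance relationship: var(Y) = p(1-p)
h = 1e-6
numeric = (g(eta + h) - g(eta - h)) / (2 * h)  # central-difference derivative of g
print(np.allclose(analytic, numeric, atol=1e-8))  # True
```

So for the Bernoulli family, the slope of the mean function and the variance function coincide, which is exactly what the score equation above exploits.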
A generalized estimating equation approach would specify linear models in the following way:
$$0 = \frac{\partial g(X\beta)}{\partial \beta} V^{-1} (Y - g(X\beta))$$
With $V$ a matrix of variances based on the fitted value (mean) given by $g(X\beta)$. This approach to estimation allows one to pick a link function and mean-variance relationship, as with GLMs.
In logistic regression, $g$ would be the inverse logit, and $V_{ii}$ would be given by $g(X_i\beta)(1 - g(X_i\beta))$. The solutions to this estimating equation, obtained by Newton-Raphson, will yield the $\beta$ obtained from logistic regression. However, a somewhat broader class of models is estimable under a similar framework. For instance, the link function can be taken to be the log of the linear predictor, so that the regression coefficients are relative risks and not odds ratios--which, given the well-documented pitfalls of interpreting ORs as RRs, leads me to ask why anyone fits logistic regression models at all anymore.
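A minimal Newton-Raphson (Fisher scoring) sketch of this estimating-equation approach for the logit link, again on hypothetical simulated data; the weights $V_{ii} = p_i(1-p_i)$ come from the mean-variance relationship noted above:

```python
import numpy as np

# Hypothetical simulated Bernoulli data for illustration.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

# Newton-Raphson on the estimating equation 0 = X^T (Y - g(X beta)),
# where g is the inverse logit and V_ii = p_i (1 - p_i) enters the Jacobian.
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))     # fitted means g(X beta)
    score = X.T @ (Y - p)                   # estimating equation
    W = p * (1.0 - p)                       # V_ii from the mean-variance relationship
    info = X.T @ (W[:, None] * X)           # X^T V X
    step = np.linalg.solve(info, score)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:        # converged: score is (numerically) zero
        break

print(beta)  # the root: coincides with the logistic-regression MLE
```

At convergence the score $X^T(Y - g(X\beta))$ is numerically zero, and $\beta$ matches what a logistic-regression routine would return; swapping $g$ for $\exp$ (a log link) with the appropriate $V$ gives the relative-risk variant mentioned above.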