What is a Bayesian model?


34

Can I call a model in which Bayes' theorem is used a "Bayesian model"? I am afraid such a definition might be too broad.

So what exactly is a Bayesian model?


9
A Bayesian model is a statistical model made of the pair prior × likelihood = posterior × marginal. Bayes' theorem is somewhat secondary to the concept of a prior.
Xi'an

Answers:


18

Essentially, the inference relies on using Bayes' theorem to obtain a posterior distribution for a quantity or quantities of interest in a model (such as parameter values), based on a prior distribution for the relevant unknown parameters and the likelihood from the model.

That is, from a distributional model of some form, f(Xi|θ), and a prior p(θ), one might seek to obtain the posterior p(θ|X).
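
The prior × likelihood → posterior relation above can be illustrated with a minimal numerical sketch (my illustration, not part of the original answer): a coin-flip model where the posterior over the unknown bias θ is computed on a grid.

```python
# Minimal sketch: posterior ∝ prior × likelihood, computed on a grid.
# Model: X ~ Binomial(flips, θ), flat prior on θ. Numbers are made up.
import numpy as np

theta = np.linspace(0.001, 0.999, 999)   # grid over the unknown bias θ
prior = np.ones_like(theta)              # flat prior p(θ)
heads, flips = 7, 10                     # observed data X

# likelihood f(X|θ) for a binomial observation (constant factor omitted)
likelihood = theta**heads * (1 - theta)**(flips - heads)

posterior = prior * likelihood
posterior /= posterior.sum() * (theta[1] - theta[0])  # normalise to a density

print(theta[np.argmax(posterior)])  # posterior mode, ~0.7 (the MLE, since the prior is flat)
```

With a non-flat prior (say, Beta(10, 10) encoding a belief that the coin is roughly fair), the posterior mode would be pulled away from 0.7 toward the prior's centre.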

A simple example of a Bayesian model is presented in this question and its comments - Bayesian linear regression, described in more detail on Wikipedia here. Searches turn up discussions of a number of Bayesian models here.

But there are other things one might try to do with a Bayesian analysis besides fitting a model - see, for example, Bayesian decision theory.


In linear regression, is θ here equal to the vector [β0,β1,...,βn]? If not, what is it?
BCLC

1
@BCLC It would usually include σ as well.
Glen_b -Reinstate Monica

1
@BCLC You seem to be conflating frequentist and Bayesian inference. Bayesian inference focuses on whatever quantities you're interested in. If you're interested in parameters (e.g. inference about particular coefficients), the idea would be to seek posterior distributions [parameters|data]. If you're interested in the mean function (μ_{Y|X}), then you would seek a posterior distribution for that (which is of course a function of the (multivariate) distribution of β). You might use OLS in your estimation, but the parameters of the posterior will be shifted by the prior ...
Glen_b -Reinstate Monica

1
... see the wikipedia page on Bayesian regression and some of the discussions here on CV
Glen_b -Reinstate Monica

1
That calculation sometimes comes up (whether you call it σ² or ϕ), for various reasons. My earlier comment is not in any way in conflict with that calculation. σ (or equivalently σ² or ϕ) is a parameter, and you have to deal with it along with the other parameters. However, it would be rare that you know σ; if you are doing Gibbs sampling, for example, the conditional would be relevant. If you just want inference on β, you'd integrate σ (or σ², etc.) out of θ|y rather than condition on σ.
Glen_b -Reinstate Monica
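
A hypothetical sketch of the "integrate σ out" point above (the draws here are stand-ins, not output of a real sampler): once you have draws from the joint posterior of (β, σ), e.g. from a Gibbs sampler, marginalising over σ amounts to simply ignoring the σ draws when summarising β.

```python
# Hypothetical sketch: given draws from a joint posterior p(β, σ | y),
# "integrating σ out" means summarising the β draws alone, so that the
# uncertainty in σ is averaged over rather than conditioned on.
import numpy as np

rng = np.random.default_rng(0)
# stand-in joint posterior draws (in practice these come from your sampler)
sigma_draws = rng.gamma(shape=2.0, scale=0.5, size=10_000)
beta_draws = rng.normal(loc=1.5, scale=sigma_draws / 10)  # β | σ, y

# marginal posterior summary for β: σ has been integrated out automatically
print(beta_draws.mean())  # close to 1.5
```

Conditioning on a fixed σ would instead mean keeping only the β draws generated at that σ value, which understates the uncertainty in β.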

11

A Bayesian model is just a model that draws its inferences from the posterior distribution, i.e. utilizes a prior distribution and a likelihood which are related by Bayes' theorem.


7

Can I call a model wherein Bayes' Theorem is used a "Bayesian model"?

No

I am afraid such a definition might be too broad.

You are right. Bayes' theorem is a legitimate relation between marginal event probabilities and conditional probabilities. It holds regardless of your interpretation of probability.

So what exactly is a Bayesian model?

If you're using prior and posterior concepts anywhere in your exposition or interpretation, then you're likely using a Bayesian model, but this is not an absolute rule, because these concepts are also used in non-Bayesian approaches.

In a broader sense, though, you must be subscribing to the Bayesian interpretation of probability as subjective belief. This little theorem of Bayes was extended and stretched by some people into an entire world view and even, shall I say, a philosophy. If you belong to this camp, then you are Bayesian. Bayes had no idea this would happen to his theorem. He'd be horrified, methinks.


4
This appears to be the first answer to introduce the important point made in its first line: the mere use of Bayes' Theorem does not make something a Bayesian model. I would like to encourage you to go further with this thought. You seem to back down where you say that "using prior and posterior concepts" makes a model Bayesian. Doesn't that simply amount to applying Bayes' Theorem again? If not, could you explain what you mean by "concepts" in this passage? After all, classical (non-Bayesian) statistics uses priors and posteriors to prove admissibility of many procedures.
whuber

@whuber, it was more like a simple rule of thumb. Whenever I see "prior" in the paper it ends up being or claiming to be from Bayesian point of view. I'll clarify my point though.
Aksakal

5

A statistical model can be seen as a procedure/story describing how some data came to be. A Bayesian model is a statistical model where you use probability to represent all uncertainty within the model, both the uncertainty regarding the output and the uncertainty regarding the input (aka parameters) to the model. The whole prior/posterior/Bayes' theorem thing follows from this, but in my opinion, using probability for everything is what makes it Bayesian (and indeed a better word would perhaps just be something like probabilistic model).

That means that most other statistical models can be "cast into" a Bayesian model by modifying them to use probability everywhere. This is especially true for models that rely on maximum likelihood, as maximum-likelihood model fitting is a strict subset of Bayesian model fitting.
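
A minimal sketch of the maximum-likelihood claim above (my example, using a simple binomial model): with a flat prior, the posterior mode (the MAP estimate) coincides with the maximum-likelihood estimate, since adding a constant log-prior does not change where the maximum sits.

```python
# Sketch: under a flat prior, MAP estimation reduces to maximum likelihood.
# Model: 3 heads in 10 flips of a coin with unknown bias θ (numbers made up).
import numpy as np

theta = np.linspace(0.01, 0.99, 99)       # grid over θ
heads, flips = 3, 10
log_lik = heads * np.log(theta) + (flips - heads) * np.log(1 - theta)

mle = theta[np.argmax(log_lik)]           # maximum-likelihood estimate

flat_log_prior = np.zeros_like(theta)     # flat prior: log p(θ) = constant
map_est = theta[np.argmax(log_lik + flat_log_prior)]  # MAP under flat prior

print(mle, map_est)  # both ≈ 0.3
```

A non-flat prior would shift the MAP estimate away from the MLE, which is exactly the sense in which maximum likelihood is the flat-prior special case.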


MLE is used and was developed outside of Bayesian modeling, so it's not very clear what you mean by it being a "strict subset of Bayesian model fitting".
Aksakal

From a Bayesian perspective the MLE is what you get when you assume flat priors, fit the model and use the most probable parameter configuration as a point estimate. Whether this is a special case of Bayesian "philosophy of statistics" I leave for others to discuss, but it is certainly a special case of Bayesian model fitting.
Rasmus Bååth

The problem with this statement is that it leaves an impression that you need to be subscribed to some sort of Bayesian thinking in order to use MLE.
Aksakal

1
I'm not sure what you mean. IMO you don't need to subscribe to Bayesian thinking when using Bayesian statistics any more than you need to subscribe to matrix thinking when doing linear algebra or Gaussian thinking when using a normal distribution. I'm also not saying that MLE has to be interpreted as a subset of Bayesian model fitting (even though that feels pretty natural to me).
Rasmus Bååth

3

Your question is more on the semantic side: when can I call a model "Bayesian"?

Drawing conclusions from this excellent paper:

Fienberg, S. E. (2006). When did Bayesian inference become "Bayesian"? Bayesian Analysis, 1(1):1-40.

there are two answers:

  • Narrowly, your model is Bayesian if it uses Bayes' rule (that's the "algorithm").
  • More broadly, if you infer (hidden) causes from a generative model of your system, then you are Bayesian (that's the "function").

Surprisingly, the "Bayesian models" terminology that is now used all over the field only settled down around the 1960s. There are many things to learn about machine learning just by looking at its history!


You seem to mention only one of the "two answers". Maybe write something about both?
Tim

thanks for the note, I edited my answer to separate the two parts of my sentence.
meduz
Licensed under cc by-sa 3.0 with attribution required.