Michael and Fraijo suggested that simply checking whether the parameter value of interest was contained in some credible region was the Bayesian equivalent of inverting confidence intervals. I was a bit skeptical about this at first, since it wasn't obvious to me that this procedure really resulted in a Bayesian test (in the usual sense).
As it turns out, it does, at least if you're willing to accept a certain type of loss function. Many thanks to Zen, who provided references to two papers that establish a connection between HPD regions and hypothesis testing:

- Pereira, C.A.B. & Stern, J.M. (1999). Evidence and credibility: full Bayesian significance test for precise hypotheses. Entropy, 1, 99-110.
- Madruga, M.R., Esteves, L.G. & Wechsler, S. (2001). On the Bayesianity of Pereira-Stern tests. Test, 10, 291-299.
I'll try to summarize them here, for future reference. In analogy with the example in the original question, I'll treat the special case where the hypotheses are
$$H_0: \theta\in\Theta_0=\{\theta_0\} \quad\text{and}\quad H_1: \theta\in\Theta_1=\Theta\setminus\Theta_0,$$
where Θ is the parameter space.

To test Θ0 against Θ1, let π(θ|x) denote the posterior density of θ and define
$$T(x)=\{\theta : \pi(\theta\mid x)>\pi(\theta_0\mid x)\}.$$
This means that T(x) is an HPD region, with credibility P(θ∈T(x)|x).
The Pereira-Stern test rejects Θ0 when P(θ∉T(x)|x) is "small" (<0.05, say). For a unimodal posterior, this means that θ0 is far out in the tails of the posterior, making this criterion somewhat similar to using p-values. In other words, Θ0 is rejected at the 5 % level if and only if it is not contained in the 95 % HPD region.
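To make this concrete, here is a minimal sketch (my own illustration, not code from the papers) for a normal posterior, where T(x) reduces to an interval symmetric about the posterior mean:

```python
from scipy.stats import norm

def pereira_stern_evidence(theta0, post_mean, post_sd):
    """Return P(theta not in T(x) | x) for a N(post_mean, post_sd^2) posterior.

    For a symmetric unimodal posterior, T(x) = {theta : pi(theta|x) > pi(theta0|x)}
    is the interval (post_mean - d, post_mean + d) with d = |theta0 - post_mean|,
    so the posterior probability outside T(x) is a two-tailed normal probability.
    """
    d = abs(theta0 - post_mean)
    return 2 * norm.cdf(-d / post_sd)

# Example: posterior N(1.8, 0.9^2), testing theta0 = 0
evidence = pereira_stern_evidence(0.0, 1.8, 0.9)
print(evidence)           # ~0.0455
print(evidence < 0.05)    # True, so Theta_0 is rejected at the 5 % level
```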
Let the test function φ be 1 if Θ0 is accepted and 0 if Θ0 is rejected. Madruga et al. proposed the loss function
$$L(\theta,\varphi,x)=\begin{cases} a\,\bigl(1-I(\theta\in T(x))\bigr), & \text{if } \varphi(x)=0,\\ b+c\,I(\theta\in T(x)), & \text{if } \varphi(x)=1,\end{cases}$$
with a, b, c > 0.
Minimization of the expected loss leads to the Pereira-Stern test where Θ0 is rejected if P(θ∉T(x)|x)<(b+c)/(a+c).
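To see where this threshold comes from, compare the posterior expected losses of the two decisions:
$$E[L\mid\varphi=0]=a\,P(\theta\notin T(x)\mid x),\qquad E[L\mid\varphi=1]=b+c\,\bigl(1-P(\theta\notin T(x)\mid x)\bigr).$$
Rejecting has the smaller expected loss precisely when $(a+c)\,P(\theta\notin T(x)\mid x)<b+c$, which is the stated inequality.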
So far, all is well. The Pereira-Stern test is equivalent to checking whether θ0 is in an HPD region and there is a loss function that generates this test, meaning that it is founded in decision theory.
The controversial part though is that the loss function depends on x. While such loss functions have appeared in the literature a few times, they don't seem to be generally accepted as being very reasonable.
For further reading on this topic, see a list of papers that cite the Madruga et al. article.
Update October 2012:
I wasn't completely satisfied with the above loss function, as its dependence on x makes the decision-making more subjective than I would like. I spent some more time thinking about this problem and ended up writing a short note about it, posted on arXiv earlier today.
Let qα(θ|x) denote the posterior quantile function of θ, such that P(θ≤qα(θ|x))=α. Instead of HPD sets, we consider the central (equal-tailed) interval (qα/2(θ|x), q1−α/2(θ|x)). Testing Θ0 using this interval can be justified in the decision-theoretic framework without a loss function that depends on x.
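As an illustration (again a sketch of my own, assuming a normal posterior), the test simply checks θ0 against two posterior quantiles:

```python
from scipy.stats import norm

def central_interval_test(theta0, post_mean, post_sd, alpha=0.05):
    """Reject Theta_0 = {theta0} iff theta0 lies outside the central
    (equal-tailed) 1 - alpha credible interval of a N(post_mean, post_sd^2)
    posterior."""
    lower = norm.ppf(alpha / 2, loc=post_mean, scale=post_sd)
    upper = norm.ppf(1 - alpha / 2, loc=post_mean, scale=post_sd)
    return not (lower < theta0 < upper)

# Same posterior as before: N(1.8, 0.9^2), theta0 = 0
print(central_interval_test(0.0, 1.8, 0.9))  # True: 0 falls below the 2.5 % quantile
```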
The trick is to reformulate the problem of testing the point-null hypothesis Θ0={θ0} as a three-decision problem with directional conclusions. Θ0 is then tested against both Θ−1={θ:θ<θ0} and Θ1={θ:θ>θ0}.
Let the test function φ=i if we accept Θi (note that this notation is the opposite of that used above!). It turns out that under the weighted 0−1 loss function
$$L_2(\theta,\varphi)=\begin{cases} 0, & \text{if } \theta\in\Theta_i \text{ and } \varphi=i,\ i\in\{-1,0,1\},\\ \alpha/2, & \text{if } \theta\notin\Theta_0 \text{ and } \varphi=0,\\ 1, & \text{if } \theta\in\Theta_i\cup\Theta_0 \text{ and } \varphi=-i,\ i\in\{-1,1\},\end{cases}$$
the Bayes test is to reject Θ0 if θ0 is not in the central interval.
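A quick sketch of why this is the Bayes rule (assuming the posterior puts zero probability on the point θ0): the posterior expected losses of the three decisions are
$$E[L_2\mid\varphi=0]=\frac{\alpha}{2},\qquad E[L_2\mid\varphi=1]=P(\theta<\theta_0\mid x),\qquad E[L_2\mid\varphi=-1]=P(\theta>\theta_0\mid x),$$
so a directional decision beats accepting Θ0 exactly when one of the posterior tail probabilities falls below α/2, that is, when θ0 lies outside (qα/2(θ|x), q1−α/2(θ|x)).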
This seems like quite a reasonable loss function to me. I discuss this loss, the Madruga-Esteves-Wechsler loss, and testing using credible sets further in the manuscript on arXiv.