# Bernoulli-beta conjugate model

## Posterior predictive distribution

If $$X|p \sim \mathcal{BE}(p)$$ with $$p \sim \mathcal{B}(\alpha, \beta)$$, then the posterior predictive probability mass function, expected value, and variance of $$X$$ are

$\begin{split}f(x; \alpha, \beta) = \begin{cases} \frac{\beta}{\alpha + \beta} & \text{if } x = 0\\ \frac{\alpha}{\alpha + \beta} & \text{if } x = 1, \end{cases}\end{split}$
$\mathrm{E}[X] = \frac{\alpha}{\alpha + \beta}, \quad \mathrm{Var}[X] = \frac{\alpha \beta}{(\alpha + \beta)^2}.$
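As a quick sanity check of these closed forms, they can be evaluated exactly with Python's `fractions` module (a minimal sketch; the function name and the example values $$\alpha = 2$$, $$\beta = 5$$ are arbitrary choices, not part of the model):

```python
from fractions import Fraction

def posterior_predictive(alpha, beta):
    """Posterior predictive pmf, mean, and variance of a Bernoulli
    likelihood under a Beta(alpha, beta) prior, computed exactly."""
    p1 = Fraction(alpha, alpha + beta)                # P(X = 1) = alpha / (alpha + beta)
    p0 = Fraction(beta, alpha + beta)                 # P(X = 0) = beta / (alpha + beta)
    mean = p1                                         # E[X] = alpha / (alpha + beta)
    var = Fraction(alpha * beta, (alpha + beta) ** 2) # Var[X] = alpha*beta / (alpha + beta)^2
    return p0, p1, mean, var

p0, p1, mean, var = posterior_predictive(2, 5)
print(p0, p1, mean, var)  # 5/7 2/7 2/7 10/49
```

Note that the two probabilities always sum to one, and the variance equals $$\mathrm{E}[X](1 - \mathrm{E}[X])$$, as expected for a $$\{0, 1\}$$-valued variable.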

## Proofs

Posterior predictive probability mass function

\begin{align}\begin{aligned}f(x=0) &= \int_0^1 (1-p) \frac{p^{\alpha - 1} (1-p)^{\beta - 1}}{B(\alpha, \beta)} \mathop{dp} = \mathrm{E}[1-p] = \frac{\beta}{\alpha + \beta}.\\f(x=1) &= \int_0^1 p \frac{p^{\alpha - 1} (1-p)^{\beta - 1}}{B(\alpha, \beta)} \mathop{dp} = \mathrm{E}[p] = \frac{\alpha}{\alpha + \beta}.\end{aligned}\end{align}
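The two integrals above can be checked numerically with a simple midpoint-rule quadrature against the stated closed forms (a sketch using only the standard library; the helper names and the values $$\alpha = 3$$, $$\beta = 4$$ are illustrative assumptions):

```python
import math

def beta_fn(a, b):
    # Beta function B(a, b) expressed via the gamma function
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def predictive_prob(x, alpha, beta, n=100_000):
    """Midpoint-rule approximation of
    integral_0^1 p^x (1-p)^(1-x) * Beta(alpha, beta) density dp."""
    norm = beta_fn(alpha, beta)
    total = 0.0
    for i in range(n):
        p = (i + 0.5) / n
        density = p ** (alpha - 1) * (1 - p) ** (beta - 1) / norm
        total += (p if x == 1 else 1 - p) * density / n
    return total

alpha, beta = 3.0, 4.0
print(abs(predictive_prob(0, alpha, beta) - beta / (alpha + beta)) < 1e-6)   # True
print(abs(predictive_prob(1, alpha, beta) - alpha / (alpha + beta)) < 1e-6)  # True
```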

Posterior predictive expected value

$\mathrm{E}[X] = \mathrm{E}[\mathrm{E}[X | p]] = \mathrm{E}[p] = \frac{\alpha}{\alpha + \beta}.$

Posterior predictive variance. Since $$X \in \{0, 1\}$$, we have $$X^2 = X$$ and therefore $$\mathrm{E}[X^2] = \mathrm{E}[X]$$, so

$\mathrm{Var}[X] = \mathrm{E}[X^2] - \mathrm{E}[X]^2 = \frac{\alpha}{\alpha + \beta} - \left(\frac{\alpha}{\alpha + \beta}\right)^2 = \frac{\alpha \beta}{(\alpha + \beta)^2}.$
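The mean and variance can also be validated by Monte Carlo simulation of the hierarchical model itself: draw $$p$$ from the Beta prior, then $$X$$ from the Bernoulli likelihood (a sketch; the function name, sample size, and the values $$\alpha = 2$$, $$\beta = 3$$ are illustrative assumptions):

```python
import random

def simulate_predictive(alpha, beta, n=200_000, seed=0):
    """Monte Carlo draws from the posterior predictive:
    p ~ Beta(alpha, beta), then X ~ Bernoulli(p)."""
    rng = random.Random(seed)
    draws = [1 if rng.random() < rng.betavariate(alpha, beta) else 0
             for _ in range(n)]
    mean = sum(draws) / n
    var = mean * (1 - mean)  # sample variance of a 0/1 variable
    return mean, var

alpha, beta = 2.0, 3.0
mean, var = simulate_predictive(alpha, beta)
# Closed forms: E[X] = 2/5 = 0.4, Var[X] = 6/25 = 0.24
print(abs(mean - alpha / (alpha + beta)) < 0.01)               # True
print(abs(var - alpha * beta / (alpha + beta) ** 2) < 0.01)    # True
```

With 200,000 draws the Monte Carlo standard error of the mean is roughly $$\sqrt{0.24 / 200{,}000} \approx 0.001$$, so agreement to within 0.01 is a comfortable check.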