If a random variable X has a beta distribution, its probability density function is
fX(x) = x^(α−1) (1 − x)^(β−1) / B(α, β)
for x between 0 and 1 inclusive. The pdf is zero outside [0, 1]. The B(α, β) in the denominator is the beta function, computed by beta(a, b) in R.
Write your own version of dbeta() using the beta pdf formula given above. Call your function mydbeta(). Your function can be simpler than dbeta(): use only three arguments (x, shape1, and shape2) and don’t bother with ncp or log.
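One possible implementation, following the pdf formula above (vectorized over x like dbeta(), and returning zero outside [0, 1]):

```r
# A sketch of mydbeta(): the beta pdf x^(a-1) (1-x)^(b-1) / B(a, b),
# with the same argument names as dbeta() but no ncp or log arguments.
mydbeta <- function(x, shape1, shape2) {
  ifelse(x >= 0 & x <= 1,
         x^(shape1 - 1) * (1 - x)^(shape2 - 1) / beta(shape1, shape2),
         0)
}
```

The ifelse() keeps the function vectorized, so a whole grid of x values can be passed in at once, which is convenient for plotting.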
Use your function mydbeta() for this part. Experiment with different values of α and β and plot the resulting beta pdfs to answer the following questions. The parameters α and β must be greater than zero; otherwise there is no restriction. In R, plot three pdfs for each question.
1. What values for α and β produce a pdf which is uniform on [0,1]?
2. What relationship between α and β produces a symmetric pdf?
3. a) What values for α and β produce a triangular pdf?
   b) How should I choose α and β if I want a big spike at x = .5, and 0 everywhere else?
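One way to overlay three pdfs on a single plot is shown below. The parameter choices here are only illustrative starting points for the experiments, not the answers to the questions, and mydbeta() is repeated so the snippet runs on its own:

```r
# Same mydbeta() as in the previous part, repeated for self-containment.
mydbeta <- function(x, shape1, shape2) {
  ifelse(x >= 0 & x <= 1,
         x^(shape1 - 1) * (1 - x)^(shape2 - 1) / beta(shape1, shape2),
         0)
}

# Overlay three beta pdfs on one set of axes.
x <- seq(0, 1, length.out = 200)
plot(x, mydbeta(x, 1, 1), type = "l", ylim = c(0, 3),
     xlab = "x", ylab = "density", main = "Beta pdfs")
lines(x, mydbeta(x, 2, 2), lty = 2)
lines(x, mydbeta(x, 2, 1), lty = 3)
legend("topleft", legend = c("Beta(1, 1)", "Beta(2, 2)", "Beta(2, 1)"),
       lty = 1:3)
```

Repeating this with other (α, β) pairs, and watching how the shape changes, is enough to answer all four questions.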
At this point, we are going to have a very “meta” discussion about how we represent probabilities. Until now, probabilities have just been numbers in the range 0 to 1. However, if we have uncertainty about our probability, it would make sense to represent our probabilities as random variables (and thus articulate the relative likelihood of our belief).

Imagine we have a coin and we would like to know its probability of coming up heads (p). We flip the coin (n + m) times, and it comes up heads n times. One way to estimate the probability is to assume that it is exactly p = n/(n + m). That number, however, is a coarse estimate, especially if n + m is small. Intuitively, it doesn’t capture our uncertainty about the value of p. Just like with other random variables, it often makes sense to hold a distributed belief about the value of p.

To formalize the idea that we want a distribution for p, we are going to use a random variable X to represent the probability of the coin coming up heads. Before flipping the coin, we could say that our belief about the coin’s success probability is uniform: X ∼ Uni(0, 1). If we let N be the number of heads that came up, then given that the coin flips are independent, (N | X = x) ∼ Bin(n + m, x). We want to calculate the probability density function for X | N. We can start by applying Bayes’ Theorem:

fX|N(x | n) = P(N = n | X = x) fX(x) / P(N = n)
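Since the normalizing constant P(N = n) does not depend on x, the posterior density is proportional to x^n (1 − x)^m on [0, 1], which is the Beta(n + 1, m + 1) distribution. A quick numerical sanity check of this Bayes update, using illustrative counts (n = 7 heads in 10 flips) that are not from the text:

```r
# Numerically apply Bayes' Theorem: posterior ∝ likelihood * prior,
# then compare against the Beta(n + 1, m + 1) density.
n <- 7; m <- 3                                  # illustrative counts
x <- seq(0.0005, 0.9995, length.out = 1000)
h <- x[2] - x[1]                                # grid spacing
unnorm <- dbinom(n, n + m, x) * dunif(x)        # Bin(n + m, x) likelihood, Uni(0, 1) prior
posterior <- unnorm / (sum(unnorm) * h)         # normalize by a Riemann sum
max(abs(posterior - dbeta(x, n + 1, m + 1)))    # small discretization error
```

The agreement with dbeta(x, n + 1, m + 1) confirms that, starting from a uniform prior, observing n heads and m tails updates our belief about p to a Beta(n + 1, m + 1) distribution.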