Gaussian Mixture Model:
The initial means and variances of two clusters in a GMM are as follows: μ(1) = −3, μ(2) = 2, σ1² = σ2² = 4. Let the mixing coefficients be π1 = π2 = 0.5.
Let x(1) = 0.2, x(2) = −0.9, x(3) = −1, x(4) = 1.2, x(5) = 1.8 be five points that need to be clustered.
We need to find the following posterior probabilities, where p(1|i) denotes the probability that point x(i) belongs to cluster 1 (a worked sketch follows the list):
1) p(1|1)
2) p(1|2)
3) p(1|3)
4) p(1|4)
5) p(1|5)
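Since the section below derives exactly this posterior, here is a minimal Python sketch of the computation under the initial parameters above. The variable names are mine, and the numpy/scipy calls are just one convenient way to evaluate the Gaussian densities; treat it as an illustration, not a prescribed implementation.

import numpy as np
from scipy.stats import norm

# Initial parameters taken from the problem statement above
mu  = np.array([-3.0, 2.0])                    # cluster means mu(1), mu(2)
var = np.array([ 4.0, 4.0])                    # cluster variances sigma^2
pi  = np.array([ 0.5, 0.5])                    # mixing coefficients pi1, pi2
x   = np.array([0.2, -0.9, -1.0, 1.2, 1.8])    # the five points x(1)..x(5)

# Gaussian densities N(x | mu_k, sigma_k^2), one row per cluster
dens = np.array([norm.pdf(x, loc=mu[k], scale=np.sqrt(var[k])) for k in range(2)])

# Posterior responsibilities p(k | x_i) via Bayes' rule (the formula derived below)
weighted = pi[:, None] * dens
post = weighted / weighted.sum(axis=0)

for i, p in enumerate(post[0], start=1):
    print(f"p(1|{i}) = {p:.4f}")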
Initial derivations
We are now going to introduce some additional notation. Just a word of warning: there is some math coming up, but I'll try to keep the notation as clean as possible so the derivations are easy to follow. First, suppose we want to know the probability that a data point xn comes from Gaussian k. We can express this as:

p(zk = 1 | xn)

which reads "given a data point xn, what is the probability that it came from Gaussian k?" Here zk is a latent variable that takes only two possible values: it is one when xn came from Gaussian k, and zero otherwise. We don't actually get to observe this z variable in reality, but knowing its probability of occurrence will be useful in helping us determine the Gaussian mixture parameters, as we discuss later.
Likewise, we can state the following:

p(zk = 1) = πk

This means that the overall probability of observing a point that comes from Gaussian k is actually equivalent to the mixing coefficient for that Gaussian. This makes sense, because the bigger the Gaussian is, the higher we would expect this probability to be. Now let z be the set of all the latent variables zk, hence:

z = (z1, z2, ..., zK)
We know beforehand that each zk occurs independently of the others and that zk can only take the value of one when k is equal to the cluster the point comes from. Therefore:

p(z) = p(z1 = 1)^z1 p(z2 = 1)^z2 ... p(zK = 1)^zK = Πk πk^zk
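To see why the exponents make this work, here is a small worked instance (my own illustration) for the two-cluster case from the problem above: if a point comes from Gaussian 1, then z = (1, 0) and

p(z) = π1^1 · π2^0 = π1

so the expression simply picks out the mixing coefficient of the cluster the point came from.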
Now, what about the probability of observing our data point given that it came from Gaussian k? It turns out to be the Gaussian density itself:

p(xn | zk = 1) = N(xn | μk, σk²)

Following the same logic we used to define p(z), we can state:

p(xn | z) = Πk N(xn | μk, σk²)^zk
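Again as a small illustration of the same exponent trick (not part of the original derivation): for the two-cluster case with z = (1, 0), this product collapses to

p(xn | z) = N(xn | μ1, σ1²)^1 · N(xn | μ2, σ2²)^0 = N(xn | μ1, σ1²)

i.e. just the density of the Gaussian the point actually came from.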
OK, now you may be asking: why are we doing all this? Remember that our initial aim was to determine the probability of z given our observation x? Well, it turns out that the equations we have just derived, along with Bayes' rule, will help us determine this probability. From the product rule of probabilities, we know that

p(xn, z) = p(xn | z) p(z)
Hmm, it seems we are getting somewhere. The factors on the right are exactly what we have just found. Perhaps some of you may be anticipating that we are going to use Bayes' rule to get the probability we eventually need. However, first we will need p(xn), not p(xn, z). So how do we get rid of z here? Yes, you guessed it: marginalization! We just need to sum over z, hence

p(xn) = Σz p(xn | z) p(z) = Σk πk N(xn | μk, σk²)
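For the two-component setup in the problem above, this is simply (using the parameters given earlier):

p(xn) = π1 N(xn | μ1, σ1²) + π2 N(xn | μ2, σ2²) = 0.5 · N(xn | −3, 4) + 0.5 · N(xn | 2, 4)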
This is the equation that defines a Gaussian mixture, and you can clearly see that it depends on all the parameters we mentioned previously! To determine the optimal values for these parameters, we need to maximize the likelihood of the model. We can find the likelihood as the joint probability of all observations X = {x1, ..., xN}, defined by:

p(X) = Πn p(xn) = Πn Σk πk N(xn | μk, σk²)
Like we did for the original Gaussian density function, let's apply the logarithm to each side of the equation:

ln p(X) = Σn ln ( Σk πk N(xn | μk, σk²) )
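Just to make the expression concrete, here is a short Python sketch (my own illustration, reusing the parameters from the problem above) that evaluates this log-likelihood numerically:

import numpy as np
from scipy.stats import norm

mu, var, pi = np.array([-3.0, 2.0]), np.array([4.0, 4.0]), np.array([0.5, 0.5])
x = np.array([0.2, -0.9, -1.0, 1.2, 1.8])

# p(x_n) = sum_k pi_k * N(x_n | mu_k, sigma_k^2), computed for every point
mixture = sum(pi[k] * norm.pdf(x, mu[k], np.sqrt(var[k])) for k in range(2))

# ln p(X) = sum_n ln p(x_n)
log_likelihood = np.log(mixture).sum()
print(log_likelihood)

Evaluating the log-likelihood is easy; the difficulty discussed next is in maximizing it analytically, because the logarithm sits outside the inner sum over k.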
Great! Now, in order to find the optimal parameters for the Gaussian mixture, all we have to do is differentiate this equation with respect to the parameters and we are done, right? Wait! Not so fast. We have an issue here: the logarithm acts on the inner summation over k, not on the individual Gaussian terms. Calculating the derivative of this expression and then solving for the parameters in closed form is going to be very hard!
What can we do? Well, we will need an iterative method to estimate the parameters. But first, remember we were supposed to find the probability of z given x? Let's do that now, since at this point we already have everything in place to define what this probability looks like.
From Bayes' rule, we know that

p(zk = 1 | xn) = p(xn | zk = 1) p(zk = 1) / p(xn)
From our earlier derivations we learned that:

p(zk = 1) = πk,   p(xn | zk = 1) = N(xn | μk, σk²),   p(xn) = Σj πj N(xn | μj, σj²)
So let's now substitute these into the previous equation:

p(zk = 1 | xn) = πk N(xn | μk, σk²) / Σj πj N(xn | μj, σj²)
And this is what we’ve been looking for! Moving forward we are going to see this expression a lot. Next we will continue our discussion with a method that will help us easily determine the parameters for the Gaussian mixture.
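Tying this back to the question at the top: the quantities p(1|1), ..., p(1|5) are exactly this expression evaluated with k = 1 under the initial parameters. As a hand-worked sanity check (my own calculation) for the first point x(1) = 0.2, the normalizing constants of the two Gaussians cancel because σ1² = σ2² and π1 = π2, so

p(1|1) = exp(−(0.2 + 3)²/8) / [ exp(−(0.2 + 3)²/8) + exp(−(0.2 − 2)²/8) ] = exp(−1.28) / (exp(−1.28) + exp(−0.405)) ≈ 0.29

which is what the sketch after the problem statement should print as well.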