1/26/2017

Expectation-maximization Algorithm

Today, I learned about the Expectation-maximization Algorithm.

Summary:
It is an iterative method for finding maximum-likelihood estimates (MLE) of parameters in models that involve latent (hidden) variables.

Intuitively, this means that by maximizing with respect to the previous parameterization Θ^(p-1), we obtain a new parameterization Θ^(p) that increases the log-likelihood. Based on this result, the EM algorithm works by iterating between two steps. In the first (E-step), it computes the expected value of the complete-data log-likelihood given the current parameterization Θ^(p-1), that is, Q(Θ | Θ^(p-1)) = E[log p(X, Z | Θ) | X, Θ^(p-1)], where Z denotes the latent variables. In the second step (M-step), it looks for the set of parameters Θ^(p) that maximizes this expectation. At each iteration, EM increases (or at least does not decrease) the log-likelihood, converging to a local maximum. These steps are repeated P times or until a convergence criterion is fulfilled.
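Since I could not find a worked example yet, here is a minimal sketch of one standard textbook application: fitting a two-component 1D Gaussian mixture with EM. The function names (em_gmm_1d, gauss) and all parameter choices below are my own illustration, not taken from any particular reference.

```python
import numpy as np

def gauss(x, mu, var):
    """Gaussian density N(x | mu, var), evaluated elementwise."""
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def em_gmm_1d(x, n_iter=100, tol=1e-6, seed=0):
    """EM for a two-component 1D Gaussian mixture (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Initial parameterization Theta^(0): mixing weight, means, variances.
    w = 0.5
    mu = rng.choice(x, size=2, replace=False).astype(float)
    var = np.full(2, x.var())
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: responsibilities r_i = P(z_i = 1 | x_i, Theta^(p-1)),
        # i.e. the expected latent component memberships.
        a = w * gauss(x, mu[0], var[0])
        b = (1 - w) * gauss(x, mu[1], var[1])
        r = a / (a + b)
        # Log-likelihood under Theta^(p-1); EM never decreases it,
        # so this doubles as the convergence criterion.
        ll = np.log(a + b).sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
        # M-step: closed-form maximizers of Q(Theta | Theta^(p-1))
        # give the new parameterization Theta^(p).
        w = r.mean()
        mu[0] = (r * x).sum() / r.sum()
        mu[1] = ((1 - r) * x).sum() / (1 - r).sum()
        var[0] = (r * (x - mu[0]) ** 2).sum() / r.sum()
        var[1] = ((1 - r) * (x - mu[1]) ** 2).sum() / (1 - r).sum()
    return w, mu, var
```

For instance, on data such as np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)]) the recovered means should land near -2 and 3, though like any EM run the result depends on initialization and may be a local maximum.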

For now, I have only found the theoretical part of the method; I cannot find examples or applications of it. As a result, it is difficult for me to apply the method. I will try to find details about examples and applications as soon as possible.

Tomorrow, I will try to find worked examples of the method so that I know how to apply it.
