Exercise

Maximizing Likelihood, Part 1

Previously, we chose the sample mean as an estimate of the population model parameter mu. But how do we know that the sample mean is the best estimator? This question is tricky to answer, so let's tackle it in two parts.

In Part 1, you will use a computational approach to compute the log-likelihood of a given estimate. Then, in Part 2, you will compute the log-likelihood for many candidate values of the estimate and see that one of those guesses results in the maximum likelihood.
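In symbols (a standard formulation, not taken from the exercise text): if each distance $x_i$ is modeled as a draw from a Gaussian with parameters $\mu$ and $\sigma$, the log-likelihood of the whole sample is the sum of the log of each point's probability,

$$\log L(\mu, \sigma \mid x_1, \dots, x_n) \;=\; \sum_{i=1}^{n} \log \mathcal{N}(x_i \mid \mu, \sigma).$$

The value of $\mu$ (and $\sigma$) that makes this sum largest is the maximum likelihood estimate.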

Instructions

100 XP
  • Compute the mean() and std() of the preloaded sample_distances as guesses for the probability model parameters.
  • Compute the probability of each distance using gaussian_model(), built from sample_mean and sample_stdev.
  • Compute the loglikelihood as the sum() of the log() of the probabilities probs (a sketch of these steps follows the instructions).
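A minimal sketch of the three steps is below. In the exercise environment, sample_distances and gaussian_model() are preloaded; here they are replaced with stand-ins, and the exact gaussian_model() signature is an assumption.

```python
import numpy as np

# Stand-in for the preloaded gaussian_model(); assumed to return the
# Gaussian probability density of x for parameters mu and sigma.
def gaussian_model(x, mu, sigma):
    coeff = 1.0 / np.sqrt(2.0 * np.pi * sigma**2)
    return coeff * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Stand-in for the preloaded sample_distances (synthetic data for illustration).
rng = np.random.default_rng(42)
sample_distances = rng.normal(loc=25.0, scale=5.0, size=100)

# Step 1: guess the model parameters from the sample.
sample_mean = np.mean(sample_distances)
sample_stdev = np.std(sample_distances)

# Step 2: probability of each distance under the guessed model.
probs = gaussian_model(sample_distances, mu=sample_mean, sigma=sample_stdev)

# Step 3: log-likelihood is the sum of the log of the probabilities.
loglikelihood = np.sum(np.log(probs))
print("loglikelihood =", loglikelihood)
```

Summing logs rather than multiplying raw probabilities avoids numerical underflow, which is why the exercise works with the log-likelihood throughout.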