
Tasting the Bayes

1. Tasting the Bayes

Welcome back! Let's now use a Bayesian model to estimate the parameters of a probability distribution!

2. Binomial distribution

We will work with the so-called binomial distribution. It's a discrete distribution, allowing for only two values: a success, conventionally denoted as 1, and a failure, typically denoted as 0. A success could mean many things, depending on the use case: it could be tossing heads with a coin, curing a sick patient, or a user clicking on an online ad. There are only two outcomes, so when there is no success, there is a failure. The binomial distribution has one parameter: the probability of success. When we want to model some phenomenon using the binomial distribution, we need a list of observed results (successes and failures) and the task is to estimate the probability of success of the binomial distribution describing our data.
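To make the setup concrete, here is a minimal sketch of the estimation task (the data values are made up for illustration): given a list of observed successes and failures, the simplest non-Bayesian point estimate of the success probability is just the fraction of successes.

```python
import numpy as np

# Ten hypothetical observed results: 1 = success, 0 = failure
data = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])

# A simple point estimate of the success probability is the
# fraction of successes in the data
p_hat = data.mean()
print(p_hat)  # → 0.6
```

The Bayesian approach introduced next replaces this single number with a whole distribution over plausible values of the success probability.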

3. Binomial distribution in Python

In Python, numpy's random-dot-binomial function is a convenient tool to work with the binomial distribution. It takes two required parameters: the number of trials and the probability of success. Consider coin-tossing, where getting heads is a success. We flip 100 coins, each with 50% probability of coming up heads. The output is the number of successes: we got 51 heads in 100 tosses. If we run the same code again, the output will be different - that's the random nature of coin-tossing. We can also use the random-dot-binomial function to generate random draws from the binomial distribution. To do this, we set the first argument, the number of trials, to 1, and we pass one more argument: size, which is the number of draws that we want to get. We flip the coin 5 times, and receive 3 heads and 2 tails.
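The two calls described above look like this in code (your exact outputs will differ from run to run, since the draws are random):

```python
import numpy as np

# Number of heads in 100 tosses of a fair coin:
# binomial(number of trials, probability of success)
num_heads = np.random.binomial(100, 0.5)
print(num_heads)

# Five individual tosses (1 = heads, 0 = tails):
# setting the number of trials to 1 and size to 5
# gives five separate draws
draws = np.random.binomial(1, 0.5, size=5)
print(draws)
```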

4. Heads probability

In the exercises, you will be using a function called get-heads-prob. It's a custom function that I have prepared for you. You will see what's inside it later in the course, but for now, we will use it as a black box. Get-heads-prob implements a Bayesian model for estimating the probability of heads when tossing a coin. Its input is a list of coin tosses. And the output? Recall that in the Bayesian approach, parameters are described not by a single number, but rather by probability distributions. The output is a long list expressing the distribution of the probability of heads. Let's see it in practice. First, we create a list of 1000 coin tosses with a 50% heads probability using the numpy-dot-random-dot-binomial function.
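The actual implementation of get-heads-prob is revealed later in the course. Purely as an illustration of what such a black box could do, here is one plausible sketch: sampling from the Beta posterior of the heads probability under a uniform prior. This is an assumption for demonstration, not the course's actual code.

```python
import numpy as np

def get_heads_prob(tosses):
    """Hypothetical sketch of a Bayesian heads-probability model.

    Assumes a uniform Beta(1, 1) prior; the posterior after
    observing the tosses is then Beta(1 + heads, 1 + tails),
    and we return a long list of draws from it.
    """
    num_heads = np.sum(tosses)
    num_tails = len(tosses) - num_heads
    return np.random.beta(1 + num_heads, 1 + num_tails, size=10000)

# Generate 1000 tosses of a fair coin and estimate the heads probability
tosses = np.random.binomial(1, 0.5, size=1000)
heads_prob = get_heads_prob(tosses)
print(heads_prob.mean())  # should land near 0.5
```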

5. Heads probability

Then, we pass these tosses to get-heads-prob. Let's take a look at the output. As expected, it's a long list of draws of the probability of success, which in this scenario is the probability of tossing heads. We can visualize it as we've done earlier in the course with seaborn's kdeplot function. We set shading to true and label the plot as "heads probability". The plot shows that according to the model, the probability of success, which is tossing heads, is likely between 49 and 52%, which is correct, given what we know about the true value - we have generated the coin tosses with 50% success probability. In the exercises, you will explore the get-heads-prob function and you might discover some surprising features!

6. Let's toss some coins!

Let's toss some coins!
