Exercise

# Calculating accuracy metrics: precision

The precision score is an important metric for measuring the accuracy of a classification algorithm. It is calculated as the **number of True Positives divided by the sum of True Positives and False Positives**:
$$\frac{\text{# of True Positives}}{\text{# of True Positives} + \text{# of False Positives}}.$$

We define:

- **True Positives** as the number of employees who actually left and were correctly classified as leaving
- **False Positives** as the number of employees who actually stayed but were wrongly classified as leaving

If there are no False Positives, the precision score is equal to 1. If there are no True Positives, the precision score is equal to 0.
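The formula above can be sketched directly from raw counts. The numbers below are hypothetical, purely for illustration:

```python
# Precision = TP / (TP + FP)
# Hypothetical counts, for illustration only.
true_positives = 40   # left, and predicted as leaving
false_positives = 10  # stayed, but predicted as leaving

precision = true_positives / (true_positives + false_positives)
print(precision)  # 0.8
```

With zero False Positives the denominator equals the numerator, giving 1; with zero True Positives the numerator is 0, giving 0.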

In this exercise, we will calculate the precision score (using the `sklearn` function `precision_score`) for our initial classification model.

The variables `features_test` and `target_test` are available in your workspace.

Instructions

**100 XP**

- Import the function `precision_score` from the module `sklearn.metrics`.
- Use the initial model to predict churn (based on features of the test set).
- Calculate the precision score by comparing `target_test` with the test set predictions.
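The steps above can be sketched as follows. Since the workspace data isn't shown here, a synthetic dataset and a decision tree stand in for the churn data and the "initial model"; in the exercise itself, `features_test`, `target_test`, and the fitted model are already available:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score

# Synthetic stand-in for the churn data (assumption: the real
# features_test / target_test are already in your workspace).
X, y = make_classification(n_samples=500, random_state=1)
features_train, features_test, target_train, target_test = train_test_split(
    X, y, random_state=1)

# Stand-in for the exercise's initial model.
clf = DecisionTreeClassifier(random_state=1)
clf.fit(features_train, target_train)

# Predict churn on the test set, then compare predictions
# with the true labels to get the precision score.
prediction = clf.predict(features_test)
print(precision_score(target_test, prediction))
```

Note the argument order: `precision_score(y_true, y_pred)` takes the true labels first, then the predictions.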