# Pruning with confidence

Once again, you've come up short: you found multiple useful rules, but can't narrow it down to one. Even worse, the two rules you found used the same itemset, but just swapped the antecedents and consequents. You decide to see whether pruning by another metric might allow you to narrow things down to a single association rule.

What would be the right metric? Both lift and support are identical for all rules that can be generated from an itemset, so you decide to use confidence instead, which can differ between rules produced from the same itemset. Note that `pandas` is available as `pd` and the one-hot encoded transaction data is available as `onehot`. Additionally, `apriori` has been imported from `mlxtend`.
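The asymmetry of confidence can be checked with a quick hand calculation. The counts below are made up for illustration: support and lift treat the antecedent and consequent symmetrically, while confidence does not.

```python
# Toy counts from a hypothetical set of 100 transactions (illustrative only).
n = 100     # total transactions
n_a = 40    # transactions containing item A
n_b = 10    # transactions containing item B
n_ab = 8    # transactions containing both A and B

# Support and lift are the same for A -> B and B -> A.
support_ab = n_ab / n
lift = support_ab / ((n_a / n) * (n_b / n))

# Confidence differs depending on which item is the antecedent.
conf_a_to_b = n_ab / n_a   # 8 / 40 = 0.2
conf_b_to_a = n_ab / n_b   # 8 / 10 = 0.8
```

With these counts, both rules share a support of 0.08 and a lift of 2.0, but a 0.5 confidence threshold keeps only the rule B -> A, which is exactly the kind of pruning this exercise asks for.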

## Instructions


- Import `association_rules` from `mlxtend`.
- Complete the statement for the `apriori` algorithm using a support value of 0.0015 and a maximum itemset length of 2.
- Complete the statement for association rules using confidence as the metric and a threshold value of 0.5.