Pruning with confidence
Once again, you've come up short: you found multiple useful rules, but can't narrow it down to one. Even worse, the two rules you found used the same itemset, but just swapped the antecedents and consequents. You decide to see whether pruning by another metric might allow you to narrow things down to a single association rule.
What would be the right metric? Both lift and support are identical for all rules that can be generated from an itemset, so you decide to use confidence instead, which differs for rules produced from the same itemset. Note that pandas is available as pd and the one-hot encoded transaction data is available as onehot. Additionally, apriori has been imported from mlxtend.
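To see why confidence can break the tie, here is a minimal pure-Python sketch (not from the course, using made-up transactions): support and lift are symmetric in the antecedent and consequent, so they are identical for a rule and its reverse, while confidence depends on direction.

```python
# Made-up illustrative transactions (not the exercise's onehot data).
transactions = [
    {"milk", "bread"},
    {"milk"},
    {"milk", "bread"},
    {"bread"},
    {"milk"},
]
n = len(transactions)

def support(items):
    """Fraction of transactions containing every item in `items`."""
    return sum(items <= t for t in transactions) / n

def confidence(antecedent, consequent):
    """support(A and B) / support(A) -- changes when you swap A and B."""
    return support(antecedent | consequent) / support(antecedent)

def lift(antecedent, consequent):
    """support(A and B) / (support(A) * support(B)) -- symmetric in A, B."""
    return support(antecedent | consequent) / (
        support(antecedent) * support(consequent)
    )

A, B = {"milk"}, {"bread"}
print(support(A | B))             # same for A->B and B->A: 0.4
print(lift(A, B), lift(B, A))     # identical: lift is symmetric
print(confidence(A, B))           # milk -> bread: 0.5
print(confidence(B, A))           # bread -> milk: ~0.667, so the tie breaks
```

Because confidence(milk → bread) and confidence(bread → milk) differ, pruning on confidence can keep one rule from the itemset and drop its reverse.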
This exercise is part of the course Market Basket Analysis in Python.
Exercise instructions
- Import association_rules from mlxtend.
- Complete the statement for the apriori algorithm using a support value of 0.0015 and a maximum itemset length of 2.
- Complete the statement for association rules using confidence as the metric and a threshold value of 0.5.
Hands-on interactive exercise
Have a go at this exercise by completing this sample code.
# Import the association rules function
____
# Compute frequent itemsets using the Apriori algorithm
frequent_itemsets = ____(onehot, ____,
____, use_colnames = True)
# Compute all association rules using confidence
rules = ____(frequent_itemsets,
metric = "____",
min_threshold = ____)
# Print association rules
print(rules)