
Congratulations!

1. Congratulations!

Fantastic! We've reached the end of the course.

2. Chapters 1-2 recap

We covered the fundamentals of an important aspect of data science: anomaly detection. In chapter one, we learned all about detecting univariate outliers using basic visualization techniques and some fancier ones like Median Absolute Deviation. In chapter two, we kicked it up a notch with the most versatile outlier classifier: Isolation Forest. We learned in rich detail how iTrees are built and how PyOD combines them into the robust IForest estimator. Chapter two also covered outlier probabilities, which let us measure how confident classifiers are in their predictions.

3. Chapters 3-4 recap

Chapter three was all about distance- and density-based classifiers. We learned how to use the k-nearest-neighbors algorithm for outlier detection, along with its most popular distance-calculation methods. There was also a video on QuantileTransformer, the Swiss Army knife of transformers: it works on any distribution and makes it normal. The LOF algorithm complemented KNN. In chapter four, we learned about time series datasets and how to load and visualize them through time series decomposition. We used the algorithms from previous chapters to detect outliers in time series. We also explored combining multiple outlier classifiers into a single, robust, and trustworthy ensemble, making our predictions more stable. Finally, the last video focused on dealing with identified outliers: we learned when to keep outliers and when to drop them.

4. Thank you!

Thank you very much for taking this course on anomaly detection. I wish you the best of luck in your data science journey!
