We explore Explainable AI (XAI) techniques, categorizing them into model-specific, model-agnostic, local, and global explanations to clarify AI decision-making. We discuss regression and classification models as sources of model-specific insights and introduce SHAP and LIME for interpreting black-box models. Additionally, we address the complexity of Large Language Models (LLMs), emphasizing the need for transparency in their decision-making processes.
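As a minimal sketch of the two model-agnostic techniques named above, the following Python example applies SHAP and LIME to a scikit-learn classifier treated as a black box. The dataset, model choice, and parameter values are illustrative assumptions, not taken from the original text.

```python
# Minimal sketch: SHAP and LIME explanations for a black-box classifier.
# Assumes the scikit-learn, shap, and lime packages are installed;
# all specific choices (dataset, model, sample sizes) are illustrative.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train a "black box" classifier on a standard tabular dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# SHAP: model-agnostic feature attributions (here, local explanations
# for a few test instances using a small background sample).
shap_explainer = shap.Explainer(model.predict_proba, X_train[:100])
shap_values = shap_explainer(X_test[:5])
print(shap_values.values.shape)  # (instances, features, classes)

# LIME: local surrogate model explaining a single prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=data.feature_names,
    class_names=data.target_names, mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(lime_exp.as_list())  # top weighted features for this instance
```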