A Decision Tree is a supervised machine learning algorithm used for classification and regression tasks. It works by recursively splitting the dataset into smaller subsets based on feature values, forming a tree-like structure where each internal node represents a decision rule.

Common splitting criteria:
- Entropy / Information Gain (used in classification)
- Mean Squared Error (MSE) (used in regression)

Advantages:
✅ Easy to interpret and visualize.
✅ Handles both numerical and categorical data.
✅ Requires little data preprocessing (e.g., no need for feature scaling).

Disadvantages:
❌ Prone to overfitting if not pruned properly.
❌ Sensitive to noisy data.
❌ Can become biased toward the majority class if one class dominates.

Common Decision Tree Algorithms:
- ID3 (Iterative Dichotomiser 3) – uses entropy/information gain.
- C4.5 – an improvement over ID3; supports missing values.
- CART (Classification and Regression Trees) – uses Gini impurity and MSE.

Choosing a split with information gain:
1️⃣ Compute Entropy(S) for the entire dataset.
2️⃣ Compute Entropy(A) ...
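The entropy and information-gain computations described above can be sketched in plain Python. This is a minimal illustration, not a full ID3 implementation; the function names are my own:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a collection of class labels."""
    total = len(labels)
    return -sum(
        (count / total) * math.log2(count / total)
        for count in Counter(labels).values()
    )

def information_gain(parent_labels, subsets):
    """Entropy reduction achieved by splitting parent_labels into subsets.

    subsets is the list of label groups produced by splitting on some attribute.
    """
    total = len(parent_labels)
    weighted_child_entropy = sum(
        (len(s) / total) * entropy(s) for s in subsets
    )
    return entropy(parent_labels) - weighted_child_entropy

# A perfectly mixed set has entropy 1 bit; a pure split recovers all of it.
labels = ["yes", "yes", "no", "no"]
print(entropy(labels))                                      # 1.0
print(information_gain(labels, [["yes", "yes"], ["no", "no"]]))  # 1.0
```

The attribute with the highest information gain is the one chosen for the split at each node.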