# Model Evaluation
Learn essential metrics and validation techniques to properly evaluate your machine learning models.
## Why Evaluation Matters
A model that looks good on training data might fail in production. Proper evaluation ensures your model actually works.
> **Golden Rule:** Never evaluate on training data. Always use a held-out test set or cross-validation.
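A minimal sketch of holding out a test set with scikit-learn's `train_test_split` (the synthetic dataset and split sizes here are illustrative, not from the original):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy dataset for illustration: 1000 samples, binary labels
X, y = make_classification(n_samples=1000, random_state=42)

# Hold out 20% for evaluation; stratify keeps class proportions
# similar in both splits, which matters for imbalanced data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Fit on X_train/y_train only; score on X_test/y_test only
```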
## Classification Metrics
| Metric | Formula | When to Use |
|---|---|---|
| Accuracy | (TP+TN)/Total | Balanced classes |
| Precision | TP/(TP+FP) | Cost of false positives is high |
| Recall | TP/(TP+FN) | Cost of false negatives is high |
| F1 Score | 2\*(P\*R)/(P+R) | Balance precision and recall |
```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Each metric compares ground-truth labels (y_true) to predictions (y_pred)
accuracy = accuracy_score(y_true, y_pred)
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)
```
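To see how the formulas in the table map to scikit-learn's output, here is a small check that computes precision and recall by hand from a confusion matrix (the toy labels are made up for illustration):

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# Toy ground truth and predictions for illustration
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# For binary labels {0, 1}, confusion_matrix returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

precision = tp / (tp + fp)  # TP/(TP+FP), as in the table above
recall = tp / (tp + fn)     # TP/(TP+FN), as in the table above

# These match precision_score(y_true, y_pred) and recall_score(y_true, y_pred)
```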
## Cross-Validation
```python
from sklearn.model_selection import cross_val_score

# 5-fold CV: train on 4 folds, score on the held-out fold, repeated 5 times
scores = cross_val_score(model, X, y, cv=5)
print(f"Accuracy: {scores.mean():.3f} (+/- {scores.std() * 2:.3f})")
```
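Cross-validation and metric choice combine naturally: the `scoring` parameter of `cross_val_score` swaps the per-fold metric. A sketch using F1 on an imbalanced toy dataset (the dataset and model here are assumptions for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Imbalanced toy dataset: roughly 80/20 class split
X, y = make_classification(n_samples=500, weights=[0.8, 0.2], random_state=0)
model = LogisticRegression(max_iter=1000)

# scoring="f1" evaluates each fold with F1 instead of the default accuracy
f1_scores = cross_val_score(model, X, y, cv=5, scoring="f1")
print(f"F1: {f1_scores.mean():.3f} (+/- {f1_scores.std() * 2:.3f})")
```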
## Key Takeaways
- **Match the metric to the problem.** Accuracy isn't always right; choose based on business cost.
- **Use cross-validation.** Single splits are unreliable; CV gives confidence intervals.