Class Imbalance: When dealing with imbalanced data, metrics like accuracy can be misleading. On a dataset where 99% of examples belong to the majority class, a model that always predicts that class achieves 99% accuracy while never identifying a single minority-class case (which is likely the class you actually care about in this scenario).
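As a quick illustration, here's a minimal sketch (assuming scikit-learn and NumPy are available; the 99:1 split is a hypothetical chosen for the example) of how a majority-class baseline earns high accuracy while being useless on the minority class:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical imbalanced labels: ~99% class 0, ~1% class 1
rng = np.random.default_rng(0)
y_true = rng.choice([0, 1], size=1000, p=[0.99, 0.01])

# A "model" that always predicts the majority class
y_pred = np.zeros_like(y_true)

print(accuracy_score(y_true, y_pred))            # ~0.99 -- looks great
print(f1_score(y_true, y_pred, zero_division=0)) # 0.0 -- useless on the minority class
```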
F1 Score: The F1 score is the harmonic mean of precision and recall. Precision measures the proportion of positive predictions that are actually correct, while recall measures the proportion of actual positive cases that are correctly identified. By combining both metrics, the F1 score provides a single, balanced measure of how well the model identifies the positive (typically minority) class.
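To make the definitions concrete, here is a small sketch computing precision, recall, and F1 from hypothetical confusion-matrix counts (the tp/fp/fn values are made up for illustration):

```python
# Hypothetical counts: true positives, false positives, false negatives
tp, fp, fn = 40, 10, 30

precision = tp / (tp + fp)  # 0.80: of all positive predictions, how many were right
recall = tp / (tp + fn)     # ~0.57: of all actual positives, how many were found

# F1 is the harmonic mean of the two
f1 = 2 * precision * recall / (precision + recall)  # ~0.67

print(precision, recall, f1)
```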
Minimizing False Positives and False Negatives: Because a high F1 score requires both precision and recall to be reasonably high, achieving it means keeping both false positives (incorrect positive predictions) and false negatives (missed positive cases) low at the same time.
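One way to see this: as a harmonic mean, F1 is dragged toward the weaker of its two components, so a model can't earn a high F1 by excelling at one error type while ignoring the other. A toy sketch with hypothetical precision/recall values:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(f1(0.95, 0.10))  # ~0.18: high precision, but many false negatives
print(f1(0.10, 0.95))  # ~0.18: high recall, but many false positives
print(f1(0.60, 0.60))  # 0.60: balanced errors score far better
```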