Which of the following is a reason for using Jensen-Shannon (JS) distance over a Kolmogorov-Smirnov (KS) test for numeric feature drift detection?
Jensen-Shannon (JS) distance produces a value between 0 and 1 that represents the divergence between two distributions. This value can be interpreted directly and doesn't require setting arbitrary thresholds or cutoffs. In contrast, the Kolmogorov-Smirnov (KS) test often involves determining a critical value based on the chosen significance level.
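To make the contrast concrete, here is a minimal sketch (not part of the original discussion) using SciPy and NumPy. The sample data, bin count, and variable names are illustrative assumptions; the point is that the JS distance is a single bounded value you can read directly, while the KS test yields a statistic and p-value that must be judged against a chosen significance level.

```python
# Illustrative sketch: JS distance vs. KS test for numeric drift detection.
# Data, bin count, and thresholds are assumptions for demonstration only.
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=10_000)  # e.g. training data
current = rng.normal(loc=0.3, scale=1.0, size=10_000)    # slightly drifted data

# JS distance: bin both samples on a shared grid, then compare histograms.
bins = np.histogram_bin_edges(np.concatenate([reference, current]), bins=50)
p, _ = np.histogram(reference, bins=bins, density=True)
q, _ = np.histogram(current, bins=bins, density=True)
js = jensenshannon(p, q, base=2)  # bounded in [0, 1]; interpretable directly

# KS test: returns a statistic and a p-value, which must be compared
# against a chosen significance level (e.g. 0.05) to decide on drift.
ks_stat, p_value = ks_2samp(reference, current)

print(f"JS distance: {js:.3f}")
print(f"KS statistic: {ks_stat:.3f}, p-value: {p_value:.3g}")
```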
D is the answer. For E, you would also need to set a threshold.
This is a key advantage of using Jensen-Shannon divergence. It produces a value between 0 and 1, which represents the divergence between two distributions. This value can be interpreted without needing to set arbitrary thresholds or cutoffs. In contrast, the KS test involves comparing the test statistic to a critical value, which can depend on the significance level chosen.
not sure about this one...
D is correct