Within the topic of explainable AI, counterfactual explanations for classifiers have recently received significant attention.
We study counterfactual explanations that explain why a data point received an undesirable classification
by providing the closest data point that would have received a desirable one. Even for one of the simplest and most popular classification models ---\(k\)-nearest neighbors (\(k\)-NN)--- computing such an optimal counterfactual explanation
remains computationally challenging. In this work, we present techniques that significantly reduce the computation time needed to find optimal counterfactual explanations for \(k\)-NN.
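To make the problem concrete, the following is a minimal sketch (not the paper's method) of the naive baseline: a brute-force search that, given a \(k\)-NN classifier, scans a set of candidate points and returns the one closest to the query that receives the desired label. The helper names (`knn_predict`, `brute_force_counterfactual`) and the Euclidean distance choice are illustrative assumptions.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """k-NN prediction: majority vote among the k nearest training points
    under Euclidean distance. (Illustrative helper, not from the paper.)"""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest_labels = y_train[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest_labels, return_counts=True)
    return vals[np.argmax(counts)]

def brute_force_counterfactual(X_train, y_train, x, desired, candidates, k=3):
    """Naive baseline: among the candidate points, return the one closest
    to x that the k-NN classifier labels with the desired class."""
    best, best_dist = None, np.inf
    for c in candidates:
        if knn_predict(X_train, y_train, c, k) == desired:
            dist = np.linalg.norm(c - x)
            if dist < best_dist:
                best, best_dist = c, dist
    return best, best_dist
```

This baseline costs one full classifier evaluation per candidate, which is exactly the kind of expense the techniques in this work aim to avoid.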