CARE-AI Seminar: What, Really, Are Simple Models Good For?

Date and Time

CARE-AI Seminar Series Flyer

Details

Work on interpretability tends to focus on the limitations of complex black-box models. It is tempting to assume that simple models can achieve what black-box models cannot. This paper cautions against this temptation.

Using decision trees as the primary example, I argue that simpler models do not better serve the ethical, scientific, and epistemic ends for which we want interpretability. The functional form of a model is largely irrelevant to those ends. This does not mean we should not prefer simple models when they are as accurate as their black-box counterparts; rather, we should rethink our reasons for that preference.

