Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
We are pleased to deliver the following event through our (virtual and free) seminar series: Frontiers of Big Data, AI and Analytics.
Time: 15 April 2021, 9:00 AM - 10:30 AM (Australian Eastern Standard Time, GMT+10).
Speaker: Professor Cynthia Rudin (Duke University)
Discussion theme: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead.
Abstract of talk
With the widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice. Explanations for black box models are not reliable and can be misleading. Interpretable machine learning models, by contrast, come with their own explanations, which are faithful to what the model actually computes. Cynthia will give several reasons why we should use interpretable models, the most compelling of which is that for high-stakes decisions, interpretable models do not seem to lose accuracy to black boxes. In fact, the opposite is true: when we understand what the models are doing, we can troubleshoot them and ultimately gain accuracy.
Registration
Early registration is encouraged to secure your seat.
The Zoom link will be provided 1 day prior to the event.
Structure of the event
Introduction
Talk by the speaker (30 minutes)
A conversation with a discussant (35 minutes)
Q&A from the audience (15 minutes)
Closing remarks