THIS SEMINAR IS CANCELLED

Dr Jean Feng, University of Washington

Date: 12 February 2020, Wednesday

Location: S16-06-118, DSAP Seminar Room

Time: 3:00pm - 4:00pm

 

THIS SEMINAR IS CANCELLED. WE APOLOGISE FOR ANY INCONVENIENCE.

THANK YOU FOR YOUR KIND UNDERSTANDING.

 

Before we can realize the potential of machine learning (ML) in healthcare, we need to ensure the safety of these models for medical decision-making. In this talk, we explore 1) how to learn safer ML models and 2) how to allow modifications to them over time. As a first step to learning safer models, I will discuss the need for fail-safes in ML. Typical models give predictions for all inputs, which can be overconfident or even misleading for out-of-sample inputs. I introduce a penalization method for learning models that can abstain from making predictions and apply it to predict patient outcomes in the intensive care unit.

In the second half of the talk, we discuss how regulatory bodies like the US Food and Drug Administration might evaluate and approve modifications to ML-based Software as a Medical Device (SaMD) without hindering innovation. To this end, we consider policies that can automatically evaluate proposed modifications without human intervention. I define a framework for evaluating the error rates of such policies and show that policies without error-rate guarantees are prone to “bio-creep.” I then show how to protect against it by combining group sequential and online hypothesis testing methods.
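
To make the abstention idea concrete, the following sketch (not the penalization method presented in the talk) trains an ordinary classifier and lets it abstain whenever its confidence falls below a threshold; the threshold is chosen on validation data to minimise a penalised risk, namely errors on accepted inputs plus a per-abstention penalty. The dataset, the penalty weight lam, and the threshold grid are illustrative assumptions.

# A minimal sketch of prediction with a reject option, assuming a generic
# scikit-learn classifier; this is not the speaker's actual method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

ABSTAIN = -1  # label returned when the model declines to predict

def selective_risk(y_true, proba, threshold, lam):
    """Errors on accepted inputs plus lam per abstention, averaged over all inputs."""
    conf = proba.max(axis=1)
    pred = proba.argmax(axis=1)
    accept = conf >= threshold
    errors = np.sum(pred[accept] != y_true[accept])
    abstentions = np.sum(~accept)
    return (errors + lam * abstentions) / len(y_true)

# Toy data standing in for ICU patient features and outcomes (assumed).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba_val = model.predict_proba(X_val)

# Pick the threshold that minimises the penalised risk on validation data.
lam = 0.05  # cost of abstaining relative to the cost of an error (assumed)
thresholds = np.linspace(0.5, 0.99, 50)
best_t = min(thresholds, key=lambda t: selective_risk(y_val, proba_val, t, lam))

def predict_or_abstain(model, X, threshold):
    proba = model.predict_proba(X)
    pred = proba.argmax(axis=1)
    pred[proba.max(axis=1) < threshold] = ABSTAIN
    return pred

print("chosen threshold:", best_t)
print("predictions (-1 = abstain):", predict_or_abstain(model, X_val[:10], best_t))

For the second half of the abstract, one standard online hypothesis testing procedure that guards against the kind of error-rate inflation behind “bio-creep” is alpha-investing (Foster and Stine, 2008): each proposed modification is tested at a level drawn from a wealth budget, non-rejections spend wealth, and rejections partially replenish it. The sketch below is a generic illustration of that procedure, not the specific policies developed in the talk; the spending rule and the constants eta and omega are assumptions.

# A minimal alpha-investing sketch for a stream of proposed modifications,
# each summarised by a p-value from a test of "the modification is no worse".
def alpha_investing(p_values, alpha=0.05, eta=0.5, omega=0.025):
    wealth = alpha * eta  # initial alpha-wealth
    decisions = []
    for p in p_values:
        # Spend part of the allowable budget; this cap keeps wealth non-negative.
        level = 0.5 * wealth / (1.0 + wealth)
        reject = p <= level
        if reject:
            wealth += omega                    # a rejection earns back omega
        else:
            wealth -= level / (1.0 - level)    # a non-rejection costs wealth
        decisions.append(bool(reject))
    return decisions

# Example: approve or refuse five proposed modifications arriving over time.
print(alpha_investing([0.001, 0.2, 0.8, 0.004, 0.03]))

In this style of procedure the budget tightens after failed tests and loosens after successes, which is one way to keep repeated modification proposals from gradually degrading the approved model.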