Multi-stage learning in Reproducing Kernel Hilbert Space

Dr Zhu Li, University College London

Date: 7 February 2024, Wednesday

Location: Zoom: https://nus-sg.zoom.us/j/84820732012?pwd=S25TVitqTnJkQ3FpQWEvQ3FWQnBZQT09

Time: 4pm, Singapore

The classical machine learning framework focuses on single-stage learning, where we estimate the target prediction function directly from the training data. However, driven by the high volume of data and the complexity of modern models, the learning paradigm has recently shifted toward multi-stage learning, where we train a model X on one or many tasks in an initial stage and then fine-tune model X for another task. Although multi-stage learning has achieved considerable empirical success, its theoretical foundations, such as consistency and generalization properties, remain rather limited.
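As a concrete illustration of the two-stage pattern described above (not taken from the talk), the sketch below estimates a shared linear representation, standing in for "model X", from several source tasks, then fits only a small head on a fresh target task while keeping that representation frozen. The dimensions, the reduced-rank construction via an SVD, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, T, n = 20, 5, 10, 500          # input dim, feature dim, #source tasks, samples

# Ground truth: every task shares one linear representation W_true ("model X")
W_true = rng.normal(size=(d, k))
heads_true = rng.normal(size=(k, T))  # task-specific heads
X_src = rng.normal(size=(n, d))
Y_src = X_src @ W_true @ heads_true + 0.1 * rng.normal(size=(n, T))

# Stage 1: estimate the shared representation from the pooled source tasks,
# here via the top-k left singular vectors of the least-squares coefficients.
B = np.linalg.lstsq(X_src, Y_src, rcond=None)[0]      # d x T coefficient matrix
U, _, _ = np.linalg.svd(B, full_matrices=False)
W_hat = U[:, :k]                                      # learned "model X"

# Stage 2: fine-tune only a k-dimensional head on the target task,
# keeping the first-stage representation fixed.
X_tgt = rng.normal(size=(60, d))
y_tgt = X_tgt @ W_true @ rng.normal(size=k) + 0.1 * rng.normal(size=60)
Z = X_tgt @ W_hat                                     # frozen first-stage features
head = np.linalg.lstsq(Z, y_tgt, rcond=None)[0]       # cheap second stage
print("relative target residual:",
      np.linalg.norm(Z @ head - y_tgt) / np.linalg.norm(y_tgt))
```

The point of the sketch is the division of labor: the expensive estimation happens once at stage one across many tasks, and the per-task work at stage two reduces to fitting a few parameters.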

In the first part of the talk, I will explain how we study the theoretical properties of multi-stage learning in the context of meta-learning. In doing so, we provide the first theoretical justification for the use of meta-learning in practice, even when the prediction function is highly nonlinear. While meta-learning concerns learning model X from many tasks at the first stage, in the second part of the talk I will address how model X can be learned from only one task. Specifically, we study multi-stage learning in the conditional average treatment effect setting, where the kernel conditional mean embedding must be estimated at the first stage. We show that a minimax optimal learning rate can be achieved for estimating the conditional mean embedding.
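For readers unfamiliar with the first-stage object, the standard empirical estimator of the conditional mean embedding is a weighted sum of feature maps, mu_hat(x) = sum_i alpha_i(x) k_Y(Y_i, .), with weights alpha(x) = (K_X + n*lambda*I)^{-1} k_X(x). The sketch below (illustrative only; the Gaussian kernel, its bandwidth, and the regularization lambda are assumptions, not choices from the talk) evaluates this estimator on synthetic data; with a linear kernel on Y it reduces to an estimate of E[Y | X = x].

```python
import numpy as np

def gauss_kernel(A, B, sigma=0.5):
    # Gaussian (RBF) kernel matrix between the rows of A and B
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(0)
n, lam = 300, 1e-3
X = rng.uniform(-3, 3, size=(n, 1))
Y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

# Weights of the empirical conditional mean embedding at a test point:
# alpha(x) = (K_X + n*lam*I)^{-1} k_X(x)
K = gauss_kernel(X, X)
x_test = np.array([[1.0]])
alpha = np.linalg.solve(K + n * lam * np.eye(n), gauss_kernel(X, x_test))

# With a linear kernel on Y, <mu_hat(x), identity feature> is sum_i alpha_i Y_i,
# i.e. an estimate of the conditional expectation E[Y | X = 1.0].
est = Y @ alpha.ravel()
print("estimate:", est, " truth:", np.sin(1.0))
```

The ridge term n*lambda*I regularizes the first-stage regression; the talk's result concerns how fast such estimators converge, showing the rate can be made minimax optimal.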