Date: 18 January 2024, Thursday
Location: S16-06-118, Seminar Room
Time: 3pm, Singapore
Although distributional reinforcement learning (DRL) has been widely examined in the past few years, very few studies investigate the validity of the Q-function estimator obtained in the distributional setting. We discuss our work on ensuring the monotonicity of the learned quantile estimates and on the theoretical necessity of doing so. Moreover, we undertake a comprehensive analysis of how the approximation errors within the Q-function affect the overall training process in DRL. We theoretically analyze and empirically demonstrate techniques that reduce both the bias and the variance of these error terms, ultimately yielding improved performance in practical applications.
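As context for the monotonicity issue, the sketch below shows one common way non-crossing quantile estimates can be enforced in quantile-based DRL; this is a minimal illustration of the general idea, not necessarily the speaker's method. The parameterization (first output as a base quantile, softplus increments for the rest) is an assumption for this example.

```python
import numpy as np

def monotone_quantiles(raw_outputs: np.ndarray) -> np.ndarray:
    """Map unconstrained network outputs to non-decreasing quantile estimates.

    The first raw output sets the lowest quantile; each subsequent
    quantile adds a non-negative (softplus) increment, so the returned
    estimates can never cross. This is one illustrative parameterization,
    not a specific published algorithm.
    """
    base = raw_outputs[..., :1]
    # softplus(x) = log(1 + e^x) >= 0, computed stably via logaddexp
    increments = np.logaddexp(0.0, raw_outputs[..., 1:])
    return np.concatenate(
        [base, base + np.cumsum(increments, axis=-1)], axis=-1
    )

# Example: five unconstrained outputs become five non-decreasing quantiles.
raw = np.array([0.3, -1.2, 0.5, 2.0, -0.7])
print(monotone_quantiles(raw))
```

Without such a constraint, independently learned quantile heads can produce crossing estimates, i.e. an invalid return distribution, which is one motivation for studying the validity of the estimator discussed in the talk.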