Model selection by penalization in mixture of experts models with a non-asymptotic approach.

Image credit: TrungTin Nguyen


This study addresses the problem of model selection among a collection of Gaussian-gated localized mixture of experts models, characterized by the number of mixture components and the complexity of the Gaussian mean experts, in a penalized maximum likelihood estimation framework. In particular, we establish non-asymptotic risk bounds that take the form of weak oracle inequalities, provided that lower bounds on the penalties hold. The good empirical behavior of the resulting criteria is then demonstrated on synthetic and real datasets.

Jun 13, 2022 2:00 PM — Jun 17, 2022 4:00 PM
Université Claude Bernard Lyon 1, France
TrungTin Nguyen
Postdoctoral Research Fellow

A central theme of my research is data science at the intersection of statistical learning, machine learning and optimization.