Model selection by penalization in mixture of experts models with a non-asymptotic approach

Abstract

This study addresses the problem of model selection among a collection of Gaussian-gated localized mixture of experts models, characterized by the number of mixture components and the complexity of the Gaussian mean experts, within a penalized maximum likelihood estimation framework. In particular, we establish non-asymptotic risk bounds in the form of weak oracle inequalities, provided that lower bounds on the penalties hold. The good empirical behavior of these penalties is then demonstrated on synthetic and real datasets.
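As a rough illustration of the penalized-likelihood selection principle described above, the sketch below selects the number of components of a plain one-dimensional Gaussian mixture (fitted by EM in NumPy) by minimizing a penalized negative log-likelihood with a BIC-type penalty. The stand-in model, the synthetic data, and the penalty constant are illustrative assumptions, not the Gaussian-gated mixture of experts model or the penalty shape derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: a well-separated two-component Gaussian mixture (illustrative).
n = 500
x = np.concatenate([rng.normal(-2.0, 0.7, n // 2), rng.normal(2.0, 0.7, n // 2)])

def component_densities(x, w, mu, var):
    # n x k matrix of weighted Gaussian densities.
    return w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def fit_gmm(x, k, n_iter=200):
    """Fit a 1D k-component Gaussian mixture by EM; return the final log-likelihood."""
    n = len(x)
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread initial means over the data
    var = np.full(k, x.var())
    for _ in range(n_iter):
        # E-step: posterior responsibilities of each component for each point.
        dens = component_densities(x, w, mu, var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means, and variances.
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-3  # variance floor
    return np.log(component_densities(x, w, mu, var).sum(axis=1)).sum()

def dim(k):
    # Free parameters: (k - 1) weights + k means + k variances.
    return 3 * k - 1

# Penalized criterion: -loglik + (1/2) * dim(k) * log(n), i.e. a BIC-type penalty.
scores = {k: -fit_gmm(x, k) + 0.5 * dim(k) * np.log(len(x)) for k in range(1, 5)}
k_hat = min(scores, key=scores.get)
print(k_hat)  # selected number of components
```

The penalty grows with the model dimension, so richer models must improve the likelihood by more than the penalty increment to be selected; the paper's oracle inequalities make this trade-off precise for the mixture of experts setting.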

Publication
53èmes Journées de Statistique de la Société Française de Statistique (SFdS)
TrungTin Nguyen
