Non-asymptotic penalization criteria for model selection in mixture of experts models

Image credit: Dung N. Nguyen & Florence Forbes

Abstract

Mixture of experts (MoE) is a popular class of models in statistics and machine learning that has sustained attention over the years, due to its flexibility and effectiveness. We consider the Gaussian-gated localized MoE (GLoME) regression model for modeling heterogeneous data. This model poses challenging questions for both statistical estimation and model selection, including feature selection, from the computational as well as the theoretical point of view. We study the problem of selecting the GLoME model, characterized by its number of mixture components, in a penalized maximum likelihood estimation framework. We provide a lower bound on the penalty that ensures a weak oracle inequality is satisfied by our estimator. To support our theoretical result, we perform numerical experiments on simulated and real data, which illustrate the performance of our finite-sample oracle inequality.
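To give a concrete flavor of penalized maximum likelihood model selection over the number of components, here is a minimal, hypothetical Python sketch. It is not the GLoME procedure from the talk: as a stand-in, it fits a joint Gaussian mixture on (x, y) pairs (whose conditional distribution has the form of a Gaussian-gated MoE) with scikit-learn, and selects K by minimizing a penalized negative log-likelihood with an assumed penalty shape kappa * dim(K) * log(n). The data, the constant kappa, and the penalty form are illustrative assumptions only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical heterogeneous regression data with 3 latent regimes.
rng = np.random.default_rng(0)
n = 1000
z = rng.integers(0, 3, size=n)
x = rng.normal(loc=2.0 * z, scale=0.5, size=n)
y = np.where(z == 0, 1.0 + 0.5 * x,
    np.where(z == 1, -2.0 + 1.5 * x, 3.0 - x)) + rng.normal(scale=0.3, size=n)
xy = np.column_stack([x, y])

# Penalized maximum likelihood over the number of components K.
# pen(K) = kappa * dim(K) * log(n) is an assumed stand-in for a penalty
# satisfying the theoretical lower bound; kappa is a tuning constant.
kappa = 1.0
best_K, best_crit = None, np.inf
for K in range(1, 11):
    gmm = GaussianMixture(n_components=K, covariance_type="full",
                          n_init=5, random_state=0).fit(xy)
    neg_loglik = -gmm.score(xy) * n        # total negative log-likelihood
    dim = 6 * K - 1                        # free parameters of a 2-D full-covariance GMM
    crit = neg_loglik + kappa * dim * np.log(n)
    if crit < best_crit:
        best_K, best_crit = K, crit

print(f"Selected number of components: {best_K}")
```

In practice, the constant in front of the penalty can be calibrated from the data (for instance via slope heuristics) rather than fixed a priori as in this sketch.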

Date
Apr 8, 2021 9:00 AM — Apr 9, 2021 3:00 PM
Location
Laboratory of Mathematics Raphaël Salem (LMRS, UMR CNRS 6085)
Rouen, Normandie, France
TrungTin Nguyen
Postdoctoral Research Fellow

A central theme of my research is data science at the intersection of statistical learning, machine learning and optimization.
