# Biography

Hi there and welcome! My Vietnamese name is Nguyễn Trung Tín; I therefore use “TrungTin Nguyen” or “Trung Tin Nguyen” in my English publications, and my first name is “Tín”, or “Tin” for short. I am currently a Postdoctoral Fellow at Inria Grenoble-Rhône-Alpes in the Statify team, where I am very fortunate to be mentored by Senior Researcher Florence Forbes, Senior Lecturer Hien Duy Nguyen, and Associate Researcher Julyan Arbel. I completed my Ph.D. in Statistics and Data Science at Normandie Univ, UNICAEN, CNRS, LMNO, Caen, France, in December 2021, where I was very fortunate to be advised by Professor Faicel Chamroukhi. During my Ph.D. research, I was also very fortunate to collaborate with Professor Geoff McLachlan, focusing on mixture models. I received a Visiting PhD Fellowship at the Inria Grenoble-Rhône-Alpes Research Centre, working with Senior Researcher Florence Forbes and Associate Researcher Julyan Arbel in the Statify team under the LANDER project (from September 2020 to January 2021).

A central theme of my research is Data Science, at the interface of:

• Statistical learning: Model selection (minimal penalties and slope heuristics, non-asymptotic oracle inequalities), simulation-based inference (approximate Bayesian computation, Bayesian synthetic likelihood, method of moments), Bayesian nonparametrics (Gibbs-type priors such as Dirichlet process mixture models), high-dimensional statistics (variable selection and regularization such as Lasso, graphical models).
• Machine learning: Supervised learning (deep hierarchical mixtures of experts (MoE) such as polynomial regression and logistic regression, deep neural networks), unsupervised learning (clustering such as mixture models, dimensionality reduction such as principal component analysis, deep generative models such as variational autoencoders, generative adversarial networks and normalizing flow models), reinforcement learning (partially observable Markov decision process).
• Optimization: Robust and effective optimization algorithms for deep hierarchical MoE (expectation–maximization (EM) algorithm, variational Bayesian EM algorithm, online and mini-batch majorization-minimization (MM) algorithms), difference-of-convex algorithms, optimal transport (Wasserstein distance).
• Biostatistics: Statistical learning and machine learning for large biological data sets (genomics, transcriptomics, proteomics).
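As a small taste of the first optimization theme above, here is a minimal illustrative sketch (not code from any publication) of the EM algorithm for the simplest case it applies to, a two-component univariate Gaussian mixture; all names and initialization choices are illustrative.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """EM for a two-component univariate Gaussian mixture (illustrative sketch)."""
    # Initialize: place the two means at the lower and upper quartiles.
    pi = 0.5
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    sigma = np.array([x.std(), x.std()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        weighted = dens * np.array([pi, 1.0 - pi])
        resp = weighted / weighted.sum(axis=1, keepdims=True)
        # M-step: update mixing weight, means, and standard deviations.
        nk = resp.sum(axis=0)
        pi = nk[0] / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2) .sum(axis=0) / nk)
    return pi, mu, sigma

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
pi, mu, sigma = em_gmm_1d(x)
```

The E-step and M-step each increase a minorizing surrogate of the log-likelihood, which is exactly the MM viewpoint mentioned above; MoE models replace the constant mixing weight `pi` with covariate-dependent gating functions.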

### Interests

• Data Science
• Statistics
• Artificial Intelligence

### Education

• Ph.D. in Statistics and Data Science, 2018-2021

Université de Caen Normandie, France

• M.S. in Applied Mathematics, 2017-2018

Université d'Orléans, France

• B.S. Honors Program in Mathematics and Computer Science, 2013-2017

Vietnam National University Ho Chi Minh City - University of Science, Vietnam

# Publications

(2022). Summary statistics and discrepancy measures for approximate Bayesian computation via surrogate posteriors. Statistics and Computing.

(2022). A non-asymptotic approach for model selection via penalization in high-dimensional mixture of experts. Electronic Journal of Statistics.

(2022). Mixture of expert posterior surrogates for approximate Bayesian computation. 53èmes Journées de Statistique de la Société Française de Statistique (SFdS).

(2022). Model selection by penalization in mixture of experts models with a non-asymptotic approach. 53èmes Journées de Statistique de la Société Française de Statistique (SFdS).

(2022). Approximation of probability density functions via location-scale finite mixtures in Lebesgue spaces. Communications in Statistics - Theory and Methods.

(2021). Approximations of conditional probability density functions in Lebesgue spaces via mixture of experts models. Journal of Statistical Distributions and Applications.

(2021). A non-asymptotic model selection in block-diagonal mixture of polynomial experts models. arXiv preprint arXiv:2104.08959.

(2020). An l1-oracle inequality for the Lasso in mixture-of-experts regression models. arXiv preprint arXiv:2009.10622.

(2020). Approximation by finite mixtures of continuous density functions that vanish at infinity. Cogent Mathematics & Statistics.

# Recent & Upcoming Talks

### A non-asymptotic approach for model selection via penalization in high-dimensional mixture of experts models

Mixtures of experts (MoE) are a popular class of statistical and machine learning models that have gained attention over the years due …

### Bayesian nonparametric mixture of experts for high-dimensional inverse problems

A wide class of problems can be formulated as inverse problems where the goal is to find parameter values that best explain some …

### Model selection by penalization in mixture of experts models with a non-asymptotic approach

This study is devoted to the problem of model selection among a collection of Gaussian-gated localized mixtures of experts models …

### A non-asymptotic approach for model selection via penalization in high-dimensional mixture of experts models

Mixtures of experts (MoE) are a popular class of statistical and machine learning models that have gained attention over the years due …

### A non-asymptotic approach for model selection via penalization in mixture of experts models

Mixture of experts (MoE), originally introduced as a neural network, is a popular class of statistical and machine learning models that …

### A non-asymptotic model selection in mixture of experts models

Mixture of experts (MoE), originally introduced as a neural network, is a popular class of statistical and machine learning models that …

### Model Selection and Approximation in High-dimensional Mixtures of Experts Models: From Theory to Practice

Mixtures of experts (MoE) models are a ubiquitous tool for the analysis of heterogeneous data across many fields including statistics, …

### Model Selection and Approximation in High-dimensional Mixtures of Experts Models: From Theory to Practice

Mixtures of experts (MoE) models are a ubiquitous tool for the analysis of heterogeneous data across many fields including statistics, …

### Approximation and non-asymptotic model selection in mixture of experts models

Mixtures of experts (MoE) models are a ubiquitous tool for the analysis of heterogeneous data across many fields including statistics, …

### Approximate Bayesian computation with surrogate posteriors

A key ingredient in approximate Bayesian computation (ABC) procedures is the choice of a discrepancy that describes how different the …

### A non-asymptotic model selection in mixture of experts models

Mixture of experts (MoE) is a popular class of models in statistics and machine learning that has sustained attention over the years, …

### Approximate Bayesian computation with surrogate posteriors

A key ingredient in approximate Bayesian computation (ABC) procedures is the choice of a discrepancy that describes how different the …

### Distance-based ABC procedures

Approximate Bayesian computation (ABC) has become an essential part of the Bayesian toolbox for addressing problems in which the …

### Non-asymptotic penalization criteria for model selection in mixture of experts models

Mixture of experts (MoE) is a popular class of models in statistics and machine learning that has sustained attention over the years, …

### Approximate Bayesian computation with surrogate posteriors

A key ingredient in approximate Bayesian computation (ABC) procedures is the choice of a discrepancy that describes how different the …