Hello and welcome! My Vietnamese name is Nguyễn Trung Tín, so I use “TrungTin Nguyen” or “Trung Tin Nguyen” in my English publications. My first name is “Tín”, or “Tin” for short.

From December 2023, I will be a Postdoctoral Research Fellow at The University of Queensland in the School of Mathematics and Physics, where I am very fortunate to be mentored by Hien Duy Nguyen and Xin Guo.

I am currently a Postdoctoral Research Fellow at the Inria centre at Université Grenoble Alpes in the Statify team, where I am very fortunate to be mentored by Florence Forbes and Julyan Arbel, and to collaborate with Hien Duy Nguyen, Nhat Ho, Huy Nguyen, Khai Nguyen, Quang Pham, Binh Nguyen, Giang Truong Do, Le Huy Khiem, Dung Ngoc Nguyen, and Ho Minh Duy Nguyen (collaborators listed in random order).

I completed my Ph.D. in Statistics and Data Science at Normandie Univ in December 2021, where I was very fortunate to be advised by Faicel Chamroukhi. During my Ph.D. research, I was grateful to collaborate with Hien Duy Nguyen and Geoff McLachlan. I also received a four-month Visiting PhD Fellowship at the Inria centre at Université Grenoble Alpes, in the Statify team, within the LANDER project.

A central theme of my research is data science, at the interface of:

  • Statistical learning: Model selection (minimal penalties and slope heuristics, non-asymptotic oracle inequalities), simulation-based inference (approximate Bayesian computation, Bayesian synthetic likelihood, method of moments), Bayesian nonparametrics (Gibbs-type priors, Dirichlet process mixture), high-dimensional statistics (variable selection via Lasso and penalization, graphical models), uncertainty estimation.
  • Machine learning: Supervised learning (deep hierarchical mixture of experts (DMoE), deep neural networks), unsupervised learning (clustering via mixture models, dimensionality reduction via principal component analysis, deep generative models via variational autoencoders, generative adversarial networks and normalizing flows), reinforcement learning (partially observable Markov decision process).
  • Optimization: Robust and effective optimization algorithms for mixture models (expectation–maximization, variational Bayesian expectation–maximization, Markov chain Monte Carlo methods), difference-of-convex algorithms, optimal transport (Wasserstein distance, Voronoi loss function).
  • Applications: Natural language processing (large language models), remote sensing (planetary science, e.g., retrieval of Mars surface physical properties from hyperspectral images), audio processing (sound source localization), biostatistics (genomics, transcriptomics, proteomics), computer vision (image segmentation).
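As a small illustration of one recurring tool in the interests above, the expectation–maximization (EM) algorithm for mixture models, here is a minimal sketch for a two-component univariate Gaussian mixture. All names and settings (the function `em_gmm_1d`, the initialization, the iteration count) are illustrative assumptions for this sketch, not taken from any of the papers listed below.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50, seed=0):
    """Illustrative EM for a two-component univariate Gaussian mixture."""
    rng = np.random.default_rng(seed)
    # Initialise: mixing weight, means (two random data points), variances.
    pi = 0.5
    mu = rng.choice(x, 2, replace=False)
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point.
        d0 = np.exp(-0.5 * (x - mu[0]) ** 2 / var[0]) / np.sqrt(2 * np.pi * var[0])
        d1 = np.exp(-0.5 * (x - mu[1]) ** 2 / var[1]) / np.sqrt(2 * np.pi * var[1])
        r = pi * d1 / ((1 - pi) * d0 + pi * d1)
        # M-step: re-estimate parameters from the responsibilities.
        pi = r.mean()
        mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                       np.sum(r * x) / np.sum(r)])
        var = np.array([np.sum((1 - r) * (x - mu[0]) ** 2) / np.sum(1 - r),
                        np.sum(r * (x - mu[1]) ** 2) / np.sum(r)])
    return pi, mu, var

# Usage: recover two well-separated clusters from simulated data.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
pi, mu, var = em_gmm_1d(x)
```

Each iteration alternates a closed-form E-step (responsibilities) and M-step (weighted moment updates), which is what makes EM so convenient for mixtures; the variational Bayesian and MCMC variants mentioned above replace these point updates with posterior approximations.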


  • Data Science
  • Statistics
  • Artificial Intelligence


  • Ph.D. in Statistics and Data Science, 2018-2021

    Université de Caen Normandie, France

  • M.S. in Applied Mathematics, 2017-2018

    Université d'Orléans, France

  • B.S. Honors Program in Mathematics and Computer Science, 2013-2017

    Vietnam National University-Ho Chi Minh University of Science, Vietnam


(2023). Demystifying Softmax Gating Function in Gaussian Mixture of Experts. Accepted at the Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS 2023) as a spotlight.

(2023). A General Theory for Softmax Gating Multinomial Logistic Mixture of Experts. arXiv:2310.14188.

(2023). HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts. Accepted at the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023), main conference.

(2023). Bayesian nonparametric mixture of experts for high-dimensional inverse problems. hal-04015203.

(2023). Concentration results for approximate Bayesian computation without identifiability. hal-03987197.

(2023). A non-asymptotic risk bound for model selection in high-dimensional mixture of experts via joint rank and variable selection. hal-03984011. Accepted at AJCAI 2023 as a long oral presentation.

(2022). A non-asymptotic approach for model selection via penalization in high-dimensional mixture of experts. Electronic Journal of Statistics.

(2022). Mixture of expert posterior surrogates for approximate Bayesian computation. 53èmes Journées de Statistique de la Société Française de Statistique (SFdS).

(2022). Model selection by penalization in mixture of experts models with a non-asymptotic approach. 53èmes Journées de Statistique de la Société Française de Statistique (SFdS).

(2022). Approximation of probability density functions via location-scale finite mixtures in Lebesgue spaces. Communications in Statistics - Theory and Methods.

(2021). Approximations of conditional probability density functions in Lebesgue spaces via mixture of experts models. Journal of Statistical Distributions and Applications.

(2021). A non-asymptotic model selection in block-diagonal mixture of polynomial experts models. arXiv preprint arXiv:2104.08959.

(2020). An l1-oracle inequality for the Lasso in mixture-of-experts regression models. arXiv preprint arXiv:2009.10622.

(2020). Approximation by finite mixtures of continuous density functions that vanish at infinity. Cogent Mathematics & Statistics.
