HyperRouter: Towards Efficient Training and Inference of Sparse Mixture of Experts

Abstract

By routing input tokens to only a few split experts, Sparse Mixture-of-Experts has enabled efficient training of large language models. Recent findings suggest that fixing the routers can achieve competitive performance by alleviating the collapsing problem, where all experts eventually learn similar representations. However, this strategy has two key limitations: (i) the policy derived from random routers might be sub-optimal, and (ii) it requires a substantial number of experts during evaluation, leading to limited efficiency gains during inference. This work introduces HyperRouter, which dynamically generates the router’s parameters through a fixed hypernetwork and trainable embeddings. Consequently, HyperRouter achieves a balance between training the routers and freezing them to learn an improved routing policy. Extensive experiments across a wide range of tasks demonstrate the superior performance and efficiency gains of HyperRouter compared to existing routing methods. Our implementation will be made available upon acceptance.
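To make the core idea concrete, below is a minimal PyTorch sketch of a router whose weight matrix is produced by a frozen hypernetwork conditioned on a trainable embedding, rather than being a directly trained (or fully frozen) parameter. This is an illustrative reading of the abstract only: the class and parameter names (d_embed, hypernet, top_k) and the single-linear-layer hypernetwork are assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HyperRouterSketch(nn.Module):
    """Illustrative sketch: router weights generated by a frozen hypernetwork.

    Assumed design (not from the paper): only `router_embedding` is trainable;
    the hypernetwork that maps it to router weights is fixed at initialization.
    """

    def __init__(self, d_model: int, num_experts: int, d_embed: int = 64):
        super().__init__()
        self.d_model = d_model
        self.num_experts = num_experts
        # Trainable embedding that conditions the hypernetwork.
        self.router_embedding = nn.Parameter(torch.randn(d_embed))
        # Fixed hypernetwork: maps the embedding to a flat router weight matrix.
        self.hypernet = nn.Linear(d_embed, d_model * num_experts)
        for p in self.hypernet.parameters():
            p.requires_grad_(False)  # frozen, so the routing policy is shaped only via the embedding

    def forward(self, x: torch.Tensor, top_k: int = 2):
        # Generate the router's weight matrix on the fly from the embedding.
        w = self.hypernet(self.router_embedding).view(self.d_model, self.num_experts)
        logits = x @ w                          # (batch, seq, num_experts)
        scores = F.softmax(logits, dim=-1)
        # Standard top-k token-to-expert assignment, as in sparse MoE routing.
        topk_scores, topk_idx = scores.topk(top_k, dim=-1)
        return topk_scores, topk_idx


# Hypothetical usage: route a batch of token representations to 2 of 8 experts.
router = HyperRouterSketch(d_model=512, num_experts=8)
x = torch.randn(4, 128, 512)                    # (batch, seq, d_model)
scores, expert_ids = router(x, top_k=2)
```

Because the hypernetwork is frozen, gradients reach the routing policy only through the low-dimensional embedding, which is one plausible way to sit between fully trained routers (prone to collapse) and fully random frozen ones (potentially sub-optimal), per the trade-off the abstract describes.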

Publication
In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023, Main Conference); acceptance rate: 14% out of 1,041 submissions.
TrungTin Nguyen
Postdoctoral Research Fellow

A central theme of my research is data science at the intersection of statistical learning, machine learning and optimization.
