MoS: Unleashing Parameter Efficiency of Low-Rank Adaptation with Mixture of Shards

Jan 1, 2025

Sheng Wang, Liheng Chen, Pengan Chen, Jingwei Dong, Boyang Xue, Jiyue Jiang, Lingpeng Kong, Chuan Wu
Abstract
MoS is a parameter-efficient fine-tuning method that unleashes the parameter efficiency of low-rank adaptation (LoRA) through a mixture of shards, significantly reducing the number of trainable parameters required for multi-user personalization scenarios.
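To make the idea concrete, the sketch below illustrates the general pattern of assembling low-rank adapter factors from a shared pool of shards instead of giving each layer its own full A/B matrices. This is a minimal, hypothetical illustration, not the paper's released implementation: the names `ShardPool` and `SharedShardLoRALinear`, the fixed shard indices, and the pool sizes are assumptions for demonstration, and MoS's actual shard selection and differentiation strategies are not reproduced here.

```python
# Minimal sketch (assumptions only): layers build their LoRA factors by
# concatenating shards drawn from global pools, so trainable parameters
# are shared across layers rather than duplicated per layer.
import torch
import torch.nn as nn


class ShardPool(nn.Module):
    """Global pool of trainable shards shared by all adapted layers."""

    def __init__(self, num_shards: int, shard_dim: int, feat_dim: int):
        super().__init__()
        # Each shard is a (feat_dim x shard_dim) slice of a low-rank factor.
        self.shards = nn.Parameter(torch.randn(num_shards, feat_dim, shard_dim) * 0.02)

    def assemble(self, indices: list[int]) -> torch.Tensor:
        # Concatenate selected shards along the rank dimension to form a
        # (feat_dim x rank) factor, where rank = len(indices) * shard_dim.
        return torch.cat([self.shards[i] for i in indices], dim=-1)


class SharedShardLoRALinear(nn.Module):
    """Frozen linear layer plus a low-rank update built from pooled shards."""

    def __init__(self, base: nn.Linear, pool_a: ShardPool, pool_b: ShardPool,
                 idx_a: list[int], idx_b: list[int], scaling: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # only the shared shard pools are trained
        self.pool_a, self.pool_b = pool_a, pool_b
        self.idx_a, self.idx_b = idx_a, idx_b
        self.scaling = scaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.pool_a.assemble(self.idx_a)   # (in_features, r)
        b = self.pool_b.assemble(self.idx_b)   # (out_features, r)
        delta = x @ a @ b.t() * self.scaling   # low-rank update
        return self.base(x) + delta


# Usage: two layers share the same pools but pick different shard subsets,
# so the trainable budget is the pool size, not a per-layer rank-r adapter.
pool_a = ShardPool(num_shards=8, shard_dim=2, feat_dim=64)   # A-side shards
pool_b = ShardPool(num_shards=8, shard_dim=2, feat_dim=64)   # B-side shards
layer1 = SharedShardLoRALinear(nn.Linear(64, 64), pool_a, pool_b, [0, 3], [1, 5])
layer2 = SharedShardLoRALinear(nn.Linear(64, 64), pool_a, pool_b, [2, 7], [4, 6])
y = layer2(layer1(torch.randn(4, 64)))
```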
Type
Publication
In The Thirteenth International Conference on Learning Representations (ICLR 2025)
Authors
Sheng Wang (Forence)
PhD Graduate in Computer Science
Sheng Wang is a PhD graduate from The University of Hong Kong, supervised by Prof. Chuan Wu and Prof. Lingpeng Kong. His research focuses on agents, LLM super-alignment, and data synthesis. He has published 14+ papers in top-tier conferences, including NeurIPS 2025 (Spotlight), ICLR 2025, ACL 2024/2025, and EMNLP 2025.