Model Inference Systems
- [ASPLOS 2025] Klotski: Efficient Mixture-of-Expert Inference via Expert-Aware Multi-Batch Pipeline
- Authors: Zhiyuan Fang, Yuegui Huang, Zicong Hong, Yufeng Lyu, Wuhui Chen, Yue Yu, Fan Yu, Zibin Zheng
- Link
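
A rough sketch of the expert-aware multi-batch pipelining idea, illustrative only: while the current micro-batch computes, the experts the next micro-batch needs are copied to the GPU, hiding weight transfer behind computation. All names here are hypothetical, not Klotski's API.

```python
# Toy pipeline: overlap next-batch expert fetches with current-batch compute.
# Assumes the experts each micro-batch routes to are known (or predicted).
from concurrent.futures import ThreadPoolExecutor

EXPERTS_ON_GPU = set()              # hypothetical set of resident experts

def fetch_expert(e):                # stands in for a host-to-GPU weight copy
    EXPERTS_ON_GPU.add(e)

def run_batch(batch, experts):      # stands in for the MoE forward pass
    assert experts <= EXPERTS_ON_GPU
    return f"batch {batch} used experts {sorted(experts)}"

schedule = [(0, {1, 2}), (1, {2, 3}), (2, {3, 4})]   # (micro-batch, experts)

with ThreadPoolExecutor(max_workers=1) as copier:
    for e in schedule[0][1]:
        fetch_expert(e)                              # warm up the first batch
    for i, (batch, experts) in enumerate(schedule):
        nxt = schedule[i + 1][1] if i + 1 < len(schedule) else set()
        pending = [copier.submit(fetch_expert, e) for e in nxt - EXPERTS_ON_GPU]
        print(run_batch(batch, experts))             # compute overlaps the copies
        for f in pending:
            f.result()                               # copies finish before next batch
```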
- [ASPLOS 2025] MoE-Lightning: High-Throughput MoE Inference on Memory-constrained GPUs
- Authors: Shiyi Cao, Shu Liu, Tyler Griggs, Peter Schafhalter, Xiaoxuan Liu, Ying Sheng, Joseph E. Gonzalez, Matei Zaharia, Ion Stoica
- Link, Code
- [FAST 2025] Mooncake: Trading More Storage for Less Computation — A KVCache-centric Architecture for Serving LLM Chatbot
- Authors: Ruoyu Qin, Zheming Li, Weiran He, Jialei Cui, Feng Ren, Mingxing Zhang, Yongwei Wu, Weimin Zheng, Xinran Xu
- Link, Code
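
A minimal sketch of the "trade storage for computation" idea suggested by the title: KV blocks are keyed by a hash of the whole token prefix, so a new request only runs prefill on the part of its prompt whose KV is not already stored. Toy code with made-up names, not Mooncake's actual design.

```python
import hashlib

BLOCK = 4                                   # tokens per KV-cache block
kv_store = {}                               # prefix hash -> (placeholder) KV block

def prefix_key(tokens):
    return hashlib.sha256(bytes(tokens)).hexdigest()

def prefill(prompt):
    reused = 0
    for end in range(BLOCK, len(prompt) + 1, BLOCK):
        key = prefix_key(prompt[:end])      # key covers the entire prefix
        if key in kv_store:
            reused = end                    # KV for this prefix already exists
        else:
            kv_store[key] = f"kv[{end - BLOCK}:{end}]"   # compute and store
    return reused

print("reused", prefill([1, 2, 3, 4, 5, 6, 7, 8]), "tokens")   # cold: reused 0
print("reused", prefill([1, 2, 3, 4, 9, 9, 9, 9]), "tokens")   # shares one block
```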
- [ICCAD 2024] AdapMoE: Adaptive Sensitivity-based Expert Gating and Management for Efficient MoE Inference
- Authors: Shuzhang Zhong, Ling Liang, Yuan Wang, Runsheng Wang, Ru Huang, Meng Li
- Link, Code
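
A loose sketch of sensitivity-based gating, illustrative only: routed-to experts with small gate weights are dropped when they would force a fetch from host memory, with a fixed threshold standing in for the paper's sensitivity estimate. Names and the threshold value are assumptions.

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def gate(logits, top_k, cached, tau=0.2):
    # tau is a stand-in for a sensitivity-derived skip threshold
    ranked = sorted(enumerate(softmax(logits)), key=lambda p: -p[1])[:top_k]
    kept = [(i, p) for i, p in ranked if i in cached or p >= tau]
    total = sum(p for _, p in kept)
    return [(i, p / total) for i, p in kept]    # renormalize kept weights

# Expert 3 is routed to but uncached and low-weight, so it is skipped.
print(gate([2.0, 1.5, 0.1, 0.3], top_k=3, cached={0, 1}))
```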
- [arXiv 2025] MegaScale-Infer: Serving Mixture-of-Experts at Scale with Disaggregated Expert Parallelism
- Authors: Ruidong Zhu, Ziheng Jiang, Chao Jin, Peng Wu, Cesar A. Stuardo, Dongyang Wang, Xinlei Zhang, Huaping Zhou, Haoran Wei, Yang Cheng, Jianzhe Xiao, Xinyi Zhang, Lingjun Liu, Haibin Lin, Li-Wen Chang, Jianxi Ye, Xiao Yu, Xuanzhe Liu, Xin Jin, Xin Liu
- Link
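
A bare-bones illustration of disaggregated expert parallelism as the title describes it: attention and expert (FFN) modules run on separate pools, and micro-batches alternate between them so that, in the real system, both pools stay busy at once. Purely a sketch, not MegaScale-Infer's implementation.

```python
def attention_pool(micro_batch):          # stands in for the attention nodes
    return f"attn({micro_batch})"

def expert_pool(hidden):                  # stands in for the expert (FFN) nodes
    return f"experts({hidden})"

micro_batches = ["mb0", "mb1"]
inflight = {}
for step in range(4):                     # micro-batches ping-pong between pools
    mb = micro_batches[step % 2]
    if mb in inflight:
        print(expert_pool(inflight.pop(mb)))   # expert phase
    else:
        inflight[mb] = attention_pool(mb)      # attention phase
```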
- [OSDI 2024] DistServe: Disaggregating Prefill and Decoding for Goodput-optimized Large Language Model Serving
- Authors: Yinmin Zhong, Shengyu Liu, Junda Chen, Jianbo Hu, Yibo Zhu, Xuanzhe Liu, Xin Jin, Hao Zhang
- Link, Code
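
A minimal sketch of prefill/decode disaggregation, concept only: the compute-bound prefill phase and the memory-bound decode phase run in separate worker pools with the KV cache handed over in between, so each phase can be provisioned for its own latency target. Names are hypothetical.

```python
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Request:
    rid: int
    prompt: list
    kv: object = None
    output: list = field(default_factory=list)

decode_queue: Queue = Queue()

def prefill_worker(req):                # compute-bound: whole prompt at once
    req.kv = f"kv({len(req.prompt)} tokens)"
    decode_queue.put(req)               # KV cache handed to the decode pool

def decode_worker():                    # memory-bound: one token per step
    req = decode_queue.get()
    for step in range(3):               # toy generation of three tokens
        req.output.append(f"tok{step}")
    return req

prefill_worker(Request(0, [1, 2, 3, 4]))
print(decode_worker().output)
```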
- [OSDI 2024] Taming Throughput-Latency Tradeoff in LLM Inference with Sarathi-Serve
- Authors: Amey Agrawal, Nitin Kedia, Ashish Panwar, Jayashree Mohan, Nipun Kwatra, Bhargav Gulavani, Alexey Tumanov, Ramachandran Ramjee
- Link, Code
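
A toy version of the stall-free batching idea behind the title: prefills are split into chunks and packed into the same iteration as ongoing decodes under a per-iteration token budget, so decode steps never pause for a long prefill. The budget and request names are made up.

```python
TOKEN_BUDGET = 8

decodes  = ["reqA", "reqB"]             # each consumes one token slot per step
prefills = [("reqC", 14)]               # (request, prompt tokens remaining)

while prefills:
    batch, budget = [], TOKEN_BUDGET
    for r in decodes:                   # decodes are admitted first...
        batch.append((r, 1))
        budget -= 1
    if budget > 0:                      # ...then a prefill chunk fills the rest
        rid, left = prefills[0]
        chunk = min(left, budget)
        batch.append((rid, chunk))
        prefills[0] = (rid, left - chunk)
        if prefills[0][1] == 0:         # prefill done: request joins the decodes
            prefills.pop(0)
            decodes.append(rid)
    print("iteration:", batch)
```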
- [SOSP 2024] LoongServe: Efficiently Serving Long-Context Large Language Models with Elastic Sequence Parallelism
- Authors: Bingyang Wu, Shengyu Liu, Yinmin Zhong, Peng Sun, Xuanzhe Liu, Xin Jin
- Link, Code
- [SOSP 2024] PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU
- Authors: Yixin Song, Zeyu Mi, Haotong Xie, Haibo Chen
- Link, Code
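
A toy split of one FFN layer into hot and cold neurons, reflecting the title's premise rather than the paper's code: a small, frequently activated hot set is kept on the GPU, the long tail stays in CPU memory, and a predictor (faked here with randomness) skips neurons unlikely to fire.

```python
import random

random.seed(0)
N_NEURONS = 10
hot = set(range(3))                     # stand-in for profiled hot neurons

def predictor(x):                       # stand-in for the learned activation predictor
    return {i for i in range(N_NEURONS) if random.random() < 0.4}

def ffn(x):
    active = predictor(x)
    on_gpu = active & hot               # cheap: weights already resident on GPU
    on_cpu = active - hot               # slower path for rarely used weights
    return on_gpu, on_cpu

gpu_part, cpu_part = ffn("token")
print("GPU neurons:", sorted(gpu_part), "CPU neurons:", sorted(cpu_part))
```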
- [SOSP 2023] Efficient Memory Management for Large Language Model Serving with PagedAttention
- Authors: Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, Ion Stoica
- Link, Code
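
A minimal sketch of paged KV-cache bookkeeping: each sequence keeps a block table mapping logical KV blocks to fixed-size physical blocks, so memory is allocated on demand with no contiguous per-request reservation. Toy code, not vLLM's implementation.

```python
BLOCK_SIZE = 4                          # tokens per KV block

free_blocks = list(range(8))            # physical block pool
block_tables = {}                       # sequence id -> list of physical block ids

def append_token(seq, n_tokens_so_far):
    table = block_tables.setdefault(seq, [])
    if n_tokens_so_far % BLOCK_SIZE == 0:     # current block full (or none yet)
        table.append(free_blocks.pop(0))      # map the next logical block
    return table

def release(seq):                       # blocks return to the pool when done
    free_blocks.extend(block_tables.pop(seq))

for t in range(6):                      # sequence "s0" holds 6 tokens
    append_token("s0", t)
print(block_tables["s0"])               # two blocks cover 6 tokens, e.g. [0, 1]
release("s0")
```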
- [OSDI 2022] Orca: A Distributed Serving System for Transformer-Based Generative Models
- Authors: Gyeong-In Yu, Joo Seong Jeong, Geon-Woo Kim, Soojeong Kim, Byung-Gon Chun
- Link
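
A toy rendering of Orca's iteration-level scheduling (the idea now usually called continuous batching), illustrative only: the batch is rebuilt at every model step, so a finished sequence leaves immediately and a waiting one takes its slot instead of waiting for the whole batch to drain.

```python
from collections import deque

MAX_BATCH = 2
waiting = deque([("r1", 3), ("r2", 1), ("r3", 2)])   # (request, tokens to go)
running = []

step = 0
while waiting or running:
    while waiting and len(running) < MAX_BATCH:      # admit at every iteration
        running.append(list(waiting.popleft()))
    for r in running:
        r[1] -= 1                                    # one decode step per request
    done = [r[0] for r in running if r[1] == 0]
    running = [r for r in running if r[1] > 0]       # leave as soon as finished
    step += 1
    print(f"step {step}: finished {done or '-'}")
```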