Carnegie Mellon University

SpecInfer: Accelerating Generative Large Language Model Serving with Speculative Inference and Token Tree Verification

Thesis posted on 2024-02-12, authored by Xinhao Cheng

The high computational and memory requirements of generative large language models (LLMs) make it challenging to serve them quickly and cheaply. This thesis introduces SpecInfer, an LLM serving system that accelerates generative LLM inference with speculative inference and token tree verification. A key insight behind SpecInfer is to combine multiple collectively boost-tuned small language models to jointly predict the LLM's outputs; the predictions are organized as a token tree, whose nodes each represent a candidate token sequence. The correctness of all candidate token sequences represented by a token tree is verified against the LLM in parallel using a novel tree-based parallel decoding mechanism. SpecInfer uses the LLM as a token tree verifier instead of an incremental decoder, which significantly reduces the end-to-end latency and computational requirements of serving generative LLMs while provably preserving model quality. Our evaluation shows that SpecInfer outperforms existing LLM serving systems by 1.3-2.4x for distributed LLM inference and by 2.6-3.5x for offloading-based LLM inference, while preserving the same generative performance. SpecInfer is publicly available at https://github.com/flexflow/FlexFlow/tree/inference
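For intuition, the verification idea can be sketched in a few lines of Python. The sketch below is illustrative only: it checks a speculated token tree against the LLM one greedy step at a time, whereas SpecInfer scores every tree node in a single batched pass via tree-based parallel decoding. TreeNode, llm_next_token, and verify_token_tree are hypothetical names introduced here for illustration, not SpecInfer's actual API.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TreeNode:
        """One node of a speculated token tree (hypothetical structure).
        Each root-to-node path is a candidate token sequence."""
        token: int
        children: List["TreeNode"] = field(default_factory=list)

    def llm_next_token(prefix: List[int]) -> int:
        """Hypothetical placeholder for one greedy LLM decoding step:
        returns the token the LLM would emit after `prefix`."""
        raise NotImplementedError

    def verify_token_tree(prompt: List[int], roots: List[TreeNode]) -> List[int]:
        """Walk the speculated tree, keeping a branch only while each
        node's token matches what the LLM itself would produce at that
        position. SpecInfer verifies all nodes in parallel; this loop is
        sequential purely for clarity."""
        accepted: List[int] = []
        prefix = list(prompt)
        frontier = roots
        while True:
            target = llm_next_token(prefix)      # the LLM's own next token
            match = next((n for n in frontier if n.token == target), None)
            if match is None:
                accepted.append(target)          # no candidate agrees: keep
                return accepted                  # the LLM's token and stop
            accepted.append(match.token)         # speculation verified
            prefix.append(match.token)
            frontier = match.children
            if not frontier:                     # tree exhausted
                return accepted

Because every accepted token is exactly the token the LLM would have produced itself, the served model's output is unchanged, which is why tree verification can replace incremental decoding while provably preserving model quality.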

History

Date: 2024-01-01
Degree Type: Master's Thesis
Department: Information Networking Institute
Degree Name: Master of Science (MS)
Advisor(s): Zhihao Jia
