- XQuant: Breaking the Memory Wall for LLM Inference with KV Cache Rematerialization (arXiv:2508.10395, published Aug 14, 2025)
- QuantSpec: Self-Speculative Decoding with Hierarchical Quantized KV Cache (arXiv:2502.10424, published Feb 5, 2025)