A High-Performance LLM Inference Engine with vLLM-Style Continuous Batching
Updated Dec 15, 2025 - C++
gLLM: Global Balanced Pipeline Parallelism System for Distributed LLM Serving with Token Throttling
OpenAI-compatible server with continuous batching for MLX on Apple Silicon