This project is a very lightweight LLM client.
The main idea is to not depend on any LLM client library.
- use the `api_key` parameter of `LLMConfig`

```python
from lite_llm_client import LiteLLMClient, OpenAIConfig, LLMMessage, LLMMessageRole

client = LiteLLMClient(OpenAIConfig(api_key="YOUR API KEY"))
answer = client.chat_completions(messages=[LLMMessage(role=LLMMessageRole.USER, content="hello ai?")])
print(answer)
```
- use `.env` (a key-from-environment sketch follows the file listing below)
  - rename `.env_example` to `.env`
  - replace `YOUR KEY` with your real API key

```
OPENAI_API_KEY=YOUR KEY
ANTHROPIC_API_KEY=YOUR KEY
GEMINI_API_KEY=YOUR KEY
```
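When the key comes from `.env`, the client can be built without putting the key in code. A minimal sketch, assuming `OpenAIConfig` falls back to the `OPENAI_API_KEY` environment variable when `api_key` is omitted (verify against the package source):

```python
from lite_llm_client import LiteLLMClient, OpenAIConfig, LLMMessage, LLMMessageRole

# Assumption: OpenAIConfig picks up OPENAI_API_KEY from the environment
# (loaded from .env) when no api_key argument is given.
client = LiteLLMClient(OpenAIConfig())
answer = client.chat_completions(
    messages=[LLMMessage(role=LLMMessageRole.USER, content="hello ai?")]
)
print(answer)
```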
- the Gemini API path may not be stable: the guide code uses `/v1beta/...`, and Gemini sometimes returns an HTTP 500 error
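Since those HTTP 500 responses are usually transient, a simple retry loop with backoff is one way to cope. A minimal sketch, assuming a `GeminiConfig` class analogous to `OpenAIConfig` (that name is an assumption) and that a failed request raises an exception:

```python
import time

from lite_llm_client import LiteLLMClient, GeminiConfig, LLMMessage, LLMMessageRole

# GeminiConfig is assumed by analogy with OpenAIConfig; check the real name.
client = LiteLLMClient(GeminiConfig(api_key="YOUR API KEY"))
messages = [LLMMessage(role=LLMMessageRole.USER, content="hello ai?")]

answer = None
for attempt in range(3):
    try:
        answer = client.chat_completions(messages=messages)
        break
    except Exception:  # the library's exact exception type is not documented here
        # back off briefly before retrying a transient HTTP 500
        time.sleep(2 ** attempt)
print(answer)
```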
TODO:
- support multimodal (image and text)
- apply OpenTelemetry in async functions
- support batch (OpenAI)
Changelog:
- 2025-01-11: support JSON mode (OpenAI)
- 2024-11-01: apply OpenTelemetry in sync functions
- 2024-10-25: fix an exception raised when a new model name is passed as a str
- 2024-07-21: support OpenAI
- 2024-07-25: support Anthropic
- 2024-07-27: add options for inference
- 2024-07-28: support Gemini
- 2024-07-30: support streaming (OpenAI); simple SSE implementation
- 2024-07-31: support streaming (Anthropic)
- 2024-08-01: support streaming (Gemini); Google Gemini remains unstable
- 2024-08-13: support inference result (token count, stop reason)
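To illustrate the streaming entries above, here is a hedged sketch of consuming chunks as they arrive over SSE. The method name `chat_completions_stream` is hypothetical; the actual streaming entry point in `lite_llm_client` may be named differently, so check the package source before using it:

```python
from lite_llm_client import LiteLLMClient, OpenAIConfig, LLMMessage, LLMMessageRole

client = LiteLLMClient(OpenAIConfig(api_key="YOUR API KEY"))
messages = [LLMMessage(role=LLMMessageRole.USER, content="hello ai?")]

# Hypothetical method name: assumes the streaming variant yields text chunks
# as they arrive over SSE. Verify the real API in the package source.
for chunk in client.chat_completions_stream(messages=messages):
    print(chunk, end="", flush=True)
```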