
Commit e1c0025

docs litellm x langfuse cookbook

1 parent: a551f97

1 file changed (+92 -3 lines)

cookbook/logging_observability/LiteLLM_Proxy_Langfuse.ipynb (+92 -3)
@@ -128,21 +128,110 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "### 2.3 View Traces on Langfuse"
+ "### 2.3 View Traces on Langfuse\n",
+ "LiteLLM will send the request / response, model, tokens (input + output), and cost to Langfuse.\n",
+ "\n",
+ "![image_description](litellm_proxy_langfuse.png)"
 ]
 },
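The markdown cell above says the request/response, model, token counts, and cost get logged to Langfuse. As a quick client-side cross-check (not part of the committed notebook), the sketch below reads the same fields off the proxy response. It assumes the `LITELLM_VIRTUAL_KEY` / `LITELLM_PROXY_BASE_URL` variables defined earlier in the notebook and reuses the `us.amazon.nova-micro-v1:0` model from config.yaml; the `x-litellm-response-cost` header name is an assumption to verify against your LiteLLM proxy version.

```python
# Sketch: mirror the fields LiteLLM logs to Langfuse (model, input/output tokens, cost).
import openai

client = openai.OpenAI(api_key=LITELLM_VIRTUAL_KEY, base_url=LITELLM_PROXY_BASE_URL)

# with_raw_response exposes the HTTP headers alongside the parsed ChatCompletion
raw = client.chat.completions.with_raw_response.create(
    model="us.amazon.nova-micro-v1:0",
    messages=[{"role": "user", "content": "what is Langfuse?"}],
)
response = raw.parse()

print(response.model)                              # model, as shown on the trace
print(response.usage.prompt_tokens)                # input tokens
print(response.usage.completion_tokens)            # output tokens
print(raw.headers.get("x-litellm-response-cost"))  # cost header (assumed name; may be absent)
```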
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "### 2.4 Call Anthropic, Bedrock models "
+ "### 2.4 Call Anthropic and Bedrock models\n",
+ "\n",
+ "Now we can call the `us.amazon.nova-micro-v1:0` and `claude-3-5-sonnet-20241022` models defined in your config.yaml, both using the OpenAI request / response format."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "ChatCompletion(id='chatcmpl-7756e509-e61f-4f5e-b5ae-b7a41013522a', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content=\"Langfuse is an observability tool designed specifically for machine learning models and applications built with natural language processing (NLP) and large language models (LLMs). It focuses on providing detailed insights into how these models perform in real-world scenarios. Here are some key features and purposes of Langfuse:\\n\\n1. **Real-time Monitoring**: Langfuse allows developers to monitor the performance of their NLP and LLM applications in real time. This includes tracking the inputs and outputs of the models, as well as any errors or issues that arise during operation.\\n\\n2. **Error Tracking**: It helps in identifying and tracking errors in the models' outputs. By analyzing incorrect or unexpected responses, developers can pinpoint where and why errors occur, facilitating more effective debugging and improvement.\\n\\n3. **Performance Metrics**: Langfuse provides various performance metrics, such as latency, throughput, and error rates. These metrics help developers understand how well their models are performing under different conditions and workloads.\\n\\n4. **Traceability**: It offers detailed traceability of requests and responses, allowing developers to follow the path of a request through the system and see how it is processed by the model at each step.\\n\\n5. **User Feedback Integration**: Langfuse can integrate user feedback to provide context for model outputs. This helps in understanding how real users are interacting with the model and how its outputs align with user expectations.\\n\\n6. **Customizable Dashboards**: Users can create custom dashboards to visualize the data collected by Langfuse. These dashboards can be tailored to highlight the most important metrics and insights for a specific application or team.\\n\\n7. **Alerting and Notifications**: It can set up alerts for specific conditions or errors, notifying developers when something goes wrong or when performance metrics fall outside of acceptable ranges.\\n\\nBy providing comprehensive observability for NLP and LLM applications, Langfuse helps developers to build more reliable, accurate, and user-friendly models and services.\", refusal=None, role='assistant', audio=None, function_call=None, tool_calls=None))], created=1739554005, model='us.amazon.nova-micro-v1:0', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=380, prompt_tokens=5, total_tokens=385, completion_tokens_details=None, prompt_tokens_details=None))"
+ ]
+ },
+ "execution_count": 24,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "import openai\n",
+ "client = openai.OpenAI(\n",
+ " api_key=LITELLM_VIRTUAL_KEY,\n",
+ " base_url=LITELLM_PROXY_BASE_URL\n",
+ ")\n",
+ "\n",
+ "response = client.chat.completions.create(\n",
+ " model=\"us.amazon.nova-micro-v1:0\",\n",
+ " messages=[\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": \"what is Langfuse?\"\n",
+ " }\n",
+ " ],\n",
+ ")\n",
+ "\n",
+ "response"
 ]
 },
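The markdown cell above names both `us.amazon.nova-micro-v1:0` and `claude-3-5-sonnet-20241022`, but only the Nova call is shown. Here is a sketch of the equivalent Claude call, assuming that model alias is also defined in your config.yaml and that `LITELLM_VIRTUAL_KEY` / `LITELLM_PROXY_BASE_URL` are set as in the cell above.

```python
import openai

client = openai.OpenAI(
    api_key=LITELLM_VIRTUAL_KEY,
    base_url=LITELLM_PROXY_BASE_URL,
)

# Same OpenAI-format request; the proxy routes it to the Anthropic model
response = client.chat.completions.create(
    model="claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "what is Langfuse?"}],
)
print(response.choices[0].message.content)
```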
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "## 3. Advanced - Set Langfuse Trace ID, Tags, Metadata "
+ "## 3. Advanced - Set Langfuse Trace ID, Tags, Metadata\n",
+ "\n",
+ "Here is an example of how you can set Langfuse-specific params on your client-side request. See the full list of supported Langfuse params [here](https://docs.litellm.ai/docs/observability/langfuse_integration).\n",
+ "\n",
+ "You can view the logged trace of this request [here](https://us.cloud.langfuse.com/project/clvlhdfat0007vwb74m9lvfvi/traces/567890?timestamp=2025-02-14T17%3A30%3A26.709Z)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 27,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "ChatCompletion(id='chatcmpl-789babd5-c064-4939-9093-46e4cd2e208a', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content=\"Langfuse is an observability platform designed specifically for monitoring and improving the performance of natural language processing (NLP) models and applications. It provides developers with tools to track, analyze, and optimize how their language models interact with users and handle natural language inputs.\\n\\nHere are some key features and benefits of Langfuse:\\n\\n1. **Real-Time Monitoring**: Langfuse allows developers to monitor their NLP applications in real time. This includes tracking user interactions, model responses, and overall performance metrics.\\n\\n2. **Error Tracking**: It helps in identifying and tracking errors in the model's responses. This can include incorrect, irrelevant, or unsafe outputs.\\n\\n3. **User Feedback Integration**: Langfuse enables the collection of user feedback directly within the platform. This feedback can be used to identify areas for improvement in the model's performance.\\n\\n4. **Performance Metrics**: The platform provides detailed metrics and analytics on model performance, including latency, throughput, and accuracy.\\n\\n5. **Alerts and Notifications**: Developers can set up alerts to notify them of any significant issues or anomalies in model performance.\\n\\n6. **Debugging Tools**: Langfuse offers tools to help developers debug and refine their models by providing insights into how the model processes different types of inputs.\\n\\n7. **Integration with Development Workflows**: It integrates seamlessly with various development environments and CI/CD pipelines, making it easier to incorporate observability into the development process.\\n\\n8. **Customizable Dashboards**: Users can create custom dashboards to visualize the data in a way that best suits their needs.\\n\\nLangfuse aims to help developers build more reliable, accurate, and user-friendly NLP applications by providing them with the tools to observe and improve how their models perform in real-world scenarios.\", refusal=None, role='assistant', audio=None, function_call=None, tool_calls=None))], created=1739554281, model='us.amazon.nova-micro-v1:0', object='chat.completion', service_tier=None, system_fingerprint=None, usage=CompletionUsage(completion_tokens=346, prompt_tokens=5, total_tokens=351, completion_tokens_details=None, prompt_tokens_details=None))"
+ ]
+ },
+ "execution_count": 27,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "import openai\n",
+ "client = openai.OpenAI(\n",
+ " api_key=LITELLM_VIRTUAL_KEY,\n",
+ " base_url=LITELLM_PROXY_BASE_URL\n",
+ ")\n",
+ "\n",
+ "response = client.chat.completions.create(\n",
+ " model=\"us.amazon.nova-micro-v1:0\",\n",
+ " messages=[\n",
+ " {\n",
+ " \"role\": \"user\",\n",
+ " \"content\": \"what is Langfuse?\"\n",
+ " }\n",
+ " ],\n",
+ " extra_body={\n",
+ " \"metadata\": {\n",
+ " \"generation_id\": \"1234567890\",\n",
+ " \"trace_id\": \"567890\",\n",
+ " \"trace_user_id\": \"user_1234567890\",\n",
+ " \"tags\": [\"tag1\", \"tag2\"]\n",
+ " }\n",
+ " }\n",
+ ")\n",
+ "\n",
+ "response"
 ]
 },
 {
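On where `extra_body` ends up: the OpenAI SDK merges it into the JSON payload, so the Langfuse params travel as a top-level `metadata` object in the request body. Below is a minimal sketch (not part of the committed notebook) of the same request made without the SDK, assuming the same key/base-URL variables and the proxy's OpenAI-compatible `/chat/completions` route.

```python
import requests

resp = requests.post(
    f"{LITELLM_PROXY_BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {LITELLM_VIRTUAL_KEY}"},
    json={
        "model": "us.amazon.nova-micro-v1:0",
        "messages": [{"role": "user", "content": "what is Langfuse?"}],
        # Langfuse-specific params, exactly as passed via extra_body above
        "metadata": {
            "trace_id": "567890",
            "trace_user_id": "user_1234567890",
            "tags": ["tag1", "tag2"],
        },
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```

The `trace_id` set here is what makes the trace addressable at the Langfuse URL linked in the markdown cell above.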
