Commit cb61325

gustavocidornelas authored and whoseoyster committed

chore: add Semantic Kernel tracing example

1 parent 5b1f462 commit cb61325

File tree: 1 file changed (175 additions, 0 deletions)
@@ -0,0 +1,175 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "2722b419",
   "metadata": {},
   "source": [
    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openlayer-ai/openlayer-python/blob/main/examples/tracing/semantic-kernel/semantic_kernel.ipynb)\n",
    "\n",
    "# <a id=\"top\">Semantic Kernel quickstart</a>\n",
    "\n",
    "This notebook shows how to export traces captured by [Semantic Kernel](https://learn.microsoft.com/en-us/semantic-kernel/overview/) to Openlayer. The integration is done via Openlayer's [OpenTelemetry endpoint](https://www.openlayer.com/docs/integrations/opentelemetry)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "020c8f6a",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install openlit semantic-kernel"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "75c2a473",
   "metadata": {},
   "source": [
    "## 1. Set the environment variables"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "f3f4fa13",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"YOUR_OPENAI_API_KEY_HERE\"\n",
    "\n",
    "# Environment variables pointing to Openlayer's OpenTelemetry endpoint\n",
    "os.environ[\"OTEL_EXPORTER_OTLP_ENDPOINT\"] = \"https://api.openlayer.com/v1/otel\"\n",
    "os.environ[\"OTEL_EXPORTER_OTLP_HEADERS\"] = \"Authorization=Bearer YOUR_OPENLAYER_API_KEY_HERE, x-bt-parent=pipeline_id:YOUR_OPENLAYER_PIPELINE_ID_HERE\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9758533f",
   "metadata": {},
   "source": [
    "## 2. Initialize OpenLIT and Semantic Kernel"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "c35d9860-dc41-4f7c-8d69-cc2ac7e5e485",
   "metadata": {},
   "outputs": [],
   "source": [
    "import openlit\n",
    "\n",
    "# Auto-instruments supported libraries and exports traces via the OTLP endpoint configured above\n",
    "openlit.init()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "9c0d5bae",
   "metadata": {},
   "outputs": [],
   "source": [
    "from semantic_kernel import Kernel\n",
    "\n",
    "kernel = Kernel()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "72a6b954",
   "metadata": {},
   "source": [
    "## 3. Use LLMs as usual\n",
    "\n",
    "That's it! Now you can continue using LLMs and workflows as usual. The trace data is automatically exported to Openlayer, and you can start creating tests around it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "e00c1c79",
   "metadata": {},
   "outputs": [],
   "source": [
    "from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion\n",
    "\n",
    "kernel.add_service(\n",
    "    OpenAIChatCompletion(ai_model_id=\"gpt-4o-mini\"),\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "abaf6987-c257-4f0d-96e7-3739b24c7206",
   "metadata": {},
   "outputs": [],
   "source": [
    "from semantic_kernel.prompt_template import InputVariable, PromptTemplateConfig\n",
    "\n",
    "prompt = \"\"\"{{$input}}\n",
    "Please provide a concise response to the question above.\n",
    "\"\"\"\n",
    "\n",
    "prompt_template_config = PromptTemplateConfig(\n",
    "    template=prompt,\n",
    "    name=\"question_answerer\",\n",
    "    template_format=\"semantic-kernel\",\n",
    "    input_variables=[\n",
    "        InputVariable(name=\"input\", description=\"The question from the user\", is_required=True),\n",
    "    ],\n",
    ")\n",
    "\n",
    "answer_question = kernel.add_function(\n",
    "    function_name=\"answerQuestionFunc\",\n",
    "    plugin_name=\"questionAnswererPlugin\",\n",
    "    prompt_template_config=prompt_template_config,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "49c606ac",
   "metadata": {},
   "outputs": [],
   "source": [
    "await kernel.invoke(answer_question, input=\"What's the meaning of life?\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0377af7",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "semantic-kernel-2",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
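For use outside a notebook, the exporter setup from step 1 can be wrapped in a small helper. This is a minimal sketch using only the standard library; the endpoint URL and header format come from the notebook, while the helper name and the placeholder key values are illustrative:

```python
import os


def configure_openlayer_otel(api_key: str, pipeline_id: str) -> None:
    """Point any OTLP-compatible instrumentation (e.g. OpenLIT) at Openlayer.

    Mirrors the environment variables set in the notebook: the OTLP endpoint,
    plus headers carrying the API key and the target pipeline id.
    """
    os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://api.openlayer.com/v1/otel"
    os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = (
        f"Authorization=Bearer {api_key}, "
        f"x-bt-parent=pipeline_id:{pipeline_id}"
    )


# Placeholders as in the notebook; substitute real credentials before use.
configure_openlayer_otel("YOUR_OPENLAYER_API_KEY_HERE", "YOUR_OPENLAYER_PIPELINE_ID_HERE")
```

Calling this before `openlit.init()` means the instrumentation picks up the endpoint and headers from the environment, so no exporter arguments need to be passed explicitly.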
