The full API of this library can be found in [api.md](api.md).

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient()

response = client.models.register(
    model_id="model_id",
)
print(response.identifier)
```

While you can provide an `api_key` keyword argument, we recommend using [python-dotenv](https://pypi.org/project/python-dotenv/) to add `LLAMA_STACK_CLIENT_API_KEY="My API Key"` to your `.env` file so that your API Key is not stored in source control.
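
For example, here is a minimal sketch of that pattern (assuming a `.env` file alongside your script; `load_dotenv` comes from the python-dotenv package, and the client is expected to pick up `LLAMA_STACK_CLIENT_API_KEY` from the environment):

```python
from dotenv import load_dotenv

from llama_stack_client import LlamaStackClient

# Load LLAMA_STACK_CLIENT_API_KEY (and any other variables) from .env
load_dotenv()

# No api_key argument needed: the key is read from the environment
client = LlamaStackClient()
```
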
To use the asynchronous client, import `AsyncLlamaStackClient` instead of `LlamaStackClient` and `await` each API call. Functionality between the synchronous and asynchronous clients is otherwise identical.
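
As a minimal sketch of the asynchronous pattern (reusing the placeholder `model_id` from the quickstart above):

```python
import asyncio

from llama_stack_client import AsyncLlamaStackClient

client = AsyncLlamaStackClient()


async def main() -> None:
    # Identical to the synchronous quickstart, except the call is awaited
    response = await client.models.register(
        model_id="model_id",
    )
    print(response.identifier)


asyncio.run(main())
```
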
### With aiohttp
By default, the async client uses `httpx` for HTTP requests. However, for improved concurrency performance you may also use `aiohttp` as the HTTP backend.
You can enable this by installing `aiohttp`:
```sh
# install from PyPI
pip install --pre llama_stack_client[aiohttp]
```
Then you can enable it by instantiating the client with `http_client=DefaultAioHttpClient()`:
```python
import asyncio

from llama_stack_client import DefaultAioHttpClient
from llama_stack_client import AsyncLlamaStackClient


async def main() -> None:
    async with AsyncLlamaStackClient(
        http_client=DefaultAioHttpClient(),
    ) as client:
        response = await client.models.register(
            model_id="model_id",
        )
        print(response.identifier)


asyncio.run(main())
```

## Streaming responses
We provide support for streaming responses using Server Side Events (SSE).
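
As a rough sketch of the pattern (an assumption on our part: that passing `stream=True` to `client.chat.completions.create`, the counterpart of the `stream=False` flag shown in earlier revisions of the quickstart, returns an iterator of SSE chunks):

```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient()

# With stream=True the call is assumed to return an iterable of SSE
# chunks rather than a single completed response.
stream = client.chat.completions.create(
    messages=[{"role": "user", "content": "hello world, write me a 2 sentence poem about the moon"}],
    model="meta-llama/Llama-3.2-3B-Instruct",
    stream=True,
)
for chunk in stream:
    print(chunk)
```
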