2 parents efa67b6 + a03bd77 commit bbe2481
docs/concepts/streaming.md
@@ -8,7 +8,7 @@ This can enhance the user experience by already showing part of the response but
If you want to stream all the tokens generated quickly to your console output,
you can use the `settings.console_stream = True` setting.
-## `strem_to()` wrapper
+## `stream_to()` wrapper
For streaming with non-runnable funcchains, you can wrap the LLM generation call in the `stream_to()` context manager. This would look like this:
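The idea behind a `stream_to()`-style context manager can be sketched without funcchain itself: while the block is active, tokens emitted by the generation call are routed to a sink you supply. This is a minimal illustrative stand-in, not funcchain's actual implementation; `fake_generation` and `_token_sink` are hypothetical names used only for this sketch.

```python
# Illustrative sketch of a stream_to()-style context manager.
# NOT funcchain's real implementation: fake_generation and _token_sink
# are hypothetical stand-ins for an LLM call and its token callback.
from contextlib import contextmanager
from typing import Callable, Iterator

# Default sink discards tokens when no stream_to() block is active.
_token_sink: Callable[[str], None] = lambda token: None


@contextmanager
def stream_to(sink: Callable[[str], None]) -> Iterator[None]:
    """Route tokens emitted inside the block to `sink`."""
    global _token_sink
    previous, _token_sink = _token_sink, sink
    try:
        yield
    finally:
        # Restore the previous sink on exit, even on error.
        _token_sink = previous


def fake_generation(prompt: str) -> str:
    """Stand-in for an LLM call that streams tokens as it produces them."""
    tokens = ["Hello", " ", "world"]
    for token in tokens:
        _token_sink(token)
    return "".join(tokens)


collected: list[str] = []
with stream_to(collected.append):
    result = fake_generation("say hi")

print(collected)  # → ['Hello', ' ', 'world']
print(result)     # → Hello world
```

Passing `print` as the sink would echo tokens to the console as they arrive, which is the use case the surrounding docs describe.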
14