diff --git a/README.md b/README.md
index 20790a9fe4..6262c1d58c 100644
--- a/README.md
+++ b/README.md
@@ -23,7 +23,7 @@ Full documentation is available [here](https://docs.chainlit.io). You can ask Ch
> Check out [Literal AI](https://literalai.com), our product to monitor and evaluate LLM applications! It works with any Python or TypeScript application and integrates [seamlessly](https://docs.chainlit.io/data-persistence/overview) with Chainlit by adding a `LITERAL_API_KEY` to your project.
-
+
## Installation
@@ -47,8 +47,10 @@ Create a new file `demo.py` with the following code:
import chainlit as cl
-@cl.step
-def tool():
+@cl.step(type="tool")
+async def tool():
+ # Fake tool
+ await cl.sleep(2)
return "Response from the tool!"
@@ -65,11 +67,12 @@ async def main(message: cl.Message):
None.
"""
+ final_answer = await cl.Message(content="").send()
+
# Call the tool
- tool()
+ final_answer.content = await tool()
- # Send the final answer.
- await cl.Message(content="This is the final answer").send()
+ await final_answer.update()
```
Now run it!
diff --git a/images/quick-start.png b/images/quick-start.png
index 73b4f38832..4ebc3aec22 100644
Binary files a/images/quick-start.png and b/images/quick-start.png differ
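
For reference, below is a sketch of what `demo.py` looks like once this patch is applied. The hunks above only show the changed lines, so the `@cl.on_message` decorator and surrounding blank lines are assumed from the rest of the README's example rather than taken verbatim from this diff, and the docstring visible in the second hunk is elided for brevity.

```python
import chainlit as cl


@cl.step(type="tool")
async def tool():
    # Fake tool: wait two seconds, then return a canned response.
    await cl.sleep(2)
    return "Response from the tool!"


@cl.on_message  # assumed from the surrounding example: called on every user message
async def main(message: cl.Message):
    # Send an empty message first so a placeholder appears in the UI right away.
    final_answer = await cl.Message(content="").send()

    # Call the tool
    final_answer.content = await tool()

    # Replace the placeholder with the tool's response.
    await final_answer.update()
```

The net effect of the change: instead of sending a separate "final answer" message after the tool returns, the example sends an empty message up front and updates it in place once the simulated two-second tool call completes, which is what the refreshed `images/quick-start.png` illustrates.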