
Commit cf49a5c

docs: add CTA for Ragas app (#2023)
1 parent 944bb1d commit cf49a5c

File tree

10 files changed: +38 −510 lines


README.md

+22-6
@@ -85,15 +85,21 @@ await metric.single_turn_ascore(SingleTurnSample(**test_data))
 
 Find the complete [Quickstart Guide](https://docs.ragas.io/en/latest/getstarted/evals)
 
-### Analyze your Evaluation
+## Want help in improving your AI application using evals?
 
-Sign up for [app.ragas.io](https://app.ragas.io) to review, share and analyze your evaluations
+In the past 2 years, we have seen and helped improve many AI applications using evals.
+
+We are compressing this knowledge into a product to replace vibe checks with eval loops so that you can focus on building great AI applications.
+
+If you want help with improving and scaling up your AI application using evals, get in touch.
+
+🔗 Book a [slot](https://bit.ly/3EBYq4J) or drop us a line: [[email protected]](mailto:[email protected]).
+
+![](/docs/_static/ragas_app.gif)
 
-<p align="left">
-  <img src="docs/getstarted/ragas_get_started_evals.gif" height="300">
-</p>
 
-See [how to use it](https://docs.ragas.io/en/latest/getstarted/evals/#analyzing-results)
 
 ## 🫂 Community

@@ -132,3 +138,13 @@ At Ragas, we believe in transparency. We collect minimal, anonymized usage data
 ✅ Publicly available aggregated [data](https://github.com/explodinggradients/ragas/issues/49)
 
 To opt-out, set the `RAGAS_DO_NOT_TRACK` environment variable to `true`.
+
+### Cite Us
+```
+@misc{ragas2024,
+  author = {ExplodingGradients},
+  title = {Ragas: Supercharge Your LLM Application Evaluations},
+  year = {2024},
+  howpublished = {\url{https://github.com/explodinggradients/ragas}},
+}
+```
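The telemetry opt-out described in the hunk above is a single environment variable. A minimal sketch of setting it from Python, mirroring the `os.environ` pattern used elsewhere in these docs (the variable name comes from the README; set it before importing ragas so the setting takes effect):

```python
import os

# Opt out of Ragas usage telemetry (variable name from the README).
# In a shell you would instead run: export RAGAS_DO_NOT_TRACK=true
os.environ["RAGAS_DO_NOT_TRACK"] = "true"

print(os.environ["RAGAS_DO_NOT_TRACK"])  # → true
```

Setting it in the shell profile instead of per-process makes the opt-out persistent across sessions.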

docs/_static/ragas_app.gif

211 KB

docs/getstarted/evals.md

+7-42
@@ -166,58 +166,23 @@ Output
 3 summarise given text\nSupply chain challenges ... Supply chain challenges in North America, caus... 1
 ```
 
-Viewing the sample-level results in a CSV file, as shown above, is fine for quick checks but not ideal for detailed analysis or comparing results across evaluation runs. For a better experience, use [app.ragas.io](https://app.ragas.io/) to view, analyze, and compare evaluation results interactively.
+Viewing the sample-level results in a CSV file, as shown above, is fine for quick checks but not ideal for detailed analysis or comparing results across evaluation runs.
 
+### Want help in improving your AI application using evals?
 
-## Analyzing Results
+In the past 2 years, we have seen and helped improve many AI applications using evals.
 
-For this you may sign up and set up [app.ragas.io](https://app.ragas.io) easily. If not, you may use any alternative tools available to you.
+We are compressing this knowledge into a product to replace vibe checks with eval loops so that you can focus on building great AI applications.
 
-In order to use the [app.ragas.io](http://app.ragas.io) dashboard, you need to have an account on [app.ragas.io](https://app.ragas.io/). If you don't have one, you can sign up for one [here](https://app.ragas.io/login). You will also need to generate a [Ragas APP token](https://app.ragas.io/dashboard/settings/app-tokens).
-
-Once you have the API key, you can use the `upload()` method to export the results to the dashboard.
-
-```python
-import os
-os.environ["RAGAS_APP_TOKEN"] = "your_app_token"
-```
-
-Now you can view the results in the dashboard by following the link in the output of the `upload()` method.
-
-```python
-results.upload()
-```
-
-![](ragas_get_started_evals.gif)
+If you want help with improving and scaling up your AI application using evals, get in touch.
 
 
+🔗 Book a [slot](https://bit.ly/3EBYq4J) or drop us a line: [[email protected]](mailto:[email protected]).
 
-## Aligning Metrics
 
-In the example above, we can see that the LLM-based metric mistakenly marks some summaries as accurate, even though they missed critical details like growth numbers and market domain. Such mistakes can occur when the metric does not align with your specific evaluation preferences. For example,
+![](../_static/ragas_app.gif)
 
-![](eval_mistake1.png)
-
-
-To fix these results, ragas provides a way to align the metric with your preferences, allowing it to learn like a machine learning model. Here's how you can do this in three simple steps:
-
-1. **Annotate**: Accept, reject, or edit evaluation results to create training data (at least 15-20 samples).
-2. **Download**: Save the annotated data using the `Annotated JSON` button in [app.ragas.io](https://app.ragas.io/).
-3. **Train**: Use the annotated data to train your custom metric.
-
-To learn more about this, refer to the [train your own metric guide](./../howtos/customizations/metrics/train_your_own_metric.md)
-
-[Download sample annotated JSON](../_static/sample_annotated_summary.json)
-
-```python
-from ragas.config import InstructionConfig, DemonstrationConfig
-demo_config = DemonstrationConfig(embedding=evaluator_embeddings)
-inst_config = InstructionConfig(llm=evaluator_llm)
-
-metric.train(path="<your-annotated-json.json>", demonstration_config=demo_config, instruction_config=inst_config)
-```
 
-Once trained, you can re-evaluate the same or different test datasets. You should notice that the metric now aligns with your preferences and makes fewer mistakes, improving its accuracy.
 
 
 ## Up Next
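The quick CSV check this hunk refers to can be sketched with plain pandas; the column names below are illustrative stand-ins, not the exact schema of an exported Ragas evaluation run:

```python
import pandas as pd

# Toy sample-level results, shaped like an exported evaluation run
# (column names are assumptions for illustration)
df = pd.DataFrame({
    "user_input": ["summarise given text\nSupply chain challenges ..."],
    "response": ["Supply chain challenges in North America, caus..."],
    "summary_accuracy": [1],
})

# Write to CSV for a quick eyeball check, then reload for analysis
df.to_csv("eval_results.csv", index=False)
reloaded = pd.read_csv("eval_results.csv")
print(reloaded["summary_accuracy"].mean())  # → 1.0
```

This is fine for one-off inspection; comparing runs still means juggling multiple CSVs by hand, which is the limitation the docs point out.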

docs/getstarted/rag_eval.md

+6-13
@@ -176,26 +176,19 @@ Output
 {'context_recall': 1.0000, 'faithfulness': 0.8571, 'factual_correctness': 0.7280}
 ```
 
-## Analyze Results
+### Want help in improving your AI application using evals?
 
-Once you have evaluated, you may want to view, analyse, and share results. This is important for interpreting the results and understanding the performance of your RAG system. For this you may sign up and set up [app.ragas.io](https://app.ragas.io/) easily. If not, you may use any alternative tools available to you.
+In the past 2 years, we have seen and helped improve many AI applications using evals.
 
-In order to use the [app.ragas.io](http://app.ragas.io) dashboard, you need to have an account on [app.ragas.io](https://app.ragas.io/). If you don't have one, you can sign up for one [here](https://app.ragas.io/login). You will also need to generate a [Ragas APP token](https://app.ragas.io/dashboard/settings/app-tokens).
+We are compressing this knowledge into a product to replace vibe checks with eval loops so that you can focus on building great AI applications.
 
-Once you have the API key, you can use the `upload()` method to export the results to the dashboard.
+If you want help with improving and scaling up your AI application using evals, get in touch.
 
-```python
-import os
-os.environ["RAGAS_APP_TOKEN"] = "your_app_token"
-```
 
-Now you can view the results in the dashboard by following the link in the output of the `upload()` method.
+🔗 Book a [slot](https://bit.ly/3EBYq4J) or drop us a line: [[email protected]](mailto:[email protected]).
 
-```python
-result.upload()
-```
+![](../_static/ragas_app.gif)
 
-![](rag_eval.gif)
 
 ## Up Next

docs/getstarted/rag_testset_generation.md

+2-15
@@ -58,21 +58,8 @@ dataset.to_pandas()
 Output
 ![testset](./testset_output.png)
 
-You can also use other tools like [app.ragas.io](https://app.ragas.io/) or any other similar tools available to you in the [Integrations](./../howtos/integrations/index.md) section.
-
-In order to use the [app.ragas.io](https://app.ragas.io/) dashboard, you need to have an account on [app.ragas.io](https://app.ragas.io/). If you don't have one, you can sign up for one [here](https://app.ragas.io/login). You will also need to have a [Ragas APP token](https://app.ragas.io/settings/api-keys).
-
-Once you have the API key, you can use the `upload()` method to export the results to the dashboard.
-
-```python
-import os
-os.environ["RAGAS_APP_TOKEN"] = "your_app_token"
-dataset.upload()
-```
-
-Now you can view the results in the dashboard by following the link in the output of the `upload()` method.
-
-![Visualization with Ragas Dashboard](./testset_output_dashboard.png)
+!!! note
+    Generating synthetic test data can be confusing and hard, but we are happy to help; we have built pipelines to generate test data for various use cases. If you need help, please talk to us by booking a [slot](https://bit.ly/3EBYq4J) or dropping us a line: [[email protected]](mailto:[email protected]).
 
 ## A Deeper Look

docs/howtos/applications/_metrics_llm_calls.md

-74
This file was deleted.
